Mar 10 18:47:35 crc systemd[1]: Starting Kubernetes Kubelet...
Mar 10 18:47:35 crc restorecon[4741]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Mar 10 18:47:35 crc restorecon[4741]:
/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Mar 10 18:47:35 crc restorecon[4741]: 
/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c661,c999 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c12,c18 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc 
restorecon[4741]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Mar 10 18:47:35 crc restorecon[4741]: 
/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c18 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Mar 10 18:47:35 crc restorecon[4741]: 
/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c9,c12 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Mar 10 18:47:35 crc restorecon[4741]: 
/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Mar 10 18:47:35 crc 
restorecon[4741]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Mar 10 
18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Mar 10 18:47:35 crc restorecon[4741]: 
/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c11 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Mar 10 18:47:35 crc 
restorecon[4741]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c10,c16 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Mar 10 18:47:35 crc restorecon[4741]: 
/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c9,c22 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Mar 10 18:47:35 crc restorecon[4741]: 
/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Mar 10 18:47:35 crc restorecon[4741]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c268,c620 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c129,c158 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Mar 10 18:47:35 crc 
restorecon[4741]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Mar 10 18:47:35 crc restorecon[4741]: 
/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/index.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/bundle-v1.15.0.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/channel.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/package.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 
crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c842,c986 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to system_u:object_r:container_file_t:s0:c377,c642 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c842,c986 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c764,c897 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Mar 10 18:47:35 crc restorecon[4741]: 
/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Mar 10 18:47:35 crc restorecon[4741]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Mar 10 18:47:35 crc restorecon[4741]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Mar 10 18:47:35 crc restorecon[4741]: 
/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c14,c22 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c14,c22 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c25 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c25 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c336,c787 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 10 
18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 10 18:47:35 crc restorecon[4741]: 
/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 10 18:47:35 crc restorecon[4741]: 
/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Mar 10 18:47:35 crc 
restorecon[4741]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 10 18:47:35 crc restorecon[4741]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 10 18:47:35 crc 
restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2ae6433e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fde84897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75680d2e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/openshift-service-serving-signer_1740288168.pem not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/facfc4fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f5a969c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CFCA_EV_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9ef4a08a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ingress-operator_1740288202.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2f332aed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/248c8271.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 10 18:47:35 crc restorecon[4741]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d10a21f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ACCVRAIZ1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a94d09e5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c9a4d3b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40193066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd8c0d63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b936d1c6.0 not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CA_Disig_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4fd49c6c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b81b93f0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f9a69fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b30d5fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 10 18:47:35 crc restorecon[4741]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ANF_Secure_Server_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b433981b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93851c9e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9282e51c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7dd1bc4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Actalis_Authentication_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/930ac5d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 10 18:47:35 crc restorecon[4741]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f47b495.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e113c810.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5931b5bc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Commercial.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2b349938.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e48193cf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/302904dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a716d4ed.0 not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Networking.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93bc0acc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/86212b19.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b727005e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbc54cab.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 10 18:47:35 crc restorecon[4741]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f51bb24c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c28a8a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9c8dfbd4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ccc52f49.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cb1c3204.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ce5e74ef.0 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd08c599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6d41d539.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb5fa911.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e35234b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 10 18:47:35 crc restorecon[4741]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8cb5ee0f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a7c655d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f8fc53da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/de6d66f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d41b5e2a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/41a3f684.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1df5a75f.0 not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_2011.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e36a6752.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b872f2b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9576d26b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/228f89db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb717492.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 10 18:47:35 crc restorecon[4741]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d21b73c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b1b94ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/595e996b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b46e03d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/128f4b91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_3_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 10 18:47:35 crc restorecon[4741]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81f2d2b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3bde41ac.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d16a5865.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_EC-384_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0179095f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 10 18:47:35 crc restorecon[4741]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ffa7f1eb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9482e63a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4dae3dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e359ba6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7e067d03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/95aff9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7746a63.0 not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Baltimore_CyberTrust_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/653b494a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3ad48a91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_2_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/54657681.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/82223c44.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 10 18:47:35 crc restorecon[4741]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8de2f56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d9dafe4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d96b65e2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee64a828.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40547a79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5a3f0ff8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a780d93.0 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/34d996fb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/eed8c118.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/89c02a45.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b1159c4c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 10 18:47:35 crc restorecon[4741]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d6325660.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4c339cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8312c4c1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_E1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8508e720.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5fdd185d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48bec511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/69105f4f.0 not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b9bc432.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/32888f65.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b03dec0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 10 18:47:35 crc restorecon[4741]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/219d9499.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5acf816d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbf06781.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc99f41e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 10 18:47:35 crc restorecon[4741]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AAA_Certificate_Services.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/985c1f52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8794b4e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_BR_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7c037b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 10 18:47:35 crc restorecon[4741]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ef954a4e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_EV_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2add47b6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/90c5a3c8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0f3e76e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/53a1b57a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 10 18:47:35 crc restorecon[4741]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_EV_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5ad8a5d6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/68dd7389.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d04f354.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 10 18:47:35 crc restorecon[4741]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d6437c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/062cdee6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bd43e1dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7f3d5d1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c491639e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 10 18:47:35 crc restorecon[4741]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3513523f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/399e7759.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/feffd413.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d18e9066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/607986c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c90bc37d.0 not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1b0f7e5c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e08bfd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dd8e9d41.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed39abd0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a3418fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bc3f2570.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 10 18:47:35 crc restorecon[4741]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_High_Assurance_EV_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/244b5494.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81b9768f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4be590e0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_ECC_P384_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9846683b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 10 18:47:35 crc restorecon[4741]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/252252d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e8e7201.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_RSA4096_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d52c538d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c44cc0c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 10 18:47:35 crc restorecon[4741]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Trusted_Root_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75d1b2ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a2c66da8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ecccd8db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust.net_Certification_Authority__2048_.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/aee5f10d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 10 18:47:35 crc restorecon[4741]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e7271e8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0e59380.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4c3982f2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b99d060.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf64f35b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0a775a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/002c0b4f.0 not reset 
as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cc450945.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_EC1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/106f3e4d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b3fb433b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4042bcee.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 10 18:47:35 crc 
restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/02265526.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/455f1b52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0d69c7e1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9f727ac7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5e98733a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0cd152c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 10 18:47:35 crc restorecon[4741]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc4d6a89.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6187b673.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ba8887ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/068570d1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f081611a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48a195d8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GDCA_TrustAUTH_R5_ROOT.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f6fa695.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab59055e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b92fd57f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GLOBALTRUST_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fa5da96b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ec40989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7719f463.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 10 18:47:35 crc restorecon[4741]: 
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1001acf7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f013ecaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/626dceaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c559d742.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1d3472b9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9479c8c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a81e292b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4bfab552.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e071171e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/57bcb2da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_ECC_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab5346f4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5046c355.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_RSA_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/865fbdf9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da0cfd1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/85cde254.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbb3f32b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureSign_RootCA11.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5860aaa6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/31188b5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HiPKI_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c7f1359b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f15c80c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hongkong_Post_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/09789157.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/18856ac4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e09d511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Commercial_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cf701eeb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d06393bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Public_Sector_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/10531352.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Izenpe.com.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureTrust_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0ed035a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsec_e-Szigno_Root_CA_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8160b96c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8651083.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2c63f966.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_ECC_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d89cda1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/01419da9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_RSA_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7a5b843.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_RSA_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf53fb88.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9591a472.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3afde786.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Gold_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NAVER_Global_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3fb36b73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d39b0a2c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a89d74c2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd58d51e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7db1890.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/988a38cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/60afe812.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f39fc864.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5443e9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GB_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e73d606e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dfc0fe80.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b66938e9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e1eab7c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GC_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/773e07ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c899c73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d59297b8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ddcda989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_1_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/749e9e03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/52b525c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7e8dc79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a819ef2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/99e1b953.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a3adc42.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 10 18:47:35 crc restorecon[4741]:
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 10 18:47:35 crc restorecon[4741]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 10 18:47:35 crc restorecon[4741]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 10 18:47:35 crc restorecon[4741]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 10 18:47:35 crc restorecon[4741]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 10 18:47:35 crc restorecon[4741]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 10 18:47:35 crc restorecon[4741]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 10 18:47:35 crc restorecon[4741]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Mar 10 18:47:35 crc restorecon[4741]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 10 18:47:35 crc restorecon[4741]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 10 18:47:35 crc restorecon[4741]: 
/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c219,c404 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Mar 10 18:47:35 crc restorecon[4741]: 
/var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Mar 10 18:47:35 crc restorecon[4741]: 
/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c4,c17 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c23 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Mar 10 18:47:35 crc restorecon[4741]: 
/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 
10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 
crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc 
restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:35 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:36 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:36 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:36 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:36 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:36 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:36 crc restorecon[4741]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:36 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:36 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:36 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:36 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:36 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:36 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:36 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:36 crc 
restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:36 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:36 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:36 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:36 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:36 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:36 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:36 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:36 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:36 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:36 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:36 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:36 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:36 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:36 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:36 crc restorecon[4741]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:36 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:36 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:36 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:36 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:36 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:36 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:36 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:36 crc 
restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:36 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:36 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:36 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:36 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:36 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:36 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:36 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:36 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:36 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:36 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:36 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:36 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:36 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:36 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:36 crc restorecon[4741]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:36 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:36 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:36 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:36 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:36 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:36 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:36 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:36 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:36 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:36 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:36 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:36 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:36 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:36 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:36 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:36 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:36 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:36 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:36 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:36 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:36 crc restorecon[4741]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:36 crc restorecon[4741]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:36 crc restorecon[4741]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:36 crc restorecon[4741]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:36 crc restorecon[4741]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:36 crc restorecon[4741]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:36 crc restorecon[4741]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:36 crc restorecon[4741]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:36 crc restorecon[4741]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:36 crc restorecon[4741]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:36 crc restorecon[4741]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:36 crc restorecon[4741]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:36 crc restorecon[4741]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:36 crc restorecon[4741]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:36 crc restorecon[4741]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:36 crc restorecon[4741]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:36 crc restorecon[4741]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:36 crc restorecon[4741]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:36 crc restorecon[4741]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:36 crc restorecon[4741]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:36 crc restorecon[4741]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:36 crc restorecon[4741]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:36 crc restorecon[4741]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:36 crc restorecon[4741]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:36 crc restorecon[4741]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:36 crc restorecon[4741]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:36 crc restorecon[4741]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:36 crc restorecon[4741]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:36 crc restorecon[4741]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:36 crc restorecon[4741]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:36 crc restorecon[4741]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:36 crc restorecon[4741]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:36 crc restorecon[4741]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:36 crc restorecon[4741]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:36 crc restorecon[4741]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:36 crc restorecon[4741]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:36 crc restorecon[4741]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:36 crc restorecon[4741]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:36 crc restorecon[4741]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:36 crc restorecon[4741]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:36 crc restorecon[4741]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:36 crc restorecon[4741]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:36 crc restorecon[4741]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:36 crc restorecon[4741]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:36 crc restorecon[4741]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:36 crc restorecon[4741]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:36 crc restorecon[4741]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:36 crc restorecon[4741]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:36 crc restorecon[4741]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:36 crc restorecon[4741]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:36 crc restorecon[4741]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:36 crc restorecon[4741]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:36 crc restorecon[4741]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:36 crc restorecon[4741]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:36 crc restorecon[4741]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:36 crc restorecon[4741]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:36 crc restorecon[4741]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:36 crc restorecon[4741]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:36 crc restorecon[4741]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:36 crc restorecon[4741]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:36 crc restorecon[4741]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:36 crc restorecon[4741]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:36 crc restorecon[4741]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:36 crc restorecon[4741]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:36 crc restorecon[4741]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:36 crc restorecon[4741]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:36 crc restorecon[4741]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:36 crc restorecon[4741]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:36 crc restorecon[4741]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:36 crc restorecon[4741]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:36 crc restorecon[4741]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:36 crc restorecon[4741]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:36 crc restorecon[4741]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:36 crc restorecon[4741]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:36 crc restorecon[4741]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:36 crc restorecon[4741]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:36 crc restorecon[4741]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:36 crc restorecon[4741]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:36 crc restorecon[4741]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:36 crc restorecon[4741]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:36 crc restorecon[4741]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:36 crc restorecon[4741]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:36 crc restorecon[4741]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:36 crc restorecon[4741]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:36 crc restorecon[4741]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:36 crc restorecon[4741]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:36 crc restorecon[4741]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:36 crc restorecon[4741]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:36 crc restorecon[4741]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:36 crc restorecon[4741]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:36 crc restorecon[4741]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:36 crc restorecon[4741]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:36 crc restorecon[4741]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:36 crc restorecon[4741]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:36 crc restorecon[4741]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:36 crc restorecon[4741]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:36 crc restorecon[4741]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:36 crc restorecon[4741]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:36 crc restorecon[4741]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:36 crc restorecon[4741]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 10 18:47:36 crc restorecon[4741]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Mar 10 18:47:36 crc restorecon[4741]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Mar 10 18:47:36 crc restorecon[4741]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Mar 10 18:47:36 crc restorecon[4741]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Mar 10 18:47:36 crc restorecon[4741]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Mar 10 18:47:36 crc restorecon[4741]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Mar 10 18:47:36 crc restorecon[4741]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Mar 10 18:47:36 crc restorecon[4741]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Mar 10 18:47:36 crc restorecon[4741]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Mar 10 18:47:36 crc restorecon[4741]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c247,c522 Mar 10 18:47:36 crc restorecon[4741]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Mar 10 18:47:36 crc restorecon[4741]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Mar 10 18:47:36 crc restorecon[4741]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Mar 10 18:47:36 crc restorecon[4741]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Mar 10 18:47:36 crc restorecon[4741]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Mar 10 18:47:36 crc restorecon[4741]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Mar 10 18:47:36 crc restorecon[4741]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Mar 10 18:47:36 crc restorecon[4741]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Mar 10 18:47:36 crc restorecon[4741]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Mar 10 18:47:36 crc restorecon[4741]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Mar 10 18:47:36 crc restorecon[4741]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Mar 10 18:47:36 crc restorecon[4741]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Mar 10 18:47:36 crc restorecon[4741]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Mar 10 18:47:36 crc restorecon[4741]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Mar 10 18:47:36 crc restorecon[4741]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Mar 10 18:47:36 crc 
restorecon[4741]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Mar 10 18:47:36 crc restorecon[4741]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Mar 10 18:47:36 crc restorecon[4741]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Mar 10 18:47:36 crc restorecon[4741]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Mar 10 18:47:36 crc restorecon[4741]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Mar 10 18:47:36 crc restorecon[4741]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Mar 10 18:47:36 crc restorecon[4741]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Mar 10 18:47:36 crc restorecon[4741]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Mar 10 18:47:36 crc restorecon[4741]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Mar 10 18:47:36 crc restorecon[4741]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588 Mar 10 18:47:36 crc restorecon[4741]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Mar 10 18:47:36 crc restorecon[4741]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0 Mar 10 18:47:36 crc restorecon[4741]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0 Mar 10 18:47:36 crc restorecon[4741]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0 Mar 10 18:47:36 crc restorecon[4741]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0 Mar 10 18:47:36 crc restorecon[4741]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0 Mar 10 18:47:36 crc restorecon[4741]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0 Mar 10 18:47:36 crc restorecon[4741]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to 
system_u:object_r:container_file_t:s0 Mar 10 18:47:36 crc restorecon[4741]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0 Mar 10 18:47:36 crc restorecon[4741]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0 Mar 10 18:47:36 crc restorecon[4741]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0 Mar 10 18:47:36 crc restorecon[4741]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0 Mar 10 18:47:36 crc kubenswrapper[4861]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 10 18:47:36 crc kubenswrapper[4861]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Mar 10 18:47:36 crc kubenswrapper[4861]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 10 18:47:36 crc kubenswrapper[4861]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Mar 10 18:47:36 crc kubenswrapper[4861]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Mar 10 18:47:36 crc kubenswrapper[4861]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.700887 4861 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Mar 10 18:47:36 crc kubenswrapper[4861]: W0310 18:47:36.711452 4861 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Mar 10 18:47:36 crc kubenswrapper[4861]: W0310 18:47:36.711482 4861 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Mar 10 18:47:36 crc kubenswrapper[4861]: W0310 18:47:36.711491 4861 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Mar 10 18:47:36 crc kubenswrapper[4861]: W0310 18:47:36.711500 4861 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Mar 10 18:47:36 crc kubenswrapper[4861]: W0310 18:47:36.711508 4861 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Mar 10 18:47:36 crc kubenswrapper[4861]: W0310 18:47:36.711519 4861 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Mar 10 18:47:36 crc kubenswrapper[4861]: W0310 18:47:36.711540 4861 feature_gate.go:330] unrecognized feature gate: PinnedImages Mar 10 18:47:36 crc kubenswrapper[4861]: W0310 18:47:36.711548 4861 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Mar 10 18:47:36 crc kubenswrapper[4861]: W0310 18:47:36.711556 4861 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Mar 10 18:47:36 crc kubenswrapper[4861]: W0310 18:47:36.711564 4861 
feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Mar 10 18:47:36 crc kubenswrapper[4861]: W0310 18:47:36.711572 4861 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Mar 10 18:47:36 crc kubenswrapper[4861]: W0310 18:47:36.711583 4861 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Mar 10 18:47:36 crc kubenswrapper[4861]: W0310 18:47:36.711594 4861 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Mar 10 18:47:36 crc kubenswrapper[4861]: W0310 18:47:36.711604 4861 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Mar 10 18:47:36 crc kubenswrapper[4861]: W0310 18:47:36.711613 4861 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Mar 10 18:47:36 crc kubenswrapper[4861]: W0310 18:47:36.711622 4861 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Mar 10 18:47:36 crc kubenswrapper[4861]: W0310 18:47:36.711630 4861 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Mar 10 18:47:36 crc kubenswrapper[4861]: W0310 18:47:36.711638 4861 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Mar 10 18:47:36 crc kubenswrapper[4861]: W0310 18:47:36.711646 4861 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Mar 10 18:47:36 crc kubenswrapper[4861]: W0310 18:47:36.711653 4861 feature_gate.go:330] unrecognized feature gate: PlatformOperators Mar 10 18:47:36 crc kubenswrapper[4861]: W0310 18:47:36.711661 4861 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Mar 10 18:47:36 crc kubenswrapper[4861]: W0310 18:47:36.711669 4861 feature_gate.go:330] unrecognized feature gate: GatewayAPI Mar 10 18:47:36 crc kubenswrapper[4861]: W0310 18:47:36.711676 4861 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Mar 10 18:47:36 crc kubenswrapper[4861]: W0310 18:47:36.711684 4861 feature_gate.go:330] 
unrecognized feature gate: GCPLabelsTags Mar 10 18:47:36 crc kubenswrapper[4861]: W0310 18:47:36.711692 4861 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Mar 10 18:47:36 crc kubenswrapper[4861]: W0310 18:47:36.711700 4861 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Mar 10 18:47:36 crc kubenswrapper[4861]: W0310 18:47:36.711736 4861 feature_gate.go:330] unrecognized feature gate: SignatureStores Mar 10 18:47:36 crc kubenswrapper[4861]: W0310 18:47:36.711744 4861 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Mar 10 18:47:36 crc kubenswrapper[4861]: W0310 18:47:36.711752 4861 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Mar 10 18:47:36 crc kubenswrapper[4861]: W0310 18:47:36.711759 4861 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Mar 10 18:47:36 crc kubenswrapper[4861]: W0310 18:47:36.711767 4861 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Mar 10 18:47:36 crc kubenswrapper[4861]: W0310 18:47:36.711777 4861 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Mar 10 18:47:36 crc kubenswrapper[4861]: W0310 18:47:36.711784 4861 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Mar 10 18:47:36 crc kubenswrapper[4861]: W0310 18:47:36.711792 4861 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Mar 10 18:47:36 crc kubenswrapper[4861]: W0310 18:47:36.711799 4861 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Mar 10 18:47:36 crc kubenswrapper[4861]: W0310 18:47:36.711807 4861 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Mar 10 18:47:36 crc kubenswrapper[4861]: W0310 18:47:36.711815 4861 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Mar 10 18:47:36 crc kubenswrapper[4861]: W0310 18:47:36.711823 4861 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Mar 
10 18:47:36 crc kubenswrapper[4861]: W0310 18:47:36.711832 4861 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Mar 10 18:47:36 crc kubenswrapper[4861]: W0310 18:47:36.711840 4861 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Mar 10 18:47:36 crc kubenswrapper[4861]: W0310 18:47:36.711848 4861 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Mar 10 18:47:36 crc kubenswrapper[4861]: W0310 18:47:36.711857 4861 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Mar 10 18:47:36 crc kubenswrapper[4861]: W0310 18:47:36.711865 4861 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Mar 10 18:47:36 crc kubenswrapper[4861]: W0310 18:47:36.711873 4861 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Mar 10 18:47:36 crc kubenswrapper[4861]: W0310 18:47:36.711880 4861 feature_gate.go:330] unrecognized feature gate: Example Mar 10 18:47:36 crc kubenswrapper[4861]: W0310 18:47:36.711888 4861 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Mar 10 18:47:36 crc kubenswrapper[4861]: W0310 18:47:36.711896 4861 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Mar 10 18:47:36 crc kubenswrapper[4861]: W0310 18:47:36.711906 4861 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Mar 10 18:47:36 crc kubenswrapper[4861]: W0310 18:47:36.711916 4861 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Mar 10 18:47:36 crc kubenswrapper[4861]: W0310 18:47:36.711923 4861 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Mar 10 18:47:36 crc kubenswrapper[4861]: W0310 18:47:36.711931 4861 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Mar 10 18:47:36 crc kubenswrapper[4861]: W0310 18:47:36.711939 4861 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Mar 10 18:47:36 crc kubenswrapper[4861]: W0310 18:47:36.712123 4861 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Mar 10 18:47:36 crc kubenswrapper[4861]: W0310 18:47:36.712131 4861 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Mar 10 18:47:36 crc kubenswrapper[4861]: W0310 18:47:36.712138 4861 feature_gate.go:330] unrecognized feature gate: OVNObservability Mar 10 18:47:36 crc kubenswrapper[4861]: W0310 18:47:36.712146 4861 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Mar 10 18:47:36 crc kubenswrapper[4861]: W0310 18:47:36.712154 4861 feature_gate.go:330] unrecognized feature gate: NewOLM Mar 10 18:47:36 crc kubenswrapper[4861]: W0310 18:47:36.712161 4861 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Mar 10 18:47:36 crc kubenswrapper[4861]: W0310 18:47:36.712169 4861 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Mar 10 18:47:36 crc kubenswrapper[4861]: W0310 18:47:36.712179 4861 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Mar 10 18:47:36 crc kubenswrapper[4861]: W0310 18:47:36.712189 4861 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Mar 10 18:47:36 crc kubenswrapper[4861]: W0310 18:47:36.712198 4861 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Mar 10 18:47:36 crc kubenswrapper[4861]: W0310 18:47:36.712207 4861 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Mar 10 18:47:36 crc kubenswrapper[4861]: W0310 18:47:36.712215 4861 feature_gate.go:330] unrecognized feature gate: InsightsConfig Mar 10 18:47:36 crc kubenswrapper[4861]: W0310 18:47:36.712224 4861 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Mar 10 18:47:36 crc kubenswrapper[4861]: W0310 18:47:36.712232 4861 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Mar 10 18:47:36 crc kubenswrapper[4861]: W0310 18:47:36.712240 4861 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Mar 10 18:47:36 crc kubenswrapper[4861]: W0310 18:47:36.712249 4861 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Mar 10 18:47:36 crc kubenswrapper[4861]: W0310 18:47:36.712259 4861 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Mar 10 18:47:36 crc kubenswrapper[4861]: W0310 18:47:36.712271 4861 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Mar 10 18:47:36 crc kubenswrapper[4861]: W0310 18:47:36.712279 4861 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.714171 4861 flags.go:64] FLAG: --address="0.0.0.0" Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.714194 4861 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]" Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.714212 4861 flags.go:64] FLAG: --anonymous-auth="true" Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.714223 4861 flags.go:64] FLAG: --application-metrics-count-limit="100" Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.714235 4861 flags.go:64] FLAG: --authentication-token-webhook="false" Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.714244 4861 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s" Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.714256 4861 flags.go:64] FLAG: --authorization-mode="AlwaysAllow" Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.714267 4861 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s" Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.714276 4861 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s" Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.714285 4861 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id" Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.714295 4861 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig" Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.714305 4861 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki" Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.714315 4861 flags.go:64] FLAG: --cgroup-driver="cgroupfs" Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.714324 4861 flags.go:64] FLAG: 
--cgroup-root="" Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.714333 4861 flags.go:64] FLAG: --cgroups-per-qos="true" Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.714342 4861 flags.go:64] FLAG: --client-ca-file="" Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.714351 4861 flags.go:64] FLAG: --cloud-config="" Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.714360 4861 flags.go:64] FLAG: --cloud-provider="" Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.714368 4861 flags.go:64] FLAG: --cluster-dns="[]" Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.714378 4861 flags.go:64] FLAG: --cluster-domain="" Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.714387 4861 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf" Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.714397 4861 flags.go:64] FLAG: --config-dir="" Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.714406 4861 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json" Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.714415 4861 flags.go:64] FLAG: --container-log-max-files="5" Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.714427 4861 flags.go:64] FLAG: --container-log-max-size="10Mi" Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.714435 4861 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock" Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.714445 4861 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock" Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.714455 4861 flags.go:64] FLAG: --containerd-namespace="k8s.io" Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.714464 4861 flags.go:64] FLAG: --contention-profiling="false" Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.714473 4861 flags.go:64] FLAG: --cpu-cfs-quota="true" Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.714482 4861 flags.go:64] FLAG: 
--cpu-cfs-quota-period="100ms" Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.714491 4861 flags.go:64] FLAG: --cpu-manager-policy="none" Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.714501 4861 flags.go:64] FLAG: --cpu-manager-policy-options="" Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.714512 4861 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s" Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.714521 4861 flags.go:64] FLAG: --enable-controller-attach-detach="true" Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.714530 4861 flags.go:64] FLAG: --enable-debugging-handlers="true" Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.714539 4861 flags.go:64] FLAG: --enable-load-reader="false" Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.714548 4861 flags.go:64] FLAG: --enable-server="true" Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.714557 4861 flags.go:64] FLAG: --enforce-node-allocatable="[pods]" Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.714568 4861 flags.go:64] FLAG: --event-burst="100" Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.714578 4861 flags.go:64] FLAG: --event-qps="50" Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.714586 4861 flags.go:64] FLAG: --event-storage-age-limit="default=0" Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.714595 4861 flags.go:64] FLAG: --event-storage-event-limit="default=0" Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.714604 4861 flags.go:64] FLAG: --eviction-hard="" Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.714615 4861 flags.go:64] FLAG: --eviction-max-pod-grace-period="0" Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.714624 4861 flags.go:64] FLAG: --eviction-minimum-reclaim="" Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.714632 4861 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s" Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.714642 4861 
flags.go:64] FLAG: --eviction-soft="" Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.714651 4861 flags.go:64] FLAG: --eviction-soft-grace-period="" Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.714660 4861 flags.go:64] FLAG: --exit-on-lock-contention="false" Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.714669 4861 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false" Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.714677 4861 flags.go:64] FLAG: --experimental-mounter-path="" Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.714686 4861 flags.go:64] FLAG: --fail-cgroupv1="false" Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.714695 4861 flags.go:64] FLAG: --fail-swap-on="true" Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.714704 4861 flags.go:64] FLAG: --feature-gates="" Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.714740 4861 flags.go:64] FLAG: --file-check-frequency="20s" Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.714749 4861 flags.go:64] FLAG: --global-housekeeping-interval="1m0s" Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.714758 4861 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge" Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.714767 4861 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1" Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.714777 4861 flags.go:64] FLAG: --healthz-port="10248" Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.714786 4861 flags.go:64] FLAG: --help="false" Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.714795 4861 flags.go:64] FLAG: --hostname-override="" Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.714804 4861 flags.go:64] FLAG: --housekeeping-interval="10s" Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.714813 4861 flags.go:64] FLAG: --http-check-frequency="20s" Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.714822 4861 flags.go:64] FLAG: 
--image-credential-provider-bin-dir="" Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.714831 4861 flags.go:64] FLAG: --image-credential-provider-config="" Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.714840 4861 flags.go:64] FLAG: --image-gc-high-threshold="85" Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.714849 4861 flags.go:64] FLAG: --image-gc-low-threshold="80" Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.714859 4861 flags.go:64] FLAG: --image-service-endpoint="" Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.714868 4861 flags.go:64] FLAG: --kernel-memcg-notification="false" Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.714877 4861 flags.go:64] FLAG: --kube-api-burst="100" Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.714886 4861 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf" Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.714896 4861 flags.go:64] FLAG: --kube-api-qps="50" Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.714905 4861 flags.go:64] FLAG: --kube-reserved="" Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.714914 4861 flags.go:64] FLAG: --kube-reserved-cgroup="" Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.714923 4861 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig" Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.714932 4861 flags.go:64] FLAG: --kubelet-cgroups="" Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.714941 4861 flags.go:64] FLAG: --local-storage-capacity-isolation="true" Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.714950 4861 flags.go:64] FLAG: --lock-file="" Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.714959 4861 flags.go:64] FLAG: --log-cadvisor-usage="false" Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.714968 4861 flags.go:64] FLAG: --log-flush-frequency="5s" Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.714977 4861 
flags.go:64] FLAG: --log-json-info-buffer-size="0" Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.714990 4861 flags.go:64] FLAG: --log-json-split-stream="false" Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.714999 4861 flags.go:64] FLAG: --log-text-info-buffer-size="0" Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.715008 4861 flags.go:64] FLAG: --log-text-split-stream="false" Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.715017 4861 flags.go:64] FLAG: --logging-format="text" Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.715026 4861 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id" Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.715035 4861 flags.go:64] FLAG: --make-iptables-util-chains="true" Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.715044 4861 flags.go:64] FLAG: --manifest-url="" Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.715053 4861 flags.go:64] FLAG: --manifest-url-header="" Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.715065 4861 flags.go:64] FLAG: --max-housekeeping-interval="15s" Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.715075 4861 flags.go:64] FLAG: --max-open-files="1000000" Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.715087 4861 flags.go:64] FLAG: --max-pods="110" Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.715096 4861 flags.go:64] FLAG: --maximum-dead-containers="-1" Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.715105 4861 flags.go:64] FLAG: --maximum-dead-containers-per-container="1" Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.715114 4861 flags.go:64] FLAG: --memory-manager-policy="None" Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.715123 4861 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s" Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.715132 4861 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s" Mar 10 18:47:36 crc 
kubenswrapper[4861]: I0310 18:47:36.715141 4861 flags.go:64] FLAG: --node-ip="192.168.126.11" Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.715150 4861 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos" Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.715170 4861 flags.go:64] FLAG: --node-status-max-images="50" Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.715179 4861 flags.go:64] FLAG: --node-status-update-frequency="10s" Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.715188 4861 flags.go:64] FLAG: --oom-score-adj="-999" Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.715199 4861 flags.go:64] FLAG: --pod-cidr="" Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.715209 4861 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d" Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.715224 4861 flags.go:64] FLAG: --pod-manifest-path="" Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.715233 4861 flags.go:64] FLAG: --pod-max-pids="-1" Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.715242 4861 flags.go:64] FLAG: --pods-per-core="0" Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.715251 4861 flags.go:64] FLAG: --port="10250" Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.715260 4861 flags.go:64] FLAG: --protect-kernel-defaults="false" Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.715269 4861 flags.go:64] FLAG: --provider-id="" Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.715278 4861 flags.go:64] FLAG: --qos-reserved="" Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.715287 4861 flags.go:64] FLAG: --read-only-port="10255" Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.715296 4861 flags.go:64] FLAG: --register-node="true" Mar 10 18:47:36 crc 
kubenswrapper[4861]: I0310 18:47:36.715304 4861 flags.go:64] FLAG: --register-schedulable="true" Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.715314 4861 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule" Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.715339 4861 flags.go:64] FLAG: --registry-burst="10" Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.715348 4861 flags.go:64] FLAG: --registry-qps="5" Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.715356 4861 flags.go:64] FLAG: --reserved-cpus="" Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.715365 4861 flags.go:64] FLAG: --reserved-memory="" Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.715376 4861 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf" Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.715385 4861 flags.go:64] FLAG: --root-dir="/var/lib/kubelet" Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.715394 4861 flags.go:64] FLAG: --rotate-certificates="false" Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.715405 4861 flags.go:64] FLAG: --rotate-server-certificates="false" Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.715414 4861 flags.go:64] FLAG: --runonce="false" Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.715423 4861 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service" Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.715433 4861 flags.go:64] FLAG: --runtime-request-timeout="2m0s" Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.715442 4861 flags.go:64] FLAG: --seccomp-default="false" Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.715451 4861 flags.go:64] FLAG: --serialize-image-pulls="true" Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.715460 4861 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s" Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.715468 4861 flags.go:64] FLAG: --storage-driver-db="cadvisor" Mar 
10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.715478 4861 flags.go:64] FLAG: --storage-driver-host="localhost:8086" Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.715487 4861 flags.go:64] FLAG: --storage-driver-password="root" Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.715496 4861 flags.go:64] FLAG: --storage-driver-secure="false" Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.715505 4861 flags.go:64] FLAG: --storage-driver-table="stats" Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.715514 4861 flags.go:64] FLAG: --storage-driver-user="root" Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.715523 4861 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s" Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.715532 4861 flags.go:64] FLAG: --sync-frequency="1m0s" Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.715542 4861 flags.go:64] FLAG: --system-cgroups="" Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.715551 4861 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi" Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.715565 4861 flags.go:64] FLAG: --system-reserved-cgroup="" Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.715574 4861 flags.go:64] FLAG: --tls-cert-file="" Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.715583 4861 flags.go:64] FLAG: --tls-cipher-suites="[]" Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.715594 4861 flags.go:64] FLAG: --tls-min-version="" Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.715602 4861 flags.go:64] FLAG: --tls-private-key-file="" Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.715611 4861 flags.go:64] FLAG: --topology-manager-policy="none" Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.715620 4861 flags.go:64] FLAG: --topology-manager-policy-options="" Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.715629 4861 flags.go:64] FLAG: 
--topology-manager-scope="container" Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.715638 4861 flags.go:64] FLAG: --v="2" Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.715650 4861 flags.go:64] FLAG: --version="false" Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.715660 4861 flags.go:64] FLAG: --vmodule="" Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.715672 4861 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec" Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.715685 4861 flags.go:64] FLAG: --volume-stats-agg-period="1m0s" Mar 10 18:47:36 crc kubenswrapper[4861]: W0310 18:47:36.715966 4861 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Mar 10 18:47:36 crc kubenswrapper[4861]: W0310 18:47:36.715982 4861 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Mar 10 18:47:36 crc kubenswrapper[4861]: W0310 18:47:36.715992 4861 feature_gate.go:330] unrecognized feature gate: PinnedImages Mar 10 18:47:36 crc kubenswrapper[4861]: W0310 18:47:36.716000 4861 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Mar 10 18:47:36 crc kubenswrapper[4861]: W0310 18:47:36.716009 4861 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Mar 10 18:47:36 crc kubenswrapper[4861]: W0310 18:47:36.716017 4861 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Mar 10 18:47:36 crc kubenswrapper[4861]: W0310 18:47:36.716025 4861 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Mar 10 18:47:36 crc kubenswrapper[4861]: W0310 18:47:36.716032 4861 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Mar 10 18:47:36 crc kubenswrapper[4861]: W0310 18:47:36.716040 4861 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Mar 10 18:47:36 crc kubenswrapper[4861]: W0310 18:47:36.716048 4861 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Mar 10 18:47:36 
crc kubenswrapper[4861]: W0310 18:47:36.716056 4861 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Mar 10 18:47:36 crc kubenswrapper[4861]: W0310 18:47:36.716063 4861 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Mar 10 18:47:36 crc kubenswrapper[4861]: W0310 18:47:36.716071 4861 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Mar 10 18:47:36 crc kubenswrapper[4861]: W0310 18:47:36.716079 4861 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Mar 10 18:47:36 crc kubenswrapper[4861]: W0310 18:47:36.716086 4861 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Mar 10 18:47:36 crc kubenswrapper[4861]: W0310 18:47:36.716095 4861 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Mar 10 18:47:36 crc kubenswrapper[4861]: W0310 18:47:36.716103 4861 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Mar 10 18:47:36 crc kubenswrapper[4861]: W0310 18:47:36.716110 4861 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Mar 10 18:47:36 crc kubenswrapper[4861]: W0310 18:47:36.716118 4861 feature_gate.go:330] unrecognized feature gate: NewOLM Mar 10 18:47:36 crc kubenswrapper[4861]: W0310 18:47:36.716125 4861 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Mar 10 18:47:36 crc kubenswrapper[4861]: W0310 18:47:36.716133 4861 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Mar 10 18:47:36 crc kubenswrapper[4861]: W0310 18:47:36.716141 4861 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Mar 10 18:47:36 crc kubenswrapper[4861]: W0310 18:47:36.716149 4861 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Mar 10 18:47:36 crc kubenswrapper[4861]: W0310 18:47:36.716157 4861 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Mar 10 18:47:36 crc kubenswrapper[4861]: W0310 18:47:36.716165 4861 feature_gate.go:330] 
unrecognized feature gate: VSphereStaticIPs Mar 10 18:47:36 crc kubenswrapper[4861]: W0310 18:47:36.716174 4861 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Mar 10 18:47:36 crc kubenswrapper[4861]: W0310 18:47:36.716181 4861 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Mar 10 18:47:36 crc kubenswrapper[4861]: W0310 18:47:36.716191 4861 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Mar 10 18:47:36 crc kubenswrapper[4861]: W0310 18:47:36.716202 4861 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Mar 10 18:47:36 crc kubenswrapper[4861]: W0310 18:47:36.716210 4861 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Mar 10 18:47:36 crc kubenswrapper[4861]: W0310 18:47:36.716219 4861 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Mar 10 18:47:36 crc kubenswrapper[4861]: W0310 18:47:36.716227 4861 feature_gate.go:330] unrecognized feature gate: Example Mar 10 18:47:36 crc kubenswrapper[4861]: W0310 18:47:36.716235 4861 feature_gate.go:330] unrecognized feature gate: GatewayAPI Mar 10 18:47:36 crc kubenswrapper[4861]: W0310 18:47:36.716243 4861 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Mar 10 18:47:36 crc kubenswrapper[4861]: W0310 18:47:36.716251 4861 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Mar 10 18:47:36 crc kubenswrapper[4861]: W0310 18:47:36.716258 4861 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Mar 10 18:47:36 crc kubenswrapper[4861]: W0310 18:47:36.716266 4861 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Mar 10 18:47:36 crc kubenswrapper[4861]: W0310 18:47:36.716274 4861 feature_gate.go:330] unrecognized feature gate: InsightsConfig Mar 10 18:47:36 crc kubenswrapper[4861]: 
W0310 18:47:36.716284 4861 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Mar 10 18:47:36 crc kubenswrapper[4861]: W0310 18:47:36.716293 4861 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Mar 10 18:47:36 crc kubenswrapper[4861]: W0310 18:47:36.716301 4861 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Mar 10 18:47:36 crc kubenswrapper[4861]: W0310 18:47:36.716310 4861 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Mar 10 18:47:36 crc kubenswrapper[4861]: W0310 18:47:36.716317 4861 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Mar 10 18:47:36 crc kubenswrapper[4861]: W0310 18:47:36.716325 4861 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Mar 10 18:47:36 crc kubenswrapper[4861]: W0310 18:47:36.716332 4861 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Mar 10 18:47:36 crc kubenswrapper[4861]: W0310 18:47:36.716340 4861 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Mar 10 18:47:36 crc kubenswrapper[4861]: W0310 18:47:36.716348 4861 feature_gate.go:330] unrecognized feature gate: PlatformOperators Mar 10 18:47:36 crc kubenswrapper[4861]: W0310 18:47:36.716356 4861 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Mar 10 18:47:36 crc kubenswrapper[4861]: W0310 18:47:36.716363 4861 feature_gate.go:330] unrecognized feature gate: SignatureStores Mar 10 18:47:36 crc kubenswrapper[4861]: W0310 18:47:36.716371 4861 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Mar 10 18:47:36 crc kubenswrapper[4861]: W0310 18:47:36.716379 4861 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Mar 10 18:47:36 crc kubenswrapper[4861]: W0310 18:47:36.716386 4861 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Mar 10 18:47:36 crc kubenswrapper[4861]: W0310 18:47:36.716394 4861 
feature_gate.go:330] unrecognized feature gate: OVNObservability Mar 10 18:47:36 crc kubenswrapper[4861]: W0310 18:47:36.716403 4861 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Mar 10 18:47:36 crc kubenswrapper[4861]: W0310 18:47:36.716410 4861 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Mar 10 18:47:36 crc kubenswrapper[4861]: W0310 18:47:36.716418 4861 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Mar 10 18:47:36 crc kubenswrapper[4861]: W0310 18:47:36.716426 4861 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Mar 10 18:47:36 crc kubenswrapper[4861]: W0310 18:47:36.716434 4861 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Mar 10 18:47:36 crc kubenswrapper[4861]: W0310 18:47:36.716441 4861 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Mar 10 18:47:36 crc kubenswrapper[4861]: W0310 18:47:36.716450 4861 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Mar 10 18:47:36 crc kubenswrapper[4861]: W0310 18:47:36.716458 4861 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Mar 10 18:47:36 crc kubenswrapper[4861]: W0310 18:47:36.716466 4861 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Mar 10 18:47:36 crc kubenswrapper[4861]: W0310 18:47:36.716474 4861 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Mar 10 18:47:36 crc kubenswrapper[4861]: W0310 18:47:36.716481 4861 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Mar 10 18:47:36 crc kubenswrapper[4861]: W0310 18:47:36.716489 4861 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Mar 10 18:47:36 crc kubenswrapper[4861]: W0310 18:47:36.716523 4861 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Mar 10 18:47:36 crc kubenswrapper[4861]: W0310 18:47:36.716534 4861 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Mar 
10 18:47:36 crc kubenswrapper[4861]: W0310 18:47:36.716544 4861 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Mar 10 18:47:36 crc kubenswrapper[4861]: W0310 18:47:36.716553 4861 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Mar 10 18:47:36 crc kubenswrapper[4861]: W0310 18:47:36.716563 4861 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Mar 10 18:47:36 crc kubenswrapper[4861]: W0310 18:47:36.716573 4861 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.716594 4861 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.727221 4861 server.go:491] "Kubelet version" kubeletVersion="v1.31.5" Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.727270 4861 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 10 18:47:36 crc kubenswrapper[4861]: W0310 18:47:36.727429 4861 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Mar 10 18:47:36 crc kubenswrapper[4861]: W0310 18:47:36.727450 4861 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Mar 10 18:47:36 crc kubenswrapper[4861]: W0310 18:47:36.727463 4861 feature_gate.go:330] unrecognized feature gate: SignatureStores Mar 10 18:47:36 crc kubenswrapper[4861]: W0310 18:47:36.727474 4861 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Mar 10 18:47:36 crc kubenswrapper[4861]: W0310 18:47:36.727486 4861 
feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Mar 10 18:47:36 crc kubenswrapper[4861]: W0310 18:47:36.727497 4861 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Mar 10 18:47:36 crc kubenswrapper[4861]: W0310 18:47:36.727507 4861 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Mar 10 18:47:36 crc kubenswrapper[4861]: W0310 18:47:36.727517 4861 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Mar 10 18:47:36 crc kubenswrapper[4861]: W0310 18:47:36.727527 4861 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Mar 10 18:47:36 crc kubenswrapper[4861]: W0310 18:47:36.727538 4861 feature_gate.go:330] unrecognized feature gate: Example Mar 10 18:47:36 crc kubenswrapper[4861]: W0310 18:47:36.727548 4861 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Mar 10 18:47:36 crc kubenswrapper[4861]: W0310 18:47:36.727558 4861 feature_gate.go:330] unrecognized feature gate: OVNObservability Mar 10 18:47:36 crc kubenswrapper[4861]: W0310 18:47:36.727568 4861 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Mar 10 18:47:36 crc kubenswrapper[4861]: W0310 18:47:36.727578 4861 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Mar 10 18:47:36 crc kubenswrapper[4861]: W0310 18:47:36.727589 4861 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Mar 10 18:47:36 crc kubenswrapper[4861]: W0310 18:47:36.727599 4861 feature_gate.go:330] unrecognized feature gate: NewOLM Mar 10 18:47:36 crc kubenswrapper[4861]: W0310 18:47:36.727610 4861 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Mar 10 18:47:36 crc kubenswrapper[4861]: W0310 18:47:36.727620 4861 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Mar 10 18:47:36 crc kubenswrapper[4861]: W0310 18:47:36.727630 4861 feature_gate.go:330] unrecognized feature gate: 
SigstoreImageVerification Mar 10 18:47:36 crc kubenswrapper[4861]: W0310 18:47:36.727640 4861 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Mar 10 18:47:36 crc kubenswrapper[4861]: W0310 18:47:36.727650 4861 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Mar 10 18:47:36 crc kubenswrapper[4861]: W0310 18:47:36.727659 4861 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Mar 10 18:47:36 crc kubenswrapper[4861]: W0310 18:47:36.727670 4861 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Mar 10 18:47:36 crc kubenswrapper[4861]: W0310 18:47:36.727681 4861 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Mar 10 18:47:36 crc kubenswrapper[4861]: W0310 18:47:36.727691 4861 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Mar 10 18:47:36 crc kubenswrapper[4861]: W0310 18:47:36.727702 4861 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Mar 10 18:47:36 crc kubenswrapper[4861]: W0310 18:47:36.727751 4861 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Mar 10 18:47:36 crc kubenswrapper[4861]: W0310 18:47:36.727761 4861 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Mar 10 18:47:36 crc kubenswrapper[4861]: W0310 18:47:36.727771 4861 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Mar 10 18:47:36 crc kubenswrapper[4861]: W0310 18:47:36.727780 4861 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Mar 10 18:47:36 crc kubenswrapper[4861]: W0310 18:47:36.727790 4861 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Mar 10 18:47:36 crc kubenswrapper[4861]: W0310 18:47:36.727800 4861 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Mar 10 18:47:36 crc kubenswrapper[4861]: W0310 18:47:36.727809 4861 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Mar 10 18:47:36 crc kubenswrapper[4861]: W0310 
18:47:36.727820 4861 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Mar 10 18:47:36 crc kubenswrapper[4861]: W0310 18:47:36.727832 4861 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Mar 10 18:47:36 crc kubenswrapper[4861]: W0310 18:47:36.727846 4861 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Mar 10 18:47:36 crc kubenswrapper[4861]: W0310 18:47:36.727862 4861 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Mar 10 18:47:36 crc kubenswrapper[4861]: W0310 18:47:36.727875 4861 feature_gate.go:330] unrecognized feature gate: GatewayAPI Mar 10 18:47:36 crc kubenswrapper[4861]: W0310 18:47:36.727886 4861 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Mar 10 18:47:36 crc kubenswrapper[4861]: W0310 18:47:36.727895 4861 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Mar 10 18:47:36 crc kubenswrapper[4861]: W0310 18:47:36.727907 4861 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Mar 10 18:47:36 crc kubenswrapper[4861]: W0310 18:47:36.727917 4861 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Mar 10 18:47:36 crc kubenswrapper[4861]: W0310 18:47:36.727928 4861 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Mar 10 18:47:36 crc kubenswrapper[4861]: W0310 18:47:36.727938 4861 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Mar 10 18:47:36 crc kubenswrapper[4861]: W0310 18:47:36.727948 4861 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Mar 10 18:47:36 crc kubenswrapper[4861]: W0310 18:47:36.727959 4861 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Mar 10 18:47:36 crc kubenswrapper[4861]: W0310 18:47:36.727968 4861 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Mar 10 18:47:36 crc kubenswrapper[4861]: W0310 
18:47:36.727978 4861 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Mar 10 18:47:36 crc kubenswrapper[4861]: W0310 18:47:36.727988 4861 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Mar 10 18:47:36 crc kubenswrapper[4861]: W0310 18:47:36.727998 4861 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Mar 10 18:47:36 crc kubenswrapper[4861]: W0310 18:47:36.728007 4861 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Mar 10 18:47:36 crc kubenswrapper[4861]: W0310 18:47:36.728018 4861 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Mar 10 18:47:36 crc kubenswrapper[4861]: W0310 18:47:36.728027 4861 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Mar 10 18:47:36 crc kubenswrapper[4861]: W0310 18:47:36.728037 4861 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Mar 10 18:47:36 crc kubenswrapper[4861]: W0310 18:47:36.728050 4861 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Mar 10 18:47:36 crc kubenswrapper[4861]: W0310 18:47:36.728063 4861 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Mar 10 18:47:36 crc kubenswrapper[4861]: W0310 18:47:36.728076 4861 feature_gate.go:330] unrecognized feature gate: PinnedImages Mar 10 18:47:36 crc kubenswrapper[4861]: W0310 18:47:36.728088 4861 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Mar 10 18:47:36 crc kubenswrapper[4861]: W0310 18:47:36.728098 4861 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Mar 10 18:47:36 crc kubenswrapper[4861]: W0310 18:47:36.728111 4861 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Mar 10 18:47:36 crc kubenswrapper[4861]: W0310 18:47:36.728121 4861 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Mar 10 18:47:36 crc kubenswrapper[4861]: W0310 18:47:36.728132 4861 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Mar 10 18:47:36 crc kubenswrapper[4861]: W0310 18:47:36.728141 4861 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Mar 10 18:47:36 crc kubenswrapper[4861]: W0310 18:47:36.728151 4861 feature_gate.go:330] unrecognized feature gate: InsightsConfig Mar 10 18:47:36 crc kubenswrapper[4861]: W0310 18:47:36.728162 4861 feature_gate.go:330] unrecognized feature gate: PlatformOperators Mar 10 18:47:36 crc kubenswrapper[4861]: W0310 18:47:36.728173 4861 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Mar 10 18:47:36 crc kubenswrapper[4861]: W0310 18:47:36.728187 4861 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Mar 10 18:47:36 crc kubenswrapper[4861]: W0310 18:47:36.728199 4861 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Mar 10 18:47:36 crc kubenswrapper[4861]: W0310 18:47:36.728212 4861 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Mar 10 18:47:36 crc kubenswrapper[4861]: W0310 18:47:36.728226 4861 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Mar 10 18:47:36 crc kubenswrapper[4861]: W0310 18:47:36.728239 4861 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.728257 4861 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Mar 10 18:47:36 crc kubenswrapper[4861]: W0310 18:47:36.728608 4861 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Mar 10 18:47:36 crc kubenswrapper[4861]: W0310 18:47:36.728633 4861 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Mar 10 18:47:36 crc kubenswrapper[4861]: W0310 18:47:36.728650 4861 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Mar 10 18:47:36 crc kubenswrapper[4861]: W0310 18:47:36.728662 4861 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Mar 10 18:47:36 crc kubenswrapper[4861]: W0310 18:47:36.728675 4861 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Mar 10 18:47:36 crc kubenswrapper[4861]: W0310 18:47:36.728685 4861 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Mar 10 18:47:36 crc kubenswrapper[4861]: W0310 18:47:36.728697 4861 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Mar 10 18:47:36 crc kubenswrapper[4861]: W0310 18:47:36.728744 4861 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Mar 10 18:47:36 crc kubenswrapper[4861]: W0310 18:47:36.728757 4861 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Mar 10 18:47:36 crc kubenswrapper[4861]: W0310 18:47:36.728768 4861 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Mar 10 18:47:36 crc kubenswrapper[4861]: W0310 18:47:36.728778 4861 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Mar 10 18:47:36 crc kubenswrapper[4861]: W0310 18:47:36.728788 4861 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Mar 10 18:47:36 crc kubenswrapper[4861]: W0310 18:47:36.728801 4861 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Mar 10 18:47:36 crc kubenswrapper[4861]: W0310 18:47:36.728815 4861 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Mar 10 18:47:36 crc kubenswrapper[4861]: W0310 18:47:36.728827 4861 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Mar 10 18:47:36 crc kubenswrapper[4861]: W0310 18:47:36.728839 4861 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Mar 10 18:47:36 crc kubenswrapper[4861]: W0310 18:47:36.728851 4861 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Mar 10 18:47:36 crc kubenswrapper[4861]: W0310 18:47:36.728864 4861 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Mar 10 18:47:36 crc kubenswrapper[4861]: W0310 18:47:36.728875 4861 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Mar 10 18:47:36 crc kubenswrapper[4861]: W0310 18:47:36.728885 4861 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Mar 10 18:47:36 crc kubenswrapper[4861]: W0310 18:47:36.728896 4861 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Mar 10 18:47:36 crc kubenswrapper[4861]: W0310 18:47:36.728907 4861 feature_gate.go:330] unrecognized feature gate: InsightsConfig Mar 10 18:47:36 crc kubenswrapper[4861]: W0310 18:47:36.728917 4861 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Mar 10 18:47:36 crc kubenswrapper[4861]: W0310 18:47:36.728927 4861 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Mar 10 18:47:36 crc kubenswrapper[4861]: W0310 18:47:36.728937 4861 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Mar 10 18:47:36 crc kubenswrapper[4861]: W0310 18:47:36.728948 4861 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Mar 10 18:47:36 crc kubenswrapper[4861]: W0310 18:47:36.728959 4861 feature_gate.go:330] unrecognized 
feature gate: VSphereMultiVCenters Mar 10 18:47:36 crc kubenswrapper[4861]: W0310 18:47:36.728970 4861 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Mar 10 18:47:36 crc kubenswrapper[4861]: W0310 18:47:36.728980 4861 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Mar 10 18:47:36 crc kubenswrapper[4861]: W0310 18:47:36.728990 4861 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Mar 10 18:47:36 crc kubenswrapper[4861]: W0310 18:47:36.729001 4861 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Mar 10 18:47:36 crc kubenswrapper[4861]: W0310 18:47:36.729011 4861 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Mar 10 18:47:36 crc kubenswrapper[4861]: W0310 18:47:36.729021 4861 feature_gate.go:330] unrecognized feature gate: Example Mar 10 18:47:36 crc kubenswrapper[4861]: W0310 18:47:36.729031 4861 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Mar 10 18:47:36 crc kubenswrapper[4861]: W0310 18:47:36.729044 4861 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Mar 10 18:47:36 crc kubenswrapper[4861]: W0310 18:47:36.729055 4861 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Mar 10 18:47:36 crc kubenswrapper[4861]: W0310 18:47:36.729065 4861 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Mar 10 18:47:36 crc kubenswrapper[4861]: W0310 18:47:36.729075 4861 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Mar 10 18:47:36 crc kubenswrapper[4861]: W0310 18:47:36.729086 4861 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Mar 10 18:47:36 crc kubenswrapper[4861]: W0310 18:47:36.729095 4861 feature_gate.go:330] unrecognized feature gate: PlatformOperators Mar 10 18:47:36 crc kubenswrapper[4861]: W0310 18:47:36.729105 4861 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Mar 10 18:47:36 crc kubenswrapper[4861]: W0310 18:47:36.729116 
4861 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Mar 10 18:47:36 crc kubenswrapper[4861]: W0310 18:47:36.729126 4861 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Mar 10 18:47:36 crc kubenswrapper[4861]: W0310 18:47:36.729135 4861 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Mar 10 18:47:36 crc kubenswrapper[4861]: W0310 18:47:36.729145 4861 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Mar 10 18:47:36 crc kubenswrapper[4861]: W0310 18:47:36.729155 4861 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Mar 10 18:47:36 crc kubenswrapper[4861]: W0310 18:47:36.729165 4861 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Mar 10 18:47:36 crc kubenswrapper[4861]: W0310 18:47:36.729175 4861 feature_gate.go:330] unrecognized feature gate: SignatureStores Mar 10 18:47:36 crc kubenswrapper[4861]: W0310 18:47:36.729185 4861 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Mar 10 18:47:36 crc kubenswrapper[4861]: W0310 18:47:36.729194 4861 feature_gate.go:330] unrecognized feature gate: PinnedImages Mar 10 18:47:36 crc kubenswrapper[4861]: W0310 18:47:36.729207 4861 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Mar 10 18:47:36 crc kubenswrapper[4861]: W0310 18:47:36.729218 4861 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Mar 10 18:47:36 crc kubenswrapper[4861]: W0310 18:47:36.729229 4861 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Mar 10 18:47:36 crc kubenswrapper[4861]: W0310 18:47:36.729240 4861 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Mar 10 18:47:36 crc kubenswrapper[4861]: W0310 18:47:36.729250 4861 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Mar 10 18:47:36 crc kubenswrapper[4861]: W0310 18:47:36.729260 4861 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Mar 10 18:47:36 crc kubenswrapper[4861]: W0310 18:47:36.729270 4861 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Mar 10 18:47:36 crc kubenswrapper[4861]: W0310 18:47:36.729280 4861 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Mar 10 18:47:36 crc kubenswrapper[4861]: W0310 18:47:36.729291 4861 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Mar 10 18:47:36 crc kubenswrapper[4861]: W0310 18:47:36.729301 4861 feature_gate.go:330] unrecognized feature gate: GatewayAPI Mar 10 18:47:36 crc kubenswrapper[4861]: W0310 18:47:36.729310 4861 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Mar 10 18:47:36 crc kubenswrapper[4861]: W0310 18:47:36.729320 4861 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Mar 10 18:47:36 crc kubenswrapper[4861]: W0310 18:47:36.729330 4861 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Mar 10 18:47:36 crc kubenswrapper[4861]: W0310 18:47:36.729340 4861 feature_gate.go:330] unrecognized feature gate: NewOLM Mar 10 18:47:36 crc kubenswrapper[4861]: W0310 18:47:36.729350 4861 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Mar 10 18:47:36 crc kubenswrapper[4861]: W0310 18:47:36.729360 4861 feature_gate.go:330] 
unrecognized feature gate: VolumeGroupSnapshot Mar 10 18:47:36 crc kubenswrapper[4861]: W0310 18:47:36.729370 4861 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Mar 10 18:47:36 crc kubenswrapper[4861]: W0310 18:47:36.729380 4861 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Mar 10 18:47:36 crc kubenswrapper[4861]: W0310 18:47:36.729390 4861 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Mar 10 18:47:36 crc kubenswrapper[4861]: W0310 18:47:36.729399 4861 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Mar 10 18:47:36 crc kubenswrapper[4861]: W0310 18:47:36.729412 4861 feature_gate.go:330] unrecognized feature gate: OVNObservability Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.729428 4861 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.731175 4861 server.go:940] "Client rotation is on, will bootstrap in background" Mar 10 18:47:36 crc kubenswrapper[4861]: E0310 18:47:36.740058 4861 bootstrap.go:266] "Unhandled Error" err="part of the existing bootstrap client certificate in /var/lib/kubelet/kubeconfig is expired: 2026-02-24 05:52:08 +0000 UTC" logger="UnhandledError" Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.744647 4861 bootstrap.go:101] "Use the bootstrap credentials to request a cert, and set kubeconfig to point to the certificate dir" Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.744819 4861 certificate_store.go:130] Loading cert/key pair from 
"/var/lib/kubelet/pki/kubelet-client-current.pem". Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.747588 4861 server.go:997] "Starting client certificate rotation" Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.747627 4861 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.747794 4861 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.774031 4861 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Mar 10 18:47:36 crc kubenswrapper[4861]: E0310 18:47:36.777288 4861 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.51:6443: connect: connection refused" logger="UnhandledError" Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.777794 4861 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.791824 4861 log.go:25] "Validated CRI v1 runtime API" Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.827634 4861 log.go:25] "Validated CRI v1 image API" Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.829986 4861 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.835867 4861 fs.go:133] Filesystem UUIDs: map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 2026-03-10-18-42-56-00:/dev/sr0 7B77-95E7:/dev/vda2 de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3] Mar 10 18:47:36 crc 
kubenswrapper[4861]: I0310 18:47:36.835908 4861 fs.go:134] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:42 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:43 fsType:tmpfs blockSize:0}] Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.853888 4861 manager.go:217] Machine: {Timestamp:2026-03-10 18:47:36.850677148 +0000 UTC m=+0.614113128 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2799998 MemoryCapacity:33654128640 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec SystemUUID:b4ef8d49-23f5-4cae-bbac-08586c607b9d BootID:19532032-9073-404f-bda8-c4343aa30670 Filesystems:[{Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16827064320 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6730825728 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:16827064320 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/run/user/1000 DeviceMajor:0 DeviceMinor:42 Capacity:3365412864 Type:vfs Inodes:821634 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:43 Capacity:1073741824 Type:vfs Inodes:4108170 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 
Size:214748364800 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:3f:67:99 Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:3f:67:99 Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:06:67:f6 Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:f0:9c:b3 Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:14:be:dc Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:5f:6e:65 Speed:-1 Mtu:1496} {Name:ens7.23 MacAddress:52:54:00:e3:76:92 Speed:-1 Mtu:1496} {Name:eth10 MacAddress:2e:0c:4a:84:7f:00 Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:c2:b5:c8:77:fe:83 Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654128640 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 
BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None} Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.854087 4861 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available. 
Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.854238 4861 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:}
Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.856218 4861 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.856368 4861 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.856394 4861 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.856605 4861 topology_manager.go:138] "Creating topology manager with none policy"
Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.856616 4861 container_manager_linux.go:303] "Creating device plugin manager"
Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.857293 4861 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock"
Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.857321 4861 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock"
Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.857798 4861 state_mem.go:36] "Initialized new in-memory state store"
Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.857881 4861 server.go:1245] "Using root directory" path="/var/lib/kubelet"
Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.861901 4861 kubelet.go:418] "Attempting to sync node with API server"
Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.861919 4861 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests"
Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.861941 4861 file.go:69] "Watching path" path="/etc/kubernetes/manifests"
Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.861953 4861 kubelet.go:324] "Adding apiserver pod source"
Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.861990 4861 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.867033 4861 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1"
Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.868304 4861 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem".
Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.870589 4861 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Mar 10 18:47:36 crc kubenswrapper[4861]: W0310 18:47:36.870546 4861 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.51:6443: connect: connection refused
Mar 10 18:47:36 crc kubenswrapper[4861]: W0310 18:47:36.870587 4861 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.51:6443: connect: connection refused
Mar 10 18:47:36 crc kubenswrapper[4861]: E0310 18:47:36.870884 4861 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.51:6443: connect: connection refused" logger="UnhandledError"
Mar 10 18:47:36 crc kubenswrapper[4861]: E0310 18:47:36.870816 4861 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.51:6443: connect: connection refused" logger="UnhandledError"
Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.872121 4861 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume"
Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.872144 4861 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir"
Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.872150 4861 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo"
Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.872157 4861 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path"
Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.872167 4861 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs"
Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.872190 4861 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret"
Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.872197 4861 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi"
Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.872209 4861 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api"
Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.872217 4861 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc"
Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.872224 4861 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap"
Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.872234 4861 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected"
Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.872241 4861 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume"
Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.874925 4861 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi"
Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.875341 4861 server.go:1280] "Started kubelet"
Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.875599 4861 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.875736 4861 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.876555 4861 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Mar 10 18:47:36 crc systemd[1]: Started Kubernetes Kubelet.
Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.877288 4861 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.51:6443: connect: connection refused
Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.878053 4861 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled
Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.878102 4861 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.878422 4861 volume_manager.go:287] "The desired_state_of_world populator starts"
Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.878448 4861 volume_manager.go:289] "Starting Kubelet Volume Manager"
Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.878603 4861 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Mar 10 18:47:36 crc kubenswrapper[4861]: E0310 18:47:36.878992 4861 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Mar 10 18:47:36 crc kubenswrapper[4861]: W0310 18:47:36.879227 4861 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.51:6443: connect: connection refused
Mar 10 18:47:36 crc kubenswrapper[4861]: E0310 18:47:36.879302 4861 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.51:6443: connect: connection refused" logger="UnhandledError"
Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.879628 4861 factory.go:55] Registering systemd factory
Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.879648 4861 factory.go:221] Registration of the systemd container factory successfully
Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.880097 4861 factory.go:153] Registering CRI-O factory
Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.880111 4861 factory.go:221] Registration of the crio container factory successfully
Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.880162 4861 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory
Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.880178 4861 factory.go:103] Registering Raw factory
Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.880194 4861 manager.go:1196] Started watching for new ooms in manager
Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.880857 4861 manager.go:319] Starting recovery of all containers
Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.883236 4861 server.go:460] "Adding debug handlers to kubelet server"
Mar 10 18:47:36 crc kubenswrapper[4861]: E0310 18:47:36.888452 4861 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.51:6443: connect: connection refused" interval="200ms"
Mar 10 18:47:36 crc kubenswrapper[4861]: E0310 18:47:36.890663 4861 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.51:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.189b8f511efb107e default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-10 18:47:36.875315326 +0000 UTC m=+0.638751286,LastTimestamp:2026-03-10 18:47:36.875315326 +0000 UTC m=+0.638751286,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.906404 4861 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext=""
Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.906481 4861 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext=""
Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.906506 4861 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext=""
Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.906526 4861 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" seLinuxMountContext=""
Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.906544 4861 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" seLinuxMountContext=""
Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.906567 4861 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext=""
Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.906592 4861 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext=""
Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.906618 4861 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext=""
Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.906648 4861 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext=""
Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.906672 4861 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" seLinuxMountContext=""
Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.906697 4861 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44663579-783b-4372-86d6-acf235a62d72" volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext=""
Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.906760 4861 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" seLinuxMountContext=""
Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.906788 4861 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext=""
Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.906814 4861 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext=""
Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.906838 4861 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext=""
Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.906866 4861 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext=""
Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.906894 4861 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext=""
Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.906921 4861 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext=""
Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.906990 4861 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext=""
Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.907027 4861 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext=""
Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.907056 4861 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" seLinuxMountContext=""
Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.907080 4861 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext=""
Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.907107 4861 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" seLinuxMountContext=""
Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.907131 4861 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext=""
Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.907197 4861 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext=""
Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.907229 4861 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext=""
Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.907261 4861 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" seLinuxMountContext=""
Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.907289 4861 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext=""
Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.907315 4861 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext=""
Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.907339 4861 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext=""
Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.907364 4861 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" seLinuxMountContext=""
Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.907394 4861 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext=""
Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.907419 4861 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext=""
Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.907444 4861 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext=""
Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.907468 4861 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" seLinuxMountContext=""
Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.907491 4861 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext=""
Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.907515 4861 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext=""
Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.907540 4861 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext=""
Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.907596 4861 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" seLinuxMountContext=""
Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.907626 4861 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext=""
Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.907650 4861 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" seLinuxMountContext=""
Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.907673 4861 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext=""
Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.907699 4861 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext=""
Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.907765 4861 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext=""
Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.907791 4861 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext=""
Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.907823 4861 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext=""
Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.907847 4861 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext=""
Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.907875 4861 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext=""
Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.907901 4861 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext=""
Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.907925 4861 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext=""
Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.907948 4861 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext=""
Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.907973 4861 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" seLinuxMountContext=""
Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.908009 4861 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext=""
Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.908035 4861 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" seLinuxMountContext=""
Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.908065 4861 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext=""
Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.908096 4861 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext=""
Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.908121 4861 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext=""
Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.908146 4861 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext=""
Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.908168 4861 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext=""
Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.908195 4861 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext=""
Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.908223 4861 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" seLinuxMountContext=""
Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.908249 4861 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext=""
Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.908274 4861 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext=""
Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.908302 4861 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext=""
Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.908327 4861 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext=""
Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.908356 4861 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext=""
Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.908381 4861 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext=""
Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.908407 4861 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext=""
Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.908430 4861 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext=""
Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.908453 4861 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3b6479f0-333b-4a96-9adf-2099afdc2447" volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext=""
Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.908477 4861 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext=""
Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.908500 4861 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" seLinuxMountContext=""
Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.908524 4861 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext=""
Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.908548 4861 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext=""
Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.908574 4861 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext=""
Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.908603 4861 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext=""
Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.908631 4861 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext=""
Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.908659 4861 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext=""
Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.908762 4861 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext=""
Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.908804 4861 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext=""
Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.908831 4861 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext=""
Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.908857 4861 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext=""
Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.908884 4861 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext=""
Mar 10 18:47:36 crc kubenswrapper[4861]: I0310
18:47:36.908911 4861 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext="" Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.908937 4861 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext="" Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.908962 4861 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext="" Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.908987 4861 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext="" Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.909013 4861 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext="" Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.909040 4861 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext="" Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.909065 4861 reconstruct.go:130] "Volume is marked as 
uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext="" Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.909089 4861 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext="" Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.909112 4861 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" seLinuxMountContext="" Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.909136 4861 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext="" Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.909162 4861 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext="" Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.909185 4861 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext="" Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.909229 4861 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext="" Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.909256 4861 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext="" Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.909282 4861 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext="" Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.909304 4861 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" seLinuxMountContext="" Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.909329 4861 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext="" Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.909354 4861 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext="" Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.909378 4861 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" 
volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext="" Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.909403 4861 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext="" Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.909431 4861 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext="" Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.909469 4861 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" seLinuxMountContext="" Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.909500 4861 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext="" Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.909528 4861 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" seLinuxMountContext="" Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.909553 4861 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" 
seLinuxMountContext="" Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.909582 4861 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext="" Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.909608 4861 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" seLinuxMountContext="" Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.909634 4861 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext="" Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.909662 4861 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext="" Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.909687 4861 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext="" Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.909791 4861 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext="" Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.909826 
4861 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext="" Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.909850 4861 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext="" Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.909874 4861 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext="" Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.909899 4861 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext="" Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.909922 4861 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext="" Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.909950 4861 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext="" Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.909975 4861 reconstruct.go:130] "Volume is marked as uncertain and 
added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" seLinuxMountContext="" Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.910001 4861 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext="" Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.910025 4861 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext="" Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.910049 4861 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext="" Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.910075 4861 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext="" Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.910099 4861 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext="" Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.910122 4861 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" 
volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext="" Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.910149 4861 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext="" Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.910173 4861 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext="" Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.910198 4861 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext="" Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.910223 4861 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" seLinuxMountContext="" Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.910248 4861 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext="" Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.910272 4861 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" 
seLinuxMountContext="" Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.910298 4861 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext="" Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.910323 4861 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext="" Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.910347 4861 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext="" Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.910370 4861 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext="" Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.910395 4861 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext="" Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.910421 4861 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext="" Mar 10 18:47:36 crc kubenswrapper[4861]: 
I0310 18:47:36.910445 4861 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext="" Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.910471 4861 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" seLinuxMountContext="" Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.910497 4861 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext="" Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.910519 4861 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext="" Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.910547 4861 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext="" Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.910572 4861 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext="" Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.910597 4861 reconstruct.go:130] "Volume is marked 
as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext="" Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.910621 4861 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext="" Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.910643 4861 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext="" Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.910667 4861 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" seLinuxMountContext="" Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.910689 4861 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext="" Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.910746 4861 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext="" Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.910776 4861 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" seLinuxMountContext="" Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.910801 4861 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext="" Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.910828 4861 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" seLinuxMountContext="" Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.910851 4861 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext="" Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.910873 4861 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext="" Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.910897 4861 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" seLinuxMountContext="" Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.910924 4861 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" 
volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext="" Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.910975 4861 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext="" Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.911006 4861 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext="" Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.911032 4861 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext="" Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.911056 4861 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext="" Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.911081 4861 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" seLinuxMountContext="" Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.911110 4861 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" 
volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext=""
Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.911134 4861 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext=""
Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.911157 4861 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" seLinuxMountContext=""
Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.911181 4861 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext=""
Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.911206 4861 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext=""
Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.911231 4861 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext=""
Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.911256 4861 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext=""
Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.911280 4861 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext=""
Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.911304 4861 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext=""
Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.911327 4861 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext=""
Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.911354 4861 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext=""
Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.911382 4861 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext=""
Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.911407 4861 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" seLinuxMountContext=""
Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.911430 4861 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext=""
Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.911455 4861 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext=""
Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.911494 4861 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext=""
Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.911518 4861 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext=""
Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.911541 4861 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext=""
Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.911567 4861 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext=""
Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.911589 4861 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext=""
Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.911615 4861 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" seLinuxMountContext=""
Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.911641 4861 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext=""
Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.911663 4861 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext=""
Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.911686 4861 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext=""
Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.911741 4861 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext=""
Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.911771 4861 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext=""
Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.911795 4861 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" seLinuxMountContext=""
Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.911821 4861 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext=""
Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.911849 4861 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext=""
Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.911873 4861 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext=""
Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.911897 4861 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext=""
Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.911922 4861 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext=""
Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.911947 4861 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext=""
Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.913900 4861 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount"
Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.913952 4861 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext=""
Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.913985 4861 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext=""
Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.914008 4861 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext=""
Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.914036 4861 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext=""
Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.914062 4861 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" seLinuxMountContext=""
Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.914089 4861 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext=""
Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.914112 4861 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" seLinuxMountContext=""
Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.914134 4861 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext=""
Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.914157 4861 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext=""
Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.914191 4861 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext=""
Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.914215 4861 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" seLinuxMountContext=""
Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.914237 4861 reconstruct.go:97] "Volume reconstruction finished"
Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.914252 4861 reconciler.go:26] "Reconciler: start to sync state"
Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.916616 4861 manager.go:324] Recovery completed
Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.938688 4861 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.945030 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.945092 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.945120 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.946275 4861 cpu_manager.go:225] "Starting CPU manager" policy="none"
Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.946308 4861 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s"
Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.946360 4861 state_mem.go:36] "Initialized new in-memory state store"
Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.954910 4861 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.956698 4861 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.956809 4861 status_manager.go:217] "Starting to sync pod status with apiserver"
Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.956845 4861 kubelet.go:2335] "Starting kubelet main sync loop"
Mar 10 18:47:36 crc kubenswrapper[4861]: E0310 18:47:36.956923 4861 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Mar 10 18:47:36 crc kubenswrapper[4861]: W0310 18:47:36.957679 4861 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.51:6443: connect: connection refused
Mar 10 18:47:36 crc kubenswrapper[4861]: E0310 18:47:36.957821 4861 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.51:6443: connect: connection refused" logger="UnhandledError"
Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.965282 4861 policy_none.go:49] "None policy: Start"
Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.966048 4861 memory_manager.go:170] "Starting memorymanager" policy="None"
Mar 10 18:47:36 crc kubenswrapper[4861]: I0310 18:47:36.966074 4861 state_mem.go:35] "Initializing new in-memory state store"
Mar 10 18:47:36 crc kubenswrapper[4861]: E0310 18:47:36.979902 4861 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Mar 10 18:47:37 crc kubenswrapper[4861]: I0310 18:47:37.045258 4861 manager.go:334] "Starting Device Plugin manager"
Mar 10 18:47:37 crc kubenswrapper[4861]: I0310 18:47:37.045320 4861 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Mar 10 18:47:37 crc kubenswrapper[4861]: I0310 18:47:37.045336 4861 server.go:79] "Starting device plugin registration server"
Mar 10 18:47:37 crc kubenswrapper[4861]: I0310 18:47:37.045881 4861 eviction_manager.go:189] "Eviction manager: starting control loop"
Mar 10 18:47:37 crc kubenswrapper[4861]: I0310 18:47:37.045912 4861 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Mar 10 18:47:37 crc kubenswrapper[4861]: I0310 18:47:37.046142 4861 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry"
Mar 10 18:47:37 crc kubenswrapper[4861]: I0310 18:47:37.046232 4861 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts"
Mar 10 18:47:37 crc kubenswrapper[4861]: I0310 18:47:37.046242 4861 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Mar 10 18:47:37 crc kubenswrapper[4861]: E0310 18:47:37.052392 4861 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Mar 10 18:47:37 crc kubenswrapper[4861]: I0310 18:47:37.057808 4861 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc","openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc"]
Mar 10 18:47:37 crc kubenswrapper[4861]: I0310 18:47:37.057932 4861 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 10 18:47:37 crc kubenswrapper[4861]: I0310 18:47:37.059415 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 10 18:47:37 crc kubenswrapper[4861]: I0310 18:47:37.059455 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 10 18:47:37 crc kubenswrapper[4861]: I0310 18:47:37.059467 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 10 18:47:37 crc kubenswrapper[4861]: I0310 18:47:37.059625 4861 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 10 18:47:37 crc kubenswrapper[4861]: I0310 18:47:37.059980 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Mar 10 18:47:37 crc kubenswrapper[4861]: I0310 18:47:37.060041 4861 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 10 18:47:37 crc kubenswrapper[4861]: I0310 18:47:37.060547 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 10 18:47:37 crc kubenswrapper[4861]: I0310 18:47:37.060604 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 10 18:47:37 crc kubenswrapper[4861]: I0310 18:47:37.060692 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 10 18:47:37 crc kubenswrapper[4861]: I0310 18:47:37.060921 4861 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 10 18:47:37 crc kubenswrapper[4861]: I0310 18:47:37.061038 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Mar 10 18:47:37 crc kubenswrapper[4861]: I0310 18:47:37.061075 4861 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 10 18:47:37 crc kubenswrapper[4861]: I0310 18:47:37.061493 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 10 18:47:37 crc kubenswrapper[4861]: I0310 18:47:37.061524 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 10 18:47:37 crc kubenswrapper[4861]: I0310 18:47:37.061533 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 10 18:47:37 crc kubenswrapper[4861]: I0310 18:47:37.061986 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 10 18:47:37 crc kubenswrapper[4861]: I0310 18:47:37.062012 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 10 18:47:37 crc kubenswrapper[4861]: I0310 18:47:37.062023 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 10 18:47:37 crc kubenswrapper[4861]: I0310 18:47:37.062235 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 10 18:47:37 crc kubenswrapper[4861]: I0310 18:47:37.062257 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 10 18:47:37 crc kubenswrapper[4861]: I0310 18:47:37.062271 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 10 18:47:37 crc kubenswrapper[4861]: I0310 18:47:37.062414 4861 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 10 18:47:37 crc kubenswrapper[4861]: I0310 18:47:37.062865 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc"
Mar 10 18:47:37 crc kubenswrapper[4861]: I0310 18:47:37.062898 4861 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 10 18:47:37 crc kubenswrapper[4861]: I0310 18:47:37.063494 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 10 18:47:37 crc kubenswrapper[4861]: I0310 18:47:37.063516 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 10 18:47:37 crc kubenswrapper[4861]: I0310 18:47:37.063525 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 10 18:47:37 crc kubenswrapper[4861]: I0310 18:47:37.063636 4861 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 10 18:47:37 crc kubenswrapper[4861]: I0310 18:47:37.063991 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 10 18:47:37 crc kubenswrapper[4861]: I0310 18:47:37.064010 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc"
Mar 10 18:47:37 crc kubenswrapper[4861]: I0310 18:47:37.064031 4861 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 10 18:47:37 crc kubenswrapper[4861]: I0310 18:47:37.064012 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 10 18:47:37 crc kubenswrapper[4861]: I0310 18:47:37.064124 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 10 18:47:37 crc kubenswrapper[4861]: I0310 18:47:37.064287 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 10 18:47:37 crc kubenswrapper[4861]: I0310 18:47:37.064307 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 10 18:47:37 crc kubenswrapper[4861]: I0310 18:47:37.064321 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 10 18:47:37 crc kubenswrapper[4861]: I0310 18:47:37.064448 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Mar 10 18:47:37 crc kubenswrapper[4861]: I0310 18:47:37.064475 4861 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 10 18:47:37 crc kubenswrapper[4861]: I0310 18:47:37.064573 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 10 18:47:37 crc kubenswrapper[4861]: I0310 18:47:37.064589 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 10 18:47:37 crc kubenswrapper[4861]: I0310 18:47:37.064597 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 10 18:47:37 crc kubenswrapper[4861]: I0310 18:47:37.065109 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 10 18:47:37 crc kubenswrapper[4861]: I0310 18:47:37.065136 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 10 18:47:37 crc kubenswrapper[4861]: I0310 18:47:37.065145 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 10 18:47:37 crc kubenswrapper[4861]: E0310 18:47:37.089582 4861 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.51:6443: connect: connection refused" interval="400ms"
Mar 10 18:47:37 crc kubenswrapper[4861]: I0310 18:47:37.117433 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Mar 10 18:47:37 crc kubenswrapper[4861]: I0310 18:47:37.117485 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Mar 10 18:47:37 crc kubenswrapper[4861]: I0310 18:47:37.117504 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Mar 10 18:47:37 crc kubenswrapper[4861]: I0310 18:47:37.117521 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Mar 10 18:47:37 crc kubenswrapper[4861]: I0310 18:47:37.117537 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Mar 10 18:47:37 crc kubenswrapper[4861]: I0310 18:47:37.117553 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Mar 10 18:47:37 crc kubenswrapper[4861]: I0310 18:47:37.117566 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Mar 10 18:47:37 crc kubenswrapper[4861]: I0310 18:47:37.117591 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Mar 10 18:47:37 crc kubenswrapper[4861]: I0310 18:47:37.117605 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Mar 10 18:47:37 crc kubenswrapper[4861]: I0310 18:47:37.117660 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Mar 10 18:47:37 crc kubenswrapper[4861]: I0310 18:47:37.117692 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Mar 10 18:47:37 crc kubenswrapper[4861]: I0310 18:47:37.117745 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Mar 10 18:47:37 crc kubenswrapper[4861]: I0310 18:47:37.117780 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Mar 10 18:47:37 crc kubenswrapper[4861]: I0310 18:47:37.117822 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Mar 10 18:47:37 crc kubenswrapper[4861]: I0310 18:47:37.117860 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Mar 10 18:47:37 crc kubenswrapper[4861]: I0310 18:47:37.146434 4861 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 10 18:47:37 crc kubenswrapper[4861]: I0310 18:47:37.147785 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 10 18:47:37 crc kubenswrapper[4861]: I0310 18:47:37.147839 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 10 18:47:37 crc kubenswrapper[4861]: I0310 18:47:37.147859 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 10 18:47:37 crc kubenswrapper[4861]: I0310 18:47:37.147914 4861 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Mar 10 18:47:37 crc kubenswrapper[4861]: E0310 18:47:37.148536 4861 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.51:6443: connect: connection refused" node="crc"
Mar 10 18:47:37 crc kubenswrapper[4861]: I0310 18:47:37.219087 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Mar 10 18:47:37 crc kubenswrapper[4861]: I0310 18:47:37.219148 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Mar 10 18:47:37 crc kubenswrapper[4861]: I0310 18:47:37.219185 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Mar 10 18:47:37 crc kubenswrapper[4861]: I0310 18:47:37.219223 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Mar 10 18:47:37 crc kubenswrapper[4861]: I0310 18:47:37.219252 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Mar 10 18:47:37 crc kubenswrapper[4861]: I0310 18:47:37.219281 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Mar 10 18:47:37 crc kubenswrapper[4861]: I0310 18:47:37.219309 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Mar 10 18:47:37 crc kubenswrapper[4861]: I0310 18:47:37.219337 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Mar 10 18:47:37 crc kubenswrapper[4861]: I0310 18:47:37.219370 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Mar 10 18:47:37 crc kubenswrapper[4861]: I0310 18:47:37.219400 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Mar 10 18:47:37 crc kubenswrapper[4861]: I0310 18:47:37.219412 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Mar 10 18:47:37 crc kubenswrapper[4861]: I0310 18:47:37.219493 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Mar 10 18:47:37 crc kubenswrapper[4861]: I0310 18:47:37.219429 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Mar 10 18:47:37 crc kubenswrapper[4861]: I0310 18:47:37.219548 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Mar 10 18:47:37 crc kubenswrapper[4861]: I0310 18:47:37.219556 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Mar 10 18:47:37 crc
kubenswrapper[4861]: I0310 18:47:37.219589 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Mar 10 18:47:37 crc kubenswrapper[4861]: I0310 18:47:37.219593 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 10 18:47:37 crc kubenswrapper[4861]: I0310 18:47:37.219630 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Mar 10 18:47:37 crc kubenswrapper[4861]: I0310 18:47:37.219637 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Mar 10 18:47:37 crc kubenswrapper[4861]: I0310 18:47:37.219660 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Mar 10 18:47:37 crc kubenswrapper[4861]: I0310 18:47:37.219688 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: 
\"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Mar 10 18:47:37 crc kubenswrapper[4861]: I0310 18:47:37.219767 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 10 18:47:37 crc kubenswrapper[4861]: I0310 18:47:37.219817 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 10 18:47:37 crc kubenswrapper[4861]: I0310 18:47:37.219897 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Mar 10 18:47:37 crc kubenswrapper[4861]: I0310 18:47:37.219943 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 10 18:47:37 crc kubenswrapper[4861]: I0310 18:47:37.219936 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " 
pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Mar 10 18:47:37 crc kubenswrapper[4861]: I0310 18:47:37.219984 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Mar 10 18:47:37 crc kubenswrapper[4861]: I0310 18:47:37.220039 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 10 18:47:37 crc kubenswrapper[4861]: I0310 18:47:37.220060 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Mar 10 18:47:37 crc kubenswrapper[4861]: I0310 18:47:37.220085 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Mar 10 18:47:37 crc kubenswrapper[4861]: I0310 18:47:37.349422 4861 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 10 18:47:37 crc kubenswrapper[4861]: I0310 18:47:37.351237 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:47:37 crc kubenswrapper[4861]: I0310 18:47:37.351305 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Mar 10 18:47:37 crc kubenswrapper[4861]: I0310 18:47:37.351326 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:47:37 crc kubenswrapper[4861]: I0310 18:47:37.351368 4861 kubelet_node_status.go:76] "Attempting to register node" node="crc" Mar 10 18:47:37 crc kubenswrapper[4861]: E0310 18:47:37.352124 4861 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.51:6443: connect: connection refused" node="crc" Mar 10 18:47:37 crc kubenswrapper[4861]: I0310 18:47:37.400402 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Mar 10 18:47:37 crc kubenswrapper[4861]: I0310 18:47:37.425389 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Mar 10 18:47:37 crc kubenswrapper[4861]: W0310 18:47:37.459669 4861 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dcd261975c3d6b9a6ad6367fd4facd3.slice/crio-d444b44c308bd418aea853db001b5c0745fb760b37108f2cc027b2856f110717 WatchSource:0}: Error finding container d444b44c308bd418aea853db001b5c0745fb760b37108f2cc027b2856f110717: Status 404 returned error can't find the container with id d444b44c308bd418aea853db001b5c0745fb760b37108f2cc027b2856f110717 Mar 10 18:47:37 crc kubenswrapper[4861]: I0310 18:47:37.462063 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-crc" Mar 10 18:47:37 crc kubenswrapper[4861]: W0310 18:47:37.465070 4861 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1b160f5dda77d281dd8e69ec8d817f9.slice/crio-eabafd4b262c63862ec928a5a2b6a72b871b0971da65998fca40de33309364b5 WatchSource:0}: Error finding container eabafd4b262c63862ec928a5a2b6a72b871b0971da65998fca40de33309364b5: Status 404 returned error can't find the container with id eabafd4b262c63862ec928a5a2b6a72b871b0971da65998fca40de33309364b5 Mar 10 18:47:37 crc kubenswrapper[4861]: W0310 18:47:37.481937 4861 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2139d3e2895fc6797b9c76a1b4c9886d.slice/crio-34543aad9be373586ff6def6e773f3d9d742e55e1b7f4d2810e9f7e0319ec5fb WatchSource:0}: Error finding container 34543aad9be373586ff6def6e773f3d9d742e55e1b7f4d2810e9f7e0319ec5fb: Status 404 returned error can't find the container with id 34543aad9be373586ff6def6e773f3d9d742e55e1b7f4d2810e9f7e0319ec5fb Mar 10 18:47:37 crc kubenswrapper[4861]: E0310 18:47:37.490219 4861 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.51:6443: connect: connection refused" interval="800ms" Mar 10 18:47:37 crc kubenswrapper[4861]: I0310 18:47:37.499230 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 10 18:47:37 crc kubenswrapper[4861]: I0310 18:47:37.512488 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 10 18:47:37 crc kubenswrapper[4861]: W0310 18:47:37.521748 4861 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-d508fb9fe235c039062c56ce79235a20fee3f2bf5801cab6ec1f5de708f8476f WatchSource:0}: Error finding container d508fb9fe235c039062c56ce79235a20fee3f2bf5801cab6ec1f5de708f8476f: Status 404 returned error can't find the container with id d508fb9fe235c039062c56ce79235a20fee3f2bf5801cab6ec1f5de708f8476f Mar 10 18:47:37 crc kubenswrapper[4861]: W0310 18:47:37.537909 4861 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf614b9022728cf315e60c057852e563e.slice/crio-2a1030238291c8a540ec80e4eeedd3b3613d5e73ce08a713f7faca2cdd28ccb5 WatchSource:0}: Error finding container 2a1030238291c8a540ec80e4eeedd3b3613d5e73ce08a713f7faca2cdd28ccb5: Status 404 returned error can't find the container with id 2a1030238291c8a540ec80e4eeedd3b3613d5e73ce08a713f7faca2cdd28ccb5 Mar 10 18:47:37 crc kubenswrapper[4861]: I0310 18:47:37.752579 4861 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 10 18:47:37 crc kubenswrapper[4861]: I0310 18:47:37.754992 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:47:37 crc kubenswrapper[4861]: I0310 18:47:37.755064 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:47:37 crc kubenswrapper[4861]: I0310 18:47:37.755090 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:47:37 crc kubenswrapper[4861]: I0310 18:47:37.755140 4861 kubelet_node_status.go:76] "Attempting to register node" node="crc" Mar 10 18:47:37 crc 
kubenswrapper[4861]: E0310 18:47:37.755774 4861 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.51:6443: connect: connection refused" node="crc" Mar 10 18:47:37 crc kubenswrapper[4861]: I0310 18:47:37.878217 4861 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.51:6443: connect: connection refused Mar 10 18:47:37 crc kubenswrapper[4861]: W0310 18:47:37.885243 4861 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.51:6443: connect: connection refused Mar 10 18:47:37 crc kubenswrapper[4861]: E0310 18:47:37.885379 4861 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.51:6443: connect: connection refused" logger="UnhandledError" Mar 10 18:47:37 crc kubenswrapper[4861]: I0310 18:47:37.964276 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"d444b44c308bd418aea853db001b5c0745fb760b37108f2cc027b2856f110717"} Mar 10 18:47:37 crc kubenswrapper[4861]: W0310 18:47:37.965080 4861 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.51:6443: connect: connection refused Mar 10 18:47:37 crc kubenswrapper[4861]: E0310 18:47:37.965184 4861 
reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.51:6443: connect: connection refused" logger="UnhandledError" Mar 10 18:47:37 crc kubenswrapper[4861]: I0310 18:47:37.967316 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"2a1030238291c8a540ec80e4eeedd3b3613d5e73ce08a713f7faca2cdd28ccb5"} Mar 10 18:47:37 crc kubenswrapper[4861]: I0310 18:47:37.968407 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"d508fb9fe235c039062c56ce79235a20fee3f2bf5801cab6ec1f5de708f8476f"} Mar 10 18:47:37 crc kubenswrapper[4861]: I0310 18:47:37.969638 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"34543aad9be373586ff6def6e773f3d9d742e55e1b7f4d2810e9f7e0319ec5fb"} Mar 10 18:47:37 crc kubenswrapper[4861]: I0310 18:47:37.970835 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"eabafd4b262c63862ec928a5a2b6a72b871b0971da65998fca40de33309364b5"} Mar 10 18:47:38 crc kubenswrapper[4861]: E0310 18:47:38.253829 4861 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.51:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.189b8f511efb107e default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-10 18:47:36.875315326 +0000 UTC m=+0.638751286,LastTimestamp:2026-03-10 18:47:36.875315326 +0000 UTC m=+0.638751286,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 10 18:47:38 crc kubenswrapper[4861]: E0310 18:47:38.292042 4861 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.51:6443: connect: connection refused" interval="1.6s" Mar 10 18:47:38 crc kubenswrapper[4861]: W0310 18:47:38.344162 4861 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.51:6443: connect: connection refused Mar 10 18:47:38 crc kubenswrapper[4861]: E0310 18:47:38.344286 4861 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.51:6443: connect: connection refused" logger="UnhandledError" Mar 10 18:47:38 crc kubenswrapper[4861]: W0310 18:47:38.523931 4861 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.51:6443: connect: connection refused Mar 10 18:47:38 crc kubenswrapper[4861]: E0310 18:47:38.524058 4861 reflector.go:158] "Unhandled Error" 
err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.51:6443: connect: connection refused" logger="UnhandledError" Mar 10 18:47:38 crc kubenswrapper[4861]: I0310 18:47:38.556519 4861 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 10 18:47:38 crc kubenswrapper[4861]: I0310 18:47:38.558603 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:47:38 crc kubenswrapper[4861]: I0310 18:47:38.558668 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:47:38 crc kubenswrapper[4861]: I0310 18:47:38.558687 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:47:38 crc kubenswrapper[4861]: I0310 18:47:38.558770 4861 kubelet_node_status.go:76] "Attempting to register node" node="crc" Mar 10 18:47:38 crc kubenswrapper[4861]: E0310 18:47:38.559317 4861 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.51:6443: connect: connection refused" node="crc" Mar 10 18:47:38 crc kubenswrapper[4861]: I0310 18:47:38.879086 4861 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.51:6443: connect: connection refused Mar 10 18:47:38 crc kubenswrapper[4861]: I0310 18:47:38.914634 4861 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Mar 10 18:47:38 crc kubenswrapper[4861]: E0310 18:47:38.916069 4861 certificate_manager.go:562] "Unhandled Error" 
err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.51:6443: connect: connection refused" logger="UnhandledError" Mar 10 18:47:38 crc kubenswrapper[4861]: I0310 18:47:38.976496 4861 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="3cb1fb5ba7aa6d8d7184d85c870898f3b51dd4910da9346b24b7d63f952467d2" exitCode=0 Mar 10 18:47:38 crc kubenswrapper[4861]: I0310 18:47:38.976561 4861 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 10 18:47:38 crc kubenswrapper[4861]: I0310 18:47:38.976582 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"3cb1fb5ba7aa6d8d7184d85c870898f3b51dd4910da9346b24b7d63f952467d2"} Mar 10 18:47:38 crc kubenswrapper[4861]: I0310 18:47:38.978053 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:47:38 crc kubenswrapper[4861]: I0310 18:47:38.978072 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:47:38 crc kubenswrapper[4861]: I0310 18:47:38.978080 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:47:38 crc kubenswrapper[4861]: I0310 18:47:38.982216 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"2b7156e2106372814a5e2b2816352ee308bb4fdffef5efe8da5bc39d3bc29398"} Mar 10 18:47:38 crc kubenswrapper[4861]: I0310 18:47:38.982296 4861 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"31bbed2d81ace88f31b763f3b4bed57db657bf9e78413b57b2f279b408e0b848"} Mar 10 18:47:38 crc kubenswrapper[4861]: I0310 18:47:38.982315 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"da2dbdc794693bb8da08c0fcc84531d65b468d135904b7055fb926fbd0cce95d"} Mar 10 18:47:38 crc kubenswrapper[4861]: I0310 18:47:38.982330 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"73d88019bcd40296d2d693dfb1ce3bacd2e94ca10a114a00c75392df04099b33"} Mar 10 18:47:38 crc kubenswrapper[4861]: I0310 18:47:38.982446 4861 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 10 18:47:38 crc kubenswrapper[4861]: I0310 18:47:38.984227 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:47:38 crc kubenswrapper[4861]: I0310 18:47:38.984256 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:47:38 crc kubenswrapper[4861]: I0310 18:47:38.984265 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:47:38 crc kubenswrapper[4861]: I0310 18:47:38.985474 4861 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="a14137bdfec242e37af20a572af2edea25fb1d8a1f9708f8d0193d1a7675b1bc" exitCode=0 Mar 10 18:47:38 crc kubenswrapper[4861]: I0310 18:47:38.985518 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"a14137bdfec242e37af20a572af2edea25fb1d8a1f9708f8d0193d1a7675b1bc"} Mar 10 18:47:38 crc kubenswrapper[4861]: I0310 18:47:38.985591 4861 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 10 18:47:38 crc kubenswrapper[4861]: I0310 18:47:38.986794 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:47:38 crc kubenswrapper[4861]: I0310 18:47:38.986844 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:47:38 crc kubenswrapper[4861]: I0310 18:47:38.986864 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:47:38 crc kubenswrapper[4861]: I0310 18:47:38.987946 4861 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="34f62205737b2bb279cc7a0e2ffee213392c8135cca742b76e6f7ed44ddca754" exitCode=0 Mar 10 18:47:38 crc kubenswrapper[4861]: I0310 18:47:38.987989 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"34f62205737b2bb279cc7a0e2ffee213392c8135cca742b76e6f7ed44ddca754"} Mar 10 18:47:38 crc kubenswrapper[4861]: I0310 18:47:38.988106 4861 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 10 18:47:38 crc kubenswrapper[4861]: I0310 18:47:38.990048 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:47:38 crc kubenswrapper[4861]: I0310 18:47:38.990071 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:47:38 crc kubenswrapper[4861]: I0310 18:47:38.990079 4861 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:47:38 crc kubenswrapper[4861]: I0310 18:47:38.991697 4861 generic.go:334] "Generic (PLEG): container finished" podID="d1b160f5dda77d281dd8e69ec8d817f9" containerID="a060129f36e562bcd541215d99123f79a1b0216c8c864b21b72c98fda5069040" exitCode=0 Mar 10 18:47:38 crc kubenswrapper[4861]: I0310 18:47:38.991763 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"a060129f36e562bcd541215d99123f79a1b0216c8c864b21b72c98fda5069040"} Mar 10 18:47:38 crc kubenswrapper[4861]: I0310 18:47:38.991812 4861 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 10 18:47:38 crc kubenswrapper[4861]: I0310 18:47:38.992396 4861 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 10 18:47:38 crc kubenswrapper[4861]: I0310 18:47:38.994022 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:47:38 crc kubenswrapper[4861]: I0310 18:47:38.994101 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:47:38 crc kubenswrapper[4861]: I0310 18:47:38.994162 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:47:38 crc kubenswrapper[4861]: I0310 18:47:38.994084 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:47:38 crc kubenswrapper[4861]: I0310 18:47:38.994340 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:47:38 crc kubenswrapper[4861]: I0310 18:47:38.994361 4861 kubelet_node_status.go:724] "Recording event message for 
node" node="crc" event="NodeHasSufficientPID" Mar 10 18:47:39 crc kubenswrapper[4861]: I0310 18:47:39.878420 4861 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.51:6443: connect: connection refused Mar 10 18:47:39 crc kubenswrapper[4861]: E0310 18:47:39.893403 4861 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.51:6443: connect: connection refused" interval="3.2s" Mar 10 18:47:40 crc kubenswrapper[4861]: I0310 18:47:40.000562 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"44dde00a3ae562bbb5504d299475795cc38b22c2b6decba2ff15067bce7436df"} Mar 10 18:47:40 crc kubenswrapper[4861]: I0310 18:47:40.000622 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"9a8a6f58ea1d180f50a7ffde2b16f470901281a847bd85cc0bc8a62bbf9f8e70"} Mar 10 18:47:40 crc kubenswrapper[4861]: I0310 18:47:40.000642 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"484df0ad2e71b2faec0ed53537512b115e6d4ab7cd3212cd29309538bd013c51"} Mar 10 18:47:40 crc kubenswrapper[4861]: I0310 18:47:40.000659 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"52e87225b434b0800764a5c2306d8079c44bff105d02de78ab085b434f56031f"} Mar 10 18:47:40 crc kubenswrapper[4861]: I0310 
18:47:40.004986 4861 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="49cc08d4ed6e0fd87f4c957414e510f99511b9654a72324b707c165206c4979e" exitCode=0 Mar 10 18:47:40 crc kubenswrapper[4861]: I0310 18:47:40.005065 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"49cc08d4ed6e0fd87f4c957414e510f99511b9654a72324b707c165206c4979e"} Mar 10 18:47:40 crc kubenswrapper[4861]: I0310 18:47:40.005213 4861 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 10 18:47:40 crc kubenswrapper[4861]: I0310 18:47:40.006551 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:47:40 crc kubenswrapper[4861]: I0310 18:47:40.006586 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:47:40 crc kubenswrapper[4861]: I0310 18:47:40.006602 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:47:40 crc kubenswrapper[4861]: I0310 18:47:40.010130 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"716832664d1be93a0faa28c9e6260cbda820fe51b3295faf5dc0a852b242d62d"} Mar 10 18:47:40 crc kubenswrapper[4861]: I0310 18:47:40.010245 4861 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 10 18:47:40 crc kubenswrapper[4861]: I0310 18:47:40.011230 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:47:40 crc kubenswrapper[4861]: I0310 18:47:40.011253 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Mar 10 18:47:40 crc kubenswrapper[4861]: I0310 18:47:40.011261 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:47:40 crc kubenswrapper[4861]: I0310 18:47:40.015248 4861 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 10 18:47:40 crc kubenswrapper[4861]: I0310 18:47:40.015599 4861 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 10 18:47:40 crc kubenswrapper[4861]: I0310 18:47:40.015878 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"02e4029a918a6ce3cba59cacb37169816c0ce5b734d46605756ede5985ccd686"} Mar 10 18:47:40 crc kubenswrapper[4861]: I0310 18:47:40.015979 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"cc83538f8ad05533765a1730072969d529579cd76ff33a77dc49b0bff15caad5"} Mar 10 18:47:40 crc kubenswrapper[4861]: I0310 18:47:40.015992 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"098109abecb73d2ad721a447bdfff2c9e9c2d24b969013d6108b0ac9d1d61e01"} Mar 10 18:47:40 crc kubenswrapper[4861]: I0310 18:47:40.020555 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:47:40 crc kubenswrapper[4861]: I0310 18:47:40.020597 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:47:40 crc kubenswrapper[4861]: I0310 18:47:40.020609 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Mar 10 18:47:40 crc kubenswrapper[4861]: I0310 18:47:40.021422 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:47:40 crc kubenswrapper[4861]: I0310 18:47:40.021438 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:47:40 crc kubenswrapper[4861]: I0310 18:47:40.021447 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:47:40 crc kubenswrapper[4861]: I0310 18:47:40.160030 4861 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 10 18:47:40 crc kubenswrapper[4861]: I0310 18:47:40.163000 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:47:40 crc kubenswrapper[4861]: I0310 18:47:40.163032 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:47:40 crc kubenswrapper[4861]: I0310 18:47:40.163045 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:47:40 crc kubenswrapper[4861]: I0310 18:47:40.163087 4861 kubelet_node_status.go:76] "Attempting to register node" node="crc" Mar 10 18:47:40 crc kubenswrapper[4861]: E0310 18:47:40.163486 4861 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.51:6443: connect: connection refused" node="crc" Mar 10 18:47:40 crc kubenswrapper[4861]: W0310 18:47:40.512845 4861 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.51:6443: connect: connection refused Mar 10 18:47:40 crc kubenswrapper[4861]: 
E0310 18:47:40.513268 4861 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.51:6443: connect: connection refused" logger="UnhandledError" Mar 10 18:47:40 crc kubenswrapper[4861]: I0310 18:47:40.676838 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 10 18:47:41 crc kubenswrapper[4861]: I0310 18:47:41.021174 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"c749b5fb05abf706ab18e1287ab7c0564963342a458eb0fc2d4b426ff6e23b28"} Mar 10 18:47:41 crc kubenswrapper[4861]: I0310 18:47:41.021397 4861 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 10 18:47:41 crc kubenswrapper[4861]: I0310 18:47:41.022427 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:47:41 crc kubenswrapper[4861]: I0310 18:47:41.022475 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:47:41 crc kubenswrapper[4861]: I0310 18:47:41.022491 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:47:41 crc kubenswrapper[4861]: I0310 18:47:41.026040 4861 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="0995183765c106ba8369d3c05ac572596329eb1978876b770e327a430963ba9b" exitCode=0 Mar 10 18:47:41 crc kubenswrapper[4861]: I0310 18:47:41.026191 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" 
event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"0995183765c106ba8369d3c05ac572596329eb1978876b770e327a430963ba9b"} Mar 10 18:47:41 crc kubenswrapper[4861]: I0310 18:47:41.026270 4861 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 10 18:47:41 crc kubenswrapper[4861]: I0310 18:47:41.026339 4861 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 10 18:47:41 crc kubenswrapper[4861]: I0310 18:47:41.026508 4861 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 10 18:47:41 crc kubenswrapper[4861]: I0310 18:47:41.026620 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Mar 10 18:47:41 crc kubenswrapper[4861]: I0310 18:47:41.027730 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:47:41 crc kubenswrapper[4861]: I0310 18:47:41.027767 4861 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 10 18:47:41 crc kubenswrapper[4861]: I0310 18:47:41.027777 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:47:41 crc kubenswrapper[4861]: I0310 18:47:41.027883 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:47:41 crc kubenswrapper[4861]: I0310 18:47:41.028737 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:47:41 crc kubenswrapper[4861]: I0310 18:47:41.028790 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:47:41 crc kubenswrapper[4861]: I0310 18:47:41.028813 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Mar 10 18:47:41 crc kubenswrapper[4861]: I0310 18:47:41.028842 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:47:41 crc kubenswrapper[4861]: I0310 18:47:41.028867 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:47:41 crc kubenswrapper[4861]: I0310 18:47:41.028884 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:47:41 crc kubenswrapper[4861]: I0310 18:47:41.028997 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:47:41 crc kubenswrapper[4861]: I0310 18:47:41.029014 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:47:41 crc kubenswrapper[4861]: I0310 18:47:41.029029 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:47:41 crc kubenswrapper[4861]: I0310 18:47:41.597130 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 10 18:47:41 crc kubenswrapper[4861]: I0310 18:47:41.934781 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 10 18:47:42 crc kubenswrapper[4861]: I0310 18:47:42.034064 4861 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 10 18:47:42 crc kubenswrapper[4861]: I0310 18:47:42.034105 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"fd7523ddf0ac38e3b59b767d7ef95f452f24c5914734d6f1ac0187f1bede5fcf"} Mar 10 18:47:42 crc kubenswrapper[4861]: I0310 18:47:42.034166 4861 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"bf6fe1422055e59455ca71c1a22213f55312c80667526cf13dbf61f7ccea7c75"} Mar 10 18:47:42 crc kubenswrapper[4861]: I0310 18:47:42.034193 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"a6234508294e7f87d3a8da0d3a1d8ecb96fffc3dbd5974e40a3bb7ceeee0d2a9"} Mar 10 18:47:42 crc kubenswrapper[4861]: I0310 18:47:42.034220 4861 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 10 18:47:42 crc kubenswrapper[4861]: I0310 18:47:42.035588 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:47:42 crc kubenswrapper[4861]: I0310 18:47:42.035643 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:47:42 crc kubenswrapper[4861]: I0310 18:47:42.035670 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:47:42 crc kubenswrapper[4861]: I0310 18:47:42.035600 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:47:42 crc kubenswrapper[4861]: I0310 18:47:42.035776 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:47:42 crc kubenswrapper[4861]: I0310 18:47:42.035843 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:47:43 crc kubenswrapper[4861]: I0310 18:47:43.042786 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"efd5039046658f4b595c747b6434cb0ef3befbf75874a6dab629cdbd6634524e"} Mar 10 18:47:43 crc 
kubenswrapper[4861]: I0310 18:47:43.042848 4861 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 10 18:47:43 crc kubenswrapper[4861]: I0310 18:47:43.042860 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"6b1d8f9a97293ae86fe0b8c9ab76600caf04291ec3ddd2e47a071805bef675da"} Mar 10 18:47:43 crc kubenswrapper[4861]: I0310 18:47:43.042901 4861 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 10 18:47:43 crc kubenswrapper[4861]: I0310 18:47:43.044674 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:47:43 crc kubenswrapper[4861]: I0310 18:47:43.044740 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:47:43 crc kubenswrapper[4861]: I0310 18:47:43.044759 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:47:43 crc kubenswrapper[4861]: I0310 18:47:43.044811 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:47:43 crc kubenswrapper[4861]: I0310 18:47:43.044845 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:47:43 crc kubenswrapper[4861]: I0310 18:47:43.044862 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:47:43 crc kubenswrapper[4861]: I0310 18:47:43.203293 4861 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Mar 10 18:47:43 crc kubenswrapper[4861]: I0310 18:47:43.364101 4861 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 10 18:47:43 crc 
kubenswrapper[4861]: I0310 18:47:43.365753 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:47:43 crc kubenswrapper[4861]: I0310 18:47:43.365824 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:47:43 crc kubenswrapper[4861]: I0310 18:47:43.365849 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:47:43 crc kubenswrapper[4861]: I0310 18:47:43.365893 4861 kubelet_node_status.go:76] "Attempting to register node" node="crc" Mar 10 18:47:43 crc kubenswrapper[4861]: I0310 18:47:43.677248 4861 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 10 18:47:43 crc kubenswrapper[4861]: I0310 18:47:43.677362 4861 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 10 18:47:44 crc kubenswrapper[4861]: I0310 18:47:44.034532 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 10 18:47:44 crc kubenswrapper[4861]: I0310 18:47:44.045846 4861 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 10 18:47:44 crc kubenswrapper[4861]: I0310 18:47:44.046802 4861 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 10 18:47:44 crc kubenswrapper[4861]: 
I0310 18:47:44.047308 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:47:44 crc kubenswrapper[4861]: I0310 18:47:44.047374 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:47:44 crc kubenswrapper[4861]: I0310 18:47:44.047396 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:47:44 crc kubenswrapper[4861]: I0310 18:47:44.048527 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:47:44 crc kubenswrapper[4861]: I0310 18:47:44.048581 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:47:44 crc kubenswrapper[4861]: I0310 18:47:44.048599 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:47:44 crc kubenswrapper[4861]: I0310 18:47:44.551309 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc" Mar 10 18:47:44 crc kubenswrapper[4861]: I0310 18:47:44.776116 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 10 18:47:44 crc kubenswrapper[4861]: I0310 18:47:44.776375 4861 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 10 18:47:44 crc kubenswrapper[4861]: I0310 18:47:44.777361 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 10 18:47:44 crc kubenswrapper[4861]: I0310 18:47:44.778042 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:47:44 crc kubenswrapper[4861]: I0310 18:47:44.778083 4861 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:47:44 crc kubenswrapper[4861]: I0310 18:47:44.778094 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:47:44 crc kubenswrapper[4861]: I0310 18:47:44.784394 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 10 18:47:45 crc kubenswrapper[4861]: I0310 18:47:45.049262 4861 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 10 18:47:45 crc kubenswrapper[4861]: I0310 18:47:45.049386 4861 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 10 18:47:45 crc kubenswrapper[4861]: I0310 18:47:45.051008 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:47:45 crc kubenswrapper[4861]: I0310 18:47:45.051072 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:47:45 crc kubenswrapper[4861]: I0310 18:47:45.051091 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:47:45 crc kubenswrapper[4861]: I0310 18:47:45.051238 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:47:45 crc kubenswrapper[4861]: I0310 18:47:45.051279 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:47:45 crc kubenswrapper[4861]: I0310 18:47:45.051298 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:47:45 crc kubenswrapper[4861]: I0310 18:47:45.726919 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 10 18:47:46 crc kubenswrapper[4861]: I0310 18:47:46.052003 4861 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 10 18:47:46 crc kubenswrapper[4861]: I0310 18:47:46.053516 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:47:46 crc kubenswrapper[4861]: I0310 18:47:46.053580 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:47:46 crc kubenswrapper[4861]: I0310 18:47:46.053600 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:47:47 crc kubenswrapper[4861]: E0310 18:47:47.052760 4861 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Mar 10 18:47:47 crc kubenswrapper[4861]: I0310 18:47:47.054055 4861 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 10 18:47:47 crc kubenswrapper[4861]: I0310 18:47:47.055347 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:47:47 crc kubenswrapper[4861]: I0310 18:47:47.055409 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:47:47 crc kubenswrapper[4861]: I0310 18:47:47.055430 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:47:47 crc kubenswrapper[4861]: I0310 18:47:47.636535 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc" Mar 10 18:47:47 crc kubenswrapper[4861]: I0310 18:47:47.636843 4861 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 10 18:47:47 crc kubenswrapper[4861]: 
I0310 18:47:47.638601 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:47:47 crc kubenswrapper[4861]: I0310 18:47:47.638664 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:47:47 crc kubenswrapper[4861]: I0310 18:47:47.638684 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:47:50 crc kubenswrapper[4861]: I0310 18:47:50.879529 4861 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": net/http: TLS handshake timeout Mar 10 18:47:50 crc kubenswrapper[4861]: W0310 18:47:50.904231 4861 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": net/http: TLS handshake timeout Mar 10 18:47:50 crc kubenswrapper[4861]: I0310 18:47:50.904382 4861 trace.go:236] Trace[2114983490]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (10-Mar-2026 18:47:40.903) (total time: 10001ms): Mar 10 18:47:50 crc kubenswrapper[4861]: Trace[2114983490]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (18:47:50.904) Mar 10 18:47:50 crc kubenswrapper[4861]: Trace[2114983490]: [10.001307705s] [10.001307705s] END Mar 10 18:47:50 crc kubenswrapper[4861]: E0310 18:47:50.904422 4861 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" Mar 10 18:47:51 
crc kubenswrapper[4861]: W0310 18:47:51.304223 4861 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": net/http: TLS handshake timeout Mar 10 18:47:51 crc kubenswrapper[4861]: I0310 18:47:51.304359 4861 trace.go:236] Trace[1443753642]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (10-Mar-2026 18:47:41.302) (total time: 10002ms): Mar 10 18:47:51 crc kubenswrapper[4861]: Trace[1443753642]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": net/http: TLS handshake timeout 10002ms (18:47:51.304) Mar 10 18:47:51 crc kubenswrapper[4861]: Trace[1443753642]: [10.002119094s] [10.002119094s] END Mar 10 18:47:51 crc kubenswrapper[4861]: E0310 18:47:51.304391 4861 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" Mar 10 18:47:51 crc kubenswrapper[4861]: I0310 18:47:51.343454 4861 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Liveness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body= Mar 10 18:47:51 crc kubenswrapper[4861]: I0310 18:47:51.343539 4861 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: 
connect: connection refused" Mar 10 18:47:51 crc kubenswrapper[4861]: W0310 18:47:51.371185 4861 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": net/http: TLS handshake timeout Mar 10 18:47:51 crc kubenswrapper[4861]: I0310 18:47:51.371301 4861 trace.go:236] Trace[1926846748]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (10-Mar-2026 18:47:41.370) (total time: 10001ms): Mar 10 18:47:51 crc kubenswrapper[4861]: Trace[1926846748]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (18:47:51.371) Mar 10 18:47:51 crc kubenswrapper[4861]: Trace[1926846748]: [10.001189122s] [10.001189122s] END Mar 10 18:47:51 crc kubenswrapper[4861]: E0310 18:47:51.371333 4861 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" Mar 10 18:47:51 crc kubenswrapper[4861]: I0310 18:47:51.598068 4861 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body= Mar 10 18:47:51 crc kubenswrapper[4861]: I0310 18:47:51.598148 4861 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: 
connect: connection refused" Mar 10 18:47:51 crc kubenswrapper[4861]: E0310 18:47:51.684772 4861 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:47:51Z is after 2026-02-23T05:33:13Z" node="crc" Mar 10 18:47:51 crc kubenswrapper[4861]: W0310 18:47:51.686814 4861 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:47:51Z is after 2026-02-23T05:33:13Z Mar 10 18:47:51 crc kubenswrapper[4861]: E0310 18:47:51.686909 4861 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:47:51Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Mar 10 18:47:51 crc kubenswrapper[4861]: I0310 18:47:51.693065 4861 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Mar 10 18:47:51 crc kubenswrapper[4861]: I0310 18:47:51.693126 4861 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" 
containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Mar 10 18:47:51 crc kubenswrapper[4861]: E0310 18:47:51.696930 4861 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:47:51Z is after 2026-02-23T05:33:13Z" event="&Event{ObjectMeta:{crc.189b8f511efb107e default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-10 18:47:36.875315326 +0000 UTC m=+0.638751286,LastTimestamp:2026-03-10 18:47:36.875315326 +0000 UTC m=+0.638751286,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 10 18:47:51 crc kubenswrapper[4861]: E0310 18:47:51.704974 4861 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:47:51Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Mar 10 18:47:51 crc kubenswrapper[4861]: I0310 18:47:51.710152 4861 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" 
cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Mar 10 18:47:51 crc kubenswrapper[4861]: I0310 18:47:51.710215 4861 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Mar 10 18:47:51 crc kubenswrapper[4861]: E0310 18:47:51.730774 4861 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:47:51Z is after 2026-02-23T05:33:13Z" interval="6.4s" Mar 10 18:47:51 crc kubenswrapper[4861]: I0310 18:47:51.883498 4861 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:47:51Z is after 2026-02-23T05:33:13Z Mar 10 18:47:52 crc kubenswrapper[4861]: I0310 18:47:52.068621 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Mar 10 18:47:52 crc kubenswrapper[4861]: I0310 18:47:52.070470 4861 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="c749b5fb05abf706ab18e1287ab7c0564963342a458eb0fc2d4b426ff6e23b28" exitCode=255 Mar 10 18:47:52 crc kubenswrapper[4861]: I0310 18:47:52.070527 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"c749b5fb05abf706ab18e1287ab7c0564963342a458eb0fc2d4b426ff6e23b28"} Mar 10 18:47:52 crc kubenswrapper[4861]: I0310 18:47:52.070735 4861 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 10 18:47:52 crc kubenswrapper[4861]: I0310 18:47:52.071762 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:47:52 crc kubenswrapper[4861]: I0310 18:47:52.071800 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:47:52 crc kubenswrapper[4861]: I0310 18:47:52.071816 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:47:52 crc kubenswrapper[4861]: I0310 18:47:52.072376 4861 scope.go:117] "RemoveContainer" containerID="c749b5fb05abf706ab18e1287ab7c0564963342a458eb0fc2d4b426ff6e23b28" Mar 10 18:47:52 crc kubenswrapper[4861]: I0310 18:47:52.880909 4861 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:47:52Z is after 2026-02-23T05:33:13Z Mar 10 18:47:53 crc kubenswrapper[4861]: I0310 18:47:53.075377 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Mar 10 18:47:53 crc kubenswrapper[4861]: I0310 18:47:53.078704 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"dc83a9630dbcfd93a7dddaeeda2327ba4bea2d71b74452563b1d17340b2969b3"} Mar 10 18:47:53 crc 
kubenswrapper[4861]: I0310 18:47:53.078937 4861 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 10 18:47:53 crc kubenswrapper[4861]: I0310 18:47:53.080057 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:47:53 crc kubenswrapper[4861]: I0310 18:47:53.080112 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:47:53 crc kubenswrapper[4861]: I0310 18:47:53.080132 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:47:53 crc kubenswrapper[4861]: I0310 18:47:53.677634 4861 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 10 18:47:53 crc kubenswrapper[4861]: I0310 18:47:53.677818 4861 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 10 18:47:53 crc kubenswrapper[4861]: I0310 18:47:53.884034 4861 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:47:53Z is after 2026-02-23T05:33:13Z Mar 10 18:47:54 crc kubenswrapper[4861]: I0310 18:47:54.042408 4861 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 10 18:47:54 crc kubenswrapper[4861]: I0310 18:47:54.088819 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Mar 10 18:47:54 crc kubenswrapper[4861]: I0310 18:47:54.090630 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Mar 10 18:47:54 crc kubenswrapper[4861]: I0310 18:47:54.093577 4861 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="dc83a9630dbcfd93a7dddaeeda2327ba4bea2d71b74452563b1d17340b2969b3" exitCode=255 Mar 10 18:47:54 crc kubenswrapper[4861]: I0310 18:47:54.093666 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"dc83a9630dbcfd93a7dddaeeda2327ba4bea2d71b74452563b1d17340b2969b3"} Mar 10 18:47:54 crc kubenswrapper[4861]: I0310 18:47:54.093791 4861 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 10 18:47:54 crc kubenswrapper[4861]: I0310 18:47:54.093790 4861 scope.go:117] "RemoveContainer" containerID="c749b5fb05abf706ab18e1287ab7c0564963342a458eb0fc2d4b426ff6e23b28" Mar 10 18:47:54 crc kubenswrapper[4861]: I0310 18:47:54.095334 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:47:54 crc kubenswrapper[4861]: I0310 18:47:54.095420 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:47:54 crc kubenswrapper[4861]: I0310 18:47:54.095445 4861 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Mar 10 18:47:54 crc kubenswrapper[4861]: I0310 18:47:54.097330 4861 scope.go:117] "RemoveContainer" containerID="dc83a9630dbcfd93a7dddaeeda2327ba4bea2d71b74452563b1d17340b2969b3" Mar 10 18:47:54 crc kubenswrapper[4861]: E0310 18:47:54.097764 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Mar 10 18:47:54 crc kubenswrapper[4861]: I0310 18:47:54.102833 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 10 18:47:54 crc kubenswrapper[4861]: I0310 18:47:54.787068 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 10 18:47:54 crc kubenswrapper[4861]: I0310 18:47:54.787685 4861 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 10 18:47:54 crc kubenswrapper[4861]: I0310 18:47:54.789481 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:47:54 crc kubenswrapper[4861]: I0310 18:47:54.789667 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:47:54 crc kubenswrapper[4861]: I0310 18:47:54.789841 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:47:54 crc kubenswrapper[4861]: W0310 18:47:54.880068 4861 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get 
"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:47:54Z is after 2026-02-23T05:33:13Z Mar 10 18:47:54 crc kubenswrapper[4861]: E0310 18:47:54.880216 4861 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:47:54Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Mar 10 18:47:54 crc kubenswrapper[4861]: I0310 18:47:54.883552 4861 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:47:54Z is after 2026-02-23T05:33:13Z Mar 10 18:47:54 crc kubenswrapper[4861]: W0310 18:47:54.905305 4861 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:47:54Z is after 2026-02-23T05:33:13Z Mar 10 18:47:54 crc kubenswrapper[4861]: E0310 18:47:54.905553 4861 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:47:54Z is 
after 2026-02-23T05:33:13Z" logger="UnhandledError" Mar 10 18:47:55 crc kubenswrapper[4861]: I0310 18:47:55.100871 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Mar 10 18:47:55 crc kubenswrapper[4861]: I0310 18:47:55.105469 4861 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 10 18:47:55 crc kubenswrapper[4861]: I0310 18:47:55.106974 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:47:55 crc kubenswrapper[4861]: I0310 18:47:55.107068 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:47:55 crc kubenswrapper[4861]: I0310 18:47:55.107088 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:47:55 crc kubenswrapper[4861]: I0310 18:47:55.108068 4861 scope.go:117] "RemoveContainer" containerID="dc83a9630dbcfd93a7dddaeeda2327ba4bea2d71b74452563b1d17340b2969b3" Mar 10 18:47:55 crc kubenswrapper[4861]: E0310 18:47:55.108357 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Mar 10 18:47:55 crc kubenswrapper[4861]: I0310 18:47:55.883169 4861 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:47:55Z is after 
2026-02-23T05:33:13Z Mar 10 18:47:56 crc kubenswrapper[4861]: I0310 18:47:56.107756 4861 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 10 18:47:56 crc kubenswrapper[4861]: I0310 18:47:56.108882 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:47:56 crc kubenswrapper[4861]: I0310 18:47:56.108923 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:47:56 crc kubenswrapper[4861]: I0310 18:47:56.108939 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:47:56 crc kubenswrapper[4861]: I0310 18:47:56.109805 4861 scope.go:117] "RemoveContainer" containerID="dc83a9630dbcfd93a7dddaeeda2327ba4bea2d71b74452563b1d17340b2969b3" Mar 10 18:47:56 crc kubenswrapper[4861]: E0310 18:47:56.110090 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Mar 10 18:47:56 crc kubenswrapper[4861]: I0310 18:47:56.884409 4861 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:47:56Z is after 2026-02-23T05:33:13Z Mar 10 18:47:57 crc kubenswrapper[4861]: E0310 18:47:57.052897 4861 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Mar 10 18:47:57 crc kubenswrapper[4861]: W0310 
18:47:57.064555 4861 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:47:57Z is after 2026-02-23T05:33:13Z Mar 10 18:47:57 crc kubenswrapper[4861]: E0310 18:47:57.064653 4861 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:47:57Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Mar 10 18:47:57 crc kubenswrapper[4861]: I0310 18:47:57.674643 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc" Mar 10 18:47:57 crc kubenswrapper[4861]: I0310 18:47:57.675850 4861 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 10 18:47:57 crc kubenswrapper[4861]: I0310 18:47:57.678106 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:47:57 crc kubenswrapper[4861]: I0310 18:47:57.678404 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:47:57 crc kubenswrapper[4861]: I0310 18:47:57.678673 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:47:57 crc kubenswrapper[4861]: I0310 18:47:57.691352 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc" Mar 10 18:47:57 crc kubenswrapper[4861]: I0310 18:47:57.882499 4861 csi_plugin.go:884] 
Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:47:57Z is after 2026-02-23T05:33:13Z Mar 10 18:47:58 crc kubenswrapper[4861]: I0310 18:47:58.085525 4861 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 10 18:47:58 crc kubenswrapper[4861]: I0310 18:47:58.087229 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:47:58 crc kubenswrapper[4861]: I0310 18:47:58.087295 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:47:58 crc kubenswrapper[4861]: I0310 18:47:58.087314 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:47:58 crc kubenswrapper[4861]: I0310 18:47:58.087354 4861 kubelet_node_status.go:76] "Attempting to register node" node="crc" Mar 10 18:47:58 crc kubenswrapper[4861]: E0310 18:47:58.092337 4861 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:47:58Z is after 2026-02-23T05:33:13Z" node="crc" Mar 10 18:47:58 crc kubenswrapper[4861]: I0310 18:47:58.113655 4861 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 10 18:47:58 crc kubenswrapper[4861]: I0310 18:47:58.115323 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:47:58 crc kubenswrapper[4861]: I0310 18:47:58.115371 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" 
Mar 10 18:47:58 crc kubenswrapper[4861]: I0310 18:47:58.115388 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:47:58 crc kubenswrapper[4861]: E0310 18:47:58.134643 4861 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:47:58Z is after 2026-02-23T05:33:13Z" interval="7s" Mar 10 18:47:58 crc kubenswrapper[4861]: I0310 18:47:58.883742 4861 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:47:58Z is after 2026-02-23T05:33:13Z Mar 10 18:47:59 crc kubenswrapper[4861]: W0310 18:47:59.469937 4861 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:47:59Z is after 2026-02-23T05:33:13Z Mar 10 18:47:59 crc kubenswrapper[4861]: E0310 18:47:59.470063 4861 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:47:59Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Mar 10 18:47:59 crc kubenswrapper[4861]: I0310 18:47:59.882846 4861 csi_plugin.go:884] Failed to 
contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:47:59Z is after 2026-02-23T05:33:13Z Mar 10 18:48:00 crc kubenswrapper[4861]: I0310 18:48:00.446603 4861 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Mar 10 18:48:00 crc kubenswrapper[4861]: E0310 18:48:00.452344 4861 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:48:00Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Mar 10 18:48:00 crc kubenswrapper[4861]: I0310 18:48:00.883623 4861 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:48:00Z is after 2026-02-23T05:33:13Z Mar 10 18:48:01 crc kubenswrapper[4861]: I0310 18:48:01.342194 4861 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 10 18:48:01 crc kubenswrapper[4861]: I0310 18:48:01.342680 4861 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 10 18:48:01 crc kubenswrapper[4861]: I0310 18:48:01.344329 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:48:01 crc kubenswrapper[4861]: I0310 18:48:01.344426 
4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:48:01 crc kubenswrapper[4861]: I0310 18:48:01.344455 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:48:01 crc kubenswrapper[4861]: I0310 18:48:01.345392 4861 scope.go:117] "RemoveContainer" containerID="dc83a9630dbcfd93a7dddaeeda2327ba4bea2d71b74452563b1d17340b2969b3" Mar 10 18:48:01 crc kubenswrapper[4861]: E0310 18:48:01.345757 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Mar 10 18:48:01 crc kubenswrapper[4861]: I0310 18:48:01.597487 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 10 18:48:01 crc kubenswrapper[4861]: E0310 18:48:01.703128 4861 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:48:01Z is after 2026-02-23T05:33:13Z" event="&Event{ObjectMeta:{crc.189b8f511efb107e default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-10 18:47:36.875315326 +0000 UTC m=+0.638751286,LastTimestamp:2026-03-10 18:47:36.875315326 +0000 UTC m=+0.638751286,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 10 18:48:01 crc kubenswrapper[4861]: I0310 18:48:01.882825 4861 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:48:01Z is after 2026-02-23T05:33:13Z Mar 10 18:48:02 crc kubenswrapper[4861]: I0310 18:48:02.124831 4861 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 10 18:48:02 crc kubenswrapper[4861]: I0310 18:48:02.126440 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:48:02 crc kubenswrapper[4861]: I0310 18:48:02.126502 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:48:02 crc kubenswrapper[4861]: I0310 18:48:02.126524 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:48:02 crc kubenswrapper[4861]: I0310 18:48:02.127664 4861 scope.go:117] "RemoveContainer" containerID="dc83a9630dbcfd93a7dddaeeda2327ba4bea2d71b74452563b1d17340b2969b3" Mar 10 18:48:02 crc kubenswrapper[4861]: E0310 18:48:02.128053 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Mar 10 18:48:02 crc kubenswrapper[4861]: W0310 18:48:02.167812 4861 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get 
"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:48:02Z is after 2026-02-23T05:33:13Z Mar 10 18:48:02 crc kubenswrapper[4861]: E0310 18:48:02.167908 4861 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:48:02Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Mar 10 18:48:02 crc kubenswrapper[4861]: I0310 18:48:02.883145 4861 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:48:02Z is after 2026-02-23T05:33:13Z Mar 10 18:48:03 crc kubenswrapper[4861]: W0310 18:48:03.676195 4861 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:48:03Z is after 2026-02-23T05:33:13Z Mar 10 18:48:03 crc kubenswrapper[4861]: E0310 18:48:03.676518 4861 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:48:03Z is 
after 2026-02-23T05:33:13Z" logger="UnhandledError" Mar 10 18:48:03 crc kubenswrapper[4861]: I0310 18:48:03.678394 4861 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 10 18:48:03 crc kubenswrapper[4861]: I0310 18:48:03.678539 4861 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 10 18:48:03 crc kubenswrapper[4861]: I0310 18:48:03.678679 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 10 18:48:03 crc kubenswrapper[4861]: I0310 18:48:03.678958 4861 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 10 18:48:03 crc kubenswrapper[4861]: I0310 18:48:03.680446 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:48:03 crc kubenswrapper[4861]: I0310 18:48:03.680505 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:48:03 crc kubenswrapper[4861]: I0310 18:48:03.680525 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:48:03 crc kubenswrapper[4861]: I0310 18:48:03.681299 4861 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="cluster-policy-controller" 
containerStatusID={"Type":"cri-o","ID":"da2dbdc794693bb8da08c0fcc84531d65b468d135904b7055fb926fbd0cce95d"} pod="openshift-kube-controller-manager/kube-controller-manager-crc" containerMessage="Container cluster-policy-controller failed startup probe, will be restarted" Mar 10 18:48:03 crc kubenswrapper[4861]: I0310 18:48:03.681571 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" containerID="cri-o://da2dbdc794693bb8da08c0fcc84531d65b468d135904b7055fb926fbd0cce95d" gracePeriod=30 Mar 10 18:48:03 crc kubenswrapper[4861]: I0310 18:48:03.883450 4861 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:48:03Z is after 2026-02-23T05:33:13Z Mar 10 18:48:04 crc kubenswrapper[4861]: I0310 18:48:04.135145 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/cluster-policy-controller/0.log" Mar 10 18:48:04 crc kubenswrapper[4861]: I0310 18:48:04.135840 4861 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="da2dbdc794693bb8da08c0fcc84531d65b468d135904b7055fb926fbd0cce95d" exitCode=255 Mar 10 18:48:04 crc kubenswrapper[4861]: I0310 18:48:04.135932 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"da2dbdc794693bb8da08c0fcc84531d65b468d135904b7055fb926fbd0cce95d"} Mar 10 18:48:04 crc kubenswrapper[4861]: I0310 18:48:04.136009 4861 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"df0d545a88d84a8015bafb1061f16f446f63c97065dc88c869a33a24b8e3c22e"} Mar 10 18:48:04 crc kubenswrapper[4861]: I0310 18:48:04.136184 4861 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 10 18:48:04 crc kubenswrapper[4861]: I0310 18:48:04.137645 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:48:04 crc kubenswrapper[4861]: I0310 18:48:04.137797 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:48:04 crc kubenswrapper[4861]: I0310 18:48:04.137827 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:48:04 crc kubenswrapper[4861]: I0310 18:48:04.882451 4861 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:48:04Z is after 2026-02-23T05:33:13Z Mar 10 18:48:05 crc kubenswrapper[4861]: I0310 18:48:05.092920 4861 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 10 18:48:05 crc kubenswrapper[4861]: I0310 18:48:05.094604 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:48:05 crc kubenswrapper[4861]: I0310 18:48:05.094660 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:48:05 crc kubenswrapper[4861]: I0310 18:48:05.094678 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" 
Mar 10 18:48:05 crc kubenswrapper[4861]: I0310 18:48:05.094754 4861 kubelet_node_status.go:76] "Attempting to register node" node="crc" Mar 10 18:48:05 crc kubenswrapper[4861]: E0310 18:48:05.099522 4861 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:48:05Z is after 2026-02-23T05:33:13Z" node="crc" Mar 10 18:48:05 crc kubenswrapper[4861]: E0310 18:48:05.140678 4861 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:48:05Z is after 2026-02-23T05:33:13Z" interval="7s" Mar 10 18:48:05 crc kubenswrapper[4861]: I0310 18:48:05.727012 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 10 18:48:05 crc kubenswrapper[4861]: I0310 18:48:05.727228 4861 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 10 18:48:05 crc kubenswrapper[4861]: I0310 18:48:05.728657 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:48:05 crc kubenswrapper[4861]: I0310 18:48:05.728736 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:48:05 crc kubenswrapper[4861]: I0310 18:48:05.728754 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:48:05 crc kubenswrapper[4861]: I0310 18:48:05.883219 4861 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get 
"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:48:05Z is after 2026-02-23T05:33:13Z Mar 10 18:48:06 crc kubenswrapper[4861]: I0310 18:48:06.882796 4861 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:48:06Z is after 2026-02-23T05:33:13Z Mar 10 18:48:07 crc kubenswrapper[4861]: E0310 18:48:07.053150 4861 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Mar 10 18:48:07 crc kubenswrapper[4861]: W0310 18:48:07.265136 4861 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:48:07Z is after 2026-02-23T05:33:13Z Mar 10 18:48:07 crc kubenswrapper[4861]: E0310 18:48:07.265255 4861 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:48:07Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Mar 10 18:48:07 crc kubenswrapper[4861]: I0310 18:48:07.883154 4861 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get 
"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:48:07Z is after 2026-02-23T05:33:13Z Mar 10 18:48:08 crc kubenswrapper[4861]: I0310 18:48:08.882450 4861 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:48:08Z is after 2026-02-23T05:33:13Z Mar 10 18:48:09 crc kubenswrapper[4861]: I0310 18:48:09.883265 4861 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:48:09Z is after 2026-02-23T05:33:13Z Mar 10 18:48:10 crc kubenswrapper[4861]: I0310 18:48:10.677521 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 10 18:48:10 crc kubenswrapper[4861]: I0310 18:48:10.678019 4861 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 10 18:48:10 crc kubenswrapper[4861]: I0310 18:48:10.679693 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:48:10 crc kubenswrapper[4861]: I0310 18:48:10.679776 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:48:10 crc kubenswrapper[4861]: I0310 18:48:10.679795 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:48:10 crc kubenswrapper[4861]: I0310 18:48:10.882487 4861 
csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:48:10Z is after 2026-02-23T05:33:13Z Mar 10 18:48:11 crc kubenswrapper[4861]: E0310 18:48:11.709458 4861 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:48:11Z is after 2026-02-23T05:33:13Z" event="&Event{ObjectMeta:{crc.189b8f511efb107e default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-10 18:47:36.875315326 +0000 UTC m=+0.638751286,LastTimestamp:2026-03-10 18:47:36.875315326 +0000 UTC m=+0.638751286,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 10 18:48:11 crc kubenswrapper[4861]: I0310 18:48:11.882390 4861 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:48:11Z is after 2026-02-23T05:33:13Z Mar 10 18:48:12 crc kubenswrapper[4861]: I0310 18:48:12.100226 4861 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 10 18:48:12 crc kubenswrapper[4861]: I0310 18:48:12.102000 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Mar 10 18:48:12 crc kubenswrapper[4861]: I0310 18:48:12.102221 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:48:12 crc kubenswrapper[4861]: I0310 18:48:12.102421 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:48:12 crc kubenswrapper[4861]: I0310 18:48:12.102626 4861 kubelet_node_status.go:76] "Attempting to register node" node="crc" Mar 10 18:48:12 crc kubenswrapper[4861]: E0310 18:48:12.107548 4861 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:48:12Z is after 2026-02-23T05:33:13Z" node="crc" Mar 10 18:48:12 crc kubenswrapper[4861]: E0310 18:48:12.146583 4861 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:48:12Z is after 2026-02-23T05:33:13Z" interval="7s" Mar 10 18:48:12 crc kubenswrapper[4861]: I0310 18:48:12.884086 4861 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:48:12Z is after 2026-02-23T05:33:13Z Mar 10 18:48:13 crc kubenswrapper[4861]: I0310 18:48:13.678624 4861 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: 
request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 10 18:48:13 crc kubenswrapper[4861]: I0310 18:48:13.678761 4861 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 10 18:48:13 crc kubenswrapper[4861]: I0310 18:48:13.882423 4861 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:48:13Z is after 2026-02-23T05:33:13Z Mar 10 18:48:14 crc kubenswrapper[4861]: I0310 18:48:14.882818 4861 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:48:14Z is after 2026-02-23T05:33:13Z Mar 10 18:48:15 crc kubenswrapper[4861]: I0310 18:48:15.885115 4861 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:48:15Z is after 2026-02-23T05:33:13Z Mar 10 18:48:16 crc kubenswrapper[4861]: I0310 18:48:16.458694 4861 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Mar 10 18:48:16 crc kubenswrapper[4861]: E0310 18:48:16.464534 4861 
certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:48:16Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Mar 10 18:48:16 crc kubenswrapper[4861]: E0310 18:48:16.465758 4861 certificate_manager.go:440] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Reached backoff limit, still unable to rotate certs: timed out waiting for the condition" logger="UnhandledError" Mar 10 18:48:16 crc kubenswrapper[4861]: W0310 18:48:16.722075 4861 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:48:16Z is after 2026-02-23T05:33:13Z Mar 10 18:48:16 crc kubenswrapper[4861]: E0310 18:48:16.722209 4861 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:48:16Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Mar 10 18:48:16 crc kubenswrapper[4861]: I0310 18:48:16.883423 4861 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet 
valid: current time 2026-03-10T18:48:16Z is after 2026-02-23T05:33:13Z Mar 10 18:48:16 crc kubenswrapper[4861]: W0310 18:48:16.887309 4861 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:48:16Z is after 2026-02-23T05:33:13Z Mar 10 18:48:16 crc kubenswrapper[4861]: E0310 18:48:16.887444 4861 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:48:16Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Mar 10 18:48:16 crc kubenswrapper[4861]: I0310 18:48:16.957340 4861 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 10 18:48:16 crc kubenswrapper[4861]: I0310 18:48:16.958993 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:48:16 crc kubenswrapper[4861]: I0310 18:48:16.959047 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:48:16 crc kubenswrapper[4861]: I0310 18:48:16.959067 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:48:16 crc kubenswrapper[4861]: I0310 18:48:16.959879 4861 scope.go:117] "RemoveContainer" containerID="dc83a9630dbcfd93a7dddaeeda2327ba4bea2d71b74452563b1d17340b2969b3" Mar 10 18:48:17 crc kubenswrapper[4861]: E0310 18:48:17.053362 4861 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed 
to get node info: node \"crc\" not found" Mar 10 18:48:17 crc kubenswrapper[4861]: I0310 18:48:17.882972 4861 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:48:17Z is after 2026-02-23T05:33:13Z Mar 10 18:48:18 crc kubenswrapper[4861]: I0310 18:48:18.176089 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Mar 10 18:48:18 crc kubenswrapper[4861]: I0310 18:48:18.177999 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"561ef20fa2da3317f89ee3f292d4d9d377cb20b48344389f8bd5976c93f3c33e"} Mar 10 18:48:18 crc kubenswrapper[4861]: I0310 18:48:18.178144 4861 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 10 18:48:18 crc kubenswrapper[4861]: I0310 18:48:18.183172 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:48:18 crc kubenswrapper[4861]: I0310 18:48:18.183219 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:48:18 crc kubenswrapper[4861]: I0310 18:48:18.183237 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:48:18 crc kubenswrapper[4861]: I0310 18:48:18.882867 4861 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not 
yet valid: current time 2026-03-10T18:48:18Z is after 2026-02-23T05:33:13Z Mar 10 18:48:19 crc kubenswrapper[4861]: I0310 18:48:19.107898 4861 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 10 18:48:19 crc kubenswrapper[4861]: I0310 18:48:19.109974 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:48:19 crc kubenswrapper[4861]: I0310 18:48:19.110045 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:48:19 crc kubenswrapper[4861]: I0310 18:48:19.110065 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:48:19 crc kubenswrapper[4861]: I0310 18:48:19.110110 4861 kubelet_node_status.go:76] "Attempting to register node" node="crc" Mar 10 18:48:19 crc kubenswrapper[4861]: E0310 18:48:19.115405 4861 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:48:19Z is after 2026-02-23T05:33:13Z" node="crc" Mar 10 18:48:19 crc kubenswrapper[4861]: E0310 18:48:19.152201 4861 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:48:19Z is after 2026-02-23T05:33:13Z" interval="7s" Mar 10 18:48:19 crc kubenswrapper[4861]: I0310 18:48:19.184625 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/2.log" Mar 10 18:48:19 crc kubenswrapper[4861]: I0310 18:48:19.185776 4861 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Mar 10 18:48:19 crc kubenswrapper[4861]: I0310 18:48:19.188324 4861 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="561ef20fa2da3317f89ee3f292d4d9d377cb20b48344389f8bd5976c93f3c33e" exitCode=255 Mar 10 18:48:19 crc kubenswrapper[4861]: I0310 18:48:19.188403 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"561ef20fa2da3317f89ee3f292d4d9d377cb20b48344389f8bd5976c93f3c33e"} Mar 10 18:48:19 crc kubenswrapper[4861]: I0310 18:48:19.188481 4861 scope.go:117] "RemoveContainer" containerID="dc83a9630dbcfd93a7dddaeeda2327ba4bea2d71b74452563b1d17340b2969b3" Mar 10 18:48:19 crc kubenswrapper[4861]: I0310 18:48:19.188665 4861 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 10 18:48:19 crc kubenswrapper[4861]: I0310 18:48:19.190858 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:48:19 crc kubenswrapper[4861]: I0310 18:48:19.191055 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:48:19 crc kubenswrapper[4861]: I0310 18:48:19.191205 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:48:19 crc kubenswrapper[4861]: I0310 18:48:19.192453 4861 scope.go:117] "RemoveContainer" containerID="561ef20fa2da3317f89ee3f292d4d9d377cb20b48344389f8bd5976c93f3c33e" Mar 10 18:48:19 crc kubenswrapper[4861]: E0310 18:48:19.192997 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 20s 
restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Mar 10 18:48:19 crc kubenswrapper[4861]: I0310 18:48:19.882877 4861 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:48:19Z is after 2026-02-23T05:33:13Z Mar 10 18:48:20 crc kubenswrapper[4861]: I0310 18:48:20.193530 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/2.log" Mar 10 18:48:20 crc kubenswrapper[4861]: I0310 18:48:20.881219 4861 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:48:20Z is after 2026-02-23T05:33:13Z Mar 10 18:48:21 crc kubenswrapper[4861]: I0310 18:48:21.342890 4861 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 10 18:48:21 crc kubenswrapper[4861]: I0310 18:48:21.343685 4861 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 10 18:48:21 crc kubenswrapper[4861]: I0310 18:48:21.345401 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:48:21 crc kubenswrapper[4861]: I0310 18:48:21.345458 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:48:21 crc 
kubenswrapper[4861]: I0310 18:48:21.345476 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:48:21 crc kubenswrapper[4861]: I0310 18:48:21.346235 4861 scope.go:117] "RemoveContainer" containerID="561ef20fa2da3317f89ee3f292d4d9d377cb20b48344389f8bd5976c93f3c33e" Mar 10 18:48:21 crc kubenswrapper[4861]: E0310 18:48:21.346502 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Mar 10 18:48:21 crc kubenswrapper[4861]: I0310 18:48:21.598097 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 10 18:48:21 crc kubenswrapper[4861]: E0310 18:48:21.717222 4861 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189b8f511efb107e default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-10 18:47:36.875315326 +0000 UTC m=+0.638751286,LastTimestamp:2026-03-10 18:47:36.875315326 +0000 UTC m=+0.638751286,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 10 18:48:21 crc kubenswrapper[4861]: E0310 18:48:21.723637 4861 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User 
\"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189b8f5123239342 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-10 18:47:36.945079106 +0000 UTC m=+0.708515096,LastTimestamp:2026-03-10 18:47:36.945079106 +0000 UTC m=+0.708515096,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 10 18:48:21 crc kubenswrapper[4861]: E0310 18:48:21.729993 4861 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189b8f512324172d default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-10 18:47:36.945112877 +0000 UTC m=+0.708548867,LastTimestamp:2026-03-10 18:47:36.945112877 +0000 UTC m=+0.708548867,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 10 18:48:21 crc kubenswrapper[4861]: E0310 18:48:21.736093 4861 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189b8f5123245fc2 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-10 18:47:36.945131458 +0000 UTC m=+0.708567458,LastTimestamp:2026-03-10 18:47:36.945131458 +0000 UTC m=+0.708567458,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 10 18:48:21 crc kubenswrapper[4861]: E0310 18:48:21.742304 4861 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189b8f512959c844 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeAllocatableEnforced,Message:Updated Node Allocatable limit across pods,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-10 18:47:37.049294916 +0000 UTC m=+0.812730896,LastTimestamp:2026-03-10 18:47:37.049294916 +0000 UTC m=+0.812730896,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 10 18:48:21 crc kubenswrapper[4861]: E0310 18:48:21.749802 4861 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189b8f5123239342\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189b8f5123239342 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: 
NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-10 18:47:36.945079106 +0000 UTC m=+0.708515096,LastTimestamp:2026-03-10 18:47:37.059440828 +0000 UTC m=+0.822876798,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 10 18:48:21 crc kubenswrapper[4861]: E0310 18:48:21.756091 4861 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189b8f512324172d\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189b8f512324172d default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-10 18:47:36.945112877 +0000 UTC m=+0.708548867,LastTimestamp:2026-03-10 18:47:37.059462768 +0000 UTC m=+0.822898738,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 10 18:48:21 crc kubenswrapper[4861]: E0310 18:48:21.762571 4861 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189b8f5123245fc2\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189b8f5123245fc2 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-10 18:47:36.945131458 +0000 UTC m=+0.708567458,LastTimestamp:2026-03-10 
18:47:37.059474059 +0000 UTC m=+0.822910029,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 10 18:48:21 crc kubenswrapper[4861]: E0310 18:48:21.768911 4861 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189b8f5123239342\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189b8f5123239342 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-10 18:47:36.945079106 +0000 UTC m=+0.708515096,LastTimestamp:2026-03-10 18:47:37.060572329 +0000 UTC m=+0.824008289,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 10 18:48:21 crc kubenswrapper[4861]: E0310 18:48:21.775218 4861 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189b8f512324172d\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189b8f512324172d default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-10 18:47:36.945112877 +0000 UTC m=+0.708548867,LastTimestamp:2026-03-10 18:47:37.06061124 +0000 UTC m=+0.824047200,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 10 18:48:21 crc kubenswrapper[4861]: E0310 18:48:21.776876 4861 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189b8f5123245fc2\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189b8f5123245fc2 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-10 18:47:36.945131458 +0000 UTC m=+0.708567458,LastTimestamp:2026-03-10 18:47:37.060729584 +0000 UTC m=+0.824165544,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 10 18:48:21 crc kubenswrapper[4861]: E0310 18:48:21.783092 4861 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189b8f5123239342\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189b8f5123239342 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-10 18:47:36.945079106 +0000 UTC m=+0.708515096,LastTimestamp:2026-03-10 18:47:37.061511802 +0000 UTC m=+0.824947762,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 10 18:48:21 crc kubenswrapper[4861]: E0310 18:48:21.789679 4861 
event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189b8f512324172d\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189b8f512324172d default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-10 18:47:36.945112877 +0000 UTC m=+0.708548867,LastTimestamp:2026-03-10 18:47:37.061530453 +0000 UTC m=+0.824966403,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 10 18:48:21 crc kubenswrapper[4861]: E0310 18:48:21.796274 4861 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189b8f5123245fc2\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189b8f5123245fc2 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-10 18:47:36.945131458 +0000 UTC m=+0.708567458,LastTimestamp:2026-03-10 18:47:37.061538403 +0000 UTC m=+0.824974363,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 10 18:48:21 crc kubenswrapper[4861]: E0310 18:48:21.802667 4861 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189b8f5123239342\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in 
API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189b8f5123239342 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-10 18:47:36.945079106 +0000 UTC m=+0.708515096,LastTimestamp:2026-03-10 18:47:37.062003789 +0000 UTC m=+0.825439759,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 10 18:48:21 crc kubenswrapper[4861]: E0310 18:48:21.808938 4861 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189b8f512324172d\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189b8f512324172d default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-10 18:47:36.945112877 +0000 UTC m=+0.708548867,LastTimestamp:2026-03-10 18:47:37.06201917 +0000 UTC m=+0.825455140,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 10 18:48:21 crc kubenswrapper[4861]: E0310 18:48:21.815363 4861 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189b8f5123245fc2\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189b8f5123245fc2 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-10 18:47:36.945131458 +0000 UTC m=+0.708567458,LastTimestamp:2026-03-10 18:47:37.06202908 +0000 UTC m=+0.825465050,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 10 18:48:21 crc kubenswrapper[4861]: E0310 18:48:21.821767 4861 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189b8f5123239342\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189b8f5123239342 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-10 18:47:36.945079106 +0000 UTC m=+0.708515096,LastTimestamp:2026-03-10 18:47:37.062250138 +0000 UTC m=+0.825686108,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 10 18:48:21 crc kubenswrapper[4861]: E0310 18:48:21.828225 4861 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189b8f512324172d\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189b8f512324172d default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc 
status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-10 18:47:36.945112877 +0000 UTC m=+0.708548867,LastTimestamp:2026-03-10 18:47:37.062264668 +0000 UTC m=+0.825700648,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 10 18:48:21 crc kubenswrapper[4861]: E0310 18:48:21.834834 4861 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189b8f5123245fc2\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189b8f5123245fc2 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-10 18:47:36.945131458 +0000 UTC m=+0.708567458,LastTimestamp:2026-03-10 18:47:37.062278939 +0000 UTC m=+0.825714919,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 10 18:48:21 crc kubenswrapper[4861]: E0310 18:48:21.841195 4861 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189b8f5123239342\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189b8f5123239342 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-10 18:47:36.945079106 +0000 UTC 
m=+0.708515096,LastTimestamp:2026-03-10 18:47:37.063509673 +0000 UTC m=+0.826945633,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 10 18:48:21 crc kubenswrapper[4861]: E0310 18:48:21.847620 4861 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189b8f512324172d\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189b8f512324172d default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-10 18:47:36.945112877 +0000 UTC m=+0.708548867,LastTimestamp:2026-03-10 18:47:37.063521564 +0000 UTC m=+0.826957524,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 10 18:48:21 crc kubenswrapper[4861]: E0310 18:48:21.854270 4861 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189b8f5123245fc2\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189b8f5123245fc2 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-10 18:47:36.945131458 +0000 UTC m=+0.708567458,LastTimestamp:2026-03-10 18:47:37.063530654 +0000 UTC m=+0.826966604,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 10 18:48:21 crc kubenswrapper[4861]: E0310 18:48:21.860972 4861 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189b8f5123239342\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189b8f5123239342 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-10 18:47:36.945079106 +0000 UTC m=+0.708515096,LastTimestamp:2026-03-10 18:47:37.064006951 +0000 UTC m=+0.827442921,Count:8,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 10 18:48:21 crc kubenswrapper[4861]: E0310 18:48:21.867909 4861 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189b8f512324172d\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189b8f512324172d default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-10 18:47:36.945112877 +0000 UTC m=+0.708548867,LastTimestamp:2026-03-10 18:47:37.064118275 +0000 UTC m=+0.827554245,Count:8,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 10 18:48:21 crc kubenswrapper[4861]: I0310 18:48:21.882539 4861 
csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 10 18:48:21 crc kubenswrapper[4861]: E0310 18:48:21.882655 4861 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.189b8f514282854e openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:d1b160f5dda77d281dd8e69ec8d817f9,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-10 18:47:37.47139515 +0000 UTC m=+1.234831140,LastTimestamp:2026-03-10 18:47:37.47139515 +0000 UTC m=+1.234831140,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 10 18:48:21 crc kubenswrapper[4861]: E0310 18:48:21.884363 4861 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.189b8f514283ae62 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{wait-for-host-port},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-10 18:47:37.471471202 +0000 UTC m=+1.234907182,LastTimestamp:2026-03-10 18:47:37.471471202 +0000 UTC m=+1.234907182,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 10 18:48:21 crc kubenswrapper[4861]: E0310 18:48:21.889832 4861 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189b8f51437ab4ee openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-10 18:47:37.48766027 +0000 UTC m=+1.251096240,LastTimestamp:2026-03-10 18:47:37.48766027 +0000 UTC m=+1.251096240,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 10 18:48:21 crc kubenswrapper[4861]: E0310 18:48:21.891300 4861 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User 
\"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189b8f5145ec185c openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-10 18:47:37.528645724 +0000 UTC m=+1.292081694,LastTimestamp:2026-03-10 18:47:37.528645724 +0000 UTC m=+1.292081694,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 10 18:48:21 crc kubenswrapper[4861]: E0310 18:48:21.897698 4861 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189b8f5146d0e9ea openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-10 18:47:37.543641578 +0000 UTC m=+1.307077548,LastTimestamp:2026-03-10 
18:47:37.543641578 +0000 UTC m=+1.307077548,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 10 18:48:21 crc kubenswrapper[4861]: E0310 18:48:21.904160 4861 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189b8f51671a7100 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Created,Message:Created container kube-controller-manager,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-10 18:47:38.0853312 +0000 UTC m=+1.848767200,LastTimestamp:2026-03-10 18:47:38.0853312 +0000 UTC m=+1.848767200,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 10 18:48:21 crc kubenswrapper[4861]: E0310 18:48:21.911038 4861 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.189b8f516732d702 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{wait-for-host-port},},Reason:Created,Message:Created container 
wait-for-host-port,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-10 18:47:38.086930178 +0000 UTC m=+1.850366138,LastTimestamp:2026-03-10 18:47:38.086930178 +0000 UTC m=+1.850366138,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 10 18:48:21 crc kubenswrapper[4861]: E0310 18:48:21.917755 4861 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189b8f51675ce270 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Created,Message:Created container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-10 18:47:38.089685616 +0000 UTC m=+1.853121606,LastTimestamp:2026-03-10 18:47:38.089685616 +0000 UTC m=+1.853121606,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 10 18:48:21 crc kubenswrapper[4861]: E0310 18:48:21.924036 4861 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.189b8f5167cc42c9 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:d1b160f5dda77d281dd8e69ec8d817f9,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Created,Message:Created 
container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-10 18:47:38.096984777 +0000 UTC m=+1.860420747,LastTimestamp:2026-03-10 18:47:38.096984777 +0000 UTC m=+1.860420747,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 10 18:48:21 crc kubenswrapper[4861]: E0310 18:48:21.930171 4861 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189b8f5167edc177 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Created,Message:Created container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-10 18:47:38.099179895 +0000 UTC m=+1.862615855,LastTimestamp:2026-03-10 18:47:38.099179895 +0000 UTC m=+1.862615855,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 10 18:48:21 crc kubenswrapper[4861]: E0310 18:48:21.936612 4861 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189b8f5168097d18 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Started,Message:Started container kube-controller-manager,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-10 18:47:38.1009974 +0000 UTC m=+1.864433370,LastTimestamp:2026-03-10 18:47:38.1009974 +0000 UTC m=+1.864433370,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 10 18:48:21 crc kubenswrapper[4861]: E0310 18:48:21.943585 4861 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.189b8f51680a4844 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{wait-for-host-port},},Reason:Started,Message:Started container wait-for-host-port,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-10 18:47:38.101049412 +0000 UTC m=+1.864485372,LastTimestamp:2026-03-10 18:47:38.101049412 +0000 UTC m=+1.864485372,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 10 18:48:21 crc kubenswrapper[4861]: E0310 18:48:21.950685 4861 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189b8f51681b3491 
openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Started,Message:Started container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-10 18:47:38.102158481 +0000 UTC m=+1.865594481,LastTimestamp:2026-03-10 18:47:38.102158481 +0000 UTC m=+1.865594481,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 10 18:48:21 crc kubenswrapper[4861]: E0310 18:48:21.957473 4861 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189b8f516828541a openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-10 18:47:38.103018522 +0000 UTC m=+1.866454482,LastTimestamp:2026-03-10 18:47:38.103018522 +0000 UTC m=+1.866454482,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 10 18:48:21 crc kubenswrapper[4861]: E0310 18:48:21.964658 4861 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User 
\"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.189b8f51690cdd46 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:d1b160f5dda77d281dd8e69ec8d817f9,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Started,Message:Started container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-10 18:47:38.117995846 +0000 UTC m=+1.881431806,LastTimestamp:2026-03-10 18:47:38.117995846 +0000 UTC m=+1.881431806,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 10 18:48:21 crc kubenswrapper[4861]: E0310 18:48:21.971037 4861 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189b8f51691fe8e1 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Started,Message:Started container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-10 18:47:38.119244001 +0000 UTC m=+1.882679961,LastTimestamp:2026-03-10 18:47:38.119244001 +0000 UTC m=+1.882679961,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 10 18:48:21 crc kubenswrapper[4861]: E0310 18:48:21.977659 4861 event.go:359] "Server rejected event (will not 
retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189b8f517a354026 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Created,Message:Created container cluster-policy-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-10 18:47:38.40585527 +0000 UTC m=+2.169291240,LastTimestamp:2026-03-10 18:47:38.40585527 +0000 UTC m=+2.169291240,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 10 18:48:21 crc kubenswrapper[4861]: E0310 18:48:21.985640 4861 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189b8f517af14987 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Started,Message:Started container cluster-policy-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-10 18:47:38.418178439 +0000 UTC m=+2.181614409,LastTimestamp:2026-03-10 18:47:38.418178439 +0000 UTC m=+2.181614409,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 10 18:48:21 crc kubenswrapper[4861]: E0310 18:48:21.993917 4861 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189b8f517b07039b openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-cert-syncer},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-10 18:47:38.419602331 +0000 UTC m=+2.183038301,LastTimestamp:2026-03-10 18:47:38.419602331 +0000 UTC m=+2.183038301,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 10 18:48:22 crc kubenswrapper[4861]: E0310 18:48:22.000753 4861 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189b8f518a159290 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-cert-syncer},},Reason:Created,Message:Created container kube-controller-manager-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-10 18:47:38.672214672 +0000 UTC m=+2.435650672,LastTimestamp:2026-03-10 18:47:38.672214672 +0000 UTC m=+2.435650672,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 10 18:48:22 crc kubenswrapper[4861]: E0310 18:48:22.007131 4861 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189b8f518b05556b openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-cert-syncer},},Reason:Started,Message:Started container kube-controller-manager-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-10 18:47:38.687927659 +0000 UTC m=+2.451363659,LastTimestamp:2026-03-10 18:47:38.687927659 +0000 UTC m=+2.451363659,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 10 18:48:22 crc kubenswrapper[4861]: E0310 18:48:22.013397 4861 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the 
namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189b8f518b1e0ccd openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-recovery-controller},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-10 18:47:38.689547469 +0000 UTC m=+2.452983459,LastTimestamp:2026-03-10 18:47:38.689547469 +0000 UTC m=+2.452983459,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 10 18:48:22 crc kubenswrapper[4861]: E0310 18:48:22.019702 4861 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189b8f5199ff31bb openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-recovery-controller},},Reason:Created,Message:Created container kube-controller-manager-recovery-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-10 18:47:38.939183547 +0000 UTC m=+2.702619537,LastTimestamp:2026-03-10 18:47:38.939183547 +0000 UTC 
m=+2.702619537,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 10 18:48:22 crc kubenswrapper[4861]: E0310 18:48:22.025939 4861 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189b8f519af36478 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-recovery-controller},},Reason:Started,Message:Started container kube-controller-manager-recovery-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-10 18:47:38.95518732 +0000 UTC m=+2.718623310,LastTimestamp:2026-03-10 18:47:38.95518732 +0000 UTC m=+2.718623310,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 10 18:48:22 crc kubenswrapper[4861]: E0310 18:48:22.032418 4861 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.189b8f519c73c055 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Pulled,Message:Container image 
\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-10 18:47:38.980376661 +0000 UTC m=+2.743812652,LastTimestamp:2026-03-10 18:47:38.980376661 +0000 UTC m=+2.743812652,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 10 18:48:22 crc kubenswrapper[4861]: E0310 18:48:22.040051 4861 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189b8f519d291450 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-10 18:47:38.992260176 +0000 UTC m=+2.755696126,LastTimestamp:2026-03-10 18:47:38.992260176 +0000 UTC m=+2.755696126,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 10 18:48:22 crc kubenswrapper[4861]: E0310 18:48:22.046461 4861 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189b8f519d2a4dc1 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] 
map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-ensure-env-vars},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-10 18:47:38.992340417 +0000 UTC m=+2.755776407,LastTimestamp:2026-03-10 18:47:38.992340417 +0000 UTC m=+2.755776407,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 10 18:48:22 crc kubenswrapper[4861]: E0310 18:48:22.055127 4861 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.189b8f519d7a2e33 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:d1b160f5dda77d281dd8e69ec8d817f9,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-10 18:47:38.997575219 +0000 UTC m=+2.761011220,LastTimestamp:2026-03-10 18:47:38.997575219 +0000 UTC m=+2.761011220,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 10 18:48:22 crc kubenswrapper[4861]: E0310 18:48:22.061943 4861 
event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.189b8f51a951b633 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Created,Message:Created container kube-scheduler,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-10 18:47:39.196249651 +0000 UTC m=+2.959685601,LastTimestamp:2026-03-10 18:47:39.196249651 +0000 UTC m=+2.959685601,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 10 18:48:22 crc kubenswrapper[4861]: E0310 18:48:22.068512 4861 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.189b8f51aa7e3cdb openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Started,Message:Started container kube-scheduler,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-10 18:47:39.215944923 +0000 UTC m=+2.979380883,LastTimestamp:2026-03-10 18:47:39.215944923 +0000 UTC m=+2.979380883,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" 
Mar 10 18:48:22 crc kubenswrapper[4861]: E0310 18:48:22.075118 4861 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.189b8f51aa8da7c7 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-cert-syncer},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-10 18:47:39.216955335 +0000 UTC m=+2.980391295,LastTimestamp:2026-03-10 18:47:39.216955335 +0000 UTC m=+2.980391295,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 10 18:48:22 crc kubenswrapper[4861]: W0310 18:48:22.075242 4861 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Mar 10 18:48:22 crc kubenswrapper[4861]: E0310 18:48:22.075290 4861 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" Mar 10 18:48:22 crc kubenswrapper[4861]: E0310 18:48:22.078322 4861 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create 
resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189b8f51aaa7adeb openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Created,Message:Created container kube-apiserver,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-10 18:47:39.218660843 +0000 UTC m=+2.982096803,LastTimestamp:2026-03-10 18:47:39.218660843 +0000 UTC m=+2.982096803,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 10 18:48:22 crc kubenswrapper[4861]: E0310 18:48:22.081280 4861 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189b8f51aab8c0a8 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-ensure-env-vars},},Reason:Created,Message:Created container etcd-ensure-env-vars,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-10 18:47:39.219779752 +0000 UTC m=+2.983215702,LastTimestamp:2026-03-10 18:47:39.219779752 +0000 UTC m=+2.983215702,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 10 18:48:22 crc kubenswrapper[4861]: E0310 18:48:22.085031 4861 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create 
resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.189b8f51aabf6943 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:d1b160f5dda77d281dd8e69ec8d817f9,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Created,Message:Created container kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-10 18:47:39.220216131 +0000 UTC m=+2.983652091,LastTimestamp:2026-03-10 18:47:39.220216131 +0000 UTC m=+2.983652091,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 10 18:48:22 crc kubenswrapper[4861]: E0310 18:48:22.090953 4861 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189b8f51abe1dd49 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-ensure-env-vars},},Reason:Started,Message:Started container etcd-ensure-env-vars,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-10 18:47:39.239251273 +0000 UTC m=+3.002687233,LastTimestamp:2026-03-10 18:47:39.239251273 +0000 UTC m=+3.002687233,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 10 18:48:22 crc kubenswrapper[4861]: E0310 18:48:22.092566 4861 event.go:359] "Server rejected event (will not retry!)" err="events is 
forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.189b8f51ac191ad7 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:d1b160f5dda77d281dd8e69ec8d817f9,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Started,Message:Started container kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-10 18:47:39.242871511 +0000 UTC m=+3.006307471,LastTimestamp:2026-03-10 18:47:39.242871511 +0000 UTC m=+3.006307471,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 10 18:48:22 crc kubenswrapper[4861]: E0310 18:48:22.099105 4861 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189b8f51ac3b9e26 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Started,Message:Started container kube-apiserver,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-10 18:47:39.24513335 +0000 UTC m=+3.008569310,LastTimestamp:2026-03-10 18:47:39.24513335 +0000 UTC m=+3.008569310,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 10 18:48:22 crc kubenswrapper[4861]: E0310 
18:48:22.105469 4861 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189b8f51ac4d26f8 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-syncer},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-10 18:47:39.246282488 +0000 UTC m=+3.009718448,LastTimestamp:2026-03-10 18:47:39.246282488 +0000 UTC m=+3.009718448,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 10 18:48:22 crc kubenswrapper[4861]: E0310 18:48:22.111843 4861 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.189b8f51b6112593 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-cert-syncer},},Reason:Created,Message:Created container kube-scheduler-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-10 18:47:39.410122131 +0000 UTC m=+3.173558091,LastTimestamp:2026-03-10 
18:47:39.410122131 +0000 UTC m=+3.173558091,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 10 18:48:22 crc kubenswrapper[4861]: E0310 18:48:22.118158 4861 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189b8f51b62e6110 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-syncer},},Reason:Created,Message:Created container kube-apiserver-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-10 18:47:39.412037904 +0000 UTC m=+3.175473864,LastTimestamp:2026-03-10 18:47:39.412037904 +0000 UTC m=+3.175473864,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 10 18:48:22 crc kubenswrapper[4861]: E0310 18:48:22.124505 4861 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.189b8f51b6dd01cc openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-cert-syncer},},Reason:Started,Message:Started container 
kube-scheduler-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-10 18:47:39.423482316 +0000 UTC m=+3.186918276,LastTimestamp:2026-03-10 18:47:39.423482316 +0000 UTC m=+3.186918276,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 10 18:48:22 crc kubenswrapper[4861]: E0310 18:48:22.130924 4861 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.189b8f51b6f79da7 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-recovery-controller},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-10 18:47:39.425226151 +0000 UTC m=+3.188662101,LastTimestamp:2026-03-10 18:47:39.425226151 +0000 UTC m=+3.188662101,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 10 18:48:22 crc kubenswrapper[4861]: E0310 18:48:22.137183 4861 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189b8f51b71ec5eb openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-syncer},},Reason:Started,Message:Started container kube-apiserver-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-10 18:47:39.427792363 +0000 UTC m=+3.191228323,LastTimestamp:2026-03-10 18:47:39.427792363 +0000 UTC m=+3.191228323,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 10 18:48:22 crc kubenswrapper[4861]: E0310 18:48:22.143471 4861 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189b8f51b783e842 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-regeneration-controller},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-10 18:47:39.43442029 +0000 UTC m=+3.197856250,LastTimestamp:2026-03-10 18:47:39.43442029 +0000 UTC m=+3.197856250,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 10 18:48:22 crc kubenswrapper[4861]: E0310 18:48:22.150125 4861 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot 
create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189b8f51c2acb0e4 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-regeneration-controller},},Reason:Created,Message:Created container kube-apiserver-cert-regeneration-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-10 18:47:39.621642468 +0000 UTC m=+3.385078439,LastTimestamp:2026-03-10 18:47:39.621642468 +0000 UTC m=+3.385078439,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 10 18:48:22 crc kubenswrapper[4861]: E0310 18:48:22.156601 4861 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.189b8f51c2f8311a openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-recovery-controller},},Reason:Created,Message:Created container kube-scheduler-recovery-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-10 18:47:39.62659049 +0000 UTC m=+3.390026460,LastTimestamp:2026-03-10 18:47:39.62659049 +0000 UTC m=+3.390026460,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 10 18:48:22 crc 
kubenswrapper[4861]: E0310 18:48:22.162969 4861 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189b8f51c3e8c0d4 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-regeneration-controller},},Reason:Started,Message:Started container kube-apiserver-cert-regeneration-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-10 18:47:39.642355924 +0000 UTC m=+3.405791904,LastTimestamp:2026-03-10 18:47:39.642355924 +0000 UTC m=+3.405791904,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 10 18:48:22 crc kubenswrapper[4861]: E0310 18:48:22.169338 4861 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189b8f51c3fbab18 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-insecure-readyz},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-10 18:47:39.643595544 +0000 UTC 
m=+3.407031524,LastTimestamp:2026-03-10 18:47:39.643595544 +0000 UTC m=+3.407031524,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 10 18:48:22 crc kubenswrapper[4861]: E0310 18:48:22.175964 4861 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.189b8f51c41d9b93 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-recovery-controller},},Reason:Started,Message:Started container kube-scheduler-recovery-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-10 18:47:39.645819795 +0000 UTC m=+3.409255765,LastTimestamp:2026-03-10 18:47:39.645819795 +0000 UTC m=+3.409255765,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 10 18:48:22 crc kubenswrapper[4861]: E0310 18:48:22.182363 4861 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189b8f51ceefc0ad openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-insecure-readyz},},Reason:Created,Message:Created container 
kube-apiserver-insecure-readyz,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-10 18:47:39.827364013 +0000 UTC m=+3.590799983,LastTimestamp:2026-03-10 18:47:39.827364013 +0000 UTC m=+3.590799983,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 10 18:48:22 crc kubenswrapper[4861]: E0310 18:48:22.188815 4861 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189b8f51cfa70dc2 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-insecure-readyz},},Reason:Started,Message:Started container kube-apiserver-insecure-readyz,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-10 18:47:39.839376834 +0000 UTC m=+3.602812834,LastTimestamp:2026-03-10 18:47:39.839376834 +0000 UTC m=+3.602812834,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 10 18:48:22 crc kubenswrapper[4861]: E0310 18:48:22.195273 4861 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189b8f51cfba4314 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-10 18:47:39.840635668 +0000 UTC m=+3.604071668,LastTimestamp:2026-03-10 18:47:39.840635668 +0000 UTC m=+3.604071668,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 10 18:48:22 crc kubenswrapper[4861]: E0310 18:48:22.201767 4861 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189b8f51d9b7f998 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-resources-copy},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-10 18:47:40.008257944 +0000 UTC m=+3.771693904,LastTimestamp:2026-03-10 18:47:40.008257944 +0000 UTC m=+3.771693904,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 10 18:48:22 crc kubenswrapper[4861]: I0310 18:48:22.202145 4861 kubelet_node_status.go:401] "Setting node annotation to enable 
volume controller attach/detach" Mar 10 18:48:22 crc kubenswrapper[4861]: I0310 18:48:22.203302 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:48:22 crc kubenswrapper[4861]: I0310 18:48:22.203351 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:48:22 crc kubenswrapper[4861]: I0310 18:48:22.203372 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:48:22 crc kubenswrapper[4861]: I0310 18:48:22.204158 4861 scope.go:117] "RemoveContainer" containerID="561ef20fa2da3317f89ee3f292d4d9d377cb20b48344389f8bd5976c93f3c33e" Mar 10 18:48:22 crc kubenswrapper[4861]: E0310 18:48:22.204433 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Mar 10 18:48:22 crc kubenswrapper[4861]: E0310 18:48:22.208703 4861 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189b8f51dd6091fe openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Created,Message:Created container kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-10 
18:47:40.069638654 +0000 UTC m=+3.833074614,LastTimestamp:2026-03-10 18:47:40.069638654 +0000 UTC m=+3.833074614,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 10 18:48:22 crc kubenswrapper[4861]: E0310 18:48:22.215547 4861 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189b8f51de5b1fb9 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Started,Message:Started container kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-10 18:47:40.086058937 +0000 UTC m=+3.849494897,LastTimestamp:2026-03-10 18:47:40.086058937 +0000 UTC m=+3.849494897,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 10 18:48:22 crc kubenswrapper[4861]: E0310 18:48:22.222617 4861 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189b8f51e49c917f openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-resources-copy},},Reason:Created,Message:Created container 
etcd-resources-copy,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-10 18:47:40.191011199 +0000 UTC m=+3.954447159,LastTimestamp:2026-03-10 18:47:40.191011199 +0000 UTC m=+3.954447159,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 10 18:48:22 crc kubenswrapper[4861]: E0310 18:48:22.229288 4861 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189b8f51e561ed0f openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-resources-copy},},Reason:Started,Message:Started container etcd-resources-copy,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-10 18:47:40.203945231 +0000 UTC m=+3.967381191,LastTimestamp:2026-03-10 18:47:40.203945231 +0000 UTC m=+3.967381191,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 10 18:48:22 crc kubenswrapper[4861]: E0310 18:48:22.236935 4861 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189b8f5216add40a openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcdctl},},Reason:Pulled,Message:Container image 
\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-10 18:47:41.031003146 +0000 UTC m=+4.794439146,LastTimestamp:2026-03-10 18:47:41.031003146 +0000 UTC m=+4.794439146,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 10 18:48:22 crc kubenswrapper[4861]: E0310 18:48:22.243750 4861 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189b8f5227522e4c openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcdctl},},Reason:Created,Message:Created container etcdctl,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-10 18:47:41.310209612 +0000 UTC m=+5.073645602,LastTimestamp:2026-03-10 18:47:41.310209612 +0000 UTC m=+5.073645602,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 10 18:48:22 crc kubenswrapper[4861]: E0310 18:48:22.250309 4861 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189b8f5229f184fa openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcdctl},},Reason:Started,Message:Started container etcdctl,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-10 18:47:41.354206458 +0000 UTC m=+5.117642448,LastTimestamp:2026-03-10 18:47:41.354206458 +0000 UTC m=+5.117642448,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 10 18:48:22 crc kubenswrapper[4861]: E0310 18:48:22.256491 4861 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189b8f522a09f34a openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-10 18:47:41.355807562 +0000 UTC m=+5.119243552,LastTimestamp:2026-03-10 18:47:41.355807562 +0000 UTC m=+5.119243552,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 10 18:48:22 crc kubenswrapper[4861]: E0310 18:48:22.263048 4861 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189b8f52398027d5 openshift-etcd 0 0001-01-01 
00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Created,Message:Created container etcd,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-10 18:47:41.615212501 +0000 UTC m=+5.378648491,LastTimestamp:2026-03-10 18:47:41.615212501 +0000 UTC m=+5.378648491,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 10 18:48:22 crc kubenswrapper[4861]: E0310 18:48:22.270496 4861 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189b8f523a635338 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Started,Message:Started container etcd,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-10 18:47:41.63010028 +0000 UTC m=+5.393536280,LastTimestamp:2026-03-10 18:47:41.63010028 +0000 UTC m=+5.393536280,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 10 18:48:22 crc kubenswrapper[4861]: E0310 18:48:22.276831 4861 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189b8f523a7b38a8 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-metrics},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-10 18:47:41.631666344 +0000 UTC m=+5.395102344,LastTimestamp:2026-03-10 18:47:41.631666344 +0000 UTC m=+5.395102344,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 10 18:48:22 crc kubenswrapper[4861]: E0310 18:48:22.283048 4861 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189b8f5248a990a3 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-metrics},},Reason:Created,Message:Created container etcd-metrics,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-10 18:47:41.869584547 +0000 UTC m=+5.633020537,LastTimestamp:2026-03-10 18:47:41.869584547 +0000 UTC m=+5.633020537,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 10 18:48:22 crc kubenswrapper[4861]: E0310 18:48:22.289694 4861 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189b8f5249891882 
openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-metrics},},Reason:Started,Message:Started container etcd-metrics,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-10 18:47:41.884233858 +0000 UTC m=+5.647669858,LastTimestamp:2026-03-10 18:47:41.884233858 +0000 UTC m=+5.647669858,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 10 18:48:22 crc kubenswrapper[4861]: E0310 18:48:22.296439 4861 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189b8f5249a5826e openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-readyz},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-10 18:47:41.886095982 +0000 UTC m=+5.649531972,LastTimestamp:2026-03-10 18:47:41.886095982 +0000 UTC m=+5.649531972,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 10 18:48:22 crc kubenswrapper[4861]: E0310 18:48:22.302077 4861 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace 
\"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189b8f52596222bb openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-readyz},},Reason:Created,Message:Created container etcd-readyz,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-10 18:47:42.150116027 +0000 UTC m=+5.913552007,LastTimestamp:2026-03-10 18:47:42.150116027 +0000 UTC m=+5.913552007,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 10 18:48:22 crc kubenswrapper[4861]: E0310 18:48:22.307401 4861 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189b8f525a49df9d openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-readyz},},Reason:Started,Message:Started container etcd-readyz,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-10 18:47:42.165303197 +0000 UTC m=+5.928739177,LastTimestamp:2026-03-10 18:47:42.165303197 +0000 UTC m=+5.928739177,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 10 18:48:22 crc kubenswrapper[4861]: E0310 18:48:22.315143 4861 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189b8f525a62636a 
openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-rev},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-10 18:47:42.166909802 +0000 UTC m=+5.930345772,LastTimestamp:2026-03-10 18:47:42.166909802 +0000 UTC m=+5.930345772,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 10 18:48:22 crc kubenswrapper[4861]: E0310 18:48:22.321552 4861 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189b8f5269528997 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-rev},},Reason:Created,Message:Created container etcd-rev,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-10 18:47:42.417529239 +0000 UTC m=+6.180965199,LastTimestamp:2026-03-10 18:47:42.417529239 +0000 UTC m=+6.180965199,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 10 18:48:22 crc kubenswrapper[4861]: E0310 18:48:22.328132 4861 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" 
event="&Event{ObjectMeta:{etcd-crc.189b8f526a384c53 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-rev},},Reason:Started,Message:Started container etcd-rev,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-10 18:47:42.432586835 +0000 UTC m=+6.196022825,LastTimestamp:2026-03-10 18:47:42.432586835 +0000 UTC m=+6.196022825,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 10 18:48:22 crc kubenswrapper[4861]: E0310 18:48:22.335527 4861 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event=< Mar 10 18:48:22 crc kubenswrapper[4861]: &Event{ObjectMeta:{kube-controller-manager-crc.189b8f52b4698a5d openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:ProbeError,Message:Startup probe error: Get "https://192.168.126.11:10357/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Mar 10 18:48:22 crc kubenswrapper[4861]: body: Mar 10 18:48:22 crc kubenswrapper[4861]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-10 18:47:43.677327965 +0000 UTC m=+7.440763965,LastTimestamp:2026-03-10 18:47:43.677327965 +0000 UTC m=+7.440763965,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} 
Mar 10 18:48:22 crc kubenswrapper[4861]: > Mar 10 18:48:22 crc kubenswrapper[4861]: E0310 18:48:22.341912 4861 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189b8f52b46abcc7 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Unhealthy,Message:Startup probe failed: Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-10 18:47:43.677406407 +0000 UTC m=+7.440842397,LastTimestamp:2026-03-10 18:47:43.677406407 +0000 UTC m=+7.440842397,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 10 18:48:22 crc kubenswrapper[4861]: E0310 18:48:22.353656 4861 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event=< Mar 10 18:48:22 crc kubenswrapper[4861]: &Event{ObjectMeta:{kube-apiserver-crc.189b8f547d5a4c51 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:ProbeError,Message:Liveness probe error: Get 
"https://192.168.126.11:17697/healthz": dial tcp 192.168.126.11:17697: connect: connection refused Mar 10 18:48:22 crc kubenswrapper[4861]: body: Mar 10 18:48:22 crc kubenswrapper[4861]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-10 18:47:51.343516753 +0000 UTC m=+15.106952743,LastTimestamp:2026-03-10 18:47:51.343516753 +0000 UTC m=+15.106952743,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Mar 10 18:48:22 crc kubenswrapper[4861]: > Mar 10 18:48:22 crc kubenswrapper[4861]: E0310 18:48:22.360294 4861 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189b8f547d5b50fd openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Unhealthy,Message:Liveness probe failed: Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-10 18:47:51.343583485 +0000 UTC m=+15.107019485,LastTimestamp:2026-03-10 18:47:51.343583485 +0000 UTC m=+15.107019485,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 10 18:48:22 crc kubenswrapper[4861]: E0310 18:48:22.367529 4861 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event=< Mar 10 18:48:22 crc 
kubenswrapper[4861]: &Event{ObjectMeta:{kube-apiserver-crc.189b8f548c874ad9 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:ProbeError,Message:Readiness probe error: Get "https://192.168.126.11:17697/healthz": dial tcp 192.168.126.11:17697: connect: connection refused Mar 10 18:48:22 crc kubenswrapper[4861]: body: Mar 10 18:48:22 crc kubenswrapper[4861]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-10 18:47:51.598123737 +0000 UTC m=+15.361559707,LastTimestamp:2026-03-10 18:47:51.598123737 +0000 UTC m=+15.361559707,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Mar 10 18:48:22 crc kubenswrapper[4861]: > Mar 10 18:48:22 crc kubenswrapper[4861]: E0310 18:48:22.374343 4861 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189b8f548c881821 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Unhealthy,Message:Readiness probe failed: Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-10 18:47:51.598176289 +0000 UTC m=+15.361612259,LastTimestamp:2026-03-10 18:47:51.598176289 +0000 UTC 
m=+15.361612259,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 10 18:48:22 crc kubenswrapper[4861]: E0310 18:48:22.382060 4861 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event=< Mar 10 18:48:22 crc kubenswrapper[4861]: &Event{ObjectMeta:{kube-apiserver-crc.189b8f549230a6ab openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:ProbeError,Message:Startup probe error: HTTP probe failed with statuscode: 403 Mar 10 18:48:22 crc kubenswrapper[4861]: body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Mar 10 18:48:22 crc kubenswrapper[4861]: Mar 10 18:48:22 crc kubenswrapper[4861]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-10 18:47:51.693108907 +0000 UTC m=+15.456544897,LastTimestamp:2026-03-10 18:47:51.693108907 +0000 UTC m=+15.456544897,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Mar 10 18:48:22 crc kubenswrapper[4861]: > Mar 10 18:48:22 crc kubenswrapper[4861]: E0310 18:48:22.388550 4861 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189b8f549231576e openshift-kube-apiserver 0 0001-01-01 00:00:00 
+0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Startup probe failed: HTTP probe failed with statuscode: 403,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-10 18:47:51.693154158 +0000 UTC m=+15.456590148,LastTimestamp:2026-03-10 18:47:51.693154158 +0000 UTC m=+15.456590148,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 10 18:48:22 crc kubenswrapper[4861]: E0310 18:48:22.395006 4861 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.189b8f549230a6ab\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event=< Mar 10 18:48:22 crc kubenswrapper[4861]: &Event{ObjectMeta:{kube-apiserver-crc.189b8f549230a6ab openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:ProbeError,Message:Startup probe error: HTTP probe failed with statuscode: 403 Mar 10 18:48:22 crc kubenswrapper[4861]: body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Mar 10 18:48:22 crc kubenswrapper[4861]: Mar 10 18:48:22 crc kubenswrapper[4861]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-10 18:47:51.693108907 +0000 UTC m=+15.456544897,LastTimestamp:2026-03-10 18:47:51.710195293 +0000 UTC 
m=+15.473631273,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Mar 10 18:48:22 crc kubenswrapper[4861]: > Mar 10 18:48:22 crc kubenswrapper[4861]: E0310 18:48:22.404973 4861 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event=< Mar 10 18:48:22 crc kubenswrapper[4861]: &Event{ObjectMeta:{kube-controller-manager-crc.189b8f55087c786b openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:ProbeError,Message:Startup probe error: Get "https://192.168.126.11:10357/healthz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) Mar 10 18:48:22 crc kubenswrapper[4861]: body: Mar 10 18:48:22 crc kubenswrapper[4861]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-10 18:47:53.677789291 +0000 UTC m=+17.441225281,LastTimestamp:2026-03-10 18:47:53.677789291 +0000 UTC m=+17.441225281,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Mar 10 18:48:22 crc kubenswrapper[4861]: > Mar 10 18:48:22 crc kubenswrapper[4861]: E0310 18:48:22.411279 4861 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189b8f55087d86bd openshift-kube-controller-manager 0 0001-01-01 00:00:00 
+0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Unhealthy,Message:Startup probe failed: Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-10 18:47:53.677858493 +0000 UTC m=+17.441294483,LastTimestamp:2026-03-10 18:47:53.677858493 +0000 UTC m=+17.441294483,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 10 18:48:22 crc kubenswrapper[4861]: E0310 18:48:22.420810 4861 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-controller-manager-crc.189b8f55087c786b\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event=< Mar 10 18:48:22 crc kubenswrapper[4861]: &Event{ObjectMeta:{kube-controller-manager-crc.189b8f55087c786b openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:ProbeError,Message:Startup probe error: Get "https://192.168.126.11:10357/healthz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) Mar 10 18:48:22 crc kubenswrapper[4861]: body: Mar 10 18:48:22 crc kubenswrapper[4861]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-10 18:47:53.677789291 +0000 UTC 
m=+17.441225281,LastTimestamp:2026-03-10 18:48:03.678511858 +0000 UTC m=+27.441947828,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Mar 10 18:48:22 crc kubenswrapper[4861]: > Mar 10 18:48:22 crc kubenswrapper[4861]: E0310 18:48:22.427609 4861 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-controller-manager-crc.189b8f55087d86bd\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189b8f55087d86bd openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Unhealthy,Message:Startup probe failed: Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-10 18:47:53.677858493 +0000 UTC m=+17.441294483,LastTimestamp:2026-03-10 18:48:03.678638953 +0000 UTC m=+27.442074923,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 10 18:48:22 crc kubenswrapper[4861]: E0310 18:48:22.434303 4861 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189b8f575cc1a332 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Killing,Message:Container cluster-policy-controller failed startup probe, will be restarted,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-10 18:48:03.681542962 +0000 UTC m=+27.444978962,LastTimestamp:2026-03-10 18:48:03.681542962 +0000 UTC m=+27.444978962,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 10 18:48:22 crc kubenswrapper[4861]: E0310 18:48:22.441001 4861 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-controller-manager-crc.189b8f516828541a\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189b8f516828541a openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-10 18:47:38.103018522 +0000 UTC m=+1.866454482,LastTimestamp:2026-03-10 18:48:03.800842214 +0000 UTC m=+27.564278204,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 10 18:48:22 crc kubenswrapper[4861]: E0310 18:48:22.447316 4861 
event.go:359] "Server rejected event (will not retry!)" err="events \"kube-controller-manager-crc.189b8f517a354026\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189b8f517a354026 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Created,Message:Created container cluster-policy-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-10 18:47:38.40585527 +0000 UTC m=+2.169291240,LastTimestamp:2026-03-10 18:48:04.036344905 +0000 UTC m=+27.799780895,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 10 18:48:22 crc kubenswrapper[4861]: E0310 18:48:22.453930 4861 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-controller-manager-crc.189b8f517af14987\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189b8f517af14987 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Started,Message:Started container cluster-policy-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-10 18:47:38.418178439 +0000 UTC 
m=+2.181614409,LastTimestamp:2026-03-10 18:48:04.049283628 +0000 UTC m=+27.812719628,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 10 18:48:22 crc kubenswrapper[4861]: E0310 18:48:22.462856 4861 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-controller-manager-crc.189b8f55087c786b\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event=< Mar 10 18:48:22 crc kubenswrapper[4861]: &Event{ObjectMeta:{kube-controller-manager-crc.189b8f55087c786b openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:ProbeError,Message:Startup probe error: Get "https://192.168.126.11:10357/healthz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) Mar 10 18:48:22 crc kubenswrapper[4861]: body: Mar 10 18:48:22 crc kubenswrapper[4861]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-10 18:47:53.677789291 +0000 UTC m=+17.441225281,LastTimestamp:2026-03-10 18:48:13.678705427 +0000 UTC m=+37.442141427,Count:3,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Mar 10 18:48:22 crc kubenswrapper[4861]: > Mar 10 18:48:22 crc kubenswrapper[4861]: E0310 18:48:22.469577 4861 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-controller-manager-crc.189b8f55087d86bd\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" 
event="&Event{ObjectMeta:{kube-controller-manager-crc.189b8f55087d86bd openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Unhealthy,Message:Startup probe failed: Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-10 18:47:53.677858493 +0000 UTC m=+17.441294483,LastTimestamp:2026-03-10 18:48:13.678805131 +0000 UTC m=+37.442241131,Count:3,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 10 18:48:22 crc kubenswrapper[4861]: I0310 18:48:22.885038 4861 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 10 18:48:23 crc kubenswrapper[4861]: I0310 18:48:23.678292 4861 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 10 18:48:23 crc kubenswrapper[4861]: I0310 18:48:23.678429 4861 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": 
net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 10 18:48:23 crc kubenswrapper[4861]: E0310 18:48:23.685332 4861 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-controller-manager-crc.189b8f55087c786b\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event=< Mar 10 18:48:23 crc kubenswrapper[4861]: &Event{ObjectMeta:{kube-controller-manager-crc.189b8f55087c786b openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:ProbeError,Message:Startup probe error: Get "https://192.168.126.11:10357/healthz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) Mar 10 18:48:23 crc kubenswrapper[4861]: body: Mar 10 18:48:23 crc kubenswrapper[4861]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-10 18:47:53.677789291 +0000 UTC m=+17.441225281,LastTimestamp:2026-03-10 18:48:23.678389288 +0000 UTC m=+47.441825288,Count:4,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Mar 10 18:48:23 crc kubenswrapper[4861]: > Mar 10 18:48:23 crc kubenswrapper[4861]: I0310 18:48:23.884893 4861 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 10 18:48:24 crc kubenswrapper[4861]: I0310 18:48:24.880660 4861 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: 
csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 10 18:48:25 crc kubenswrapper[4861]: I0310 18:48:25.885445 4861 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 10 18:48:26 crc kubenswrapper[4861]: I0310 18:48:26.115849 4861 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 10 18:48:26 crc kubenswrapper[4861]: I0310 18:48:26.117422 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:48:26 crc kubenswrapper[4861]: I0310 18:48:26.117479 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:48:26 crc kubenswrapper[4861]: I0310 18:48:26.117491 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:48:26 crc kubenswrapper[4861]: I0310 18:48:26.117527 4861 kubelet_node_status.go:76] "Attempting to register node" node="crc" Mar 10 18:48:26 crc kubenswrapper[4861]: E0310 18:48:26.125388 4861 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Mar 10 18:48:26 crc kubenswrapper[4861]: E0310 18:48:26.160589 4861 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Mar 10 18:48:26 crc kubenswrapper[4861]: I0310 18:48:26.884942 4861 csi_plugin.go:884] Failed to contact API server when 
waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 10 18:48:27 crc kubenswrapper[4861]: E0310 18:48:27.053910 4861 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Mar 10 18:48:27 crc kubenswrapper[4861]: I0310 18:48:27.883658 4861 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 10 18:48:28 crc kubenswrapper[4861]: W0310 18:48:28.851200 4861 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Mar 10 18:48:28 crc kubenswrapper[4861]: E0310 18:48:28.851274 4861 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" Mar 10 18:48:28 crc kubenswrapper[4861]: I0310 18:48:28.884879 4861 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 10 18:48:29 crc kubenswrapper[4861]: I0310 18:48:29.788169 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Mar 10 18:48:29 crc kubenswrapper[4861]: I0310 18:48:29.788435 4861 kubelet_node_status.go:401] "Setting node 
annotation to enable volume controller attach/detach" Mar 10 18:48:29 crc kubenswrapper[4861]: I0310 18:48:29.790059 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:48:29 crc kubenswrapper[4861]: I0310 18:48:29.790116 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:48:29 crc kubenswrapper[4861]: I0310 18:48:29.790134 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:48:29 crc kubenswrapper[4861]: I0310 18:48:29.884471 4861 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 10 18:48:30 crc kubenswrapper[4861]: I0310 18:48:30.884792 4861 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 10 18:48:31 crc kubenswrapper[4861]: I0310 18:48:31.884092 4861 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 10 18:48:32 crc kubenswrapper[4861]: I0310 18:48:32.456332 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 10 18:48:32 crc kubenswrapper[4861]: I0310 18:48:32.456581 4861 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 10 18:48:32 crc kubenswrapper[4861]: I0310 18:48:32.458401 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Mar 10 18:48:32 crc kubenswrapper[4861]: I0310 18:48:32.458464 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:48:32 crc kubenswrapper[4861]: I0310 18:48:32.458491 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:48:32 crc kubenswrapper[4861]: I0310 18:48:32.463962 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 10 18:48:32 crc kubenswrapper[4861]: I0310 18:48:32.890078 4861 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 10 18:48:33 crc kubenswrapper[4861]: I0310 18:48:33.125781 4861 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 10 18:48:33 crc kubenswrapper[4861]: I0310 18:48:33.127087 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:48:33 crc kubenswrapper[4861]: I0310 18:48:33.127151 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:48:33 crc kubenswrapper[4861]: I0310 18:48:33.127164 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:48:33 crc kubenswrapper[4861]: I0310 18:48:33.127194 4861 kubelet_node_status.go:76] "Attempting to register node" node="crc" Mar 10 18:48:33 crc kubenswrapper[4861]: E0310 18:48:33.133385 4861 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Mar 10 18:48:33 crc 
kubenswrapper[4861]: E0310 18:48:33.167222 4861 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Mar 10 18:48:33 crc kubenswrapper[4861]: I0310 18:48:33.233749 4861 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 10 18:48:33 crc kubenswrapper[4861]: I0310 18:48:33.236857 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:48:33 crc kubenswrapper[4861]: I0310 18:48:33.236905 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:48:33 crc kubenswrapper[4861]: I0310 18:48:33.236923 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:48:33 crc kubenswrapper[4861]: I0310 18:48:33.885103 4861 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 10 18:48:34 crc kubenswrapper[4861]: I0310 18:48:34.887531 4861 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 10 18:48:35 crc kubenswrapper[4861]: I0310 18:48:35.883287 4861 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 10 18:48:36 crc kubenswrapper[4861]: I0310 18:48:36.882966 4861 csi_plugin.go:884] Failed to 
contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 10 18:48:36 crc kubenswrapper[4861]: I0310 18:48:36.957665 4861 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 10 18:48:36 crc kubenswrapper[4861]: I0310 18:48:36.958766 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:48:36 crc kubenswrapper[4861]: I0310 18:48:36.958840 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:48:36 crc kubenswrapper[4861]: I0310 18:48:36.958863 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:48:36 crc kubenswrapper[4861]: I0310 18:48:36.959516 4861 scope.go:117] "RemoveContainer" containerID="561ef20fa2da3317f89ee3f292d4d9d377cb20b48344389f8bd5976c93f3c33e" Mar 10 18:48:36 crc kubenswrapper[4861]: E0310 18:48:36.959746 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Mar 10 18:48:37 crc kubenswrapper[4861]: E0310 18:48:37.054701 4861 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Mar 10 18:48:37 crc kubenswrapper[4861]: I0310 18:48:37.884816 4861 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group 
"storage.k8s.io" at the cluster scope Mar 10 18:48:38 crc kubenswrapper[4861]: I0310 18:48:38.881980 4861 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 10 18:48:39 crc kubenswrapper[4861]: I0310 18:48:39.883598 4861 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 10 18:48:40 crc kubenswrapper[4861]: I0310 18:48:40.134429 4861 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 10 18:48:40 crc kubenswrapper[4861]: I0310 18:48:40.136059 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:48:40 crc kubenswrapper[4861]: I0310 18:48:40.136105 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:48:40 crc kubenswrapper[4861]: I0310 18:48:40.136121 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:48:40 crc kubenswrapper[4861]: I0310 18:48:40.136178 4861 kubelet_node_status.go:76] "Attempting to register node" node="crc" Mar 10 18:48:40 crc kubenswrapper[4861]: E0310 18:48:40.141108 4861 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Mar 10 18:48:40 crc kubenswrapper[4861]: E0310 18:48:40.174095 4861 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group 
\"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Mar 10 18:48:40 crc kubenswrapper[4861]: I0310 18:48:40.884563 4861 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 10 18:48:41 crc kubenswrapper[4861]: I0310 18:48:41.883111 4861 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 10 18:48:42 crc kubenswrapper[4861]: I0310 18:48:42.882017 4861 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 10 18:48:43 crc kubenswrapper[4861]: I0310 18:48:43.879537 4861 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 10 18:48:44 crc kubenswrapper[4861]: I0310 18:48:44.885242 4861 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 10 18:48:45 crc kubenswrapper[4861]: I0310 18:48:45.885258 4861 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 10 18:48:46 crc kubenswrapper[4861]: I0310 18:48:46.885805 4861 csi_plugin.go:884] Failed to contact 
API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 10 18:48:47 crc kubenswrapper[4861]: E0310 18:48:47.055069 4861 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Mar 10 18:48:47 crc kubenswrapper[4861]: I0310 18:48:47.141900 4861 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 10 18:48:47 crc kubenswrapper[4861]: I0310 18:48:47.143522 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:48:47 crc kubenswrapper[4861]: I0310 18:48:47.143560 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:48:47 crc kubenswrapper[4861]: I0310 18:48:47.143569 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:48:47 crc kubenswrapper[4861]: I0310 18:48:47.143597 4861 kubelet_node_status.go:76] "Attempting to register node" node="crc" Mar 10 18:48:47 crc kubenswrapper[4861]: E0310 18:48:47.148569 4861 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Mar 10 18:48:47 crc kubenswrapper[4861]: E0310 18:48:47.178843 4861 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Mar 10 18:48:47 crc kubenswrapper[4861]: I0310 18:48:47.882688 4861 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is 
forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 10 18:48:48 crc kubenswrapper[4861]: I0310 18:48:48.467099 4861 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Mar 10 18:48:48 crc kubenswrapper[4861]: I0310 18:48:48.490191 4861 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Mar 10 18:48:48 crc kubenswrapper[4861]: I0310 18:48:48.884315 4861 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 10 18:48:49 crc kubenswrapper[4861]: I0310 18:48:49.881664 4861 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 10 18:48:49 crc kubenswrapper[4861]: I0310 18:48:49.957095 4861 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 10 18:48:49 crc kubenswrapper[4861]: I0310 18:48:49.958558 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:48:49 crc kubenswrapper[4861]: I0310 18:48:49.958619 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:48:49 crc kubenswrapper[4861]: I0310 18:48:49.958638 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:48:49 crc kubenswrapper[4861]: I0310 18:48:49.959576 4861 scope.go:117] "RemoveContainer" containerID="561ef20fa2da3317f89ee3f292d4d9d377cb20b48344389f8bd5976c93f3c33e" Mar 10 18:48:49 crc kubenswrapper[4861]: W0310 
18:48:49.973088 4861 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Mar 10 18:48:49 crc kubenswrapper[4861]: E0310 18:48:49.973132 4861 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" Mar 10 18:48:50 crc kubenswrapper[4861]: I0310 18:48:50.283358 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/2.log" Mar 10 18:48:50 crc kubenswrapper[4861]: I0310 18:48:50.285983 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"1665ca49c2c451e187b70bfc13ce0034d2c07b92943b18e77ae09cd6e5505557"} Mar 10 18:48:50 crc kubenswrapper[4861]: I0310 18:48:50.286099 4861 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 10 18:48:50 crc kubenswrapper[4861]: I0310 18:48:50.288457 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:48:50 crc kubenswrapper[4861]: I0310 18:48:50.288511 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:48:50 crc kubenswrapper[4861]: I0310 18:48:50.288531 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:48:50 crc kubenswrapper[4861]: I0310 18:48:50.884149 4861 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io 
"crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 10 18:48:51 crc kubenswrapper[4861]: I0310 18:48:51.290798 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/3.log" Mar 10 18:48:51 crc kubenswrapper[4861]: I0310 18:48:51.291608 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/2.log" Mar 10 18:48:51 crc kubenswrapper[4861]: I0310 18:48:51.294087 4861 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="1665ca49c2c451e187b70bfc13ce0034d2c07b92943b18e77ae09cd6e5505557" exitCode=255 Mar 10 18:48:51 crc kubenswrapper[4861]: I0310 18:48:51.294143 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"1665ca49c2c451e187b70bfc13ce0034d2c07b92943b18e77ae09cd6e5505557"} Mar 10 18:48:51 crc kubenswrapper[4861]: I0310 18:48:51.294222 4861 scope.go:117] "RemoveContainer" containerID="561ef20fa2da3317f89ee3f292d4d9d377cb20b48344389f8bd5976c93f3c33e" Mar 10 18:48:51 crc kubenswrapper[4861]: I0310 18:48:51.294364 4861 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 10 18:48:51 crc kubenswrapper[4861]: I0310 18:48:51.295683 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:48:51 crc kubenswrapper[4861]: I0310 18:48:51.295809 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:48:51 crc kubenswrapper[4861]: I0310 18:48:51.295837 4861 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Mar 10 18:48:51 crc kubenswrapper[4861]: I0310 18:48:51.296938 4861 scope.go:117] "RemoveContainer" containerID="1665ca49c2c451e187b70bfc13ce0034d2c07b92943b18e77ae09cd6e5505557" Mar 10 18:48:51 crc kubenswrapper[4861]: E0310 18:48:51.297220 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Mar 10 18:48:51 crc kubenswrapper[4861]: I0310 18:48:51.343109 4861 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 10 18:48:51 crc kubenswrapper[4861]: I0310 18:48:51.597145 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 10 18:48:51 crc kubenswrapper[4861]: I0310 18:48:51.884682 4861 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 10 18:48:52 crc kubenswrapper[4861]: I0310 18:48:52.090308 4861 csr.go:261] certificate signing request csr-cmzgh is approved, waiting to be issued Mar 10 18:48:52 crc kubenswrapper[4861]: I0310 18:48:52.102785 4861 csr.go:257] certificate signing request csr-cmzgh is issued Mar 10 18:48:52 crc kubenswrapper[4861]: I0310 18:48:52.123336 4861 reconstruct.go:205] "DevicePaths of reconstructed volumes updated" Mar 10 18:48:52 crc kubenswrapper[4861]: I0310 18:48:52.299571 4861 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/3.log" Mar 10 18:48:52 crc kubenswrapper[4861]: I0310 18:48:52.302628 4861 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 10 18:48:52 crc kubenswrapper[4861]: I0310 18:48:52.307147 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:48:52 crc kubenswrapper[4861]: I0310 18:48:52.307227 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:48:52 crc kubenswrapper[4861]: I0310 18:48:52.307247 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:48:52 crc kubenswrapper[4861]: I0310 18:48:52.308675 4861 scope.go:117] "RemoveContainer" containerID="1665ca49c2c451e187b70bfc13ce0034d2c07b92943b18e77ae09cd6e5505557" Mar 10 18:48:52 crc kubenswrapper[4861]: E0310 18:48:52.309158 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Mar 10 18:48:52 crc kubenswrapper[4861]: I0310 18:48:52.748295 4861 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Mar 10 18:48:53 crc kubenswrapper[4861]: I0310 18:48:53.104574 4861 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2027-02-24 05:54:36 +0000 UTC, rotation deadline is 2026-12-01 05:35:50.295560167 +0000 UTC Mar 10 18:48:53 crc kubenswrapper[4861]: I0310 18:48:53.104626 4861 certificate_manager.go:356] 
kubernetes.io/kube-apiserver-client-kubelet: Waiting 6370h46m57.190939342s for next certificate rotation Mar 10 18:48:53 crc kubenswrapper[4861]: I0310 18:48:53.304921 4861 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 10 18:48:53 crc kubenswrapper[4861]: I0310 18:48:53.306213 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:48:53 crc kubenswrapper[4861]: I0310 18:48:53.306267 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:48:53 crc kubenswrapper[4861]: I0310 18:48:53.306285 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:48:53 crc kubenswrapper[4861]: I0310 18:48:53.307074 4861 scope.go:117] "RemoveContainer" containerID="1665ca49c2c451e187b70bfc13ce0034d2c07b92943b18e77ae09cd6e5505557" Mar 10 18:48:53 crc kubenswrapper[4861]: E0310 18:48:53.307340 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Mar 10 18:48:53 crc kubenswrapper[4861]: I0310 18:48:53.957557 4861 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 10 18:48:53 crc kubenswrapper[4861]: I0310 18:48:53.959221 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:48:53 crc kubenswrapper[4861]: I0310 18:48:53.959267 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:48:53 crc kubenswrapper[4861]: I0310 
18:48:53.959286 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:48:54 crc kubenswrapper[4861]: I0310 18:48:54.149742 4861 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 10 18:48:54 crc kubenswrapper[4861]: I0310 18:48:54.151632 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:48:54 crc kubenswrapper[4861]: I0310 18:48:54.151914 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:48:54 crc kubenswrapper[4861]: I0310 18:48:54.152070 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:48:54 crc kubenswrapper[4861]: I0310 18:48:54.152333 4861 kubelet_node_status.go:76] "Attempting to register node" node="crc" Mar 10 18:48:54 crc kubenswrapper[4861]: I0310 18:48:54.163585 4861 kubelet_node_status.go:115] "Node was previously registered" node="crc" Mar 10 18:48:54 crc kubenswrapper[4861]: I0310 18:48:54.163942 4861 kubelet_node_status.go:79] "Successfully registered node" node="crc" Mar 10 18:48:54 crc kubenswrapper[4861]: E0310 18:48:54.163992 4861 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": node \"crc\" not found" Mar 10 18:48:54 crc kubenswrapper[4861]: I0310 18:48:54.168990 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:48:54 crc kubenswrapper[4861]: I0310 18:48:54.169216 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:48:54 crc kubenswrapper[4861]: I0310 18:48:54.169394 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:48:54 crc kubenswrapper[4861]: I0310 18:48:54.169569 4861 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 10 18:48:54 crc kubenswrapper[4861]: I0310 18:48:54.169733 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:48:54Z","lastTransitionTime":"2026-03-10T18:48:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 10 18:48:54 crc kubenswrapper[4861]: E0310 18:48:54.189984 4861 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-10T18:48:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-10T18:48:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:54Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-10T18:48:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-10T18:48:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:54Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb617
3ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"reg
istry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"19532032-9073-404f-bda8-c4343aa30670\\\",\\\"systemUUID\\\":\\\"b4ef8d49-23f5-4cae-bbac-08586c607b9d\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 10 18:48:54 crc kubenswrapper[4861]: I0310 18:48:54.194514 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:48:54 crc kubenswrapper[4861]: I0310 18:48:54.194569 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:48:54 crc kubenswrapper[4861]: I0310 18:48:54.194588 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:48:54 crc kubenswrapper[4861]: I0310 18:48:54.194611 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 10 18:48:54 crc kubenswrapper[4861]: I0310 18:48:54.194629 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:48:54Z","lastTransitionTime":"2026-03-10T18:48:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 10 18:48:54 crc kubenswrapper[4861]: E0310 18:48:54.210538 4861 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-10T18:48:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-10T18:48:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:54Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-10T18:48:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-10T18:48:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:54Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"19532032-9073-404f-bda8-c4343aa30670\\\",\\\"systemUUID\\\":\\\"b4ef8d49-23f5-4cae-bbac-08586c607b9d\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 10 18:48:54 crc kubenswrapper[4861]: I0310 18:48:54.220770 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:48:54 crc kubenswrapper[4861]: I0310 18:48:54.220828 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:48:54 crc kubenswrapper[4861]: I0310 18:48:54.220846 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:48:54 crc kubenswrapper[4861]: I0310 18:48:54.220905 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 10 18:48:54 crc kubenswrapper[4861]: I0310 18:48:54.220937 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:48:54Z","lastTransitionTime":"2026-03-10T18:48:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 10 18:48:54 crc kubenswrapper[4861]: E0310 18:48:54.236007 4861 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-10T18:48:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-10T18:48:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:54Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-10T18:48:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-10T18:48:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:54Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"19532032-9073-404f-bda8-c4343aa30670\\\",\\\"systemUUID\\\":\\\"b4ef8d49-23f5-4cae-bbac-08586c607b9d\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 10 18:48:54 crc kubenswrapper[4861]: I0310 18:48:54.247530 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:48:54 crc kubenswrapper[4861]: I0310 18:48:54.247582 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:48:54 crc kubenswrapper[4861]: I0310 18:48:54.247598 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:48:54 crc kubenswrapper[4861]: I0310 18:48:54.247619 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 10 18:48:54 crc kubenswrapper[4861]: I0310 18:48:54.247637 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:48:54Z","lastTransitionTime":"2026-03-10T18:48:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 10 18:48:54 crc kubenswrapper[4861]: E0310 18:48:54.263097 4861 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-10T18:48:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-10T18:48:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:54Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-10T18:48:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-10T18:48:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:54Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"19532032-9073-404f-bda8-c4343aa30670\\\",\\\"systemUUID\\\":\\\"b4ef8d49-23f5-4cae-bbac-08586c607b9d\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 10 18:48:54 crc kubenswrapper[4861]: E0310 18:48:54.263387 4861 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Mar 10 18:48:54 crc kubenswrapper[4861]: E0310 18:48:54.263432 4861 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 10 18:48:54 crc kubenswrapper[4861]: E0310 18:48:54.364323 4861 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 10 18:48:54 crc kubenswrapper[4861]: E0310 18:48:54.465124 4861 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 10 18:48:54 crc kubenswrapper[4861]: E0310 18:48:54.565769 4861 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 10 18:48:54 crc kubenswrapper[4861]: E0310 18:48:54.666566 4861 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 10 18:48:54 crc kubenswrapper[4861]: E0310 18:48:54.767568 4861 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 10 18:48:54 crc kubenswrapper[4861]: E0310 18:48:54.868198 4861 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 10 18:48:54 crc kubenswrapper[4861]: E0310 18:48:54.969095 4861 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 10 18:48:55 crc kubenswrapper[4861]: E0310 18:48:55.069220 4861 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 10 18:48:55 crc kubenswrapper[4861]: I0310 18:48:55.078187 4861 
reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Mar 10 18:48:55 crc kubenswrapper[4861]: I0310 18:48:55.171738 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:48:55 crc kubenswrapper[4861]: I0310 18:48:55.171788 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:48:55 crc kubenswrapper[4861]: I0310 18:48:55.171805 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:48:55 crc kubenswrapper[4861]: I0310 18:48:55.171828 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 10 18:48:55 crc kubenswrapper[4861]: I0310 18:48:55.171845 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:48:55Z","lastTransitionTime":"2026-03-10T18:48:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 10 18:48:55 crc kubenswrapper[4861]: I0310 18:48:55.275063 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:48:55 crc kubenswrapper[4861]: I0310 18:48:55.275121 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:48:55 crc kubenswrapper[4861]: I0310 18:48:55.275137 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:48:55 crc kubenswrapper[4861]: I0310 18:48:55.275160 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 10 18:48:55 crc kubenswrapper[4861]: I0310 18:48:55.275180 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:48:55Z","lastTransitionTime":"2026-03-10T18:48:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 10 18:48:55 crc kubenswrapper[4861]: I0310 18:48:55.378110 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:48:55 crc kubenswrapper[4861]: I0310 18:48:55.378163 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:48:55 crc kubenswrapper[4861]: I0310 18:48:55.378179 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:48:55 crc kubenswrapper[4861]: I0310 18:48:55.378200 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 10 18:48:55 crc kubenswrapper[4861]: I0310 18:48:55.378222 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:48:55Z","lastTransitionTime":"2026-03-10T18:48:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 10 18:48:55 crc kubenswrapper[4861]: I0310 18:48:55.480812 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:48:55 crc kubenswrapper[4861]: I0310 18:48:55.480861 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:48:55 crc kubenswrapper[4861]: I0310 18:48:55.480878 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:48:55 crc kubenswrapper[4861]: I0310 18:48:55.480898 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 10 18:48:55 crc kubenswrapper[4861]: I0310 18:48:55.480914 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:48:55Z","lastTransitionTime":"2026-03-10T18:48:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 10 18:48:55 crc kubenswrapper[4861]: I0310 18:48:55.583413 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:48:55 crc kubenswrapper[4861]: I0310 18:48:55.583772 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:48:55 crc kubenswrapper[4861]: I0310 18:48:55.583969 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:48:55 crc kubenswrapper[4861]: I0310 18:48:55.584139 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 10 18:48:55 crc kubenswrapper[4861]: I0310 18:48:55.584298 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:48:55Z","lastTransitionTime":"2026-03-10T18:48:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 10 18:48:55 crc kubenswrapper[4861]: I0310 18:48:55.686525 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:48:55 crc kubenswrapper[4861]: I0310 18:48:55.686992 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:48:55 crc kubenswrapper[4861]: I0310 18:48:55.687163 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:48:55 crc kubenswrapper[4861]: I0310 18:48:55.687303 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 10 18:48:55 crc kubenswrapper[4861]: I0310 18:48:55.687430 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:48:55Z","lastTransitionTime":"2026-03-10T18:48:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 10 18:48:55 crc kubenswrapper[4861]: I0310 18:48:55.790674 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:48:55 crc kubenswrapper[4861]: I0310 18:48:55.791089 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:48:55 crc kubenswrapper[4861]: I0310 18:48:55.791275 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:48:55 crc kubenswrapper[4861]: I0310 18:48:55.791431 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 10 18:48:55 crc kubenswrapper[4861]: I0310 18:48:55.791595 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:48:55Z","lastTransitionTime":"2026-03-10T18:48:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 10 18:48:55 crc kubenswrapper[4861]: I0310 18:48:55.894480 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:48:55 crc kubenswrapper[4861]: I0310 18:48:55.894535 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:48:55 crc kubenswrapper[4861]: I0310 18:48:55.894552 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:48:55 crc kubenswrapper[4861]: I0310 18:48:55.894573 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 10 18:48:55 crc kubenswrapper[4861]: I0310 18:48:55.894589 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:48:55Z","lastTransitionTime":"2026-03-10T18:48:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 10 18:48:55 crc kubenswrapper[4861]: I0310 18:48:55.911781 4861 apiserver.go:52] "Watching apiserver" Mar 10 18:48:55 crc kubenswrapper[4861]: I0310 18:48:55.919232 4861 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Mar 10 18:48:55 crc kubenswrapper[4861]: I0310 18:48:55.919481 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-diagnostics/network-check-target-xd92c","openshift-network-node-identity/network-node-identity-vrzqb","openshift-network-operator/iptables-alerter-4ln5h","openshift-network-operator/network-operator-58b4c7f79c-55gtf","openshift-network-console/networking-console-plugin-85b44fc459-gdk6g","openshift-network-diagnostics/network-check-source-55646444c4-trplf"] Mar 10 18:48:55 crc kubenswrapper[4861]: I0310 18:48:55.919978 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Mar 10 18:48:55 crc kubenswrapper[4861]: I0310 18:48:55.920067 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 10 18:48:55 crc kubenswrapper[4861]: I0310 18:48:55.920187 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 10 18:48:55 crc kubenswrapper[4861]: I0310 18:48:55.920325 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Mar 10 18:48:55 crc kubenswrapper[4861]: E0310 18:48:55.920389 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 10 18:48:55 crc kubenswrapper[4861]: E0310 18:48:55.920560 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 10 18:48:55 crc kubenswrapper[4861]: I0310 18:48:55.920644 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Mar 10 18:48:55 crc kubenswrapper[4861]: I0310 18:48:55.922297 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 10 18:48:55 crc kubenswrapper[4861]: E0310 18:48:55.922404 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 10 18:48:55 crc kubenswrapper[4861]: I0310 18:48:55.922889 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Mar 10 18:48:55 crc kubenswrapper[4861]: I0310 18:48:55.923663 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Mar 10 18:48:55 crc kubenswrapper[4861]: I0310 18:48:55.925238 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Mar 10 18:48:55 crc kubenswrapper[4861]: I0310 18:48:55.925247 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Mar 10 18:48:55 crc kubenswrapper[4861]: I0310 18:48:55.926071 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Mar 10 18:48:55 crc kubenswrapper[4861]: I0310 18:48:55.926356 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Mar 10 18:48:55 crc kubenswrapper[4861]: I0310 18:48:55.926124 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Mar 10 18:48:55 crc kubenswrapper[4861]: I0310 18:48:55.926537 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Mar 10 18:48:55 crc kubenswrapper[4861]: I0310 18:48:55.927047 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Mar 10 18:48:55 crc kubenswrapper[4861]: I0310 18:48:55.969902 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 10 18:48:55 crc kubenswrapper[4861]: I0310 18:48:55.980434 4861 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Mar 10 18:48:55 crc kubenswrapper[4861]: I0310 18:48:55.990192 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.000934 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.001016 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.001036 4861 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.001064 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.001092 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:48:56Z","lastTransitionTime":"2026-03-10T18:48:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.013918 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.029908 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.045629 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.054063 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.054124 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.054164 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.054198 4861 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.054244 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.054279 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.054311 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.054342 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.054373 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: 
\"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.054406 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.054450 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.054502 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.054552 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.054587 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.054619 4861 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.054651 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.054703 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.054762 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.054798 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.054832 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: 
\"6402fda4-df10-493c-b4e5-d0569419652d\") " Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.054865 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.054900 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.054907 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.054933 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.055062 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.055119 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.055171 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.055317 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.055327 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.055373 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.055424 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.055473 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.055528 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.055587 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: 
\"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.055639 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.055688 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.055777 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.055935 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.056190 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: 
\"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.056251 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.056303 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.056662 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.056766 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") " Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.056822 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.056925 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.057025 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.057075 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.057138 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.057185 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.057236 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod 
\"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.057290 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.057381 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.057432 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.057480 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.057532 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.057581 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.057631 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.058033 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.058457 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.058509 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.058778 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod 
\"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.058827 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.058881 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.058934 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.058983 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.059038 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.059086 4861 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.059135 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.059184 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") " Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.059230 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.059277 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.059326 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod 
\"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.059550 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.059878 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.059946 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.060050 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.060116 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.060166 4861 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.060229 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.060287 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.060337 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.060391 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.060438 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.060489 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.060537 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.060583 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.060626 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.060770 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.060827 4861 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.055419 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.055490 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.055836 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.056021 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.056120 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.056432 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.056601 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.056876 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.062431 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.062496 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.057045 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.057262 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.057489 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.057678 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.058783 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.058799 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.059281 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.059870 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.060129 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.060690 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.060937 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.061677 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.061793 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.062557 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.062627 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.061822 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.062369 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.062691 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.061991 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.062773 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.062822 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.062862 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.062898 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.062933 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.062966 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: 
\"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.063000 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.063034 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.063067 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.063101 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.063138 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Mar 10 18:48:56 crc 
kubenswrapper[4861]: I0310 18:48:56.063096 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.063169 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.063202 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.063236 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: 
\"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.063270 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.063303 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.064324 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.064377 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.064413 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.064449 4861 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.064483 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.064520 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.064552 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.064588 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.064623 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: 
\"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.062036 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.064659 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.064699 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.064766 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.064800 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.064836 4861 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.064870 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.064908 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.064941 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.065088 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.065129 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod 
\"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.065163 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.065200 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.065233 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.065379 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.065417 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.065454 4861 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.065503 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.065554 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.065588 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.065622 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.065654 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" 
(UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.065691 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") " Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.065750 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.065783 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.065899 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.065935 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.065969 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.066005 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.066333 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.066376 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.066411 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.066451 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: 
\"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.066488 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.066533 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.066567 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.066612 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.062208 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.062389 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.057033 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.062523 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.062583 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.062795 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.062917 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.062061 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.063285 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.063434 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.063489 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.063687 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.063926 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.063956 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.064364 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.064404 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.064612 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.064979 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.065270 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.065300 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.065547 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.065918 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.065954 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.066194 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.066221 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.066257 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.066382 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.066606 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.066750 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.067154 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.067643 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.069101 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.068083 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.068259 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.068244 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.068419 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.069183 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.069209 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.069253 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.069436 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.069448 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.069572 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.069630 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.069597 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.069779 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.069999 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.070010 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.070151 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.070648 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.070780 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.070893 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.070397 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.071664 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.071939 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.071951 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.072237 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.072401 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.072558 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.073016 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 10 18:48:56 crc kubenswrapper[4861]: E0310 18:48:56.073213 4861 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-10 18:48:56.573183973 +0000 UTC m=+80.336619963 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.073355 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.073383 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.074346 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.075218 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.075602 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.075958 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.075972 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.076036 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.076114 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.076204 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.076279 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.076369 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.076377 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.076482 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.076900 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.076940 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.076993 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.077037 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.076819 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.077377 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.077380 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.077458 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.077495 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.077683 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.077744 4861 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.077828 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.077844 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.077865 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.078005 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.078044 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.078077 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.078109 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.078234 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: 
\"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.078285 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.078333 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.078367 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") " Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.078403 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.078438 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Mar 10 18:48:56 crc 
kubenswrapper[4861]: I0310 18:48:56.078469 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.078504 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.078536 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.078572 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.078609 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.078643 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: 
\"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.078676 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.078740 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.078773 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.078806 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.078843 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Mar 10 
18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.078876 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.078908 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.078941 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.079134 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.079175 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.079379 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.079449 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.079498 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.079542 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.079575 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.079673 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.079747 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.079779 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.079818 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.079851 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.079886 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.079924 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.079956 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.079990 4861 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.080034 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.080183 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.080252 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.080296 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.080337 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.080371 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.080407 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.080447 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.080482 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.080517 4861 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.080552 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.080588 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.080622 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.080657 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " 
pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.080692 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.080764 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.080872 4861 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.080895 4861 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.080916 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\"" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.080939 4861 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\"" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.080958 4861 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\"" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.080977 4861 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.080998 4861 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\"" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.081015 4861 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\"" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.081034 4861 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.081053 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\"" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.081074 4861 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Mar 10 18:48:56 crc 
kubenswrapper[4861]: I0310 18:48:56.081094 4861 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\"" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.079494 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.079501 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.079816 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.080203 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.080330 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.081036 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.080978 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.081156 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.081551 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.081825 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\"" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.081833 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.082287 4861 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.082322 4861 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\"" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.082345 4861 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\"" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.082366 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.082388 4861 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.082410 4861 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.082431 4861 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Mar 10 18:48:56 crc 
kubenswrapper[4861]: I0310 18:48:56.082452 4861 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\"" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.082475 4861 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.082496 4861 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.082517 4861 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\"" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.082538 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\"" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.082556 4861 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\"" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.082574 4861 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.082594 4861 reconciler_common.go:293] "Volume detached for volume 
\"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.082615 4861 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.082637 4861 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.082640 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.082664 4861 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.082693 4861 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.082752 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\"" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.082781 4861 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\"" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.082802 4861 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\"" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.082820 4861 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\"" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.082837 4861 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\"" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.082896 4861 
reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.082916 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\"" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.082934 4861 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\"" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.082956 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\"" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.082977 4861 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.082994 4861 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\"" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.083012 4861 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\"" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.083030 4861 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: 
\"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.083048 4861 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\"" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.083067 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\"" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.083085 4861 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\"" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.083150 4861 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\"" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.083188 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.083171 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\"" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.083236 4861 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.083765 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\"" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.083808 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\"" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.083828 4861 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.083854 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\"" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.083931 4861 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" 
DevicePath \"\"" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.083971 4861 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\"" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.083970 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.083997 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\"" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.084027 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\"" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.084056 4861 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.084086 4861 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\"" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.084118 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" 
(UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\"" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.084144 4861 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\"" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.084169 4861 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.084195 4861 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.084221 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\"" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.084249 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\"" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.084273 4861 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.084292 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: 
\"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\"" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.084310 4861 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\"" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.084328 4861 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.084347 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\"" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.084366 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\"" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.084385 4861 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.084404 4861 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\"" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.084412 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" 
(OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.084422 4861 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.084487 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.084510 4861 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.085076 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.085998 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.085999 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.086062 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.086587 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.087356 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.087753 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.087806 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.088165 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.088416 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.089577 4861 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Mar 10 18:48:56 crc kubenswrapper[4861]: E0310 18:48:56.091494 4861 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.091597 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\"" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.091639 4861 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.091666 4861 
reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\"" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.091692 4861 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Mar 10 18:48:56 crc kubenswrapper[4861]: E0310 18:48:56.091691 4861 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.091808 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\"" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.091836 4861 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.091862 4861 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.091887 4861 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.091909 4861 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: 
\"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\"" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.091933 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\"" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.091958 4861 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.091981 4861 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\"" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.092002 4861 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\"" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.092026 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\"" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.092049 4861 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.092071 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") 
on node \"crc\" DevicePath \"\"" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.092095 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\"" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.092117 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.092140 4861 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\"" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.092163 4861 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 10 18:48:56 crc kubenswrapper[4861]: E0310 18:48:56.092201 4861 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-03-10 18:48:56.592173776 +0000 UTC m=+80.355609776 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.092224 4861 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 10 18:48:56 crc kubenswrapper[4861]: E0310 18:48:56.092249 4861 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-03-10 18:48:56.592235657 +0000 UTC m=+80.355671647 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.092271 4861 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\"" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.092291 4861 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\"" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.092310 4861 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: 
\"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\"" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.092328 4861 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.092349 4861 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.092368 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.092386 4861 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.092403 4861 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\"" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.092422 4861 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.092440 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath 
\"\"" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.092459 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.092477 4861 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.092561 4861 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\"" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.092562 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.092579 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.092581 4861 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\"" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.092640 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\"" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.093071 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.093234 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.093473 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.093780 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.094537 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.094573 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.096648 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.099808 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.100307 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.101409 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.104801 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.104974 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.105695 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.107220 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.107262 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.107279 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.107304 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.107321 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:48:56Z","lastTransitionTime":"2026-03-10T18:48:56Z","reason":"KubeletNotReady","message":"container runtime network not 
ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.107494 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.108221 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.110885 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.111001 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.111055 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.111249 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.111670 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.111772 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.111895 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.111735 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.112181 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.112234 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.112655 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.114232 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.114258 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 10 18:48:56 crc kubenswrapper[4861]: E0310 18:48:56.114957 4861 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 10 18:48:56 crc kubenswrapper[4861]: E0310 18:48:56.114995 4861 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 10 18:48:56 crc kubenswrapper[4861]: E0310 18:48:56.115021 4861 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 10 18:48:56 crc kubenswrapper[4861]: E0310 18:48:56.115163 4861 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-03-10 18:48:56.615136525 +0000 UTC m=+80.378572525 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.116070 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.116116 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.116580 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.116803 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.116885 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.116951 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 10 18:48:56 crc kubenswrapper[4861]: E0310 18:48:56.118243 4861 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 10 18:48:56 crc kubenswrapper[4861]: E0310 18:48:56.118433 4861 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 10 18:48:56 crc kubenswrapper[4861]: E0310 18:48:56.118560 4861 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 10 18:48:56 crc kubenswrapper[4861]: E0310 18:48:56.118802 4861 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-03-10 18:48:56.618772866 +0000 UTC m=+80.382208876 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.120430 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.120671 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.120961 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.121035 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.121366 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.121777 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.122124 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.122409 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.122562 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.126414 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.127262 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.127408 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.127822 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.128840 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.129174 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.129292 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.129530 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.130301 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.130643 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.130678 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.131678 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.132023 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.132268 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.132783 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.133010 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.133190 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.134655 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.143318 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.145304 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.145895 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.164949 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.193438 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.193530 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.193574 4861 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\"" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.193577 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.193589 4861 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\"" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.193635 4861 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" 
DevicePath \"\"" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.193650 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\"" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.193663 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\"" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.193677 4861 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.193689 4861 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\"" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.193703 4861 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\"" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.193736 4861 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\"" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.193740 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " 
pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.193748 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\"" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.193781 4861 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\"" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.193795 4861 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.193852 4861 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\"" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.193864 4861 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\"" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.193875 4861 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\"" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.193886 4861 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Mar 10 18:48:56 crc 
kubenswrapper[4861]: I0310 18:48:56.193897 4861 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\"" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.193908 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\"" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.193919 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\"" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.193933 4861 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.193945 4861 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\"" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.193957 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\"" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.193968 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\"" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.193979 4861 
reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\"" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.193989 4861 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\"" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.193999 4861 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\"" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.194010 4861 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\"" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.194023 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\"" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.194034 4861 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.194044 4861 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.194055 4861 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: 
\"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\"" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.194066 4861 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.194080 4861 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.194115 4861 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\"" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.194127 4861 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\"" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.194138 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\"" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.194149 4861 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\"" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.194159 4861 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\"" 
Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.194171 4861 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\"" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.194181 4861 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\"" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.194191 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\"" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.194202 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\"" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.194214 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\"" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.194225 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\"" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.194235 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\"" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.194246 4861 
reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.194259 4861 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\"" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.194270 4861 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\"" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.194281 4861 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.194292 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\"" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.194304 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\"" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.194325 4861 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.194338 4861 reconciler_common.go:293] "Volume 
detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\"" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.194349 4861 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.194360 4861 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.194371 4861 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\"" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.194381 4861 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\"" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.194392 4861 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.194405 4861 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\"" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.194416 4861 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node 
\"crc\" DevicePath \"\"" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.194427 4861 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\"" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.194438 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\"" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.194448 4861 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\"" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.194460 4861 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.194476 4861 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.194488 4861 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.194500 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\"" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.194512 4861 
reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.194522 4861 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.194534 4861 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.194546 4861 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.194557 4861 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.194568 4861 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.194580 4861 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.194593 4861 reconciler_common.go:293] "Volume detached for volume \"config\" 
(UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\"" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.194605 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\"" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.194618 4861 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.194632 4861 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\"" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.194644 4861 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.194657 4861 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.194668 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\"" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.210615 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 
18:48:56.210652 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.210666 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.210683 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.210697 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:48:56Z","lastTransitionTime":"2026-03-10T18:48:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.245138 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.258246 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Mar 10 18:48:56 crc kubenswrapper[4861]: E0310 18:48:56.269281 4861 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 10 18:48:56 crc kubenswrapper[4861]: container &Container{Name:network-operator,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,Command:[/bin/bash -c #!/bin/bash Mar 10 18:48:56 crc kubenswrapper[4861]: set -o allexport Mar 10 18:48:56 crc kubenswrapper[4861]: if [[ -f /etc/kubernetes/apiserver-url.env ]]; then Mar 10 18:48:56 crc kubenswrapper[4861]: source /etc/kubernetes/apiserver-url.env Mar 10 18:48:56 crc kubenswrapper[4861]: else Mar 10 18:48:56 crc kubenswrapper[4861]: echo "Error: /etc/kubernetes/apiserver-url.env is missing" Mar 10 18:48:56 crc kubenswrapper[4861]: exit 1 Mar 10 18:48:56 crc kubenswrapper[4861]: fi Mar 10 18:48:56 crc kubenswrapper[4861]: exec /usr/bin/cluster-network-operator start --listen=0.0.0.0:9104 Mar 10 18:48:56 crc kubenswrapper[4861]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:cno,HostPort:9104,ContainerPort:9104,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:RELEASE_VERSION,Value:4.18.1,ValueFrom:nil,},EnvVar{Name:KUBE_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b97554198294bf544fbc116c94a0a1fb2ec8a4de0e926bf9d9e320135f0bee6f,ValueFrom:nil,},EnvVar{Name:KUBE_RBAC_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09,ValueFrom:nil,},EnvVar{Name:MULTUS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26,ValueFrom:nil,},EnvVar{Name:MULTUS_ADMISSION_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317,ValueFrom:nil,},EnvVar{Name:CNI_PLUGINS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc,ValueFrom:nil,},EnvVar{Name:BOND_CNI_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78,ValueFrom:nil,},EnvVar{Name:WHEREABOUTS_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4,ValueFrom:nil,},EnvVar{Name:ROUTE_OVERRRIDE_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa,ValueFrom:nil,},EnvVar{Name:MULTUS_NETWORKPOLICY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:23f833d3738d68706eb2f2868bd76bd71cee016cffa6faf5f045a60cc8c6eddd,ValueFrom:nil,},EnvVar{Name:OVN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,ValueFrom:nil,},EnvVar{Name:OVN_NB_RAFT_ELECTION_TIMER,Value:10,ValueFrom:nil,},
EnvVar{Name:OVN_SB_RAFT_ELECTION_TIMER,Value:16,ValueFrom:nil,},EnvVar{Name:OVN_NORTHD_PROBE_INTERVAL,Value:10000,ValueFrom:nil,},EnvVar{Name:OVN_CONTROLLER_INACTIVITY_PROBE,Value:180000,ValueFrom:nil,},EnvVar{Name:OVN_NB_INACTIVITY_PROBE,Value:60000,ValueFrom:nil,},EnvVar{Name:EGRESS_ROUTER_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c,ValueFrom:nil,},EnvVar{Name:NETWORK_METRICS_DAEMON_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_SOURCE_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_TARGET_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,ValueFrom:nil,},EnvVar{Name:NETWORK_OPERATOR_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,ValueFrom:nil,},EnvVar{Name:CLOUD_NETWORK_CONFIG_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8048f1cb0be521f09749c0a489503cd56d85b68c6ca93380e082cfd693cd97a8,ValueFrom:nil,},EnvVar{Name:CLI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2,ValueFrom:nil,},EnvVar{Name:FRR_K8S_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5dbf844e49bb46b78586930149e5e5f5dc121014c8afd10fe36f3651967cc256,ValueFrom:nil,},EnvVar{Name:NETWORKING_CONSOLE_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd,ValueFrom:nil,},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFi
eldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:host-etc-kube,ReadOnly:true,MountPath:/etc/kubernetes,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:metrics-tls,ReadOnly:false,MountPath:/var/run/secrets/serving-cert,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rdwmf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-operator-58b4c7f79c-55gtf_openshift-network-operator(37a5e44f-9a88-4405-be8a-b645485e7312): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Mar 10 18:48:56 crc kubenswrapper[4861]: > logger="UnhandledError" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.270930 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Mar 10 18:48:56 crc kubenswrapper[4861]: E0310 18:48:56.271004 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"network-operator\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" podUID="37a5e44f-9a88-4405-be8a-b645485e7312" Mar 10 18:48:56 crc kubenswrapper[4861]: W0310 18:48:56.279074 4861 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podef543e1b_8068_4ea3_b32a_61027b32e95d.slice/crio-8ffe793e7ed41f21066c16d29f91789c33d29a864daf7b7f8faa74c9dd99ebd6 WatchSource:0}: Error finding container 8ffe793e7ed41f21066c16d29f91789c33d29a864daf7b7f8faa74c9dd99ebd6: Status 404 returned error can't find the container with id 8ffe793e7ed41f21066c16d29f91789c33d29a864daf7b7f8faa74c9dd99ebd6 Mar 10 18:48:56 crc kubenswrapper[4861]: E0310 18:48:56.285128 4861 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 10 18:48:56 crc kubenswrapper[4861]: container &Container{Name:webhook,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,Command:[/bin/bash -c set -xe Mar 10 18:48:56 crc kubenswrapper[4861]: if [[ -f "/env/_master" ]]; then Mar 10 18:48:56 crc kubenswrapper[4861]: set -o allexport Mar 10 18:48:56 crc kubenswrapper[4861]: source "/env/_master" Mar 10 18:48:56 crc kubenswrapper[4861]: set +o allexport Mar 10 18:48:56 crc kubenswrapper[4861]: fi Mar 10 18:48:56 crc kubenswrapper[4861]: # OVN-K will try to remove hybrid overlay node annotations even when the hybrid overlay is not enabled. 
Mar 10 18:48:56 crc kubenswrapper[4861]: # https://github.com/ovn-org/ovn-kubernetes/blob/ac6820df0b338a246f10f412cd5ec903bd234694/go-controller/pkg/ovn/master.go#L791 Mar 10 18:48:56 crc kubenswrapper[4861]: ho_enable="--enable-hybrid-overlay" Mar 10 18:48:56 crc kubenswrapper[4861]: echo "I$(date "+%m%d %H:%M:%S.%N") - network-node-identity - start webhook" Mar 10 18:48:56 crc kubenswrapper[4861]: # extra-allowed-user: service account `ovn-kubernetes-control-plane` Mar 10 18:48:56 crc kubenswrapper[4861]: # sets pod annotations in multi-homing layer3 network controller (cluster-manager) Mar 10 18:48:56 crc kubenswrapper[4861]: exec /usr/bin/ovnkube-identity --k8s-apiserver=https://api-int.crc.testing:6443 \ Mar 10 18:48:56 crc kubenswrapper[4861]: --webhook-cert-dir="/etc/webhook-cert" \ Mar 10 18:48:56 crc kubenswrapper[4861]: --webhook-host=127.0.0.1 \ Mar 10 18:48:56 crc kubenswrapper[4861]: --webhook-port=9743 \ Mar 10 18:48:56 crc kubenswrapper[4861]: ${ho_enable} \ Mar 10 18:48:56 crc kubenswrapper[4861]: --enable-interconnect \ Mar 10 18:48:56 crc kubenswrapper[4861]: --disable-approver \ Mar 10 18:48:56 crc kubenswrapper[4861]: --extra-allowed-user="system:serviceaccount:openshift-ovn-kubernetes:ovn-kubernetes-control-plane" \ Mar 10 18:48:56 crc kubenswrapper[4861]: --wait-for-kubernetes-api=200s \ Mar 10 18:48:56 crc kubenswrapper[4861]: --pod-admission-conditions="/var/run/ovnkube-identity-config/additional-pod-admission-cond.json" \ Mar 10 18:48:56 crc kubenswrapper[4861]: --loglevel="${LOGLEVEL}" Mar 10 18:48:56 crc kubenswrapper[4861]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOGLEVEL,Value:2,ValueFrom:nil,},EnvVar{Name:KUBERNETES_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: 
{{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:webhook-cert,ReadOnly:false,MountPath:/etc/webhook-cert/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovnkube-identity-cm,ReadOnly:false,MountPath:/var/run/ovnkube-identity-config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s2kz5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000470000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-node-identity-vrzqb_openshift-network-node-identity(ef543e1b-8068-4ea3-b32a-61027b32e95d): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Mar 10 18:48:56 crc kubenswrapper[4861]: > logger="UnhandledError" Mar 10 18:48:56 crc kubenswrapper[4861]: E0310 18:48:56.291652 4861 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 10 18:48:56 crc kubenswrapper[4861]: container &Container{Name:approver,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,Command:[/bin/bash -c set -xe Mar 10 18:48:56 crc 
kubenswrapper[4861]: if [[ -f "/env/_master" ]]; then Mar 10 18:48:56 crc kubenswrapper[4861]: set -o allexport Mar 10 18:48:56 crc kubenswrapper[4861]: source "/env/_master" Mar 10 18:48:56 crc kubenswrapper[4861]: set +o allexport Mar 10 18:48:56 crc kubenswrapper[4861]: fi Mar 10 18:48:56 crc kubenswrapper[4861]: Mar 10 18:48:56 crc kubenswrapper[4861]: echo "I$(date "+%m%d %H:%M:%S.%N") - network-node-identity - start approver" Mar 10 18:48:56 crc kubenswrapper[4861]: exec /usr/bin/ovnkube-identity --k8s-apiserver=https://api-int.crc.testing:6443 \ Mar 10 18:48:56 crc kubenswrapper[4861]: --disable-webhook \ Mar 10 18:48:56 crc kubenswrapper[4861]: --csr-acceptance-conditions="/var/run/ovnkube-identity-config/additional-cert-acceptance-cond.json" \ Mar 10 18:48:56 crc kubenswrapper[4861]: --loglevel="${LOGLEVEL}" Mar 10 18:48:56 crc kubenswrapper[4861]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOGLEVEL,Value:4,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovnkube-identity-cm,ReadOnly:false,MountPath:/var/run/ovnkube-identity-config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s2kz5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000470000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-node-identity-vrzqb_openshift-network-node-identity(ef543e1b-8068-4ea3-b32a-61027b32e95d): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Mar 10 18:48:56 crc kubenswrapper[4861]: > logger="UnhandledError" Mar 10 18:48:56 crc kubenswrapper[4861]: E0310 18:48:56.292870 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"webhook\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"approver\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-network-node-identity/network-node-identity-vrzqb" 
podUID="ef543e1b-8068-4ea3-b32a-61027b32e95d" Mar 10 18:48:56 crc kubenswrapper[4861]: W0310 18:48:56.294998 4861 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd75a4c96_2883_4a0b_bab2_0fab2b6c0b49.slice/crio-609b7457003d9245427d9fe9c403a0bd9d05e3660d732df085c5b90b8b3f94c7 WatchSource:0}: Error finding container 609b7457003d9245427d9fe9c403a0bd9d05e3660d732df085c5b90b8b3f94c7: Status 404 returned error can't find the container with id 609b7457003d9245427d9fe9c403a0bd9d05e3660d732df085c5b90b8b3f94c7 Mar 10 18:48:56 crc kubenswrapper[4861]: E0310 18:48:56.299206 4861 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:iptables-alerter,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2,Command:[/iptables-alerter/iptables-alerter.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONTAINER_RUNTIME_ENDPOINT,Value:unix:///run/crio/crio.sock,ValueFrom:nil,},EnvVar{Name:ALERTER_POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{68157440 0} {} 65Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:iptables-alerter-script,ReadOnly:false,MountPath:/iptables-alerter,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-slash,ReadOnly:true,MountPath:/host,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rczfb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod iptables-alerter-4ln5h_openshift-network-operator(d75a4c96-2883-4a0b-bab2-0fab2b6c0b49): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Mar 10 18:48:56 crc kubenswrapper[4861]: E0310 18:48:56.300454 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"iptables-alerter\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-network-operator/iptables-alerter-4ln5h" podUID="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.312618 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.312671 
4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.312689 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.312740 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.312757 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:48:56Z","lastTransitionTime":"2026-03-10T18:48:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.314040 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"10c846defffab60dbbace8a270d5d40ee5aaa5792117d752d8c2b46910fedd02"} Mar 10 18:48:56 crc kubenswrapper[4861]: E0310 18:48:56.316524 4861 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 10 18:48:56 crc kubenswrapper[4861]: container &Container{Name:network-operator,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,Command:[/bin/bash -c #!/bin/bash Mar 10 18:48:56 crc kubenswrapper[4861]: set -o allexport Mar 10 18:48:56 crc kubenswrapper[4861]: if [[ -f /etc/kubernetes/apiserver-url.env ]]; then Mar 10 18:48:56 crc kubenswrapper[4861]: source /etc/kubernetes/apiserver-url.env Mar 10 18:48:56 crc kubenswrapper[4861]: else Mar 10 18:48:56 crc kubenswrapper[4861]: echo "Error: 
/etc/kubernetes/apiserver-url.env is missing" Mar 10 18:48:56 crc kubenswrapper[4861]: exit 1 Mar 10 18:48:56 crc kubenswrapper[4861]: fi Mar 10 18:48:56 crc kubenswrapper[4861]: exec /usr/bin/cluster-network-operator start --listen=0.0.0.0:9104 Mar 10 18:48:56 crc kubenswrapper[4861]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:cno,HostPort:9104,ContainerPort:9104,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:RELEASE_VERSION,Value:4.18.1,ValueFrom:nil,},EnvVar{Name:KUBE_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b97554198294bf544fbc116c94a0a1fb2ec8a4de0e926bf9d9e320135f0bee6f,ValueFrom:nil,},EnvVar{Name:KUBE_RBAC_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09,ValueFrom:nil,},EnvVar{Name:MULTUS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26,ValueFrom:nil,},EnvVar{Name:MULTUS_ADMISSION_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317,ValueFrom:nil,},EnvVar{Name:CNI_PLUGINS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc,ValueFrom:nil,},EnvVar{Name:BOND_CNI_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78,ValueFrom:nil,},EnvVar{Name:WHEREABOUTS_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4,ValueFrom:nil,},EnvVar{Name:ROUTE_OVERRRIDE_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa,ValueFrom:nil,},EnvVar{Name:MULTUS_NETWORKPOLICY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:23f833d3738d68706eb2f
2868bd76bd71cee016cffa6faf5f045a60cc8c6eddd,ValueFrom:nil,},EnvVar{Name:OVN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,ValueFrom:nil,},EnvVar{Name:OVN_NB_RAFT_ELECTION_TIMER,Value:10,ValueFrom:nil,},EnvVar{Name:OVN_SB_RAFT_ELECTION_TIMER,Value:16,ValueFrom:nil,},EnvVar{Name:OVN_NORTHD_PROBE_INTERVAL,Value:10000,ValueFrom:nil,},EnvVar{Name:OVN_CONTROLLER_INACTIVITY_PROBE,Value:180000,ValueFrom:nil,},EnvVar{Name:OVN_NB_INACTIVITY_PROBE,Value:60000,ValueFrom:nil,},EnvVar{Name:EGRESS_ROUTER_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c,ValueFrom:nil,},EnvVar{Name:NETWORK_METRICS_DAEMON_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_SOURCE_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_TARGET_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,ValueFrom:nil,},EnvVar{Name:NETWORK_OPERATOR_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,ValueFrom:nil,},EnvVar{Name:CLOUD_NETWORK_CONFIG_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8048f1cb0be521f09749c0a489503cd56d85b68c6ca93380e082cfd693cd97a8,ValueFrom:nil,},EnvVar{Name:CLI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2,ValueFrom:nil,},EnvVar{Name:FRR_K8S_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5dbf844e49bb46b78586930149e5e5f5dc121014c8afd10fe36f3651967cc256,ValueFrom:nil,},EnvVar{Name:NETWORKING_CONSOLE
_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd,ValueFrom:nil,},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:host-etc-kube,ReadOnly:true,MountPath:/etc/kubernetes,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:metrics-tls,ReadOnly:false,MountPath:/var/run/secrets/serving-cert,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rdwmf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-operator-58b4c7f79c-55gtf_openshift-network-operator(37a5e44f-9a88-4405-be8a-b645485e7312): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Mar 10 18:48:56 crc kubenswrapper[4861]: > logger="UnhandledError" Mar 10 18:48:56 crc kubenswrapper[4861]: E0310 18:48:56.317860 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"network-operator\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" 
pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" podUID="37a5e44f-9a88-4405-be8a-b645485e7312" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.318997 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"609b7457003d9245427d9fe9c403a0bd9d05e3660d732df085c5b90b8b3f94c7"} Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.320694 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"8ffe793e7ed41f21066c16d29f91789c33d29a864daf7b7f8faa74c9dd99ebd6"} Mar 10 18:48:56 crc kubenswrapper[4861]: E0310 18:48:56.322441 4861 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:iptables-alerter,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2,Command:[/iptables-alerter/iptables-alerter.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONTAINER_RUNTIME_ENDPOINT,Value:unix:///run/crio/crio.sock,ValueFrom:nil,},EnvVar{Name:ALERTER_POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{68157440 0} {} 65Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:iptables-alerter-script,ReadOnly:false,MountPath:/iptables-alerter,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-slash,ReadOnly:true,MountPath:/host,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rczfb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod iptables-alerter-4ln5h_openshift-network-operator(d75a4c96-2883-4a0b-bab2-0fab2b6c0b49): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Mar 10 18:48:56 crc kubenswrapper[4861]: E0310 18:48:56.322699 4861 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 10 18:48:56 crc kubenswrapper[4861]: container &Container{Name:webhook,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,Command:[/bin/bash -c set -xe Mar 10 18:48:56 crc kubenswrapper[4861]: if [[ -f "/env/_master" ]]; then Mar 10 18:48:56 crc kubenswrapper[4861]: set -o allexport Mar 10 18:48:56 crc kubenswrapper[4861]: source "/env/_master" Mar 10 18:48:56 crc kubenswrapper[4861]: set +o allexport Mar 10 18:48:56 crc 
kubenswrapper[4861]: fi Mar 10 18:48:56 crc kubenswrapper[4861]: # OVN-K will try to remove hybrid overlay node annotations even when the hybrid overlay is not enabled. Mar 10 18:48:56 crc kubenswrapper[4861]: # https://github.com/ovn-org/ovn-kubernetes/blob/ac6820df0b338a246f10f412cd5ec903bd234694/go-controller/pkg/ovn/master.go#L791 Mar 10 18:48:56 crc kubenswrapper[4861]: ho_enable="--enable-hybrid-overlay" Mar 10 18:48:56 crc kubenswrapper[4861]: echo "I$(date "+%m%d %H:%M:%S.%N") - network-node-identity - start webhook" Mar 10 18:48:56 crc kubenswrapper[4861]: # extra-allowed-user: service account `ovn-kubernetes-control-plane` Mar 10 18:48:56 crc kubenswrapper[4861]: # sets pod annotations in multi-homing layer3 network controller (cluster-manager) Mar 10 18:48:56 crc kubenswrapper[4861]: exec /usr/bin/ovnkube-identity --k8s-apiserver=https://api-int.crc.testing:6443 \ Mar 10 18:48:56 crc kubenswrapper[4861]: --webhook-cert-dir="/etc/webhook-cert" \ Mar 10 18:48:56 crc kubenswrapper[4861]: --webhook-host=127.0.0.1 \ Mar 10 18:48:56 crc kubenswrapper[4861]: --webhook-port=9743 \ Mar 10 18:48:56 crc kubenswrapper[4861]: ${ho_enable} \ Mar 10 18:48:56 crc kubenswrapper[4861]: --enable-interconnect \ Mar 10 18:48:56 crc kubenswrapper[4861]: --disable-approver \ Mar 10 18:48:56 crc kubenswrapper[4861]: --extra-allowed-user="system:serviceaccount:openshift-ovn-kubernetes:ovn-kubernetes-control-plane" \ Mar 10 18:48:56 crc kubenswrapper[4861]: --wait-for-kubernetes-api=200s \ Mar 10 18:48:56 crc kubenswrapper[4861]: --pod-admission-conditions="/var/run/ovnkube-identity-config/additional-pod-admission-cond.json" \ Mar 10 18:48:56 crc kubenswrapper[4861]: --loglevel="${LOGLEVEL}" Mar 10 18:48:56 crc kubenswrapper[4861]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOGLEVEL,Value:2,ValueFrom:nil,},EnvVar{Name:KUBERNETES_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:webhook-cert,ReadOnly:false,MountPath:/etc/webhook-cert/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovnkube-identity-cm,ReadOnly:false,MountPath:/var/run/ovnkube-identity-config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s2kz5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000470000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-node-identity-vrzqb_openshift-network-node-identity(ef543e1b-8068-4ea3-b32a-61027b32e95d): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Mar 10 18:48:56 crc 
kubenswrapper[4861]: > logger="UnhandledError" Mar 10 18:48:56 crc kubenswrapper[4861]: E0310 18:48:56.344174 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"iptables-alerter\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-network-operator/iptables-alerter-4ln5h" podUID="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" Mar 10 18:48:56 crc kubenswrapper[4861]: E0310 18:48:56.349343 4861 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 10 18:48:56 crc kubenswrapper[4861]: container &Container{Name:approver,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,Command:[/bin/bash -c set -xe Mar 10 18:48:56 crc kubenswrapper[4861]: if [[ -f "/env/_master" ]]; then Mar 10 18:48:56 crc kubenswrapper[4861]: set -o allexport Mar 10 18:48:56 crc kubenswrapper[4861]: source "/env/_master" Mar 10 18:48:56 crc kubenswrapper[4861]: set +o allexport Mar 10 18:48:56 crc kubenswrapper[4861]: fi Mar 10 18:48:56 crc kubenswrapper[4861]: Mar 10 18:48:56 crc kubenswrapper[4861]: echo "I$(date "+%m%d %H:%M:%S.%N") - network-node-identity - start approver" Mar 10 18:48:56 crc kubenswrapper[4861]: exec /usr/bin/ovnkube-identity --k8s-apiserver=https://api-int.crc.testing:6443 \ Mar 10 18:48:56 crc kubenswrapper[4861]: --disable-webhook \ Mar 10 18:48:56 crc kubenswrapper[4861]: --csr-acceptance-conditions="/var/run/ovnkube-identity-config/additional-cert-acceptance-cond.json" \ Mar 10 18:48:56 crc kubenswrapper[4861]: --loglevel="${LOGLEVEL}" Mar 10 18:48:56 crc kubenswrapper[4861]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOGLEVEL,Value:4,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovnkube-identity-cm,ReadOnly:false,MountPath:/var/run/ovnkube-identity-config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s2kz5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000470000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-node-identity-vrzqb_openshift-network-node-identity(ef543e1b-8068-4ea3-b32a-61027b32e95d): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Mar 10 18:48:56 crc kubenswrapper[4861]: > logger="UnhandledError" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.349800 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 10 18:48:56 crc kubenswrapper[4861]: E0310 18:48:56.351823 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"webhook\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"approver\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-network-node-identity/network-node-identity-vrzqb" podUID="ef543e1b-8068-4ea3-b32a-61027b32e95d" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.377275 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.392217 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.404811 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.415567 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.415620 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:48:56 crc 
kubenswrapper[4861]: I0310 18:48:56.415640 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.415665 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.415685 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:48:56Z","lastTransitionTime":"2026-03-10T18:48:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.417082 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.427330 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.437577 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.448653 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.457595 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.472485 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.485254 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.495553 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.519419 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.519491 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.519509 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.519535 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.519553 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:48:56Z","lastTransitionTime":"2026-03-10T18:48:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.598430 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.598579 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.598644 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 10 18:48:56 crc kubenswrapper[4861]: E0310 18:48:56.598758 4861 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-10 18:48:57.59868708 +0000 UTC m=+81.362123081 (durationBeforeRetry 1s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 10 18:48:56 crc kubenswrapper[4861]: E0310 18:48:56.598797 4861 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Mar 10 18:48:56 crc kubenswrapper[4861]: E0310 18:48:56.598804 4861 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Mar 10 18:48:56 crc kubenswrapper[4861]: E0310 18:48:56.598893 4861 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-03-10 18:48:57.598873485 +0000 UTC m=+81.362309455 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Mar 10 18:48:56 crc kubenswrapper[4861]: E0310 18:48:56.598969 4861 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. 
No retries permitted until 2026-03-10 18:48:57.598910345 +0000 UTC m=+81.362346335 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.622360 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.622435 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.622459 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.622491 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.622516 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:48:56Z","lastTransitionTime":"2026-03-10T18:48:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.699541 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.699626 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 10 18:48:56 crc kubenswrapper[4861]: E0310 18:48:56.699827 4861 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 10 18:48:56 crc kubenswrapper[4861]: E0310 18:48:56.699854 4861 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 10 18:48:56 crc kubenswrapper[4861]: E0310 18:48:56.699873 4861 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 10 18:48:56 crc kubenswrapper[4861]: E0310 18:48:56.699910 4861 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 10 
18:48:56 crc kubenswrapper[4861]: E0310 18:48:56.699956 4861 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-03-10 18:48:57.69993397 +0000 UTC m=+81.463369970 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Mar 10 18:48:56 crc kubenswrapper[4861]: E0310 18:48:56.699962 4861 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Mar 10 18:48:56 crc kubenswrapper[4861]: E0310 18:48:56.699988 4861 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Mar 10 18:48:56 crc kubenswrapper[4861]: E0310 18:48:56.700071 4861 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-03-10 18:48:57.700044531 +0000 UTC m=+81.463480531 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.725244 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.725330 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.725342 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.725387 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.725398 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:48:56Z","lastTransitionTime":"2026-03-10T18:48:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.828308 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.828379 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.828397 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.828423 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.828444 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:48:56Z","lastTransitionTime":"2026-03-10T18:48:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.931258 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.931338 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.931362 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.931396 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.931421 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:48:56Z","lastTransitionTime":"2026-03-10T18:48:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.970149 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes"
Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.971488 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes"
Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.973933 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes"
Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.973902 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.975215 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes"
Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.977326 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes"
Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.978522 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes"
Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.979981 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes"
Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.981916 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes"
Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.983335 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes"
Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.985625 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes"
Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.987178 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes"
Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.989423 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes"
Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.990520 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes"
Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.992919 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes"
Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.993140 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.994848 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes"
Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.995984 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes"
Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.998067 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes"
Mar 10 18:48:56 crc kubenswrapper[4861]: I0310 18:48:56.998860 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes"
Mar 10 18:48:57 crc kubenswrapper[4861]: I0310 18:48:57.000000 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes"
Mar 10 18:48:57 crc kubenswrapper[4861]: I0310 18:48:57.002003 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes"
Mar 10 18:48:57 crc kubenswrapper[4861]: I0310 18:48:57.002928 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes"
Mar 10 18:48:57 crc kubenswrapper[4861]: I0310 18:48:57.004872 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes"
Mar 10 18:48:57 crc kubenswrapper[4861]: I0310 18:48:57.005787 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes"
Mar 10 18:48:57 crc kubenswrapper[4861]: I0310 18:48:57.006903 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Mar 10 18:48:57 crc kubenswrapper[4861]: I0310 18:48:57.007950 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes"
Mar 10 18:48:57 crc kubenswrapper[4861]: I0310 18:48:57.009252 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes"
Mar 10 18:48:57 crc kubenswrapper[4861]: I0310 18:48:57.010861 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes"
Mar 10 18:48:57 crc kubenswrapper[4861]: I0310 18:48:57.013792 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes"
Mar 10 18:48:57 crc kubenswrapper[4861]: I0310 18:48:57.015037 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes"
Mar 10 18:48:57 crc kubenswrapper[4861]: I0310 18:48:57.017370 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes"
Mar 10 18:48:57 crc kubenswrapper[4861]: I0310 18:48:57.018532 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes"
Mar 10 18:48:57 crc kubenswrapper[4861]: I0310 18:48:57.020288 4861 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6"
Mar 10 18:48:57 crc kubenswrapper[4861]: I0310 18:48:57.020522 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes"
Mar 10 18:48:57 crc kubenswrapper[4861]: I0310 18:48:57.023141 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Mar 10 18:48:57 crc kubenswrapper[4861]: I0310 18:48:57.026448 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes"
Mar 10 18:48:57 crc kubenswrapper[4861]: I0310 18:48:57.028995 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes"
Mar 10 18:48:57 crc kubenswrapper[4861]: I0310 18:48:57.030169 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes"
Mar 10 18:48:57 crc kubenswrapper[4861]: I0310 18:48:57.034047 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes"
Mar 10 18:48:57 crc kubenswrapper[4861]: I0310 18:48:57.034842 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 10 18:48:57 crc kubenswrapper[4861]: I0310 18:48:57.034916 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 10 18:48:57 crc kubenswrapper[4861]: I0310 18:48:57.034938 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 10 18:48:57 crc kubenswrapper[4861]: I0310 18:48:57.034967 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Mar 10 18:48:57 crc kubenswrapper[4861]: I0310 18:48:57.034989 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:48:57Z","lastTransitionTime":"2026-03-10T18:48:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Mar 10 18:48:57 crc kubenswrapper[4861]: I0310 18:48:57.035521 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes"
Mar 10 18:48:57 crc kubenswrapper[4861]: I0310 18:48:57.037514 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes"
Mar 10 18:48:57 crc kubenswrapper[4861]: I0310 18:48:57.038234 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Mar 10 18:48:57 crc kubenswrapper[4861]: I0310 18:48:57.038949 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes"
Mar 10 18:48:57 crc kubenswrapper[4861]: I0310 18:48:57.041233 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes"
Mar 10 18:48:57 crc kubenswrapper[4861]: I0310 18:48:57.042252 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes"
Mar 10 18:48:57 crc kubenswrapper[4861]: I0310 18:48:57.044295 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes"
Mar 10 18:48:57 crc kubenswrapper[4861]: I0310 18:48:57.045643 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes"
Mar 10 18:48:57 crc kubenswrapper[4861]: I0310 18:48:57.047784 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes"
Mar 10 18:48:57 crc kubenswrapper[4861]: I0310 18:48:57.048762 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes"
Mar 10 18:48:57 crc kubenswrapper[4861]: I0310 18:48:57.051755 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes"
Mar 10 18:48:57 crc kubenswrapper[4861]: I0310 18:48:57.053615 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes"
Mar 10 18:48:57 crc kubenswrapper[4861]: I0310 18:48:57.053816 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Mar 10 18:48:57 crc kubenswrapper[4861]: I0310 18:48:57.056575 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes"
Mar 10 18:48:57 crc kubenswrapper[4861]: I0310 18:48:57.058120 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes"
Mar 10 18:48:57 crc kubenswrapper[4861]: I0310 18:48:57.060924 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes"
Mar 10 18:48:57 crc kubenswrapper[4861]: I0310 18:48:57.066253 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes"
Mar 10 18:48:57 crc kubenswrapper[4861]: I0310 18:48:57.068021 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes"
Mar 10 18:48:57 crc kubenswrapper[4861]: I0310 18:48:57.069578 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes"
Mar 10 18:48:57 crc kubenswrapper[4861]: I0310 18:48:57.070989 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes"
Mar 10 18:48:57 crc kubenswrapper[4861]: I0310 18:48:57.137054 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 10 18:48:57 crc kubenswrapper[4861]: I0310 18:48:57.137126 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 10 18:48:57 crc kubenswrapper[4861]: I0310 18:48:57.137149 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 10 18:48:57 crc kubenswrapper[4861]: I0310 18:48:57.137179 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Mar 10 18:48:57 crc kubenswrapper[4861]: I0310 18:48:57.137202 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:48:57Z","lastTransitionTime":"2026-03-10T18:48:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Mar 10 18:48:57 crc kubenswrapper[4861]: I0310 18:48:57.239513 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 10 18:48:57 crc kubenswrapper[4861]: I0310 18:48:57.239569 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 10 18:48:57 crc kubenswrapper[4861]: I0310 18:48:57.239585 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 10 18:48:57 crc kubenswrapper[4861]: I0310 18:48:57.239609 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Mar 10 18:48:57 crc kubenswrapper[4861]: I0310 18:48:57.239627 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:48:57Z","lastTransitionTime":"2026-03-10T18:48:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 10 18:48:57 crc kubenswrapper[4861]: I0310 18:48:57.445592 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:48:57 crc kubenswrapper[4861]: I0310 18:48:57.445647 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:48:57 crc kubenswrapper[4861]: I0310 18:48:57.445665 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:48:57 crc kubenswrapper[4861]: I0310 18:48:57.445691 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 10 18:48:57 crc kubenswrapper[4861]: I0310 18:48:57.445736 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:48:57Z","lastTransitionTime":"2026-03-10T18:48:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 10 18:48:57 crc kubenswrapper[4861]: I0310 18:48:57.548849 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:48:57 crc kubenswrapper[4861]: I0310 18:48:57.549203 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:48:57 crc kubenswrapper[4861]: I0310 18:48:57.549355 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:48:57 crc kubenswrapper[4861]: I0310 18:48:57.549622 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 10 18:48:57 crc kubenswrapper[4861]: I0310 18:48:57.549829 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:48:57Z","lastTransitionTime":"2026-03-10T18:48:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 10 18:48:57 crc kubenswrapper[4861]: I0310 18:48:57.608247 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 10 18:48:57 crc kubenswrapper[4861]: E0310 18:48:57.608428 4861 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-03-10 18:48:59.608395637 +0000 UTC m=+83.371831637 (durationBeforeRetry 2s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 10 18:48:57 crc kubenswrapper[4861]: I0310 18:48:57.608614 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 10 18:48:57 crc kubenswrapper[4861]: I0310 18:48:57.608673 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 10 18:48:57 crc kubenswrapper[4861]: E0310 18:48:57.608841 4861 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Mar 10 18:48:57 crc kubenswrapper[4861]: E0310 18:48:57.608843 4861 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Mar 10 18:48:57 crc kubenswrapper[4861]: E0310 18:48:57.608896 4861 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-03-10 18:48:59.608881715 +0000 UTC m=+83.372317705 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Mar 10 18:48:57 crc kubenswrapper[4861]: E0310 18:48:57.608928 4861 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-03-10 18:48:59.608908135 +0000 UTC m=+83.372344125 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Mar 10 18:48:57 crc kubenswrapper[4861]: I0310 18:48:57.652852 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:48:57 crc kubenswrapper[4861]: I0310 18:48:57.652945 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:48:57 crc kubenswrapper[4861]: I0310 18:48:57.652965 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:48:57 crc kubenswrapper[4861]: I0310 18:48:57.652990 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 10 18:48:57 crc kubenswrapper[4861]: I0310 18:48:57.653007 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:48:57Z","lastTransitionTime":"2026-03-10T18:48:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 10 18:48:57 crc kubenswrapper[4861]: I0310 18:48:57.709224 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 10 18:48:57 crc kubenswrapper[4861]: I0310 18:48:57.709288 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 10 18:48:57 crc kubenswrapper[4861]: E0310 18:48:57.709406 4861 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 10 18:48:57 crc kubenswrapper[4861]: E0310 18:48:57.709444 4861 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 10 18:48:57 crc kubenswrapper[4861]: E0310 18:48:57.709463 4861 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 10 18:48:57 crc kubenswrapper[4861]: E0310 18:48:57.709497 4861 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 10 18:48:57 crc 
kubenswrapper[4861]: E0310 18:48:57.709525 4861 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 10 18:48:57 crc kubenswrapper[4861]: E0310 18:48:57.709545 4861 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 10 18:48:57 crc kubenswrapper[4861]: E0310 18:48:57.709526 4861 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-03-10 18:48:59.709505733 +0000 UTC m=+83.472941723 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 10 18:48:57 crc kubenswrapper[4861]: E0310 18:48:57.709654 4861 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-03-10 18:48:59.709630305 +0000 UTC m=+83.473066295 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 10 18:48:57 crc kubenswrapper[4861]: I0310 18:48:57.755516 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:48:57 crc kubenswrapper[4861]: I0310 18:48:57.755560 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:48:57 crc kubenswrapper[4861]: I0310 18:48:57.755577 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:48:57 crc kubenswrapper[4861]: I0310 18:48:57.755600 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 10 18:48:57 crc kubenswrapper[4861]: I0310 18:48:57.755616 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:48:57Z","lastTransitionTime":"2026-03-10T18:48:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 10 18:48:57 crc kubenswrapper[4861]: I0310 18:48:57.858666 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:48:57 crc kubenswrapper[4861]: I0310 18:48:57.858752 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:48:57 crc kubenswrapper[4861]: I0310 18:48:57.858772 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:48:57 crc kubenswrapper[4861]: I0310 18:48:57.858794 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 10 18:48:57 crc kubenswrapper[4861]: I0310 18:48:57.858810 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:48:57Z","lastTransitionTime":"2026-03-10T18:48:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 10 18:48:57 crc kubenswrapper[4861]: I0310 18:48:57.958043 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 10 18:48:57 crc kubenswrapper[4861]: I0310 18:48:57.958104 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 10 18:48:57 crc kubenswrapper[4861]: E0310 18:48:57.958271 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 10 18:48:57 crc kubenswrapper[4861]: I0310 18:48:57.958305 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 10 18:48:57 crc kubenswrapper[4861]: E0310 18:48:57.958411 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 10 18:48:57 crc kubenswrapper[4861]: E0310 18:48:57.958513 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 10 18:48:57 crc kubenswrapper[4861]: I0310 18:48:57.961594 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:48:57 crc kubenswrapper[4861]: I0310 18:48:57.961670 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:48:57 crc kubenswrapper[4861]: I0310 18:48:57.961690 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:48:57 crc kubenswrapper[4861]: I0310 18:48:57.961737 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 10 18:48:57 crc kubenswrapper[4861]: I0310 18:48:57.961756 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:48:57Z","lastTransitionTime":"2026-03-10T18:48:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 10 18:48:58 crc kubenswrapper[4861]: I0310 18:48:58.064969 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:48:58 crc kubenswrapper[4861]: I0310 18:48:58.064997 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:48:58 crc kubenswrapper[4861]: I0310 18:48:58.065005 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:48:58 crc kubenswrapper[4861]: I0310 18:48:58.065018 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 10 18:48:58 crc kubenswrapper[4861]: I0310 18:48:58.065029 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:48:58Z","lastTransitionTime":"2026-03-10T18:48:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 10 18:48:58 crc kubenswrapper[4861]: I0310 18:48:58.168356 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:48:58 crc kubenswrapper[4861]: I0310 18:48:58.168405 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:48:58 crc kubenswrapper[4861]: I0310 18:48:58.168422 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:48:58 crc kubenswrapper[4861]: I0310 18:48:58.168445 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 10 18:48:58 crc kubenswrapper[4861]: I0310 18:48:58.168461 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:48:58Z","lastTransitionTime":"2026-03-10T18:48:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 10 18:48:58 crc kubenswrapper[4861]: I0310 18:48:58.271800 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:48:58 crc kubenswrapper[4861]: I0310 18:48:58.271857 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:48:58 crc kubenswrapper[4861]: I0310 18:48:58.271875 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:48:58 crc kubenswrapper[4861]: I0310 18:48:58.271898 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 10 18:48:58 crc kubenswrapper[4861]: I0310 18:48:58.271915 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:48:58Z","lastTransitionTime":"2026-03-10T18:48:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 10 18:48:58 crc kubenswrapper[4861]: I0310 18:48:58.374128 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:48:58 crc kubenswrapper[4861]: I0310 18:48:58.374181 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:48:58 crc kubenswrapper[4861]: I0310 18:48:58.374198 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:48:58 crc kubenswrapper[4861]: I0310 18:48:58.374223 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 10 18:48:58 crc kubenswrapper[4861]: I0310 18:48:58.374240 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:48:58Z","lastTransitionTime":"2026-03-10T18:48:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 10 18:48:58 crc kubenswrapper[4861]: I0310 18:48:58.477416 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:48:58 crc kubenswrapper[4861]: I0310 18:48:58.477473 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:48:58 crc kubenswrapper[4861]: I0310 18:48:58.477490 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:48:58 crc kubenswrapper[4861]: I0310 18:48:58.477513 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 10 18:48:58 crc kubenswrapper[4861]: I0310 18:48:58.477530 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:48:58Z","lastTransitionTime":"2026-03-10T18:48:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 10 18:48:58 crc kubenswrapper[4861]: I0310 18:48:58.580064 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:48:58 crc kubenswrapper[4861]: I0310 18:48:58.580118 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:48:58 crc kubenswrapper[4861]: I0310 18:48:58.580137 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:48:58 crc kubenswrapper[4861]: I0310 18:48:58.580161 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 10 18:48:58 crc kubenswrapper[4861]: I0310 18:48:58.580178 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:48:58Z","lastTransitionTime":"2026-03-10T18:48:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 10 18:48:58 crc kubenswrapper[4861]: I0310 18:48:58.682836 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:48:58 crc kubenswrapper[4861]: I0310 18:48:58.682884 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:48:58 crc kubenswrapper[4861]: I0310 18:48:58.682900 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:48:58 crc kubenswrapper[4861]: I0310 18:48:58.682923 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 10 18:48:58 crc kubenswrapper[4861]: I0310 18:48:58.682940 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:48:58Z","lastTransitionTime":"2026-03-10T18:48:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 10 18:48:58 crc kubenswrapper[4861]: I0310 18:48:58.786263 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:48:58 crc kubenswrapper[4861]: I0310 18:48:58.786298 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:48:58 crc kubenswrapper[4861]: I0310 18:48:58.786308 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:48:58 crc kubenswrapper[4861]: I0310 18:48:58.786322 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 10 18:48:58 crc kubenswrapper[4861]: I0310 18:48:58.786331 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:48:58Z","lastTransitionTime":"2026-03-10T18:48:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 10 18:48:58 crc kubenswrapper[4861]: I0310 18:48:58.889001 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:48:58 crc kubenswrapper[4861]: I0310 18:48:58.889038 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:48:58 crc kubenswrapper[4861]: I0310 18:48:58.889047 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:48:58 crc kubenswrapper[4861]: I0310 18:48:58.889060 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 10 18:48:58 crc kubenswrapper[4861]: I0310 18:48:58.889069 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:48:58Z","lastTransitionTime":"2026-03-10T18:48:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 10 18:48:58 crc kubenswrapper[4861]: I0310 18:48:58.992554 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:48:58 crc kubenswrapper[4861]: I0310 18:48:58.992609 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:48:58 crc kubenswrapper[4861]: I0310 18:48:58.992627 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:48:58 crc kubenswrapper[4861]: I0310 18:48:58.992652 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 10 18:48:58 crc kubenswrapper[4861]: I0310 18:48:58.992670 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:48:58Z","lastTransitionTime":"2026-03-10T18:48:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 10 18:48:59 crc kubenswrapper[4861]: I0310 18:48:59.096395 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:48:59 crc kubenswrapper[4861]: I0310 18:48:59.096455 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:48:59 crc kubenswrapper[4861]: I0310 18:48:59.096472 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:48:59 crc kubenswrapper[4861]: I0310 18:48:59.096497 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 10 18:48:59 crc kubenswrapper[4861]: I0310 18:48:59.096513 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:48:59Z","lastTransitionTime":"2026-03-10T18:48:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 10 18:48:59 crc kubenswrapper[4861]: I0310 18:48:59.198777 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:48:59 crc kubenswrapper[4861]: I0310 18:48:59.198824 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:48:59 crc kubenswrapper[4861]: I0310 18:48:59.198840 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:48:59 crc kubenswrapper[4861]: I0310 18:48:59.198864 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 10 18:48:59 crc kubenswrapper[4861]: I0310 18:48:59.198882 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:48:59Z","lastTransitionTime":"2026-03-10T18:48:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 10 18:48:59 crc kubenswrapper[4861]: I0310 18:48:59.302426 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:48:59 crc kubenswrapper[4861]: I0310 18:48:59.302487 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:48:59 crc kubenswrapper[4861]: I0310 18:48:59.302506 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:48:59 crc kubenswrapper[4861]: I0310 18:48:59.302528 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 10 18:48:59 crc kubenswrapper[4861]: I0310 18:48:59.302545 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:48:59Z","lastTransitionTime":"2026-03-10T18:48:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 10 18:48:59 crc kubenswrapper[4861]: I0310 18:48:59.405623 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:48:59 crc kubenswrapper[4861]: I0310 18:48:59.405684 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:48:59 crc kubenswrapper[4861]: I0310 18:48:59.405700 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:48:59 crc kubenswrapper[4861]: I0310 18:48:59.405793 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 10 18:48:59 crc kubenswrapper[4861]: I0310 18:48:59.405825 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:48:59Z","lastTransitionTime":"2026-03-10T18:48:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 10 18:48:59 crc kubenswrapper[4861]: I0310 18:48:59.509463 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:48:59 crc kubenswrapper[4861]: I0310 18:48:59.509519 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:48:59 crc kubenswrapper[4861]: I0310 18:48:59.509536 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:48:59 crc kubenswrapper[4861]: I0310 18:48:59.509557 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 10 18:48:59 crc kubenswrapper[4861]: I0310 18:48:59.509573 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:48:59Z","lastTransitionTime":"2026-03-10T18:48:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 10 18:48:59 crc kubenswrapper[4861]: I0310 18:48:59.612308 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:48:59 crc kubenswrapper[4861]: I0310 18:48:59.612380 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:48:59 crc kubenswrapper[4861]: I0310 18:48:59.612399 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:48:59 crc kubenswrapper[4861]: I0310 18:48:59.612423 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 10 18:48:59 crc kubenswrapper[4861]: I0310 18:48:59.612444 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:48:59Z","lastTransitionTime":"2026-03-10T18:48:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 10 18:48:59 crc kubenswrapper[4861]: I0310 18:48:59.627804 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 10 18:48:59 crc kubenswrapper[4861]: I0310 18:48:59.627982 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 10 18:48:59 crc kubenswrapper[4861]: E0310 18:48:59.628057 4861 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-10 18:49:03.628015231 +0000 UTC m=+87.391451231 (durationBeforeRetry 4s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 10 18:48:59 crc kubenswrapper[4861]: E0310 18:48:59.628119 4861 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Mar 10 18:48:59 crc kubenswrapper[4861]: I0310 18:48:59.628175 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 10 18:48:59 crc kubenswrapper[4861]: E0310 18:48:59.628211 4861 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-03-10 18:49:03.628178433 +0000 UTC m=+87.391614433 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Mar 10 18:48:59 crc kubenswrapper[4861]: E0310 18:48:59.628340 4861 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Mar 10 18:48:59 crc kubenswrapper[4861]: E0310 18:48:59.628433 4861 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-03-10 18:49:03.628414347 +0000 UTC m=+87.391850437 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Mar 10 18:48:59 crc kubenswrapper[4861]: I0310 18:48:59.715690 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:48:59 crc kubenswrapper[4861]: I0310 18:48:59.715773 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:48:59 crc kubenswrapper[4861]: I0310 18:48:59.715794 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:48:59 crc kubenswrapper[4861]: I0310 18:48:59.715819 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 
10 18:48:59 crc kubenswrapper[4861]: I0310 18:48:59.715838 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:48:59Z","lastTransitionTime":"2026-03-10T18:48:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 10 18:48:59 crc kubenswrapper[4861]: I0310 18:48:59.729197 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 10 18:48:59 crc kubenswrapper[4861]: I0310 18:48:59.729278 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 10 18:48:59 crc kubenswrapper[4861]: E0310 18:48:59.729453 4861 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 10 18:48:59 crc kubenswrapper[4861]: E0310 18:48:59.729498 4861 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 10 18:48:59 crc kubenswrapper[4861]: E0310 18:48:59.729517 4861 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod 
openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 10 18:48:59 crc kubenswrapper[4861]: E0310 18:48:59.729594 4861 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-03-10 18:49:03.729569974 +0000 UTC m=+87.493005974 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 10 18:48:59 crc kubenswrapper[4861]: E0310 18:48:59.730156 4861 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 10 18:48:59 crc kubenswrapper[4861]: E0310 18:48:59.730191 4861 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 10 18:48:59 crc kubenswrapper[4861]: E0310 18:48:59.730207 4861 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 10 18:48:59 crc kubenswrapper[4861]: E0310 18:48:59.730312 4861 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-03-10 18:49:03.730293126 +0000 UTC m=+87.493729126 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 10 18:48:59 crc kubenswrapper[4861]: I0310 18:48:59.818445 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:48:59 crc kubenswrapper[4861]: I0310 18:48:59.818493 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:48:59 crc kubenswrapper[4861]: I0310 18:48:59.818510 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:48:59 crc kubenswrapper[4861]: I0310 18:48:59.818532 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 10 18:48:59 crc kubenswrapper[4861]: I0310 18:48:59.818551 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:48:59Z","lastTransitionTime":"2026-03-10T18:48:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 10 18:48:59 crc kubenswrapper[4861]: I0310 18:48:59.921384 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:48:59 crc kubenswrapper[4861]: I0310 18:48:59.921441 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:48:59 crc kubenswrapper[4861]: I0310 18:48:59.921458 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:48:59 crc kubenswrapper[4861]: I0310 18:48:59.921501 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 10 18:48:59 crc kubenswrapper[4861]: I0310 18:48:59.921518 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:48:59Z","lastTransitionTime":"2026-03-10T18:48:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 10 18:48:59 crc kubenswrapper[4861]: I0310 18:48:59.957767 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 10 18:48:59 crc kubenswrapper[4861]: I0310 18:48:59.957898 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 10 18:48:59 crc kubenswrapper[4861]: I0310 18:48:59.957787 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 10 18:48:59 crc kubenswrapper[4861]: E0310 18:48:59.957990 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 10 18:48:59 crc kubenswrapper[4861]: E0310 18:48:59.958157 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 10 18:48:59 crc kubenswrapper[4861]: E0310 18:48:59.958345 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 10 18:49:00 crc kubenswrapper[4861]: I0310 18:49:00.023981 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:49:00 crc kubenswrapper[4861]: I0310 18:49:00.024071 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:49:00 crc kubenswrapper[4861]: I0310 18:49:00.024118 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:49:00 crc kubenswrapper[4861]: I0310 18:49:00.024147 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 10 18:49:00 crc kubenswrapper[4861]: I0310 18:49:00.024166 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:49:00Z","lastTransitionTime":"2026-03-10T18:49:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 10 18:49:00 crc kubenswrapper[4861]: I0310 18:49:00.126897 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:49:00 crc kubenswrapper[4861]: I0310 18:49:00.126963 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:49:00 crc kubenswrapper[4861]: I0310 18:49:00.126981 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:49:00 crc kubenswrapper[4861]: I0310 18:49:00.127006 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 10 18:49:00 crc kubenswrapper[4861]: I0310 18:49:00.127031 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:49:00Z","lastTransitionTime":"2026-03-10T18:49:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 10 18:49:00 crc kubenswrapper[4861]: I0310 18:49:00.232017 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:49:00 crc kubenswrapper[4861]: I0310 18:49:00.232114 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:49:00 crc kubenswrapper[4861]: I0310 18:49:00.232132 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:49:00 crc kubenswrapper[4861]: I0310 18:49:00.232153 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 10 18:49:00 crc kubenswrapper[4861]: I0310 18:49:00.232174 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:49:00Z","lastTransitionTime":"2026-03-10T18:49:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 10 18:49:00 crc kubenswrapper[4861]: I0310 18:49:00.334852 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:49:00 crc kubenswrapper[4861]: I0310 18:49:00.334944 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:49:00 crc kubenswrapper[4861]: I0310 18:49:00.334964 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:49:00 crc kubenswrapper[4861]: I0310 18:49:00.334985 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 10 18:49:00 crc kubenswrapper[4861]: I0310 18:49:00.335037 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:49:00Z","lastTransitionTime":"2026-03-10T18:49:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 10 18:49:00 crc kubenswrapper[4861]: I0310 18:49:00.396110 4861 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Mar 10 18:49:00 crc kubenswrapper[4861]: I0310 18:49:00.437503 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:49:00 crc kubenswrapper[4861]: I0310 18:49:00.437541 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:49:00 crc kubenswrapper[4861]: I0310 18:49:00.437552 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:49:00 crc kubenswrapper[4861]: I0310 18:49:00.437565 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 10 18:49:00 crc kubenswrapper[4861]: I0310 18:49:00.437574 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:49:00Z","lastTransitionTime":"2026-03-10T18:49:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 10 18:49:00 crc kubenswrapper[4861]: I0310 18:49:00.540416 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:49:00 crc kubenswrapper[4861]: I0310 18:49:00.540487 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:49:00 crc kubenswrapper[4861]: I0310 18:49:00.540516 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:49:00 crc kubenswrapper[4861]: I0310 18:49:00.540540 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 10 18:49:00 crc kubenswrapper[4861]: I0310 18:49:00.540557 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:49:00Z","lastTransitionTime":"2026-03-10T18:49:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 10 18:49:00 crc kubenswrapper[4861]: I0310 18:49:00.643214 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:49:00 crc kubenswrapper[4861]: I0310 18:49:00.643317 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:49:00 crc kubenswrapper[4861]: I0310 18:49:00.643348 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:49:00 crc kubenswrapper[4861]: I0310 18:49:00.643419 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 10 18:49:00 crc kubenswrapper[4861]: I0310 18:49:00.643450 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:49:00Z","lastTransitionTime":"2026-03-10T18:49:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 10 18:49:00 crc kubenswrapper[4861]: I0310 18:49:00.746119 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:49:00 crc kubenswrapper[4861]: I0310 18:49:00.746176 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:49:00 crc kubenswrapper[4861]: I0310 18:49:00.746203 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:49:00 crc kubenswrapper[4861]: I0310 18:49:00.746235 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 10 18:49:00 crc kubenswrapper[4861]: I0310 18:49:00.746259 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:49:00Z","lastTransitionTime":"2026-03-10T18:49:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 10 18:49:00 crc kubenswrapper[4861]: I0310 18:49:00.849759 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:49:00 crc kubenswrapper[4861]: I0310 18:49:00.849825 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:49:00 crc kubenswrapper[4861]: I0310 18:49:00.849845 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:49:00 crc kubenswrapper[4861]: I0310 18:49:00.849869 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 10 18:49:00 crc kubenswrapper[4861]: I0310 18:49:00.850026 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:49:00Z","lastTransitionTime":"2026-03-10T18:49:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 10 18:49:00 crc kubenswrapper[4861]: I0310 18:49:00.952619 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:49:00 crc kubenswrapper[4861]: I0310 18:49:00.952694 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:49:00 crc kubenswrapper[4861]: I0310 18:49:00.952774 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:49:00 crc kubenswrapper[4861]: I0310 18:49:00.952806 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 10 18:49:00 crc kubenswrapper[4861]: I0310 18:49:00.952831 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:49:00Z","lastTransitionTime":"2026-03-10T18:49:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 10 18:49:01 crc kubenswrapper[4861]: I0310 18:49:01.058118 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:49:01 crc kubenswrapper[4861]: I0310 18:49:01.058175 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:49:01 crc kubenswrapper[4861]: I0310 18:49:01.058187 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:49:01 crc kubenswrapper[4861]: I0310 18:49:01.058207 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 10 18:49:01 crc kubenswrapper[4861]: I0310 18:49:01.058220 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:49:01Z","lastTransitionTime":"2026-03-10T18:49:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 10 18:49:01 crc kubenswrapper[4861]: I0310 18:49:01.162160 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:49:01 crc kubenswrapper[4861]: I0310 18:49:01.162213 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:49:01 crc kubenswrapper[4861]: I0310 18:49:01.162224 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:49:01 crc kubenswrapper[4861]: I0310 18:49:01.162240 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 10 18:49:01 crc kubenswrapper[4861]: I0310 18:49:01.162253 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:49:01Z","lastTransitionTime":"2026-03-10T18:49:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 10 18:49:01 crc kubenswrapper[4861]: I0310 18:49:01.264976 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:49:01 crc kubenswrapper[4861]: I0310 18:49:01.265033 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:49:01 crc kubenswrapper[4861]: I0310 18:49:01.265049 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:49:01 crc kubenswrapper[4861]: I0310 18:49:01.265072 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 10 18:49:01 crc kubenswrapper[4861]: I0310 18:49:01.265091 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:49:01Z","lastTransitionTime":"2026-03-10T18:49:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 10 18:49:01 crc kubenswrapper[4861]: I0310 18:49:01.368297 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:49:01 crc kubenswrapper[4861]: I0310 18:49:01.368362 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:49:01 crc kubenswrapper[4861]: I0310 18:49:01.368381 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:49:01 crc kubenswrapper[4861]: I0310 18:49:01.368409 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 10 18:49:01 crc kubenswrapper[4861]: I0310 18:49:01.368427 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:49:01Z","lastTransitionTime":"2026-03-10T18:49:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 10 18:49:01 crc kubenswrapper[4861]: I0310 18:49:01.470908 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:49:01 crc kubenswrapper[4861]: I0310 18:49:01.470989 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:49:01 crc kubenswrapper[4861]: I0310 18:49:01.471035 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:49:01 crc kubenswrapper[4861]: I0310 18:49:01.471067 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 10 18:49:01 crc kubenswrapper[4861]: I0310 18:49:01.471089 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:49:01Z","lastTransitionTime":"2026-03-10T18:49:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 10 18:49:01 crc kubenswrapper[4861]: I0310 18:49:01.575104 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:49:01 crc kubenswrapper[4861]: I0310 18:49:01.575161 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:49:01 crc kubenswrapper[4861]: I0310 18:49:01.575179 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:49:01 crc kubenswrapper[4861]: I0310 18:49:01.575204 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 10 18:49:01 crc kubenswrapper[4861]: I0310 18:49:01.575223 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:49:01Z","lastTransitionTime":"2026-03-10T18:49:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 10 18:49:01 crc kubenswrapper[4861]: I0310 18:49:01.678635 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:49:01 crc kubenswrapper[4861]: I0310 18:49:01.678693 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:49:01 crc kubenswrapper[4861]: I0310 18:49:01.678741 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:49:01 crc kubenswrapper[4861]: I0310 18:49:01.678766 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 10 18:49:01 crc kubenswrapper[4861]: I0310 18:49:01.678784 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:49:01Z","lastTransitionTime":"2026-03-10T18:49:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 10 18:49:01 crc kubenswrapper[4861]: I0310 18:49:01.782530 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:49:01 crc kubenswrapper[4861]: I0310 18:49:01.782575 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:49:01 crc kubenswrapper[4861]: I0310 18:49:01.782593 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:49:01 crc kubenswrapper[4861]: I0310 18:49:01.782613 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 10 18:49:01 crc kubenswrapper[4861]: I0310 18:49:01.782629 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:49:01Z","lastTransitionTime":"2026-03-10T18:49:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 10 18:49:01 crc kubenswrapper[4861]: I0310 18:49:01.885861 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:49:01 crc kubenswrapper[4861]: I0310 18:49:01.885945 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:49:01 crc kubenswrapper[4861]: I0310 18:49:01.886007 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:49:01 crc kubenswrapper[4861]: I0310 18:49:01.886041 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 10 18:49:01 crc kubenswrapper[4861]: I0310 18:49:01.886066 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:49:01Z","lastTransitionTime":"2026-03-10T18:49:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 10 18:49:01 crc kubenswrapper[4861]: I0310 18:49:01.957467 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 10 18:49:01 crc kubenswrapper[4861]: I0310 18:49:01.957521 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 10 18:49:01 crc kubenswrapper[4861]: I0310 18:49:01.957489 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 10 18:49:01 crc kubenswrapper[4861]: E0310 18:49:01.957626 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 10 18:49:01 crc kubenswrapper[4861]: E0310 18:49:01.957817 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 10 18:49:01 crc kubenswrapper[4861]: E0310 18:49:01.957950 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 10 18:49:01 crc kubenswrapper[4861]: I0310 18:49:01.989105 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:49:01 crc kubenswrapper[4861]: I0310 18:49:01.989164 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:49:01 crc kubenswrapper[4861]: I0310 18:49:01.989182 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:49:01 crc kubenswrapper[4861]: I0310 18:49:01.989207 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 10 18:49:01 crc kubenswrapper[4861]: I0310 18:49:01.989226 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:49:01Z","lastTransitionTime":"2026-03-10T18:49:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 10 18:49:02 crc kubenswrapper[4861]: I0310 18:49:02.091842 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:49:02 crc kubenswrapper[4861]: I0310 18:49:02.091940 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:49:02 crc kubenswrapper[4861]: I0310 18:49:02.092007 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:49:02 crc kubenswrapper[4861]: I0310 18:49:02.092037 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 10 18:49:02 crc kubenswrapper[4861]: I0310 18:49:02.092059 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:49:02Z","lastTransitionTime":"2026-03-10T18:49:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 10 18:49:02 crc kubenswrapper[4861]: I0310 18:49:02.194760 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:49:02 crc kubenswrapper[4861]: I0310 18:49:02.194821 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:49:02 crc kubenswrapper[4861]: I0310 18:49:02.194838 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:49:02 crc kubenswrapper[4861]: I0310 18:49:02.194862 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 10 18:49:02 crc kubenswrapper[4861]: I0310 18:49:02.194882 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:49:02Z","lastTransitionTime":"2026-03-10T18:49:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 10 18:49:02 crc kubenswrapper[4861]: I0310 18:49:02.297273 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:49:02 crc kubenswrapper[4861]: I0310 18:49:02.297348 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:49:02 crc kubenswrapper[4861]: I0310 18:49:02.297371 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:49:02 crc kubenswrapper[4861]: I0310 18:49:02.297399 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 10 18:49:02 crc kubenswrapper[4861]: I0310 18:49:02.297421 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:49:02Z","lastTransitionTime":"2026-03-10T18:49:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 10 18:49:02 crc kubenswrapper[4861]: I0310 18:49:02.400965 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:49:02 crc kubenswrapper[4861]: I0310 18:49:02.401032 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:49:02 crc kubenswrapper[4861]: I0310 18:49:02.401052 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:49:02 crc kubenswrapper[4861]: I0310 18:49:02.401083 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 10 18:49:02 crc kubenswrapper[4861]: I0310 18:49:02.401156 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:49:02Z","lastTransitionTime":"2026-03-10T18:49:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 10 18:49:02 crc kubenswrapper[4861]: I0310 18:49:02.504884 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:49:02 crc kubenswrapper[4861]: I0310 18:49:02.504948 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:49:02 crc kubenswrapper[4861]: I0310 18:49:02.504968 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:49:02 crc kubenswrapper[4861]: I0310 18:49:02.504991 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 10 18:49:02 crc kubenswrapper[4861]: I0310 18:49:02.505009 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:49:02Z","lastTransitionTime":"2026-03-10T18:49:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 10 18:49:02 crc kubenswrapper[4861]: I0310 18:49:02.608259 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:49:02 crc kubenswrapper[4861]: I0310 18:49:02.608349 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:49:02 crc kubenswrapper[4861]: I0310 18:49:02.608371 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:49:02 crc kubenswrapper[4861]: I0310 18:49:02.608401 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 10 18:49:02 crc kubenswrapper[4861]: I0310 18:49:02.608424 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:49:02Z","lastTransitionTime":"2026-03-10T18:49:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 10 18:49:02 crc kubenswrapper[4861]: I0310 18:49:02.711614 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:49:02 crc kubenswrapper[4861]: I0310 18:49:02.711653 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:49:02 crc kubenswrapper[4861]: I0310 18:49:02.711663 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:49:02 crc kubenswrapper[4861]: I0310 18:49:02.711681 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 10 18:49:02 crc kubenswrapper[4861]: I0310 18:49:02.711692 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:49:02Z","lastTransitionTime":"2026-03-10T18:49:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 10 18:49:02 crc kubenswrapper[4861]: I0310 18:49:02.814997 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:49:02 crc kubenswrapper[4861]: I0310 18:49:02.815076 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:49:02 crc kubenswrapper[4861]: I0310 18:49:02.815094 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:49:02 crc kubenswrapper[4861]: I0310 18:49:02.815118 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 10 18:49:02 crc kubenswrapper[4861]: I0310 18:49:02.815140 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:49:02Z","lastTransitionTime":"2026-03-10T18:49:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 10 18:49:02 crc kubenswrapper[4861]: I0310 18:49:02.918632 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:49:02 crc kubenswrapper[4861]: I0310 18:49:02.918694 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:49:02 crc kubenswrapper[4861]: I0310 18:49:02.918736 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:49:02 crc kubenswrapper[4861]: I0310 18:49:02.918760 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 10 18:49:02 crc kubenswrapper[4861]: I0310 18:49:02.918778 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:49:02Z","lastTransitionTime":"2026-03-10T18:49:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 10 18:49:03 crc kubenswrapper[4861]: I0310 18:49:03.021654 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:49:03 crc kubenswrapper[4861]: I0310 18:49:03.021727 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:49:03 crc kubenswrapper[4861]: I0310 18:49:03.021740 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:49:03 crc kubenswrapper[4861]: I0310 18:49:03.021758 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 10 18:49:03 crc kubenswrapper[4861]: I0310 18:49:03.021770 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:49:03Z","lastTransitionTime":"2026-03-10T18:49:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 10 18:49:03 crc kubenswrapper[4861]: I0310 18:49:03.124808 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:49:03 crc kubenswrapper[4861]: I0310 18:49:03.124903 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:49:03 crc kubenswrapper[4861]: I0310 18:49:03.124920 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:49:03 crc kubenswrapper[4861]: I0310 18:49:03.124945 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 10 18:49:03 crc kubenswrapper[4861]: I0310 18:49:03.124963 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:49:03Z","lastTransitionTime":"2026-03-10T18:49:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 10 18:49:03 crc kubenswrapper[4861]: I0310 18:49:03.228205 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:49:03 crc kubenswrapper[4861]: I0310 18:49:03.228524 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:49:03 crc kubenswrapper[4861]: I0310 18:49:03.228542 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:49:03 crc kubenswrapper[4861]: I0310 18:49:03.228565 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 10 18:49:03 crc kubenswrapper[4861]: I0310 18:49:03.228583 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:49:03Z","lastTransitionTime":"2026-03-10T18:49:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 10 18:49:03 crc kubenswrapper[4861]: I0310 18:49:03.331930 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:49:03 crc kubenswrapper[4861]: I0310 18:49:03.331974 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:49:03 crc kubenswrapper[4861]: I0310 18:49:03.331985 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:49:03 crc kubenswrapper[4861]: I0310 18:49:03.332002 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 10 18:49:03 crc kubenswrapper[4861]: I0310 18:49:03.332015 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:49:03Z","lastTransitionTime":"2026-03-10T18:49:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 10 18:49:03 crc kubenswrapper[4861]: I0310 18:49:03.434557 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:49:03 crc kubenswrapper[4861]: I0310 18:49:03.434638 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:49:03 crc kubenswrapper[4861]: I0310 18:49:03.434660 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:49:03 crc kubenswrapper[4861]: I0310 18:49:03.434688 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 10 18:49:03 crc kubenswrapper[4861]: I0310 18:49:03.434746 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:49:03Z","lastTransitionTime":"2026-03-10T18:49:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 10 18:49:03 crc kubenswrapper[4861]: I0310 18:49:03.537494 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:49:03 crc kubenswrapper[4861]: I0310 18:49:03.537537 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:49:03 crc kubenswrapper[4861]: I0310 18:49:03.537547 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:49:03 crc kubenswrapper[4861]: I0310 18:49:03.537565 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 10 18:49:03 crc kubenswrapper[4861]: I0310 18:49:03.537575 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:49:03Z","lastTransitionTime":"2026-03-10T18:49:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 10 18:49:03 crc kubenswrapper[4861]: I0310 18:49:03.640149 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:49:03 crc kubenswrapper[4861]: I0310 18:49:03.640205 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:49:03 crc kubenswrapper[4861]: I0310 18:49:03.640222 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:49:03 crc kubenswrapper[4861]: I0310 18:49:03.640245 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 10 18:49:03 crc kubenswrapper[4861]: I0310 18:49:03.640263 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:49:03Z","lastTransitionTime":"2026-03-10T18:49:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 10 18:49:03 crc kubenswrapper[4861]: I0310 18:49:03.665788 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 10 18:49:03 crc kubenswrapper[4861]: I0310 18:49:03.665877 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 10 18:49:03 crc kubenswrapper[4861]: I0310 18:49:03.665924 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 10 18:49:03 crc kubenswrapper[4861]: E0310 18:49:03.666007 4861 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Mar 10 18:49:03 crc kubenswrapper[4861]: E0310 18:49:03.666017 4861 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-10 18:49:11.665985936 +0000 UTC m=+95.429421936 (durationBeforeRetry 8s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 10 18:49:03 crc kubenswrapper[4861]: E0310 18:49:03.666091 4861 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-03-10 18:49:11.666073048 +0000 UTC m=+95.429509088 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Mar 10 18:49:03 crc kubenswrapper[4861]: E0310 18:49:03.666137 4861 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Mar 10 18:49:03 crc kubenswrapper[4861]: E0310 18:49:03.666258 4861 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-03-10 18:49:11.66622568 +0000 UTC m=+95.429661740 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Mar 10 18:49:03 crc kubenswrapper[4861]: I0310 18:49:03.743528 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:49:03 crc kubenswrapper[4861]: I0310 18:49:03.743619 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:49:03 crc kubenswrapper[4861]: I0310 18:49:03.743641 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:49:03 crc kubenswrapper[4861]: I0310 18:49:03.743666 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 10 18:49:03 crc kubenswrapper[4861]: I0310 18:49:03.743683 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:49:03Z","lastTransitionTime":"2026-03-10T18:49:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 10 18:49:03 crc kubenswrapper[4861]: I0310 18:49:03.767052 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 10 18:49:03 crc kubenswrapper[4861]: I0310 18:49:03.767123 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 10 18:49:03 crc kubenswrapper[4861]: E0310 18:49:03.767257 4861 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 10 18:49:03 crc kubenswrapper[4861]: E0310 18:49:03.767258 4861 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 10 18:49:03 crc kubenswrapper[4861]: E0310 18:49:03.767313 4861 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 10 18:49:03 crc kubenswrapper[4861]: E0310 18:49:03.767333 4861 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 10 
18:49:03 crc kubenswrapper[4861]: E0310 18:49:03.767276 4861 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 10 18:49:03 crc kubenswrapper[4861]: E0310 18:49:03.767400 4861 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 10 18:49:03 crc kubenswrapper[4861]: E0310 18:49:03.767400 4861 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-03-10 18:49:11.767374387 +0000 UTC m=+95.530810377 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 10 18:49:03 crc kubenswrapper[4861]: E0310 18:49:03.767467 4861 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-03-10 18:49:11.767450978 +0000 UTC m=+95.530887048 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 10 18:49:03 crc kubenswrapper[4861]: I0310 18:49:03.846570 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:49:03 crc kubenswrapper[4861]: I0310 18:49:03.846639 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:49:03 crc kubenswrapper[4861]: I0310 18:49:03.846657 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:49:03 crc kubenswrapper[4861]: I0310 18:49:03.846680 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 10 18:49:03 crc kubenswrapper[4861]: I0310 18:49:03.846698 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:49:03Z","lastTransitionTime":"2026-03-10T18:49:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 10 18:49:03 crc kubenswrapper[4861]: I0310 18:49:03.950285 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:49:03 crc kubenswrapper[4861]: I0310 18:49:03.950346 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:49:03 crc kubenswrapper[4861]: I0310 18:49:03.950362 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:49:03 crc kubenswrapper[4861]: I0310 18:49:03.950387 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 10 18:49:03 crc kubenswrapper[4861]: I0310 18:49:03.950405 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:49:03Z","lastTransitionTime":"2026-03-10T18:49:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 10 18:49:03 crc kubenswrapper[4861]: I0310 18:49:03.957494 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 10 18:49:03 crc kubenswrapper[4861]: I0310 18:49:03.957565 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 10 18:49:03 crc kubenswrapper[4861]: E0310 18:49:03.957746 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 10 18:49:03 crc kubenswrapper[4861]: I0310 18:49:03.957827 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 10 18:49:03 crc kubenswrapper[4861]: E0310 18:49:03.957943 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 10 18:49:03 crc kubenswrapper[4861]: E0310 18:49:03.958649 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 10 18:49:03 crc kubenswrapper[4861]: I0310 18:49:03.973849 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Mar 10 18:49:03 crc kubenswrapper[4861]: I0310 18:49:03.974685 4861 scope.go:117] "RemoveContainer" containerID="1665ca49c2c451e187b70bfc13ce0034d2c07b92943b18e77ae09cd6e5505557" Mar 10 18:49:03 crc kubenswrapper[4861]: E0310 18:49:03.975037 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Mar 10 18:49:04 crc kubenswrapper[4861]: I0310 18:49:04.054066 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:49:04 crc kubenswrapper[4861]: I0310 18:49:04.054159 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:49:04 crc kubenswrapper[4861]: I0310 18:49:04.054181 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:49:04 crc kubenswrapper[4861]: I0310 18:49:04.054211 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 10 18:49:04 crc kubenswrapper[4861]: I0310 18:49:04.054233 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:49:04Z","lastTransitionTime":"2026-03-10T18:49:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network 
plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 10 18:49:04 crc kubenswrapper[4861]: I0310 18:49:04.076576 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"] Mar 10 18:49:04 crc kubenswrapper[4861]: I0310 18:49:04.157637 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:49:04 crc kubenswrapper[4861]: I0310 18:49:04.157702 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:49:04 crc kubenswrapper[4861]: I0310 18:49:04.157747 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:49:04 crc kubenswrapper[4861]: I0310 18:49:04.157784 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 10 18:49:04 crc kubenswrapper[4861]: I0310 18:49:04.157804 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:49:04Z","lastTransitionTime":"2026-03-10T18:49:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 10 18:49:04 crc kubenswrapper[4861]: I0310 18:49:04.260837 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:49:04 crc kubenswrapper[4861]: I0310 18:49:04.260900 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:49:04 crc kubenswrapper[4861]: I0310 18:49:04.260920 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:49:04 crc kubenswrapper[4861]: I0310 18:49:04.260946 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 10 18:49:04 crc kubenswrapper[4861]: I0310 18:49:04.260964 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:49:04Z","lastTransitionTime":"2026-03-10T18:49:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 10 18:49:04 crc kubenswrapper[4861]: I0310 18:49:04.276193 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:49:04 crc kubenswrapper[4861]: I0310 18:49:04.276260 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:49:04 crc kubenswrapper[4861]: I0310 18:49:04.276279 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:49:04 crc kubenswrapper[4861]: I0310 18:49:04.276307 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 10 18:49:04 crc kubenswrapper[4861]: I0310 18:49:04.276330 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:49:04Z","lastTransitionTime":"2026-03-10T18:49:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 10 18:49:04 crc kubenswrapper[4861]: E0310 18:49:04.294371 4861 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-10T18:49:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-10T18:49:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:04Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-10T18:49:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-10T18:49:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:04Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"19532032-9073-404f-bda8-c4343aa30670\\\",\\\"systemUUID\\\":\\\"b4ef8d49-23f5-4cae-bbac-08586c607b9d\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 10 18:49:04 crc kubenswrapper[4861]: I0310 18:49:04.299094 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:49:04 crc kubenswrapper[4861]: I0310 18:49:04.299168 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:49:04 crc kubenswrapper[4861]: I0310 18:49:04.299186 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:49:04 crc kubenswrapper[4861]: I0310 18:49:04.299214 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 10 18:49:04 crc kubenswrapper[4861]: I0310 18:49:04.299235 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:49:04Z","lastTransitionTime":"2026-03-10T18:49:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 10 18:49:04 crc kubenswrapper[4861]: E0310 18:49:04.315110 4861 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-10T18:49:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-10T18:49:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:04Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-10T18:49:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-10T18:49:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:04Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"19532032-9073-404f-bda8-c4343aa30670\\\",\\\"systemUUID\\\":\\\"b4ef8d49-23f5-4cae-bbac-08586c607b9d\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 10 18:49:04 crc kubenswrapper[4861]: I0310 18:49:04.320425 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:49:04 crc kubenswrapper[4861]: I0310 18:49:04.320493 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:49:04 crc kubenswrapper[4861]: I0310 18:49:04.320520 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:49:04 crc kubenswrapper[4861]: I0310 18:49:04.320555 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 10 18:49:04 crc kubenswrapper[4861]: I0310 18:49:04.320579 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:49:04Z","lastTransitionTime":"2026-03-10T18:49:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 10 18:49:04 crc kubenswrapper[4861]: E0310 18:49:04.336937 4861 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-10T18:49:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-10T18:49:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:04Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-10T18:49:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-10T18:49:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:04Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"19532032-9073-404f-bda8-c4343aa30670\\\",\\\"systemUUID\\\":\\\"b4ef8d49-23f5-4cae-bbac-08586c607b9d\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 10 18:49:04 crc kubenswrapper[4861]: I0310 18:49:04.341604 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:49:04 crc kubenswrapper[4861]: I0310 18:49:04.341656 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:49:04 crc kubenswrapper[4861]: I0310 18:49:04.341673 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:49:04 crc kubenswrapper[4861]: I0310 18:49:04.341698 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 10 18:49:04 crc kubenswrapper[4861]: I0310 18:49:04.341747 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:49:04Z","lastTransitionTime":"2026-03-10T18:49:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 10 18:49:04 crc kubenswrapper[4861]: I0310 18:49:04.344032 4861 scope.go:117] "RemoveContainer" containerID="1665ca49c2c451e187b70bfc13ce0034d2c07b92943b18e77ae09cd6e5505557" Mar 10 18:49:04 crc kubenswrapper[4861]: E0310 18:49:04.344282 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Mar 10 18:49:04 crc kubenswrapper[4861]: E0310 18:49:04.357411 4861 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-10T18:49:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-10T18:49:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:04Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-10T18:49:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-10T18:49:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:04Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2
ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9810067
4616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.
io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a07
2c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa73
83b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"19532032-9073-404f-bda8-c4343aa30670\\\",\\\"systemUUID\\\":\\\"b4ef8d49-23f5-4cae-bbac-08586c607b9d\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 10 18:49:04 crc kubenswrapper[4861]: I0310 18:49:04.362783 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:49:04 crc kubenswrapper[4861]: I0310 18:49:04.362839 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:49:04 crc kubenswrapper[4861]: I0310 18:49:04.362857 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:49:04 crc kubenswrapper[4861]: I0310 18:49:04.362882 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 10 18:49:04 crc kubenswrapper[4861]: I0310 18:49:04.362900 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:49:04Z","lastTransitionTime":"2026-03-10T18:49:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 10 18:49:04 crc kubenswrapper[4861]: E0310 18:49:04.377948 4861 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-10T18:49:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-10T18:49:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:04Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-10T18:49:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-10T18:49:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:04Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"19532032-9073-404f-bda8-c4343aa30670\\\",\\\"systemUUID\\\":\\\"b4ef8d49-23f5-4cae-bbac-08586c607b9d\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 10 18:49:04 crc kubenswrapper[4861]: E0310 18:49:04.378201 4861 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Mar 10 18:49:04 crc kubenswrapper[4861]: I0310 18:49:04.380329 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:49:04 crc kubenswrapper[4861]: I0310 18:49:04.380375 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:49:04 crc kubenswrapper[4861]: I0310 18:49:04.380395 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:49:04 crc kubenswrapper[4861]: I0310 18:49:04.380426 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 10 18:49:04 crc kubenswrapper[4861]: I0310 18:49:04.380445 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:49:04Z","lastTransitionTime":"2026-03-10T18:49:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 10 18:49:04 crc kubenswrapper[4861]: I0310 18:49:04.484358 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:49:04 crc kubenswrapper[4861]: I0310 18:49:04.484507 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:49:04 crc kubenswrapper[4861]: I0310 18:49:04.484531 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:49:04 crc kubenswrapper[4861]: I0310 18:49:04.484594 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 10 18:49:04 crc kubenswrapper[4861]: I0310 18:49:04.484617 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:49:04Z","lastTransitionTime":"2026-03-10T18:49:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 10 18:49:04 crc kubenswrapper[4861]: I0310 18:49:04.588027 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:49:04 crc kubenswrapper[4861]: I0310 18:49:04.588087 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:49:04 crc kubenswrapper[4861]: I0310 18:49:04.588104 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:49:04 crc kubenswrapper[4861]: I0310 18:49:04.588129 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 10 18:49:04 crc kubenswrapper[4861]: I0310 18:49:04.588148 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:49:04Z","lastTransitionTime":"2026-03-10T18:49:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 10 18:49:04 crc kubenswrapper[4861]: I0310 18:49:04.691101 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:49:04 crc kubenswrapper[4861]: I0310 18:49:04.691234 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:49:04 crc kubenswrapper[4861]: I0310 18:49:04.691257 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:49:04 crc kubenswrapper[4861]: I0310 18:49:04.691285 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 10 18:49:04 crc kubenswrapper[4861]: I0310 18:49:04.691305 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:49:04Z","lastTransitionTime":"2026-03-10T18:49:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 10 18:49:04 crc kubenswrapper[4861]: I0310 18:49:04.794831 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:49:04 crc kubenswrapper[4861]: I0310 18:49:04.794900 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:49:04 crc kubenswrapper[4861]: I0310 18:49:04.794917 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:49:04 crc kubenswrapper[4861]: I0310 18:49:04.794941 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 10 18:49:04 crc kubenswrapper[4861]: I0310 18:49:04.794960 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:49:04Z","lastTransitionTime":"2026-03-10T18:49:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 10 18:49:04 crc kubenswrapper[4861]: I0310 18:49:04.897625 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:49:04 crc kubenswrapper[4861]: I0310 18:49:04.897656 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:49:04 crc kubenswrapper[4861]: I0310 18:49:04.897664 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:49:04 crc kubenswrapper[4861]: I0310 18:49:04.897679 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 10 18:49:04 crc kubenswrapper[4861]: I0310 18:49:04.897690 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:49:04Z","lastTransitionTime":"2026-03-10T18:49:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 10 18:49:04 crc kubenswrapper[4861]: I0310 18:49:04.999259 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:49:04 crc kubenswrapper[4861]: I0310 18:49:04.999289 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:49:04 crc kubenswrapper[4861]: I0310 18:49:04.999297 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:49:04 crc kubenswrapper[4861]: I0310 18:49:04.999309 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 10 18:49:04 crc kubenswrapper[4861]: I0310 18:49:04.999318 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:49:04Z","lastTransitionTime":"2026-03-10T18:49:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 10 18:49:05 crc kubenswrapper[4861]: I0310 18:49:05.101920 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:49:05 crc kubenswrapper[4861]: I0310 18:49:05.101947 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:49:05 crc kubenswrapper[4861]: I0310 18:49:05.101955 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:49:05 crc kubenswrapper[4861]: I0310 18:49:05.101968 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 10 18:49:05 crc kubenswrapper[4861]: I0310 18:49:05.101976 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:49:05Z","lastTransitionTime":"2026-03-10T18:49:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 10 18:49:05 crc kubenswrapper[4861]: I0310 18:49:05.204288 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:49:05 crc kubenswrapper[4861]: I0310 18:49:05.204318 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:49:05 crc kubenswrapper[4861]: I0310 18:49:05.204325 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:49:05 crc kubenswrapper[4861]: I0310 18:49:05.204337 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 10 18:49:05 crc kubenswrapper[4861]: I0310 18:49:05.204345 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:49:05Z","lastTransitionTime":"2026-03-10T18:49:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 10 18:49:05 crc kubenswrapper[4861]: I0310 18:49:05.307076 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:49:05 crc kubenswrapper[4861]: I0310 18:49:05.307106 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:49:05 crc kubenswrapper[4861]: I0310 18:49:05.307115 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:49:05 crc kubenswrapper[4861]: I0310 18:49:05.307127 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 10 18:49:05 crc kubenswrapper[4861]: I0310 18:49:05.307137 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:49:05Z","lastTransitionTime":"2026-03-10T18:49:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 10 18:49:05 crc kubenswrapper[4861]: I0310 18:49:05.410620 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:49:05 crc kubenswrapper[4861]: I0310 18:49:05.410750 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:49:05 crc kubenswrapper[4861]: I0310 18:49:05.410771 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:49:05 crc kubenswrapper[4861]: I0310 18:49:05.410805 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 10 18:49:05 crc kubenswrapper[4861]: I0310 18:49:05.410829 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:49:05Z","lastTransitionTime":"2026-03-10T18:49:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 10 18:49:05 crc kubenswrapper[4861]: I0310 18:49:05.514627 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:49:05 crc kubenswrapper[4861]: I0310 18:49:05.514691 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:49:05 crc kubenswrapper[4861]: I0310 18:49:05.514733 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:49:05 crc kubenswrapper[4861]: I0310 18:49:05.514760 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 10 18:49:05 crc kubenswrapper[4861]: I0310 18:49:05.514778 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:49:05Z","lastTransitionTime":"2026-03-10T18:49:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 10 18:49:05 crc kubenswrapper[4861]: I0310 18:49:05.617862 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:49:05 crc kubenswrapper[4861]: I0310 18:49:05.617942 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:49:05 crc kubenswrapper[4861]: I0310 18:49:05.617960 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:49:05 crc kubenswrapper[4861]: I0310 18:49:05.617985 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 10 18:49:05 crc kubenswrapper[4861]: I0310 18:49:05.618005 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:49:05Z","lastTransitionTime":"2026-03-10T18:49:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 10 18:49:05 crc kubenswrapper[4861]: I0310 18:49:05.720909 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:49:05 crc kubenswrapper[4861]: I0310 18:49:05.720979 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:49:05 crc kubenswrapper[4861]: I0310 18:49:05.720998 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:49:05 crc kubenswrapper[4861]: I0310 18:49:05.721025 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 10 18:49:05 crc kubenswrapper[4861]: I0310 18:49:05.721045 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:49:05Z","lastTransitionTime":"2026-03-10T18:49:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 10 18:49:05 crc kubenswrapper[4861]: I0310 18:49:05.824506 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:49:05 crc kubenswrapper[4861]: I0310 18:49:05.824598 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:49:05 crc kubenswrapper[4861]: I0310 18:49:05.824623 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:49:05 crc kubenswrapper[4861]: I0310 18:49:05.824650 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 10 18:49:05 crc kubenswrapper[4861]: I0310 18:49:05.824669 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:49:05Z","lastTransitionTime":"2026-03-10T18:49:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 10 18:49:05 crc kubenswrapper[4861]: I0310 18:49:05.928212 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:49:05 crc kubenswrapper[4861]: I0310 18:49:05.928269 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:49:05 crc kubenswrapper[4861]: I0310 18:49:05.928286 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:49:05 crc kubenswrapper[4861]: I0310 18:49:05.928311 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 10 18:49:05 crc kubenswrapper[4861]: I0310 18:49:05.928330 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:49:05Z","lastTransitionTime":"2026-03-10T18:49:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 10 18:49:05 crc kubenswrapper[4861]: I0310 18:49:05.957645 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 10 18:49:05 crc kubenswrapper[4861]: I0310 18:49:05.957689 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 10 18:49:05 crc kubenswrapper[4861]: I0310 18:49:05.957819 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 10 18:49:05 crc kubenswrapper[4861]: E0310 18:49:05.957988 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 10 18:49:05 crc kubenswrapper[4861]: E0310 18:49:05.958189 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 10 18:49:05 crc kubenswrapper[4861]: E0310 18:49:05.958347 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 10 18:49:06 crc kubenswrapper[4861]: I0310 18:49:06.031523 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:49:06 crc kubenswrapper[4861]: I0310 18:49:06.031589 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:49:06 crc kubenswrapper[4861]: I0310 18:49:06.031614 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:49:06 crc kubenswrapper[4861]: I0310 18:49:06.031637 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 10 18:49:06 crc kubenswrapper[4861]: I0310 18:49:06.031654 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:49:06Z","lastTransitionTime":"2026-03-10T18:49:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 10 18:49:06 crc kubenswrapper[4861]: I0310 18:49:06.134182 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:49:06 crc kubenswrapper[4861]: I0310 18:49:06.134232 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:49:06 crc kubenswrapper[4861]: I0310 18:49:06.134245 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:49:06 crc kubenswrapper[4861]: I0310 18:49:06.134261 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 10 18:49:06 crc kubenswrapper[4861]: I0310 18:49:06.134271 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:49:06Z","lastTransitionTime":"2026-03-10T18:49:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 10 18:49:06 crc kubenswrapper[4861]: I0310 18:49:06.236817 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:49:06 crc kubenswrapper[4861]: I0310 18:49:06.236879 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:49:06 crc kubenswrapper[4861]: I0310 18:49:06.236893 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:49:06 crc kubenswrapper[4861]: I0310 18:49:06.236915 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 10 18:49:06 crc kubenswrapper[4861]: I0310 18:49:06.236929 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:49:06Z","lastTransitionTime":"2026-03-10T18:49:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 10 18:49:06 crc kubenswrapper[4861]: I0310 18:49:06.340341 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:49:06 crc kubenswrapper[4861]: I0310 18:49:06.340403 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:49:06 crc kubenswrapper[4861]: I0310 18:49:06.340421 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:49:06 crc kubenswrapper[4861]: I0310 18:49:06.340446 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 10 18:49:06 crc kubenswrapper[4861]: I0310 18:49:06.340464 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:49:06Z","lastTransitionTime":"2026-03-10T18:49:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 10 18:49:06 crc kubenswrapper[4861]: I0310 18:49:06.443777 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:49:06 crc kubenswrapper[4861]: I0310 18:49:06.443849 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:49:06 crc kubenswrapper[4861]: I0310 18:49:06.443872 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:49:06 crc kubenswrapper[4861]: I0310 18:49:06.443907 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 10 18:49:06 crc kubenswrapper[4861]: I0310 18:49:06.443933 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:49:06Z","lastTransitionTime":"2026-03-10T18:49:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 10 18:49:06 crc kubenswrapper[4861]: I0310 18:49:06.547567 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:49:06 crc kubenswrapper[4861]: I0310 18:49:06.547639 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:49:06 crc kubenswrapper[4861]: I0310 18:49:06.547660 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:49:06 crc kubenswrapper[4861]: I0310 18:49:06.547687 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 10 18:49:06 crc kubenswrapper[4861]: I0310 18:49:06.547744 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:49:06Z","lastTransitionTime":"2026-03-10T18:49:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 10 18:49:06 crc kubenswrapper[4861]: I0310 18:49:06.650336 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:49:06 crc kubenswrapper[4861]: I0310 18:49:06.650385 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:49:06 crc kubenswrapper[4861]: I0310 18:49:06.650401 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:49:06 crc kubenswrapper[4861]: I0310 18:49:06.650423 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 10 18:49:06 crc kubenswrapper[4861]: I0310 18:49:06.650442 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:49:06Z","lastTransitionTime":"2026-03-10T18:49:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 10 18:49:06 crc kubenswrapper[4861]: I0310 18:49:06.753878 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:49:06 crc kubenswrapper[4861]: I0310 18:49:06.753939 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:49:06 crc kubenswrapper[4861]: I0310 18:49:06.753956 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:49:06 crc kubenswrapper[4861]: I0310 18:49:06.753980 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 10 18:49:06 crc kubenswrapper[4861]: I0310 18:49:06.753997 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:49:06Z","lastTransitionTime":"2026-03-10T18:49:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 10 18:49:06 crc kubenswrapper[4861]: I0310 18:49:06.857150 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:49:06 crc kubenswrapper[4861]: I0310 18:49:06.857194 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:49:06 crc kubenswrapper[4861]: I0310 18:49:06.857211 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:49:06 crc kubenswrapper[4861]: I0310 18:49:06.857235 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 10 18:49:06 crc kubenswrapper[4861]: I0310 18:49:06.857252 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:49:06Z","lastTransitionTime":"2026-03-10T18:49:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 10 18:49:06 crc kubenswrapper[4861]: I0310 18:49:06.960358 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:49:06 crc kubenswrapper[4861]: I0310 18:49:06.960401 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:49:06 crc kubenswrapper[4861]: I0310 18:49:06.960417 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:49:06 crc kubenswrapper[4861]: I0310 18:49:06.960439 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 10 18:49:06 crc kubenswrapper[4861]: I0310 18:49:06.960456 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:49:06Z","lastTransitionTime":"2026-03-10T18:49:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 10 18:49:06 crc kubenswrapper[4861]: E0310 18:49:06.961668 4861 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 10 18:49:06 crc kubenswrapper[4861]: container &Container{Name:webhook,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,Command:[/bin/bash -c set -xe Mar 10 18:49:06 crc kubenswrapper[4861]: if [[ -f "/env/_master" ]]; then Mar 10 18:49:06 crc kubenswrapper[4861]: set -o allexport Mar 10 18:49:06 crc kubenswrapper[4861]: source "/env/_master" Mar 10 18:49:06 crc kubenswrapper[4861]: set +o allexport Mar 10 18:49:06 crc kubenswrapper[4861]: fi Mar 10 18:49:06 crc kubenswrapper[4861]: # OVN-K will try to remove hybrid overlay node annotations even when the hybrid overlay is not enabled. Mar 10 18:49:06 crc kubenswrapper[4861]: # https://github.com/ovn-org/ovn-kubernetes/blob/ac6820df0b338a246f10f412cd5ec903bd234694/go-controller/pkg/ovn/master.go#L791 Mar 10 18:49:06 crc kubenswrapper[4861]: ho_enable="--enable-hybrid-overlay" Mar 10 18:49:06 crc kubenswrapper[4861]: echo "I$(date "+%m%d %H:%M:%S.%N") - network-node-identity - start webhook" Mar 10 18:49:06 crc kubenswrapper[4861]: # extra-allowed-user: service account `ovn-kubernetes-control-plane` Mar 10 18:49:06 crc kubenswrapper[4861]: # sets pod annotations in multi-homing layer3 network controller (cluster-manager) Mar 10 18:49:06 crc kubenswrapper[4861]: exec /usr/bin/ovnkube-identity --k8s-apiserver=https://api-int.crc.testing:6443 \ Mar 10 18:49:06 crc kubenswrapper[4861]: --webhook-cert-dir="/etc/webhook-cert" \ Mar 10 18:49:06 crc kubenswrapper[4861]: --webhook-host=127.0.0.1 \ Mar 10 18:49:06 crc kubenswrapper[4861]: --webhook-port=9743 \ Mar 10 18:49:06 crc kubenswrapper[4861]: ${ho_enable} \ Mar 10 18:49:06 crc kubenswrapper[4861]: --enable-interconnect \ Mar 10 18:49:06 crc kubenswrapper[4861]: --disable-approver \ Mar 10 18:49:06 crc kubenswrapper[4861]: 
--extra-allowed-user="system:serviceaccount:openshift-ovn-kubernetes:ovn-kubernetes-control-plane" \ Mar 10 18:49:06 crc kubenswrapper[4861]: --wait-for-kubernetes-api=200s \ Mar 10 18:49:06 crc kubenswrapper[4861]: --pod-admission-conditions="/var/run/ovnkube-identity-config/additional-pod-admission-cond.json" \ Mar 10 18:49:06 crc kubenswrapper[4861]: --loglevel="${LOGLEVEL}" Mar 10 18:49:06 crc kubenswrapper[4861]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOGLEVEL,Value:2,ValueFrom:nil,},EnvVar{Name:KUBERNETES_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:webhook-cert,ReadOnly:false,MountPath:/etc/webhook-cert/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovnkube-identity-cm,ReadOnly:false,MountPath:/var/run/ovnkube-identity-config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s2kz5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000470000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false
,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-node-identity-vrzqb_openshift-network-node-identity(ef543e1b-8068-4ea3-b32a-61027b32e95d): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Mar 10 18:49:06 crc kubenswrapper[4861]: > logger="UnhandledError" Mar 10 18:49:06 crc kubenswrapper[4861]: E0310 18:49:06.967828 4861 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 10 18:49:06 crc kubenswrapper[4861]: container &Container{Name:approver,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,Command:[/bin/bash -c set -xe Mar 10 18:49:06 crc kubenswrapper[4861]: if [[ -f "/env/_master" ]]; then Mar 10 18:49:06 crc kubenswrapper[4861]: set -o allexport Mar 10 18:49:06 crc kubenswrapper[4861]: source "/env/_master" Mar 10 18:49:06 crc kubenswrapper[4861]: set +o allexport Mar 10 18:49:06 crc kubenswrapper[4861]: fi Mar 10 18:49:06 crc kubenswrapper[4861]: Mar 10 18:49:06 crc kubenswrapper[4861]: echo "I$(date "+%m%d %H:%M:%S.%N") - network-node-identity - start approver" Mar 10 18:49:06 crc kubenswrapper[4861]: exec /usr/bin/ovnkube-identity --k8s-apiserver=https://api-int.crc.testing:6443 \ Mar 10 18:49:06 crc kubenswrapper[4861]: --disable-webhook \ Mar 10 18:49:06 crc kubenswrapper[4861]: --csr-acceptance-conditions="/var/run/ovnkube-identity-config/additional-cert-acceptance-cond.json" \ Mar 10 18:49:06 crc kubenswrapper[4861]: --loglevel="${LOGLEVEL}" Mar 10 18:49:06 crc kubenswrapper[4861]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOGLEVEL,Value:4,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovnkube-identity-cm,ReadOnly:false,MountPath:/var/run/ovnkube-identity-config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s2kz5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000470000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-node-identity-vrzqb_openshift-network-node-identity(ef543e1b-8068-4ea3-b32a-61027b32e95d): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Mar 10 18:49:06 crc kubenswrapper[4861]: > logger="UnhandledError" Mar 10 18:49:06 crc kubenswrapper[4861]: I0310 18:49:06.969131 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 10 18:49:06 crc kubenswrapper[4861]: E0310 18:49:06.970124 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"webhook\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"approver\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-network-node-identity/network-node-identity-vrzqb" podUID="ef543e1b-8068-4ea3-b32a-61027b32e95d" Mar 10 18:49:06 crc kubenswrapper[4861]: I0310 18:49:06.984296 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 10 18:49:06 crc kubenswrapper[4861]: I0310 18:49:06.997367 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 10 18:49:07 crc kubenswrapper[4861]: I0310 18:49:07.012635 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 10 18:49:07 crc kubenswrapper[4861]: I0310 18:49:07.027018 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 10 18:49:07 crc kubenswrapper[4861]: I0310 18:49:07.040569 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 10 18:49:07 crc kubenswrapper[4861]: I0310 18:49:07.064393 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:49:07 crc kubenswrapper[4861]: I0310 18:49:07.064436 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:49:07 crc kubenswrapper[4861]: I0310 18:49:07.064447 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:49:07 crc kubenswrapper[4861]: I0310 18:49:07.064465 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 10 18:49:07 crc kubenswrapper[4861]: I0310 18:49:07.064481 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:49:07Z","lastTransitionTime":"2026-03-10T18:49:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 10 18:49:07 crc kubenswrapper[4861]: I0310 18:49:07.066504 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"952bf490-6587-4240-a832-feac082e7775\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf6fe1422055e59455ca71c1a22213f55312c80667526cf13dbf61f7ccea7c75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"
resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd7523ddf0ac38e3b59b767d7ef95f452f24c5914734d6f1ac0187f1bede5fcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b1d8f9a97293ae86fe0b8c9ab76600caf04291ec3ddd2e47a071805bef675da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://efd5039046658f4b595c747b6434cb0ef3befbf75874a6dab629cdbd6634524e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256
:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6234508294e7f87d3a8da0d3a1d8ecb96fffc3dbd5974e40a3bb7ceeee0d2a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34f62205737b2bb279cc7a0e2ffee213392c8135cca742b76e6f7ed44ddca754\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c68
77441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34f62205737b2bb279cc7a0e2ffee213392c8135cca742b76e6f7ed44ddca754\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:47:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-10T18:47:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://49cc08d4ed6e0fd87f4c957414e510f99511b9654a72324b707c165206c4979e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49cc08d4ed6e0fd87f4c957414e510f99511b9654a72324b707c165206c4979e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:47:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-10T18:47:39Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0995183765c106ba8369d3c05ac572596329eb1978876b770e327a430963ba9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0995183765c106ba8369d3c05ac572596329eb1978876b770e327a430963ba9b\\\"
,\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:47:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-10T18:47:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:47:37Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 10 18:49:07 crc kubenswrapper[4861]: I0310 18:49:07.083727 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c65aa66-5db2-421b-ad46-0ff1e2b1cb22\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://52e87225b434b0800764a5c2306d8079c44bff105d02de78ab085b434f56031f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a8a6f58ea1d180f50a7ffde2b16f470901281a847bd85cc0bc8a62bbf9f8e70\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://484df0ad2e71b2faec0ed53537512b115e6d4ab7cd3212cd29309538bd013c51\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1665ca49c2c451e187b70bfc13ce0034d2c07b92943b18e77ae09cd6e5505557\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1665ca49c2c451e187b70bfc13ce0034d2c07b92943b18e77ae09cd6e5505557\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-10T18:48:50Z\\\",\\\"message\\\":\\\"le observer\\\\nW0310 18:48:50.587139 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0310 18:48:50.587315 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0310 18:48:50.588407 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-4176947974/tls.crt::/tmp/serving-cert-4176947974/tls.key\\\\\\\"\\\\nI0310 18:48:50.773986 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0310 18:48:50.776438 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0310 18:48:50.776455 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0310 18:48:50.776477 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0310 18:48:50.776482 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0310 18:48:50.783076 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0310 18:48:50.783088 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0310 18:48:50.783118 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0310 18:48:50.783134 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0310 18:48:50.783146 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0310 18:48:50.783157 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0310 18:48:50.783167 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0310 18:48:50.783173 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0310 18:48:50.784467 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-10T18:48:50Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44dde00a3ae562bbb5504d299475795cc38b22c2b6decba2ff15067bce7436df\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:39Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a14137bdfec242e37af20a572af2edea25fb1d8a1f9708f8d0193d1a7675b1bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a14137bdfec242e37af20a572af2edea25fb1d8a1f9708f8d0193d1a7675b1bc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:47:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-10T18:47:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:47:37Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 10 18:49:07 crc kubenswrapper[4861]: I0310 18:49:07.166769 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:49:07 crc kubenswrapper[4861]: I0310 18:49:07.166833 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:49:07 crc kubenswrapper[4861]: I0310 18:49:07.166852 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:49:07 crc kubenswrapper[4861]: I0310 18:49:07.166875 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 10 18:49:07 crc kubenswrapper[4861]: I0310 18:49:07.166891 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:49:07Z","lastTransitionTime":"2026-03-10T18:49:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 10 18:49:07 crc kubenswrapper[4861]: I0310 18:49:07.269224 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:49:07 crc kubenswrapper[4861]: I0310 18:49:07.269289 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:49:07 crc kubenswrapper[4861]: I0310 18:49:07.269306 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:49:07 crc kubenswrapper[4861]: I0310 18:49:07.269333 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 10 18:49:07 crc kubenswrapper[4861]: I0310 18:49:07.269351 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:49:07Z","lastTransitionTime":"2026-03-10T18:49:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 10 18:49:07 crc kubenswrapper[4861]: I0310 18:49:07.333179 4861 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Mar 10 18:49:07 crc kubenswrapper[4861]: I0310 18:49:07.372012 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:49:07 crc kubenswrapper[4861]: I0310 18:49:07.372121 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:49:07 crc kubenswrapper[4861]: I0310 18:49:07.372138 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:49:07 crc kubenswrapper[4861]: I0310 18:49:07.372188 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 10 18:49:07 crc kubenswrapper[4861]: I0310 18:49:07.372214 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:49:07Z","lastTransitionTime":"2026-03-10T18:49:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 10 18:49:07 crc kubenswrapper[4861]: I0310 18:49:07.475103 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:49:07 crc kubenswrapper[4861]: I0310 18:49:07.475165 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:49:07 crc kubenswrapper[4861]: I0310 18:49:07.475182 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:49:07 crc kubenswrapper[4861]: I0310 18:49:07.475204 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 10 18:49:07 crc kubenswrapper[4861]: I0310 18:49:07.475222 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:49:07Z","lastTransitionTime":"2026-03-10T18:49:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 10 18:49:07 crc kubenswrapper[4861]: I0310 18:49:07.577203 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:49:07 crc kubenswrapper[4861]: I0310 18:49:07.577263 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:49:07 crc kubenswrapper[4861]: I0310 18:49:07.577281 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:49:07 crc kubenswrapper[4861]: I0310 18:49:07.577304 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 10 18:49:07 crc kubenswrapper[4861]: I0310 18:49:07.577322 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:49:07Z","lastTransitionTime":"2026-03-10T18:49:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 10 18:49:07 crc kubenswrapper[4861]: I0310 18:49:07.679139 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:49:07 crc kubenswrapper[4861]: I0310 18:49:07.679197 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:49:07 crc kubenswrapper[4861]: I0310 18:49:07.679215 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:49:07 crc kubenswrapper[4861]: I0310 18:49:07.679235 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 10 18:49:07 crc kubenswrapper[4861]: I0310 18:49:07.679251 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:49:07Z","lastTransitionTime":"2026-03-10T18:49:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 10 18:49:07 crc kubenswrapper[4861]: I0310 18:49:07.781036 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:49:07 crc kubenswrapper[4861]: I0310 18:49:07.781105 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:49:07 crc kubenswrapper[4861]: I0310 18:49:07.781139 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:49:07 crc kubenswrapper[4861]: I0310 18:49:07.781167 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 10 18:49:07 crc kubenswrapper[4861]: I0310 18:49:07.781191 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:49:07Z","lastTransitionTime":"2026-03-10T18:49:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 10 18:49:07 crc kubenswrapper[4861]: I0310 18:49:07.883499 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:49:07 crc kubenswrapper[4861]: I0310 18:49:07.883780 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:49:07 crc kubenswrapper[4861]: I0310 18:49:07.883826 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:49:07 crc kubenswrapper[4861]: I0310 18:49:07.883859 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 10 18:49:07 crc kubenswrapper[4861]: I0310 18:49:07.883881 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:49:07Z","lastTransitionTime":"2026-03-10T18:49:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 10 18:49:07 crc kubenswrapper[4861]: I0310 18:49:07.957396 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 10 18:49:07 crc kubenswrapper[4861]: I0310 18:49:07.957645 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 10 18:49:07 crc kubenswrapper[4861]: I0310 18:49:07.957671 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 10 18:49:07 crc kubenswrapper[4861]: E0310 18:49:07.957834 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 10 18:49:07 crc kubenswrapper[4861]: E0310 18:49:07.958197 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 10 18:49:07 crc kubenswrapper[4861]: E0310 18:49:07.958564 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 10 18:49:07 crc kubenswrapper[4861]: E0310 18:49:07.960995 4861 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 10 18:49:07 crc kubenswrapper[4861]: container &Container{Name:network-operator,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,Command:[/bin/bash -c #!/bin/bash Mar 10 18:49:07 crc kubenswrapper[4861]: set -o allexport Mar 10 18:49:07 crc kubenswrapper[4861]: if [[ -f /etc/kubernetes/apiserver-url.env ]]; then Mar 10 18:49:07 crc kubenswrapper[4861]: source /etc/kubernetes/apiserver-url.env Mar 10 18:49:07 crc kubenswrapper[4861]: else Mar 10 18:49:07 crc kubenswrapper[4861]: echo "Error: /etc/kubernetes/apiserver-url.env is missing" Mar 10 18:49:07 crc kubenswrapper[4861]: exit 1 Mar 10 18:49:07 crc kubenswrapper[4861]: fi Mar 10 18:49:07 crc kubenswrapper[4861]: exec /usr/bin/cluster-network-operator start --listen=0.0.0.0:9104 Mar 10 18:49:07 crc kubenswrapper[4861]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:cno,HostPort:9104,ContainerPort:9104,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:RELEASE_VERSION,Value:4.18.1,ValueFrom:nil,},EnvVar{Name:KUBE_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b97554198294bf544fbc116c94a0a1fb2ec8a4de0e926bf9d9e320135f0bee6f,ValueFrom:nil,},EnvVar{Name:KUBE_RBAC_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09,ValueFrom:nil,},EnvVar{Name:MULTUS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26,ValueFrom:nil,},EnvVar{Name:MULTUS_ADMISSION_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317,ValueFrom:nil,},EnvVar{Name:CNI_PLUGINS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc,ValueFrom:nil,},EnvVar{Name:BOND_CNI_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78,ValueFrom:nil,},EnvVar{Name:WHEREABOUTS_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4,ValueFrom:nil,},EnvVar{Name:ROUTE_OVERRRIDE_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa,ValueFrom:nil,},EnvVar{Name:MULTUS_NETWORKPOLICY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:23f833d3738d68706eb2f2868bd76bd71cee016cffa6faf5f045a60cc8c6eddd,ValueFrom:nil,},EnvVar{Name:OVN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,ValueFrom:nil,},EnvVar{Name:OVN_NB_RAFT_ELECTION_TIMER,Value:10,ValueFrom:nil,},
EnvVar{Name:OVN_SB_RAFT_ELECTION_TIMER,Value:16,ValueFrom:nil,},EnvVar{Name:OVN_NORTHD_PROBE_INTERVAL,Value:10000,ValueFrom:nil,},EnvVar{Name:OVN_CONTROLLER_INACTIVITY_PROBE,Value:180000,ValueFrom:nil,},EnvVar{Name:OVN_NB_INACTIVITY_PROBE,Value:60000,ValueFrom:nil,},EnvVar{Name:EGRESS_ROUTER_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c,ValueFrom:nil,},EnvVar{Name:NETWORK_METRICS_DAEMON_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_SOURCE_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_TARGET_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,ValueFrom:nil,},EnvVar{Name:NETWORK_OPERATOR_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,ValueFrom:nil,},EnvVar{Name:CLOUD_NETWORK_CONFIG_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8048f1cb0be521f09749c0a489503cd56d85b68c6ca93380e082cfd693cd97a8,ValueFrom:nil,},EnvVar{Name:CLI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2,ValueFrom:nil,},EnvVar{Name:FRR_K8S_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5dbf844e49bb46b78586930149e5e5f5dc121014c8afd10fe36f3651967cc256,ValueFrom:nil,},EnvVar{Name:NETWORKING_CONSOLE_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd,ValueFrom:nil,},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFi
eldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:host-etc-kube,ReadOnly:true,MountPath:/etc/kubernetes,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:metrics-tls,ReadOnly:false,MountPath:/var/run/secrets/serving-cert,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rdwmf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-operator-58b4c7f79c-55gtf_openshift-network-operator(37a5e44f-9a88-4405-be8a-b645485e7312): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Mar 10 18:49:07 crc kubenswrapper[4861]: > logger="UnhandledError" Mar 10 18:49:07 crc kubenswrapper[4861]: E0310 18:49:07.962299 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"network-operator\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" podUID="37a5e44f-9a88-4405-be8a-b645485e7312" Mar 10 18:49:07 crc kubenswrapper[4861]: I0310 18:49:07.986017 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:49:07 crc kubenswrapper[4861]: I0310 
18:49:07.986092 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:49:07 crc kubenswrapper[4861]: I0310 18:49:07.986115 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:49:07 crc kubenswrapper[4861]: I0310 18:49:07.986180 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 10 18:49:07 crc kubenswrapper[4861]: I0310 18:49:07.986206 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:49:07Z","lastTransitionTime":"2026-03-10T18:49:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 10 18:49:08 crc kubenswrapper[4861]: I0310 18:49:08.089079 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:49:08 crc kubenswrapper[4861]: I0310 18:49:08.089143 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:49:08 crc kubenswrapper[4861]: I0310 18:49:08.089165 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:49:08 crc kubenswrapper[4861]: I0310 18:49:08.089187 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 10 18:49:08 crc kubenswrapper[4861]: I0310 18:49:08.089204 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:49:08Z","lastTransitionTime":"2026-03-10T18:49:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 10 18:49:08 crc kubenswrapper[4861]: I0310 18:49:08.191655 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:49:08 crc kubenswrapper[4861]: I0310 18:49:08.191760 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:49:08 crc kubenswrapper[4861]: I0310 18:49:08.191786 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:49:08 crc kubenswrapper[4861]: I0310 18:49:08.191813 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 10 18:49:08 crc kubenswrapper[4861]: I0310 18:49:08.191832 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:49:08Z","lastTransitionTime":"2026-03-10T18:49:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 10 18:49:08 crc kubenswrapper[4861]: I0310 18:49:08.293499 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:49:08 crc kubenswrapper[4861]: I0310 18:49:08.293552 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:49:08 crc kubenswrapper[4861]: I0310 18:49:08.293569 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:49:08 crc kubenswrapper[4861]: I0310 18:49:08.293591 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 10 18:49:08 crc kubenswrapper[4861]: I0310 18:49:08.293609 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:49:08Z","lastTransitionTime":"2026-03-10T18:49:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 10 18:49:08 crc kubenswrapper[4861]: I0310 18:49:08.395316 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:49:08 crc kubenswrapper[4861]: I0310 18:49:08.395373 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:49:08 crc kubenswrapper[4861]: I0310 18:49:08.395390 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:49:08 crc kubenswrapper[4861]: I0310 18:49:08.395417 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 10 18:49:08 crc kubenswrapper[4861]: I0310 18:49:08.395435 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:49:08Z","lastTransitionTime":"2026-03-10T18:49:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 10 18:49:08 crc kubenswrapper[4861]: I0310 18:49:08.497493 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:49:08 crc kubenswrapper[4861]: I0310 18:49:08.497570 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:49:08 crc kubenswrapper[4861]: I0310 18:49:08.497587 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:49:08 crc kubenswrapper[4861]: I0310 18:49:08.497610 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 10 18:49:08 crc kubenswrapper[4861]: I0310 18:49:08.497627 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:49:08Z","lastTransitionTime":"2026-03-10T18:49:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 10 18:49:08 crc kubenswrapper[4861]: I0310 18:49:08.600421 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:49:08 crc kubenswrapper[4861]: I0310 18:49:08.600461 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:49:08 crc kubenswrapper[4861]: I0310 18:49:08.600476 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:49:08 crc kubenswrapper[4861]: I0310 18:49:08.600493 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 10 18:49:08 crc kubenswrapper[4861]: I0310 18:49:08.600506 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:49:08Z","lastTransitionTime":"2026-03-10T18:49:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 10 18:49:08 crc kubenswrapper[4861]: I0310 18:49:08.703308 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:49:08 crc kubenswrapper[4861]: I0310 18:49:08.703370 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:49:08 crc kubenswrapper[4861]: I0310 18:49:08.703387 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:49:08 crc kubenswrapper[4861]: I0310 18:49:08.703413 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 10 18:49:08 crc kubenswrapper[4861]: I0310 18:49:08.703431 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:49:08Z","lastTransitionTime":"2026-03-10T18:49:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 10 18:49:08 crc kubenswrapper[4861]: I0310 18:49:08.806448 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:49:08 crc kubenswrapper[4861]: I0310 18:49:08.806507 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:49:08 crc kubenswrapper[4861]: I0310 18:49:08.806524 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:49:08 crc kubenswrapper[4861]: I0310 18:49:08.806548 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 10 18:49:08 crc kubenswrapper[4861]: I0310 18:49:08.806564 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:49:08Z","lastTransitionTime":"2026-03-10T18:49:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 10 18:49:08 crc kubenswrapper[4861]: I0310 18:49:08.909782 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:49:08 crc kubenswrapper[4861]: I0310 18:49:08.909843 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:49:08 crc kubenswrapper[4861]: I0310 18:49:08.909861 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:49:08 crc kubenswrapper[4861]: I0310 18:49:08.909887 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 10 18:49:08 crc kubenswrapper[4861]: I0310 18:49:08.909907 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:49:08Z","lastTransitionTime":"2026-03-10T18:49:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 10 18:49:08 crc kubenswrapper[4861]: E0310 18:49:08.959558 4861 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:iptables-alerter,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2,Command:[/iptables-alerter/iptables-alerter.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONTAINER_RUNTIME_ENDPOINT,Value:unix:///run/crio/crio.sock,ValueFrom:nil,},EnvVar{Name:ALERTER_POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{68157440 0} {} 65Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:iptables-alerter-script,ReadOnly:false,MountPath:/iptables-alerter,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-slash,ReadOnly:true,MountPath:/host,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rczfb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},Re
startPolicy:nil,} start failed in pod iptables-alerter-4ln5h_openshift-network-operator(d75a4c96-2883-4a0b-bab2-0fab2b6c0b49): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Mar 10 18:49:08 crc kubenswrapper[4861]: E0310 18:49:08.960800 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"iptables-alerter\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-network-operator/iptables-alerter-4ln5h" podUID="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" Mar 10 18:49:09 crc kubenswrapper[4861]: I0310 18:49:09.012548 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:49:09 crc kubenswrapper[4861]: I0310 18:49:09.012603 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:49:09 crc kubenswrapper[4861]: I0310 18:49:09.012620 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:49:09 crc kubenswrapper[4861]: I0310 18:49:09.012643 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 10 18:49:09 crc kubenswrapper[4861]: I0310 18:49:09.012661 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:49:09Z","lastTransitionTime":"2026-03-10T18:49:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 10 18:49:09 crc kubenswrapper[4861]: I0310 18:49:09.115294 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:49:09 crc kubenswrapper[4861]: I0310 18:49:09.115340 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:49:09 crc kubenswrapper[4861]: I0310 18:49:09.115349 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:49:09 crc kubenswrapper[4861]: I0310 18:49:09.115365 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 10 18:49:09 crc kubenswrapper[4861]: I0310 18:49:09.115375 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:49:09Z","lastTransitionTime":"2026-03-10T18:49:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 10 18:49:09 crc kubenswrapper[4861]: I0310 18:49:09.218193 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:49:09 crc kubenswrapper[4861]: I0310 18:49:09.218252 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:49:09 crc kubenswrapper[4861]: I0310 18:49:09.218270 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:49:09 crc kubenswrapper[4861]: I0310 18:49:09.218294 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 10 18:49:09 crc kubenswrapper[4861]: I0310 18:49:09.218312 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:49:09Z","lastTransitionTime":"2026-03-10T18:49:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 10 18:49:09 crc kubenswrapper[4861]: I0310 18:49:09.320901 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:49:09 crc kubenswrapper[4861]: I0310 18:49:09.320964 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:49:09 crc kubenswrapper[4861]: I0310 18:49:09.320982 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:49:09 crc kubenswrapper[4861]: I0310 18:49:09.321007 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 10 18:49:09 crc kubenswrapper[4861]: I0310 18:49:09.321025 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:49:09Z","lastTransitionTime":"2026-03-10T18:49:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 10 18:49:09 crc kubenswrapper[4861]: I0310 18:49:09.423615 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:49:09 crc kubenswrapper[4861]: I0310 18:49:09.423666 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:49:09 crc kubenswrapper[4861]: I0310 18:49:09.423678 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:49:09 crc kubenswrapper[4861]: I0310 18:49:09.423693 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 10 18:49:09 crc kubenswrapper[4861]: I0310 18:49:09.423704 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:49:09Z","lastTransitionTime":"2026-03-10T18:49:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 10 18:49:09 crc kubenswrapper[4861]: I0310 18:49:09.525265 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:49:09 crc kubenswrapper[4861]: I0310 18:49:09.525316 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:49:09 crc kubenswrapper[4861]: I0310 18:49:09.525333 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:49:09 crc kubenswrapper[4861]: I0310 18:49:09.525354 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 10 18:49:09 crc kubenswrapper[4861]: I0310 18:49:09.525369 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:49:09Z","lastTransitionTime":"2026-03-10T18:49:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 10 18:49:09 crc kubenswrapper[4861]: I0310 18:49:09.628237 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:49:09 crc kubenswrapper[4861]: I0310 18:49:09.628298 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:49:09 crc kubenswrapper[4861]: I0310 18:49:09.628315 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:49:09 crc kubenswrapper[4861]: I0310 18:49:09.628338 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 10 18:49:09 crc kubenswrapper[4861]: I0310 18:49:09.628355 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:49:09Z","lastTransitionTime":"2026-03-10T18:49:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 10 18:49:10 crc kubenswrapper[4861]: I0310 18:49:10.247088 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 10 18:49:10 crc kubenswrapper[4861]: I0310 18:49:10.247088 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 10 18:49:10 crc kubenswrapper[4861]: E0310 18:49:10.247254 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 10 18:49:10 crc kubenswrapper[4861]: I0310 18:49:10.247107 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 10 18:49:10 crc kubenswrapper[4861]: E0310 18:49:10.247405 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 10 18:49:10 crc kubenswrapper[4861]: E0310 18:49:10.247500 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 10 18:49:10 crc kubenswrapper[4861]: I0310 18:49:10.249610 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:49:10 crc kubenswrapper[4861]: I0310 18:49:10.249639 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:49:10 crc kubenswrapper[4861]: I0310 18:49:10.249650 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:49:10 crc kubenswrapper[4861]: I0310 18:49:10.249666 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 10 18:49:10 crc kubenswrapper[4861]: I0310 18:49:10.249681 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:49:10Z","lastTransitionTime":"2026-03-10T18:49:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 10 18:49:10 crc kubenswrapper[4861]: I0310 18:49:10.352154 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:49:10 crc kubenswrapper[4861]: I0310 18:49:10.352192 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:49:10 crc kubenswrapper[4861]: I0310 18:49:10.352203 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:49:10 crc kubenswrapper[4861]: I0310 18:49:10.352218 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 10 18:49:10 crc kubenswrapper[4861]: I0310 18:49:10.352229 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:49:10Z","lastTransitionTime":"2026-03-10T18:49:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 10 18:49:10 crc kubenswrapper[4861]: I0310 18:49:10.454595 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:49:10 crc kubenswrapper[4861]: I0310 18:49:10.454656 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:49:10 crc kubenswrapper[4861]: I0310 18:49:10.454671 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:49:10 crc kubenswrapper[4861]: I0310 18:49:10.454694 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 10 18:49:10 crc kubenswrapper[4861]: I0310 18:49:10.454730 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:49:10Z","lastTransitionTime":"2026-03-10T18:49:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 10 18:49:10 crc kubenswrapper[4861]: I0310 18:49:10.557248 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:49:10 crc kubenswrapper[4861]: I0310 18:49:10.557311 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:49:10 crc kubenswrapper[4861]: I0310 18:49:10.557334 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:49:10 crc kubenswrapper[4861]: I0310 18:49:10.557363 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 10 18:49:10 crc kubenswrapper[4861]: I0310 18:49:10.557385 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:49:10Z","lastTransitionTime":"2026-03-10T18:49:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 10 18:49:10 crc kubenswrapper[4861]: I0310 18:49:10.660341 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:49:10 crc kubenswrapper[4861]: I0310 18:49:10.660398 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:49:10 crc kubenswrapper[4861]: I0310 18:49:10.660454 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:49:10 crc kubenswrapper[4861]: I0310 18:49:10.660480 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 10 18:49:10 crc kubenswrapper[4861]: I0310 18:49:10.660498 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:49:10Z","lastTransitionTime":"2026-03-10T18:49:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 10 18:49:10 crc kubenswrapper[4861]: I0310 18:49:10.763342 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:49:10 crc kubenswrapper[4861]: I0310 18:49:10.763402 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:49:10 crc kubenswrapper[4861]: I0310 18:49:10.763423 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:49:10 crc kubenswrapper[4861]: I0310 18:49:10.763446 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 10 18:49:10 crc kubenswrapper[4861]: I0310 18:49:10.763463 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:49:10Z","lastTransitionTime":"2026-03-10T18:49:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 10 18:49:10 crc kubenswrapper[4861]: I0310 18:49:10.866613 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:49:10 crc kubenswrapper[4861]: I0310 18:49:10.866669 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:49:10 crc kubenswrapper[4861]: I0310 18:49:10.866686 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:49:10 crc kubenswrapper[4861]: I0310 18:49:10.866746 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 10 18:49:10 crc kubenswrapper[4861]: I0310 18:49:10.866771 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:49:10Z","lastTransitionTime":"2026-03-10T18:49:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 10 18:49:10 crc kubenswrapper[4861]: I0310 18:49:10.969471 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:49:10 crc kubenswrapper[4861]: I0310 18:49:10.969546 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:49:10 crc kubenswrapper[4861]: I0310 18:49:10.969569 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:49:10 crc kubenswrapper[4861]: I0310 18:49:10.969592 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 10 18:49:10 crc kubenswrapper[4861]: I0310 18:49:10.969610 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:49:10Z","lastTransitionTime":"2026-03-10T18:49:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 10 18:49:11 crc kubenswrapper[4861]: I0310 18:49:11.072391 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:49:11 crc kubenswrapper[4861]: I0310 18:49:11.072468 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:49:11 crc kubenswrapper[4861]: I0310 18:49:11.072490 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:49:11 crc kubenswrapper[4861]: I0310 18:49:11.072514 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 10 18:49:11 crc kubenswrapper[4861]: I0310 18:49:11.072530 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:49:11Z","lastTransitionTime":"2026-03-10T18:49:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 10 18:49:11 crc kubenswrapper[4861]: I0310 18:49:11.174752 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:49:11 crc kubenswrapper[4861]: I0310 18:49:11.174783 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:49:11 crc kubenswrapper[4861]: I0310 18:49:11.174790 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:49:11 crc kubenswrapper[4861]: I0310 18:49:11.174803 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 10 18:49:11 crc kubenswrapper[4861]: I0310 18:49:11.174811 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:49:11Z","lastTransitionTime":"2026-03-10T18:49:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 10 18:49:11 crc kubenswrapper[4861]: I0310 18:49:11.277404 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:49:11 crc kubenswrapper[4861]: I0310 18:49:11.277458 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:49:11 crc kubenswrapper[4861]: I0310 18:49:11.277480 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:49:11 crc kubenswrapper[4861]: I0310 18:49:11.277503 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 10 18:49:11 crc kubenswrapper[4861]: I0310 18:49:11.277520 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:49:11Z","lastTransitionTime":"2026-03-10T18:49:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 10 18:49:11 crc kubenswrapper[4861]: I0310 18:49:11.380277 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:49:11 crc kubenswrapper[4861]: I0310 18:49:11.380336 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:49:11 crc kubenswrapper[4861]: I0310 18:49:11.380353 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:49:11 crc kubenswrapper[4861]: I0310 18:49:11.380377 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 10 18:49:11 crc kubenswrapper[4861]: I0310 18:49:11.380398 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:49:11Z","lastTransitionTime":"2026-03-10T18:49:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 10 18:49:11 crc kubenswrapper[4861]: I0310 18:49:11.483189 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:49:11 crc kubenswrapper[4861]: I0310 18:49:11.483244 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:49:11 crc kubenswrapper[4861]: I0310 18:49:11.483261 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:49:11 crc kubenswrapper[4861]: I0310 18:49:11.483284 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 10 18:49:11 crc kubenswrapper[4861]: I0310 18:49:11.483301 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:49:11Z","lastTransitionTime":"2026-03-10T18:49:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 10 18:49:11 crc kubenswrapper[4861]: I0310 18:49:11.586448 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:49:11 crc kubenswrapper[4861]: I0310 18:49:11.586498 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:49:11 crc kubenswrapper[4861]: I0310 18:49:11.586508 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:49:11 crc kubenswrapper[4861]: I0310 18:49:11.586523 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 10 18:49:11 crc kubenswrapper[4861]: I0310 18:49:11.586533 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:49:11Z","lastTransitionTime":"2026-03-10T18:49:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 10 18:49:11 crc kubenswrapper[4861]: I0310 18:49:11.689877 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:49:11 crc kubenswrapper[4861]: I0310 18:49:11.689951 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:49:11 crc kubenswrapper[4861]: I0310 18:49:11.689977 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:49:11 crc kubenswrapper[4861]: I0310 18:49:11.690007 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 10 18:49:11 crc kubenswrapper[4861]: I0310 18:49:11.690028 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:49:11Z","lastTransitionTime":"2026-03-10T18:49:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 10 18:49:11 crc kubenswrapper[4861]: I0310 18:49:11.762807 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 10 18:49:11 crc kubenswrapper[4861]: I0310 18:49:11.762991 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 10 18:49:11 crc kubenswrapper[4861]: I0310 18:49:11.763089 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 10 18:49:11 crc kubenswrapper[4861]: E0310 18:49:11.763161 4861 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Mar 10 18:49:11 crc kubenswrapper[4861]: E0310 18:49:11.763205 4861 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-10 18:49:27.763166212 +0000 UTC m=+111.526602232 (durationBeforeRetry 16s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 10 18:49:11 crc kubenswrapper[4861]: E0310 18:49:11.763216 4861 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Mar 10 18:49:11 crc kubenswrapper[4861]: E0310 18:49:11.763308 4861 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-03-10 18:49:27.763272434 +0000 UTC m=+111.526708444 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Mar 10 18:49:11 crc kubenswrapper[4861]: E0310 18:49:11.763356 4861 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-03-10 18:49:27.763333715 +0000 UTC m=+111.526769815 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Mar 10 18:49:11 crc kubenswrapper[4861]: I0310 18:49:11.792356 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:49:11 crc kubenswrapper[4861]: I0310 18:49:11.792422 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:49:11 crc kubenswrapper[4861]: I0310 18:49:11.792445 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:49:11 crc kubenswrapper[4861]: I0310 18:49:11.792474 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 10 18:49:11 crc kubenswrapper[4861]: I0310 18:49:11.792499 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:49:11Z","lastTransitionTime":"2026-03-10T18:49:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 10 18:49:11 crc kubenswrapper[4861]: I0310 18:49:11.864428 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 10 18:49:11 crc kubenswrapper[4861]: I0310 18:49:11.864516 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 10 18:49:11 crc kubenswrapper[4861]: E0310 18:49:11.864681 4861 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 10 18:49:11 crc kubenswrapper[4861]: E0310 18:49:11.864735 4861 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 10 18:49:11 crc kubenswrapper[4861]: E0310 18:49:11.864701 4861 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 10 18:49:11 crc kubenswrapper[4861]: E0310 18:49:11.864755 4861 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 10 18:49:11 crc 
kubenswrapper[4861]: E0310 18:49:11.864781 4861 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 10 18:49:11 crc kubenswrapper[4861]: E0310 18:49:11.864804 4861 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 10 18:49:11 crc kubenswrapper[4861]: E0310 18:49:11.864845 4861 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-03-10 18:49:27.864823077 +0000 UTC m=+111.628259077 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 10 18:49:11 crc kubenswrapper[4861]: E0310 18:49:11.864881 4861 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-03-10 18:49:27.864856718 +0000 UTC m=+111.628292728 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 10 18:49:11 crc kubenswrapper[4861]: I0310 18:49:11.895312 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:49:11 crc kubenswrapper[4861]: I0310 18:49:11.895375 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:49:11 crc kubenswrapper[4861]: I0310 18:49:11.895403 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:49:11 crc kubenswrapper[4861]: I0310 18:49:11.895433 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 10 18:49:11 crc kubenswrapper[4861]: I0310 18:49:11.895455 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:49:11Z","lastTransitionTime":"2026-03-10T18:49:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 10 18:49:11 crc kubenswrapper[4861]: I0310 18:49:11.957546 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 10 18:49:11 crc kubenswrapper[4861]: I0310 18:49:11.957598 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 10 18:49:11 crc kubenswrapper[4861]: I0310 18:49:11.957636 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 10 18:49:11 crc kubenswrapper[4861]: E0310 18:49:11.958519 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 10 18:49:11 crc kubenswrapper[4861]: E0310 18:49:11.958625 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 10 18:49:11 crc kubenswrapper[4861]: E0310 18:49:11.958786 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 10 18:49:11 crc kubenswrapper[4861]: I0310 18:49:11.998177 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:49:11 crc kubenswrapper[4861]: I0310 18:49:11.998248 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:49:11 crc kubenswrapper[4861]: I0310 18:49:11.998272 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:49:11 crc kubenswrapper[4861]: I0310 18:49:11.998301 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 10 18:49:11 crc kubenswrapper[4861]: I0310 18:49:11.998324 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:49:11Z","lastTransitionTime":"2026-03-10T18:49:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 10 18:49:12 crc kubenswrapper[4861]: I0310 18:49:12.101202 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:49:12 crc kubenswrapper[4861]: I0310 18:49:12.101283 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:49:12 crc kubenswrapper[4861]: I0310 18:49:12.101306 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:49:12 crc kubenswrapper[4861]: I0310 18:49:12.101331 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 10 18:49:12 crc kubenswrapper[4861]: I0310 18:49:12.101348 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:49:12Z","lastTransitionTime":"2026-03-10T18:49:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 10 18:49:12 crc kubenswrapper[4861]: I0310 18:49:12.204118 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:49:12 crc kubenswrapper[4861]: I0310 18:49:12.204172 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:49:12 crc kubenswrapper[4861]: I0310 18:49:12.204188 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:49:12 crc kubenswrapper[4861]: I0310 18:49:12.204212 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 10 18:49:12 crc kubenswrapper[4861]: I0310 18:49:12.204230 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:49:12Z","lastTransitionTime":"2026-03-10T18:49:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 10 18:49:12 crc kubenswrapper[4861]: I0310 18:49:12.307422 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:49:12 crc kubenswrapper[4861]: I0310 18:49:12.307462 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:49:12 crc kubenswrapper[4861]: I0310 18:49:12.307472 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:49:12 crc kubenswrapper[4861]: I0310 18:49:12.307487 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 10 18:49:12 crc kubenswrapper[4861]: I0310 18:49:12.307497 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:49:12Z","lastTransitionTime":"2026-03-10T18:49:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 10 18:49:12 crc kubenswrapper[4861]: I0310 18:49:12.410487 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:49:12 crc kubenswrapper[4861]: I0310 18:49:12.410571 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:49:12 crc kubenswrapper[4861]: I0310 18:49:12.410593 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:49:12 crc kubenswrapper[4861]: I0310 18:49:12.410619 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 10 18:49:12 crc kubenswrapper[4861]: I0310 18:49:12.410638 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:49:12Z","lastTransitionTime":"2026-03-10T18:49:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 10 18:49:12 crc kubenswrapper[4861]: I0310 18:49:12.513554 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:49:12 crc kubenswrapper[4861]: I0310 18:49:12.513619 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:49:12 crc kubenswrapper[4861]: I0310 18:49:12.513638 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:49:12 crc kubenswrapper[4861]: I0310 18:49:12.513663 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 10 18:49:12 crc kubenswrapper[4861]: I0310 18:49:12.513682 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:49:12Z","lastTransitionTime":"2026-03-10T18:49:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 10 18:49:12 crc kubenswrapper[4861]: I0310 18:49:12.616369 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:49:12 crc kubenswrapper[4861]: I0310 18:49:12.616426 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:49:12 crc kubenswrapper[4861]: I0310 18:49:12.616445 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:49:12 crc kubenswrapper[4861]: I0310 18:49:12.616468 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 10 18:49:12 crc kubenswrapper[4861]: I0310 18:49:12.616486 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:49:12Z","lastTransitionTime":"2026-03-10T18:49:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 10 18:49:12 crc kubenswrapper[4861]: I0310 18:49:12.719557 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:49:12 crc kubenswrapper[4861]: I0310 18:49:12.719604 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:49:12 crc kubenswrapper[4861]: I0310 18:49:12.719642 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:49:12 crc kubenswrapper[4861]: I0310 18:49:12.719664 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 10 18:49:12 crc kubenswrapper[4861]: I0310 18:49:12.719682 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:49:12Z","lastTransitionTime":"2026-03-10T18:49:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 10 18:49:12 crc kubenswrapper[4861]: I0310 18:49:12.822808 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:49:12 crc kubenswrapper[4861]: I0310 18:49:12.822870 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:49:12 crc kubenswrapper[4861]: I0310 18:49:12.822886 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:49:12 crc kubenswrapper[4861]: I0310 18:49:12.822912 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 10 18:49:12 crc kubenswrapper[4861]: I0310 18:49:12.822930 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:49:12Z","lastTransitionTime":"2026-03-10T18:49:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 10 18:49:12 crc kubenswrapper[4861]: I0310 18:49:12.925669 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:49:12 crc kubenswrapper[4861]: I0310 18:49:12.925752 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:49:12 crc kubenswrapper[4861]: I0310 18:49:12.925770 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:49:12 crc kubenswrapper[4861]: I0310 18:49:12.925792 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 10 18:49:12 crc kubenswrapper[4861]: I0310 18:49:12.925809 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:49:12Z","lastTransitionTime":"2026-03-10T18:49:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 10 18:49:13 crc kubenswrapper[4861]: I0310 18:49:13.028627 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:49:13 crc kubenswrapper[4861]: I0310 18:49:13.028677 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:49:13 crc kubenswrapper[4861]: I0310 18:49:13.028694 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:49:13 crc kubenswrapper[4861]: I0310 18:49:13.028741 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 10 18:49:13 crc kubenswrapper[4861]: I0310 18:49:13.028760 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:49:13Z","lastTransitionTime":"2026-03-10T18:49:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 10 18:49:13 crc kubenswrapper[4861]: I0310 18:49:13.130695 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:49:13 crc kubenswrapper[4861]: I0310 18:49:13.130784 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:49:13 crc kubenswrapper[4861]: I0310 18:49:13.130802 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:49:13 crc kubenswrapper[4861]: I0310 18:49:13.130825 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 10 18:49:13 crc kubenswrapper[4861]: I0310 18:49:13.130843 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:49:13Z","lastTransitionTime":"2026-03-10T18:49:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 10 18:49:13 crc kubenswrapper[4861]: I0310 18:49:13.233166 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:49:13 crc kubenswrapper[4861]: I0310 18:49:13.233222 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:49:13 crc kubenswrapper[4861]: I0310 18:49:13.233240 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:49:13 crc kubenswrapper[4861]: I0310 18:49:13.233266 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 10 18:49:13 crc kubenswrapper[4861]: I0310 18:49:13.233288 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:49:13Z","lastTransitionTime":"2026-03-10T18:49:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 10 18:49:13 crc kubenswrapper[4861]: I0310 18:49:13.335387 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:49:13 crc kubenswrapper[4861]: I0310 18:49:13.335451 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:49:13 crc kubenswrapper[4861]: I0310 18:49:13.335468 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:49:13 crc kubenswrapper[4861]: I0310 18:49:13.335492 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 10 18:49:13 crc kubenswrapper[4861]: I0310 18:49:13.335511 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:49:13Z","lastTransitionTime":"2026-03-10T18:49:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 10 18:49:13 crc kubenswrapper[4861]: I0310 18:49:13.437174 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:49:13 crc kubenswrapper[4861]: I0310 18:49:13.437226 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:49:13 crc kubenswrapper[4861]: I0310 18:49:13.437244 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:49:13 crc kubenswrapper[4861]: I0310 18:49:13.437269 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 10 18:49:13 crc kubenswrapper[4861]: I0310 18:49:13.437288 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:49:13Z","lastTransitionTime":"2026-03-10T18:49:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 10 18:49:13 crc kubenswrapper[4861]: I0310 18:49:13.540197 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:49:13 crc kubenswrapper[4861]: I0310 18:49:13.540276 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:49:13 crc kubenswrapper[4861]: I0310 18:49:13.540299 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:49:13 crc kubenswrapper[4861]: I0310 18:49:13.540334 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 10 18:49:13 crc kubenswrapper[4861]: I0310 18:49:13.540354 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:49:13Z","lastTransitionTime":"2026-03-10T18:49:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 10 18:49:13 crc kubenswrapper[4861]: I0310 18:49:13.643152 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:49:13 crc kubenswrapper[4861]: I0310 18:49:13.643196 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:49:13 crc kubenswrapper[4861]: I0310 18:49:13.643206 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:49:13 crc kubenswrapper[4861]: I0310 18:49:13.643225 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 10 18:49:13 crc kubenswrapper[4861]: I0310 18:49:13.643240 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:49:13Z","lastTransitionTime":"2026-03-10T18:49:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 10 18:49:13 crc kubenswrapper[4861]: I0310 18:49:13.745612 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:49:13 crc kubenswrapper[4861]: I0310 18:49:13.745664 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:49:13 crc kubenswrapper[4861]: I0310 18:49:13.745681 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:49:13 crc kubenswrapper[4861]: I0310 18:49:13.745702 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 10 18:49:13 crc kubenswrapper[4861]: I0310 18:49:13.745746 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:49:13Z","lastTransitionTime":"2026-03-10T18:49:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 10 18:49:13 crc kubenswrapper[4861]: I0310 18:49:13.848428 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:49:13 crc kubenswrapper[4861]: I0310 18:49:13.848472 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:49:13 crc kubenswrapper[4861]: I0310 18:49:13.848485 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:49:13 crc kubenswrapper[4861]: I0310 18:49:13.848502 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 10 18:49:13 crc kubenswrapper[4861]: I0310 18:49:13.848513 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:49:13Z","lastTransitionTime":"2026-03-10T18:49:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 10 18:49:13 crc kubenswrapper[4861]: I0310 18:49:13.950924 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:49:13 crc kubenswrapper[4861]: I0310 18:49:13.950990 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:49:13 crc kubenswrapper[4861]: I0310 18:49:13.951008 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:49:13 crc kubenswrapper[4861]: I0310 18:49:13.951038 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 10 18:49:13 crc kubenswrapper[4861]: I0310 18:49:13.951058 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:49:13Z","lastTransitionTime":"2026-03-10T18:49:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 10 18:49:13 crc kubenswrapper[4861]: I0310 18:49:13.957387 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 10 18:49:13 crc kubenswrapper[4861]: I0310 18:49:13.957419 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 10 18:49:13 crc kubenswrapper[4861]: E0310 18:49:13.957525 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 10 18:49:13 crc kubenswrapper[4861]: I0310 18:49:13.957579 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 10 18:49:13 crc kubenswrapper[4861]: E0310 18:49:13.957766 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 10 18:49:13 crc kubenswrapper[4861]: E0310 18:49:13.957863 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Mar 10 18:49:14 crc kubenswrapper[4861]: I0310 18:49:14.053493 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 10 18:49:14 crc kubenswrapper[4861]: I0310 18:49:14.053526 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 10 18:49:14 crc kubenswrapper[4861]: I0310 18:49:14.053540 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 10 18:49:14 crc kubenswrapper[4861]: I0310 18:49:14.053557 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Mar 10 18:49:14 crc kubenswrapper[4861]: I0310 18:49:14.053570 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:49:14Z","lastTransitionTime":"2026-03-10T18:49:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Mar 10 18:49:14 crc kubenswrapper[4861]: I0310 18:49:14.155630 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 10 18:49:14 crc kubenswrapper[4861]: I0310 18:49:14.155700 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 10 18:49:14 crc kubenswrapper[4861]: I0310 18:49:14.155757 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 10 18:49:14 crc kubenswrapper[4861]: I0310 18:49:14.155791 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Mar 10 18:49:14 crc kubenswrapper[4861]: I0310 18:49:14.155809 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:49:14Z","lastTransitionTime":"2026-03-10T18:49:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Mar 10 18:49:14 crc kubenswrapper[4861]: I0310 18:49:14.258542 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 10 18:49:14 crc kubenswrapper[4861]: I0310 18:49:14.258608 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 10 18:49:14 crc kubenswrapper[4861]: I0310 18:49:14.258629 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 10 18:49:14 crc kubenswrapper[4861]: I0310 18:49:14.258658 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Mar 10 18:49:14 crc kubenswrapper[4861]: I0310 18:49:14.258704 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:49:14Z","lastTransitionTime":"2026-03-10T18:49:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Mar 10 18:49:14 crc kubenswrapper[4861]: I0310 18:49:14.360319 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 10 18:49:14 crc kubenswrapper[4861]: I0310 18:49:14.360345 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 10 18:49:14 crc kubenswrapper[4861]: I0310 18:49:14.360353 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 10 18:49:14 crc kubenswrapper[4861]: I0310 18:49:14.360365 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Mar 10 18:49:14 crc kubenswrapper[4861]: I0310 18:49:14.360374 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:49:14Z","lastTransitionTime":"2026-03-10T18:49:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Mar 10 18:49:14 crc kubenswrapper[4861]: I0310 18:49:14.463399 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 10 18:49:14 crc kubenswrapper[4861]: I0310 18:49:14.463471 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 10 18:49:14 crc kubenswrapper[4861]: I0310 18:49:14.463495 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 10 18:49:14 crc kubenswrapper[4861]: I0310 18:49:14.463525 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Mar 10 18:49:14 crc kubenswrapper[4861]: I0310 18:49:14.463549 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:49:14Z","lastTransitionTime":"2026-03-10T18:49:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Mar 10 18:49:14 crc kubenswrapper[4861]: I0310 18:49:14.567583 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 10 18:49:14 crc kubenswrapper[4861]: I0310 18:49:14.567651 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 10 18:49:14 crc kubenswrapper[4861]: I0310 18:49:14.567668 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 10 18:49:14 crc kubenswrapper[4861]: I0310 18:49:14.567692 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Mar 10 18:49:14 crc kubenswrapper[4861]: I0310 18:49:14.567736 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:49:14Z","lastTransitionTime":"2026-03-10T18:49:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Mar 10 18:49:14 crc kubenswrapper[4861]: I0310 18:49:14.670111 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 10 18:49:14 crc kubenswrapper[4861]: I0310 18:49:14.670173 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 10 18:49:14 crc kubenswrapper[4861]: I0310 18:49:14.670198 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 10 18:49:14 crc kubenswrapper[4861]: I0310 18:49:14.670222 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Mar 10 18:49:14 crc kubenswrapper[4861]: I0310 18:49:14.670239 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:49:14Z","lastTransitionTime":"2026-03-10T18:49:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Mar 10 18:49:14 crc kubenswrapper[4861]: I0310 18:49:14.714367 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 10 18:49:14 crc kubenswrapper[4861]: I0310 18:49:14.714420 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 10 18:49:14 crc kubenswrapper[4861]: I0310 18:49:14.714440 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 10 18:49:14 crc kubenswrapper[4861]: I0310 18:49:14.714462 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Mar 10 18:49:14 crc kubenswrapper[4861]: I0310 18:49:14.714479 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:49:14Z","lastTransitionTime":"2026-03-10T18:49:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 10 18:49:14 crc kubenswrapper[4861]: E0310 18:49:14.729018 4861 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-10T18:49:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-10T18:49:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:14Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-10T18:49:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-10T18:49:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:14Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"19532032-9073-404f-bda8-c4343aa30670\\\",\\\"systemUUID\\\":\\\"b4ef8d49-23f5-4cae-bbac-08586c607b9d\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Mar 10 18:49:14 crc kubenswrapper[4861]: I0310 18:49:14.733259 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 10 18:49:14 crc kubenswrapper[4861]: I0310 18:49:14.733305 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 10 18:49:14 crc kubenswrapper[4861]: I0310 18:49:14.733324 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 10 18:49:14 crc kubenswrapper[4861]: I0310 18:49:14.733346 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Mar 10 18:49:14 crc kubenswrapper[4861]: I0310 18:49:14.733363 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:49:14Z","lastTransitionTime":"2026-03-10T18:49:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 10 18:49:14 crc kubenswrapper[4861]: E0310 18:49:14.744451 4861 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-10T18:49:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-10T18:49:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:14Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-10T18:49:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-10T18:49:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:14Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"19532032-9073-404f-bda8-c4343aa30670\\\",\\\"systemUUID\\\":\\\"b4ef8d49-23f5-4cae-bbac-08586c607b9d\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 10 18:49:14 crc kubenswrapper[4861]: I0310 18:49:14.748879 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:49:14 crc kubenswrapper[4861]: I0310 18:49:14.749582 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:49:14 crc kubenswrapper[4861]: I0310 18:49:14.749823 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:49:14 crc kubenswrapper[4861]: I0310 18:49:14.750231 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 10 18:49:14 crc kubenswrapper[4861]: I0310 18:49:14.750589 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:49:14Z","lastTransitionTime":"2026-03-10T18:49:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 10 18:49:14 crc kubenswrapper[4861]: E0310 18:49:14.760595 4861 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-10T18:49:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-10T18:49:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:14Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-10T18:49:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-10T18:49:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:14Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"19532032-9073-404f-bda8-c4343aa30670\\\",\\\"systemUUID\\\":\\\"b4ef8d49-23f5-4cae-bbac-08586c607b9d\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 10 18:49:14 crc kubenswrapper[4861]: I0310 18:49:14.765530 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:49:14 crc kubenswrapper[4861]: I0310 18:49:14.765591 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:49:14 crc kubenswrapper[4861]: I0310 18:49:14.765608 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:49:14 crc kubenswrapper[4861]: I0310 18:49:14.765631 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 10 18:49:14 crc kubenswrapper[4861]: I0310 18:49:14.765649 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:49:14Z","lastTransitionTime":"2026-03-10T18:49:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 10 18:49:14 crc kubenswrapper[4861]: E0310 18:49:14.781060 4861 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-10T18:49:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-10T18:49:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:14Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-10T18:49:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-10T18:49:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:14Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"19532032-9073-404f-bda8-c4343aa30670\\\",\\\"systemUUID\\\":\\\"b4ef8d49-23f5-4cae-bbac-08586c607b9d\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 10 18:49:14 crc kubenswrapper[4861]: I0310 18:49:14.796009 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:49:14 crc kubenswrapper[4861]: I0310 18:49:14.796057 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:49:14 crc kubenswrapper[4861]: I0310 18:49:14.796074 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:49:14 crc kubenswrapper[4861]: I0310 18:49:14.796096 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 10 18:49:14 crc kubenswrapper[4861]: I0310 18:49:14.796117 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:49:14Z","lastTransitionTime":"2026-03-10T18:49:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 10 18:49:14 crc kubenswrapper[4861]: E0310 18:49:14.813559 4861 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-10T18:49:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-10T18:49:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:14Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-10T18:49:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-10T18:49:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:14Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"19532032-9073-404f-bda8-c4343aa30670\\\",\\\"systemUUID\\\":\\\"b4ef8d49-23f5-4cae-bbac-08586c607b9d\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 10 18:49:14 crc kubenswrapper[4861]: E0310 18:49:14.813802 4861 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Mar 10 18:49:14 crc kubenswrapper[4861]: I0310 18:49:14.815942 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:49:14 crc kubenswrapper[4861]: I0310 18:49:14.815983 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:49:14 crc kubenswrapper[4861]: I0310 18:49:14.816000 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:49:14 crc kubenswrapper[4861]: I0310 18:49:14.816023 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 10 18:49:14 crc kubenswrapper[4861]: I0310 18:49:14.816040 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:49:14Z","lastTransitionTime":"2026-03-10T18:49:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 10 18:49:14 crc kubenswrapper[4861]: I0310 18:49:14.919276 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:49:14 crc kubenswrapper[4861]: I0310 18:49:14.919323 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:49:14 crc kubenswrapper[4861]: I0310 18:49:14.919340 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:49:14 crc kubenswrapper[4861]: I0310 18:49:14.919363 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 10 18:49:14 crc kubenswrapper[4861]: I0310 18:49:14.919379 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:49:14Z","lastTransitionTime":"2026-03-10T18:49:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 10 18:49:15 crc kubenswrapper[4861]: I0310 18:49:15.021508 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:49:15 crc kubenswrapper[4861]: I0310 18:49:15.021556 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:49:15 crc kubenswrapper[4861]: I0310 18:49:15.021571 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:49:15 crc kubenswrapper[4861]: I0310 18:49:15.021594 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 10 18:49:15 crc kubenswrapper[4861]: I0310 18:49:15.021612 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:49:15Z","lastTransitionTime":"2026-03-10T18:49:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 10 18:49:15 crc kubenswrapper[4861]: I0310 18:49:15.124040 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:49:15 crc kubenswrapper[4861]: I0310 18:49:15.124089 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:49:15 crc kubenswrapper[4861]: I0310 18:49:15.124108 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:49:15 crc kubenswrapper[4861]: I0310 18:49:15.124130 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 10 18:49:15 crc kubenswrapper[4861]: I0310 18:49:15.124147 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:49:15Z","lastTransitionTime":"2026-03-10T18:49:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 10 18:49:15 crc kubenswrapper[4861]: I0310 18:49:15.226979 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:49:15 crc kubenswrapper[4861]: I0310 18:49:15.227026 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:49:15 crc kubenswrapper[4861]: I0310 18:49:15.227042 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:49:15 crc kubenswrapper[4861]: I0310 18:49:15.227066 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 10 18:49:15 crc kubenswrapper[4861]: I0310 18:49:15.227083 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:49:15Z","lastTransitionTime":"2026-03-10T18:49:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 10 18:49:15 crc kubenswrapper[4861]: I0310 18:49:15.330207 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:49:15 crc kubenswrapper[4861]: I0310 18:49:15.330252 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:49:15 crc kubenswrapper[4861]: I0310 18:49:15.330268 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:49:15 crc kubenswrapper[4861]: I0310 18:49:15.330291 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 10 18:49:15 crc kubenswrapper[4861]: I0310 18:49:15.330309 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:49:15Z","lastTransitionTime":"2026-03-10T18:49:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 10 18:49:15 crc kubenswrapper[4861]: I0310 18:49:15.432950 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:49:15 crc kubenswrapper[4861]: I0310 18:49:15.433013 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:49:15 crc kubenswrapper[4861]: I0310 18:49:15.433029 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:49:15 crc kubenswrapper[4861]: I0310 18:49:15.433054 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 10 18:49:15 crc kubenswrapper[4861]: I0310 18:49:15.433074 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:49:15Z","lastTransitionTime":"2026-03-10T18:49:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 10 18:49:15 crc kubenswrapper[4861]: I0310 18:49:15.536398 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:49:15 crc kubenswrapper[4861]: I0310 18:49:15.536458 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:49:15 crc kubenswrapper[4861]: I0310 18:49:15.536475 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:49:15 crc kubenswrapper[4861]: I0310 18:49:15.536498 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 10 18:49:15 crc kubenswrapper[4861]: I0310 18:49:15.536515 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:49:15Z","lastTransitionTime":"2026-03-10T18:49:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 10 18:49:15 crc kubenswrapper[4861]: I0310 18:49:15.642524 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:49:15 crc kubenswrapper[4861]: I0310 18:49:15.642598 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:49:15 crc kubenswrapper[4861]: I0310 18:49:15.642615 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:49:15 crc kubenswrapper[4861]: I0310 18:49:15.642641 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 10 18:49:15 crc kubenswrapper[4861]: I0310 18:49:15.642662 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:49:15Z","lastTransitionTime":"2026-03-10T18:49:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 10 18:49:15 crc kubenswrapper[4861]: I0310 18:49:15.745466 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:49:15 crc kubenswrapper[4861]: I0310 18:49:15.745931 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:49:15 crc kubenswrapper[4861]: I0310 18:49:15.746108 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:49:15 crc kubenswrapper[4861]: I0310 18:49:15.746265 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 10 18:49:15 crc kubenswrapper[4861]: I0310 18:49:15.746405 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:49:15Z","lastTransitionTime":"2026-03-10T18:49:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 10 18:49:15 crc kubenswrapper[4861]: I0310 18:49:15.849774 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:49:15 crc kubenswrapper[4861]: I0310 18:49:15.849814 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:49:15 crc kubenswrapper[4861]: I0310 18:49:15.849824 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:49:15 crc kubenswrapper[4861]: I0310 18:49:15.849841 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 10 18:49:15 crc kubenswrapper[4861]: I0310 18:49:15.849852 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:49:15Z","lastTransitionTime":"2026-03-10T18:49:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 10 18:49:15 crc kubenswrapper[4861]: I0310 18:49:15.952739 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:49:15 crc kubenswrapper[4861]: I0310 18:49:15.952797 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:49:15 crc kubenswrapper[4861]: I0310 18:49:15.952815 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:49:15 crc kubenswrapper[4861]: I0310 18:49:15.952843 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 10 18:49:15 crc kubenswrapper[4861]: I0310 18:49:15.952862 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:49:15Z","lastTransitionTime":"2026-03-10T18:49:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 10 18:49:15 crc kubenswrapper[4861]: I0310 18:49:15.957046 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 10 18:49:15 crc kubenswrapper[4861]: I0310 18:49:15.957077 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 10 18:49:15 crc kubenswrapper[4861]: E0310 18:49:15.957210 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 10 18:49:15 crc kubenswrapper[4861]: I0310 18:49:15.957260 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 10 18:49:15 crc kubenswrapper[4861]: E0310 18:49:15.957352 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 10 18:49:15 crc kubenswrapper[4861]: E0310 18:49:15.957428 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 10 18:49:16 crc kubenswrapper[4861]: I0310 18:49:16.055283 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:49:16 crc kubenswrapper[4861]: I0310 18:49:16.055375 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:49:16 crc kubenswrapper[4861]: I0310 18:49:16.055393 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:49:16 crc kubenswrapper[4861]: I0310 18:49:16.055415 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 10 18:49:16 crc kubenswrapper[4861]: I0310 18:49:16.055431 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:49:16Z","lastTransitionTime":"2026-03-10T18:49:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 10 18:49:16 crc kubenswrapper[4861]: I0310 18:49:16.158362 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:49:16 crc kubenswrapper[4861]: I0310 18:49:16.158437 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:49:16 crc kubenswrapper[4861]: I0310 18:49:16.158455 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:49:16 crc kubenswrapper[4861]: I0310 18:49:16.158928 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 10 18:49:16 crc kubenswrapper[4861]: I0310 18:49:16.158977 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:49:16Z","lastTransitionTime":"2026-03-10T18:49:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 10 18:49:16 crc kubenswrapper[4861]: I0310 18:49:16.261649 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:49:16 crc kubenswrapper[4861]: I0310 18:49:16.261703 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:49:16 crc kubenswrapper[4861]: I0310 18:49:16.261745 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:49:16 crc kubenswrapper[4861]: I0310 18:49:16.261766 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 10 18:49:16 crc kubenswrapper[4861]: I0310 18:49:16.261785 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:49:16Z","lastTransitionTime":"2026-03-10T18:49:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 10 18:49:16 crc kubenswrapper[4861]: I0310 18:49:16.364973 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:49:16 crc kubenswrapper[4861]: I0310 18:49:16.365021 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:49:16 crc kubenswrapper[4861]: I0310 18:49:16.365037 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:49:16 crc kubenswrapper[4861]: I0310 18:49:16.365059 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 10 18:49:16 crc kubenswrapper[4861]: I0310 18:49:16.365081 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:49:16Z","lastTransitionTime":"2026-03-10T18:49:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 10 18:49:16 crc kubenswrapper[4861]: I0310 18:49:16.468058 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:49:16 crc kubenswrapper[4861]: I0310 18:49:16.468130 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:49:16 crc kubenswrapper[4861]: I0310 18:49:16.468154 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:49:16 crc kubenswrapper[4861]: I0310 18:49:16.468179 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 10 18:49:16 crc kubenswrapper[4861]: I0310 18:49:16.468197 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:49:16Z","lastTransitionTime":"2026-03-10T18:49:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 10 18:49:16 crc kubenswrapper[4861]: I0310 18:49:16.571304 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:49:16 crc kubenswrapper[4861]: I0310 18:49:16.571356 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:49:16 crc kubenswrapper[4861]: I0310 18:49:16.571372 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:49:16 crc kubenswrapper[4861]: I0310 18:49:16.571398 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 10 18:49:16 crc kubenswrapper[4861]: I0310 18:49:16.571415 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:49:16Z","lastTransitionTime":"2026-03-10T18:49:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 10 18:49:16 crc kubenswrapper[4861]: I0310 18:49:16.674266 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:49:16 crc kubenswrapper[4861]: I0310 18:49:16.674327 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:49:16 crc kubenswrapper[4861]: I0310 18:49:16.674343 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:49:16 crc kubenswrapper[4861]: I0310 18:49:16.674367 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 10 18:49:16 crc kubenswrapper[4861]: I0310 18:49:16.674391 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:49:16Z","lastTransitionTime":"2026-03-10T18:49:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 10 18:49:16 crc kubenswrapper[4861]: I0310 18:49:16.776940 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:49:16 crc kubenswrapper[4861]: I0310 18:49:16.776997 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:49:16 crc kubenswrapper[4861]: I0310 18:49:16.777015 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:49:16 crc kubenswrapper[4861]: I0310 18:49:16.777037 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 10 18:49:16 crc kubenswrapper[4861]: I0310 18:49:16.777056 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:49:16Z","lastTransitionTime":"2026-03-10T18:49:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 10 18:49:16 crc kubenswrapper[4861]: I0310 18:49:16.880945 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:49:16 crc kubenswrapper[4861]: I0310 18:49:16.881030 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:49:16 crc kubenswrapper[4861]: I0310 18:49:16.881057 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:49:16 crc kubenswrapper[4861]: I0310 18:49:16.881087 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 10 18:49:16 crc kubenswrapper[4861]: I0310 18:49:16.881105 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:49:16Z","lastTransitionTime":"2026-03-10T18:49:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 10 18:49:16 crc kubenswrapper[4861]: I0310 18:49:16.972050 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 10 18:49:16 crc kubenswrapper[4861]: I0310 18:49:16.983621 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:49:16 crc kubenswrapper[4861]: I0310 18:49:16.983687 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:49:16 crc kubenswrapper[4861]: I0310 18:49:16.983704 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:49:16 crc kubenswrapper[4861]: I0310 18:49:16.983756 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 10 18:49:16 crc kubenswrapper[4861]: I0310 18:49:16.983774 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:49:16Z","lastTransitionTime":"2026-03-10T18:49:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 10 18:49:16 crc kubenswrapper[4861]: I0310 18:49:16.991357 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 10 18:49:17 crc kubenswrapper[4861]: I0310 18:49:17.003475 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 10 18:49:17 crc kubenswrapper[4861]: I0310 18:49:17.017797 4861 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 10 18:49:17 crc kubenswrapper[4861]: I0310 18:49:17.031225 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 10 18:49:17 crc kubenswrapper[4861]: I0310 18:49:17.058659 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"952bf490-6587-4240-a832-feac082e7775\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf6fe1422055e59455ca71c1a22213f55312c80667526cf13dbf61f7ccea7c75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd7523ddf0ac38e3b59b767d7ef95f452f24c5914734d6f1ac0187f1bede5fcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b1d8f9a97293ae86fe0b8c9ab76600caf04291ec3ddd2e47a071805bef675da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://efd5039046658f4b595c747b6434cb0ef3befbf75874a6dab629cdbd6634524e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6234508294e7f87d3a8da0d3a1d8ecb96fffc3dbd5974e40a3bb7ceeee0d2a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34f62205737b2bb279cc7a0e2ffee213392c8135cca742b76e6f7ed44ddca754\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34f62205737b2bb279cc7a0e2ffee213392c8135cca742b76e6f7ed44ddca754\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-03-10T18:47:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-10T18:47:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://49cc08d4ed6e0fd87f4c957414e510f99511b9654a72324b707c165206c4979e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49cc08d4ed6e0fd87f4c957414e510f99511b9654a72324b707c165206c4979e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:47:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-10T18:47:39Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0995183765c106ba8369d3c05ac572596329eb1978876b770e327a430963ba9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0995183765c106ba8369d3c05ac572596329eb1978876b770e327a430963ba9b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:47:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-10T18:47:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:47:37Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 10 18:49:17 crc kubenswrapper[4861]: I0310 18:49:17.075542 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c65aa66-5db2-421b-ad46-0ff1e2b1cb22\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://52e87225b434b0800764a5c2306d8079c44bff105d02de78ab085b434f56031f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a8a6f58ea1d180f50a7ffde2b16f470901281a847bd85cc0bc8a62bbf9f8e70\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://484df0ad2e71b2faec0ed53537512b115e6d4ab7cd3212cd29309538bd013c51\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1665ca49c2c451e187b70bfc13ce0034d2c07b92943b18e77ae09cd6e5505557\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1665ca49c2c451e187b70bfc13ce0034d2c07b92943b18e77ae09cd6e5505557\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-10T18:48:50Z\\\",\\\"message\\\":\\\"le observer\\\\nW0310 18:48:50.587139 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0310 18:48:50.587315 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0310 18:48:50.588407 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-4176947974/tls.crt::/tmp/serving-cert-4176947974/tls.key\\\\\\\"\\\\nI0310 18:48:50.773986 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0310 18:48:50.776438 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0310 18:48:50.776455 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0310 18:48:50.776477 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0310 18:48:50.776482 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0310 18:48:50.783076 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0310 18:48:50.783088 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0310 18:48:50.783118 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0310 18:48:50.783134 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0310 18:48:50.783146 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0310 18:48:50.783157 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0310 18:48:50.783167 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0310 18:48:50.783173 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0310 18:48:50.784467 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-10T18:48:50Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44dde00a3ae562bbb5504d299475795cc38b22c2b6decba2ff15067bce7436df\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:39Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a14137bdfec242e37af20a572af2edea25fb1d8a1f9708f8d0193d1a7675b1bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a14137bdfec242e37af20a572af2edea25fb1d8a1f9708f8d0193d1a7675b1bc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:47:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-10T18:47:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:47:37Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 10 18:49:17 crc kubenswrapper[4861]: I0310 18:49:17.086610 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:49:17 crc kubenswrapper[4861]: I0310 18:49:17.086665 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:49:17 crc kubenswrapper[4861]: I0310 18:49:17.086683 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:49:17 crc kubenswrapper[4861]: I0310 18:49:17.086742 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 10 18:49:17 crc kubenswrapper[4861]: I0310 18:49:17.086766 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:49:17Z","lastTransitionTime":"2026-03-10T18:49:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 10 18:49:17 crc kubenswrapper[4861]: I0310 18:49:17.090652 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 10 18:49:17 crc kubenswrapper[4861]: I0310 18:49:17.190261 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:49:17 crc kubenswrapper[4861]: I0310 18:49:17.190497 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:49:17 crc kubenswrapper[4861]: I0310 18:49:17.190639 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:49:17 crc kubenswrapper[4861]: I0310 18:49:17.190826 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 10 18:49:17 crc kubenswrapper[4861]: I0310 18:49:17.190967 4861 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:49:17Z","lastTransitionTime":"2026-03-10T18:49:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 10 18:49:17 crc kubenswrapper[4861]: I0310 18:49:17.293505 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:49:17 crc kubenswrapper[4861]: I0310 18:49:17.293566 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:49:17 crc kubenswrapper[4861]: I0310 18:49:17.293578 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:49:17 crc kubenswrapper[4861]: I0310 18:49:17.293599 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 10 18:49:17 crc kubenswrapper[4861]: I0310 18:49:17.293612 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:49:17Z","lastTransitionTime":"2026-03-10T18:49:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 10 18:49:17 crc kubenswrapper[4861]: I0310 18:49:17.395575 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:49:17 crc kubenswrapper[4861]: I0310 18:49:17.395840 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:49:17 crc kubenswrapper[4861]: I0310 18:49:17.396025 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:49:17 crc kubenswrapper[4861]: I0310 18:49:17.396194 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 10 18:49:17 crc kubenswrapper[4861]: I0310 18:49:17.396259 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:49:17Z","lastTransitionTime":"2026-03-10T18:49:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 10 18:49:17 crc kubenswrapper[4861]: I0310 18:49:17.499424 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:49:17 crc kubenswrapper[4861]: I0310 18:49:17.499820 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:49:17 crc kubenswrapper[4861]: I0310 18:49:17.500003 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:49:17 crc kubenswrapper[4861]: I0310 18:49:17.500420 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 10 18:49:17 crc kubenswrapper[4861]: I0310 18:49:17.500607 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:49:17Z","lastTransitionTime":"2026-03-10T18:49:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 10 18:49:17 crc kubenswrapper[4861]: I0310 18:49:17.604314 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:49:17 crc kubenswrapper[4861]: I0310 18:49:17.604645 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:49:17 crc kubenswrapper[4861]: I0310 18:49:17.604804 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:49:17 crc kubenswrapper[4861]: I0310 18:49:17.604979 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 10 18:49:17 crc kubenswrapper[4861]: I0310 18:49:17.605320 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:49:17Z","lastTransitionTime":"2026-03-10T18:49:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 10 18:49:17 crc kubenswrapper[4861]: I0310 18:49:17.708619 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:49:17 crc kubenswrapper[4861]: I0310 18:49:17.708679 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:49:17 crc kubenswrapper[4861]: I0310 18:49:17.708697 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:49:17 crc kubenswrapper[4861]: I0310 18:49:17.708757 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 10 18:49:17 crc kubenswrapper[4861]: I0310 18:49:17.708775 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:49:17Z","lastTransitionTime":"2026-03-10T18:49:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 10 18:49:17 crc kubenswrapper[4861]: I0310 18:49:17.811523 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:49:17 crc kubenswrapper[4861]: I0310 18:49:17.811887 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:49:17 crc kubenswrapper[4861]: I0310 18:49:17.812107 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:49:17 crc kubenswrapper[4861]: I0310 18:49:17.812251 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 10 18:49:17 crc kubenswrapper[4861]: I0310 18:49:17.812381 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:49:17Z","lastTransitionTime":"2026-03-10T18:49:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 10 18:49:17 crc kubenswrapper[4861]: I0310 18:49:17.916365 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:49:17 crc kubenswrapper[4861]: I0310 18:49:17.916452 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:49:17 crc kubenswrapper[4861]: I0310 18:49:17.916476 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:49:17 crc kubenswrapper[4861]: I0310 18:49:17.916509 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 10 18:49:17 crc kubenswrapper[4861]: I0310 18:49:17.916531 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:49:17Z","lastTransitionTime":"2026-03-10T18:49:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 10 18:49:17 crc kubenswrapper[4861]: I0310 18:49:17.958035 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 10 18:49:17 crc kubenswrapper[4861]: I0310 18:49:17.958103 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 10 18:49:17 crc kubenswrapper[4861]: I0310 18:49:17.958171 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 10 18:49:17 crc kubenswrapper[4861]: E0310 18:49:17.958420 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 10 18:49:17 crc kubenswrapper[4861]: E0310 18:49:17.958617 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 10 18:49:17 crc kubenswrapper[4861]: E0310 18:49:17.958946 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 10 18:49:18 crc kubenswrapper[4861]: I0310 18:49:18.020374 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:49:18 crc kubenswrapper[4861]: I0310 18:49:18.020665 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:49:18 crc kubenswrapper[4861]: I0310 18:49:18.020865 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:49:18 crc kubenswrapper[4861]: I0310 18:49:18.021028 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 10 18:49:18 crc kubenswrapper[4861]: I0310 18:49:18.021179 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:49:18Z","lastTransitionTime":"2026-03-10T18:49:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 10 18:49:18 crc kubenswrapper[4861]: I0310 18:49:18.124153 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:49:18 crc kubenswrapper[4861]: I0310 18:49:18.124456 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:49:18 crc kubenswrapper[4861]: I0310 18:49:18.124631 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:49:18 crc kubenswrapper[4861]: I0310 18:49:18.124820 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 10 18:49:18 crc kubenswrapper[4861]: I0310 18:49:18.125094 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:49:18Z","lastTransitionTime":"2026-03-10T18:49:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 10 18:49:18 crc kubenswrapper[4861]: I0310 18:49:18.227668 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:49:18 crc kubenswrapper[4861]: I0310 18:49:18.227911 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:49:18 crc kubenswrapper[4861]: I0310 18:49:18.227952 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:49:18 crc kubenswrapper[4861]: I0310 18:49:18.227979 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 10 18:49:18 crc kubenswrapper[4861]: I0310 18:49:18.228003 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:49:18Z","lastTransitionTime":"2026-03-10T18:49:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 10 18:49:18 crc kubenswrapper[4861]: I0310 18:49:18.331143 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:49:18 crc kubenswrapper[4861]: I0310 18:49:18.331207 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:49:18 crc kubenswrapper[4861]: I0310 18:49:18.331224 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:49:18 crc kubenswrapper[4861]: I0310 18:49:18.331249 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 10 18:49:18 crc kubenswrapper[4861]: I0310 18:49:18.331268 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:49:18Z","lastTransitionTime":"2026-03-10T18:49:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 10 18:49:18 crc kubenswrapper[4861]: I0310 18:49:18.433493 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:49:18 crc kubenswrapper[4861]: I0310 18:49:18.433558 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:49:18 crc kubenswrapper[4861]: I0310 18:49:18.433575 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:49:18 crc kubenswrapper[4861]: I0310 18:49:18.433599 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 10 18:49:18 crc kubenswrapper[4861]: I0310 18:49:18.433620 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:49:18Z","lastTransitionTime":"2026-03-10T18:49:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 10 18:49:18 crc kubenswrapper[4861]: I0310 18:49:18.536835 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:49:18 crc kubenswrapper[4861]: I0310 18:49:18.536887 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:49:18 crc kubenswrapper[4861]: I0310 18:49:18.536900 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:49:18 crc kubenswrapper[4861]: I0310 18:49:18.536920 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 10 18:49:18 crc kubenswrapper[4861]: I0310 18:49:18.536932 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:49:18Z","lastTransitionTime":"2026-03-10T18:49:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 10 18:49:18 crc kubenswrapper[4861]: I0310 18:49:18.639170 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:49:18 crc kubenswrapper[4861]: I0310 18:49:18.639259 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:49:18 crc kubenswrapper[4861]: I0310 18:49:18.639284 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:49:18 crc kubenswrapper[4861]: I0310 18:49:18.639317 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 10 18:49:18 crc kubenswrapper[4861]: I0310 18:49:18.639338 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:49:18Z","lastTransitionTime":"2026-03-10T18:49:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 10 18:49:18 crc kubenswrapper[4861]: I0310 18:49:18.742337 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:49:18 crc kubenswrapper[4861]: I0310 18:49:18.742392 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:49:18 crc kubenswrapper[4861]: I0310 18:49:18.742409 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:49:18 crc kubenswrapper[4861]: I0310 18:49:18.742430 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 10 18:49:18 crc kubenswrapper[4861]: I0310 18:49:18.742445 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:49:18Z","lastTransitionTime":"2026-03-10T18:49:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 10 18:49:18 crc kubenswrapper[4861]: I0310 18:49:18.845050 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:49:18 crc kubenswrapper[4861]: I0310 18:49:18.845087 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:49:18 crc kubenswrapper[4861]: I0310 18:49:18.845096 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:49:18 crc kubenswrapper[4861]: I0310 18:49:18.845110 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 10 18:49:18 crc kubenswrapper[4861]: I0310 18:49:18.845118 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:49:18Z","lastTransitionTime":"2026-03-10T18:49:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 10 18:49:18 crc kubenswrapper[4861]: I0310 18:49:18.948107 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:49:18 crc kubenswrapper[4861]: I0310 18:49:18.948863 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:49:18 crc kubenswrapper[4861]: I0310 18:49:18.948901 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:49:18 crc kubenswrapper[4861]: I0310 18:49:18.948927 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 10 18:49:18 crc kubenswrapper[4861]: I0310 18:49:18.948945 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:49:18Z","lastTransitionTime":"2026-03-10T18:49:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 10 18:49:18 crc kubenswrapper[4861]: I0310 18:49:18.958465 4861 scope.go:117] "RemoveContainer" containerID="1665ca49c2c451e187b70bfc13ce0034d2c07b92943b18e77ae09cd6e5505557" Mar 10 18:49:18 crc kubenswrapper[4861]: E0310 18:49:18.958815 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Mar 10 18:49:18 crc kubenswrapper[4861]: E0310 18:49:18.959340 4861 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 10 18:49:18 crc kubenswrapper[4861]: container &Container{Name:webhook,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,Command:[/bin/bash -c set -xe Mar 10 18:49:18 crc kubenswrapper[4861]: if [[ -f "/env/_master" ]]; then Mar 10 18:49:18 crc kubenswrapper[4861]: set -o allexport Mar 10 18:49:18 crc kubenswrapper[4861]: source "/env/_master" Mar 10 18:49:18 crc kubenswrapper[4861]: set +o allexport Mar 10 18:49:18 crc kubenswrapper[4861]: fi Mar 10 18:49:18 crc kubenswrapper[4861]: # OVN-K will try to remove hybrid overlay node annotations even when the hybrid overlay is not enabled. 
Mar 10 18:49:18 crc kubenswrapper[4861]: # https://github.com/ovn-org/ovn-kubernetes/blob/ac6820df0b338a246f10f412cd5ec903bd234694/go-controller/pkg/ovn/master.go#L791 Mar 10 18:49:18 crc kubenswrapper[4861]: ho_enable="--enable-hybrid-overlay" Mar 10 18:49:18 crc kubenswrapper[4861]: echo "I$(date "+%m%d %H:%M:%S.%N") - network-node-identity - start webhook" Mar 10 18:49:18 crc kubenswrapper[4861]: # extra-allowed-user: service account `ovn-kubernetes-control-plane` Mar 10 18:49:18 crc kubenswrapper[4861]: # sets pod annotations in multi-homing layer3 network controller (cluster-manager) Mar 10 18:49:18 crc kubenswrapper[4861]: exec /usr/bin/ovnkube-identity --k8s-apiserver=https://api-int.crc.testing:6443 \ Mar 10 18:49:18 crc kubenswrapper[4861]: --webhook-cert-dir="/etc/webhook-cert" \ Mar 10 18:49:18 crc kubenswrapper[4861]: --webhook-host=127.0.0.1 \ Mar 10 18:49:18 crc kubenswrapper[4861]: --webhook-port=9743 \ Mar 10 18:49:18 crc kubenswrapper[4861]: ${ho_enable} \ Mar 10 18:49:18 crc kubenswrapper[4861]: --enable-interconnect \ Mar 10 18:49:18 crc kubenswrapper[4861]: --disable-approver \ Mar 10 18:49:18 crc kubenswrapper[4861]: --extra-allowed-user="system:serviceaccount:openshift-ovn-kubernetes:ovn-kubernetes-control-plane" \ Mar 10 18:49:18 crc kubenswrapper[4861]: --wait-for-kubernetes-api=200s \ Mar 10 18:49:18 crc kubenswrapper[4861]: --pod-admission-conditions="/var/run/ovnkube-identity-config/additional-pod-admission-cond.json" \ Mar 10 18:49:18 crc kubenswrapper[4861]: --loglevel="${LOGLEVEL}" Mar 10 18:49:18 crc kubenswrapper[4861]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOGLEVEL,Value:2,ValueFrom:nil,},EnvVar{Name:KUBERNETES_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: 
{{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:webhook-cert,ReadOnly:false,MountPath:/etc/webhook-cert/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovnkube-identity-cm,ReadOnly:false,MountPath:/var/run/ovnkube-identity-config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s2kz5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000470000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-node-identity-vrzqb_openshift-network-node-identity(ef543e1b-8068-4ea3-b32a-61027b32e95d): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Mar 10 18:49:18 crc kubenswrapper[4861]: > logger="UnhandledError" Mar 10 18:49:18 crc kubenswrapper[4861]: E0310 18:49:18.962774 4861 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 10 18:49:18 crc kubenswrapper[4861]: container &Container{Name:approver,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,Command:[/bin/bash -c set -xe Mar 10 18:49:18 crc 
kubenswrapper[4861]: if [[ -f "/env/_master" ]]; then Mar 10 18:49:18 crc kubenswrapper[4861]: set -o allexport Mar 10 18:49:18 crc kubenswrapper[4861]: source "/env/_master" Mar 10 18:49:18 crc kubenswrapper[4861]: set +o allexport Mar 10 18:49:18 crc kubenswrapper[4861]: fi Mar 10 18:49:18 crc kubenswrapper[4861]: Mar 10 18:49:18 crc kubenswrapper[4861]: echo "I$(date "+%m%d %H:%M:%S.%N") - network-node-identity - start approver" Mar 10 18:49:18 crc kubenswrapper[4861]: exec /usr/bin/ovnkube-identity --k8s-apiserver=https://api-int.crc.testing:6443 \ Mar 10 18:49:18 crc kubenswrapper[4861]: --disable-webhook \ Mar 10 18:49:18 crc kubenswrapper[4861]: --csr-acceptance-conditions="/var/run/ovnkube-identity-config/additional-cert-acceptance-cond.json" \ Mar 10 18:49:18 crc kubenswrapper[4861]: --loglevel="${LOGLEVEL}" Mar 10 18:49:18 crc kubenswrapper[4861]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOGLEVEL,Value:4,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovnkube-identity-cm,ReadOnly:false,MountPath:/var/run/ovnkube-identity-config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s2kz5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000470000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-node-identity-vrzqb_openshift-network-node-identity(ef543e1b-8068-4ea3-b32a-61027b32e95d): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Mar 10 18:49:18 crc kubenswrapper[4861]: > logger="UnhandledError" Mar 10 18:49:18 crc kubenswrapper[4861]: E0310 18:49:18.964795 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"webhook\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"approver\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-network-node-identity/network-node-identity-vrzqb" 
podUID="ef543e1b-8068-4ea3-b32a-61027b32e95d" Mar 10 18:49:19 crc kubenswrapper[4861]: I0310 18:49:19.052890 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:49:19 crc kubenswrapper[4861]: I0310 18:49:19.052946 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:49:19 crc kubenswrapper[4861]: I0310 18:49:19.052963 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:49:19 crc kubenswrapper[4861]: I0310 18:49:19.052987 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 10 18:49:19 crc kubenswrapper[4861]: I0310 18:49:19.053006 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:49:19Z","lastTransitionTime":"2026-03-10T18:49:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 10 18:49:19 crc kubenswrapper[4861]: I0310 18:49:19.156346 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:49:19 crc kubenswrapper[4861]: I0310 18:49:19.156429 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:49:19 crc kubenswrapper[4861]: I0310 18:49:19.156453 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:49:19 crc kubenswrapper[4861]: I0310 18:49:19.156487 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 10 18:49:19 crc kubenswrapper[4861]: I0310 18:49:19.156512 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:49:19Z","lastTransitionTime":"2026-03-10T18:49:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 10 18:49:19 crc kubenswrapper[4861]: I0310 18:49:19.284316 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:49:19 crc kubenswrapper[4861]: I0310 18:49:19.284370 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:49:19 crc kubenswrapper[4861]: I0310 18:49:19.284388 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:49:19 crc kubenswrapper[4861]: I0310 18:49:19.284417 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 10 18:49:19 crc kubenswrapper[4861]: I0310 18:49:19.284442 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:49:19Z","lastTransitionTime":"2026-03-10T18:49:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 10 18:49:19 crc kubenswrapper[4861]: I0310 18:49:19.386893 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:49:19 crc kubenswrapper[4861]: I0310 18:49:19.386952 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:49:19 crc kubenswrapper[4861]: I0310 18:49:19.386970 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:49:19 crc kubenswrapper[4861]: I0310 18:49:19.386994 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 10 18:49:19 crc kubenswrapper[4861]: I0310 18:49:19.387012 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:49:19Z","lastTransitionTime":"2026-03-10T18:49:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 10 18:49:19 crc kubenswrapper[4861]: I0310 18:49:19.489781 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:49:19 crc kubenswrapper[4861]: I0310 18:49:19.489843 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:49:19 crc kubenswrapper[4861]: I0310 18:49:19.489863 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:49:19 crc kubenswrapper[4861]: I0310 18:49:19.489887 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 10 18:49:19 crc kubenswrapper[4861]: I0310 18:49:19.489905 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:49:19Z","lastTransitionTime":"2026-03-10T18:49:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 10 18:49:19 crc kubenswrapper[4861]: I0310 18:49:19.592415 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:49:19 crc kubenswrapper[4861]: I0310 18:49:19.592476 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:49:19 crc kubenswrapper[4861]: I0310 18:49:19.592499 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:49:19 crc kubenswrapper[4861]: I0310 18:49:19.592523 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 10 18:49:19 crc kubenswrapper[4861]: I0310 18:49:19.592541 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:49:19Z","lastTransitionTime":"2026-03-10T18:49:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 10 18:49:19 crc kubenswrapper[4861]: I0310 18:49:19.696374 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:49:19 crc kubenswrapper[4861]: I0310 18:49:19.696439 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:49:19 crc kubenswrapper[4861]: I0310 18:49:19.696457 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:49:19 crc kubenswrapper[4861]: I0310 18:49:19.696486 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 10 18:49:19 crc kubenswrapper[4861]: I0310 18:49:19.696506 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:49:19Z","lastTransitionTime":"2026-03-10T18:49:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 10 18:49:19 crc kubenswrapper[4861]: I0310 18:49:19.799797 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:49:19 crc kubenswrapper[4861]: I0310 18:49:19.799862 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:49:19 crc kubenswrapper[4861]: I0310 18:49:19.799879 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:49:19 crc kubenswrapper[4861]: I0310 18:49:19.799906 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 10 18:49:19 crc kubenswrapper[4861]: I0310 18:49:19.799923 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:49:19Z","lastTransitionTime":"2026-03-10T18:49:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 10 18:49:19 crc kubenswrapper[4861]: I0310 18:49:19.903283 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:49:19 crc kubenswrapper[4861]: I0310 18:49:19.903350 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:49:19 crc kubenswrapper[4861]: I0310 18:49:19.903368 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:49:19 crc kubenswrapper[4861]: I0310 18:49:19.903392 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 10 18:49:19 crc kubenswrapper[4861]: I0310 18:49:19.903409 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:49:19Z","lastTransitionTime":"2026-03-10T18:49:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 10 18:49:19 crc kubenswrapper[4861]: I0310 18:49:19.958060 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 10 18:49:19 crc kubenswrapper[4861]: E0310 18:49:19.958243 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 10 18:49:19 crc kubenswrapper[4861]: I0310 18:49:19.958078 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 10 18:49:19 crc kubenswrapper[4861]: E0310 18:49:19.958448 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 10 18:49:19 crc kubenswrapper[4861]: I0310 18:49:19.958887 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 10 18:49:19 crc kubenswrapper[4861]: E0310 18:49:19.959129 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 10 18:49:19 crc kubenswrapper[4861]: E0310 18:49:19.961464 4861 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:iptables-alerter,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2,Command:[/iptables-alerter/iptables-alerter.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONTAINER_RUNTIME_ENDPOINT,Value:unix:///run/crio/crio.sock,ValueFrom:nil,},EnvVar{Name:ALERTER_POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{68157440 0} {} 65Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:iptables-alerter-script,ReadOnly:false,MountPath:/iptables-alerter,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-slash,ReadOnly:true,MountPath:/host,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rczfb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]V
olumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod iptables-alerter-4ln5h_openshift-network-operator(d75a4c96-2883-4a0b-bab2-0fab2b6c0b49): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Mar 10 18:49:19 crc kubenswrapper[4861]: E0310 18:49:19.962732 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"iptables-alerter\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-network-operator/iptables-alerter-4ln5h" podUID="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" Mar 10 18:49:20 crc kubenswrapper[4861]: I0310 18:49:20.007139 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:49:20 crc kubenswrapper[4861]: I0310 18:49:20.007195 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:49:20 crc kubenswrapper[4861]: I0310 18:49:20.007213 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:49:20 crc kubenswrapper[4861]: I0310 18:49:20.007237 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 10 18:49:20 crc kubenswrapper[4861]: I0310 18:49:20.007254 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:49:20Z","lastTransitionTime":"2026-03-10T18:49:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 10 18:49:20 crc kubenswrapper[4861]: I0310 18:49:20.110033 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:49:20 crc kubenswrapper[4861]: I0310 18:49:20.110095 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:49:20 crc kubenswrapper[4861]: I0310 18:49:20.110113 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:49:20 crc kubenswrapper[4861]: I0310 18:49:20.110137 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 10 18:49:20 crc kubenswrapper[4861]: I0310 18:49:20.110154 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:49:20Z","lastTransitionTime":"2026-03-10T18:49:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 10 18:49:20 crc kubenswrapper[4861]: I0310 18:49:20.213223 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:49:20 crc kubenswrapper[4861]: I0310 18:49:20.213291 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:49:20 crc kubenswrapper[4861]: I0310 18:49:20.213314 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:49:20 crc kubenswrapper[4861]: I0310 18:49:20.213345 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 10 18:49:20 crc kubenswrapper[4861]: I0310 18:49:20.213368 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:49:20Z","lastTransitionTime":"2026-03-10T18:49:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 10 18:49:20 crc kubenswrapper[4861]: I0310 18:49:20.317048 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:49:20 crc kubenswrapper[4861]: I0310 18:49:20.317257 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:49:20 crc kubenswrapper[4861]: I0310 18:49:20.317420 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:49:20 crc kubenswrapper[4861]: I0310 18:49:20.317562 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 10 18:49:20 crc kubenswrapper[4861]: I0310 18:49:20.317756 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:49:20Z","lastTransitionTime":"2026-03-10T18:49:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 10 18:49:20 crc kubenswrapper[4861]: I0310 18:49:20.420884 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:49:20 crc kubenswrapper[4861]: I0310 18:49:20.421248 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:49:20 crc kubenswrapper[4861]: I0310 18:49:20.421372 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:49:20 crc kubenswrapper[4861]: I0310 18:49:20.421491 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 10 18:49:20 crc kubenswrapper[4861]: I0310 18:49:20.421651 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:49:20Z","lastTransitionTime":"2026-03-10T18:49:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 10 18:49:20 crc kubenswrapper[4861]: I0310 18:49:20.524607 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:49:20 crc kubenswrapper[4861]: I0310 18:49:20.524665 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:49:20 crc kubenswrapper[4861]: I0310 18:49:20.524684 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:49:20 crc kubenswrapper[4861]: I0310 18:49:20.524756 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 10 18:49:20 crc kubenswrapper[4861]: I0310 18:49:20.524775 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:49:20Z","lastTransitionTime":"2026-03-10T18:49:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 10 18:49:20 crc kubenswrapper[4861]: I0310 18:49:20.628009 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:49:20 crc kubenswrapper[4861]: I0310 18:49:20.628059 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:49:20 crc kubenswrapper[4861]: I0310 18:49:20.628077 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:49:20 crc kubenswrapper[4861]: I0310 18:49:20.628102 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 10 18:49:20 crc kubenswrapper[4861]: I0310 18:49:20.628120 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:49:20Z","lastTransitionTime":"2026-03-10T18:49:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 10 18:49:20 crc kubenswrapper[4861]: I0310 18:49:20.731453 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:49:20 crc kubenswrapper[4861]: I0310 18:49:20.731515 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:49:20 crc kubenswrapper[4861]: I0310 18:49:20.731534 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:49:20 crc kubenswrapper[4861]: I0310 18:49:20.731558 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 10 18:49:20 crc kubenswrapper[4861]: I0310 18:49:20.731577 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:49:20Z","lastTransitionTime":"2026-03-10T18:49:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 10 18:49:20 crc kubenswrapper[4861]: I0310 18:49:20.834249 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:49:20 crc kubenswrapper[4861]: I0310 18:49:20.834544 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:49:20 crc kubenswrapper[4861]: I0310 18:49:20.834650 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:49:20 crc kubenswrapper[4861]: I0310 18:49:20.834750 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 10 18:49:20 crc kubenswrapper[4861]: I0310 18:49:20.834828 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:49:20Z","lastTransitionTime":"2026-03-10T18:49:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 10 18:49:20 crc kubenswrapper[4861]: I0310 18:49:20.938498 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:49:20 crc kubenswrapper[4861]: I0310 18:49:20.938569 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:49:20 crc kubenswrapper[4861]: I0310 18:49:20.938588 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:49:20 crc kubenswrapper[4861]: I0310 18:49:20.938612 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 10 18:49:20 crc kubenswrapper[4861]: I0310 18:49:20.938630 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:49:20Z","lastTransitionTime":"2026-03-10T18:49:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 10 18:49:20 crc kubenswrapper[4861]: E0310 18:49:20.959894 4861 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 10 18:49:20 crc kubenswrapper[4861]: container &Container{Name:network-operator,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,Command:[/bin/bash -c #!/bin/bash Mar 10 18:49:20 crc kubenswrapper[4861]: set -o allexport Mar 10 18:49:20 crc kubenswrapper[4861]: if [[ -f /etc/kubernetes/apiserver-url.env ]]; then Mar 10 18:49:20 crc kubenswrapper[4861]: source /etc/kubernetes/apiserver-url.env Mar 10 18:49:20 crc kubenswrapper[4861]: else Mar 10 18:49:20 crc kubenswrapper[4861]: echo "Error: /etc/kubernetes/apiserver-url.env is missing" Mar 10 18:49:20 crc kubenswrapper[4861]: exit 1 Mar 10 18:49:20 crc kubenswrapper[4861]: fi Mar 10 18:49:20 crc kubenswrapper[4861]: exec /usr/bin/cluster-network-operator start --listen=0.0.0.0:9104 Mar 10 18:49:20 crc kubenswrapper[4861]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:cno,HostPort:9104,ContainerPort:9104,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:RELEASE_VERSION,Value:4.18.1,ValueFrom:nil,},EnvVar{Name:KUBE_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b97554198294bf544fbc116c94a0a1fb2ec8a4de0e926bf9d9e320135f0bee6f,ValueFrom:nil,},EnvVar{Name:KUBE_RBAC_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09,ValueFrom:nil,},EnvVar{Name:MULTUS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26,ValueFrom:nil,},EnvVar{Name:MULTUS_ADMISSION_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317,ValueFrom:nil,},EnvVar{Name:CNI_PLUGINS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc,ValueFrom:nil,},EnvVar{Name:BOND_CNI_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78,ValueFrom:nil,},EnvVar{Name:WHEREABOUTS_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4,ValueFrom:nil,},EnvVar{Name:ROUTE_OVERRRIDE_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa,ValueFrom:nil,},EnvVar{Name:MULTUS_NETWORKPOLICY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:23f833d3738d68706eb2f2868bd76bd71cee016cffa6faf5f045a60cc8c6eddd,ValueFrom:nil,},EnvVar{Name:OVN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,ValueFrom:nil,},EnvVar{Name:OVN_NB_RAFT_ELECTION_TIMER,Value:10,ValueFrom:nil,},
EnvVar{Name:OVN_SB_RAFT_ELECTION_TIMER,Value:16,ValueFrom:nil,},EnvVar{Name:OVN_NORTHD_PROBE_INTERVAL,Value:10000,ValueFrom:nil,},EnvVar{Name:OVN_CONTROLLER_INACTIVITY_PROBE,Value:180000,ValueFrom:nil,},EnvVar{Name:OVN_NB_INACTIVITY_PROBE,Value:60000,ValueFrom:nil,},EnvVar{Name:EGRESS_ROUTER_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c,ValueFrom:nil,},EnvVar{Name:NETWORK_METRICS_DAEMON_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_SOURCE_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_TARGET_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,ValueFrom:nil,},EnvVar{Name:NETWORK_OPERATOR_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,ValueFrom:nil,},EnvVar{Name:CLOUD_NETWORK_CONFIG_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8048f1cb0be521f09749c0a489503cd56d85b68c6ca93380e082cfd693cd97a8,ValueFrom:nil,},EnvVar{Name:CLI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2,ValueFrom:nil,},EnvVar{Name:FRR_K8S_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5dbf844e49bb46b78586930149e5e5f5dc121014c8afd10fe36f3651967cc256,ValueFrom:nil,},EnvVar{Name:NETWORKING_CONSOLE_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd,ValueFrom:nil,},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFi
eldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:host-etc-kube,ReadOnly:true,MountPath:/etc/kubernetes,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:metrics-tls,ReadOnly:false,MountPath:/var/run/secrets/serving-cert,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rdwmf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-operator-58b4c7f79c-55gtf_openshift-network-operator(37a5e44f-9a88-4405-be8a-b645485e7312): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Mar 10 18:49:20 crc kubenswrapper[4861]: > logger="UnhandledError" Mar 10 18:49:20 crc kubenswrapper[4861]: E0310 18:49:20.961625 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"network-operator\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" podUID="37a5e44f-9a88-4405-be8a-b645485e7312" Mar 10 18:49:21 crc kubenswrapper[4861]: I0310 18:49:21.042287 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:49:21 crc kubenswrapper[4861]: I0310 
18:49:21.042666 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:49:21 crc kubenswrapper[4861]: I0310 18:49:21.042870 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:49:21 crc kubenswrapper[4861]: I0310 18:49:21.043012 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 10 18:49:21 crc kubenswrapper[4861]: I0310 18:49:21.043144 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:49:21Z","lastTransitionTime":"2026-03-10T18:49:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 10 18:49:21 crc kubenswrapper[4861]: I0310 18:49:21.146844 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:49:21 crc kubenswrapper[4861]: I0310 18:49:21.147278 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:49:21 crc kubenswrapper[4861]: I0310 18:49:21.147462 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:49:21 crc kubenswrapper[4861]: I0310 18:49:21.147662 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 10 18:49:21 crc kubenswrapper[4861]: I0310 18:49:21.147874 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:49:21Z","lastTransitionTime":"2026-03-10T18:49:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 10 18:49:21 crc kubenswrapper[4861]: I0310 18:49:21.251376 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:49:21 crc kubenswrapper[4861]: I0310 18:49:21.251430 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:49:21 crc kubenswrapper[4861]: I0310 18:49:21.251449 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:49:21 crc kubenswrapper[4861]: I0310 18:49:21.251474 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 10 18:49:21 crc kubenswrapper[4861]: I0310 18:49:21.251496 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:49:21Z","lastTransitionTime":"2026-03-10T18:49:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 10 18:49:21 crc kubenswrapper[4861]: I0310 18:49:21.255101 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-b87lw"] Mar 10 18:49:21 crc kubenswrapper[4861]: I0310 18:49:21.255529 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/node-resolver-b87lw" Mar 10 18:49:21 crc kubenswrapper[4861]: I0310 18:49:21.258627 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Mar 10 18:49:21 crc kubenswrapper[4861]: I0310 18:49:21.259314 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Mar 10 18:49:21 crc kubenswrapper[4861]: I0310 18:49:21.259365 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Mar 10 18:49:21 crc kubenswrapper[4861]: I0310 18:49:21.273848 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 10 18:49:21 crc kubenswrapper[4861]: I0310 18:49:21.288418 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 10 18:49:21 crc kubenswrapper[4861]: I0310 18:49:21.304247 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 10 18:49:21 crc kubenswrapper[4861]: I0310 18:49:21.331502 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"952bf490-6587-4240-a832-feac082e7775\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf6fe1422055e59455ca71c1a22213f55312c80667526cf13dbf61f7ccea7c75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd7523ddf0ac38e3b59b767d7ef95f452f24c5914734d6f1ac0187f1bede5fcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b1d8f9a97293ae86fe0b8c9ab76600caf04291ec3ddd2e47a071805bef675da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://efd5039046658f4b595c747b6434cb0ef3befbf75874a6dab629cdbd6634524e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6234508294e7f87d3a8da0d3a1d8ecb96fffc3dbd5974e40a3bb7ceeee0d2a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34f62205737b2bb279cc7a0e2ffee213392c8135cca742b76e6f7ed44ddca754\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34f62205737b2bb279cc7a0e2ffee213392c8135cca742b76e6f7ed44ddca754\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-03-10T18:47:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-10T18:47:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://49cc08d4ed6e0fd87f4c957414e510f99511b9654a72324b707c165206c4979e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49cc08d4ed6e0fd87f4c957414e510f99511b9654a72324b707c165206c4979e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:47:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-10T18:47:39Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0995183765c106ba8369d3c05ac572596329eb1978876b770e327a430963ba9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0995183765c106ba8369d3c05ac572596329eb1978876b770e327a430963ba9b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:47:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-10T18:47:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:47:37Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 10 18:49:21 crc kubenswrapper[4861]: I0310 18:49:21.352559 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c65aa66-5db2-421b-ad46-0ff1e2b1cb22\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://52e87225b434b0800764a5c2306d8079c44bff105d02de78ab085b434f56031f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a8a6f58ea1d180f50a7ffde2b16f470901281a847bd85cc0bc8a62bbf9f8e70\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://484df0ad2e71b2faec0ed53537512b115e6d4ab7cd3212cd29309538bd013c51\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1665ca49c2c451e187b70bfc13ce0034d2c07b92943b18e77ae09cd6e5505557\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1665ca49c2c451e187b70bfc13ce0034d2c07b92943b18e77ae09cd6e5505557\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-10T18:48:50Z\\\",\\\"message\\\":\\\"le observer\\\\nW0310 18:48:50.587139 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0310 18:48:50.587315 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0310 18:48:50.588407 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-4176947974/tls.crt::/tmp/serving-cert-4176947974/tls.key\\\\\\\"\\\\nI0310 18:48:50.773986 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0310 18:48:50.776438 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0310 18:48:50.776455 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0310 18:48:50.776477 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0310 18:48:50.776482 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0310 18:48:50.783076 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0310 18:48:50.783088 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0310 18:48:50.783118 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0310 18:48:50.783134 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0310 18:48:50.783146 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0310 18:48:50.783157 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0310 18:48:50.783167 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0310 18:48:50.783173 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0310 18:48:50.784467 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-10T18:48:50Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44dde00a3ae562bbb5504d299475795cc38b22c2b6decba2ff15067bce7436df\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:39Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a14137bdfec242e37af20a572af2edea25fb1d8a1f9708f8d0193d1a7675b1bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a14137bdfec242e37af20a572af2edea25fb1d8a1f9708f8d0193d1a7675b1bc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:47:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-10T18:47:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:47:37Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 10 18:49:21 crc kubenswrapper[4861]: I0310 18:49:21.354895 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:49:21 crc kubenswrapper[4861]: I0310 18:49:21.354946 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:49:21 crc kubenswrapper[4861]: I0310 18:49:21.354974 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:49:21 crc kubenswrapper[4861]: I0310 18:49:21.355007 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 10 18:49:21 crc kubenswrapper[4861]: I0310 18:49:21.355031 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:49:21Z","lastTransitionTime":"2026-03-10T18:49:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 10 18:49:21 crc kubenswrapper[4861]: I0310 18:49:21.355986 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5t7pf\" (UniqueName: \"kubernetes.io/projected/e474cdc9-b374-49a6-aece-afa19f8d5ee6-kube-api-access-5t7pf\") pod \"node-resolver-b87lw\" (UID: \"e474cdc9-b374-49a6-aece-afa19f8d5ee6\") " pod="openshift-dns/node-resolver-b87lw" Mar 10 18:49:21 crc kubenswrapper[4861]: I0310 18:49:21.356095 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/e474cdc9-b374-49a6-aece-afa19f8d5ee6-hosts-file\") pod \"node-resolver-b87lw\" (UID: \"e474cdc9-b374-49a6-aece-afa19f8d5ee6\") " pod="openshift-dns/node-resolver-b87lw" Mar 10 18:49:21 crc kubenswrapper[4861]: I0310 18:49:21.365490 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-b87lw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e474cdc9-b374-49a6-aece-afa19f8d5ee6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:21Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5t7pf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:49:21Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-b87lw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 10 18:49:21 crc kubenswrapper[4861]: I0310 18:49:21.380907 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 10 18:49:21 crc kubenswrapper[4861]: I0310 18:49:21.397063 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 10 18:49:21 crc kubenswrapper[4861]: I0310 18:49:21.411955 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 10 18:49:21 crc kubenswrapper[4861]: I0310 18:49:21.456789 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5t7pf\" (UniqueName: 
\"kubernetes.io/projected/e474cdc9-b374-49a6-aece-afa19f8d5ee6-kube-api-access-5t7pf\") pod \"node-resolver-b87lw\" (UID: \"e474cdc9-b374-49a6-aece-afa19f8d5ee6\") " pod="openshift-dns/node-resolver-b87lw" Mar 10 18:49:21 crc kubenswrapper[4861]: I0310 18:49:21.457186 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/e474cdc9-b374-49a6-aece-afa19f8d5ee6-hosts-file\") pod \"node-resolver-b87lw\" (UID: \"e474cdc9-b374-49a6-aece-afa19f8d5ee6\") " pod="openshift-dns/node-resolver-b87lw" Mar 10 18:49:21 crc kubenswrapper[4861]: I0310 18:49:21.457314 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:49:21 crc kubenswrapper[4861]: I0310 18:49:21.457357 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:49:21 crc kubenswrapper[4861]: I0310 18:49:21.457374 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:49:21 crc kubenswrapper[4861]: I0310 18:49:21.457399 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 10 18:49:21 crc kubenswrapper[4861]: I0310 18:49:21.457422 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:49:21Z","lastTransitionTime":"2026-03-10T18:49:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 10 18:49:21 crc kubenswrapper[4861]: I0310 18:49:21.457479 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/e474cdc9-b374-49a6-aece-afa19f8d5ee6-hosts-file\") pod \"node-resolver-b87lw\" (UID: \"e474cdc9-b374-49a6-aece-afa19f8d5ee6\") " pod="openshift-dns/node-resolver-b87lw" Mar 10 18:49:21 crc kubenswrapper[4861]: I0310 18:49:21.484118 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5t7pf\" (UniqueName: \"kubernetes.io/projected/e474cdc9-b374-49a6-aece-afa19f8d5ee6-kube-api-access-5t7pf\") pod \"node-resolver-b87lw\" (UID: \"e474cdc9-b374-49a6-aece-afa19f8d5ee6\") " pod="openshift-dns/node-resolver-b87lw" Mar 10 18:49:21 crc kubenswrapper[4861]: I0310 18:49:21.560082 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:49:21 crc kubenswrapper[4861]: I0310 18:49:21.560146 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:49:21 crc kubenswrapper[4861]: I0310 18:49:21.560165 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:49:21 crc kubenswrapper[4861]: I0310 18:49:21.560189 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 10 18:49:21 crc kubenswrapper[4861]: I0310 18:49:21.560205 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:49:21Z","lastTransitionTime":"2026-03-10T18:49:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 10 18:49:21 crc kubenswrapper[4861]: I0310 18:49:21.579419 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-b87lw" Mar 10 18:49:21 crc kubenswrapper[4861]: W0310 18:49:21.598386 4861 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode474cdc9_b374_49a6_aece_afa19f8d5ee6.slice/crio-63bebeaaed4736ca40bc041ab390c060ff017fb22a5236216fa03c0532f2105c WatchSource:0}: Error finding container 63bebeaaed4736ca40bc041ab390c060ff017fb22a5236216fa03c0532f2105c: Status 404 returned error can't find the container with id 63bebeaaed4736ca40bc041ab390c060ff017fb22a5236216fa03c0532f2105c Mar 10 18:49:21 crc kubenswrapper[4861]: E0310 18:49:21.601562 4861 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 10 18:49:21 crc kubenswrapper[4861]: container &Container{Name:dns-node-resolver,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2,Command:[/bin/bash -c #!/bin/bash Mar 10 18:49:21 crc kubenswrapper[4861]: set -uo pipefail Mar 10 18:49:21 crc kubenswrapper[4861]: Mar 10 18:49:21 crc kubenswrapper[4861]: trap 'jobs -p | xargs kill || true; wait; exit 0' TERM Mar 10 18:49:21 crc kubenswrapper[4861]: Mar 10 18:49:21 crc kubenswrapper[4861]: OPENSHIFT_MARKER="openshift-generated-node-resolver" Mar 10 18:49:21 crc kubenswrapper[4861]: HOSTS_FILE="/etc/hosts" Mar 10 18:49:21 crc kubenswrapper[4861]: TEMP_FILE="/etc/hosts.tmp" Mar 10 18:49:21 crc kubenswrapper[4861]: Mar 10 18:49:21 crc kubenswrapper[4861]: IFS=', ' read -r -a services <<< "${SERVICES}" Mar 10 18:49:21 crc kubenswrapper[4861]: Mar 10 18:49:21 crc kubenswrapper[4861]: # Make a temporary file with the old hosts file's attributes. Mar 10 18:49:21 crc kubenswrapper[4861]: if ! 
cp -f --attributes-only "${HOSTS_FILE}" "${TEMP_FILE}"; then Mar 10 18:49:21 crc kubenswrapper[4861]: echo "Failed to preserve hosts file. Exiting." Mar 10 18:49:21 crc kubenswrapper[4861]: exit 1 Mar 10 18:49:21 crc kubenswrapper[4861]: fi Mar 10 18:49:21 crc kubenswrapper[4861]: Mar 10 18:49:21 crc kubenswrapper[4861]: while true; do Mar 10 18:49:21 crc kubenswrapper[4861]: declare -A svc_ips Mar 10 18:49:21 crc kubenswrapper[4861]: for svc in "${services[@]}"; do Mar 10 18:49:21 crc kubenswrapper[4861]: # Fetch service IP from cluster dns if present. We make several tries Mar 10 18:49:21 crc kubenswrapper[4861]: # to do it: IPv4, IPv6, IPv4 over TCP and IPv6 over TCP. The two last ones Mar 10 18:49:21 crc kubenswrapper[4861]: # are for deployments with Kuryr on older OpenStack (OSP13) - those do not Mar 10 18:49:21 crc kubenswrapper[4861]: # support UDP loadbalancers and require reaching DNS through TCP. Mar 10 18:49:21 crc kubenswrapper[4861]: cmds=('dig -t A @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"' Mar 10 18:49:21 crc kubenswrapper[4861]: 'dig -t AAAA @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"' Mar 10 18:49:21 crc kubenswrapper[4861]: 'dig -t A +tcp +retry=0 @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"' Mar 10 18:49:21 crc kubenswrapper[4861]: 'dig -t AAAA +tcp +retry=0 @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"') Mar 10 18:49:21 crc kubenswrapper[4861]: for i in ${!cmds[*]} Mar 10 18:49:21 crc kubenswrapper[4861]: do Mar 10 18:49:21 crc kubenswrapper[4861]: ips=($(eval "${cmds[i]}")) Mar 10 18:49:21 crc kubenswrapper[4861]: if [[ "$?" 
-eq 0 && "${#ips[@]}" -ne 0 ]]; then Mar 10 18:49:21 crc kubenswrapper[4861]: svc_ips["${svc}"]="${ips[@]}" Mar 10 18:49:21 crc kubenswrapper[4861]: break Mar 10 18:49:21 crc kubenswrapper[4861]: fi Mar 10 18:49:21 crc kubenswrapper[4861]: done Mar 10 18:49:21 crc kubenswrapper[4861]: done Mar 10 18:49:21 crc kubenswrapper[4861]: Mar 10 18:49:21 crc kubenswrapper[4861]: # Update /etc/hosts only if we get valid service IPs Mar 10 18:49:21 crc kubenswrapper[4861]: # We will not update /etc/hosts when there is coredns service outage or api unavailability Mar 10 18:49:21 crc kubenswrapper[4861]: # Stale entries could exist in /etc/hosts if the service is deleted Mar 10 18:49:21 crc kubenswrapper[4861]: if [[ -n "${svc_ips[*]-}" ]]; then Mar 10 18:49:21 crc kubenswrapper[4861]: # Build a new hosts file from /etc/hosts with our custom entries filtered out Mar 10 18:49:21 crc kubenswrapper[4861]: if ! sed --silent "/# ${OPENSHIFT_MARKER}/d; w ${TEMP_FILE}" "${HOSTS_FILE}"; then Mar 10 18:49:21 crc kubenswrapper[4861]: # Only continue rebuilding the hosts entries if its original content is preserved Mar 10 18:49:21 crc kubenswrapper[4861]: sleep 60 & wait Mar 10 18:49:21 crc kubenswrapper[4861]: continue Mar 10 18:49:21 crc kubenswrapper[4861]: fi Mar 10 18:49:21 crc kubenswrapper[4861]: Mar 10 18:49:21 crc kubenswrapper[4861]: # Append resolver entries for services Mar 10 18:49:21 crc kubenswrapper[4861]: rc=0 Mar 10 18:49:21 crc kubenswrapper[4861]: for svc in "${!svc_ips[@]}"; do Mar 10 18:49:21 crc kubenswrapper[4861]: for ip in ${svc_ips[${svc}]}; do Mar 10 18:49:21 crc kubenswrapper[4861]: echo "${ip} ${svc} ${svc}.${CLUSTER_DOMAIN} # ${OPENSHIFT_MARKER}" >> "${TEMP_FILE}" || rc=$? 
Mar 10 18:49:21 crc kubenswrapper[4861]: done Mar 10 18:49:21 crc kubenswrapper[4861]: done Mar 10 18:49:21 crc kubenswrapper[4861]: if [[ $rc -ne 0 ]]; then Mar 10 18:49:21 crc kubenswrapper[4861]: sleep 60 & wait Mar 10 18:49:21 crc kubenswrapper[4861]: continue Mar 10 18:49:21 crc kubenswrapper[4861]: fi Mar 10 18:49:21 crc kubenswrapper[4861]: Mar 10 18:49:21 crc kubenswrapper[4861]: Mar 10 18:49:21 crc kubenswrapper[4861]: # TODO: Update /etc/hosts atomically to avoid any inconsistent behavior Mar 10 18:49:21 crc kubenswrapper[4861]: # Replace /etc/hosts with our modified version if needed Mar 10 18:49:21 crc kubenswrapper[4861]: cmp "${TEMP_FILE}" "${HOSTS_FILE}" || cp -f "${TEMP_FILE}" "${HOSTS_FILE}" Mar 10 18:49:21 crc kubenswrapper[4861]: # TEMP_FILE is not removed to avoid file create/delete and attributes copy churn Mar 10 18:49:21 crc kubenswrapper[4861]: fi Mar 10 18:49:21 crc kubenswrapper[4861]: sleep 60 & wait Mar 10 18:49:21 crc kubenswrapper[4861]: unset svc_ips Mar 10 18:49:21 crc kubenswrapper[4861]: done Mar 10 18:49:21 crc kubenswrapper[4861]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:SERVICES,Value:image-registry.openshift-image-registry.svc,ValueFrom:nil,},EnvVar{Name:NAMESERVER,Value:10.217.4.10,ValueFrom:nil,},EnvVar{Name:CLUSTER_DOMAIN,Value:cluster.local,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{22020096 0} {} 21Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hosts-file,ReadOnly:false,MountPath:/etc/hosts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5t7pf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod node-resolver-b87lw_openshift-dns(e474cdc9-b374-49a6-aece-afa19f8d5ee6): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Mar 10 18:49:21 crc kubenswrapper[4861]: > logger="UnhandledError" Mar 10 18:49:21 crc kubenswrapper[4861]: E0310 18:49:21.603391 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dns-node-resolver\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-dns/node-resolver-b87lw" podUID="e474cdc9-b374-49a6-aece-afa19f8d5ee6" Mar 10 18:49:21 crc kubenswrapper[4861]: I0310 18:49:21.613087 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-6lblg"] Mar 10 18:49:21 crc kubenswrapper[4861]: I0310 18:49:21.613562 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-qttbr"] Mar 10 18:49:21 crc kubenswrapper[4861]: I0310 
18:49:21.614016 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-j2s27"] Mar 10 18:49:21 crc kubenswrapper[4861]: I0310 18:49:21.614527 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-6lblg" Mar 10 18:49:21 crc kubenswrapper[4861]: I0310 18:49:21.614783 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-qttbr" Mar 10 18:49:21 crc kubenswrapper[4861]: I0310 18:49:21.616381 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-j2s27" Mar 10 18:49:21 crc kubenswrapper[4861]: I0310 18:49:21.620871 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Mar 10 18:49:21 crc kubenswrapper[4861]: I0310 18:49:21.621207 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Mar 10 18:49:21 crc kubenswrapper[4861]: I0310 18:49:21.621278 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Mar 10 18:49:21 crc kubenswrapper[4861]: I0310 18:49:21.621595 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Mar 10 18:49:21 crc kubenswrapper[4861]: I0310 18:49:21.621630 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Mar 10 18:49:21 crc kubenswrapper[4861]: I0310 18:49:21.621734 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Mar 10 18:49:21 crc kubenswrapper[4861]: I0310 18:49:21.621746 4861 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Mar 10 18:49:21 crc kubenswrapper[4861]: I0310 18:49:21.621926 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Mar 10 18:49:21 crc kubenswrapper[4861]: I0310 18:49:21.622033 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Mar 10 18:49:21 crc kubenswrapper[4861]: I0310 18:49:21.622100 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Mar 10 18:49:21 crc kubenswrapper[4861]: I0310 18:49:21.622298 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Mar 10 18:49:21 crc kubenswrapper[4861]: I0310 18:49:21.622570 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Mar 10 18:49:21 crc kubenswrapper[4861]: I0310 18:49:21.632997 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-b87lw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e474cdc9-b374-49a6-aece-afa19f8d5ee6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:21Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5t7pf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:49:21Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-b87lw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 10 18:49:21 crc kubenswrapper[4861]: I0310 18:49:21.653027 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 10 18:49:21 crc kubenswrapper[4861]: I0310 18:49:21.659162 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/d1c251f4-6539-4aa1-8979-47e74495aca3-multus-cni-dir\") pod \"multus-6lblg\" (UID: \"d1c251f4-6539-4aa1-8979-47e74495aca3\") " pod="openshift-multus/multus-6lblg" Mar 10 18:49:21 crc kubenswrapper[4861]: I0310 18:49:21.659292 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/d1c251f4-6539-4aa1-8979-47e74495aca3-host-var-lib-cni-multus\") pod \"multus-6lblg\" (UID: \"d1c251f4-6539-4aa1-8979-47e74495aca3\") " pod="openshift-multus/multus-6lblg" Mar 10 18:49:21 crc kubenswrapper[4861]: I0310 18:49:21.659404 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/391f4bfa-b94c-4b25-8f06-a2f19f912194-system-cni-dir\") pod \"multus-additional-cni-plugins-j2s27\" (UID: \"391f4bfa-b94c-4b25-8f06-a2f19f912194\") " 
pod="openshift-multus/multus-additional-cni-plugins-j2s27" Mar 10 18:49:21 crc kubenswrapper[4861]: I0310 18:49:21.659504 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/391f4bfa-b94c-4b25-8f06-a2f19f912194-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-j2s27\" (UID: \"391f4bfa-b94c-4b25-8f06-a2f19f912194\") " pod="openshift-multus/multus-additional-cni-plugins-j2s27" Mar 10 18:49:21 crc kubenswrapper[4861]: I0310 18:49:21.659602 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vlddg\" (UniqueName: \"kubernetes.io/projected/391f4bfa-b94c-4b25-8f06-a2f19f912194-kube-api-access-vlddg\") pod \"multus-additional-cni-plugins-j2s27\" (UID: \"391f4bfa-b94c-4b25-8f06-a2f19f912194\") " pod="openshift-multus/multus-additional-cni-plugins-j2s27" Mar 10 18:49:21 crc kubenswrapper[4861]: I0310 18:49:21.659718 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/771189c2-452d-4204-a0b7-abfe9ba62bd0-proxy-tls\") pod \"machine-config-daemon-qttbr\" (UID: \"771189c2-452d-4204-a0b7-abfe9ba62bd0\") " pod="openshift-machine-config-operator/machine-config-daemon-qttbr" Mar 10 18:49:21 crc kubenswrapper[4861]: I0310 18:49:21.659809 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/391f4bfa-b94c-4b25-8f06-a2f19f912194-tuning-conf-dir\") pod \"multus-additional-cni-plugins-j2s27\" (UID: \"391f4bfa-b94c-4b25-8f06-a2f19f912194\") " pod="openshift-multus/multus-additional-cni-plugins-j2s27" Mar 10 18:49:21 crc kubenswrapper[4861]: I0310 18:49:21.659916 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: 
\"kubernetes.io/host-path/d1c251f4-6539-4aa1-8979-47e74495aca3-host-var-lib-kubelet\") pod \"multus-6lblg\" (UID: \"d1c251f4-6539-4aa1-8979-47e74495aca3\") " pod="openshift-multus/multus-6lblg" Mar 10 18:49:21 crc kubenswrapper[4861]: I0310 18:49:21.660022 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/d1c251f4-6539-4aa1-8979-47e74495aca3-multus-socket-dir-parent\") pod \"multus-6lblg\" (UID: \"d1c251f4-6539-4aa1-8979-47e74495aca3\") " pod="openshift-multus/multus-6lblg" Mar 10 18:49:21 crc kubenswrapper[4861]: I0310 18:49:21.660128 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/d1c251f4-6539-4aa1-8979-47e74495aca3-host-run-k8s-cni-cncf-io\") pod \"multus-6lblg\" (UID: \"d1c251f4-6539-4aa1-8979-47e74495aca3\") " pod="openshift-multus/multus-6lblg" Mar 10 18:49:21 crc kubenswrapper[4861]: I0310 18:49:21.660228 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/d1c251f4-6539-4aa1-8979-47e74495aca3-hostroot\") pod \"multus-6lblg\" (UID: \"d1c251f4-6539-4aa1-8979-47e74495aca3\") " pod="openshift-multus/multus-6lblg" Mar 10 18:49:21 crc kubenswrapper[4861]: I0310 18:49:21.660331 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/771189c2-452d-4204-a0b7-abfe9ba62bd0-rootfs\") pod \"machine-config-daemon-qttbr\" (UID: \"771189c2-452d-4204-a0b7-abfe9ba62bd0\") " pod="openshift-machine-config-operator/machine-config-daemon-qttbr" Mar 10 18:49:21 crc kubenswrapper[4861]: I0310 18:49:21.660422 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tng72\" (UniqueName: 
\"kubernetes.io/projected/771189c2-452d-4204-a0b7-abfe9ba62bd0-kube-api-access-tng72\") pod \"machine-config-daemon-qttbr\" (UID: \"771189c2-452d-4204-a0b7-abfe9ba62bd0\") " pod="openshift-machine-config-operator/machine-config-daemon-qttbr" Mar 10 18:49:21 crc kubenswrapper[4861]: I0310 18:49:21.660519 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/391f4bfa-b94c-4b25-8f06-a2f19f912194-cnibin\") pod \"multus-additional-cni-plugins-j2s27\" (UID: \"391f4bfa-b94c-4b25-8f06-a2f19f912194\") " pod="openshift-multus/multus-additional-cni-plugins-j2s27" Mar 10 18:49:21 crc kubenswrapper[4861]: I0310 18:49:21.660616 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/d1c251f4-6539-4aa1-8979-47e74495aca3-multus-conf-dir\") pod \"multus-6lblg\" (UID: \"d1c251f4-6539-4aa1-8979-47e74495aca3\") " pod="openshift-multus/multus-6lblg" Mar 10 18:49:21 crc kubenswrapper[4861]: I0310 18:49:21.660758 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/d1c251f4-6539-4aa1-8979-47e74495aca3-system-cni-dir\") pod \"multus-6lblg\" (UID: \"d1c251f4-6539-4aa1-8979-47e74495aca3\") " pod="openshift-multus/multus-6lblg" Mar 10 18:49:21 crc kubenswrapper[4861]: I0310 18:49:21.660892 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/d1c251f4-6539-4aa1-8979-47e74495aca3-cnibin\") pod \"multus-6lblg\" (UID: \"d1c251f4-6539-4aa1-8979-47e74495aca3\") " pod="openshift-multus/multus-6lblg" Mar 10 18:49:21 crc kubenswrapper[4861]: I0310 18:49:21.661010 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: 
\"kubernetes.io/host-path/d1c251f4-6539-4aa1-8979-47e74495aca3-host-var-lib-cni-bin\") pod \"multus-6lblg\" (UID: \"d1c251f4-6539-4aa1-8979-47e74495aca3\") " pod="openshift-multus/multus-6lblg" Mar 10 18:49:21 crc kubenswrapper[4861]: I0310 18:49:21.661149 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/391f4bfa-b94c-4b25-8f06-a2f19f912194-os-release\") pod \"multus-additional-cni-plugins-j2s27\" (UID: \"391f4bfa-b94c-4b25-8f06-a2f19f912194\") " pod="openshift-multus/multus-additional-cni-plugins-j2s27" Mar 10 18:49:21 crc kubenswrapper[4861]: I0310 18:49:21.661253 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/d1c251f4-6539-4aa1-8979-47e74495aca3-host-run-multus-certs\") pod \"multus-6lblg\" (UID: \"d1c251f4-6539-4aa1-8979-47e74495aca3\") " pod="openshift-multus/multus-6lblg" Mar 10 18:49:21 crc kubenswrapper[4861]: I0310 18:49:21.661376 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/d1c251f4-6539-4aa1-8979-47e74495aca3-os-release\") pod \"multus-6lblg\" (UID: \"d1c251f4-6539-4aa1-8979-47e74495aca3\") " pod="openshift-multus/multus-6lblg" Mar 10 18:49:21 crc kubenswrapper[4861]: I0310 18:49:21.661432 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/d1c251f4-6539-4aa1-8979-47e74495aca3-host-run-netns\") pod \"multus-6lblg\" (UID: \"d1c251f4-6539-4aa1-8979-47e74495aca3\") " pod="openshift-multus/multus-6lblg" Mar 10 18:49:21 crc kubenswrapper[4861]: I0310 18:49:21.661480 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: 
\"kubernetes.io/configmap/d1c251f4-6539-4aa1-8979-47e74495aca3-multus-daemon-config\") pod \"multus-6lblg\" (UID: \"d1c251f4-6539-4aa1-8979-47e74495aca3\") " pod="openshift-multus/multus-6lblg" Mar 10 18:49:21 crc kubenswrapper[4861]: I0310 18:49:21.661641 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d1c251f4-6539-4aa1-8979-47e74495aca3-etc-kubernetes\") pod \"multus-6lblg\" (UID: \"d1c251f4-6539-4aa1-8979-47e74495aca3\") " pod="openshift-multus/multus-6lblg" Mar 10 18:49:21 crc kubenswrapper[4861]: I0310 18:49:21.661786 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t2gvk\" (UniqueName: \"kubernetes.io/projected/d1c251f4-6539-4aa1-8979-47e74495aca3-kube-api-access-t2gvk\") pod \"multus-6lblg\" (UID: \"d1c251f4-6539-4aa1-8979-47e74495aca3\") " pod="openshift-multus/multus-6lblg" Mar 10 18:49:21 crc kubenswrapper[4861]: I0310 18:49:21.661901 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/d1c251f4-6539-4aa1-8979-47e74495aca3-cni-binary-copy\") pod \"multus-6lblg\" (UID: \"d1c251f4-6539-4aa1-8979-47e74495aca3\") " pod="openshift-multus/multus-6lblg" Mar 10 18:49:21 crc kubenswrapper[4861]: I0310 18:49:21.662012 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/771189c2-452d-4204-a0b7-abfe9ba62bd0-mcd-auth-proxy-config\") pod \"machine-config-daemon-qttbr\" (UID: \"771189c2-452d-4204-a0b7-abfe9ba62bd0\") " pod="openshift-machine-config-operator/machine-config-daemon-qttbr" Mar 10 18:49:21 crc kubenswrapper[4861]: I0310 18:49:21.662114 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: 
\"kubernetes.io/configmap/391f4bfa-b94c-4b25-8f06-a2f19f912194-cni-binary-copy\") pod \"multus-additional-cni-plugins-j2s27\" (UID: \"391f4bfa-b94c-4b25-8f06-a2f19f912194\") " pod="openshift-multus/multus-additional-cni-plugins-j2s27" Mar 10 18:49:21 crc kubenswrapper[4861]: I0310 18:49:21.663264 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:49:21 crc kubenswrapper[4861]: I0310 18:49:21.663327 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:49:21 crc kubenswrapper[4861]: I0310 18:49:21.663351 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:49:21 crc kubenswrapper[4861]: I0310 18:49:21.663379 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 10 18:49:21 crc kubenswrapper[4861]: I0310 18:49:21.663400 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:49:21Z","lastTransitionTime":"2026-03-10T18:49:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 10 18:49:21 crc kubenswrapper[4861]: I0310 18:49:21.670989 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 10 18:49:21 crc kubenswrapper[4861]: I0310 18:49:21.687014 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 10 18:49:21 crc kubenswrapper[4861]: I0310 18:49:21.704252 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c65aa66-5db2-421b-ad46-0ff1e2b1cb22\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://52e87225b434b0800764a5c2306d8079c44bff105d02de78ab085b434f56031f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a8a6f58ea1d180f50a7ffde2b16f470901281a847bd85cc0bc8a62bbf9f8e70\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://484df0ad2e71b2faec0ed53537512b115e6d4ab7cd3212cd29309538bd013c51\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1665ca49c2c451e187b70bfc13ce0034d2c07b92943b18e77ae09cd6e5505557\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1665ca49c2c451e187b70bfc13ce0034d2c07b92943b18e77ae09cd6e5505557\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-10T18:48:50Z\\\",\\\"message\\\":\\\"le observer\\\\nW0310 18:48:50.587139 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0310 18:48:50.587315 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0310 18:48:50.588407 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4176947974/tls.crt::/tmp/serving-cert-4176947974/tls.key\\\\\\\"\\\\nI0310 18:48:50.773986 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0310 18:48:50.776438 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0310 18:48:50.776455 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0310 18:48:50.776477 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0310 18:48:50.776482 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0310 18:48:50.783076 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0310 18:48:50.783088 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0310 18:48:50.783118 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0310 18:48:50.783134 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0310 18:48:50.783146 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0310 
18:48:50.783157 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0310 18:48:50.783167 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0310 18:48:50.783173 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0310 18:48:50.784467 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-10T18:48:50Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44dde00a3ae562bbb5504d299475795cc38b22c2b6decba2ff15067bce7436df\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:39Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a14137bdfec242e37af20a572af2edea25fb1d8a1f9708f8d0193d1a7675b1bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev
/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a14137bdfec242e37af20a572af2edea25fb1d8a1f9708f8d0193d1a7675b1bc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:47:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-10T18:47:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:47:37Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 10 18:49:21 crc kubenswrapper[4861]: I0310 18:49:21.720824 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-6lblg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d1c251f4-6539-4aa1-8979-47e74495aca3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2gvk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:49:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-6lblg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 10 18:49:21 crc kubenswrapper[4861]: I0310 18:49:21.735405 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The 
container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 10 18:49:21 crc kubenswrapper[4861]: I0310 18:49:21.748521 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 10 18:49:21 crc kubenswrapper[4861]: I0310 18:49:21.762457 4861 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 10 18:49:21 crc kubenswrapper[4861]: I0310 18:49:21.762696 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/d1c251f4-6539-4aa1-8979-47e74495aca3-multus-conf-dir\") pod \"multus-6lblg\" (UID: \"d1c251f4-6539-4aa1-8979-47e74495aca3\") " pod="openshift-multus/multus-6lblg" Mar 10 18:49:21 crc kubenswrapper[4861]: I0310 18:49:21.762805 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/391f4bfa-b94c-4b25-8f06-a2f19f912194-os-release\") pod \"multus-additional-cni-plugins-j2s27\" (UID: \"391f4bfa-b94c-4b25-8f06-a2f19f912194\") " pod="openshift-multus/multus-additional-cni-plugins-j2s27" Mar 10 18:49:21 crc kubenswrapper[4861]: I0310 18:49:21.762884 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/d1c251f4-6539-4aa1-8979-47e74495aca3-system-cni-dir\") pod 
\"multus-6lblg\" (UID: \"d1c251f4-6539-4aa1-8979-47e74495aca3\") " pod="openshift-multus/multus-6lblg" Mar 10 18:49:21 crc kubenswrapper[4861]: I0310 18:49:21.762885 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/d1c251f4-6539-4aa1-8979-47e74495aca3-multus-conf-dir\") pod \"multus-6lblg\" (UID: \"d1c251f4-6539-4aa1-8979-47e74495aca3\") " pod="openshift-multus/multus-6lblg" Mar 10 18:49:21 crc kubenswrapper[4861]: I0310 18:49:21.762953 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/d1c251f4-6539-4aa1-8979-47e74495aca3-cnibin\") pod \"multus-6lblg\" (UID: \"d1c251f4-6539-4aa1-8979-47e74495aca3\") " pod="openshift-multus/multus-6lblg" Mar 10 18:49:21 crc kubenswrapper[4861]: I0310 18:49:21.762988 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/d1c251f4-6539-4aa1-8979-47e74495aca3-host-var-lib-cni-bin\") pod \"multus-6lblg\" (UID: \"d1c251f4-6539-4aa1-8979-47e74495aca3\") " pod="openshift-multus/multus-6lblg" Mar 10 18:49:21 crc kubenswrapper[4861]: I0310 18:49:21.763034 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/391f4bfa-b94c-4b25-8f06-a2f19f912194-os-release\") pod \"multus-additional-cni-plugins-j2s27\" (UID: \"391f4bfa-b94c-4b25-8f06-a2f19f912194\") " pod="openshift-multus/multus-additional-cni-plugins-j2s27" Mar 10 18:49:21 crc kubenswrapper[4861]: I0310 18:49:21.763064 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/d1c251f4-6539-4aa1-8979-47e74495aca3-host-run-multus-certs\") pod \"multus-6lblg\" (UID: \"d1c251f4-6539-4aa1-8979-47e74495aca3\") " pod="openshift-multus/multus-6lblg" Mar 10 18:49:21 crc kubenswrapper[4861]: 
I0310 18:49:21.763108 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/d1c251f4-6539-4aa1-8979-47e74495aca3-cnibin\") pod \"multus-6lblg\" (UID: \"d1c251f4-6539-4aa1-8979-47e74495aca3\") " pod="openshift-multus/multus-6lblg" Mar 10 18:49:21 crc kubenswrapper[4861]: I0310 18:49:21.763095 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/d1c251f4-6539-4aa1-8979-47e74495aca3-host-run-netns\") pod \"multus-6lblg\" (UID: \"d1c251f4-6539-4aa1-8979-47e74495aca3\") " pod="openshift-multus/multus-6lblg" Mar 10 18:49:21 crc kubenswrapper[4861]: I0310 18:49:21.763165 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/d1c251f4-6539-4aa1-8979-47e74495aca3-multus-daemon-config\") pod \"multus-6lblg\" (UID: \"d1c251f4-6539-4aa1-8979-47e74495aca3\") " pod="openshift-multus/multus-6lblg" Mar 10 18:49:21 crc kubenswrapper[4861]: I0310 18:49:21.763205 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/d1c251f4-6539-4aa1-8979-47e74495aca3-host-run-multus-certs\") pod \"multus-6lblg\" (UID: \"d1c251f4-6539-4aa1-8979-47e74495aca3\") " pod="openshift-multus/multus-6lblg" Mar 10 18:49:21 crc kubenswrapper[4861]: I0310 18:49:21.763234 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d1c251f4-6539-4aa1-8979-47e74495aca3-etc-kubernetes\") pod \"multus-6lblg\" (UID: \"d1c251f4-6539-4aa1-8979-47e74495aca3\") " pod="openshift-multus/multus-6lblg" Mar 10 18:49:21 crc kubenswrapper[4861]: I0310 18:49:21.763252 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: 
\"kubernetes.io/host-path/d1c251f4-6539-4aa1-8979-47e74495aca3-host-var-lib-cni-bin\") pod \"multus-6lblg\" (UID: \"d1c251f4-6539-4aa1-8979-47e74495aca3\") " pod="openshift-multus/multus-6lblg" Mar 10 18:49:21 crc kubenswrapper[4861]: I0310 18:49:21.763282 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/d1c251f4-6539-4aa1-8979-47e74495aca3-os-release\") pod \"multus-6lblg\" (UID: \"d1c251f4-6539-4aa1-8979-47e74495aca3\") " pod="openshift-multus/multus-6lblg" Mar 10 18:49:21 crc kubenswrapper[4861]: I0310 18:49:21.763357 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t2gvk\" (UniqueName: \"kubernetes.io/projected/d1c251f4-6539-4aa1-8979-47e74495aca3-kube-api-access-t2gvk\") pod \"multus-6lblg\" (UID: \"d1c251f4-6539-4aa1-8979-47e74495aca3\") " pod="openshift-multus/multus-6lblg" Mar 10 18:49:21 crc kubenswrapper[4861]: I0310 18:49:21.763426 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/771189c2-452d-4204-a0b7-abfe9ba62bd0-mcd-auth-proxy-config\") pod \"machine-config-daemon-qttbr\" (UID: \"771189c2-452d-4204-a0b7-abfe9ba62bd0\") " pod="openshift-machine-config-operator/machine-config-daemon-qttbr" Mar 10 18:49:21 crc kubenswrapper[4861]: I0310 18:49:21.763459 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/391f4bfa-b94c-4b25-8f06-a2f19f912194-cni-binary-copy\") pod \"multus-additional-cni-plugins-j2s27\" (UID: \"391f4bfa-b94c-4b25-8f06-a2f19f912194\") " pod="openshift-multus/multus-additional-cni-plugins-j2s27" Mar 10 18:49:21 crc kubenswrapper[4861]: I0310 18:49:21.763545 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: 
\"kubernetes.io/configmap/d1c251f4-6539-4aa1-8979-47e74495aca3-cni-binary-copy\") pod \"multus-6lblg\" (UID: \"d1c251f4-6539-4aa1-8979-47e74495aca3\") " pod="openshift-multus/multus-6lblg" Mar 10 18:49:21 crc kubenswrapper[4861]: I0310 18:49:21.763556 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/d1c251f4-6539-4aa1-8979-47e74495aca3-host-run-netns\") pod \"multus-6lblg\" (UID: \"d1c251f4-6539-4aa1-8979-47e74495aca3\") " pod="openshift-multus/multus-6lblg" Mar 10 18:49:21 crc kubenswrapper[4861]: I0310 18:49:21.763639 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/d1c251f4-6539-4aa1-8979-47e74495aca3-multus-cni-dir\") pod \"multus-6lblg\" (UID: \"d1c251f4-6539-4aa1-8979-47e74495aca3\") " pod="openshift-multus/multus-6lblg" Mar 10 18:49:21 crc kubenswrapper[4861]: I0310 18:49:21.763660 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/d1c251f4-6539-4aa1-8979-47e74495aca3-os-release\") pod \"multus-6lblg\" (UID: \"d1c251f4-6539-4aa1-8979-47e74495aca3\") " pod="openshift-multus/multus-6lblg" Mar 10 18:49:21 crc kubenswrapper[4861]: I0310 18:49:21.763672 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/d1c251f4-6539-4aa1-8979-47e74495aca3-host-var-lib-cni-multus\") pod \"multus-6lblg\" (UID: \"d1c251f4-6539-4aa1-8979-47e74495aca3\") " pod="openshift-multus/multus-6lblg" Mar 10 18:49:21 crc kubenswrapper[4861]: I0310 18:49:21.763170 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/d1c251f4-6539-4aa1-8979-47e74495aca3-system-cni-dir\") pod \"multus-6lblg\" (UID: \"d1c251f4-6539-4aa1-8979-47e74495aca3\") " pod="openshift-multus/multus-6lblg" Mar 
10 18:49:21 crc kubenswrapper[4861]: I0310 18:49:21.763728 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/391f4bfa-b94c-4b25-8f06-a2f19f912194-system-cni-dir\") pod \"multus-additional-cni-plugins-j2s27\" (UID: \"391f4bfa-b94c-4b25-8f06-a2f19f912194\") " pod="openshift-multus/multus-additional-cni-plugins-j2s27" Mar 10 18:49:21 crc kubenswrapper[4861]: I0310 18:49:21.763762 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/391f4bfa-b94c-4b25-8f06-a2f19f912194-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-j2s27\" (UID: \"391f4bfa-b94c-4b25-8f06-a2f19f912194\") " pod="openshift-multus/multus-additional-cni-plugins-j2s27" Mar 10 18:49:21 crc kubenswrapper[4861]: I0310 18:49:21.763798 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vlddg\" (UniqueName: \"kubernetes.io/projected/391f4bfa-b94c-4b25-8f06-a2f19f912194-kube-api-access-vlddg\") pod \"multus-additional-cni-plugins-j2s27\" (UID: \"391f4bfa-b94c-4b25-8f06-a2f19f912194\") " pod="openshift-multus/multus-additional-cni-plugins-j2s27" Mar 10 18:49:21 crc kubenswrapper[4861]: I0310 18:49:21.763879 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/771189c2-452d-4204-a0b7-abfe9ba62bd0-proxy-tls\") pod \"machine-config-daemon-qttbr\" (UID: \"771189c2-452d-4204-a0b7-abfe9ba62bd0\") " pod="openshift-machine-config-operator/machine-config-daemon-qttbr" Mar 10 18:49:21 crc kubenswrapper[4861]: I0310 18:49:21.763951 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/391f4bfa-b94c-4b25-8f06-a2f19f912194-tuning-conf-dir\") pod \"multus-additional-cni-plugins-j2s27\" (UID: 
\"391f4bfa-b94c-4b25-8f06-a2f19f912194\") " pod="openshift-multus/multus-additional-cni-plugins-j2s27" Mar 10 18:49:21 crc kubenswrapper[4861]: I0310 18:49:21.764037 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1c251f4-6539-4aa1-8979-47e74495aca3-host-var-lib-kubelet\") pod \"multus-6lblg\" (UID: \"d1c251f4-6539-4aa1-8979-47e74495aca3\") " pod="openshift-multus/multus-6lblg" Mar 10 18:49:21 crc kubenswrapper[4861]: I0310 18:49:21.764106 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/d1c251f4-6539-4aa1-8979-47e74495aca3-multus-socket-dir-parent\") pod \"multus-6lblg\" (UID: \"d1c251f4-6539-4aa1-8979-47e74495aca3\") " pod="openshift-multus/multus-6lblg" Mar 10 18:49:21 crc kubenswrapper[4861]: I0310 18:49:21.764136 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/d1c251f4-6539-4aa1-8979-47e74495aca3-host-run-k8s-cni-cncf-io\") pod \"multus-6lblg\" (UID: \"d1c251f4-6539-4aa1-8979-47e74495aca3\") " pod="openshift-multus/multus-6lblg" Mar 10 18:49:21 crc kubenswrapper[4861]: I0310 18:49:21.764209 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/d1c251f4-6539-4aa1-8979-47e74495aca3-hostroot\") pod \"multus-6lblg\" (UID: \"d1c251f4-6539-4aa1-8979-47e74495aca3\") " pod="openshift-multus/multus-6lblg" Mar 10 18:49:21 crc kubenswrapper[4861]: I0310 18:49:21.764294 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/771189c2-452d-4204-a0b7-abfe9ba62bd0-rootfs\") pod \"machine-config-daemon-qttbr\" (UID: \"771189c2-452d-4204-a0b7-abfe9ba62bd0\") " pod="openshift-machine-config-operator/machine-config-daemon-qttbr" Mar 
10 18:49:21 crc kubenswrapper[4861]: I0310 18:49:21.764328 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tng72\" (UniqueName: \"kubernetes.io/projected/771189c2-452d-4204-a0b7-abfe9ba62bd0-kube-api-access-tng72\") pod \"machine-config-daemon-qttbr\" (UID: \"771189c2-452d-4204-a0b7-abfe9ba62bd0\") " pod="openshift-machine-config-operator/machine-config-daemon-qttbr" Mar 10 18:49:21 crc kubenswrapper[4861]: I0310 18:49:21.764416 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/391f4bfa-b94c-4b25-8f06-a2f19f912194-cnibin\") pod \"multus-additional-cni-plugins-j2s27\" (UID: \"391f4bfa-b94c-4b25-8f06-a2f19f912194\") " pod="openshift-multus/multus-additional-cni-plugins-j2s27" Mar 10 18:49:21 crc kubenswrapper[4861]: I0310 18:49:21.764582 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/391f4bfa-b94c-4b25-8f06-a2f19f912194-cnibin\") pod \"multus-additional-cni-plugins-j2s27\" (UID: \"391f4bfa-b94c-4b25-8f06-a2f19f912194\") " pod="openshift-multus/multus-additional-cni-plugins-j2s27" Mar 10 18:49:21 crc kubenswrapper[4861]: I0310 18:49:21.763954 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/d1c251f4-6539-4aa1-8979-47e74495aca3-multus-cni-dir\") pod \"multus-6lblg\" (UID: \"d1c251f4-6539-4aa1-8979-47e74495aca3\") " pod="openshift-multus/multus-6lblg" Mar 10 18:49:21 crc kubenswrapper[4861]: I0310 18:49:21.764840 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/d1c251f4-6539-4aa1-8979-47e74495aca3-multus-daemon-config\") pod \"multus-6lblg\" (UID: \"d1c251f4-6539-4aa1-8979-47e74495aca3\") " pod="openshift-multus/multus-6lblg" Mar 10 18:49:21 crc kubenswrapper[4861]: I0310 18:49:21.764703 4861 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1c251f4-6539-4aa1-8979-47e74495aca3-host-var-lib-kubelet\") pod \"multus-6lblg\" (UID: \"d1c251f4-6539-4aa1-8979-47e74495aca3\") " pod="openshift-multus/multus-6lblg" Mar 10 18:49:21 crc kubenswrapper[4861]: I0310 18:49:21.764988 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/d1c251f4-6539-4aa1-8979-47e74495aca3-hostroot\") pod \"multus-6lblg\" (UID: \"d1c251f4-6539-4aa1-8979-47e74495aca3\") " pod="openshift-multus/multus-6lblg" Mar 10 18:49:21 crc kubenswrapper[4861]: I0310 18:49:21.765094 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/771189c2-452d-4204-a0b7-abfe9ba62bd0-rootfs\") pod \"machine-config-daemon-qttbr\" (UID: \"771189c2-452d-4204-a0b7-abfe9ba62bd0\") " pod="openshift-machine-config-operator/machine-config-daemon-qttbr" Mar 10 18:49:21 crc kubenswrapper[4861]: I0310 18:49:21.765130 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/d1c251f4-6539-4aa1-8979-47e74495aca3-multus-socket-dir-parent\") pod \"multus-6lblg\" (UID: \"d1c251f4-6539-4aa1-8979-47e74495aca3\") " pod="openshift-multus/multus-6lblg" Mar 10 18:49:21 crc kubenswrapper[4861]: I0310 18:49:21.765140 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/d1c251f4-6539-4aa1-8979-47e74495aca3-host-run-k8s-cni-cncf-io\") pod \"multus-6lblg\" (UID: \"d1c251f4-6539-4aa1-8979-47e74495aca3\") " pod="openshift-multus/multus-6lblg" Mar 10 18:49:21 crc kubenswrapper[4861]: I0310 18:49:21.763843 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: 
\"kubernetes.io/host-path/d1c251f4-6539-4aa1-8979-47e74495aca3-etc-kubernetes\") pod \"multus-6lblg\" (UID: \"d1c251f4-6539-4aa1-8979-47e74495aca3\") " pod="openshift-multus/multus-6lblg" Mar 10 18:49:21 crc kubenswrapper[4861]: I0310 18:49:21.765662 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/d1c251f4-6539-4aa1-8979-47e74495aca3-host-var-lib-cni-multus\") pod \"multus-6lblg\" (UID: \"d1c251f4-6539-4aa1-8979-47e74495aca3\") " pod="openshift-multus/multus-6lblg" Mar 10 18:49:21 crc kubenswrapper[4861]: I0310 18:49:21.765692 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/391f4bfa-b94c-4b25-8f06-a2f19f912194-system-cni-dir\") pod \"multus-additional-cni-plugins-j2s27\" (UID: \"391f4bfa-b94c-4b25-8f06-a2f19f912194\") " pod="openshift-multus/multus-additional-cni-plugins-j2s27" Mar 10 18:49:21 crc kubenswrapper[4861]: I0310 18:49:21.766098 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/391f4bfa-b94c-4b25-8f06-a2f19f912194-tuning-conf-dir\") pod \"multus-additional-cni-plugins-j2s27\" (UID: \"391f4bfa-b94c-4b25-8f06-a2f19f912194\") " pod="openshift-multus/multus-additional-cni-plugins-j2s27" Mar 10 18:49:21 crc kubenswrapper[4861]: I0310 18:49:21.766549 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/d1c251f4-6539-4aa1-8979-47e74495aca3-cni-binary-copy\") pod \"multus-6lblg\" (UID: \"d1c251f4-6539-4aa1-8979-47e74495aca3\") " pod="openshift-multus/multus-6lblg" Mar 10 18:49:21 crc kubenswrapper[4861]: I0310 18:49:21.767047 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:49:21 crc kubenswrapper[4861]: I0310 18:49:21.767106 4861 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:49:21 crc kubenswrapper[4861]: I0310 18:49:21.767130 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:49:21 crc kubenswrapper[4861]: I0310 18:49:21.767279 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 10 18:49:21 crc kubenswrapper[4861]: I0310 18:49:21.767305 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:49:21Z","lastTransitionTime":"2026-03-10T18:49:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 10 18:49:21 crc kubenswrapper[4861]: I0310 18:49:21.767977 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/391f4bfa-b94c-4b25-8f06-a2f19f912194-cni-binary-copy\") pod \"multus-additional-cni-plugins-j2s27\" (UID: \"391f4bfa-b94c-4b25-8f06-a2f19f912194\") " pod="openshift-multus/multus-additional-cni-plugins-j2s27" Mar 10 18:49:21 crc kubenswrapper[4861]: I0310 18:49:21.768102 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/391f4bfa-b94c-4b25-8f06-a2f19f912194-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-j2s27\" (UID: \"391f4bfa-b94c-4b25-8f06-a2f19f912194\") " pod="openshift-multus/multus-additional-cni-plugins-j2s27" Mar 10 18:49:21 crc kubenswrapper[4861]: I0310 18:49:21.768289 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/771189c2-452d-4204-a0b7-abfe9ba62bd0-mcd-auth-proxy-config\") pod 
\"machine-config-daemon-qttbr\" (UID: \"771189c2-452d-4204-a0b7-abfe9ba62bd0\") " pod="openshift-machine-config-operator/machine-config-daemon-qttbr" Mar 10 18:49:21 crc kubenswrapper[4861]: I0310 18:49:21.771995 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qttbr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"771189c2-452d-4204-a0b7-abfe9ba62bd0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:21Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:21Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tng72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tng72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:49:21Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qttbr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 10 18:49:21 crc kubenswrapper[4861]: I0310 18:49:21.776278 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/771189c2-452d-4204-a0b7-abfe9ba62bd0-proxy-tls\") pod \"machine-config-daemon-qttbr\" (UID: \"771189c2-452d-4204-a0b7-abfe9ba62bd0\") " pod="openshift-machine-config-operator/machine-config-daemon-qttbr" Mar 10 18:49:21 crc kubenswrapper[4861]: I0310 18:49:21.789694 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tng72\" (UniqueName: \"kubernetes.io/projected/771189c2-452d-4204-a0b7-abfe9ba62bd0-kube-api-access-tng72\") pod \"machine-config-daemon-qttbr\" (UID: \"771189c2-452d-4204-a0b7-abfe9ba62bd0\") " pod="openshift-machine-config-operator/machine-config-daemon-qttbr" Mar 10 18:49:21 crc kubenswrapper[4861]: I0310 18:49:21.793285 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t2gvk\" (UniqueName: \"kubernetes.io/projected/d1c251f4-6539-4aa1-8979-47e74495aca3-kube-api-access-t2gvk\") pod \"multus-6lblg\" (UID: \"d1c251f4-6539-4aa1-8979-47e74495aca3\") " pod="openshift-multus/multus-6lblg" Mar 10 18:49:21 crc kubenswrapper[4861]: I0310 18:49:21.794356 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vlddg\" (UniqueName: \"kubernetes.io/projected/391f4bfa-b94c-4b25-8f06-a2f19f912194-kube-api-access-vlddg\") pod \"multus-additional-cni-plugins-j2s27\" (UID: \"391f4bfa-b94c-4b25-8f06-a2f19f912194\") " pod="openshift-multus/multus-additional-cni-plugins-j2s27" Mar 10 18:49:21 crc kubenswrapper[4861]: I0310 18:49:21.990431 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-6lblg" Mar 10 18:49:21 crc kubenswrapper[4861]: I0310 18:49:21.990509 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 10 18:49:21 crc kubenswrapper[4861]: E0310 18:49:21.990627 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 10 18:49:21 crc kubenswrapper[4861]: I0310 18:49:21.990700 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 10 18:49:21 crc kubenswrapper[4861]: E0310 18:49:21.990822 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 10 18:49:21 crc kubenswrapper[4861]: I0310 18:49:21.990464 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"952bf490-6587-4240-a832-feac082e7775\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf6fe1422055e59455ca71c1a22213f55312c80667526cf13dbf61f7ccea7c75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubern
etes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd7523ddf0ac38e3b59b767d7ef95f452f24c5914734d6f1ac0187f1bede5fcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b1d8f9a97293ae86fe0b8c9ab76600caf04291ec3ddd2e47a071805bef675da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://efd5039046658f4b595c747b6434cb0ef3befbf75874a6dab629cdbd6634524e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8
ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6234508294e7f87d3a8da0d3a1d8ecb96fffc3dbd5974e40a3bb7ceeee0d2a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34f62205737b2bb279cc7a0e2ffee213392c8135cca742b76e6f7ed44ddca754\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314
731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34f62205737b2bb279cc7a0e2ffee213392c8135cca742b76e6f7ed44ddca754\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:47:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-10T18:47:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://49cc08d4ed6e0fd87f4c957414e510f99511b9654a72324b707c165206c4979e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49cc08d4ed6e0fd87f4c957414e510f99511b9654a72324b707c165206c4979e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:47:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-10T18:47:39Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0995183765c106ba8369d3c05ac572596329eb1978876b770e327a430963ba9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0995183765c106ba8369d3c05ac572596329eb1978876b770e327a430963ba9b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-1
0T18:47:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-10T18:47:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:47:37Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 10 18:49:21 crc kubenswrapper[4861]: I0310 18:49:21.991010 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 10 18:49:21 crc kubenswrapper[4861]: I0310 18:49:21.991090 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-j2s27" Mar 10 18:49:21 crc kubenswrapper[4861]: I0310 18:49:21.991291 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-qttbr" Mar 10 18:49:21 crc kubenswrapper[4861]: E0310 18:49:21.991622 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 10 18:49:21 crc kubenswrapper[4861]: I0310 18:49:21.992354 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:49:21 crc kubenswrapper[4861]: I0310 18:49:21.992377 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:49:21 crc kubenswrapper[4861]: I0310 18:49:21.992407 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:49:21 crc kubenswrapper[4861]: I0310 18:49:21.992421 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 10 18:49:21 crc kubenswrapper[4861]: I0310 18:49:21.992431 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:49:21Z","lastTransitionTime":"2026-03-10T18:49:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 10 18:49:22 crc kubenswrapper[4861]: I0310 18:49:22.003804 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-s2l62"] Mar 10 18:49:22 crc kubenswrapper[4861]: I0310 18:49:22.003909 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 10 18:49:22 crc kubenswrapper[4861]: I0310 18:49:22.004500 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-s2l62" Mar 10 18:49:22 crc kubenswrapper[4861]: I0310 18:49:22.006436 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Mar 10 18:49:22 crc kubenswrapper[4861]: I0310 18:49:22.006785 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Mar 10 18:49:22 crc kubenswrapper[4861]: I0310 18:49:22.006936 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Mar 10 18:49:22 crc kubenswrapper[4861]: I0310 18:49:22.007058 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Mar 10 18:49:22 crc kubenswrapper[4861]: I0310 18:49:22.007153 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Mar 10 18:49:22 crc kubenswrapper[4861]: I0310 18:49:22.007232 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Mar 10 18:49:22 crc kubenswrapper[4861]: I0310 18:49:22.007318 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Mar 10 18:49:22 crc kubenswrapper[4861]: E0310 18:49:22.008772 4861 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 10 18:49:22 crc kubenswrapper[4861]: container &Container{Name:kube-multus,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26,Command:[/bin/bash -ec --],Args:[MULTUS_DAEMON_OPT="" Mar 10 18:49:22 crc kubenswrapper[4861]: /entrypoint/cnibincopy.sh; exec /usr/src/multus-cni/bin/multus-daemon $MULTUS_DAEMON_OPT Mar 10 18:49:22 crc kubenswrapper[4861]: 
],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:RHEL8_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/rhel8/bin/,ValueFrom:nil,},EnvVar{Name:RHEL9_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/rhel9/bin/,ValueFrom:nil,},EnvVar{Name:DEFAULT_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/bin/,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:6443,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:api-int.crc.testing,ValueFrom:nil,},EnvVar{Name:MULTUS_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:K8S_NODE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{68157440 0} {} 65Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cni-binary-copy,ReadOnly:false,MountPath:/entrypoint,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:os-release,ReadOnly:false,MountPath:/host/etc/os-release,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:system-cni-dir,ReadOnly:false,MountPath:/host/etc/cni/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-cni-dir,ReadOnly:false,MountPath:/host/run/multus/cni/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cnibin,ReadOnly:false,MountPath:/host/opt/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-socket-dir-parent,ReadOnly:false,MountPath:/host/run/multus,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-k8s-cni-cncf-io,ReadOnly:false,MountPath:/run/k8s.cni.cncf.io,SubPath:
,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-netns,ReadOnly:false,MountPath:/run/netns,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-cni-bin,ReadOnly:false,MountPath:/var/lib/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-cni-multus,ReadOnly:false,MountPath:/var/lib/cni/multus,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-kubelet,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:hostroot,ReadOnly:false,MountPath:/hostroot,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-conf-dir,ReadOnly:false,MountPath:/etc/cni/multus/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-daemon-config,ReadOnly:true,MountPath:/etc/cni/net.d/multus.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-multus-certs,ReadOnly:false,MountPath:/etc/cni/multus/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:etc-kubernetes,ReadOnly:false,MountPath:/etc/kubernetes,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-t2gvk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:fal
se,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod multus-6lblg_openshift-multus(d1c251f4-6539-4aa1-8979-47e74495aca3): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Mar 10 18:49:22 crc kubenswrapper[4861]: > logger="UnhandledError" Mar 10 18:49:22 crc kubenswrapper[4861]: E0310 18:49:22.009985 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-multus/multus-6lblg" podUID="d1c251f4-6539-4aa1-8979-47e74495aca3" Mar 10 18:49:22 crc kubenswrapper[4861]: W0310 18:49:22.011234 4861 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod391f4bfa_b94c_4b25_8f06_a2f19f912194.slice/crio-0fa0a3b2056499e22d3b7e44777f52860b43a3235c933ae6619b1245e1a3e836 WatchSource:0}: Error finding container 0fa0a3b2056499e22d3b7e44777f52860b43a3235c933ae6619b1245e1a3e836: Status 404 returned error can't find the container with id 0fa0a3b2056499e22d3b7e44777f52860b43a3235c933ae6619b1245e1a3e836 Mar 10 18:49:22 crc kubenswrapper[4861]: E0310 18:49:22.012697 4861 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:machine-config-daemon,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a,Command:[/usr/bin/machine-config-daemon],Args:[start 
--payload-version=4.18.1],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:health,HostPort:8798,ContainerPort:8798,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:rootfs,ReadOnly:false,MountPath:/rootfs,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tng72,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/health,Port:{0 8798 },Host:127.0.0.1,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:120,TimeoutSeconds:1,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod machine-config-daemon-qttbr_openshift-machine-config-operator(771189c2-452d-4204-a0b7-abfe9ba62bd0): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" 
logger="UnhandledError" Mar 10 18:49:22 crc kubenswrapper[4861]: E0310 18:49:22.013333 4861 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:egress-router-binary-copy,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c,Command:[/entrypoint/cnibincopy.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:RHEL8_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/rhel8/bin/,ValueFrom:nil,},EnvVar{Name:RHEL9_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/rhel9/bin/,ValueFrom:nil,},EnvVar{Name:DEFAULT_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/bin/,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cni-binary-copy,ReadOnly:false,MountPath:/entrypoint,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cnibin,ReadOnly:false,MountPath:/host/opt/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:os-release,ReadOnly:true,MountPath:/host/etc/os-release,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vlddg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod multus-additional-cni-plugins-j2s27_openshift-multus(391f4bfa-b94c-4b25-8f06-a2f19f912194): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" 
logger="UnhandledError" Mar 10 18:49:22 crc kubenswrapper[4861]: I0310 18:49:22.014052 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 10 18:49:22 crc kubenswrapper[4861]: E0310 18:49:22.014419 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"egress-router-binary-copy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-multus/multus-additional-cni-plugins-j2s27" podUID="391f4bfa-b94c-4b25-8f06-a2f19f912194" Mar 10 18:49:22 crc kubenswrapper[4861]: E0310 18:49:22.015228 4861 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09,Command:[],Args:[--secure-listen-address=0.0.0.0:9001 --config-file=/etc/kube-rbac-proxy/config-file.yaml 
--tls-cipher-suites=TLS_AES_128_GCM_SHA256,TLS_AES_256_GCM_SHA384,TLS_CHACHA20_POLY1305_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256 --tls-min-version=VersionTLS12 --upstream=http://127.0.0.1:8797 --logtostderr=true --tls-cert-file=/etc/tls/private/tls.crt --tls-private-key-file=/etc/tls/private/tls.key],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:9001,ContainerPort:9001,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:proxy-tls,ReadOnly:false,MountPath:/etc/tls/private,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:mcd-auth-proxy-config,ReadOnly:false,MountPath:/etc/kube-rbac-proxy,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tng72,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod machine-config-daemon-qttbr_openshift-machine-config-operator(771189c2-452d-4204-a0b7-abfe9ba62bd0): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Mar 10 18:49:22 crc kubenswrapper[4861]: E0310 18:49:22.016325 4861 
pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"machine-config-daemon\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-machine-config-operator/machine-config-daemon-qttbr" podUID="771189c2-452d-4204-a0b7-abfe9ba62bd0" Mar 10 18:49:22 crc kubenswrapper[4861]: I0310 18:49:22.024358 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-6lblg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d1c251f4-6539-4aa1-8979-47e74495aca3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2gvk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:49:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-6lblg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 10 18:49:22 crc kubenswrapper[4861]: I0310 18:49:22.038117 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"952bf490-6587-4240-a832-feac082e7775\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf6fe1422055e59455ca71c1a22213f55312c80667526cf13dbf61f7ccea7c75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-releas
e-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd7523ddf0ac38e3b59b767d7ef95f452f24c5914734d6f1ac0187f1bede5fcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b1d8f9a97293ae86fe0b8c9ab76600caf04291ec3ddd2e47a071805bef675da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://efd5039046658f4b595c747b6434cb0ef3befbf75874a6dab629cdbd6634524e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6234508294e7f87d3a8da0d3a1d8ecb96fffc3dbd5974e40a3bb7ceeee0d2a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\
",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34f62205737b2bb279cc7a0e2ffee213392c8135cca742b76e6f7ed44ddca754\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34f62205737b2bb279cc7a0e2ffee213392c8135cca742b76e6f7ed44ddca754\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:47:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-10T18:47:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://49cc08d4ed6e0fd87f4c957414e510f99511b9654a72324b707c165206c4979e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49cc08d4ed6e0fd87f4c957414e510f99511b9654a72324b707c165206c4979e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:47:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-10T18:47:39Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0995183765c106ba8369d3c05ac572596329eb1978876b770e327a430963ba9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-d
ev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0995183765c106ba8369d3c05ac572596329eb1978876b770e327a430963ba9b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:47:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-10T18:47:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:47:37Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 10 18:49:22 crc kubenswrapper[4861]: I0310 18:49:22.046286 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 10 18:49:22 crc kubenswrapper[4861]: I0310 18:49:22.053728 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qttbr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"771189c2-452d-4204-a0b7-abfe9ba62bd0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:21Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:21Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tng72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tng72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],
\\\"startTime\\\":\\\"2026-03-10T18:49:21Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qttbr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 10 18:49:22 crc kubenswrapper[4861]: I0310 18:49:22.064074 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-j2s27" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"391f4bfa-b94c-4b25-8f06-a2f19f912194\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:21Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlddg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlddg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlddg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlddg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlddg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlddg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlddg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:49:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-j2s27\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 10 18:49:22 crc kubenswrapper[4861]: I0310 18:49:22.071112 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 10 18:49:22 crc kubenswrapper[4861]: I0310 18:49:22.077060 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-b87lw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e474cdc9-b374-49a6-aece-afa19f8d5ee6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:21Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5t7pf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:49:21Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-b87lw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 10 18:49:22 crc kubenswrapper[4861]: I0310 18:49:22.086064 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c65aa66-5db2-421b-ad46-0ff1e2b1cb22\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://52e87225b434b0800764a5c2306d8079c44bff105d02de78ab085b434f56031f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a8a6f58ea1d180f50a7ffde2b16f470901281a847bd85cc0bc8a62bbf9f8e70\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://484df0ad2e71b2faec0ed53537512b115e6d4ab7cd3212cd29309538bd013c51\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1665ca49c2c451e187b70bfc13ce0034d2c07b92943b18e77ae09cd6e5505557\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1665ca49c2c451e187b70bfc13ce0034d2c07b92943b18e77ae09cd6e5505557\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-10T18:48:50Z\\\",\\\"message\\\":\\\"le observer\\\\nW0310 18:48:50.587139 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0310 18:48:50.587315 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0310 18:48:50.588407 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4176947974/tls.crt::/tmp/serving-cert-4176947974/tls.key\\\\\\\"\\\\nI0310 18:48:50.773986 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0310 18:48:50.776438 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0310 18:48:50.776455 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0310 18:48:50.776477 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0310 18:48:50.776482 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0310 18:48:50.783076 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0310 18:48:50.783088 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0310 18:48:50.783118 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0310 18:48:50.783134 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0310 18:48:50.783146 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0310 
18:48:50.783157 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0310 18:48:50.783167 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0310 18:48:50.783173 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0310 18:48:50.784467 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-10T18:48:50Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44dde00a3ae562bbb5504d299475795cc38b22c2b6decba2ff15067bce7436df\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:39Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a14137bdfec242e37af20a572af2edea25fb1d8a1f9708f8d0193d1a7675b1bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev
/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a14137bdfec242e37af20a572af2edea25fb1d8a1f9708f8d0193d1a7675b1bc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:47:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-10T18:47:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:47:37Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 10 18:49:22 crc kubenswrapper[4861]: I0310 18:49:22.091543 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/be820cd7-b3a7-4183-a408-67151247b6ee-host-run-netns\") pod \"ovnkube-node-s2l62\" (UID: \"be820cd7-b3a7-4183-a408-67151247b6ee\") " pod="openshift-ovn-kubernetes/ovnkube-node-s2l62" Mar 10 18:49:22 crc kubenswrapper[4861]: I0310 18:49:22.091583 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/be820cd7-b3a7-4183-a408-67151247b6ee-ovnkube-config\") pod \"ovnkube-node-s2l62\" (UID: \"be820cd7-b3a7-4183-a408-67151247b6ee\") " pod="openshift-ovn-kubernetes/ovnkube-node-s2l62" 
Mar 10 18:49:22 crc kubenswrapper[4861]: I0310 18:49:22.091603 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/be820cd7-b3a7-4183-a408-67151247b6ee-systemd-units\") pod \"ovnkube-node-s2l62\" (UID: \"be820cd7-b3a7-4183-a408-67151247b6ee\") " pod="openshift-ovn-kubernetes/ovnkube-node-s2l62" Mar 10 18:49:22 crc kubenswrapper[4861]: I0310 18:49:22.091623 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/be820cd7-b3a7-4183-a408-67151247b6ee-run-openvswitch\") pod \"ovnkube-node-s2l62\" (UID: \"be820cd7-b3a7-4183-a408-67151247b6ee\") " pod="openshift-ovn-kubernetes/ovnkube-node-s2l62" Mar 10 18:49:22 crc kubenswrapper[4861]: I0310 18:49:22.091647 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/be820cd7-b3a7-4183-a408-67151247b6ee-env-overrides\") pod \"ovnkube-node-s2l62\" (UID: \"be820cd7-b3a7-4183-a408-67151247b6ee\") " pod="openshift-ovn-kubernetes/ovnkube-node-s2l62" Mar 10 18:49:22 crc kubenswrapper[4861]: I0310 18:49:22.091667 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/be820cd7-b3a7-4183-a408-67151247b6ee-node-log\") pod \"ovnkube-node-s2l62\" (UID: \"be820cd7-b3a7-4183-a408-67151247b6ee\") " pod="openshift-ovn-kubernetes/ovnkube-node-s2l62" Mar 10 18:49:22 crc kubenswrapper[4861]: I0310 18:49:22.091728 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/be820cd7-b3a7-4183-a408-67151247b6ee-run-systemd\") pod \"ovnkube-node-s2l62\" (UID: \"be820cd7-b3a7-4183-a408-67151247b6ee\") " pod="openshift-ovn-kubernetes/ovnkube-node-s2l62" Mar 
10 18:49:22 crc kubenswrapper[4861]: I0310 18:49:22.091756 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/be820cd7-b3a7-4183-a408-67151247b6ee-var-lib-openvswitch\") pod \"ovnkube-node-s2l62\" (UID: \"be820cd7-b3a7-4183-a408-67151247b6ee\") " pod="openshift-ovn-kubernetes/ovnkube-node-s2l62" Mar 10 18:49:22 crc kubenswrapper[4861]: I0310 18:49:22.091785 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/be820cd7-b3a7-4183-a408-67151247b6ee-host-run-ovn-kubernetes\") pod \"ovnkube-node-s2l62\" (UID: \"be820cd7-b3a7-4183-a408-67151247b6ee\") " pod="openshift-ovn-kubernetes/ovnkube-node-s2l62" Mar 10 18:49:22 crc kubenswrapper[4861]: I0310 18:49:22.091816 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/be820cd7-b3a7-4183-a408-67151247b6ee-host-cni-bin\") pod \"ovnkube-node-s2l62\" (UID: \"be820cd7-b3a7-4183-a408-67151247b6ee\") " pod="openshift-ovn-kubernetes/ovnkube-node-s2l62" Mar 10 18:49:22 crc kubenswrapper[4861]: I0310 18:49:22.091852 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/be820cd7-b3a7-4183-a408-67151247b6ee-etc-openvswitch\") pod \"ovnkube-node-s2l62\" (UID: \"be820cd7-b3a7-4183-a408-67151247b6ee\") " pod="openshift-ovn-kubernetes/ovnkube-node-s2l62" Mar 10 18:49:22 crc kubenswrapper[4861]: I0310 18:49:22.091875 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/be820cd7-b3a7-4183-a408-67151247b6ee-run-ovn\") pod \"ovnkube-node-s2l62\" (UID: \"be820cd7-b3a7-4183-a408-67151247b6ee\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-s2l62" Mar 10 18:49:22 crc kubenswrapper[4861]: I0310 18:49:22.091906 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/be820cd7-b3a7-4183-a408-67151247b6ee-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-s2l62\" (UID: \"be820cd7-b3a7-4183-a408-67151247b6ee\") " pod="openshift-ovn-kubernetes/ovnkube-node-s2l62" Mar 10 18:49:22 crc kubenswrapper[4861]: I0310 18:49:22.091942 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/be820cd7-b3a7-4183-a408-67151247b6ee-log-socket\") pod \"ovnkube-node-s2l62\" (UID: \"be820cd7-b3a7-4183-a408-67151247b6ee\") " pod="openshift-ovn-kubernetes/ovnkube-node-s2l62" Mar 10 18:49:22 crc kubenswrapper[4861]: I0310 18:49:22.091962 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/be820cd7-b3a7-4183-a408-67151247b6ee-host-kubelet\") pod \"ovnkube-node-s2l62\" (UID: \"be820cd7-b3a7-4183-a408-67151247b6ee\") " pod="openshift-ovn-kubernetes/ovnkube-node-s2l62" Mar 10 18:49:22 crc kubenswrapper[4861]: I0310 18:49:22.091981 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/be820cd7-b3a7-4183-a408-67151247b6ee-host-slash\") pod \"ovnkube-node-s2l62\" (UID: \"be820cd7-b3a7-4183-a408-67151247b6ee\") " pod="openshift-ovn-kubernetes/ovnkube-node-s2l62" Mar 10 18:49:22 crc kubenswrapper[4861]: I0310 18:49:22.092002 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/be820cd7-b3a7-4183-a408-67151247b6ee-host-cni-netd\") pod \"ovnkube-node-s2l62\" 
(UID: \"be820cd7-b3a7-4183-a408-67151247b6ee\") " pod="openshift-ovn-kubernetes/ovnkube-node-s2l62" Mar 10 18:49:22 crc kubenswrapper[4861]: I0310 18:49:22.098964 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 10 18:49:22 crc kubenswrapper[4861]: I0310 18:49:22.099571 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:49:22 crc kubenswrapper[4861]: I0310 18:49:22.099639 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:49:22 crc kubenswrapper[4861]: I0310 18:49:22.099664 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:49:22 crc kubenswrapper[4861]: I0310 18:49:22.099694 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 10 18:49:22 crc kubenswrapper[4861]: I0310 18:49:22.099751 4861 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:49:22Z","lastTransitionTime":"2026-03-10T18:49:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 10 18:49:22 crc kubenswrapper[4861]: I0310 18:49:22.107266 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 10 18:49:22 crc kubenswrapper[4861]: I0310 18:49:22.114215 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 10 18:49:22 crc kubenswrapper[4861]: I0310 18:49:22.122593 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-6lblg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d1c251f4-6539-4aa1-8979-47e74495aca3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2gvk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:49:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-6lblg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 10 18:49:22 crc kubenswrapper[4861]: I0310 18:49:22.132013 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The 
container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 10 18:49:22 crc kubenswrapper[4861]: I0310 18:49:22.149904 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"952bf490-6587-4240-a832-feac082e7775\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf6fe1422055e59455ca71c1a22213f55312c80667526cf13dbf61f7ccea7c75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd7523ddf0ac38e3b59b767d7ef95f452f24c5914734d6f1ac0187f1bede5fcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b1d8f9a97293ae86fe0b8c9ab76600caf04291ec3ddd2e47a071805bef675da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://efd5039046658f4b595c747b6434cb0ef3befbf75874a6dab629cdbd6634524e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6234508294e7f87d3a8da0d3a1d8ecb96fffc3dbd5974e40a3bb7ceeee0d2a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34f62205737b2bb279cc7a0e2ffee213392c8135cca742b76e6f7ed44ddca754\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34f62205737b2bb279cc7a0e2ffee213392c8135cca742b76e6f7ed44ddca754\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-03-10T18:47:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-10T18:47:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://49cc08d4ed6e0fd87f4c957414e510f99511b9654a72324b707c165206c4979e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49cc08d4ed6e0fd87f4c957414e510f99511b9654a72324b707c165206c4979e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:47:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-10T18:47:39Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0995183765c106ba8369d3c05ac572596329eb1978876b770e327a430963ba9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0995183765c106ba8369d3c05ac572596329eb1978876b770e327a430963ba9b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:47:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-10T18:47:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:47:37Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 10 18:49:22 crc kubenswrapper[4861]: I0310 18:49:22.159058 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 10 18:49:22 crc kubenswrapper[4861]: I0310 18:49:22.171300 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qttbr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"771189c2-452d-4204-a0b7-abfe9ba62bd0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:21Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:21Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tng72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tng72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],
\\\"startTime\\\":\\\"2026-03-10T18:49:21Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qttbr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 10 18:49:22 crc kubenswrapper[4861]: I0310 18:49:22.180767 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-j2s27" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"391f4bfa-b94c-4b25-8f06-a2f19f912194\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:21Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlddg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlddg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlddg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlddg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlddg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlddg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlddg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:49:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-j2s27\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 10 18:49:22 crc kubenswrapper[4861]: I0310 18:49:22.192052 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 10 18:49:22 crc kubenswrapper[4861]: I0310 18:49:22.192842 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/be820cd7-b3a7-4183-a408-67151247b6ee-log-socket\") pod \"ovnkube-node-s2l62\" (UID: \"be820cd7-b3a7-4183-a408-67151247b6ee\") " pod="openshift-ovn-kubernetes/ovnkube-node-s2l62" Mar 10 18:49:22 crc kubenswrapper[4861]: I0310 18:49:22.192908 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/be820cd7-b3a7-4183-a408-67151247b6ee-ovnkube-script-lib\") pod \"ovnkube-node-s2l62\" (UID: \"be820cd7-b3a7-4183-a408-67151247b6ee\") " pod="openshift-ovn-kubernetes/ovnkube-node-s2l62" Mar 10 18:49:22 crc kubenswrapper[4861]: I0310 18:49:22.192949 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/be820cd7-b3a7-4183-a408-67151247b6ee-host-kubelet\") pod \"ovnkube-node-s2l62\" (UID: \"be820cd7-b3a7-4183-a408-67151247b6ee\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-s2l62" Mar 10 18:49:22 crc kubenswrapper[4861]: I0310 18:49:22.192981 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/be820cd7-b3a7-4183-a408-67151247b6ee-host-slash\") pod \"ovnkube-node-s2l62\" (UID: \"be820cd7-b3a7-4183-a408-67151247b6ee\") " pod="openshift-ovn-kubernetes/ovnkube-node-s2l62" Mar 10 18:49:22 crc kubenswrapper[4861]: I0310 18:49:22.192999 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/be820cd7-b3a7-4183-a408-67151247b6ee-log-socket\") pod \"ovnkube-node-s2l62\" (UID: \"be820cd7-b3a7-4183-a408-67151247b6ee\") " pod="openshift-ovn-kubernetes/ovnkube-node-s2l62" Mar 10 18:49:22 crc kubenswrapper[4861]: I0310 18:49:22.193117 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/be820cd7-b3a7-4183-a408-67151247b6ee-host-kubelet\") pod \"ovnkube-node-s2l62\" (UID: \"be820cd7-b3a7-4183-a408-67151247b6ee\") " pod="openshift-ovn-kubernetes/ovnkube-node-s2l62" Mar 10 18:49:22 crc kubenswrapper[4861]: I0310 18:49:22.193179 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/be820cd7-b3a7-4183-a408-67151247b6ee-host-slash\") pod \"ovnkube-node-s2l62\" (UID: \"be820cd7-b3a7-4183-a408-67151247b6ee\") " pod="openshift-ovn-kubernetes/ovnkube-node-s2l62" Mar 10 18:49:22 crc kubenswrapper[4861]: I0310 18:49:22.193130 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/be820cd7-b3a7-4183-a408-67151247b6ee-host-cni-netd\") pod \"ovnkube-node-s2l62\" (UID: \"be820cd7-b3a7-4183-a408-67151247b6ee\") " pod="openshift-ovn-kubernetes/ovnkube-node-s2l62" Mar 10 18:49:22 crc kubenswrapper[4861]: I0310 18:49:22.193358 4861 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/be820cd7-b3a7-4183-a408-67151247b6ee-host-run-netns\") pod \"ovnkube-node-s2l62\" (UID: \"be820cd7-b3a7-4183-a408-67151247b6ee\") " pod="openshift-ovn-kubernetes/ovnkube-node-s2l62" Mar 10 18:49:22 crc kubenswrapper[4861]: I0310 18:49:22.193399 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/be820cd7-b3a7-4183-a408-67151247b6ee-ovnkube-config\") pod \"ovnkube-node-s2l62\" (UID: \"be820cd7-b3a7-4183-a408-67151247b6ee\") " pod="openshift-ovn-kubernetes/ovnkube-node-s2l62" Mar 10 18:49:22 crc kubenswrapper[4861]: I0310 18:49:22.193467 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/be820cd7-b3a7-4183-a408-67151247b6ee-ovn-node-metrics-cert\") pod \"ovnkube-node-s2l62\" (UID: \"be820cd7-b3a7-4183-a408-67151247b6ee\") " pod="openshift-ovn-kubernetes/ovnkube-node-s2l62" Mar 10 18:49:22 crc kubenswrapper[4861]: I0310 18:49:22.193505 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/be820cd7-b3a7-4183-a408-67151247b6ee-host-run-netns\") pod \"ovnkube-node-s2l62\" (UID: \"be820cd7-b3a7-4183-a408-67151247b6ee\") " pod="openshift-ovn-kubernetes/ovnkube-node-s2l62" Mar 10 18:49:22 crc kubenswrapper[4861]: I0310 18:49:22.193615 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/be820cd7-b3a7-4183-a408-67151247b6ee-systemd-units\") pod \"ovnkube-node-s2l62\" (UID: \"be820cd7-b3a7-4183-a408-67151247b6ee\") " pod="openshift-ovn-kubernetes/ovnkube-node-s2l62" Mar 10 18:49:22 crc kubenswrapper[4861]: I0310 18:49:22.193689 4861 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/be820cd7-b3a7-4183-a408-67151247b6ee-run-openvswitch\") pod \"ovnkube-node-s2l62\" (UID: \"be820cd7-b3a7-4183-a408-67151247b6ee\") " pod="openshift-ovn-kubernetes/ovnkube-node-s2l62" Mar 10 18:49:22 crc kubenswrapper[4861]: I0310 18:49:22.193834 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/be820cd7-b3a7-4183-a408-67151247b6ee-host-cni-netd\") pod \"ovnkube-node-s2l62\" (UID: \"be820cd7-b3a7-4183-a408-67151247b6ee\") " pod="openshift-ovn-kubernetes/ovnkube-node-s2l62" Mar 10 18:49:22 crc kubenswrapper[4861]: I0310 18:49:22.193794 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/be820cd7-b3a7-4183-a408-67151247b6ee-run-openvswitch\") pod \"ovnkube-node-s2l62\" (UID: \"be820cd7-b3a7-4183-a408-67151247b6ee\") " pod="openshift-ovn-kubernetes/ovnkube-node-s2l62" Mar 10 18:49:22 crc kubenswrapper[4861]: I0310 18:49:22.193864 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/be820cd7-b3a7-4183-a408-67151247b6ee-env-overrides\") pod \"ovnkube-node-s2l62\" (UID: \"be820cd7-b3a7-4183-a408-67151247b6ee\") " pod="openshift-ovn-kubernetes/ovnkube-node-s2l62" Mar 10 18:49:22 crc kubenswrapper[4861]: I0310 18:49:22.194158 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/be820cd7-b3a7-4183-a408-67151247b6ee-node-log\") pod \"ovnkube-node-s2l62\" (UID: \"be820cd7-b3a7-4183-a408-67151247b6ee\") " pod="openshift-ovn-kubernetes/ovnkube-node-s2l62" Mar 10 18:49:22 crc kubenswrapper[4861]: I0310 18:49:22.194275 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: 
\"kubernetes.io/host-path/be820cd7-b3a7-4183-a408-67151247b6ee-node-log\") pod \"ovnkube-node-s2l62\" (UID: \"be820cd7-b3a7-4183-a408-67151247b6ee\") " pod="openshift-ovn-kubernetes/ovnkube-node-s2l62" Mar 10 18:49:22 crc kubenswrapper[4861]: I0310 18:49:22.194298 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/be820cd7-b3a7-4183-a408-67151247b6ee-run-systemd\") pod \"ovnkube-node-s2l62\" (UID: \"be820cd7-b3a7-4183-a408-67151247b6ee\") " pod="openshift-ovn-kubernetes/ovnkube-node-s2l62" Mar 10 18:49:22 crc kubenswrapper[4861]: I0310 18:49:22.193749 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/be820cd7-b3a7-4183-a408-67151247b6ee-systemd-units\") pod \"ovnkube-node-s2l62\" (UID: \"be820cd7-b3a7-4183-a408-67151247b6ee\") " pod="openshift-ovn-kubernetes/ovnkube-node-s2l62" Mar 10 18:49:22 crc kubenswrapper[4861]: I0310 18:49:22.194373 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/be820cd7-b3a7-4183-a408-67151247b6ee-var-lib-openvswitch\") pod \"ovnkube-node-s2l62\" (UID: \"be820cd7-b3a7-4183-a408-67151247b6ee\") " pod="openshift-ovn-kubernetes/ovnkube-node-s2l62" Mar 10 18:49:22 crc kubenswrapper[4861]: I0310 18:49:22.194418 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/be820cd7-b3a7-4183-a408-67151247b6ee-var-lib-openvswitch\") pod \"ovnkube-node-s2l62\" (UID: \"be820cd7-b3a7-4183-a408-67151247b6ee\") " pod="openshift-ovn-kubernetes/ovnkube-node-s2l62" Mar 10 18:49:22 crc kubenswrapper[4861]: I0310 18:49:22.194452 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: 
\"kubernetes.io/host-path/be820cd7-b3a7-4183-a408-67151247b6ee-host-run-ovn-kubernetes\") pod \"ovnkube-node-s2l62\" (UID: \"be820cd7-b3a7-4183-a408-67151247b6ee\") " pod="openshift-ovn-kubernetes/ovnkube-node-s2l62" Mar 10 18:49:22 crc kubenswrapper[4861]: I0310 18:49:22.194505 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/be820cd7-b3a7-4183-a408-67151247b6ee-etc-openvswitch\") pod \"ovnkube-node-s2l62\" (UID: \"be820cd7-b3a7-4183-a408-67151247b6ee\") " pod="openshift-ovn-kubernetes/ovnkube-node-s2l62" Mar 10 18:49:22 crc kubenswrapper[4861]: I0310 18:49:22.194538 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/be820cd7-b3a7-4183-a408-67151247b6ee-host-cni-bin\") pod \"ovnkube-node-s2l62\" (UID: \"be820cd7-b3a7-4183-a408-67151247b6ee\") " pod="openshift-ovn-kubernetes/ovnkube-node-s2l62" Mar 10 18:49:22 crc kubenswrapper[4861]: I0310 18:49:22.194581 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/be820cd7-b3a7-4183-a408-67151247b6ee-run-ovn\") pod \"ovnkube-node-s2l62\" (UID: \"be820cd7-b3a7-4183-a408-67151247b6ee\") " pod="openshift-ovn-kubernetes/ovnkube-node-s2l62" Mar 10 18:49:22 crc kubenswrapper[4861]: I0310 18:49:22.194580 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/be820cd7-b3a7-4183-a408-67151247b6ee-host-run-ovn-kubernetes\") pod \"ovnkube-node-s2l62\" (UID: \"be820cd7-b3a7-4183-a408-67151247b6ee\") " pod="openshift-ovn-kubernetes/ovnkube-node-s2l62" Mar 10 18:49:22 crc kubenswrapper[4861]: I0310 18:49:22.194647 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/be820cd7-b3a7-4183-a408-67151247b6ee-etc-openvswitch\") 
pod \"ovnkube-node-s2l62\" (UID: \"be820cd7-b3a7-4183-a408-67151247b6ee\") " pod="openshift-ovn-kubernetes/ovnkube-node-s2l62" Mar 10 18:49:22 crc kubenswrapper[4861]: I0310 18:49:22.194645 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/be820cd7-b3a7-4183-a408-67151247b6ee-ovnkube-config\") pod \"ovnkube-node-s2l62\" (UID: \"be820cd7-b3a7-4183-a408-67151247b6ee\") " pod="openshift-ovn-kubernetes/ovnkube-node-s2l62" Mar 10 18:49:22 crc kubenswrapper[4861]: I0310 18:49:22.194669 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/be820cd7-b3a7-4183-a408-67151247b6ee-run-ovn\") pod \"ovnkube-node-s2l62\" (UID: \"be820cd7-b3a7-4183-a408-67151247b6ee\") " pod="openshift-ovn-kubernetes/ovnkube-node-s2l62" Mar 10 18:49:22 crc kubenswrapper[4861]: I0310 18:49:22.194663 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/be820cd7-b3a7-4183-a408-67151247b6ee-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-s2l62\" (UID: \"be820cd7-b3a7-4183-a408-67151247b6ee\") " pod="openshift-ovn-kubernetes/ovnkube-node-s2l62" Mar 10 18:49:22 crc kubenswrapper[4861]: I0310 18:49:22.194694 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/be820cd7-b3a7-4183-a408-67151247b6ee-host-cni-bin\") pod \"ovnkube-node-s2l62\" (UID: \"be820cd7-b3a7-4183-a408-67151247b6ee\") " pod="openshift-ovn-kubernetes/ovnkube-node-s2l62" Mar 10 18:49:22 crc kubenswrapper[4861]: I0310 18:49:22.194773 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fwtwj\" (UniqueName: \"kubernetes.io/projected/be820cd7-b3a7-4183-a408-67151247b6ee-kube-api-access-fwtwj\") pod \"ovnkube-node-s2l62\" (UID: 
\"be820cd7-b3a7-4183-a408-67151247b6ee\") " pod="openshift-ovn-kubernetes/ovnkube-node-s2l62" Mar 10 18:49:22 crc kubenswrapper[4861]: I0310 18:49:22.194761 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/be820cd7-b3a7-4183-a408-67151247b6ee-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-s2l62\" (UID: \"be820cd7-b3a7-4183-a408-67151247b6ee\") " pod="openshift-ovn-kubernetes/ovnkube-node-s2l62" Mar 10 18:49:22 crc kubenswrapper[4861]: I0310 18:49:22.195048 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/be820cd7-b3a7-4183-a408-67151247b6ee-run-systemd\") pod \"ovnkube-node-s2l62\" (UID: \"be820cd7-b3a7-4183-a408-67151247b6ee\") " pod="openshift-ovn-kubernetes/ovnkube-node-s2l62" Mar 10 18:49:22 crc kubenswrapper[4861]: I0310 18:49:22.195216 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/be820cd7-b3a7-4183-a408-67151247b6ee-env-overrides\") pod \"ovnkube-node-s2l62\" (UID: \"be820cd7-b3a7-4183-a408-67151247b6ee\") " pod="openshift-ovn-kubernetes/ovnkube-node-s2l62" Mar 10 18:49:22 crc kubenswrapper[4861]: I0310 18:49:22.198285 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-b87lw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e474cdc9-b374-49a6-aece-afa19f8d5ee6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:21Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5t7pf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:49:21Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-b87lw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 10 18:49:22 crc kubenswrapper[4861]: I0310 18:49:22.202429 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:49:22 crc kubenswrapper[4861]: I0310 18:49:22.202546 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:49:22 crc kubenswrapper[4861]: I0310 18:49:22.202636 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:49:22 crc kubenswrapper[4861]: I0310 18:49:22.202780 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 10 
18:49:22 crc kubenswrapper[4861]: I0310 18:49:22.202862 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:49:22Z","lastTransitionTime":"2026-03-10T18:49:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 10 18:49:22 crc kubenswrapper[4861]: I0310 18:49:22.210061 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c65aa66-5db2-421b-ad46-0ff1e2b1cb22\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://52e87225b434b0800764a5c2306d8079c44bff105d02de78ab085b434f56031f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a8a6f58ea1d180f50a7ffde2b16f470901281a847bd85cc0bc8a62bbf9f8e70\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://484df0ad2e71b2faec0ed53537512b115e6d4ab7cd3212cd29309538bd013c51\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1665ca49c2c451e187b70bfc13ce0034d2c07b92943b18e77ae09cd6e5505557\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1665ca49c2c451e187b70bfc13ce0034d2c07b92943b18e77ae09cd6e5505557\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-10T18:48:50Z\\\",\\\"message\\\":\\\"le observer\\\\nW0310 18:48:50.587139 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0310 18:48:50.587315 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0310 18:48:50.588407 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-4176947974/tls.crt::/tmp/serving-cert-4176947974/tls.key\\\\\\\"\\\\nI0310 18:48:50.773986 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0310 18:48:50.776438 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0310 18:48:50.776455 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0310 18:48:50.776477 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0310 18:48:50.776482 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0310 18:48:50.783076 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0310 18:48:50.783088 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0310 18:48:50.783118 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0310 18:48:50.783134 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0310 18:48:50.783146 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0310 18:48:50.783157 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0310 18:48:50.783167 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0310 18:48:50.783173 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0310 18:48:50.784467 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-10T18:48:50Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44dde00a3ae562bbb5504d299475795cc38b22c2b6decba2ff15067bce7436df\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:39Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a14137bdfec242e37af20a572af2edea25fb1d8a1f9708f8d0193d1a7675b1bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a14137bdfec242e37af20a572af2edea25fb1d8a1f9708f8d0193d1a7675b1bc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:47:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-10T18:47:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:47:37Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 10 18:49:22 crc kubenswrapper[4861]: I0310 18:49:22.218310 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 10 18:49:22 crc kubenswrapper[4861]: I0310 18:49:22.226305 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 10 18:49:22 crc kubenswrapper[4861]: I0310 18:49:22.239477 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-s2l62" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"be820cd7-b3a7-4183-a408-67151247b6ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:22Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:22Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:22Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwtwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwtwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwtwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwtwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwtwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwtwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwtwj\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwtwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwtwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:49:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-s2l62\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 10 18:49:22 crc kubenswrapper[4861]: I0310 18:49:22.296177 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/be820cd7-b3a7-4183-a408-67151247b6ee-ovn-node-metrics-cert\") pod \"ovnkube-node-s2l62\" (UID: \"be820cd7-b3a7-4183-a408-67151247b6ee\") " pod="openshift-ovn-kubernetes/ovnkube-node-s2l62" Mar 10 18:49:22 crc kubenswrapper[4861]: I0310 18:49:22.296746 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fwtwj\" (UniqueName: \"kubernetes.io/projected/be820cd7-b3a7-4183-a408-67151247b6ee-kube-api-access-fwtwj\") pod \"ovnkube-node-s2l62\" (UID: \"be820cd7-b3a7-4183-a408-67151247b6ee\") " pod="openshift-ovn-kubernetes/ovnkube-node-s2l62" Mar 10 18:49:22 crc kubenswrapper[4861]: I0310 18:49:22.297007 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/be820cd7-b3a7-4183-a408-67151247b6ee-ovnkube-script-lib\") pod \"ovnkube-node-s2l62\" (UID: \"be820cd7-b3a7-4183-a408-67151247b6ee\") " pod="openshift-ovn-kubernetes/ovnkube-node-s2l62" Mar 10 18:49:22 crc kubenswrapper[4861]: I0310 18:49:22.297764 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/be820cd7-b3a7-4183-a408-67151247b6ee-ovnkube-script-lib\") pod \"ovnkube-node-s2l62\" (UID: \"be820cd7-b3a7-4183-a408-67151247b6ee\") " pod="openshift-ovn-kubernetes/ovnkube-node-s2l62" Mar 10 18:49:22 crc kubenswrapper[4861]: I0310 18:49:22.302323 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/be820cd7-b3a7-4183-a408-67151247b6ee-ovn-node-metrics-cert\") 
pod \"ovnkube-node-s2l62\" (UID: \"be820cd7-b3a7-4183-a408-67151247b6ee\") " pod="openshift-ovn-kubernetes/ovnkube-node-s2l62" Mar 10 18:49:22 crc kubenswrapper[4861]: I0310 18:49:22.305787 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:49:22 crc kubenswrapper[4861]: I0310 18:49:22.305815 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:49:22 crc kubenswrapper[4861]: I0310 18:49:22.305824 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:49:22 crc kubenswrapper[4861]: I0310 18:49:22.305839 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 10 18:49:22 crc kubenswrapper[4861]: I0310 18:49:22.305848 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:49:22Z","lastTransitionTime":"2026-03-10T18:49:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 10 18:49:22 crc kubenswrapper[4861]: I0310 18:49:22.316449 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fwtwj\" (UniqueName: \"kubernetes.io/projected/be820cd7-b3a7-4183-a408-67151247b6ee-kube-api-access-fwtwj\") pod \"ovnkube-node-s2l62\" (UID: \"be820cd7-b3a7-4183-a408-67151247b6ee\") " pod="openshift-ovn-kubernetes/ovnkube-node-s2l62" Mar 10 18:49:22 crc kubenswrapper[4861]: I0310 18:49:22.324937 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-s2l62" Mar 10 18:49:22 crc kubenswrapper[4861]: W0310 18:49:22.334116 4861 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbe820cd7_b3a7_4183_a408_67151247b6ee.slice/crio-6be1f6974a0888bc87217cfaadcdcaee5620a7a03535fb885811b71fc73b8a58 WatchSource:0}: Error finding container 6be1f6974a0888bc87217cfaadcdcaee5620a7a03535fb885811b71fc73b8a58: Status 404 returned error can't find the container with id 6be1f6974a0888bc87217cfaadcdcaee5620a7a03535fb885811b71fc73b8a58 Mar 10 18:49:22 crc kubenswrapper[4861]: E0310 18:49:22.335646 4861 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 10 18:49:22 crc kubenswrapper[4861]: init container &Container{Name:kubecfg-setup,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,Command:[/bin/bash -c cat << EOF > /etc/ovn/kubeconfig Mar 10 18:49:22 crc kubenswrapper[4861]: apiVersion: v1 Mar 10 18:49:22 crc kubenswrapper[4861]: clusters: Mar 10 18:49:22 crc kubenswrapper[4861]: - cluster: Mar 10 18:49:22 crc kubenswrapper[4861]: certificate-authority: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt Mar 10 18:49:22 crc kubenswrapper[4861]: server: https://api-int.crc.testing:6443 Mar 10 18:49:22 crc kubenswrapper[4861]: name: default-cluster Mar 10 18:49:22 crc kubenswrapper[4861]: contexts: Mar 10 18:49:22 crc kubenswrapper[4861]: - context: Mar 10 18:49:22 crc kubenswrapper[4861]: cluster: default-cluster Mar 10 18:49:22 crc kubenswrapper[4861]: namespace: default Mar 10 18:49:22 crc kubenswrapper[4861]: user: default-auth Mar 10 18:49:22 crc kubenswrapper[4861]: name: default-context Mar 10 18:49:22 crc kubenswrapper[4861]: current-context: default-context Mar 10 18:49:22 crc kubenswrapper[4861]: kind: Config Mar 10 18:49:22 crc kubenswrapper[4861]: preferences: {} Mar 10 18:49:22 crc kubenswrapper[4861]: 
users: Mar 10 18:49:22 crc kubenswrapper[4861]: - name: default-auth Mar 10 18:49:22 crc kubenswrapper[4861]: user: Mar 10 18:49:22 crc kubenswrapper[4861]: client-certificate: /etc/ovn/ovnkube-node-certs/ovnkube-client-current.pem Mar 10 18:49:22 crc kubenswrapper[4861]: client-key: /etc/ovn/ovnkube-node-certs/ovnkube-client-current.pem Mar 10 18:49:22 crc kubenswrapper[4861]: EOF Mar 10 18:49:22 crc kubenswrapper[4861]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-openvswitch,ReadOnly:false,MountPath:/etc/ovn/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fwtwj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovnkube-node-s2l62_openshift-ovn-kubernetes(be820cd7-b3a7-4183-a408-67151247b6ee): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Mar 10 18:49:22 crc kubenswrapper[4861]: > logger="UnhandledError" Mar 10 18:49:22 crc kubenswrapper[4861]: E0310 18:49:22.337180 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubecfg-setup\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-ovn-kubernetes/ovnkube-node-s2l62" podUID="be820cd7-b3a7-4183-a408-67151247b6ee" Mar 10 18:49:22 crc kubenswrapper[4861]: I0310 
18:49:22.395927 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-6lblg" event={"ID":"d1c251f4-6539-4aa1-8979-47e74495aca3","Type":"ContainerStarted","Data":"55088095254e21421ed9201ba791d062828c5ed8dd378c177054b2d99c154cce"} Mar 10 18:49:22 crc kubenswrapper[4861]: I0310 18:49:22.397054 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-b87lw" event={"ID":"e474cdc9-b374-49a6-aece-afa19f8d5ee6","Type":"ContainerStarted","Data":"63bebeaaed4736ca40bc041ab390c060ff017fb22a5236216fa03c0532f2105c"} Mar 10 18:49:22 crc kubenswrapper[4861]: E0310 18:49:22.399056 4861 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 10 18:49:22 crc kubenswrapper[4861]: container &Container{Name:kube-multus,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26,Command:[/bin/bash -ec --],Args:[MULTUS_DAEMON_OPT="" Mar 10 18:49:22 crc kubenswrapper[4861]: /entrypoint/cnibincopy.sh; exec /usr/src/multus-cni/bin/multus-daemon $MULTUS_DAEMON_OPT Mar 10 18:49:22 crc kubenswrapper[4861]: 
],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:RHEL8_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/rhel8/bin/,ValueFrom:nil,},EnvVar{Name:RHEL9_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/rhel9/bin/,ValueFrom:nil,},EnvVar{Name:DEFAULT_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/bin/,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:6443,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:api-int.crc.testing,ValueFrom:nil,},EnvVar{Name:MULTUS_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:K8S_NODE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{68157440 0} {} 65Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cni-binary-copy,ReadOnly:false,MountPath:/entrypoint,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:os-release,ReadOnly:false,MountPath:/host/etc/os-release,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:system-cni-dir,ReadOnly:false,MountPath:/host/etc/cni/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-cni-dir,ReadOnly:false,MountPath:/host/run/multus/cni/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cnibin,ReadOnly:false,MountPath:/host/opt/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-socket-dir-parent,ReadOnly:false,MountPath:/host/run/multus,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-k8s-cni-cncf-io,ReadOnly:false,MountPath:/run/k8s.cni.cncf.io,SubPath:
,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-netns,ReadOnly:false,MountPath:/run/netns,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-cni-bin,ReadOnly:false,MountPath:/var/lib/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-cni-multus,ReadOnly:false,MountPath:/var/lib/cni/multus,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-kubelet,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:hostroot,ReadOnly:false,MountPath:/hostroot,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-conf-dir,ReadOnly:false,MountPath:/etc/cni/multus/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-daemon-config,ReadOnly:true,MountPath:/etc/cni/net.d/multus.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-multus-certs,ReadOnly:false,MountPath:/etc/cni/multus/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:etc-kubernetes,ReadOnly:false,MountPath:/etc/kubernetes,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-t2gvk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:fal
se,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod multus-6lblg_openshift-multus(d1c251f4-6539-4aa1-8979-47e74495aca3): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Mar 10 18:49:22 crc kubenswrapper[4861]: > logger="UnhandledError" Mar 10 18:49:22 crc kubenswrapper[4861]: I0310 18:49:22.399143 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qttbr" event={"ID":"771189c2-452d-4204-a0b7-abfe9ba62bd0","Type":"ContainerStarted","Data":"a30dbe9170714beb7fb0e958424c78121f4557c2b5e5e4a09ac2449b552c44f0"} Mar 10 18:49:22 crc kubenswrapper[4861]: E0310 18:49:22.399370 4861 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 10 18:49:22 crc kubenswrapper[4861]: container &Container{Name:dns-node-resolver,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2,Command:[/bin/bash -c #!/bin/bash Mar 10 18:49:22 crc kubenswrapper[4861]: set -uo pipefail Mar 10 18:49:22 crc kubenswrapper[4861]: Mar 10 18:49:22 crc kubenswrapper[4861]: trap 'jobs -p | xargs kill || true; wait; exit 0' TERM Mar 10 18:49:22 crc kubenswrapper[4861]: Mar 10 18:49:22 crc kubenswrapper[4861]: OPENSHIFT_MARKER="openshift-generated-node-resolver" Mar 10 18:49:22 crc kubenswrapper[4861]: HOSTS_FILE="/etc/hosts" Mar 10 18:49:22 crc kubenswrapper[4861]: TEMP_FILE="/etc/hosts.tmp" Mar 10 18:49:22 crc kubenswrapper[4861]: Mar 10 18:49:22 crc kubenswrapper[4861]: IFS=', ' read -r -a services <<< "${SERVICES}" Mar 10 18:49:22 crc kubenswrapper[4861]: Mar 10 18:49:22 crc kubenswrapper[4861]: # Make a temporary file with the old hosts file's attributes. Mar 10 18:49:22 crc kubenswrapper[4861]: if ! 
cp -f --attributes-only "${HOSTS_FILE}" "${TEMP_FILE}"; then Mar 10 18:49:22 crc kubenswrapper[4861]: echo "Failed to preserve hosts file. Exiting." Mar 10 18:49:22 crc kubenswrapper[4861]: exit 1 Mar 10 18:49:22 crc kubenswrapper[4861]: fi Mar 10 18:49:22 crc kubenswrapper[4861]: Mar 10 18:49:22 crc kubenswrapper[4861]: while true; do Mar 10 18:49:22 crc kubenswrapper[4861]: declare -A svc_ips Mar 10 18:49:22 crc kubenswrapper[4861]: for svc in "${services[@]}"; do Mar 10 18:49:22 crc kubenswrapper[4861]: # Fetch service IP from cluster dns if present. We make several tries Mar 10 18:49:22 crc kubenswrapper[4861]: # to do it: IPv4, IPv6, IPv4 over TCP and IPv6 over TCP. The two last ones Mar 10 18:49:22 crc kubenswrapper[4861]: # are for deployments with Kuryr on older OpenStack (OSP13) - those do not Mar 10 18:49:22 crc kubenswrapper[4861]: # support UDP loadbalancers and require reaching DNS through TCP. Mar 10 18:49:22 crc kubenswrapper[4861]: cmds=('dig -t A @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"' Mar 10 18:49:22 crc kubenswrapper[4861]: 'dig -t AAAA @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"' Mar 10 18:49:22 crc kubenswrapper[4861]: 'dig -t A +tcp +retry=0 @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"' Mar 10 18:49:22 crc kubenswrapper[4861]: 'dig -t AAAA +tcp +retry=0 @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"') Mar 10 18:49:22 crc kubenswrapper[4861]: for i in ${!cmds[*]} Mar 10 18:49:22 crc kubenswrapper[4861]: do Mar 10 18:49:22 crc kubenswrapper[4861]: ips=($(eval "${cmds[i]}")) Mar 10 18:49:22 crc kubenswrapper[4861]: if [[ "$?" 
-eq 0 && "${#ips[@]}" -ne 0 ]]; then Mar 10 18:49:22 crc kubenswrapper[4861]: svc_ips["${svc}"]="${ips[@]}" Mar 10 18:49:22 crc kubenswrapper[4861]: break Mar 10 18:49:22 crc kubenswrapper[4861]: fi Mar 10 18:49:22 crc kubenswrapper[4861]: done Mar 10 18:49:22 crc kubenswrapper[4861]: done Mar 10 18:49:22 crc kubenswrapper[4861]: Mar 10 18:49:22 crc kubenswrapper[4861]: # Update /etc/hosts only if we get valid service IPs Mar 10 18:49:22 crc kubenswrapper[4861]: # We will not update /etc/hosts when there is coredns service outage or api unavailability Mar 10 18:49:22 crc kubenswrapper[4861]: # Stale entries could exist in /etc/hosts if the service is deleted Mar 10 18:49:22 crc kubenswrapper[4861]: if [[ -n "${svc_ips[*]-}" ]]; then Mar 10 18:49:22 crc kubenswrapper[4861]: # Build a new hosts file from /etc/hosts with our custom entries filtered out Mar 10 18:49:22 crc kubenswrapper[4861]: if ! sed --silent "/# ${OPENSHIFT_MARKER}/d; w ${TEMP_FILE}" "${HOSTS_FILE}"; then Mar 10 18:49:22 crc kubenswrapper[4861]: # Only continue rebuilding the hosts entries if its original content is preserved Mar 10 18:49:22 crc kubenswrapper[4861]: sleep 60 & wait Mar 10 18:49:22 crc kubenswrapper[4861]: continue Mar 10 18:49:22 crc kubenswrapper[4861]: fi Mar 10 18:49:22 crc kubenswrapper[4861]: Mar 10 18:49:22 crc kubenswrapper[4861]: # Append resolver entries for services Mar 10 18:49:22 crc kubenswrapper[4861]: rc=0 Mar 10 18:49:22 crc kubenswrapper[4861]: for svc in "${!svc_ips[@]}"; do Mar 10 18:49:22 crc kubenswrapper[4861]: for ip in ${svc_ips[${svc}]}; do Mar 10 18:49:22 crc kubenswrapper[4861]: echo "${ip} ${svc} ${svc}.${CLUSTER_DOMAIN} # ${OPENSHIFT_MARKER}" >> "${TEMP_FILE}" || rc=$? 
Mar 10 18:49:22 crc kubenswrapper[4861]: done Mar 10 18:49:22 crc kubenswrapper[4861]: done Mar 10 18:49:22 crc kubenswrapper[4861]: if [[ $rc -ne 0 ]]; then Mar 10 18:49:22 crc kubenswrapper[4861]: sleep 60 & wait Mar 10 18:49:22 crc kubenswrapper[4861]: continue Mar 10 18:49:22 crc kubenswrapper[4861]: fi Mar 10 18:49:22 crc kubenswrapper[4861]: Mar 10 18:49:22 crc kubenswrapper[4861]: Mar 10 18:49:22 crc kubenswrapper[4861]: # TODO: Update /etc/hosts atomically to avoid any inconsistent behavior Mar 10 18:49:22 crc kubenswrapper[4861]: # Replace /etc/hosts with our modified version if needed Mar 10 18:49:22 crc kubenswrapper[4861]: cmp "${TEMP_FILE}" "${HOSTS_FILE}" || cp -f "${TEMP_FILE}" "${HOSTS_FILE}" Mar 10 18:49:22 crc kubenswrapper[4861]: # TEMP_FILE is not removed to avoid file create/delete and attributes copy churn Mar 10 18:49:22 crc kubenswrapper[4861]: fi Mar 10 18:49:22 crc kubenswrapper[4861]: sleep 60 & wait Mar 10 18:49:22 crc kubenswrapper[4861]: unset svc_ips Mar 10 18:49:22 crc kubenswrapper[4861]: done Mar 10 18:49:22 crc kubenswrapper[4861]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:SERVICES,Value:image-registry.openshift-image-registry.svc,ValueFrom:nil,},EnvVar{Name:NAMESERVER,Value:10.217.4.10,ValueFrom:nil,},EnvVar{Name:CLUSTER_DOMAIN,Value:cluster.local,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{22020096 0} {} 21Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hosts-file,ReadOnly:false,MountPath:/etc/hosts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5t7pf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod node-resolver-b87lw_openshift-dns(e474cdc9-b374-49a6-aece-afa19f8d5ee6): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Mar 10 18:49:22 crc kubenswrapper[4861]: > logger="UnhandledError" Mar 10 18:49:22 crc kubenswrapper[4861]: E0310 18:49:22.400243 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-multus/multus-6lblg" podUID="d1c251f4-6539-4aa1-8979-47e74495aca3" Mar 10 18:49:22 crc kubenswrapper[4861]: I0310 18:49:22.400666 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-j2s27" event={"ID":"391f4bfa-b94c-4b25-8f06-a2f19f912194","Type":"ContainerStarted","Data":"0fa0a3b2056499e22d3b7e44777f52860b43a3235c933ae6619b1245e1a3e836"} Mar 10 18:49:22 crc kubenswrapper[4861]: E0310 18:49:22.400653 
4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dns-node-resolver\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-dns/node-resolver-b87lw" podUID="e474cdc9-b374-49a6-aece-afa19f8d5ee6" Mar 10 18:49:22 crc kubenswrapper[4861]: E0310 18:49:22.401589 4861 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:machine-config-daemon,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a,Command:[/usr/bin/machine-config-daemon],Args:[start --payload-version=4.18.1],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:health,HostPort:8798,ContainerPort:8798,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:rootfs,ReadOnly:false,MountPath:/rootfs,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tng72,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/health,Port:{0 8798 
},Host:127.0.0.1,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:120,TimeoutSeconds:1,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod machine-config-daemon-qttbr_openshift-machine-config-operator(771189c2-452d-4204-a0b7-abfe9ba62bd0): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Mar 10 18:49:22 crc kubenswrapper[4861]: E0310 18:49:22.401756 4861 kuberuntime_manager.go:1274] "Unhandled Error" err="init container 
&Container{Name:egress-router-binary-copy,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c,Command:[/entrypoint/cnibincopy.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:RHEL8_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/rhel8/bin/,ValueFrom:nil,},EnvVar{Name:RHEL9_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/rhel9/bin/,ValueFrom:nil,},EnvVar{Name:DEFAULT_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/bin/,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cni-binary-copy,ReadOnly:false,MountPath:/entrypoint,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cnibin,ReadOnly:false,MountPath:/host/opt/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:os-release,ReadOnly:true,MountPath:/host/etc/os-release,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vlddg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod multus-additional-cni-plugins-j2s27_openshift-multus(391f4bfa-b94c-4b25-8f06-a2f19f912194): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Mar 10 18:49:22 crc kubenswrapper[4861]: I0310 18:49:22.402090 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-ovn-kubernetes/ovnkube-node-s2l62" event={"ID":"be820cd7-b3a7-4183-a408-67151247b6ee","Type":"ContainerStarted","Data":"6be1f6974a0888bc87217cfaadcdcaee5620a7a03535fb885811b71fc73b8a58"} Mar 10 18:49:22 crc kubenswrapper[4861]: E0310 18:49:22.402986 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"egress-router-binary-copy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-multus/multus-additional-cni-plugins-j2s27" podUID="391f4bfa-b94c-4b25-8f06-a2f19f912194" Mar 10 18:49:22 crc kubenswrapper[4861]: E0310 18:49:22.408138 4861 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09,Command:[],Args:[--secure-listen-address=0.0.0.0:9001 --config-file=/etc/kube-rbac-proxy/config-file.yaml --tls-cipher-suites=TLS_AES_128_GCM_SHA256,TLS_AES_256_GCM_SHA384,TLS_CHACHA20_POLY1305_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256 --tls-min-version=VersionTLS12 --upstream=http://127.0.0.1:8797 --logtostderr=true --tls-cert-file=/etc/tls/private/tls.crt --tls-private-key-file=/etc/tls/private/tls.key],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:9001,ContainerPort:9001,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{52428800 0} {} 50Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:proxy-tls,ReadOnly:false,MountPath:/etc/tls/private,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:mcd-auth-proxy-config,ReadOnly:false,MountPath:/etc/kube-rbac-proxy,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tng72,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod machine-config-daemon-qttbr_openshift-machine-config-operator(771189c2-452d-4204-a0b7-abfe9ba62bd0): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Mar 10 18:49:22 crc kubenswrapper[4861]: E0310 18:49:22.408561 4861 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 10 18:49:22 crc kubenswrapper[4861]: init container &Container{Name:kubecfg-setup,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,Command:[/bin/bash -c cat << EOF > /etc/ovn/kubeconfig Mar 10 18:49:22 crc kubenswrapper[4861]: apiVersion: v1 Mar 10 18:49:22 crc kubenswrapper[4861]: clusters: Mar 10 18:49:22 crc kubenswrapper[4861]: - cluster: Mar 10 18:49:22 crc kubenswrapper[4861]: certificate-authority: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt Mar 10 18:49:22 crc kubenswrapper[4861]: server: https://api-int.crc.testing:6443 Mar 10 18:49:22 crc kubenswrapper[4861]: name: default-cluster Mar 10 18:49:22 crc kubenswrapper[4861]: contexts: 
Mar 10 18:49:22 crc kubenswrapper[4861]: - context: Mar 10 18:49:22 crc kubenswrapper[4861]: cluster: default-cluster Mar 10 18:49:22 crc kubenswrapper[4861]: namespace: default Mar 10 18:49:22 crc kubenswrapper[4861]: user: default-auth Mar 10 18:49:22 crc kubenswrapper[4861]: name: default-context Mar 10 18:49:22 crc kubenswrapper[4861]: current-context: default-context Mar 10 18:49:22 crc kubenswrapper[4861]: kind: Config Mar 10 18:49:22 crc kubenswrapper[4861]: preferences: {} Mar 10 18:49:22 crc kubenswrapper[4861]: users: Mar 10 18:49:22 crc kubenswrapper[4861]: - name: default-auth Mar 10 18:49:22 crc kubenswrapper[4861]: user: Mar 10 18:49:22 crc kubenswrapper[4861]: client-certificate: /etc/ovn/ovnkube-node-certs/ovnkube-client-current.pem Mar 10 18:49:22 crc kubenswrapper[4861]: client-key: /etc/ovn/ovnkube-node-certs/ovnkube-client-current.pem Mar 10 18:49:22 crc kubenswrapper[4861]: EOF Mar 10 18:49:22 crc kubenswrapper[4861]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-openvswitch,ReadOnly:false,MountPath:/etc/ovn/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fwtwj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovnkube-node-s2l62_openshift-ovn-kubernetes(be820cd7-b3a7-4183-a408-67151247b6ee): CreateContainerConfigError: services have not yet been read at least once, 
cannot construct envvars Mar 10 18:49:22 crc kubenswrapper[4861]: > logger="UnhandledError" Mar 10 18:49:22 crc kubenswrapper[4861]: I0310 18:49:22.408798 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:49:22 crc kubenswrapper[4861]: I0310 18:49:22.408823 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:49:22 crc kubenswrapper[4861]: I0310 18:49:22.408833 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:49:22 crc kubenswrapper[4861]: I0310 18:49:22.408846 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 10 18:49:22 crc kubenswrapper[4861]: I0310 18:49:22.408856 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:49:22Z","lastTransitionTime":"2026-03-10T18:49:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 10 18:49:22 crc kubenswrapper[4861]: I0310 18:49:22.409061 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 10 18:49:22 crc kubenswrapper[4861]: E0310 18:49:22.410052 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubecfg-setup\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-ovn-kubernetes/ovnkube-node-s2l62" podUID="be820cd7-b3a7-4183-a408-67151247b6ee" Mar 10 18:49:22 crc kubenswrapper[4861]: E0310 18:49:22.409279 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"machine-config-daemon\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" 
pod="openshift-machine-config-operator/machine-config-daemon-qttbr" podUID="771189c2-452d-4204-a0b7-abfe9ba62bd0" Mar 10 18:49:22 crc kubenswrapper[4861]: I0310 18:49:22.422120 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 10 18:49:22 crc kubenswrapper[4861]: I0310 18:49:22.430831 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-6lblg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d1c251f4-6539-4aa1-8979-47e74495aca3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2gvk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:49:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-6lblg\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 10 18:49:22 crc kubenswrapper[4861]: I0310 18:49:22.447778 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-j2s27" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"391f4bfa-b94c-4b25-8f06-a2f19f912194\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:21Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlddg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlddg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlddg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlddg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlddg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlddg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlddg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:49:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-j2s27\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 10 18:49:22 crc kubenswrapper[4861]: I0310 18:49:22.466368 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"952bf490-6587-4240-a832-feac082e7775\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf6fe1422055e59455ca71c1a22213f55312c80667526cf13dbf61f7ccea7c75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a
67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd7523ddf0ac38e3b59b767d7ef95f452f24c5914734d6f1ac0187f1bede5fcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b1d8f9a97293ae86fe0b8c9ab76600caf04291ec3ddd2e47a071805bef675da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"starte
dAt\\\":\\\"2026-03-10T18:47:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://efd5039046658f4b595c747b6434cb0ef3befbf75874a6dab629cdbd6634524e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6234508294e7f87d3a8da0d3a1d8ecb96fffc3dbd5974e40a3bb7ceeee0d2a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIP
s\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34f62205737b2bb279cc7a0e2ffee213392c8135cca742b76e6f7ed44ddca754\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34f62205737b2bb279cc7a0e2ffee213392c8135cca742b76e6f7ed44ddca754\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:47:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-10T18:47:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://49cc08d4ed6e0fd87f4c957414e510f99511b9654a72324b707c165206c4979e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49cc08d4ed6e0fd87f4c957414e510f99511b9654a72324b707c165206c4979e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:47:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-10T18:47:39Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0995183765c106ba8369d3c05ac572596329eb1978876b770e327a430963ba9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"i
mageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0995183765c106ba8369d3c05ac572596329eb1978876b770e327a430963ba9b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:47:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-10T18:47:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:47:37Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 10 18:49:22 crc kubenswrapper[4861]: I0310 18:49:22.478746 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 10 18:49:22 crc kubenswrapper[4861]: I0310 18:49:22.492786 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qttbr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"771189c2-452d-4204-a0b7-abfe9ba62bd0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:21Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:21Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tng72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tng72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],
\\\"startTime\\\":\\\"2026-03-10T18:49:21Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qttbr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 10 18:49:22 crc kubenswrapper[4861]: I0310 18:49:22.507858 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 10 18:49:22 crc kubenswrapper[4861]: I0310 18:49:22.511977 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:49:22 crc kubenswrapper[4861]: I0310 18:49:22.512021 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:49:22 crc kubenswrapper[4861]: I0310 18:49:22.512035 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:49:22 crc kubenswrapper[4861]: I0310 18:49:22.512055 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 10 18:49:22 crc kubenswrapper[4861]: I0310 18:49:22.512072 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:49:22Z","lastTransitionTime":"2026-03-10T18:49:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 10 18:49:22 crc kubenswrapper[4861]: I0310 18:49:22.519796 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-b87lw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e474cdc9-b374-49a6-aece-afa19f8d5ee6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:21Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5t7pf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:49:21Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-b87lw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 10 18:49:22 crc kubenswrapper[4861]: I0310 18:49:22.545997 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-s2l62" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"be820cd7-b3a7-4183-a408-67151247b6ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:22Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:22Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:22Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwtwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwtwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwtwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwtwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwtwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwtwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwtwj\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwtwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwtwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:49:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-s2l62\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 10 18:49:22 crc kubenswrapper[4861]: I0310 18:49:22.565289 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c65aa66-5db2-421b-ad46-0ff1e2b1cb22\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://52e87225b434b0800764a5c2306d8079c44bff105d02de78ab085b434f56031f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a8a6f58ea1d180f50a7ffde2b16f470901281a847bd85cc0bc8a62bbf9f8e70\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://484df0ad2e71b2faec0ed53537512b115e6d4ab7cd3212cd29309538bd013c51\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1665ca49c2c451e187b70bfc13ce0034d2c07b92943b18e77ae09cd6e5505557\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1665ca49c2c451e187b70bfc13ce0034d2c07b92943b18e77ae09cd6e5505557\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-10T18:48:50Z\\\",\\\"message\\\":\\\"le observer\\\\nW0310 18:48:50.587139 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0310 18:48:50.587315 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0310 18:48:50.588407 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-4176947974/tls.crt::/tmp/serving-cert-4176947974/tls.key\\\\\\\"\\\\nI0310 18:48:50.773986 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0310 18:48:50.776438 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0310 18:48:50.776455 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0310 18:48:50.776477 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0310 18:48:50.776482 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0310 18:48:50.783076 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0310 18:48:50.783088 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0310 18:48:50.783118 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0310 18:48:50.783134 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0310 18:48:50.783146 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0310 18:48:50.783157 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0310 18:48:50.783167 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0310 18:48:50.783173 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0310 18:48:50.784467 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-10T18:48:50Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44dde00a3ae562bbb5504d299475795cc38b22c2b6decba2ff15067bce7436df\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:39Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a14137bdfec242e37af20a572af2edea25fb1d8a1f9708f8d0193d1a7675b1bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a14137bdfec242e37af20a572af2edea25fb1d8a1f9708f8d0193d1a7675b1bc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:47:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-10T18:47:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:47:37Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 10 18:49:22 crc kubenswrapper[4861]: I0310 18:49:22.580070 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 10 18:49:22 crc kubenswrapper[4861]: I0310 18:49:22.593237 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 10 18:49:22 crc kubenswrapper[4861]: I0310 18:49:22.614410 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 10 18:49:22 crc kubenswrapper[4861]: I0310 18:49:22.615191 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:49:22 crc kubenswrapper[4861]: I0310 18:49:22.615236 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:49:22 crc kubenswrapper[4861]: I0310 18:49:22.615249 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:49:22 crc kubenswrapper[4861]: I0310 18:49:22.615267 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 10 18:49:22 crc kubenswrapper[4861]: I0310 18:49:22.615279 4861 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:49:22Z","lastTransitionTime":"2026-03-10T18:49:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 10 18:49:22 crc kubenswrapper[4861]: I0310 18:49:22.627432 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 10 18:49:22 crc kubenswrapper[4861]: I0310 18:49:22.641842 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-6lblg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d1c251f4-6539-4aa1-8979-47e74495aca3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2gvk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:49:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-6lblg\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 10 18:49:22 crc kubenswrapper[4861]: I0310 18:49:22.653855 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qttbr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"771189c2-452d-4204-a0b7-abfe9ba62bd0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:21Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:21Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tng72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tng72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:49:21Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-qttbr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 10 18:49:22 crc kubenswrapper[4861]: I0310 18:49:22.670046 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-j2s27" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"391f4bfa-b94c-4b25-8f06-a2f19f912194\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:21Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlddg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlddg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlddg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\
\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlddg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlddg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\"
:\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlddg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlddg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:49:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-j2s27\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 10 18:49:22 crc kubenswrapper[4861]: I0310 18:49:22.696512 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"952bf490-6587-4240-a832-feac082e7775\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf6fe1422055e59455ca71c1a22213f55312c80667526cf13dbf61f7ccea7c75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd7523ddf0ac38e3b59b767d7ef95f452f24c5914734d6f1ac0187f1bede5fcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b1d8f9a97293ae86fe0b8c9ab76600caf04291ec3ddd2e47a071805bef675da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://efd5039046658f4b595c747b6434cb0ef3befbf75874a6dab629cdbd6634524e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6234508294e7f87d3a8da0d3a1d8ecb96fffc3dbd5974e40a3bb7ceeee0d2a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34f62205737b2bb279cc7a0e2ffee213392c8135cca742b76e6f7ed44ddca754\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34f62205737b2bb279cc7a0e2ffee213392c8135cca742b76e6f7ed44ddca754\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-03-10T18:47:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-10T18:47:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://49cc08d4ed6e0fd87f4c957414e510f99511b9654a72324b707c165206c4979e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49cc08d4ed6e0fd87f4c957414e510f99511b9654a72324b707c165206c4979e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:47:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-10T18:47:39Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0995183765c106ba8369d3c05ac572596329eb1978876b770e327a430963ba9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0995183765c106ba8369d3c05ac572596329eb1978876b770e327a430963ba9b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:47:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-10T18:47:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:47:37Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 10 18:49:22 crc kubenswrapper[4861]: I0310 18:49:22.710217 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 10 18:49:22 crc kubenswrapper[4861]: I0310 18:49:22.718428 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:49:22 crc kubenswrapper[4861]: I0310 18:49:22.718482 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:49:22 crc kubenswrapper[4861]: I0310 18:49:22.718499 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:49:22 crc kubenswrapper[4861]: I0310 18:49:22.718523 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 10 18:49:22 crc kubenswrapper[4861]: I0310 18:49:22.718541 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:49:22Z","lastTransitionTime":"2026-03-10T18:49:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 10 18:49:22 crc kubenswrapper[4861]: I0310 18:49:22.724592 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 10 18:49:22 crc kubenswrapper[4861]: I0310 18:49:22.734188 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-b87lw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e474cdc9-b374-49a6-aece-afa19f8d5ee6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:21Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5t7pf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:49:21Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-b87lw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 10 18:49:22 crc kubenswrapper[4861]: I0310 18:49:22.746427 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 10 18:49:22 crc kubenswrapper[4861]: I0310 18:49:22.769366 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-s2l62" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"be820cd7-b3a7-4183-a408-67151247b6ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:22Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:22Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging 
kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:22Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwtwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\
"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwtwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwtwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwtwj\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwtwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\
\"name\\\":\\\"kube-api-access-fwtwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":
\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwtwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwtwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwtwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:49:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-s2l62\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 10 18:49:22 crc kubenswrapper[4861]: I0310 18:49:22.806179 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c65aa66-5db2-421b-ad46-0ff1e2b1cb22\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://52e87225b434b0800764a5c2306d8079c44bff105d02de78ab085b434f56031f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a8a6f58ea1d180f50a7ffde2b16f470901281a847bd85cc0bc8a62bbf9f8e70\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://484df0ad2e71b2faec0ed53537512b115e6d4ab7cd3212cd29309538bd013c51\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1665ca49c2c451e187b70bfc13ce0034d2c07b92943b18e77ae09cd6e5505557\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1665ca49c2c451e187b70bfc13ce0034d2c07b92943b18e77ae09cd6e5505557\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-10T18:48:50Z\\\",\\\"message\\\":\\\"le observer\\\\nW0310 18:48:50.587139 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0310 18:48:50.587315 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0310 18:48:50.588407 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-4176947974/tls.crt::/tmp/serving-cert-4176947974/tls.key\\\\\\\"\\\\nI0310 18:48:50.773986 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0310 18:48:50.776438 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0310 18:48:50.776455 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0310 18:48:50.776477 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0310 18:48:50.776482 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0310 18:48:50.783076 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0310 18:48:50.783088 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0310 18:48:50.783118 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0310 18:48:50.783134 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0310 18:48:50.783146 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0310 18:48:50.783157 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0310 18:48:50.783167 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0310 18:48:50.783173 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0310 18:48:50.784467 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-10T18:48:50Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44dde00a3ae562bbb5504d299475795cc38b22c2b6decba2ff15067bce7436df\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:39Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a14137bdfec242e37af20a572af2edea25fb1d8a1f9708f8d0193d1a7675b1bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a14137bdfec242e37af20a572af2edea25fb1d8a1f9708f8d0193d1a7675b1bc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:47:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-10T18:47:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:47:37Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 10 18:49:22 crc kubenswrapper[4861]: I0310 18:49:22.821300 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:49:22 crc kubenswrapper[4861]: I0310 18:49:22.821363 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:49:22 crc kubenswrapper[4861]: I0310 18:49:22.821412 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:49:22 crc kubenswrapper[4861]: I0310 18:49:22.821441 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 10 18:49:22 crc kubenswrapper[4861]: I0310 18:49:22.821461 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:49:22Z","lastTransitionTime":"2026-03-10T18:49:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 10 18:49:22 crc kubenswrapper[4861]: I0310 18:49:22.828435 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 10 18:49:22 crc kubenswrapper[4861]: I0310 18:49:22.925196 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:49:22 crc kubenswrapper[4861]: I0310 18:49:22.925267 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:49:22 crc kubenswrapper[4861]: I0310 18:49:22.925286 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:49:22 crc kubenswrapper[4861]: I0310 18:49:22.925512 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 10 18:49:22 crc kubenswrapper[4861]: I0310 18:49:22.925531 4861 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:49:22Z","lastTransitionTime":"2026-03-10T18:49:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 10 18:49:23 crc kubenswrapper[4861]: I0310 18:49:23.028648 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:49:23 crc kubenswrapper[4861]: I0310 18:49:23.028698 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:49:23 crc kubenswrapper[4861]: I0310 18:49:23.028766 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:49:23 crc kubenswrapper[4861]: I0310 18:49:23.028803 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 10 18:49:23 crc kubenswrapper[4861]: I0310 18:49:23.028822 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:49:23Z","lastTransitionTime":"2026-03-10T18:49:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 10 18:49:23 crc kubenswrapper[4861]: I0310 18:49:23.131330 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:49:23 crc kubenswrapper[4861]: I0310 18:49:23.131402 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:49:23 crc kubenswrapper[4861]: I0310 18:49:23.131422 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:49:23 crc kubenswrapper[4861]: I0310 18:49:23.131446 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 10 18:49:23 crc kubenswrapper[4861]: I0310 18:49:23.131463 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:49:23Z","lastTransitionTime":"2026-03-10T18:49:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 10 18:49:23 crc kubenswrapper[4861]: I0310 18:49:23.234302 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:49:23 crc kubenswrapper[4861]: I0310 18:49:23.234359 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:49:23 crc kubenswrapper[4861]: I0310 18:49:23.234378 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:49:23 crc kubenswrapper[4861]: I0310 18:49:23.234400 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 10 18:49:23 crc kubenswrapper[4861]: I0310 18:49:23.234417 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:49:23Z","lastTransitionTime":"2026-03-10T18:49:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 10 18:49:23 crc kubenswrapper[4861]: I0310 18:49:23.336863 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:49:23 crc kubenswrapper[4861]: I0310 18:49:23.336924 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:49:23 crc kubenswrapper[4861]: I0310 18:49:23.336942 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:49:23 crc kubenswrapper[4861]: I0310 18:49:23.336965 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 10 18:49:23 crc kubenswrapper[4861]: I0310 18:49:23.336983 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:49:23Z","lastTransitionTime":"2026-03-10T18:49:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 10 18:49:23 crc kubenswrapper[4861]: I0310 18:49:23.440158 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:49:23 crc kubenswrapper[4861]: I0310 18:49:23.440327 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:49:23 crc kubenswrapper[4861]: I0310 18:49:23.440363 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:49:23 crc kubenswrapper[4861]: I0310 18:49:23.440441 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 10 18:49:23 crc kubenswrapper[4861]: I0310 18:49:23.440474 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:49:23Z","lastTransitionTime":"2026-03-10T18:49:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 10 18:49:23 crc kubenswrapper[4861]: I0310 18:49:23.543352 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:49:23 crc kubenswrapper[4861]: I0310 18:49:23.543410 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:49:23 crc kubenswrapper[4861]: I0310 18:49:23.543488 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:49:23 crc kubenswrapper[4861]: I0310 18:49:23.543514 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 10 18:49:23 crc kubenswrapper[4861]: I0310 18:49:23.543571 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:49:23Z","lastTransitionTime":"2026-03-10T18:49:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 10 18:49:23 crc kubenswrapper[4861]: I0310 18:49:23.646287 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:49:23 crc kubenswrapper[4861]: I0310 18:49:23.646338 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:49:23 crc kubenswrapper[4861]: I0310 18:49:23.646355 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:49:23 crc kubenswrapper[4861]: I0310 18:49:23.646375 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 10 18:49:23 crc kubenswrapper[4861]: I0310 18:49:23.646392 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:49:23Z","lastTransitionTime":"2026-03-10T18:49:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 10 18:49:23 crc kubenswrapper[4861]: I0310 18:49:23.748758 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:49:23 crc kubenswrapper[4861]: I0310 18:49:23.748810 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:49:23 crc kubenswrapper[4861]: I0310 18:49:23.748828 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:49:23 crc kubenswrapper[4861]: I0310 18:49:23.748848 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 10 18:49:23 crc kubenswrapper[4861]: I0310 18:49:23.748863 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:49:23Z","lastTransitionTime":"2026-03-10T18:49:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 10 18:49:23 crc kubenswrapper[4861]: I0310 18:49:23.852515 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:49:23 crc kubenswrapper[4861]: I0310 18:49:23.852579 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:49:23 crc kubenswrapper[4861]: I0310 18:49:23.852598 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:49:23 crc kubenswrapper[4861]: I0310 18:49:23.852623 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 10 18:49:23 crc kubenswrapper[4861]: I0310 18:49:23.852642 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:49:23Z","lastTransitionTime":"2026-03-10T18:49:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 10 18:49:23 crc kubenswrapper[4861]: I0310 18:49:23.955860 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:49:23 crc kubenswrapper[4861]: I0310 18:49:23.955928 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:49:23 crc kubenswrapper[4861]: I0310 18:49:23.955949 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:49:23 crc kubenswrapper[4861]: I0310 18:49:23.955978 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 10 18:49:23 crc kubenswrapper[4861]: I0310 18:49:23.955998 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:49:23Z","lastTransitionTime":"2026-03-10T18:49:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 10 18:49:23 crc kubenswrapper[4861]: I0310 18:49:23.957080 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 10 18:49:23 crc kubenswrapper[4861]: I0310 18:49:23.957163 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 10 18:49:23 crc kubenswrapper[4861]: E0310 18:49:23.957220 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 10 18:49:23 crc kubenswrapper[4861]: I0310 18:49:23.957087 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 10 18:49:23 crc kubenswrapper[4861]: E0310 18:49:23.957375 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 10 18:49:23 crc kubenswrapper[4861]: E0310 18:49:23.957579 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 10 18:49:24 crc kubenswrapper[4861]: I0310 18:49:24.058980 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:49:24 crc kubenswrapper[4861]: I0310 18:49:24.059032 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:49:24 crc kubenswrapper[4861]: I0310 18:49:24.059049 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:49:24 crc kubenswrapper[4861]: I0310 18:49:24.059072 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 10 18:49:24 crc kubenswrapper[4861]: I0310 18:49:24.059088 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:49:24Z","lastTransitionTime":"2026-03-10T18:49:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 10 18:49:24 crc kubenswrapper[4861]: I0310 18:49:24.161701 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:49:24 crc kubenswrapper[4861]: I0310 18:49:24.161800 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:49:24 crc kubenswrapper[4861]: I0310 18:49:24.161888 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:49:24 crc kubenswrapper[4861]: I0310 18:49:24.161927 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 10 18:49:24 crc kubenswrapper[4861]: I0310 18:49:24.161945 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:49:24Z","lastTransitionTime":"2026-03-10T18:49:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 10 18:49:24 crc kubenswrapper[4861]: I0310 18:49:24.264294 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:49:24 crc kubenswrapper[4861]: I0310 18:49:24.264345 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:49:24 crc kubenswrapper[4861]: I0310 18:49:24.264362 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:49:24 crc kubenswrapper[4861]: I0310 18:49:24.264384 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 10 18:49:24 crc kubenswrapper[4861]: I0310 18:49:24.264400 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:49:24Z","lastTransitionTime":"2026-03-10T18:49:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 10 18:49:24 crc kubenswrapper[4861]: I0310 18:49:24.367454 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:49:24 crc kubenswrapper[4861]: I0310 18:49:24.367546 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:49:24 crc kubenswrapper[4861]: I0310 18:49:24.367566 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:49:24 crc kubenswrapper[4861]: I0310 18:49:24.367586 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 10 18:49:24 crc kubenswrapper[4861]: I0310 18:49:24.367601 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:49:24Z","lastTransitionTime":"2026-03-10T18:49:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 10 18:49:24 crc kubenswrapper[4861]: I0310 18:49:24.471651 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:49:24 crc kubenswrapper[4861]: I0310 18:49:24.471771 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:49:24 crc kubenswrapper[4861]: I0310 18:49:24.471795 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:49:24 crc kubenswrapper[4861]: I0310 18:49:24.471823 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 10 18:49:24 crc kubenswrapper[4861]: I0310 18:49:24.471852 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:49:24Z","lastTransitionTime":"2026-03-10T18:49:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 10 18:49:24 crc kubenswrapper[4861]: I0310 18:49:24.575271 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:49:24 crc kubenswrapper[4861]: I0310 18:49:24.575344 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:49:24 crc kubenswrapper[4861]: I0310 18:49:24.575366 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:49:24 crc kubenswrapper[4861]: I0310 18:49:24.575395 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 10 18:49:24 crc kubenswrapper[4861]: I0310 18:49:24.575418 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:49:24Z","lastTransitionTime":"2026-03-10T18:49:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 10 18:49:24 crc kubenswrapper[4861]: I0310 18:49:24.678744 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:49:24 crc kubenswrapper[4861]: I0310 18:49:24.678819 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:49:24 crc kubenswrapper[4861]: I0310 18:49:24.678837 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:49:24 crc kubenswrapper[4861]: I0310 18:49:24.678861 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 10 18:49:24 crc kubenswrapper[4861]: I0310 18:49:24.678880 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:49:24Z","lastTransitionTime":"2026-03-10T18:49:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 10 18:49:24 crc kubenswrapper[4861]: I0310 18:49:24.784881 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:49:24 crc kubenswrapper[4861]: I0310 18:49:24.784939 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:49:24 crc kubenswrapper[4861]: I0310 18:49:24.784956 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:49:24 crc kubenswrapper[4861]: I0310 18:49:24.784980 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 10 18:49:24 crc kubenswrapper[4861]: I0310 18:49:24.784997 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:49:24Z","lastTransitionTime":"2026-03-10T18:49:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 10 18:49:24 crc kubenswrapper[4861]: I0310 18:49:24.887808 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:49:24 crc kubenswrapper[4861]: I0310 18:49:24.888202 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:49:24 crc kubenswrapper[4861]: I0310 18:49:24.888423 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:49:24 crc kubenswrapper[4861]: I0310 18:49:24.888628 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 10 18:49:24 crc kubenswrapper[4861]: I0310 18:49:24.888917 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:49:24Z","lastTransitionTime":"2026-03-10T18:49:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 10 18:49:24 crc kubenswrapper[4861]: I0310 18:49:24.992285 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:49:24 crc kubenswrapper[4861]: I0310 18:49:24.992364 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:49:24 crc kubenswrapper[4861]: I0310 18:49:24.992383 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:49:24 crc kubenswrapper[4861]: I0310 18:49:24.992411 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 10 18:49:24 crc kubenswrapper[4861]: I0310 18:49:24.992435 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:49:24Z","lastTransitionTime":"2026-03-10T18:49:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 10 18:49:25 crc kubenswrapper[4861]: I0310 18:49:25.068793 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:49:25 crc kubenswrapper[4861]: I0310 18:49:25.068865 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:49:25 crc kubenswrapper[4861]: I0310 18:49:25.068884 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:49:25 crc kubenswrapper[4861]: I0310 18:49:25.068913 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 10 18:49:25 crc kubenswrapper[4861]: I0310 18:49:25.068933 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:49:25Z","lastTransitionTime":"2026-03-10T18:49:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 10 18:49:25 crc kubenswrapper[4861]: E0310 18:49:25.084098 4861 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-10T18:49:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-10T18:49:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:25Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-10T18:49:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-10T18:49:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:25Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"19532032-9073-404f-bda8-c4343aa30670\\\",\\\"systemUUID\\\":\\\"b4ef8d49-23f5-4cae-bbac-08586c607b9d\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 10 18:49:25 crc kubenswrapper[4861]: I0310 18:49:25.088971 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:49:25 crc kubenswrapper[4861]: I0310 18:49:25.089022 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:49:25 crc kubenswrapper[4861]: I0310 18:49:25.089040 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:49:25 crc kubenswrapper[4861]: I0310 18:49:25.089109 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 10 18:49:25 crc kubenswrapper[4861]: I0310 18:49:25.089131 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:49:25Z","lastTransitionTime":"2026-03-10T18:49:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 10 18:49:25 crc kubenswrapper[4861]: E0310 18:49:25.103794 4861 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-10T18:49:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-10T18:49:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:25Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-10T18:49:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-10T18:49:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:25Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"19532032-9073-404f-bda8-c4343aa30670\\\",\\\"systemUUID\\\":\\\"b4ef8d49-23f5-4cae-bbac-08586c607b9d\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 10 18:49:25 crc kubenswrapper[4861]: I0310 18:49:25.108143 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:49:25 crc kubenswrapper[4861]: I0310 18:49:25.108194 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:49:25 crc kubenswrapper[4861]: I0310 18:49:25.108216 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:49:25 crc kubenswrapper[4861]: I0310 18:49:25.108240 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 10 18:49:25 crc kubenswrapper[4861]: I0310 18:49:25.108258 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:49:25Z","lastTransitionTime":"2026-03-10T18:49:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 10 18:49:25 crc kubenswrapper[4861]: E0310 18:49:25.123239 4861 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-10T18:49:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-10T18:49:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:25Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-10T18:49:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-10T18:49:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:25Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"19532032-9073-404f-bda8-c4343aa30670\\\",\\\"systemUUID\\\":\\\"b4ef8d49-23f5-4cae-bbac-08586c607b9d\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 10 18:49:25 crc kubenswrapper[4861]: I0310 18:49:25.128054 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:49:25 crc kubenswrapper[4861]: I0310 18:49:25.128123 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:49:25 crc kubenswrapper[4861]: I0310 18:49:25.128144 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:49:25 crc kubenswrapper[4861]: I0310 18:49:25.128176 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 10 18:49:25 crc kubenswrapper[4861]: I0310 18:49:25.128199 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:49:25Z","lastTransitionTime":"2026-03-10T18:49:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 10 18:49:25 crc kubenswrapper[4861]: E0310 18:49:25.142738 4861 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-10T18:49:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-10T18:49:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:25Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-10T18:49:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-10T18:49:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:25Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"19532032-9073-404f-bda8-c4343aa30670\\\",\\\"systemUUID\\\":\\\"b4ef8d49-23f5-4cae-bbac-08586c607b9d\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 10 18:49:25 crc kubenswrapper[4861]: I0310 18:49:25.147640 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:49:25 crc kubenswrapper[4861]: I0310 18:49:25.147698 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:49:25 crc kubenswrapper[4861]: I0310 18:49:25.147744 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:49:25 crc kubenswrapper[4861]: I0310 18:49:25.147771 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 10 18:49:25 crc kubenswrapper[4861]: I0310 18:49:25.147789 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:49:25Z","lastTransitionTime":"2026-03-10T18:49:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 10 18:49:25 crc kubenswrapper[4861]: E0310 18:49:25.163410 4861 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-10T18:49:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-10T18:49:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:25Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-10T18:49:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-10T18:49:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:25Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"19532032-9073-404f-bda8-c4343aa30670\\\",\\\"systemUUID\\\":\\\"b4ef8d49-23f5-4cae-bbac-08586c607b9d\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 10 18:49:25 crc kubenswrapper[4861]: E0310 18:49:25.163658 4861 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Mar 10 18:49:25 crc kubenswrapper[4861]: I0310 18:49:25.166180 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:49:25 crc kubenswrapper[4861]: I0310 18:49:25.166235 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:49:25 crc kubenswrapper[4861]: I0310 18:49:25.166254 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:49:25 crc kubenswrapper[4861]: I0310 18:49:25.166278 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 10 18:49:25 crc kubenswrapper[4861]: I0310 18:49:25.166296 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:49:25Z","lastTransitionTime":"2026-03-10T18:49:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 10 18:49:25 crc kubenswrapper[4861]: I0310 18:49:25.269449 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:49:25 crc kubenswrapper[4861]: I0310 18:49:25.269534 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:49:25 crc kubenswrapper[4861]: I0310 18:49:25.269554 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:49:25 crc kubenswrapper[4861]: I0310 18:49:25.269580 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 10 18:49:25 crc kubenswrapper[4861]: I0310 18:49:25.269602 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:49:25Z","lastTransitionTime":"2026-03-10T18:49:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 10 18:49:25 crc kubenswrapper[4861]: I0310 18:49:25.372447 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:49:25 crc kubenswrapper[4861]: I0310 18:49:25.372528 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:49:25 crc kubenswrapper[4861]: I0310 18:49:25.372553 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:49:25 crc kubenswrapper[4861]: I0310 18:49:25.372585 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 10 18:49:25 crc kubenswrapper[4861]: I0310 18:49:25.372608 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:49:25Z","lastTransitionTime":"2026-03-10T18:49:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 10 18:49:25 crc kubenswrapper[4861]: I0310 18:49:25.475920 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:49:25 crc kubenswrapper[4861]: I0310 18:49:25.475986 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:49:25 crc kubenswrapper[4861]: I0310 18:49:25.476003 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:49:25 crc kubenswrapper[4861]: I0310 18:49:25.476030 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 10 18:49:25 crc kubenswrapper[4861]: I0310 18:49:25.476048 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:49:25Z","lastTransitionTime":"2026-03-10T18:49:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 10 18:49:25 crc kubenswrapper[4861]: I0310 18:49:25.579454 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:49:25 crc kubenswrapper[4861]: I0310 18:49:25.579519 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:49:25 crc kubenswrapper[4861]: I0310 18:49:25.579536 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:49:25 crc kubenswrapper[4861]: I0310 18:49:25.579562 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 10 18:49:25 crc kubenswrapper[4861]: I0310 18:49:25.579583 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:49:25Z","lastTransitionTime":"2026-03-10T18:49:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 10 18:49:25 crc kubenswrapper[4861]: I0310 18:49:25.683627 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:49:25 crc kubenswrapper[4861]: I0310 18:49:25.683743 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:49:25 crc kubenswrapper[4861]: I0310 18:49:25.683765 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:49:25 crc kubenswrapper[4861]: I0310 18:49:25.683799 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 10 18:49:25 crc kubenswrapper[4861]: I0310 18:49:25.683821 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:49:25Z","lastTransitionTime":"2026-03-10T18:49:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 10 18:49:25 crc kubenswrapper[4861]: I0310 18:49:25.788303 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:49:25 crc kubenswrapper[4861]: I0310 18:49:25.788367 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:49:25 crc kubenswrapper[4861]: I0310 18:49:25.788385 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:49:25 crc kubenswrapper[4861]: I0310 18:49:25.788418 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 10 18:49:25 crc kubenswrapper[4861]: I0310 18:49:25.788435 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:49:25Z","lastTransitionTime":"2026-03-10T18:49:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 10 18:49:25 crc kubenswrapper[4861]: I0310 18:49:25.890907 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:49:25 crc kubenswrapper[4861]: I0310 18:49:25.890986 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:49:25 crc kubenswrapper[4861]: I0310 18:49:25.891008 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:49:25 crc kubenswrapper[4861]: I0310 18:49:25.891037 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 10 18:49:25 crc kubenswrapper[4861]: I0310 18:49:25.891058 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:49:25Z","lastTransitionTime":"2026-03-10T18:49:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 10 18:49:25 crc kubenswrapper[4861]: I0310 18:49:25.957481 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 10 18:49:25 crc kubenswrapper[4861]: I0310 18:49:25.957523 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 10 18:49:25 crc kubenswrapper[4861]: E0310 18:49:25.957612 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 10 18:49:25 crc kubenswrapper[4861]: I0310 18:49:25.957742 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 10 18:49:25 crc kubenswrapper[4861]: E0310 18:49:25.957752 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 10 18:49:25 crc kubenswrapper[4861]: E0310 18:49:25.958143 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 10 18:49:25 crc kubenswrapper[4861]: I0310 18:49:25.994632 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:49:25 crc kubenswrapper[4861]: I0310 18:49:25.994696 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:49:25 crc kubenswrapper[4861]: I0310 18:49:25.994756 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:49:25 crc kubenswrapper[4861]: I0310 18:49:25.994788 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 10 18:49:25 crc kubenswrapper[4861]: I0310 18:49:25.994811 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:49:25Z","lastTransitionTime":"2026-03-10T18:49:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 10 18:49:26 crc kubenswrapper[4861]: I0310 18:49:26.098434 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:49:26 crc kubenswrapper[4861]: I0310 18:49:26.098488 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:49:26 crc kubenswrapper[4861]: I0310 18:49:26.098506 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:49:26 crc kubenswrapper[4861]: I0310 18:49:26.098528 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 10 18:49:26 crc kubenswrapper[4861]: I0310 18:49:26.098544 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:49:26Z","lastTransitionTime":"2026-03-10T18:49:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 10 18:49:26 crc kubenswrapper[4861]: I0310 18:49:26.201159 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:49:26 crc kubenswrapper[4861]: I0310 18:49:26.201224 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:49:26 crc kubenswrapper[4861]: I0310 18:49:26.201238 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:49:26 crc kubenswrapper[4861]: I0310 18:49:26.201261 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 10 18:49:26 crc kubenswrapper[4861]: I0310 18:49:26.201275 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:49:26Z","lastTransitionTime":"2026-03-10T18:49:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 10 18:49:26 crc kubenswrapper[4861]: I0310 18:49:26.304627 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:49:26 crc kubenswrapper[4861]: I0310 18:49:26.304682 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:49:26 crc kubenswrapper[4861]: I0310 18:49:26.304699 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:49:26 crc kubenswrapper[4861]: I0310 18:49:26.304760 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 10 18:49:26 crc kubenswrapper[4861]: I0310 18:49:26.305158 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:49:26Z","lastTransitionTime":"2026-03-10T18:49:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 10 18:49:26 crc kubenswrapper[4861]: I0310 18:49:26.408453 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:49:26 crc kubenswrapper[4861]: I0310 18:49:26.408513 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:49:26 crc kubenswrapper[4861]: I0310 18:49:26.408531 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:49:26 crc kubenswrapper[4861]: I0310 18:49:26.408556 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 10 18:49:26 crc kubenswrapper[4861]: I0310 18:49:26.408578 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:49:26Z","lastTransitionTime":"2026-03-10T18:49:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 10 18:49:26 crc kubenswrapper[4861]: I0310 18:49:26.511069 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:49:26 crc kubenswrapper[4861]: I0310 18:49:26.511144 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:49:26 crc kubenswrapper[4861]: I0310 18:49:26.511166 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:49:26 crc kubenswrapper[4861]: I0310 18:49:26.511227 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 10 18:49:26 crc kubenswrapper[4861]: I0310 18:49:26.511252 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:49:26Z","lastTransitionTime":"2026-03-10T18:49:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 10 18:49:26 crc kubenswrapper[4861]: I0310 18:49:26.613866 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:49:26 crc kubenswrapper[4861]: I0310 18:49:26.614006 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:49:26 crc kubenswrapper[4861]: I0310 18:49:26.614030 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:49:26 crc kubenswrapper[4861]: I0310 18:49:26.614057 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 10 18:49:26 crc kubenswrapper[4861]: I0310 18:49:26.614076 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:49:26Z","lastTransitionTime":"2026-03-10T18:49:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 10 18:49:26 crc kubenswrapper[4861]: I0310 18:49:26.716112 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:49:26 crc kubenswrapper[4861]: I0310 18:49:26.716162 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:49:26 crc kubenswrapper[4861]: I0310 18:49:26.716174 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:49:26 crc kubenswrapper[4861]: I0310 18:49:26.716193 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 10 18:49:26 crc kubenswrapper[4861]: I0310 18:49:26.716207 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:49:26Z","lastTransitionTime":"2026-03-10T18:49:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 10 18:49:26 crc kubenswrapper[4861]: I0310 18:49:26.819008 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:49:26 crc kubenswrapper[4861]: I0310 18:49:26.819056 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:49:26 crc kubenswrapper[4861]: I0310 18:49:26.819072 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:49:26 crc kubenswrapper[4861]: I0310 18:49:26.819098 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 10 18:49:26 crc kubenswrapper[4861]: I0310 18:49:26.819113 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:49:26Z","lastTransitionTime":"2026-03-10T18:49:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 10 18:49:26 crc kubenswrapper[4861]: I0310 18:49:26.922279 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:49:26 crc kubenswrapper[4861]: I0310 18:49:26.922354 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:49:26 crc kubenswrapper[4861]: I0310 18:49:26.922377 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:49:26 crc kubenswrapper[4861]: I0310 18:49:26.922407 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 10 18:49:26 crc kubenswrapper[4861]: I0310 18:49:26.922429 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:49:26Z","lastTransitionTime":"2026-03-10T18:49:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 10 18:49:26 crc kubenswrapper[4861]: I0310 18:49:26.973690 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c65aa66-5db2-421b-ad46-0ff1e2b1cb22\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://52e87225b434b0800764a5c2306d8079c44bff105d02de78ab085b434f56031f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a8a6f58ea1d180f50a7ffde2b16f470901281a847bd85cc0bc8a62bbf9f8e70\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://484df0ad2e71b2faec0ed53537512b115e6d4ab7cd3212cd29309538bd013c51\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1665ca49c2c451e187b70bfc13ce0034d2c07b92943b18e77ae09cd6e5505557\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1665ca49c2c451e187b70bfc13ce0034d2c07b92943b18e77ae09cd6e5505557\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-10T18:48:50Z\\\",\\\"message\\\":\\\"le observer\\\\nW0310 18:48:50.587139 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0310 18:48:50.587315 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0310 18:48:50.588407 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-4176947974/tls.crt::/tmp/serving-cert-4176947974/tls.key\\\\\\\"\\\\nI0310 18:48:50.773986 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0310 18:48:50.776438 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0310 18:48:50.776455 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0310 18:48:50.776477 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0310 18:48:50.776482 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0310 18:48:50.783076 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0310 18:48:50.783088 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0310 18:48:50.783118 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0310 18:48:50.783134 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0310 18:48:50.783146 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0310 18:48:50.783157 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0310 18:48:50.783167 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0310 18:48:50.783173 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0310 18:48:50.784467 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-10T18:48:50Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44dde00a3ae562bbb5504d299475795cc38b22c2b6decba2ff15067bce7436df\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:39Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a14137bdfec242e37af20a572af2edea25fb1d8a1f9708f8d0193d1a7675b1bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a14137bdfec242e37af20a572af2edea25fb1d8a1f9708f8d0193d1a7675b1bc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:47:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-10T18:47:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:47:37Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 10 18:49:26 crc kubenswrapper[4861]: I0310 18:49:26.988651 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 10 18:49:27 crc kubenswrapper[4861]: I0310 18:49:27.001384 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 10 18:49:27 crc kubenswrapper[4861]: I0310 18:49:27.024127 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-s2l62" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"be820cd7-b3a7-4183-a408-67151247b6ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:22Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:22Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:22Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwtwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwtwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwtwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwtwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwtwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwtwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwtwj\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwtwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwtwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:49:22Z\\\"}}\" for 
pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-s2l62\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 10 18:49:27 crc kubenswrapper[4861]: I0310 18:49:27.036432 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 10 18:49:27 crc kubenswrapper[4861]: I0310 18:49:27.046506 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 10 18:49:27 crc kubenswrapper[4861]: I0310 18:49:27.051060 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:49:27 crc 
kubenswrapper[4861]: I0310 18:49:27.051103 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:49:27 crc kubenswrapper[4861]: I0310 18:49:27.051115 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:49:27 crc kubenswrapper[4861]: I0310 18:49:27.051134 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 10 18:49:27 crc kubenswrapper[4861]: I0310 18:49:27.051147 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:49:27Z","lastTransitionTime":"2026-03-10T18:49:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 10 18:49:27 crc kubenswrapper[4861]: I0310 18:49:27.061447 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-6lblg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d1c251f4-6539-4aa1-8979-47e74495aca3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2gvk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:49:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-6lblg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 10 18:49:27 crc kubenswrapper[4861]: I0310 18:49:27.079865 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"952bf490-6587-4240-a832-feac082e7775\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf6fe1422055e59455ca71c1a22213f55312c80667526cf13dbf61f7ccea7c75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd7523ddf0ac38e3b59b767d7ef95f452f24c5914734d6f1ac0187f1bede5fcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b1d8f9a97293ae86fe0b8c9ab76600caf04291ec3ddd2e47a071805bef675da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://efd5039046658f4b595c747b6434cb0ef3befbf75874a6dab629cdbd6634524e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6234508294e7f87d3a8da0d3a1d8ecb96fffc3dbd5974e40a3bb7ceeee0d2a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34f62205737b2bb279cc7a0e2ffee213392c8135cca742b76e6f7ed44ddca754\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34f62205737b2bb279cc7a0e2ffee213392c8135cca742b76e6f7ed44ddca754\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-03-10T18:47:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-10T18:47:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://49cc08d4ed6e0fd87f4c957414e510f99511b9654a72324b707c165206c4979e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49cc08d4ed6e0fd87f4c957414e510f99511b9654a72324b707c165206c4979e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:47:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-10T18:47:39Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0995183765c106ba8369d3c05ac572596329eb1978876b770e327a430963ba9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0995183765c106ba8369d3c05ac572596329eb1978876b770e327a430963ba9b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:47:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-10T18:47:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:47:37Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 10 18:49:27 crc kubenswrapper[4861]: I0310 18:49:27.090540 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 10 18:49:27 crc kubenswrapper[4861]: I0310 18:49:27.102655 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qttbr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"771189c2-452d-4204-a0b7-abfe9ba62bd0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:21Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:21Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tng72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tng72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:49:21Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qttbr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 10 18:49:27 crc kubenswrapper[4861]: I0310 18:49:27.120976 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-j2s27" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"391f4bfa-b94c-4b25-8f06-a2f19f912194\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:21Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlddg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlddg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlddg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\
\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlddg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlddg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\"
:\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlddg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlddg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:49:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-j2s27\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 10 18:49:27 crc kubenswrapper[4861]: I0310 18:49:27.134252 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 10 18:49:27 crc kubenswrapper[4861]: I0310 18:49:27.143190 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-b87lw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e474cdc9-b374-49a6-aece-afa19f8d5ee6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:21Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5t7pf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:49:21Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-b87lw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 10 18:49:27 crc kubenswrapper[4861]: I0310 18:49:27.153943 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:49:27 crc kubenswrapper[4861]: I0310 18:49:27.154145 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Mar 10 18:49:27 crc kubenswrapper[4861]: I0310 18:49:27.154289 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:49:27 crc kubenswrapper[4861]: I0310 18:49:27.154432 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 10 18:49:27 crc kubenswrapper[4861]: I0310 18:49:27.154563 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:49:27Z","lastTransitionTime":"2026-03-10T18:49:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 10 18:49:27 crc kubenswrapper[4861]: I0310 18:49:27.257022 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:49:27 crc kubenswrapper[4861]: I0310 18:49:27.257068 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:49:27 crc kubenswrapper[4861]: I0310 18:49:27.257080 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:49:27 crc kubenswrapper[4861]: I0310 18:49:27.257098 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 10 18:49:27 crc kubenswrapper[4861]: I0310 18:49:27.257112 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:49:27Z","lastTransitionTime":"2026-03-10T18:49:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 10 18:49:27 crc kubenswrapper[4861]: I0310 18:49:27.359548 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:49:27 crc kubenswrapper[4861]: I0310 18:49:27.359612 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:49:27 crc kubenswrapper[4861]: I0310 18:49:27.359632 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:49:27 crc kubenswrapper[4861]: I0310 18:49:27.359655 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 10 18:49:27 crc kubenswrapper[4861]: I0310 18:49:27.359673 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:49:27Z","lastTransitionTime":"2026-03-10T18:49:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 10 18:49:27 crc kubenswrapper[4861]: I0310 18:49:27.463332 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:49:27 crc kubenswrapper[4861]: I0310 18:49:27.463396 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:49:27 crc kubenswrapper[4861]: I0310 18:49:27.463416 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:49:27 crc kubenswrapper[4861]: I0310 18:49:27.463441 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 10 18:49:27 crc kubenswrapper[4861]: I0310 18:49:27.463461 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:49:27Z","lastTransitionTime":"2026-03-10T18:49:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 10 18:49:27 crc kubenswrapper[4861]: I0310 18:49:27.566757 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:49:27 crc kubenswrapper[4861]: I0310 18:49:27.566826 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:49:27 crc kubenswrapper[4861]: I0310 18:49:27.566850 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:49:27 crc kubenswrapper[4861]: I0310 18:49:27.566879 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 10 18:49:27 crc kubenswrapper[4861]: I0310 18:49:27.566902 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:49:27Z","lastTransitionTime":"2026-03-10T18:49:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 10 18:49:27 crc kubenswrapper[4861]: I0310 18:49:27.670168 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:49:27 crc kubenswrapper[4861]: I0310 18:49:27.670247 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:49:27 crc kubenswrapper[4861]: I0310 18:49:27.670272 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:49:27 crc kubenswrapper[4861]: I0310 18:49:27.670302 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 10 18:49:27 crc kubenswrapper[4861]: I0310 18:49:27.670325 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:49:27Z","lastTransitionTime":"2026-03-10T18:49:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 10 18:49:27 crc kubenswrapper[4861]: I0310 18:49:27.773329 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:49:27 crc kubenswrapper[4861]: I0310 18:49:27.773396 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:49:27 crc kubenswrapper[4861]: I0310 18:49:27.773421 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:49:27 crc kubenswrapper[4861]: I0310 18:49:27.773449 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 10 18:49:27 crc kubenswrapper[4861]: I0310 18:49:27.773470 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:49:27Z","lastTransitionTime":"2026-03-10T18:49:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 10 18:49:27 crc kubenswrapper[4861]: I0310 18:49:27.837807 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-pzmsp"] Mar 10 18:49:27 crc kubenswrapper[4861]: I0310 18:49:27.838447 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-pzmsp" Mar 10 18:49:27 crc kubenswrapper[4861]: I0310 18:49:27.842198 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Mar 10 18:49:27 crc kubenswrapper[4861]: I0310 18:49:27.842845 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Mar 10 18:49:27 crc kubenswrapper[4861]: I0310 18:49:27.843130 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Mar 10 18:49:27 crc kubenswrapper[4861]: I0310 18:49:27.843389 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Mar 10 18:49:27 crc kubenswrapper[4861]: I0310 18:49:27.855575 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 10 18:49:27 crc kubenswrapper[4861]: E0310 18:49:27.855754 4861 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-10 18:49:59.855686613 +0000 UTC m=+143.619122613 (durationBeforeRetry 32s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 10 18:49:27 crc kubenswrapper[4861]: I0310 18:49:27.855834 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 10 18:49:27 crc kubenswrapper[4861]: I0310 18:49:27.856117 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 10 18:49:27 crc kubenswrapper[4861]: E0310 18:49:27.856258 4861 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Mar 10 18:49:27 crc kubenswrapper[4861]: E0310 18:49:27.856333 4861 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-03-10 18:49:59.856311624 +0000 UTC m=+143.619747614 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Mar 10 18:49:27 crc kubenswrapper[4861]: E0310 18:49:27.857551 4861 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Mar 10 18:49:27 crc kubenswrapper[4861]: E0310 18:49:27.857738 4861 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-03-10 18:49:59.857629787 +0000 UTC m=+143.621065787 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Mar 10 18:49:27 crc kubenswrapper[4861]: I0310 18:49:27.859616 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 10 18:49:27 crc kubenswrapper[4861]: I0310 18:49:27.871543 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-b87lw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e474cdc9-b374-49a6-aece-afa19f8d5ee6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:21Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5t7pf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:49:21Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-b87lw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 10 18:49:27 crc kubenswrapper[4861]: I0310 18:49:27.876338 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:49:27 crc kubenswrapper[4861]: I0310 18:49:27.876416 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Mar 10 18:49:27 crc kubenswrapper[4861]: I0310 18:49:27.876434 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:49:27 crc kubenswrapper[4861]: I0310 18:49:27.876460 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 10 18:49:27 crc kubenswrapper[4861]: I0310 18:49:27.876478 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:49:27Z","lastTransitionTime":"2026-03-10T18:49:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 10 18:49:27 crc kubenswrapper[4861]: I0310 18:49:27.890271 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c65aa66-5db2-421b-ad46-0ff1e2b1cb22\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://52e87225b434b0800764a5c2306d8079c44bff105d02de78ab085b434f56031f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a8a6f58ea1d180f50a7ffde2b16f470901281a847bd85cc0bc8a62bbf9f8e70\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://484df0ad2e71b2faec0ed53537512b115e6d4ab7cd3212cd29309538bd013c51\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1665ca49c2c451e187b70bfc13ce0034d2c07b92943b18e77ae09cd6e5505557\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1665ca49c2c451e187b70bfc13ce0034d2c07b92943b18e77ae09cd6e5505557\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-10T18:48:50Z\\\",\\\"message\\\":\\\"le observer\\\\nW0310 18:48:50.587139 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0310 18:48:50.587315 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0310 18:48:50.588407 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-4176947974/tls.crt::/tmp/serving-cert-4176947974/tls.key\\\\\\\"\\\\nI0310 18:48:50.773986 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0310 18:48:50.776438 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0310 18:48:50.776455 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0310 18:48:50.776477 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0310 18:48:50.776482 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0310 18:48:50.783076 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0310 18:48:50.783088 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0310 18:48:50.783118 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0310 18:48:50.783134 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0310 18:48:50.783146 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0310 18:48:50.783157 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0310 18:48:50.783167 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0310 18:48:50.783173 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0310 18:48:50.784467 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-10T18:48:50Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44dde00a3ae562bbb5504d299475795cc38b22c2b6decba2ff15067bce7436df\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:39Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a14137bdfec242e37af20a572af2edea25fb1d8a1f9708f8d0193d1a7675b1bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a14137bdfec242e37af20a572af2edea25fb1d8a1f9708f8d0193d1a7675b1bc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:47:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-10T18:47:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:47:37Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 10 18:49:27 crc kubenswrapper[4861]: I0310 18:49:27.906261 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 10 18:49:27 crc kubenswrapper[4861]: I0310 18:49:27.921078 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 10 18:49:27 crc kubenswrapper[4861]: I0310 18:49:27.946393 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-s2l62" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"be820cd7-b3a7-4183-a408-67151247b6ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:22Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:22Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:22Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwtwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwtwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwtwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwtwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwtwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwtwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwtwj\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwtwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwtwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:49:22Z\\\"}}\" for 
pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-s2l62\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 10 18:49:27 crc kubenswrapper[4861]: I0310 18:49:27.957277 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 10 18:49:27 crc kubenswrapper[4861]: E0310 18:49:27.957429 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 10 18:49:27 crc kubenswrapper[4861]: I0310 18:49:27.957474 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 10 18:49:27 crc kubenswrapper[4861]: I0310 18:49:27.957520 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 10 18:49:27 crc kubenswrapper[4861]: I0310 18:49:27.957623 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 10 18:49:27 crc kubenswrapper[4861]: E0310 18:49:27.957745 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 10 18:49:27 crc kubenswrapper[4861]: E0310 18:49:27.957629 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 10 18:49:27 crc kubenswrapper[4861]: I0310 18:49:27.957769 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/66631e59-ce5c-44de-8a4d-37eb82acf997-serviceca\") pod \"node-ca-pzmsp\" (UID: \"66631e59-ce5c-44de-8a4d-37eb82acf997\") " pod="openshift-image-registry/node-ca-pzmsp" Mar 10 18:49:27 crc kubenswrapper[4861]: I0310 18:49:27.957873 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 10 18:49:27 crc kubenswrapper[4861]: E0310 18:49:27.957913 4861 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 10 18:49:27 crc kubenswrapper[4861]: E0310 18:49:27.957955 4861 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 10 18:49:27 crc kubenswrapper[4861]: E0310 18:49:27.957976 4861 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 10 18:49:27 crc kubenswrapper[4861]: E0310 18:49:27.958033 4861 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object 
"openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 10 18:49:27 crc kubenswrapper[4861]: I0310 18:49:27.957916 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/66631e59-ce5c-44de-8a4d-37eb82acf997-host\") pod \"node-ca-pzmsp\" (UID: \"66631e59-ce5c-44de-8a4d-37eb82acf997\") " pod="openshift-image-registry/node-ca-pzmsp" Mar 10 18:49:27 crc kubenswrapper[4861]: E0310 18:49:27.958061 4861 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 10 18:49:27 crc kubenswrapper[4861]: E0310 18:49:27.958081 4861 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 10 18:49:27 crc kubenswrapper[4861]: E0310 18:49:27.958043 4861 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-03-10 18:49:59.95801891 +0000 UTC m=+143.721454900 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 10 18:49:27 crc kubenswrapper[4861]: I0310 18:49:27.958158 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pm4t5\" (UniqueName: \"kubernetes.io/projected/66631e59-ce5c-44de-8a4d-37eb82acf997-kube-api-access-pm4t5\") pod \"node-ca-pzmsp\" (UID: \"66631e59-ce5c-44de-8a4d-37eb82acf997\") " pod="openshift-image-registry/node-ca-pzmsp" Mar 10 18:49:27 crc kubenswrapper[4861]: E0310 18:49:27.958218 4861 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-03-10 18:49:59.958199373 +0000 UTC m=+143.721635363 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 10 18:49:27 crc kubenswrapper[4861]: I0310 18:49:27.961756 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 10 18:49:27 crc kubenswrapper[4861]: I0310 18:49:27.976773 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 10 18:49:27 crc kubenswrapper[4861]: I0310 18:49:27.979291 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:49:27 crc 
kubenswrapper[4861]: I0310 18:49:27.979338 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:49:27 crc kubenswrapper[4861]: I0310 18:49:27.979360 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:49:27 crc kubenswrapper[4861]: I0310 18:49:27.979387 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 10 18:49:27 crc kubenswrapper[4861]: I0310 18:49:27.979409 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:49:27Z","lastTransitionTime":"2026-03-10T18:49:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 10 18:49:27 crc kubenswrapper[4861]: I0310 18:49:27.993544 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-6lblg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d1c251f4-6539-4aa1-8979-47e74495aca3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2gvk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:49:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-6lblg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 10 18:49:28 crc kubenswrapper[4861]: I0310 18:49:28.004805 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pzmsp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"66631e59-ce5c-44de-8a4d-37eb82acf997\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:27Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pm4t5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:49:27Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pzmsp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 10 18:49:28 crc kubenswrapper[4861]: I0310 18:49:28.032576 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"952bf490-6587-4240-a832-feac082e7775\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf6fe1422055e59455ca71c1a22213f55312c80667526cf13dbf61f7ccea7c75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd7523ddf0ac38e3b59b767d7ef95f452f24c5914734d6f1ac0187f1bede5fcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b1d8f9a97293ae86fe0b8c9ab76600caf04291ec3ddd2e47a071805bef675da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://efd5039046658f4b595c747b6434cb0ef3befbf75874a6dab629cdbd6634524e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6234508294e7f87d3a8da0d3a1d8ecb96fffc3dbd5974e40a3bb7ceeee0d2a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34f62205737b2bb279cc7a0e2ffee213392c8135cca742b76e6f7ed44ddca754\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34f62205737b2bb279cc7a0e2ffee213392c8135cca742b76e6f7ed44ddca754\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-03-10T18:47:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-10T18:47:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://49cc08d4ed6e0fd87f4c957414e510f99511b9654a72324b707c165206c4979e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49cc08d4ed6e0fd87f4c957414e510f99511b9654a72324b707c165206c4979e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:47:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-10T18:47:39Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0995183765c106ba8369d3c05ac572596329eb1978876b770e327a430963ba9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0995183765c106ba8369d3c05ac572596329eb1978876b770e327a430963ba9b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:47:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-10T18:47:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:47:37Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 10 18:49:28 crc kubenswrapper[4861]: I0310 18:49:28.047542 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 10 18:49:28 crc kubenswrapper[4861]: I0310 18:49:28.058825 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/66631e59-ce5c-44de-8a4d-37eb82acf997-host\") pod \"node-ca-pzmsp\" (UID: \"66631e59-ce5c-44de-8a4d-37eb82acf997\") " pod="openshift-image-registry/node-ca-pzmsp" Mar 10 18:49:28 crc kubenswrapper[4861]: I0310 18:49:28.058883 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pm4t5\" (UniqueName: \"kubernetes.io/projected/66631e59-ce5c-44de-8a4d-37eb82acf997-kube-api-access-pm4t5\") pod \"node-ca-pzmsp\" (UID: \"66631e59-ce5c-44de-8a4d-37eb82acf997\") " pod="openshift-image-registry/node-ca-pzmsp" Mar 10 18:49:28 crc kubenswrapper[4861]: I0310 18:49:28.058954 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/66631e59-ce5c-44de-8a4d-37eb82acf997-serviceca\") pod \"node-ca-pzmsp\" (UID: 
\"66631e59-ce5c-44de-8a4d-37eb82acf997\") " pod="openshift-image-registry/node-ca-pzmsp" Mar 10 18:49:28 crc kubenswrapper[4861]: I0310 18:49:28.059012 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/66631e59-ce5c-44de-8a4d-37eb82acf997-host\") pod \"node-ca-pzmsp\" (UID: \"66631e59-ce5c-44de-8a4d-37eb82acf997\") " pod="openshift-image-registry/node-ca-pzmsp" Mar 10 18:49:28 crc kubenswrapper[4861]: I0310 18:49:28.060460 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/66631e59-ce5c-44de-8a4d-37eb82acf997-serviceca\") pod \"node-ca-pzmsp\" (UID: \"66631e59-ce5c-44de-8a4d-37eb82acf997\") " pod="openshift-image-registry/node-ca-pzmsp" Mar 10 18:49:28 crc kubenswrapper[4861]: I0310 18:49:28.060876 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qttbr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"771189c2-452d-4204-a0b7-abfe9ba62bd0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:21Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:21Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tng72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tng72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:49:21Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qttbr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 10 18:49:28 crc kubenswrapper[4861]: I0310 18:49:28.079016 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-j2s27" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"391f4bfa-b94c-4b25-8f06-a2f19f912194\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:21Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlddg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlddg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlddg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\
\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlddg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlddg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\"
:\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlddg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlddg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:49:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-j2s27\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 10 18:49:28 crc kubenswrapper[4861]: I0310 18:49:28.081860 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:49:28 crc kubenswrapper[4861]: I0310 18:49:28.081906 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:49:28 crc kubenswrapper[4861]: I0310 18:49:28.081922 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:49:28 crc kubenswrapper[4861]: I0310 18:49:28.081944 4861 kubelet_node_status.go:724] "Recording event message 
for node" node="crc" event="NodeNotReady" Mar 10 18:49:28 crc kubenswrapper[4861]: I0310 18:49:28.081962 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:49:28Z","lastTransitionTime":"2026-03-10T18:49:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 10 18:49:28 crc kubenswrapper[4861]: I0310 18:49:28.088775 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pm4t5\" (UniqueName: \"kubernetes.io/projected/66631e59-ce5c-44de-8a4d-37eb82acf997-kube-api-access-pm4t5\") pod \"node-ca-pzmsp\" (UID: \"66631e59-ce5c-44de-8a4d-37eb82acf997\") " pod="openshift-image-registry/node-ca-pzmsp" Mar 10 18:49:28 crc kubenswrapper[4861]: I0310 18:49:28.169129 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-pzmsp" Mar 10 18:49:28 crc kubenswrapper[4861]: I0310 18:49:28.185352 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:49:28 crc kubenswrapper[4861]: I0310 18:49:28.185403 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:49:28 crc kubenswrapper[4861]: I0310 18:49:28.185421 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:49:28 crc kubenswrapper[4861]: I0310 18:49:28.185445 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 10 18:49:28 crc kubenswrapper[4861]: I0310 18:49:28.185462 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:49:28Z","lastTransitionTime":"2026-03-10T18:49:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 10 18:49:28 crc kubenswrapper[4861]: W0310 18:49:28.187309 4861 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod66631e59_ce5c_44de_8a4d_37eb82acf997.slice/crio-a444dda8354bc6738d1cf1cbdccdaa700ea7248d2a64aec7ed0091a1d1d2dd34 WatchSource:0}: Error finding container a444dda8354bc6738d1cf1cbdccdaa700ea7248d2a64aec7ed0091a1d1d2dd34: Status 404 returned error can't find the container with id a444dda8354bc6738d1cf1cbdccdaa700ea7248d2a64aec7ed0091a1d1d2dd34 Mar 10 18:49:28 crc kubenswrapper[4861]: E0310 18:49:28.189851 4861 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 10 18:49:28 crc kubenswrapper[4861]: container &Container{Name:node-ca,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f,Command:[/bin/sh -c trap 'jobs -p | xargs -r kill; echo shutting down node-ca; exit 0' TERM Mar 10 18:49:28 crc kubenswrapper[4861]: while [ true ]; Mar 10 18:49:28 crc kubenswrapper[4861]: do Mar 10 18:49:28 crc kubenswrapper[4861]: for f in $(ls /tmp/serviceca); do Mar 10 18:49:28 crc kubenswrapper[4861]: echo $f Mar 10 18:49:28 crc kubenswrapper[4861]: ca_file_path="/tmp/serviceca/${f}" Mar 10 18:49:28 crc kubenswrapper[4861]: f=$(echo $f | sed -r 's/(.*)\.\./\1:/') Mar 10 18:49:28 crc kubenswrapper[4861]: reg_dir_path="/etc/docker/certs.d/${f}" Mar 10 18:49:28 crc kubenswrapper[4861]: if [ -e "${reg_dir_path}" ]; then Mar 10 18:49:28 crc kubenswrapper[4861]: cp -u $ca_file_path $reg_dir_path/ca.crt Mar 10 18:49:28 crc kubenswrapper[4861]: else Mar 10 18:49:28 crc kubenswrapper[4861]: mkdir $reg_dir_path Mar 10 18:49:28 crc kubenswrapper[4861]: cp $ca_file_path $reg_dir_path/ca.crt Mar 10 18:49:28 crc kubenswrapper[4861]: fi Mar 10 18:49:28 crc kubenswrapper[4861]: done Mar 10 18:49:28 crc kubenswrapper[4861]: for d in $(ls /etc/docker/certs.d); do Mar 10 18:49:28 crc 
kubenswrapper[4861]: echo $d Mar 10 18:49:28 crc kubenswrapper[4861]: dp=$(echo $d | sed -r 's/(.*):/\1\.\./') Mar 10 18:49:28 crc kubenswrapper[4861]: reg_conf_path="/tmp/serviceca/${dp}" Mar 10 18:49:28 crc kubenswrapper[4861]: if [ ! -e "${reg_conf_path}" ]; then Mar 10 18:49:28 crc kubenswrapper[4861]: rm -rf /etc/docker/certs.d/$d Mar 10 18:49:28 crc kubenswrapper[4861]: fi Mar 10 18:49:28 crc kubenswrapper[4861]: done Mar 10 18:49:28 crc kubenswrapper[4861]: sleep 60 & wait ${!} Mar 10 18:49:28 crc kubenswrapper[4861]: done Mar 10 18:49:28 crc kubenswrapper[4861]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{10485760 0} {} 10Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:serviceca,ReadOnly:false,MountPath:/tmp/serviceca,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host,ReadOnly:false,MountPath:/etc/docker/certs.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pm4t5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:*1001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod node-ca-pzmsp_openshift-image-registry(66631e59-ce5c-44de-8a4d-37eb82acf997): 
CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Mar 10 18:49:28 crc kubenswrapper[4861]: > logger="UnhandledError" Mar 10 18:49:28 crc kubenswrapper[4861]: E0310 18:49:28.191130 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"node-ca\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-image-registry/node-ca-pzmsp" podUID="66631e59-ce5c-44de-8a4d-37eb82acf997" Mar 10 18:49:28 crc kubenswrapper[4861]: I0310 18:49:28.288625 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:49:28 crc kubenswrapper[4861]: I0310 18:49:28.288676 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:49:28 crc kubenswrapper[4861]: I0310 18:49:28.288692 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:49:28 crc kubenswrapper[4861]: I0310 18:49:28.288749 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 10 18:49:28 crc kubenswrapper[4861]: I0310 18:49:28.288768 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:49:28Z","lastTransitionTime":"2026-03-10T18:49:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 10 18:49:28 crc kubenswrapper[4861]: I0310 18:49:28.392078 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:49:28 crc kubenswrapper[4861]: I0310 18:49:28.392139 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:49:28 crc kubenswrapper[4861]: I0310 18:49:28.392160 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:49:28 crc kubenswrapper[4861]: I0310 18:49:28.392185 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 10 18:49:28 crc kubenswrapper[4861]: I0310 18:49:28.392202 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:49:28Z","lastTransitionTime":"2026-03-10T18:49:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 10 18:49:28 crc kubenswrapper[4861]: I0310 18:49:28.421268 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-pzmsp" event={"ID":"66631e59-ce5c-44de-8a4d-37eb82acf997","Type":"ContainerStarted","Data":"a444dda8354bc6738d1cf1cbdccdaa700ea7248d2a64aec7ed0091a1d1d2dd34"} Mar 10 18:49:28 crc kubenswrapper[4861]: E0310 18:49:28.423105 4861 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 10 18:49:28 crc kubenswrapper[4861]: container &Container{Name:node-ca,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f,Command:[/bin/sh -c trap 'jobs -p | xargs -r kill; echo shutting down node-ca; exit 0' TERM Mar 10 18:49:28 crc kubenswrapper[4861]: while [ true ]; Mar 10 18:49:28 crc kubenswrapper[4861]: do Mar 10 18:49:28 crc kubenswrapper[4861]: for f in $(ls /tmp/serviceca); do Mar 10 18:49:28 crc kubenswrapper[4861]: echo $f Mar 10 18:49:28 crc kubenswrapper[4861]: ca_file_path="/tmp/serviceca/${f}" Mar 10 18:49:28 crc kubenswrapper[4861]: f=$(echo $f | sed -r 's/(.*)\.\./\1:/') Mar 10 18:49:28 crc kubenswrapper[4861]: reg_dir_path="/etc/docker/certs.d/${f}" Mar 10 18:49:28 crc kubenswrapper[4861]: if [ -e "${reg_dir_path}" ]; then Mar 10 18:49:28 crc kubenswrapper[4861]: cp -u $ca_file_path $reg_dir_path/ca.crt Mar 10 18:49:28 crc kubenswrapper[4861]: else Mar 10 18:49:28 crc kubenswrapper[4861]: mkdir $reg_dir_path Mar 10 18:49:28 crc kubenswrapper[4861]: cp $ca_file_path $reg_dir_path/ca.crt Mar 10 18:49:28 crc kubenswrapper[4861]: fi Mar 10 18:49:28 crc kubenswrapper[4861]: done Mar 10 18:49:28 crc kubenswrapper[4861]: for d in $(ls /etc/docker/certs.d); do Mar 10 18:49:28 crc kubenswrapper[4861]: echo $d Mar 10 18:49:28 crc kubenswrapper[4861]: dp=$(echo $d | sed -r 's/(.*):/\1\.\./') Mar 10 18:49:28 crc kubenswrapper[4861]: reg_conf_path="/tmp/serviceca/${dp}" Mar 10 18:49:28 crc kubenswrapper[4861]: if [ ! 
-e "${reg_conf_path}" ]; then Mar 10 18:49:28 crc kubenswrapper[4861]: rm -rf /etc/docker/certs.d/$d Mar 10 18:49:28 crc kubenswrapper[4861]: fi Mar 10 18:49:28 crc kubenswrapper[4861]: done Mar 10 18:49:28 crc kubenswrapper[4861]: sleep 60 & wait ${!} Mar 10 18:49:28 crc kubenswrapper[4861]: done Mar 10 18:49:28 crc kubenswrapper[4861]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{10485760 0} {} 10Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:serviceca,ReadOnly:false,MountPath:/tmp/serviceca,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host,ReadOnly:false,MountPath:/etc/docker/certs.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pm4t5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:*1001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod node-ca-pzmsp_openshift-image-registry(66631e59-ce5c-44de-8a4d-37eb82acf997): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Mar 10 18:49:28 crc kubenswrapper[4861]: > logger="UnhandledError" Mar 10 18:49:28 crc kubenswrapper[4861]: E0310 18:49:28.424341 4861 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"node-ca\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-image-registry/node-ca-pzmsp" podUID="66631e59-ce5c-44de-8a4d-37eb82acf997" Mar 10 18:49:28 crc kubenswrapper[4861]: I0310 18:49:28.448779 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-s2l62" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"be820cd7-b3a7-4183-a408-67151247b6ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:22Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:22Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:22Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwtwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwtwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwtwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwtwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwtwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwtwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwtwj\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwtwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwtwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:49:22Z\\\"}}\" for 
pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-s2l62\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 10 18:49:28 crc kubenswrapper[4861]: I0310 18:49:28.466581 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c65aa66-5db2-421b-ad46-0ff1e2b1cb22\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://52e87225b434b0800764a5c2306d8079c44bff105d02de78ab085b434f56031f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a8a6f58ea1d180f50a7ffde2b16f470901281a847bd85cc0bc8a62bbf9f8e70\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://484df0ad2e71b2faec0ed53537512b115e6d4ab7cd3212cd29309538bd013c51\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1665ca49c2c451e187b70bfc13ce0034d2c07b92943b18e77ae09cd6e5505557\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1665ca49c2c451e187b70bfc13ce0034d2c07b92943b18e77ae09cd6e5505557\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-10T18:48:50Z\\\",\\\"message\\\":\\\"le observer\\\\nW0310 18:48:50.587139 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0310 18:48:50.587315 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0310 18:48:50.588407 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-4176947974/tls.crt::/tmp/serving-cert-4176947974/tls.key\\\\\\\"\\\\nI0310 18:48:50.773986 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0310 18:48:50.776438 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0310 18:48:50.776455 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0310 18:48:50.776477 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0310 18:48:50.776482 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0310 18:48:50.783076 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0310 18:48:50.783088 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0310 18:48:50.783118 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0310 18:48:50.783134 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0310 18:48:50.783146 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0310 18:48:50.783157 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0310 18:48:50.783167 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0310 18:48:50.783173 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0310 18:48:50.784467 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-10T18:48:50Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44dde00a3ae562bbb5504d299475795cc38b22c2b6decba2ff15067bce7436df\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:39Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a14137bdfec242e37af20a572af2edea25fb1d8a1f9708f8d0193d1a7675b1bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a14137bdfec242e37af20a572af2edea25fb1d8a1f9708f8d0193d1a7675b1bc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:47:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-10T18:47:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:47:37Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 10 18:49:28 crc kubenswrapper[4861]: I0310 18:49:28.482991 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 10 18:49:28 crc kubenswrapper[4861]: I0310 18:49:28.495267 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:49:28 crc kubenswrapper[4861]: I0310 18:49:28.495325 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:49:28 crc kubenswrapper[4861]: I0310 18:49:28.495343 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:49:28 crc kubenswrapper[4861]: I0310 18:49:28.495368 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 10 18:49:28 crc kubenswrapper[4861]: I0310 18:49:28.495387 4861 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:49:28Z","lastTransitionTime":"2026-03-10T18:49:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 10 18:49:28 crc kubenswrapper[4861]: I0310 18:49:28.498323 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 10 18:49:28 crc kubenswrapper[4861]: I0310 18:49:28.512070 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 10 18:49:28 crc kubenswrapper[4861]: I0310 18:49:28.524950 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 10 18:49:28 crc kubenswrapper[4861]: I0310 18:49:28.538968 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-6lblg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d1c251f4-6539-4aa1-8979-47e74495aca3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2gvk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:49:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-6lblg\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 10 18:49:28 crc kubenswrapper[4861]: I0310 18:49:28.550092 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pzmsp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"66631e59-ce5c-44de-8a4d-37eb82acf997\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:27Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:27Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pm4t5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:49:27Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pzmsp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 10 18:49:28 crc kubenswrapper[4861]: I0310 18:49:28.572601 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-j2s27" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"391f4bfa-b94c-4b25-8f06-a2f19f912194\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:21Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlddg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlddg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlddg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\
\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlddg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlddg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\"
:\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlddg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlddg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:49:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-j2s27\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 10 18:49:28 crc kubenswrapper[4861]: I0310 18:49:28.598965 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:49:28 crc kubenswrapper[4861]: I0310 18:49:28.599038 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:49:28 crc kubenswrapper[4861]: I0310 18:49:28.599058 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:49:28 crc kubenswrapper[4861]: I0310 18:49:28.599083 4861 kubelet_node_status.go:724] "Recording event message 
for node" node="crc" event="NodeNotReady" Mar 10 18:49:28 crc kubenswrapper[4861]: I0310 18:49:28.599100 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:49:28Z","lastTransitionTime":"2026-03-10T18:49:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 10 18:49:28 crc kubenswrapper[4861]: I0310 18:49:28.600472 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"952bf490-6587-4240-a832-feac082e7775\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf6fe1422055e59455ca71c1a22213f55312c80667526cf13dbf61f7ccea7c75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"
,\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd7523ddf0ac38e3b59b767d7ef95f452f24c5914734d6f1ac0187f1bede5fcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b1d8f9a97293ae86fe0b8c9ab76600caf04291ec3ddd2e47a071805bef675da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"20
26-03-10T18:47:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://efd5039046658f4b595c747b6434cb0ef3befbf75874a6dab629cdbd6634524e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6234508294e7f87d3a8da0d3a1d8ecb96fffc3dbd5974e40a3bb7ceeee0d2a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip
\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34f62205737b2bb279cc7a0e2ffee213392c8135cca742b76e6f7ed44ddca754\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34f62205737b2bb279cc7a0e2ffee213392c8135cca742b76e6f7ed44ddca754\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:47:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-10T18:47:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://49cc08d4ed6e0fd87f4c957414e510f99511b9654a72324b707c165206c4979e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49cc08d4ed6e0fd87f4c957414e510f99511b9654a72324b707c165206c4979e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:47:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-10T18:47:39Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0995183765c106ba8369d3c05ac572596329eb1978876b770e327a430963ba9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0995183765c106ba8369d3c05ac572596329eb1978876b770e327a430963ba9b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:47:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-10T18:47:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:47:37Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 10 18:49:28 crc kubenswrapper[4861]: I0310 18:49:28.614561 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with 
unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 10 18:49:28 crc kubenswrapper[4861]: I0310 18:49:28.626485 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qttbr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"771189c2-452d-4204-a0b7-abfe9ba62bd0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:21Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:21Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tng72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tng72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:49:21Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qttbr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 10 18:49:28 crc kubenswrapper[4861]: I0310 18:49:28.639753 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 10 18:49:28 crc kubenswrapper[4861]: I0310 18:49:28.650924 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-b87lw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e474cdc9-b374-49a6-aece-afa19f8d5ee6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:21Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5t7pf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:49:21Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-b87lw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 10 18:49:28 crc kubenswrapper[4861]: I0310 18:49:28.701505 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:49:28 crc kubenswrapper[4861]: I0310 18:49:28.701570 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Mar 10 18:49:28 crc kubenswrapper[4861]: I0310 18:49:28.701591 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:49:28 crc kubenswrapper[4861]: I0310 18:49:28.701621 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 10 18:49:28 crc kubenswrapper[4861]: I0310 18:49:28.701643 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:49:28Z","lastTransitionTime":"2026-03-10T18:49:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 10 18:49:28 crc kubenswrapper[4861]: I0310 18:49:28.804268 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:49:28 crc kubenswrapper[4861]: I0310 18:49:28.804319 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:49:28 crc kubenswrapper[4861]: I0310 18:49:28.804341 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:49:28 crc kubenswrapper[4861]: I0310 18:49:28.804366 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 10 18:49:28 crc kubenswrapper[4861]: I0310 18:49:28.804385 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:49:28Z","lastTransitionTime":"2026-03-10T18:49:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 10 18:49:28 crc kubenswrapper[4861]: I0310 18:49:28.906821 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:49:28 crc kubenswrapper[4861]: I0310 18:49:28.906887 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:49:28 crc kubenswrapper[4861]: I0310 18:49:28.906906 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:49:28 crc kubenswrapper[4861]: I0310 18:49:28.906932 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 10 18:49:28 crc kubenswrapper[4861]: I0310 18:49:28.906951 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:49:28Z","lastTransitionTime":"2026-03-10T18:49:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 10 18:49:29 crc kubenswrapper[4861]: I0310 18:49:29.010081 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:49:29 crc kubenswrapper[4861]: I0310 18:49:29.010151 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:49:29 crc kubenswrapper[4861]: I0310 18:49:29.010175 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:49:29 crc kubenswrapper[4861]: I0310 18:49:29.010202 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 10 18:49:29 crc kubenswrapper[4861]: I0310 18:49:29.010223 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:49:29Z","lastTransitionTime":"2026-03-10T18:49:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 10 18:49:29 crc kubenswrapper[4861]: I0310 18:49:29.113314 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:49:29 crc kubenswrapper[4861]: I0310 18:49:29.113462 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:49:29 crc kubenswrapper[4861]: I0310 18:49:29.113489 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:49:29 crc kubenswrapper[4861]: I0310 18:49:29.113521 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 10 18:49:29 crc kubenswrapper[4861]: I0310 18:49:29.113545 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:49:29Z","lastTransitionTime":"2026-03-10T18:49:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 10 18:49:29 crc kubenswrapper[4861]: I0310 18:49:29.216990 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:49:29 crc kubenswrapper[4861]: I0310 18:49:29.217052 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:49:29 crc kubenswrapper[4861]: I0310 18:49:29.217071 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:49:29 crc kubenswrapper[4861]: I0310 18:49:29.217135 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 10 18:49:29 crc kubenswrapper[4861]: I0310 18:49:29.217152 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:49:29Z","lastTransitionTime":"2026-03-10T18:49:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 10 18:49:29 crc kubenswrapper[4861]: I0310 18:49:29.320535 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:49:29 crc kubenswrapper[4861]: I0310 18:49:29.320625 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:49:29 crc kubenswrapper[4861]: I0310 18:49:29.320645 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:49:29 crc kubenswrapper[4861]: I0310 18:49:29.320672 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 10 18:49:29 crc kubenswrapper[4861]: I0310 18:49:29.320691 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:49:29Z","lastTransitionTime":"2026-03-10T18:49:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 10 18:49:29 crc kubenswrapper[4861]: I0310 18:49:29.424375 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:49:29 crc kubenswrapper[4861]: I0310 18:49:29.424453 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:49:29 crc kubenswrapper[4861]: I0310 18:49:29.424483 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:49:29 crc kubenswrapper[4861]: I0310 18:49:29.424519 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 10 18:49:29 crc kubenswrapper[4861]: I0310 18:49:29.424544 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:49:29Z","lastTransitionTime":"2026-03-10T18:49:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 10 18:49:29 crc kubenswrapper[4861]: I0310 18:49:29.527676 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:49:29 crc kubenswrapper[4861]: I0310 18:49:29.527765 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:49:29 crc kubenswrapper[4861]: I0310 18:49:29.527783 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:49:29 crc kubenswrapper[4861]: I0310 18:49:29.527808 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 10 18:49:29 crc kubenswrapper[4861]: I0310 18:49:29.527826 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:49:29Z","lastTransitionTime":"2026-03-10T18:49:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 10 18:49:29 crc kubenswrapper[4861]: I0310 18:49:29.631413 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:49:29 crc kubenswrapper[4861]: I0310 18:49:29.631482 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:49:29 crc kubenswrapper[4861]: I0310 18:49:29.631502 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:49:29 crc kubenswrapper[4861]: I0310 18:49:29.631527 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 10 18:49:29 crc kubenswrapper[4861]: I0310 18:49:29.631546 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:49:29Z","lastTransitionTime":"2026-03-10T18:49:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 10 18:49:29 crc kubenswrapper[4861]: I0310 18:49:29.734681 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:49:29 crc kubenswrapper[4861]: I0310 18:49:29.734768 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:49:29 crc kubenswrapper[4861]: I0310 18:49:29.734786 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:49:29 crc kubenswrapper[4861]: I0310 18:49:29.734809 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 10 18:49:29 crc kubenswrapper[4861]: I0310 18:49:29.734830 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:49:29Z","lastTransitionTime":"2026-03-10T18:49:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 10 18:49:29 crc kubenswrapper[4861]: I0310 18:49:29.837410 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:49:29 crc kubenswrapper[4861]: I0310 18:49:29.837478 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:49:29 crc kubenswrapper[4861]: I0310 18:49:29.837496 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:49:29 crc kubenswrapper[4861]: I0310 18:49:29.837522 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 10 18:49:29 crc kubenswrapper[4861]: I0310 18:49:29.837540 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:49:29Z","lastTransitionTime":"2026-03-10T18:49:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 10 18:49:29 crc kubenswrapper[4861]: I0310 18:49:29.940107 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:49:29 crc kubenswrapper[4861]: I0310 18:49:29.940160 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:49:29 crc kubenswrapper[4861]: I0310 18:49:29.940177 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:49:29 crc kubenswrapper[4861]: I0310 18:49:29.940198 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 10 18:49:29 crc kubenswrapper[4861]: I0310 18:49:29.940214 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:49:29Z","lastTransitionTime":"2026-03-10T18:49:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 10 18:49:29 crc kubenswrapper[4861]: I0310 18:49:29.957844 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 10 18:49:29 crc kubenswrapper[4861]: I0310 18:49:29.957903 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 10 18:49:29 crc kubenswrapper[4861]: I0310 18:49:29.957921 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 10 18:49:29 crc kubenswrapper[4861]: E0310 18:49:29.958391 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 10 18:49:29 crc kubenswrapper[4861]: E0310 18:49:29.958552 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 10 18:49:29 crc kubenswrapper[4861]: E0310 18:49:29.958785 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 10 18:49:29 crc kubenswrapper[4861]: E0310 18:49:29.960202 4861 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 10 18:49:29 crc kubenswrapper[4861]: container &Container{Name:webhook,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,Command:[/bin/bash -c set -xe Mar 10 18:49:29 crc kubenswrapper[4861]: if [[ -f "/env/_master" ]]; then Mar 10 18:49:29 crc kubenswrapper[4861]: set -o allexport Mar 10 18:49:29 crc kubenswrapper[4861]: source "/env/_master" Mar 10 18:49:29 crc kubenswrapper[4861]: set +o allexport Mar 10 18:49:29 crc kubenswrapper[4861]: fi Mar 10 18:49:29 crc kubenswrapper[4861]: # OVN-K will try to remove hybrid overlay node annotations even when the hybrid overlay is not enabled. Mar 10 18:49:29 crc kubenswrapper[4861]: # https://github.com/ovn-org/ovn-kubernetes/blob/ac6820df0b338a246f10f412cd5ec903bd234694/go-controller/pkg/ovn/master.go#L791 Mar 10 18:49:29 crc kubenswrapper[4861]: ho_enable="--enable-hybrid-overlay" Mar 10 18:49:29 crc kubenswrapper[4861]: echo "I$(date "+%m%d %H:%M:%S.%N") - network-node-identity - start webhook" Mar 10 18:49:29 crc kubenswrapper[4861]: # extra-allowed-user: service account `ovn-kubernetes-control-plane` Mar 10 18:49:29 crc kubenswrapper[4861]: # sets pod annotations in multi-homing layer3 network controller (cluster-manager) Mar 10 18:49:29 crc kubenswrapper[4861]: exec /usr/bin/ovnkube-identity --k8s-apiserver=https://api-int.crc.testing:6443 \ Mar 10 18:49:29 crc kubenswrapper[4861]: --webhook-cert-dir="/etc/webhook-cert" \ Mar 10 18:49:29 crc kubenswrapper[4861]: --webhook-host=127.0.0.1 \ Mar 10 18:49:29 crc kubenswrapper[4861]: --webhook-port=9743 \ Mar 10 18:49:29 crc kubenswrapper[4861]: ${ho_enable} \ Mar 10 18:49:29 crc kubenswrapper[4861]: --enable-interconnect \ Mar 10 18:49:29 crc kubenswrapper[4861]: 
--disable-approver \ Mar 10 18:49:29 crc kubenswrapper[4861]: --extra-allowed-user="system:serviceaccount:openshift-ovn-kubernetes:ovn-kubernetes-control-plane" \ Mar 10 18:49:29 crc kubenswrapper[4861]: --wait-for-kubernetes-api=200s \ Mar 10 18:49:29 crc kubenswrapper[4861]: --pod-admission-conditions="/var/run/ovnkube-identity-config/additional-pod-admission-cond.json" \ Mar 10 18:49:29 crc kubenswrapper[4861]: --loglevel="${LOGLEVEL}" Mar 10 18:49:29 crc kubenswrapper[4861]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOGLEVEL,Value:2,ValueFrom:nil,},EnvVar{Name:KUBERNETES_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:webhook-cert,ReadOnly:false,MountPath:/etc/webhook-cert/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovnkube-identity-cm,ReadOnly:false,MountPath:/var/run/ovnkube-identity-config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s2kz5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000470000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:n
il,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-node-identity-vrzqb_openshift-network-node-identity(ef543e1b-8068-4ea3-b32a-61027b32e95d): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Mar 10 18:49:29 crc kubenswrapper[4861]: > logger="UnhandledError" Mar 10 18:49:29 crc kubenswrapper[4861]: E0310 18:49:29.962694 4861 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 10 18:49:29 crc kubenswrapper[4861]: container &Container{Name:approver,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,Command:[/bin/bash -c set -xe Mar 10 18:49:29 crc kubenswrapper[4861]: if [[ -f "/env/_master" ]]; then Mar 10 18:49:29 crc kubenswrapper[4861]: set -o allexport Mar 10 18:49:29 crc kubenswrapper[4861]: source "/env/_master" Mar 10 18:49:29 crc kubenswrapper[4861]: set +o allexport Mar 10 18:49:29 crc kubenswrapper[4861]: fi Mar 10 18:49:29 crc kubenswrapper[4861]: Mar 10 18:49:29 crc kubenswrapper[4861]: echo "I$(date "+%m%d %H:%M:%S.%N") - network-node-identity - start approver" Mar 10 18:49:29 crc kubenswrapper[4861]: exec /usr/bin/ovnkube-identity --k8s-apiserver=https://api-int.crc.testing:6443 \ Mar 10 18:49:29 crc kubenswrapper[4861]: --disable-webhook \ Mar 10 18:49:29 crc kubenswrapper[4861]: --csr-acceptance-conditions="/var/run/ovnkube-identity-config/additional-cert-acceptance-cond.json" \ Mar 10 18:49:29 crc kubenswrapper[4861]: --loglevel="${LOGLEVEL}" Mar 10 18:49:29 crc kubenswrapper[4861]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOGLEVEL,Value:4,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: 
{{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovnkube-identity-cm,ReadOnly:false,MountPath:/var/run/ovnkube-identity-config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s2kz5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000470000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-node-identity-vrzqb_openshift-network-node-identity(ef543e1b-8068-4ea3-b32a-61027b32e95d): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Mar 10 18:49:29 crc kubenswrapper[4861]: > logger="UnhandledError" Mar 10 18:49:29 crc kubenswrapper[4861]: E0310 18:49:29.963923 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"webhook\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"approver\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-network-node-identity/network-node-identity-vrzqb" 
podUID="ef543e1b-8068-4ea3-b32a-61027b32e95d" Mar 10 18:49:30 crc kubenswrapper[4861]: I0310 18:49:30.044162 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:49:30 crc kubenswrapper[4861]: I0310 18:49:30.044254 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:49:30 crc kubenswrapper[4861]: I0310 18:49:30.044273 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:49:30 crc kubenswrapper[4861]: I0310 18:49:30.044337 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 10 18:49:30 crc kubenswrapper[4861]: I0310 18:49:30.044358 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:49:30Z","lastTransitionTime":"2026-03-10T18:49:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 10 18:49:30 crc kubenswrapper[4861]: I0310 18:49:30.147774 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:49:30 crc kubenswrapper[4861]: I0310 18:49:30.147841 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:49:30 crc kubenswrapper[4861]: I0310 18:49:30.147863 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:49:30 crc kubenswrapper[4861]: I0310 18:49:30.147897 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 10 18:49:30 crc kubenswrapper[4861]: I0310 18:49:30.147920 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:49:30Z","lastTransitionTime":"2026-03-10T18:49:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 10 18:49:30 crc kubenswrapper[4861]: I0310 18:49:30.250633 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:49:30 crc kubenswrapper[4861]: I0310 18:49:30.250765 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:49:30 crc kubenswrapper[4861]: I0310 18:49:30.250793 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:49:30 crc kubenswrapper[4861]: I0310 18:49:30.250818 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 10 18:49:30 crc kubenswrapper[4861]: I0310 18:49:30.250836 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:49:30Z","lastTransitionTime":"2026-03-10T18:49:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 10 18:49:30 crc kubenswrapper[4861]: I0310 18:49:30.353637 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:49:30 crc kubenswrapper[4861]: I0310 18:49:30.353735 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:49:30 crc kubenswrapper[4861]: I0310 18:49:30.353758 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:49:30 crc kubenswrapper[4861]: I0310 18:49:30.353782 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 10 18:49:30 crc kubenswrapper[4861]: I0310 18:49:30.353800 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:49:30Z","lastTransitionTime":"2026-03-10T18:49:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 10 18:49:30 crc kubenswrapper[4861]: I0310 18:49:30.456837 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:49:30 crc kubenswrapper[4861]: I0310 18:49:30.456948 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:49:30 crc kubenswrapper[4861]: I0310 18:49:30.456974 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:49:30 crc kubenswrapper[4861]: I0310 18:49:30.457014 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 10 18:49:30 crc kubenswrapper[4861]: I0310 18:49:30.457044 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:49:30Z","lastTransitionTime":"2026-03-10T18:49:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 10 18:49:30 crc kubenswrapper[4861]: I0310 18:49:30.560433 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:49:30 crc kubenswrapper[4861]: I0310 18:49:30.560539 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:49:30 crc kubenswrapper[4861]: I0310 18:49:30.560563 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:49:30 crc kubenswrapper[4861]: I0310 18:49:30.560595 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 10 18:49:30 crc kubenswrapper[4861]: I0310 18:49:30.560619 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:49:30Z","lastTransitionTime":"2026-03-10T18:49:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 10 18:49:30 crc kubenswrapper[4861]: I0310 18:49:30.663964 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:49:30 crc kubenswrapper[4861]: I0310 18:49:30.664039 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:49:30 crc kubenswrapper[4861]: I0310 18:49:30.664058 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:49:30 crc kubenswrapper[4861]: I0310 18:49:30.664087 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 10 18:49:30 crc kubenswrapper[4861]: I0310 18:49:30.664110 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:49:30Z","lastTransitionTime":"2026-03-10T18:49:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 10 18:49:30 crc kubenswrapper[4861]: I0310 18:49:30.767441 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:49:30 crc kubenswrapper[4861]: I0310 18:49:30.767504 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:49:30 crc kubenswrapper[4861]: I0310 18:49:30.767529 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:49:30 crc kubenswrapper[4861]: I0310 18:49:30.767554 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 10 18:49:30 crc kubenswrapper[4861]: I0310 18:49:30.767571 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:49:30Z","lastTransitionTime":"2026-03-10T18:49:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 10 18:49:30 crc kubenswrapper[4861]: I0310 18:49:30.870636 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:49:30 crc kubenswrapper[4861]: I0310 18:49:30.871071 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:49:30 crc kubenswrapper[4861]: I0310 18:49:30.871244 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:49:30 crc kubenswrapper[4861]: I0310 18:49:30.871416 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 10 18:49:30 crc kubenswrapper[4861]: I0310 18:49:30.871560 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:49:30Z","lastTransitionTime":"2026-03-10T18:49:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 10 18:49:30 crc kubenswrapper[4861]: I0310 18:49:30.975036 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:49:30 crc kubenswrapper[4861]: I0310 18:49:30.975109 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:49:30 crc kubenswrapper[4861]: I0310 18:49:30.975129 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:49:30 crc kubenswrapper[4861]: I0310 18:49:30.975154 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 10 18:49:30 crc kubenswrapper[4861]: I0310 18:49:30.975173 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:49:30Z","lastTransitionTime":"2026-03-10T18:49:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 10 18:49:31 crc kubenswrapper[4861]: I0310 18:49:31.077805 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:49:31 crc kubenswrapper[4861]: I0310 18:49:31.077875 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:49:31 crc kubenswrapper[4861]: I0310 18:49:31.077892 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:49:31 crc kubenswrapper[4861]: I0310 18:49:31.077917 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 10 18:49:31 crc kubenswrapper[4861]: I0310 18:49:31.077936 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:49:31Z","lastTransitionTime":"2026-03-10T18:49:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 10 18:49:31 crc kubenswrapper[4861]: I0310 18:49:31.181049 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:49:31 crc kubenswrapper[4861]: I0310 18:49:31.181123 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:49:31 crc kubenswrapper[4861]: I0310 18:49:31.181140 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:49:31 crc kubenswrapper[4861]: I0310 18:49:31.181165 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 10 18:49:31 crc kubenswrapper[4861]: I0310 18:49:31.181183 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:49:31Z","lastTransitionTime":"2026-03-10T18:49:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 10 18:49:31 crc kubenswrapper[4861]: I0310 18:49:31.284468 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:49:31 crc kubenswrapper[4861]: I0310 18:49:31.284519 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:49:31 crc kubenswrapper[4861]: I0310 18:49:31.284539 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:49:31 crc kubenswrapper[4861]: I0310 18:49:31.284561 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 10 18:49:31 crc kubenswrapper[4861]: I0310 18:49:31.284578 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:49:31Z","lastTransitionTime":"2026-03-10T18:49:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 10 18:49:31 crc kubenswrapper[4861]: I0310 18:49:31.388008 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:49:31 crc kubenswrapper[4861]: I0310 18:49:31.388065 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:49:31 crc kubenswrapper[4861]: I0310 18:49:31.388088 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:49:31 crc kubenswrapper[4861]: I0310 18:49:31.388119 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 10 18:49:31 crc kubenswrapper[4861]: I0310 18:49:31.388142 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:49:31Z","lastTransitionTime":"2026-03-10T18:49:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 10 18:49:31 crc kubenswrapper[4861]: I0310 18:49:31.490373 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:49:31 crc kubenswrapper[4861]: I0310 18:49:31.490441 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:49:31 crc kubenswrapper[4861]: I0310 18:49:31.490460 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:49:31 crc kubenswrapper[4861]: I0310 18:49:31.490485 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 10 18:49:31 crc kubenswrapper[4861]: I0310 18:49:31.490503 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:49:31Z","lastTransitionTime":"2026-03-10T18:49:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 10 18:49:31 crc kubenswrapper[4861]: I0310 18:49:31.593365 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:49:31 crc kubenswrapper[4861]: I0310 18:49:31.593442 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:49:31 crc kubenswrapper[4861]: I0310 18:49:31.593460 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:49:31 crc kubenswrapper[4861]: I0310 18:49:31.593491 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 10 18:49:31 crc kubenswrapper[4861]: I0310 18:49:31.593512 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:49:31Z","lastTransitionTime":"2026-03-10T18:49:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 10 18:49:31 crc kubenswrapper[4861]: I0310 18:49:31.696575 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:49:31 crc kubenswrapper[4861]: I0310 18:49:31.696647 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:49:31 crc kubenswrapper[4861]: I0310 18:49:31.696665 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:49:31 crc kubenswrapper[4861]: I0310 18:49:31.696694 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 10 18:49:31 crc kubenswrapper[4861]: I0310 18:49:31.696758 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:49:31Z","lastTransitionTime":"2026-03-10T18:49:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 10 18:49:31 crc kubenswrapper[4861]: I0310 18:49:31.799124 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:49:31 crc kubenswrapper[4861]: I0310 18:49:31.799202 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:49:31 crc kubenswrapper[4861]: I0310 18:49:31.799226 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:49:31 crc kubenswrapper[4861]: I0310 18:49:31.799259 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 10 18:49:31 crc kubenswrapper[4861]: I0310 18:49:31.799282 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:49:31Z","lastTransitionTime":"2026-03-10T18:49:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 10 18:49:31 crc kubenswrapper[4861]: I0310 18:49:31.902764 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:49:31 crc kubenswrapper[4861]: I0310 18:49:31.902853 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:49:31 crc kubenswrapper[4861]: I0310 18:49:31.902877 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:49:31 crc kubenswrapper[4861]: I0310 18:49:31.902916 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 10 18:49:31 crc kubenswrapper[4861]: I0310 18:49:31.902941 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:49:31Z","lastTransitionTime":"2026-03-10T18:49:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 10 18:49:31 crc kubenswrapper[4861]: I0310 18:49:31.957370 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 10 18:49:31 crc kubenswrapper[4861]: I0310 18:49:31.957451 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 10 18:49:31 crc kubenswrapper[4861]: I0310 18:49:31.957510 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 10 18:49:31 crc kubenswrapper[4861]: E0310 18:49:31.958049 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 10 18:49:31 crc kubenswrapper[4861]: E0310 18:49:31.958166 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 10 18:49:31 crc kubenswrapper[4861]: E0310 18:49:31.958304 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 10 18:49:31 crc kubenswrapper[4861]: E0310 18:49:31.959803 4861 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:iptables-alerter,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2,Command:[/iptables-alerter/iptables-alerter.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONTAINER_RUNTIME_ENDPOINT,Value:unix:///run/crio/crio.sock,ValueFrom:nil,},EnvVar{Name:ALERTER_POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{68157440 0} {} 65Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:iptables-alerter-script,ReadOnly:false,MountPath:/iptables-alerter,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-slash,ReadOnly:true,MountPath:/host,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rczfb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,Volume
Devices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod iptables-alerter-4ln5h_openshift-network-operator(d75a4c96-2883-4a0b-bab2-0fab2b6c0b49): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Mar 10 18:49:31 crc kubenswrapper[4861]: E0310 18:49:31.961049 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"iptables-alerter\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-network-operator/iptables-alerter-4ln5h" podUID="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" Mar 10 18:49:32 crc kubenswrapper[4861]: I0310 18:49:32.005931 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:49:32 crc kubenswrapper[4861]: I0310 18:49:32.005988 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:49:32 crc kubenswrapper[4861]: I0310 18:49:32.006006 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:49:32 crc kubenswrapper[4861]: I0310 18:49:32.006034 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 10 18:49:32 crc kubenswrapper[4861]: I0310 18:49:32.006053 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:49:32Z","lastTransitionTime":"2026-03-10T18:49:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 10 18:49:32 crc kubenswrapper[4861]: I0310 18:49:32.109234 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:49:32 crc kubenswrapper[4861]: I0310 18:49:32.109325 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:49:32 crc kubenswrapper[4861]: I0310 18:49:32.109375 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:49:32 crc kubenswrapper[4861]: I0310 18:49:32.109400 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 10 18:49:32 crc kubenswrapper[4861]: I0310 18:49:32.109419 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:49:32Z","lastTransitionTime":"2026-03-10T18:49:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 10 18:49:32 crc kubenswrapper[4861]: I0310 18:49:32.212528 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:49:32 crc kubenswrapper[4861]: I0310 18:49:32.212623 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:49:32 crc kubenswrapper[4861]: I0310 18:49:32.212642 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:49:32 crc kubenswrapper[4861]: I0310 18:49:32.212667 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 10 18:49:32 crc kubenswrapper[4861]: I0310 18:49:32.212752 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:49:32Z","lastTransitionTime":"2026-03-10T18:49:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 10 18:49:32 crc kubenswrapper[4861]: I0310 18:49:32.315840 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:49:32 crc kubenswrapper[4861]: I0310 18:49:32.315907 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:49:32 crc kubenswrapper[4861]: I0310 18:49:32.315924 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:49:32 crc kubenswrapper[4861]: I0310 18:49:32.316324 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 10 18:49:32 crc kubenswrapper[4861]: I0310 18:49:32.316375 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:49:32Z","lastTransitionTime":"2026-03-10T18:49:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 10 18:49:32 crc kubenswrapper[4861]: I0310 18:49:32.419664 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:49:32 crc kubenswrapper[4861]: I0310 18:49:32.419701 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:49:32 crc kubenswrapper[4861]: I0310 18:49:32.419759 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:49:32 crc kubenswrapper[4861]: I0310 18:49:32.419779 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 10 18:49:32 crc kubenswrapper[4861]: I0310 18:49:32.419794 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:49:32Z","lastTransitionTime":"2026-03-10T18:49:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 10 18:49:32 crc kubenswrapper[4861]: I0310 18:49:32.522641 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:49:32 crc kubenswrapper[4861]: I0310 18:49:32.522696 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:49:32 crc kubenswrapper[4861]: I0310 18:49:32.522744 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:49:32 crc kubenswrapper[4861]: I0310 18:49:32.522771 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 10 18:49:32 crc kubenswrapper[4861]: I0310 18:49:32.522791 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:49:32Z","lastTransitionTime":"2026-03-10T18:49:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 10 18:49:32 crc kubenswrapper[4861]: I0310 18:49:32.626277 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:49:32 crc kubenswrapper[4861]: I0310 18:49:32.626342 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:49:32 crc kubenswrapper[4861]: I0310 18:49:32.626360 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:49:32 crc kubenswrapper[4861]: I0310 18:49:32.626386 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 10 18:49:32 crc kubenswrapper[4861]: I0310 18:49:32.626405 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:49:32Z","lastTransitionTime":"2026-03-10T18:49:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 10 18:49:32 crc kubenswrapper[4861]: I0310 18:49:32.729338 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:49:32 crc kubenswrapper[4861]: I0310 18:49:32.729398 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:49:32 crc kubenswrapper[4861]: I0310 18:49:32.729416 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:49:32 crc kubenswrapper[4861]: I0310 18:49:32.729440 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 10 18:49:32 crc kubenswrapper[4861]: I0310 18:49:32.729457 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:49:32Z","lastTransitionTime":"2026-03-10T18:49:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 10 18:49:32 crc kubenswrapper[4861]: I0310 18:49:32.832302 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:49:32 crc kubenswrapper[4861]: I0310 18:49:32.832365 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:49:32 crc kubenswrapper[4861]: I0310 18:49:32.832383 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:49:32 crc kubenswrapper[4861]: I0310 18:49:32.832406 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 10 18:49:32 crc kubenswrapper[4861]: I0310 18:49:32.832423 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:49:32Z","lastTransitionTime":"2026-03-10T18:49:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 10 18:49:32 crc kubenswrapper[4861]: I0310 18:49:32.936429 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:49:32 crc kubenswrapper[4861]: I0310 18:49:32.936502 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:49:32 crc kubenswrapper[4861]: I0310 18:49:32.936519 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:49:32 crc kubenswrapper[4861]: I0310 18:49:32.936545 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 10 18:49:32 crc kubenswrapper[4861]: I0310 18:49:32.936565 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:49:32Z","lastTransitionTime":"2026-03-10T18:49:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 10 18:49:33 crc kubenswrapper[4861]: I0310 18:49:33.040555 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:49:33 crc kubenswrapper[4861]: I0310 18:49:33.040615 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:49:33 crc kubenswrapper[4861]: I0310 18:49:33.040632 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:49:33 crc kubenswrapper[4861]: I0310 18:49:33.040656 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 10 18:49:33 crc kubenswrapper[4861]: I0310 18:49:33.040672 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:49:33Z","lastTransitionTime":"2026-03-10T18:49:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 10 18:49:33 crc kubenswrapper[4861]: I0310 18:49:33.144268 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:49:33 crc kubenswrapper[4861]: I0310 18:49:33.144335 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:49:33 crc kubenswrapper[4861]: I0310 18:49:33.144357 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:49:33 crc kubenswrapper[4861]: I0310 18:49:33.144384 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 10 18:49:33 crc kubenswrapper[4861]: I0310 18:49:33.144405 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:49:33Z","lastTransitionTime":"2026-03-10T18:49:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 10 18:49:33 crc kubenswrapper[4861]: I0310 18:49:33.247615 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:49:33 crc kubenswrapper[4861]: I0310 18:49:33.247766 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:49:33 crc kubenswrapper[4861]: I0310 18:49:33.247787 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:49:33 crc kubenswrapper[4861]: I0310 18:49:33.247811 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 10 18:49:33 crc kubenswrapper[4861]: I0310 18:49:33.247828 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:49:33Z","lastTransitionTime":"2026-03-10T18:49:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 10 18:49:33 crc kubenswrapper[4861]: I0310 18:49:33.351176 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:49:33 crc kubenswrapper[4861]: I0310 18:49:33.351232 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:49:33 crc kubenswrapper[4861]: I0310 18:49:33.351250 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:49:33 crc kubenswrapper[4861]: I0310 18:49:33.351273 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 10 18:49:33 crc kubenswrapper[4861]: I0310 18:49:33.351290 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:49:33Z","lastTransitionTime":"2026-03-10T18:49:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 10 18:49:33 crc kubenswrapper[4861]: I0310 18:49:33.454583 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:49:33 crc kubenswrapper[4861]: I0310 18:49:33.454657 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:49:33 crc kubenswrapper[4861]: I0310 18:49:33.454676 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:49:33 crc kubenswrapper[4861]: I0310 18:49:33.454702 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 10 18:49:33 crc kubenswrapper[4861]: I0310 18:49:33.454793 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:49:33Z","lastTransitionTime":"2026-03-10T18:49:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 10 18:49:33 crc kubenswrapper[4861]: I0310 18:49:33.558026 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:49:33 crc kubenswrapper[4861]: I0310 18:49:33.558119 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:49:33 crc kubenswrapper[4861]: I0310 18:49:33.558147 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:49:33 crc kubenswrapper[4861]: I0310 18:49:33.558180 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 10 18:49:33 crc kubenswrapper[4861]: I0310 18:49:33.558200 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:49:33Z","lastTransitionTime":"2026-03-10T18:49:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 10 18:49:33 crc kubenswrapper[4861]: I0310 18:49:33.642629 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rmmgv"] Mar 10 18:49:33 crc kubenswrapper[4861]: I0310 18:49:33.643297 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rmmgv" Mar 10 18:49:33 crc kubenswrapper[4861]: I0310 18:49:33.646601 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Mar 10 18:49:33 crc kubenswrapper[4861]: I0310 18:49:33.646645 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Mar 10 18:49:33 crc kubenswrapper[4861]: I0310 18:49:33.661884 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 10 18:49:33 crc kubenswrapper[4861]: I0310 18:49:33.662197 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:49:33 crc kubenswrapper[4861]: I0310 18:49:33.662423 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:49:33 crc kubenswrapper[4861]: I0310 18:49:33.662455 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:49:33 crc kubenswrapper[4861]: I0310 18:49:33.662478 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 10 18:49:33 crc kubenswrapper[4861]: I0310 18:49:33.662496 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:49:33Z","lastTransitionTime":"2026-03-10T18:49:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 10 18:49:33 crc kubenswrapper[4861]: I0310 18:49:33.673539 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-b87lw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e474cdc9-b374-49a6-aece-afa19f8d5ee6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:21Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:21Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5t7pf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:49:21Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-b87lw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 10 18:49:33 crc kubenswrapper[4861]: I0310 18:49:33.689636 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c65aa66-5db2-421b-ad46-0ff1e2b1cb22\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://52e87225b434b0800764a5c2306d8079c44bff105d02de78ab085b434f56031f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a8a6f58ea1d180f50a7ffde2b16f470901281a847bd85cc0bc8a62bbf9f8e70\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://484df0ad2e71b2faec0ed53537512b115e6d4ab7cd3212cd29309538bd013c51\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1665ca49c2c451e187b70bfc13ce0034d2c07b92943b18e77ae09cd6e5505557\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1665ca49c2c451e187b70bfc13ce0034d2c07b92943b18e77ae09cd6e5505557\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-10T18:48:50Z\\\",\\\"message\\\":\\\"le observer\\\\nW0310 18:48:50.587139 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0310 18:48:50.587315 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0310 18:48:50.588407 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-4176947974/tls.crt::/tmp/serving-cert-4176947974/tls.key\\\\\\\"\\\\nI0310 18:48:50.773986 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0310 18:48:50.776438 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0310 18:48:50.776455 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0310 18:48:50.776477 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0310 18:48:50.776482 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0310 18:48:50.783076 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0310 18:48:50.783088 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0310 18:48:50.783118 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0310 18:48:50.783134 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0310 18:48:50.783146 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0310 18:48:50.783157 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0310 18:48:50.783167 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0310 18:48:50.783173 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0310 18:48:50.784467 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-10T18:48:50Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44dde00a3ae562bbb5504d299475795cc38b22c2b6decba2ff15067bce7436df\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:39Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a14137bdfec242e37af20a572af2edea25fb1d8a1f9708f8d0193d1a7675b1bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a14137bdfec242e37af20a572af2edea25fb1d8a1f9708f8d0193d1a7675b1bc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:47:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-10T18:47:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:47:37Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 10 18:49:33 crc kubenswrapper[4861]: I0310 18:49:33.704781 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 10 18:49:33 crc kubenswrapper[4861]: I0310 18:49:33.718664 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 10 18:49:33 crc kubenswrapper[4861]: I0310 18:49:33.718984 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/2e4302e6-1187-4371-8523-a96ec8032e74-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-rmmgv\" (UID: \"2e4302e6-1187-4371-8523-a96ec8032e74\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rmmgv" Mar 10 18:49:33 crc 
kubenswrapper[4861]: I0310 18:49:33.719048 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/2e4302e6-1187-4371-8523-a96ec8032e74-env-overrides\") pod \"ovnkube-control-plane-749d76644c-rmmgv\" (UID: \"2e4302e6-1187-4371-8523-a96ec8032e74\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rmmgv" Mar 10 18:49:33 crc kubenswrapper[4861]: I0310 18:49:33.719113 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5vh2b\" (UniqueName: \"kubernetes.io/projected/2e4302e6-1187-4371-8523-a96ec8032e74-kube-api-access-5vh2b\") pod \"ovnkube-control-plane-749d76644c-rmmgv\" (UID: \"2e4302e6-1187-4371-8523-a96ec8032e74\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rmmgv" Mar 10 18:49:33 crc kubenswrapper[4861]: I0310 18:49:33.719189 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/2e4302e6-1187-4371-8523-a96ec8032e74-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-rmmgv\" (UID: \"2e4302e6-1187-4371-8523-a96ec8032e74\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rmmgv" Mar 10 18:49:33 crc kubenswrapper[4861]: I0310 18:49:33.746933 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-s2l62" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"be820cd7-b3a7-4183-a408-67151247b6ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:22Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:22Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:22Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwtwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwtwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwtwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwtwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwtwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwtwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwtwj\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwtwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwtwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:49:22Z\\\"}}\" for 
pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-s2l62\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 10 18:49:33 crc kubenswrapper[4861]: I0310 18:49:33.761327 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 10 18:49:33 crc kubenswrapper[4861]: I0310 18:49:33.765926 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:49:33 crc kubenswrapper[4861]: I0310 18:49:33.766162 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:49:33 crc kubenswrapper[4861]: I0310 18:49:33.766357 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:49:33 crc kubenswrapper[4861]: I0310 18:49:33.766563 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 10 18:49:33 crc kubenswrapper[4861]: I0310 18:49:33.766809 4861 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:49:33Z","lastTransitionTime":"2026-03-10T18:49:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 10 18:49:33 crc kubenswrapper[4861]: I0310 18:49:33.775926 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-6lblg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d1c251f4-6539-4aa1-8979-47e74495aca3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\
\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2gvk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:49:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-6lblg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 10 18:49:33 crc kubenswrapper[4861]: I0310 18:49:33.787390 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pzmsp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"66631e59-ce5c-44de-8a4d-37eb82acf997\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:27Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pm4t5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:49:27Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pzmsp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 10 18:49:33 crc kubenswrapper[4861]: I0310 18:49:33.799494 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rmmgv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e4302e6-1187-4371-8523-a96ec8032e74\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:33Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:33Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vh2b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vh2b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:49:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-rmmgv\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 10 18:49:33 crc kubenswrapper[4861]: I0310 18:49:33.813961 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 10 18:49:33 crc kubenswrapper[4861]: I0310 18:49:33.819737 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/2e4302e6-1187-4371-8523-a96ec8032e74-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-rmmgv\" (UID: \"2e4302e6-1187-4371-8523-a96ec8032e74\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rmmgv" Mar 10 18:49:33 crc kubenswrapper[4861]: I0310 18:49:33.819795 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/2e4302e6-1187-4371-8523-a96ec8032e74-env-overrides\") pod \"ovnkube-control-plane-749d76644c-rmmgv\" (UID: \"2e4302e6-1187-4371-8523-a96ec8032e74\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rmmgv" Mar 10 
18:49:33 crc kubenswrapper[4861]: I0310 18:49:33.819852 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5vh2b\" (UniqueName: \"kubernetes.io/projected/2e4302e6-1187-4371-8523-a96ec8032e74-kube-api-access-5vh2b\") pod \"ovnkube-control-plane-749d76644c-rmmgv\" (UID: \"2e4302e6-1187-4371-8523-a96ec8032e74\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rmmgv" Mar 10 18:49:33 crc kubenswrapper[4861]: I0310 18:49:33.819918 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/2e4302e6-1187-4371-8523-a96ec8032e74-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-rmmgv\" (UID: \"2e4302e6-1187-4371-8523-a96ec8032e74\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rmmgv" Mar 10 18:49:33 crc kubenswrapper[4861]: I0310 18:49:33.821454 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/2e4302e6-1187-4371-8523-a96ec8032e74-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-rmmgv\" (UID: \"2e4302e6-1187-4371-8523-a96ec8032e74\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rmmgv" Mar 10 18:49:33 crc kubenswrapper[4861]: I0310 18:49:33.822387 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/2e4302e6-1187-4371-8523-a96ec8032e74-env-overrides\") pod \"ovnkube-control-plane-749d76644c-rmmgv\" (UID: \"2e4302e6-1187-4371-8523-a96ec8032e74\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rmmgv" Mar 10 18:49:33 crc kubenswrapper[4861]: I0310 18:49:33.829972 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/2e4302e6-1187-4371-8523-a96ec8032e74-ovn-control-plane-metrics-cert\") pod 
\"ovnkube-control-plane-749d76644c-rmmgv\" (UID: \"2e4302e6-1187-4371-8523-a96ec8032e74\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rmmgv" Mar 10 18:49:33 crc kubenswrapper[4861]: I0310 18:49:33.852278 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"952bf490-6587-4240-a832-feac082e7775\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf6fe1422055e59455ca71c1a22213f55312c80667526cf13dbf61f7ccea7c75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\
\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd7523ddf0ac38e3b59b767d7ef95f452f24c5914734d6f1ac0187f1bede5fcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b1d8f9a97293ae86fe0b8c9ab76600caf04291ec3ddd2e47a071805bef675da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://efd5039046658f4b595c747b6434cb0ef3befbf75874a6dab629cdbd6634524e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha2
56:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6234508294e7f87d3a8da0d3a1d8ecb96fffc3dbd5974e40a3bb7ceeee0d2a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34f62205737b2bb279cc7a0e2ffee213392c8135cca742b76e6f7ed44ddca754\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c
6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34f62205737b2bb279cc7a0e2ffee213392c8135cca742b76e6f7ed44ddca754\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:47:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-10T18:47:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://49cc08d4ed6e0fd87f4c957414e510f99511b9654a72324b707c165206c4979e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49cc08d4ed6e0fd87f4c957414e510f99511b9654a72324b707c165206c4979e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:47:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-10T18:47:39Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0995183765c106ba8369d3c05ac572596329eb1978876b770e327a430963ba9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0995183765c106ba8369d3c05ac572596329eb1978876b770e327a430963ba9b\\
\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:47:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-10T18:47:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:47:37Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 10 18:49:33 crc kubenswrapper[4861]: I0310 18:49:33.862964 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5vh2b\" (UniqueName: \"kubernetes.io/projected/2e4302e6-1187-4371-8523-a96ec8032e74-kube-api-access-5vh2b\") pod \"ovnkube-control-plane-749d76644c-rmmgv\" (UID: \"2e4302e6-1187-4371-8523-a96ec8032e74\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rmmgv" Mar 10 18:49:33 crc kubenswrapper[4861]: I0310 18:49:33.868113 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 10 18:49:33 crc kubenswrapper[4861]: I0310 18:49:33.870025 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:49:33 crc kubenswrapper[4861]: I0310 18:49:33.870080 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:49:33 crc kubenswrapper[4861]: I0310 18:49:33.870098 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:49:33 crc kubenswrapper[4861]: I0310 18:49:33.870122 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 10 18:49:33 crc kubenswrapper[4861]: I0310 18:49:33.870140 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:49:33Z","lastTransitionTime":"2026-03-10T18:49:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 10 18:49:33 crc kubenswrapper[4861]: I0310 18:49:33.881775 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qttbr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"771189c2-452d-4204-a0b7-abfe9ba62bd0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:21Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:21Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read 
at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tng72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tng72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:49:21Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qttbr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 10 18:49:33 crc kubenswrapper[4861]: I0310 18:49:33.900847 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-j2s27" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"391f4bfa-b94c-4b25-8f06-a2f19f912194\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:21Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlddg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlddg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlddg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\
\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlddg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlddg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\"
:\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlddg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlddg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:49:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-j2s27\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 10 18:49:33 crc kubenswrapper[4861]: I0310 18:49:33.957765 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 10 18:49:33 crc kubenswrapper[4861]: I0310 18:49:33.957867 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 10 18:49:33 crc kubenswrapper[4861]: E0310 18:49:33.957907 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 10 18:49:33 crc kubenswrapper[4861]: I0310 18:49:33.957934 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 10 18:49:33 crc kubenswrapper[4861]: E0310 18:49:33.958116 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 10 18:49:33 crc kubenswrapper[4861]: I0310 18:49:33.958857 4861 scope.go:117] "RemoveContainer" containerID="1665ca49c2c451e187b70bfc13ce0034d2c07b92943b18e77ae09cd6e5505557" Mar 10 18:49:33 crc kubenswrapper[4861]: E0310 18:49:33.958910 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 10 18:49:33 crc kubenswrapper[4861]: E0310 18:49:33.961027 4861 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 10 18:49:33 crc kubenswrapper[4861]: container &Container{Name:network-operator,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,Command:[/bin/bash -c #!/bin/bash Mar 10 18:49:33 crc kubenswrapper[4861]: set -o allexport Mar 10 18:49:33 crc kubenswrapper[4861]: if [[ -f /etc/kubernetes/apiserver-url.env ]]; then Mar 10 18:49:33 crc kubenswrapper[4861]: source /etc/kubernetes/apiserver-url.env Mar 10 18:49:33 crc kubenswrapper[4861]: else Mar 10 18:49:33 crc kubenswrapper[4861]: echo "Error: /etc/kubernetes/apiserver-url.env is missing" Mar 10 18:49:33 crc kubenswrapper[4861]: exit 1 Mar 10 18:49:33 crc kubenswrapper[4861]: fi Mar 10 18:49:33 crc kubenswrapper[4861]: exec /usr/bin/cluster-network-operator start --listen=0.0.0.0:9104 Mar 10 18:49:33 crc kubenswrapper[4861]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:cno,HostPort:9104,ContainerPort:9104,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:RELEASE_VERSION,Value:4.18.1,ValueFrom:nil,},EnvVar{Name:KUBE_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b97554198294bf544fbc116c94a0a1fb2ec8a4de0e926bf9d9e320135f0bee6f,ValueFrom:nil,},EnvVar{Name:KUBE_RBAC_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09,ValueFrom:nil,},EnvVar{Name:MULTUS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26,ValueFrom:nil,},EnvVar{Name:MULTUS_ADMISSION_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317,ValueFrom:nil,},EnvVar{Name:CNI_PLUGINS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc,ValueFrom:nil,},EnvVar{Name:BOND_CNI_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78,ValueFrom:nil,},EnvVar{Name:WHEREABOUTS_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4,ValueFrom:nil,},EnvVar{Name:ROUTE_OVERRRIDE_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa,ValueFrom:nil,},EnvVar{Name:MULTUS_NETWORKPOLICY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:23f833d3738d68706eb2f2868bd76bd71cee016cffa6faf5f045a60cc8c6eddd,ValueFrom:nil,},EnvVar{Name:OVN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,ValueFrom:nil,},EnvVar{Name:OVN_NB_RAFT_ELECTION_TIMER,Value:10,ValueFrom:nil,},
EnvVar{Name:OVN_SB_RAFT_ELECTION_TIMER,Value:16,ValueFrom:nil,},EnvVar{Name:OVN_NORTHD_PROBE_INTERVAL,Value:10000,ValueFrom:nil,},EnvVar{Name:OVN_CONTROLLER_INACTIVITY_PROBE,Value:180000,ValueFrom:nil,},EnvVar{Name:OVN_NB_INACTIVITY_PROBE,Value:60000,ValueFrom:nil,},EnvVar{Name:EGRESS_ROUTER_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c,ValueFrom:nil,},EnvVar{Name:NETWORK_METRICS_DAEMON_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_SOURCE_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_TARGET_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,ValueFrom:nil,},EnvVar{Name:NETWORK_OPERATOR_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,ValueFrom:nil,},EnvVar{Name:CLOUD_NETWORK_CONFIG_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8048f1cb0be521f09749c0a489503cd56d85b68c6ca93380e082cfd693cd97a8,ValueFrom:nil,},EnvVar{Name:CLI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2,ValueFrom:nil,},EnvVar{Name:FRR_K8S_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5dbf844e49bb46b78586930149e5e5f5dc121014c8afd10fe36f3651967cc256,ValueFrom:nil,},EnvVar{Name:NETWORKING_CONSOLE_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd,ValueFrom:nil,},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFi
eldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:host-etc-kube,ReadOnly:true,MountPath:/etc/kubernetes,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:metrics-tls,ReadOnly:false,MountPath:/var/run/secrets/serving-cert,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rdwmf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-operator-58b4c7f79c-55gtf_openshift-network-operator(37a5e44f-9a88-4405-be8a-b645485e7312): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Mar 10 18:49:33 crc kubenswrapper[4861]: > logger="UnhandledError" Mar 10 18:49:33 crc kubenswrapper[4861]: E0310 18:49:33.962251 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"network-operator\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" podUID="37a5e44f-9a88-4405-be8a-b645485e7312" Mar 10 18:49:33 crc kubenswrapper[4861]: E0310 18:49:33.962472 4861 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 10 18:49:33 crc kubenswrapper[4861]: init container 
&Container{Name:kubecfg-setup,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,Command:[/bin/bash -c cat << EOF > /etc/ovn/kubeconfig Mar 10 18:49:33 crc kubenswrapper[4861]: apiVersion: v1 Mar 10 18:49:33 crc kubenswrapper[4861]: clusters: Mar 10 18:49:33 crc kubenswrapper[4861]: - cluster: Mar 10 18:49:33 crc kubenswrapper[4861]: certificate-authority: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt Mar 10 18:49:33 crc kubenswrapper[4861]: server: https://api-int.crc.testing:6443 Mar 10 18:49:33 crc kubenswrapper[4861]: name: default-cluster Mar 10 18:49:33 crc kubenswrapper[4861]: contexts: Mar 10 18:49:33 crc kubenswrapper[4861]: - context: Mar 10 18:49:33 crc kubenswrapper[4861]: cluster: default-cluster Mar 10 18:49:33 crc kubenswrapper[4861]: namespace: default Mar 10 18:49:33 crc kubenswrapper[4861]: user: default-auth Mar 10 18:49:33 crc kubenswrapper[4861]: name: default-context Mar 10 18:49:33 crc kubenswrapper[4861]: current-context: default-context Mar 10 18:49:33 crc kubenswrapper[4861]: kind: Config Mar 10 18:49:33 crc kubenswrapper[4861]: preferences: {} Mar 10 18:49:33 crc kubenswrapper[4861]: users: Mar 10 18:49:33 crc kubenswrapper[4861]: - name: default-auth Mar 10 18:49:33 crc kubenswrapper[4861]: user: Mar 10 18:49:33 crc kubenswrapper[4861]: client-certificate: /etc/ovn/ovnkube-node-certs/ovnkube-client-current.pem Mar 10 18:49:33 crc kubenswrapper[4861]: client-key: /etc/ovn/ovnkube-node-certs/ovnkube-client-current.pem Mar 10 18:49:33 crc kubenswrapper[4861]: EOF Mar 10 18:49:33 crc kubenswrapper[4861]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-openvswitch,ReadOnly:false,MountPath:/etc/ovn/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fwtwj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovnkube-node-s2l62_openshift-ovn-kubernetes(be820cd7-b3a7-4183-a408-67151247b6ee): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Mar 10 18:49:33 crc kubenswrapper[4861]: > logger="UnhandledError" Mar 10 18:49:33 crc kubenswrapper[4861]: E0310 18:49:33.962488 4861 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 10 18:49:33 crc kubenswrapper[4861]: container &Container{Name:dns-node-resolver,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2,Command:[/bin/bash -c #!/bin/bash Mar 10 18:49:33 crc kubenswrapper[4861]: set -uo pipefail Mar 10 18:49:33 crc kubenswrapper[4861]: Mar 10 18:49:33 crc kubenswrapper[4861]: trap 'jobs -p | xargs kill || true; wait; exit 0' TERM Mar 10 18:49:33 crc kubenswrapper[4861]: Mar 10 18:49:33 crc kubenswrapper[4861]: OPENSHIFT_MARKER="openshift-generated-node-resolver" Mar 10 18:49:33 crc kubenswrapper[4861]: HOSTS_FILE="/etc/hosts" Mar 10 18:49:33 crc kubenswrapper[4861]: TEMP_FILE="/etc/hosts.tmp" Mar 10 18:49:33 crc 
kubenswrapper[4861]: Mar 10 18:49:33 crc kubenswrapper[4861]: IFS=', ' read -r -a services <<< "${SERVICES}" Mar 10 18:49:33 crc kubenswrapper[4861]: Mar 10 18:49:33 crc kubenswrapper[4861]: # Make a temporary file with the old hosts file's attributes. Mar 10 18:49:33 crc kubenswrapper[4861]: if ! cp -f --attributes-only "${HOSTS_FILE}" "${TEMP_FILE}"; then Mar 10 18:49:33 crc kubenswrapper[4861]: echo "Failed to preserve hosts file. Exiting." Mar 10 18:49:33 crc kubenswrapper[4861]: exit 1 Mar 10 18:49:33 crc kubenswrapper[4861]: fi Mar 10 18:49:33 crc kubenswrapper[4861]: Mar 10 18:49:33 crc kubenswrapper[4861]: while true; do Mar 10 18:49:33 crc kubenswrapper[4861]: declare -A svc_ips Mar 10 18:49:33 crc kubenswrapper[4861]: for svc in "${services[@]}"; do Mar 10 18:49:33 crc kubenswrapper[4861]: # Fetch service IP from cluster dns if present. We make several tries Mar 10 18:49:33 crc kubenswrapper[4861]: # to do it: IPv4, IPv6, IPv4 over TCP and IPv6 over TCP. The two last ones Mar 10 18:49:33 crc kubenswrapper[4861]: # are for deployments with Kuryr on older OpenStack (OSP13) - those do not Mar 10 18:49:33 crc kubenswrapper[4861]: # support UDP loadbalancers and require reaching DNS through TCP. Mar 10 18:49:33 crc kubenswrapper[4861]: cmds=('dig -t A @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"' Mar 10 18:49:33 crc kubenswrapper[4861]: 'dig -t AAAA @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"' Mar 10 18:49:33 crc kubenswrapper[4861]: 'dig -t A +tcp +retry=0 @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"' Mar 10 18:49:33 crc kubenswrapper[4861]: 'dig -t AAAA +tcp +retry=0 @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"') Mar 10 18:49:33 crc kubenswrapper[4861]: for i in ${!cmds[*]} Mar 10 18:49:33 crc kubenswrapper[4861]: do Mar 10 18:49:33 crc kubenswrapper[4861]: ips=($(eval "${cmds[i]}")) Mar 10 18:49:33 crc kubenswrapper[4861]: if [[ "$?" 
-eq 0 && "${#ips[@]}" -ne 0 ]]; then Mar 10 18:49:33 crc kubenswrapper[4861]: svc_ips["${svc}"]="${ips[@]}" Mar 10 18:49:33 crc kubenswrapper[4861]: break Mar 10 18:49:33 crc kubenswrapper[4861]: fi Mar 10 18:49:33 crc kubenswrapper[4861]: done Mar 10 18:49:33 crc kubenswrapper[4861]: done Mar 10 18:49:33 crc kubenswrapper[4861]: Mar 10 18:49:33 crc kubenswrapper[4861]: # Update /etc/hosts only if we get valid service IPs Mar 10 18:49:33 crc kubenswrapper[4861]: # We will not update /etc/hosts when there is coredns service outage or api unavailability Mar 10 18:49:33 crc kubenswrapper[4861]: # Stale entries could exist in /etc/hosts if the service is deleted Mar 10 18:49:33 crc kubenswrapper[4861]: if [[ -n "${svc_ips[*]-}" ]]; then Mar 10 18:49:33 crc kubenswrapper[4861]: # Build a new hosts file from /etc/hosts with our custom entries filtered out Mar 10 18:49:33 crc kubenswrapper[4861]: if ! sed --silent "/# ${OPENSHIFT_MARKER}/d; w ${TEMP_FILE}" "${HOSTS_FILE}"; then Mar 10 18:49:33 crc kubenswrapper[4861]: # Only continue rebuilding the hosts entries if its original content is preserved Mar 10 18:49:33 crc kubenswrapper[4861]: sleep 60 & wait Mar 10 18:49:33 crc kubenswrapper[4861]: continue Mar 10 18:49:33 crc kubenswrapper[4861]: fi Mar 10 18:49:33 crc kubenswrapper[4861]: Mar 10 18:49:33 crc kubenswrapper[4861]: # Append resolver entries for services Mar 10 18:49:33 crc kubenswrapper[4861]: rc=0 Mar 10 18:49:33 crc kubenswrapper[4861]: for svc in "${!svc_ips[@]}"; do Mar 10 18:49:33 crc kubenswrapper[4861]: for ip in ${svc_ips[${svc}]}; do Mar 10 18:49:33 crc kubenswrapper[4861]: echo "${ip} ${svc} ${svc}.${CLUSTER_DOMAIN} # ${OPENSHIFT_MARKER}" >> "${TEMP_FILE}" || rc=$? 
Mar 10 18:49:33 crc kubenswrapper[4861]: done Mar 10 18:49:33 crc kubenswrapper[4861]: done Mar 10 18:49:33 crc kubenswrapper[4861]: if [[ $rc -ne 0 ]]; then Mar 10 18:49:33 crc kubenswrapper[4861]: sleep 60 & wait Mar 10 18:49:33 crc kubenswrapper[4861]: continue Mar 10 18:49:33 crc kubenswrapper[4861]: fi Mar 10 18:49:33 crc kubenswrapper[4861]: Mar 10 18:49:33 crc kubenswrapper[4861]: Mar 10 18:49:33 crc kubenswrapper[4861]: # TODO: Update /etc/hosts atomically to avoid any inconsistent behavior Mar 10 18:49:33 crc kubenswrapper[4861]: # Replace /etc/hosts with our modified version if needed Mar 10 18:49:33 crc kubenswrapper[4861]: cmp "${TEMP_FILE}" "${HOSTS_FILE}" || cp -f "${TEMP_FILE}" "${HOSTS_FILE}" Mar 10 18:49:33 crc kubenswrapper[4861]: # TEMP_FILE is not removed to avoid file create/delete and attributes copy churn Mar 10 18:49:33 crc kubenswrapper[4861]: fi Mar 10 18:49:33 crc kubenswrapper[4861]: sleep 60 & wait Mar 10 18:49:33 crc kubenswrapper[4861]: unset svc_ips Mar 10 18:49:33 crc kubenswrapper[4861]: done Mar 10 18:49:33 crc kubenswrapper[4861]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:SERVICES,Value:image-registry.openshift-image-registry.svc,ValueFrom:nil,},EnvVar{Name:NAMESERVER,Value:10.217.4.10,ValueFrom:nil,},EnvVar{Name:CLUSTER_DOMAIN,Value:cluster.local,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{22020096 0} {} 21Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hosts-file,ReadOnly:false,MountPath:/etc/hosts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5t7pf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod node-resolver-b87lw_openshift-dns(e474cdc9-b374-49a6-aece-afa19f8d5ee6): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Mar 10 18:49:33 crc kubenswrapper[4861]: > logger="UnhandledError" Mar 10 18:49:33 crc kubenswrapper[4861]: E0310 18:49:33.962518 4861 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:machine-config-daemon,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a,Command:[/usr/bin/machine-config-daemon],Args:[start 
--payload-version=4.18.1],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:health,HostPort:8798,ContainerPort:8798,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:rootfs,ReadOnly:false,MountPath:/rootfs,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tng72,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/health,Port:{0 8798 },Host:127.0.0.1,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:120,TimeoutSeconds:1,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod machine-config-daemon-qttbr_openshift-machine-config-operator(771189c2-452d-4204-a0b7-abfe9ba62bd0): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" 
logger="UnhandledError" Mar 10 18:49:33 crc kubenswrapper[4861]: E0310 18:49:33.962846 4861 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 10 18:49:33 crc kubenswrapper[4861]: container &Container{Name:kube-multus,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26,Command:[/bin/bash -ec --],Args:[MULTUS_DAEMON_OPT="" Mar 10 18:49:33 crc kubenswrapper[4861]: /entrypoint/cnibincopy.sh; exec /usr/src/multus-cni/bin/multus-daemon $MULTUS_DAEMON_OPT Mar 10 18:49:33 crc kubenswrapper[4861]: ],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:RHEL8_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/rhel8/bin/,ValueFrom:nil,},EnvVar{Name:RHEL9_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/rhel9/bin/,ValueFrom:nil,},EnvVar{Name:DEFAULT_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/bin/,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:6443,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:api-int.crc.testing,ValueFrom:nil,},EnvVar{Name:MULTUS_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:K8S_NODE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{68157440 0} {} 65Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cni-binary-copy,ReadOnly:false,MountPath:/entrypoint,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:os-release,ReadOnly:false,MountPath:/host/etc/os-release,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:system-cni-dir,ReadOnly:false,MountPath:/host/etc/cni/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-cni-dir,ReadOnly:false,MountPath:/host/run/multus/cni/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cnibin,ReadOnly:false,MountPath:/host/opt/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-socket-dir-parent,ReadOnly:false,MountPath:/host/run/multus,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-k8s-cni-cncf-io,ReadOnly:false,MountPath:/run/k8s.cni.cncf.io,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-netns,ReadOnly:false,MountPath:/run/netns,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-cni-bin,ReadOnly:false,MountPath:/var/lib/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-cni-multus,ReadOnly:false,MountPath:/var/lib/cni/multus,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-kubelet,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:hostroot,ReadOnly:false,MountPath:/hostroot,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-conf-dir,ReadOnly:false,MountPath:/etc/cni/multus/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{
Name:multus-daemon-config,ReadOnly:true,MountPath:/etc/cni/net.d/multus.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-multus-certs,ReadOnly:false,MountPath:/etc/cni/multus/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:etc-kubernetes,ReadOnly:false,MountPath:/etc/kubernetes,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-t2gvk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod multus-6lblg_openshift-multus(d1c251f4-6539-4aa1-8979-47e74495aca3): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Mar 10 18:49:33 crc kubenswrapper[4861]: > logger="UnhandledError" Mar 10 18:49:33 crc kubenswrapper[4861]: E0310 18:49:33.963957 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubecfg-setup\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-ovn-kubernetes/ovnkube-node-s2l62" podUID="be820cd7-b3a7-4183-a408-67151247b6ee" Mar 10 18:49:33 crc kubenswrapper[4861]: E0310 18:49:33.964047 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"kube-multus\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-multus/multus-6lblg" podUID="d1c251f4-6539-4aa1-8979-47e74495aca3" Mar 10 18:49:33 crc kubenswrapper[4861]: E0310 18:49:33.964093 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dns-node-resolver\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-dns/node-resolver-b87lw" podUID="e474cdc9-b374-49a6-aece-afa19f8d5ee6" Mar 10 18:49:33 crc kubenswrapper[4861]: I0310 18:49:33.964125 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rmmgv" Mar 10 18:49:33 crc kubenswrapper[4861]: E0310 18:49:33.965402 4861 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09,Command:[],Args:[--secure-listen-address=0.0.0.0:9001 --config-file=/etc/kube-rbac-proxy/config-file.yaml --tls-cipher-suites=TLS_AES_128_GCM_SHA256,TLS_AES_256_GCM_SHA384,TLS_CHACHA20_POLY1305_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256 --tls-min-version=VersionTLS12 --upstream=http://127.0.0.1:8797 --logtostderr=true --tls-cert-file=/etc/tls/private/tls.crt --tls-private-key-file=/etc/tls/private/tls.key],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:9001,ContainerPort:9001,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{52428800 0} {} 50Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:proxy-tls,ReadOnly:false,MountPath:/etc/tls/private,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:mcd-auth-proxy-config,ReadOnly:false,MountPath:/etc/kube-rbac-proxy,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tng72,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod machine-config-daemon-qttbr_openshift-machine-config-operator(771189c2-452d-4204-a0b7-abfe9ba62bd0): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Mar 10 18:49:33 crc kubenswrapper[4861]: E0310 18:49:33.966990 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"machine-config-daemon\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-machine-config-operator/machine-config-daemon-qttbr" podUID="771189c2-452d-4204-a0b7-abfe9ba62bd0" Mar 10 18:49:33 crc kubenswrapper[4861]: I0310 18:49:33.972457 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:49:33 crc kubenswrapper[4861]: I0310 18:49:33.972499 4861 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:49:33 crc kubenswrapper[4861]: I0310 18:49:33.972516 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:49:33 crc kubenswrapper[4861]: I0310 18:49:33.972536 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 10 18:49:33 crc kubenswrapper[4861]: I0310 18:49:33.972552 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:49:33Z","lastTransitionTime":"2026-03-10T18:49:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 10 18:49:33 crc kubenswrapper[4861]: E0310 18:49:33.997903 4861 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 10 18:49:33 crc kubenswrapper[4861]: container &Container{Name:kube-rbac-proxy,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09,Command:[/bin/bash -c #!/bin/bash Mar 10 18:49:33 crc kubenswrapper[4861]: set -euo pipefail Mar 10 18:49:33 crc kubenswrapper[4861]: TLS_PK=/etc/pki/tls/metrics-cert/tls.key Mar 10 18:49:33 crc kubenswrapper[4861]: TLS_CERT=/etc/pki/tls/metrics-cert/tls.crt Mar 10 18:49:33 crc kubenswrapper[4861]: # As the secret mount is optional we must wait for the files to be present. Mar 10 18:49:33 crc kubenswrapper[4861]: # The service is created in monitor.yaml and this is created in sdn.yaml. 
Mar 10 18:49:33 crc kubenswrapper[4861]: TS=$(date +%s) Mar 10 18:49:33 crc kubenswrapper[4861]: WARN_TS=$(( ${TS} + $(( 20 * 60)) )) Mar 10 18:49:33 crc kubenswrapper[4861]: HAS_LOGGED_INFO=0 Mar 10 18:49:33 crc kubenswrapper[4861]: Mar 10 18:49:33 crc kubenswrapper[4861]: log_missing_certs(){ Mar 10 18:49:33 crc kubenswrapper[4861]: CUR_TS=$(date +%s) Mar 10 18:49:33 crc kubenswrapper[4861]: if [[ "${CUR_TS}" -gt "WARN_TS" ]]; then Mar 10 18:49:33 crc kubenswrapper[4861]: echo $(date -Iseconds) WARN: ovn-control-plane-metrics-cert not mounted after 20 minutes. Mar 10 18:49:33 crc kubenswrapper[4861]: elif [[ "${HAS_LOGGED_INFO}" -eq 0 ]] ; then Mar 10 18:49:33 crc kubenswrapper[4861]: echo $(date -Iseconds) INFO: ovn-control-plane-metrics-cert not mounted. Waiting 20 minutes. Mar 10 18:49:33 crc kubenswrapper[4861]: HAS_LOGGED_INFO=1 Mar 10 18:49:33 crc kubenswrapper[4861]: fi Mar 10 18:49:33 crc kubenswrapper[4861]: } Mar 10 18:49:33 crc kubenswrapper[4861]: while [[ ! -f "${TLS_PK}" || ! 
-f "${TLS_CERT}" ]] ; do Mar 10 18:49:33 crc kubenswrapper[4861]: log_missing_certs Mar 10 18:49:33 crc kubenswrapper[4861]: sleep 5 Mar 10 18:49:33 crc kubenswrapper[4861]: done Mar 10 18:49:33 crc kubenswrapper[4861]: Mar 10 18:49:33 crc kubenswrapper[4861]: echo $(date -Iseconds) INFO: ovn-control-plane-metrics-certs mounted, starting kube-rbac-proxy Mar 10 18:49:33 crc kubenswrapper[4861]: exec /usr/bin/kube-rbac-proxy \ Mar 10 18:49:33 crc kubenswrapper[4861]: --logtostderr \ Mar 10 18:49:33 crc kubenswrapper[4861]: --secure-listen-address=:9108 \ Mar 10 18:49:33 crc kubenswrapper[4861]: --tls-cipher-suites=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256 \ Mar 10 18:49:33 crc kubenswrapper[4861]: --upstream=http://127.0.0.1:29108/ \ Mar 10 18:49:33 crc kubenswrapper[4861]: --tls-private-key-file=${TLS_PK} \ Mar 10 18:49:33 crc kubenswrapper[4861]: --tls-cert-file=${TLS_CERT} Mar 10 18:49:33 crc kubenswrapper[4861]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:9108,ContainerPort:9108,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{20971520 0} {} 20Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:ovn-control-plane-metrics-cert,ReadOnly:true,MountPath:/etc/pki/tls/metrics-cert,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5vh2b,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovnkube-control-plane-749d76644c-rmmgv_openshift-ovn-kubernetes(2e4302e6-1187-4371-8523-a96ec8032e74): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Mar 10 18:49:33 crc kubenswrapper[4861]: > logger="UnhandledError" Mar 10 18:49:34 crc kubenswrapper[4861]: E0310 18:49:34.000502 4861 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 10 18:49:34 crc kubenswrapper[4861]: container &Container{Name:ovnkube-cluster-manager,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,Command:[/bin/bash -c set -xe Mar 10 18:49:34 crc kubenswrapper[4861]: if [[ -f "/env/_master" ]]; then Mar 10 18:49:34 crc kubenswrapper[4861]: set -o allexport Mar 10 18:49:34 crc kubenswrapper[4861]: source "/env/_master" Mar 10 18:49:34 crc kubenswrapper[4861]: set +o allexport Mar 10 18:49:34 crc kubenswrapper[4861]: fi Mar 10 18:49:34 crc kubenswrapper[4861]: Mar 10 18:49:34 crc kubenswrapper[4861]: ovn_v4_join_subnet_opt= Mar 10 18:49:34 crc kubenswrapper[4861]: if [[ "" != "" ]]; then Mar 10 18:49:34 crc kubenswrapper[4861]: ovn_v4_join_subnet_opt="--gateway-v4-join-subnet " Mar 10 
18:49:34 crc kubenswrapper[4861]: fi Mar 10 18:49:34 crc kubenswrapper[4861]: ovn_v6_join_subnet_opt= Mar 10 18:49:34 crc kubenswrapper[4861]: if [[ "" != "" ]]; then Mar 10 18:49:34 crc kubenswrapper[4861]: ovn_v6_join_subnet_opt="--gateway-v6-join-subnet " Mar 10 18:49:34 crc kubenswrapper[4861]: fi Mar 10 18:49:34 crc kubenswrapper[4861]: Mar 10 18:49:34 crc kubenswrapper[4861]: ovn_v4_transit_switch_subnet_opt= Mar 10 18:49:34 crc kubenswrapper[4861]: if [[ "" != "" ]]; then Mar 10 18:49:34 crc kubenswrapper[4861]: ovn_v4_transit_switch_subnet_opt="--cluster-manager-v4-transit-switch-subnet " Mar 10 18:49:34 crc kubenswrapper[4861]: fi Mar 10 18:49:34 crc kubenswrapper[4861]: ovn_v6_transit_switch_subnet_opt= Mar 10 18:49:34 crc kubenswrapper[4861]: if [[ "" != "" ]]; then Mar 10 18:49:34 crc kubenswrapper[4861]: ovn_v6_transit_switch_subnet_opt="--cluster-manager-v6-transit-switch-subnet " Mar 10 18:49:34 crc kubenswrapper[4861]: fi Mar 10 18:49:34 crc kubenswrapper[4861]: Mar 10 18:49:34 crc kubenswrapper[4861]: dns_name_resolver_enabled_flag= Mar 10 18:49:34 crc kubenswrapper[4861]: if [[ "false" == "true" ]]; then Mar 10 18:49:34 crc kubenswrapper[4861]: dns_name_resolver_enabled_flag="--enable-dns-name-resolver" Mar 10 18:49:34 crc kubenswrapper[4861]: fi Mar 10 18:49:34 crc kubenswrapper[4861]: Mar 10 18:49:34 crc kubenswrapper[4861]: persistent_ips_enabled_flag= Mar 10 18:49:34 crc kubenswrapper[4861]: if [[ "true" == "true" ]]; then Mar 10 18:49:34 crc kubenswrapper[4861]: persistent_ips_enabled_flag="--enable-persistent-ips" Mar 10 18:49:34 crc kubenswrapper[4861]: fi Mar 10 18:49:34 crc kubenswrapper[4861]: Mar 10 18:49:34 crc kubenswrapper[4861]: # This is needed so that converting clusters from GA to TP Mar 10 18:49:34 crc kubenswrapper[4861]: # will rollout control plane pods as well Mar 10 18:49:34 crc kubenswrapper[4861]: network_segmentation_enabled_flag= Mar 10 18:49:34 crc kubenswrapper[4861]: multi_network_enabled_flag= Mar 10 18:49:34 crc 
kubenswrapper[4861]: if [[ "true" == "true" ]]; then Mar 10 18:49:34 crc kubenswrapper[4861]: multi_network_enabled_flag="--enable-multi-network" Mar 10 18:49:34 crc kubenswrapper[4861]: network_segmentation_enabled_flag="--enable-network-segmentation" Mar 10 18:49:34 crc kubenswrapper[4861]: fi Mar 10 18:49:34 crc kubenswrapper[4861]: Mar 10 18:49:34 crc kubenswrapper[4861]: echo "I$(date "+%m%d %H:%M:%S.%N") - ovnkube-control-plane - start ovnkube --init-cluster-manager ${K8S_NODE}" Mar 10 18:49:34 crc kubenswrapper[4861]: exec /usr/bin/ovnkube \ Mar 10 18:49:34 crc kubenswrapper[4861]: --enable-interconnect \ Mar 10 18:49:34 crc kubenswrapper[4861]: --init-cluster-manager "${K8S_NODE}" \ Mar 10 18:49:34 crc kubenswrapper[4861]: --config-file=/run/ovnkube-config/ovnkube.conf \ Mar 10 18:49:34 crc kubenswrapper[4861]: --loglevel "${OVN_KUBE_LOG_LEVEL}" \ Mar 10 18:49:34 crc kubenswrapper[4861]: --metrics-bind-address "127.0.0.1:29108" \ Mar 10 18:49:34 crc kubenswrapper[4861]: --metrics-enable-pprof \ Mar 10 18:49:34 crc kubenswrapper[4861]: --metrics-enable-config-duration \ Mar 10 18:49:34 crc kubenswrapper[4861]: ${ovn_v4_join_subnet_opt} \ Mar 10 18:49:34 crc kubenswrapper[4861]: ${ovn_v6_join_subnet_opt} \ Mar 10 18:49:34 crc kubenswrapper[4861]: ${ovn_v4_transit_switch_subnet_opt} \ Mar 10 18:49:34 crc kubenswrapper[4861]: ${ovn_v6_transit_switch_subnet_opt} \ Mar 10 18:49:34 crc kubenswrapper[4861]: ${dns_name_resolver_enabled_flag} \ Mar 10 18:49:34 crc kubenswrapper[4861]: ${persistent_ips_enabled_flag} \ Mar 10 18:49:34 crc kubenswrapper[4861]: ${multi_network_enabled_flag} \ Mar 10 18:49:34 crc kubenswrapper[4861]: ${network_segmentation_enabled_flag} Mar 10 18:49:34 crc kubenswrapper[4861]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics-port,HostPort:29108,ContainerPort:29108,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OVN_KUBE_LOG_LEVEL,Value:4,ValueFrom:nil,},EnvVar{Name:K8S_NODE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{314572800 0} {} 300Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:ovnkube-config,ReadOnly:false,MountPath:/run/ovnkube-config/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5vh2b,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovnkube-control-plane-749d76644c-rmmgv_openshift-ovn-kubernetes(2e4302e6-1187-4371-8523-a96ec8032e74): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Mar 10 18:49:34 crc kubenswrapper[4861]: > logger="UnhandledError" Mar 10 18:49:34 crc kubenswrapper[4861]: E0310 18:49:34.001782 4861 pod_workers.go:1301] "Error syncing pod, 
skipping" err="[failed to \"StartContainer\" for \"kube-rbac-proxy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"ovnkube-cluster-manager\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rmmgv" podUID="2e4302e6-1187-4371-8523-a96ec8032e74" Mar 10 18:49:34 crc kubenswrapper[4861]: I0310 18:49:34.076419 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:49:34 crc kubenswrapper[4861]: I0310 18:49:34.076533 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:49:34 crc kubenswrapper[4861]: I0310 18:49:34.076575 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:49:34 crc kubenswrapper[4861]: I0310 18:49:34.076609 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 10 18:49:34 crc kubenswrapper[4861]: I0310 18:49:34.076633 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:49:34Z","lastTransitionTime":"2026-03-10T18:49:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 10 18:49:34 crc kubenswrapper[4861]: I0310 18:49:34.181447 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:49:34 crc kubenswrapper[4861]: I0310 18:49:34.181500 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:49:34 crc kubenswrapper[4861]: I0310 18:49:34.181518 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:49:34 crc kubenswrapper[4861]: I0310 18:49:34.181540 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 10 18:49:34 crc kubenswrapper[4861]: I0310 18:49:34.181557 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:49:34Z","lastTransitionTime":"2026-03-10T18:49:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 10 18:49:34 crc kubenswrapper[4861]: I0310 18:49:34.284813 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:49:34 crc kubenswrapper[4861]: I0310 18:49:34.284874 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:49:34 crc kubenswrapper[4861]: I0310 18:49:34.284891 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:49:34 crc kubenswrapper[4861]: I0310 18:49:34.284917 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 10 18:49:34 crc kubenswrapper[4861]: I0310 18:49:34.284934 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:49:34Z","lastTransitionTime":"2026-03-10T18:49:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 10 18:49:34 crc kubenswrapper[4861]: I0310 18:49:34.371695 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-2rvxn"] Mar 10 18:49:34 crc kubenswrapper[4861]: I0310 18:49:34.372372 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2rvxn" Mar 10 18:49:34 crc kubenswrapper[4861]: E0310 18:49:34.372460 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-2rvxn" podUID="c06e51d0-e817-41ac-9d69-3ef2099f8ba8" Mar 10 18:49:34 crc kubenswrapper[4861]: I0310 18:49:34.387729 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:49:34 crc kubenswrapper[4861]: I0310 18:49:34.387791 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:49:34 crc kubenswrapper[4861]: I0310 18:49:34.387810 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:49:34 crc kubenswrapper[4861]: I0310 18:49:34.387834 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 10 18:49:34 crc kubenswrapper[4861]: I0310 18:49:34.387852 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:49:34Z","lastTransitionTime":"2026-03-10T18:49:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 10 18:49:34 crc kubenswrapper[4861]: I0310 18:49:34.389442 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c65aa66-5db2-421b-ad46-0ff1e2b1cb22\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://52e87225b434b0800764a5c2306d8079c44bff105d02de78ab085b434f56031f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a8a6f58ea1d180f50a7ffde2b16f470901281a847bd85cc0bc8a62bbf9f8e70\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://484df0ad2e71b2faec0ed53537512b115e6d4ab7cd3212cd29309538bd013c51\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1665ca49c2c451e187b70bfc13ce0034d2c07b92943b18e77ae09cd6e5505557\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1665ca49c2c451e187b70bfc13ce0034d2c07b92943b18e77ae09cd6e5505557\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-10T18:48:50Z\\\",\\\"message\\\":\\\"le observer\\\\nW0310 18:48:50.587139 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0310 18:48:50.587315 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0310 18:48:50.588407 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-4176947974/tls.crt::/tmp/serving-cert-4176947974/tls.key\\\\\\\"\\\\nI0310 18:48:50.773986 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0310 18:48:50.776438 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0310 18:48:50.776455 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0310 18:48:50.776477 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0310 18:48:50.776482 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0310 18:48:50.783076 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0310 18:48:50.783088 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0310 18:48:50.783118 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0310 18:48:50.783134 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0310 18:48:50.783146 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0310 18:48:50.783157 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0310 18:48:50.783167 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0310 18:48:50.783173 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0310 18:48:50.784467 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-10T18:48:50Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44dde00a3ae562bbb5504d299475795cc38b22c2b6decba2ff15067bce7436df\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:39Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a14137bdfec242e37af20a572af2edea25fb1d8a1f9708f8d0193d1a7675b1bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a14137bdfec242e37af20a572af2edea25fb1d8a1f9708f8d0193d1a7675b1bc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:47:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-10T18:47:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:47:37Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 10 18:49:34 crc kubenswrapper[4861]: I0310 18:49:34.405294 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 10 18:49:34 crc kubenswrapper[4861]: I0310 18:49:34.420254 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 10 18:49:34 crc kubenswrapper[4861]: I0310 18:49:34.428426 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7cjz9\" (UniqueName: \"kubernetes.io/projected/c06e51d0-e817-41ac-9d69-3ef2099f8ba8-kube-api-access-7cjz9\") pod \"network-metrics-daemon-2rvxn\" (UID: \"c06e51d0-e817-41ac-9d69-3ef2099f8ba8\") " pod="openshift-multus/network-metrics-daemon-2rvxn" Mar 10 18:49:34 crc 
kubenswrapper[4861]: I0310 18:49:34.428494 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c06e51d0-e817-41ac-9d69-3ef2099f8ba8-metrics-certs\") pod \"network-metrics-daemon-2rvxn\" (UID: \"c06e51d0-e817-41ac-9d69-3ef2099f8ba8\") " pod="openshift-multus/network-metrics-daemon-2rvxn" Mar 10 18:49:34 crc kubenswrapper[4861]: I0310 18:49:34.440373 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/3.log" Mar 10 18:49:34 crc kubenswrapper[4861]: I0310 18:49:34.442306 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"b2dcc6ee4908d27fc63eb33149c6db9570b8524aab71a27fbb1e5c2fe4e97c52"} Mar 10 18:49:34 crc kubenswrapper[4861]: I0310 18:49:34.442865 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 10 18:49:34 crc kubenswrapper[4861]: I0310 18:49:34.443540 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rmmgv" event={"ID":"2e4302e6-1187-4371-8523-a96ec8032e74","Type":"ContainerStarted","Data":"d3a9a4d688f77dd39e59778874dffc567204dcb5115155a946ed004af804045d"} Mar 10 18:49:34 crc kubenswrapper[4861]: E0310 18:49:34.445515 4861 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 10 18:49:34 crc kubenswrapper[4861]: container &Container{Name:kube-rbac-proxy,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09,Command:[/bin/bash -c #!/bin/bash Mar 10 18:49:34 crc kubenswrapper[4861]: set -euo pipefail Mar 10 18:49:34 crc kubenswrapper[4861]: TLS_PK=/etc/pki/tls/metrics-cert/tls.key Mar 10 18:49:34 crc 
kubenswrapper[4861]: TLS_CERT=/etc/pki/tls/metrics-cert/tls.crt Mar 10 18:49:34 crc kubenswrapper[4861]: # As the secret mount is optional we must wait for the files to be present. Mar 10 18:49:34 crc kubenswrapper[4861]: # The service is created in monitor.yaml and this is created in sdn.yaml. Mar 10 18:49:34 crc kubenswrapper[4861]: TS=$(date +%s) Mar 10 18:49:34 crc kubenswrapper[4861]: WARN_TS=$(( ${TS} + $(( 20 * 60)) )) Mar 10 18:49:34 crc kubenswrapper[4861]: HAS_LOGGED_INFO=0 Mar 10 18:49:34 crc kubenswrapper[4861]: Mar 10 18:49:34 crc kubenswrapper[4861]: log_missing_certs(){ Mar 10 18:49:34 crc kubenswrapper[4861]: CUR_TS=$(date +%s) Mar 10 18:49:34 crc kubenswrapper[4861]: if [[ "${CUR_TS}" -gt "WARN_TS" ]]; then Mar 10 18:49:34 crc kubenswrapper[4861]: echo $(date -Iseconds) WARN: ovn-control-plane-metrics-cert not mounted after 20 minutes. Mar 10 18:49:34 crc kubenswrapper[4861]: elif [[ "${HAS_LOGGED_INFO}" -eq 0 ]] ; then Mar 10 18:49:34 crc kubenswrapper[4861]: echo $(date -Iseconds) INFO: ovn-control-plane-metrics-cert not mounted. Waiting 20 minutes. Mar 10 18:49:34 crc kubenswrapper[4861]: HAS_LOGGED_INFO=1 Mar 10 18:49:34 crc kubenswrapper[4861]: fi Mar 10 18:49:34 crc kubenswrapper[4861]: } Mar 10 18:49:34 crc kubenswrapper[4861]: while [[ ! -f "${TLS_PK}" || ! 
-f "${TLS_CERT}" ]] ; do Mar 10 18:49:34 crc kubenswrapper[4861]: log_missing_certs Mar 10 18:49:34 crc kubenswrapper[4861]: sleep 5 Mar 10 18:49:34 crc kubenswrapper[4861]: done Mar 10 18:49:34 crc kubenswrapper[4861]: Mar 10 18:49:34 crc kubenswrapper[4861]: echo $(date -Iseconds) INFO: ovn-control-plane-metrics-certs mounted, starting kube-rbac-proxy Mar 10 18:49:34 crc kubenswrapper[4861]: exec /usr/bin/kube-rbac-proxy \ Mar 10 18:49:34 crc kubenswrapper[4861]: --logtostderr \ Mar 10 18:49:34 crc kubenswrapper[4861]: --secure-listen-address=:9108 \ Mar 10 18:49:34 crc kubenswrapper[4861]: --tls-cipher-suites=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256 \ Mar 10 18:49:34 crc kubenswrapper[4861]: --upstream=http://127.0.0.1:29108/ \ Mar 10 18:49:34 crc kubenswrapper[4861]: --tls-private-key-file=${TLS_PK} \ Mar 10 18:49:34 crc kubenswrapper[4861]: --tls-cert-file=${TLS_CERT} Mar 10 18:49:34 crc kubenswrapper[4861]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:9108,ContainerPort:9108,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{20971520 0} {} 20Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:ovn-control-plane-metrics-cert,ReadOnly:true,MountPath:/etc/pki/tls/metrics-cert,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5vh2b,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovnkube-control-plane-749d76644c-rmmgv_openshift-ovn-kubernetes(2e4302e6-1187-4371-8523-a96ec8032e74): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Mar 10 18:49:34 crc kubenswrapper[4861]: > logger="UnhandledError" Mar 10 18:49:34 crc kubenswrapper[4861]: I0310 18:49:34.446369 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-s2l62" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"be820cd7-b3a7-4183-a408-67151247b6ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:22Z\\\",\\\"message\\\":\\\"containers with incomplete status: 
[kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:22Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:22Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwtwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":fals
e,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwtwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwtwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/v
ar/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwtwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwtwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-op
envswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwtwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvs
witch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwtwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwtwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"las
tState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwtwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:49:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-s2l62\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 10 18:49:34 crc kubenswrapper[4861]: E0310 18:49:34.448076 4861 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 10 18:49:34 crc kubenswrapper[4861]: container &Container{Name:ovnkube-cluster-manager,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,Command:[/bin/bash -c set -xe Mar 10 18:49:34 crc kubenswrapper[4861]: if [[ -f "/env/_master" ]]; then Mar 10 18:49:34 crc kubenswrapper[4861]: set -o allexport Mar 10 18:49:34 crc kubenswrapper[4861]: source "/env/_master" Mar 10 18:49:34 crc kubenswrapper[4861]: set +o allexport Mar 10 18:49:34 crc kubenswrapper[4861]: fi Mar 10 18:49:34 crc kubenswrapper[4861]: Mar 10 18:49:34 crc kubenswrapper[4861]: ovn_v4_join_subnet_opt= Mar 10 18:49:34 crc kubenswrapper[4861]: if [[ "" != "" ]]; then Mar 10 18:49:34 crc kubenswrapper[4861]: ovn_v4_join_subnet_opt="--gateway-v4-join-subnet " Mar 10 18:49:34 crc kubenswrapper[4861]: fi Mar 10 18:49:34 crc 
kubenswrapper[4861]: ovn_v6_join_subnet_opt= Mar 10 18:49:34 crc kubenswrapper[4861]: if [[ "" != "" ]]; then Mar 10 18:49:34 crc kubenswrapper[4861]: ovn_v6_join_subnet_opt="--gateway-v6-join-subnet " Mar 10 18:49:34 crc kubenswrapper[4861]: fi Mar 10 18:49:34 crc kubenswrapper[4861]: Mar 10 18:49:34 crc kubenswrapper[4861]: ovn_v4_transit_switch_subnet_opt= Mar 10 18:49:34 crc kubenswrapper[4861]: if [[ "" != "" ]]; then Mar 10 18:49:34 crc kubenswrapper[4861]: ovn_v4_transit_switch_subnet_opt="--cluster-manager-v4-transit-switch-subnet " Mar 10 18:49:34 crc kubenswrapper[4861]: fi Mar 10 18:49:34 crc kubenswrapper[4861]: ovn_v6_transit_switch_subnet_opt= Mar 10 18:49:34 crc kubenswrapper[4861]: if [[ "" != "" ]]; then Mar 10 18:49:34 crc kubenswrapper[4861]: ovn_v6_transit_switch_subnet_opt="--cluster-manager-v6-transit-switch-subnet " Mar 10 18:49:34 crc kubenswrapper[4861]: fi Mar 10 18:49:34 crc kubenswrapper[4861]: Mar 10 18:49:34 crc kubenswrapper[4861]: dns_name_resolver_enabled_flag= Mar 10 18:49:34 crc kubenswrapper[4861]: if [[ "false" == "true" ]]; then Mar 10 18:49:34 crc kubenswrapper[4861]: dns_name_resolver_enabled_flag="--enable-dns-name-resolver" Mar 10 18:49:34 crc kubenswrapper[4861]: fi Mar 10 18:49:34 crc kubenswrapper[4861]: Mar 10 18:49:34 crc kubenswrapper[4861]: persistent_ips_enabled_flag= Mar 10 18:49:34 crc kubenswrapper[4861]: if [[ "true" == "true" ]]; then Mar 10 18:49:34 crc kubenswrapper[4861]: persistent_ips_enabled_flag="--enable-persistent-ips" Mar 10 18:49:34 crc kubenswrapper[4861]: fi Mar 10 18:49:34 crc kubenswrapper[4861]: Mar 10 18:49:34 crc kubenswrapper[4861]: # This is needed so that converting clusters from GA to TP Mar 10 18:49:34 crc kubenswrapper[4861]: # will rollout control plane pods as well Mar 10 18:49:34 crc kubenswrapper[4861]: network_segmentation_enabled_flag= Mar 10 18:49:34 crc kubenswrapper[4861]: multi_network_enabled_flag= Mar 10 18:49:34 crc kubenswrapper[4861]: if [[ "true" == "true" ]]; then Mar 10 
18:49:34 crc kubenswrapper[4861]: multi_network_enabled_flag="--enable-multi-network" Mar 10 18:49:34 crc kubenswrapper[4861]: network_segmentation_enabled_flag="--enable-network-segmentation" Mar 10 18:49:34 crc kubenswrapper[4861]: fi Mar 10 18:49:34 crc kubenswrapper[4861]: Mar 10 18:49:34 crc kubenswrapper[4861]: echo "I$(date "+%m%d %H:%M:%S.%N") - ovnkube-control-plane - start ovnkube --init-cluster-manager ${K8S_NODE}" Mar 10 18:49:34 crc kubenswrapper[4861]: exec /usr/bin/ovnkube \ Mar 10 18:49:34 crc kubenswrapper[4861]: --enable-interconnect \ Mar 10 18:49:34 crc kubenswrapper[4861]: --init-cluster-manager "${K8S_NODE}" \ Mar 10 18:49:34 crc kubenswrapper[4861]: --config-file=/run/ovnkube-config/ovnkube.conf \ Mar 10 18:49:34 crc kubenswrapper[4861]: --loglevel "${OVN_KUBE_LOG_LEVEL}" \ Mar 10 18:49:34 crc kubenswrapper[4861]: --metrics-bind-address "127.0.0.1:29108" \ Mar 10 18:49:34 crc kubenswrapper[4861]: --metrics-enable-pprof \ Mar 10 18:49:34 crc kubenswrapper[4861]: --metrics-enable-config-duration \ Mar 10 18:49:34 crc kubenswrapper[4861]: ${ovn_v4_join_subnet_opt} \ Mar 10 18:49:34 crc kubenswrapper[4861]: ${ovn_v6_join_subnet_opt} \ Mar 10 18:49:34 crc kubenswrapper[4861]: ${ovn_v4_transit_switch_subnet_opt} \ Mar 10 18:49:34 crc kubenswrapper[4861]: ${ovn_v6_transit_switch_subnet_opt} \ Mar 10 18:49:34 crc kubenswrapper[4861]: ${dns_name_resolver_enabled_flag} \ Mar 10 18:49:34 crc kubenswrapper[4861]: ${persistent_ips_enabled_flag} \ Mar 10 18:49:34 crc kubenswrapper[4861]: ${multi_network_enabled_flag} \ Mar 10 18:49:34 crc kubenswrapper[4861]: ${network_segmentation_enabled_flag} Mar 10 18:49:34 crc kubenswrapper[4861]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics-port,HostPort:29108,ContainerPort:29108,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OVN_KUBE_LOG_LEVEL,Value:4,ValueFrom:nil,},EnvVar{Name:K8S_NODE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{314572800 0} {} 300Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:ovnkube-config,ReadOnly:false,MountPath:/run/ovnkube-config/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5vh2b,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovnkube-control-plane-749d76644c-rmmgv_openshift-ovn-kubernetes(2e4302e6-1187-4371-8523-a96ec8032e74): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Mar 10 18:49:34 crc kubenswrapper[4861]: > logger="UnhandledError" Mar 10 18:49:34 crc kubenswrapper[4861]: E0310 18:49:34.449273 4861 pod_workers.go:1301] "Error syncing pod, 
skipping" err="[failed to \"StartContainer\" for \"kube-rbac-proxy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"ovnkube-cluster-manager\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rmmgv" podUID="2e4302e6-1187-4371-8523-a96ec8032e74" Mar 10 18:49:34 crc kubenswrapper[4861]: I0310 18:49:34.457319 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-2rvxn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c06e51d0-e817-41ac-9d69-3ef2099f8ba8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7cjz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7cjz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:49:34Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-2rvxn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 10 18:49:34 crc kubenswrapper[4861]: I0310 18:49:34.470890 4861 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 10 18:49:34 crc kubenswrapper[4861]: I0310 18:49:34.483010 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 10 18:49:34 crc kubenswrapper[4861]: I0310 18:49:34.490769 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:49:34 crc 
kubenswrapper[4861]: I0310 18:49:34.490861 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:49:34 crc kubenswrapper[4861]: I0310 18:49:34.490880 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:49:34 crc kubenswrapper[4861]: I0310 18:49:34.490936 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 10 18:49:34 crc kubenswrapper[4861]: I0310 18:49:34.490955 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:49:34Z","lastTransitionTime":"2026-03-10T18:49:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 10 18:49:34 crc kubenswrapper[4861]: I0310 18:49:34.497112 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-6lblg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d1c251f4-6539-4aa1-8979-47e74495aca3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2gvk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:49:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-6lblg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 10 18:49:34 crc kubenswrapper[4861]: I0310 18:49:34.506263 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pzmsp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"66631e59-ce5c-44de-8a4d-37eb82acf997\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:27Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pm4t5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:49:27Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pzmsp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 10 18:49:34 crc kubenswrapper[4861]: I0310 18:49:34.516610 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rmmgv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e4302e6-1187-4371-8523-a96ec8032e74\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:33Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:33Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vh2b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vh2b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:49:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-rmmgv\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 10 18:49:34 crc kubenswrapper[4861]: I0310 18:49:34.530180 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c06e51d0-e817-41ac-9d69-3ef2099f8ba8-metrics-certs\") pod \"network-metrics-daemon-2rvxn\" (UID: \"c06e51d0-e817-41ac-9d69-3ef2099f8ba8\") " pod="openshift-multus/network-metrics-daemon-2rvxn" Mar 10 18:49:34 crc kubenswrapper[4861]: I0310 18:49:34.530234 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7cjz9\" (UniqueName: \"kubernetes.io/projected/c06e51d0-e817-41ac-9d69-3ef2099f8ba8-kube-api-access-7cjz9\") pod \"network-metrics-daemon-2rvxn\" (UID: \"c06e51d0-e817-41ac-9d69-3ef2099f8ba8\") " pod="openshift-multus/network-metrics-daemon-2rvxn" Mar 10 18:49:34 crc kubenswrapper[4861]: E0310 18:49:34.530414 4861 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Mar 10 18:49:34 crc kubenswrapper[4861]: E0310 18:49:34.530498 4861 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c06e51d0-e817-41ac-9d69-3ef2099f8ba8-metrics-certs podName:c06e51d0-e817-41ac-9d69-3ef2099f8ba8 nodeName:}" failed. No retries permitted until 2026-03-10 18:49:35.03048057 +0000 UTC m=+118.793916530 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/c06e51d0-e817-41ac-9d69-3ef2099f8ba8-metrics-certs") pod "network-metrics-daemon-2rvxn" (UID: "c06e51d0-e817-41ac-9d69-3ef2099f8ba8") : object "openshift-multus"/"metrics-daemon-secret" not registered Mar 10 18:49:34 crc kubenswrapper[4861]: I0310 18:49:34.545015 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"952bf490-6587-4240-a832-feac082e7775\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf6fe1422055e59455ca71c1a22213f55312c80667526cf13dbf61f7ccea7c75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernete
s/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd7523ddf0ac38e3b59b767d7ef95f452f24c5914734d6f1ac0187f1bede5fcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b1d8f9a97293ae86fe0b8c9ab76600caf04291ec3ddd2e47a071805bef675da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://efd503
9046658f4b595c747b6434cb0ef3befbf75874a6dab629cdbd6634524e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6234508294e7f87d3a8da0d3a1d8ecb96fffc3dbd5974e40a3bb7ceeee0d2a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34f62205737b2bb279cc7a0e2ffee213392c8135cca742b76e6f7ed44ddca754\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6
a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34f62205737b2bb279cc7a0e2ffee213392c8135cca742b76e6f7ed44ddca754\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:47:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-10T18:47:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://49cc08d4ed6e0fd87f4c957414e510f99511b9654a72324b707c165206c4979e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49cc08d4ed6e0fd87f4c957414e510f99511b9654a72324b707c165206c4979e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:47:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-10T18:47:39Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0995183765c106ba8369d3c05ac572596329eb1978876b770e327a430963ba9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\
\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0995183765c106ba8369d3c05ac572596329eb1978876b770e327a430963ba9b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:47:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-10T18:47:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:47:37Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 10 18:49:34 crc kubenswrapper[4861]: I0310 18:49:34.561028 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 10 18:49:34 crc kubenswrapper[4861]: I0310 18:49:34.561442 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7cjz9\" (UniqueName: \"kubernetes.io/projected/c06e51d0-e817-41ac-9d69-3ef2099f8ba8-kube-api-access-7cjz9\") pod \"network-metrics-daemon-2rvxn\" (UID: \"c06e51d0-e817-41ac-9d69-3ef2099f8ba8\") " pod="openshift-multus/network-metrics-daemon-2rvxn" Mar 10 18:49:34 crc kubenswrapper[4861]: I0310 18:49:34.573226 4861 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-qttbr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"771189c2-452d-4204-a0b7-abfe9ba62bd0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:21Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:21Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tng72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tng72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:49:21Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qttbr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 10 18:49:34 crc kubenswrapper[4861]: I0310 18:49:34.589620 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-j2s27" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"391f4bfa-b94c-4b25-8f06-a2f19f912194\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:21Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlddg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlddg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlddg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\
\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlddg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlddg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\"
:\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlddg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlddg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:49:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-j2s27\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 10 18:49:34 crc kubenswrapper[4861]: I0310 18:49:34.593633 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:49:34 crc kubenswrapper[4861]: I0310 18:49:34.593682 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:49:34 crc kubenswrapper[4861]: I0310 18:49:34.593700 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:49:34 crc kubenswrapper[4861]: I0310 18:49:34.593763 4861 kubelet_node_status.go:724] "Recording event message 
for node" node="crc" event="NodeNotReady" Mar 10 18:49:34 crc kubenswrapper[4861]: I0310 18:49:34.593788 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:49:34Z","lastTransitionTime":"2026-03-10T18:49:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 10 18:49:34 crc kubenswrapper[4861]: I0310 18:49:34.599551 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod 
was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 10 18:49:34 crc kubenswrapper[4861]: I0310 18:49:34.606893 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-b87lw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e474cdc9-b374-49a6-aece-afa19f8d5ee6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:21Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5t7pf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:49:21Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-b87lw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 10 18:49:34 crc kubenswrapper[4861]: I0310 18:49:34.616170 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 10 18:49:34 crc kubenswrapper[4861]: I0310 18:49:34.624130 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-b87lw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e474cdc9-b374-49a6-aece-afa19f8d5ee6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:21Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5t7pf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:49:21Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-b87lw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 10 18:49:34 crc kubenswrapper[4861]: I0310 18:49:34.631303 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-2rvxn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c06e51d0-e817-41ac-9d69-3ef2099f8ba8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7cjz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7cjz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:49:34Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-2rvxn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 10 18:49:34 crc kubenswrapper[4861]: I0310 18:49:34.644746 4861 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c65aa66-5db2-421b-ad46-0ff1e2b1cb22\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://52e87225b434b0800764a5c2306d8079c44bff105d02de78ab085b434f56031f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"
name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a8a6f58ea1d180f50a7ffde2b16f470901281a847bd85cc0bc8a62bbf9f8e70\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://484df0ad2e71b2faec0ed53537512b115e6d4ab7cd3212cd29309538bd013c51\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2dcc6ee4908d27fc63eb33149c6db9570b8524aab71a27fbb1e5c2fe4e97c52\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"qua
y.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1665ca49c2c451e187b70bfc13ce0034d2c07b92943b18e77ae09cd6e5505557\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-10T18:48:50Z\\\",\\\"message\\\":\\\"le observer\\\\nW0310 18:48:50.587139 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0310 18:48:50.587315 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0310 18:48:50.588407 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4176947974/tls.crt::/tmp/serving-cert-4176947974/tls.key\\\\\\\"\\\\nI0310 18:48:50.773986 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0310 18:48:50.776438 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0310 18:48:50.776455 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0310 18:48:50.776477 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0310 18:48:50.776482 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0310 18:48:50.783076 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0310 18:48:50.783088 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0310 18:48:50.783118 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0310 18:48:50.783134 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0310 18:48:50.783146 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0310 18:48:50.783157 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0310 18:48:50.783167 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0310 18:48:50.783173 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0310 18:48:50.784467 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-10T18:48:50Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44dde00a3ae562bbb5504d299475795cc38b22c2b6decba2ff15067bce7436df\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:39Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a14137bdfec242e37af20a572af2edea25fb1d8a1f9708f8d0193d1a7675b1bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imag
eID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a14137bdfec242e37af20a572af2edea25fb1d8a1f9708f8d0193d1a7675b1bc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:47:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-10T18:47:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:47:37Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 10 18:49:34 crc kubenswrapper[4861]: I0310 18:49:34.656230 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"message\\\":\\\"containers with 
unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 10 18:49:34 crc kubenswrapper[4861]: I0310 18:49:34.667379 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 10 18:49:34 crc kubenswrapper[4861]: I0310 18:49:34.687324 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-s2l62" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"be820cd7-b3a7-4183-a408-67151247b6ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:22Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:22Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging 
kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:22Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwtwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\
"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwtwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwtwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwtwj\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwtwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\
\"name\\\":\\\"kube-api-access-fwtwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":
\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwtwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwtwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwtwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:49:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-s2l62\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 10 18:49:34 crc kubenswrapper[4861]: I0310 18:49:34.697164 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:49:34 crc kubenswrapper[4861]: I0310 18:49:34.697244 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:49:34 crc kubenswrapper[4861]: I0310 18:49:34.697268 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:49:34 crc kubenswrapper[4861]: I0310 18:49:34.697298 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 10 18:49:34 crc kubenswrapper[4861]: I0310 18:49:34.697323 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:49:34Z","lastTransitionTime":"2026-03-10T18:49:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 10 18:49:34 crc kubenswrapper[4861]: I0310 18:49:34.702188 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 10 18:49:34 crc kubenswrapper[4861]: I0310 18:49:34.716019 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 10 18:49:34 crc kubenswrapper[4861]: I0310 18:49:34.731076 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-6lblg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d1c251f4-6539-4aa1-8979-47e74495aca3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2gvk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:49:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-6lblg\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 10 18:49:34 crc kubenswrapper[4861]: I0310 18:49:34.741260 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pzmsp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"66631e59-ce5c-44de-8a4d-37eb82acf997\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:27Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:27Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pm4t5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:49:27Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pzmsp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 10 18:49:34 crc kubenswrapper[4861]: I0310 18:49:34.753580 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rmmgv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e4302e6-1187-4371-8523-a96ec8032e74\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:33Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:33Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vh2b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vh2b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:49:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-rmmgv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 10 18:49:34 crc kubenswrapper[4861]: I0310 18:49:34.775869 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"952bf490-6587-4240-a832-feac082e7775\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf6fe1422055e59455ca71c1a22213f55312c80667526cf13dbf61f7ccea7c75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd7523ddf0ac38e3b59b767d7ef95f452f24c5914734d6f1ac0187f1bede5fcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b1d8f9a97293ae86fe0b8c9ab76600caf04291ec3ddd2e47a071805bef675da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://efd5039046658f4b595c747b6434cb0ef3befbf75874a6dab629cdbd6634524e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6234508294e7f87d3a8da0d3a1d8ecb96fffc3dbd5974e40a3bb7ceeee0d2a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34f62205737b2bb279cc7a0e2ffee213392c8135cca742b76e6f7ed44ddca754\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34f62205737b2bb279cc7a0e2ffee213392c8135cca742b76e6f7ed44ddca754\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-03-10T18:47:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-10T18:47:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://49cc08d4ed6e0fd87f4c957414e510f99511b9654a72324b707c165206c4979e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49cc08d4ed6e0fd87f4c957414e510f99511b9654a72324b707c165206c4979e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:47:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-10T18:47:39Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0995183765c106ba8369d3c05ac572596329eb1978876b770e327a430963ba9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0995183765c106ba8369d3c05ac572596329eb1978876b770e327a430963ba9b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:47:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-10T18:47:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:47:37Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 10 18:49:34 crc kubenswrapper[4861]: I0310 18:49:34.789403 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 10 18:49:34 crc kubenswrapper[4861]: I0310 18:49:34.799945 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:49:34 crc kubenswrapper[4861]: I0310 18:49:34.799982 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:49:34 crc kubenswrapper[4861]: I0310 18:49:34.799993 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:49:34 crc kubenswrapper[4861]: I0310 18:49:34.800009 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 10 18:49:34 crc kubenswrapper[4861]: I0310 18:49:34.800021 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:49:34Z","lastTransitionTime":"2026-03-10T18:49:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 10 18:49:34 crc kubenswrapper[4861]: I0310 18:49:34.803851 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qttbr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"771189c2-452d-4204-a0b7-abfe9ba62bd0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:21Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:21Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read 
at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tng72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tng72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:49:21Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qttbr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 10 18:49:34 crc kubenswrapper[4861]: I0310 18:49:34.820685 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-j2s27" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"391f4bfa-b94c-4b25-8f06-a2f19f912194\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:21Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlddg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlddg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlddg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\
\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlddg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlddg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\"
:\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlddg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlddg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:49:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-j2s27\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 10 18:49:34 crc kubenswrapper[4861]: I0310 18:49:34.849890 4861 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Mar 10 18:49:34 crc kubenswrapper[4861]: I0310 18:49:34.902651 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:49:34 crc kubenswrapper[4861]: I0310 18:49:34.902700 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:49:34 crc kubenswrapper[4861]: I0310 18:49:34.902732 4861 kubelet_node_status.go:724] "Recording event message for 
node" node="crc" event="NodeHasSufficientPID" Mar 10 18:49:34 crc kubenswrapper[4861]: I0310 18:49:34.902749 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 10 18:49:34 crc kubenswrapper[4861]: I0310 18:49:34.902762 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:49:34Z","lastTransitionTime":"2026-03-10T18:49:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 10 18:49:35 crc kubenswrapper[4861]: I0310 18:49:35.005257 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:49:35 crc kubenswrapper[4861]: I0310 18:49:35.005309 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:49:35 crc kubenswrapper[4861]: I0310 18:49:35.005321 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:49:35 crc kubenswrapper[4861]: I0310 18:49:35.005339 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 10 18:49:35 crc kubenswrapper[4861]: I0310 18:49:35.005352 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:49:35Z","lastTransitionTime":"2026-03-10T18:49:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 10 18:49:35 crc kubenswrapper[4861]: I0310 18:49:35.047191 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c06e51d0-e817-41ac-9d69-3ef2099f8ba8-metrics-certs\") pod \"network-metrics-daemon-2rvxn\" (UID: \"c06e51d0-e817-41ac-9d69-3ef2099f8ba8\") " pod="openshift-multus/network-metrics-daemon-2rvxn" Mar 10 18:49:35 crc kubenswrapper[4861]: E0310 18:49:35.047425 4861 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Mar 10 18:49:35 crc kubenswrapper[4861]: E0310 18:49:35.047506 4861 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c06e51d0-e817-41ac-9d69-3ef2099f8ba8-metrics-certs podName:c06e51d0-e817-41ac-9d69-3ef2099f8ba8 nodeName:}" failed. No retries permitted until 2026-03-10 18:49:36.047484133 +0000 UTC m=+119.810920193 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/c06e51d0-e817-41ac-9d69-3ef2099f8ba8-metrics-certs") pod "network-metrics-daemon-2rvxn" (UID: "c06e51d0-e817-41ac-9d69-3ef2099f8ba8") : object "openshift-multus"/"metrics-daemon-secret" not registered Mar 10 18:49:35 crc kubenswrapper[4861]: I0310 18:49:35.109226 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:49:35 crc kubenswrapper[4861]: I0310 18:49:35.109290 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:49:35 crc kubenswrapper[4861]: I0310 18:49:35.109308 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:49:35 crc kubenswrapper[4861]: I0310 18:49:35.109374 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 10 18:49:35 crc kubenswrapper[4861]: I0310 18:49:35.109395 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:49:35Z","lastTransitionTime":"2026-03-10T18:49:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 10 18:49:35 crc kubenswrapper[4861]: I0310 18:49:35.177085 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:49:35 crc kubenswrapper[4861]: I0310 18:49:35.177149 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:49:35 crc kubenswrapper[4861]: I0310 18:49:35.177168 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:49:35 crc kubenswrapper[4861]: I0310 18:49:35.177193 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 10 18:49:35 crc kubenswrapper[4861]: I0310 18:49:35.177210 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:49:35Z","lastTransitionTime":"2026-03-10T18:49:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 10 18:49:35 crc kubenswrapper[4861]: E0310 18:49:35.212938 4861 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-10T18:49:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-10T18:49:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:35Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-10T18:49:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-10T18:49:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:35Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"19532032-9073-404f-bda8-c4343aa30670\\\",\\\"systemUUID\\\":\\\"b4ef8d49-23f5-4cae-bbac-08586c607b9d\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 10 18:49:35 crc kubenswrapper[4861]: I0310 18:49:35.221394 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:49:35 crc kubenswrapper[4861]: I0310 18:49:35.221459 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:49:35 crc kubenswrapper[4861]: I0310 18:49:35.221478 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:49:35 crc kubenswrapper[4861]: I0310 18:49:35.221508 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 10 18:49:35 crc kubenswrapper[4861]: I0310 18:49:35.221529 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:49:35Z","lastTransitionTime":"2026-03-10T18:49:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 10 18:49:35 crc kubenswrapper[4861]: E0310 18:49:35.248058 4861 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-10T18:49:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-10T18:49:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:35Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-10T18:49:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-10T18:49:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:35Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"19532032-9073-404f-bda8-c4343aa30670\\\",\\\"systemUUID\\\":\\\"b4ef8d49-23f5-4cae-bbac-08586c607b9d\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 10 18:49:35 crc kubenswrapper[4861]: I0310 18:49:35.252451 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:49:35 crc kubenswrapper[4861]: I0310 18:49:35.252507 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:49:35 crc kubenswrapper[4861]: I0310 18:49:35.252522 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:49:35 crc kubenswrapper[4861]: I0310 18:49:35.252539 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 10 18:49:35 crc kubenswrapper[4861]: I0310 18:49:35.252551 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:49:35Z","lastTransitionTime":"2026-03-10T18:49:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 10 18:49:35 crc kubenswrapper[4861]: E0310 18:49:35.262100 4861 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-10T18:49:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-10T18:49:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:35Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-10T18:49:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-10T18:49:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:35Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"19532032-9073-404f-bda8-c4343aa30670\\\",\\\"systemUUID\\\":\\\"b4ef8d49-23f5-4cae-bbac-08586c607b9d\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 10 18:49:35 crc kubenswrapper[4861]: I0310 18:49:35.266096 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:49:35 crc kubenswrapper[4861]: I0310 18:49:35.266140 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:49:35 crc kubenswrapper[4861]: I0310 18:49:35.266156 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:49:35 crc kubenswrapper[4861]: I0310 18:49:35.266178 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 10 18:49:35 crc kubenswrapper[4861]: I0310 18:49:35.266192 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:49:35Z","lastTransitionTime":"2026-03-10T18:49:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 10 18:49:35 crc kubenswrapper[4861]: E0310 18:49:35.278648 4861 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-10T18:49:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-10T18:49:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:35Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-10T18:49:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-10T18:49:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:35Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"19532032-9073-404f-bda8-c4343aa30670\\\",\\\"systemUUID\\\":\\\"b4ef8d49-23f5-4cae-bbac-08586c607b9d\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 10 18:49:35 crc kubenswrapper[4861]: I0310 18:49:35.282076 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:49:35 crc kubenswrapper[4861]: I0310 18:49:35.282099 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:49:35 crc kubenswrapper[4861]: I0310 18:49:35.282129 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:49:35 crc kubenswrapper[4861]: I0310 18:49:35.282146 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 10 18:49:35 crc kubenswrapper[4861]: I0310 18:49:35.282159 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:49:35Z","lastTransitionTime":"2026-03-10T18:49:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 10 18:49:35 crc kubenswrapper[4861]: E0310 18:49:35.290867 4861 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-10T18:49:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-10T18:49:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:35Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-10T18:49:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-10T18:49:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:35Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"19532032-9073-404f-bda8-c4343aa30670\\\",\\\"systemUUID\\\":\\\"b4ef8d49-23f5-4cae-bbac-08586c607b9d\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 10 18:49:35 crc kubenswrapper[4861]: E0310 18:49:35.291015 4861 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Mar 10 18:49:35 crc kubenswrapper[4861]: I0310 18:49:35.292230 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:49:35 crc kubenswrapper[4861]: I0310 18:49:35.292279 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:49:35 crc kubenswrapper[4861]: I0310 18:49:35.292292 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:49:35 crc kubenswrapper[4861]: I0310 18:49:35.292307 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 10 18:49:35 crc kubenswrapper[4861]: I0310 18:49:35.292316 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:49:35Z","lastTransitionTime":"2026-03-10T18:49:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 10 18:49:35 crc kubenswrapper[4861]: I0310 18:49:35.395051 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:49:35 crc kubenswrapper[4861]: I0310 18:49:35.395097 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:49:35 crc kubenswrapper[4861]: I0310 18:49:35.395114 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:49:35 crc kubenswrapper[4861]: I0310 18:49:35.395157 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 10 18:49:35 crc kubenswrapper[4861]: I0310 18:49:35.395175 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:49:35Z","lastTransitionTime":"2026-03-10T18:49:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 10 18:49:35 crc kubenswrapper[4861]: I0310 18:49:35.498092 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:49:35 crc kubenswrapper[4861]: I0310 18:49:35.498132 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:49:35 crc kubenswrapper[4861]: I0310 18:49:35.498149 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:49:35 crc kubenswrapper[4861]: I0310 18:49:35.498169 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 10 18:49:35 crc kubenswrapper[4861]: I0310 18:49:35.498185 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:49:35Z","lastTransitionTime":"2026-03-10T18:49:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 10 18:49:35 crc kubenswrapper[4861]: I0310 18:49:35.602809 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:49:35 crc kubenswrapper[4861]: I0310 18:49:35.602858 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:49:35 crc kubenswrapper[4861]: I0310 18:49:35.602875 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:49:35 crc kubenswrapper[4861]: I0310 18:49:35.602897 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 10 18:49:35 crc kubenswrapper[4861]: I0310 18:49:35.602911 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:49:35Z","lastTransitionTime":"2026-03-10T18:49:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 10 18:49:35 crc kubenswrapper[4861]: I0310 18:49:35.706282 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:49:35 crc kubenswrapper[4861]: I0310 18:49:35.706327 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:49:35 crc kubenswrapper[4861]: I0310 18:49:35.706343 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:49:35 crc kubenswrapper[4861]: I0310 18:49:35.706365 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 10 18:49:35 crc kubenswrapper[4861]: I0310 18:49:35.706382 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:49:35Z","lastTransitionTime":"2026-03-10T18:49:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 10 18:49:35 crc kubenswrapper[4861]: I0310 18:49:35.809326 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:49:35 crc kubenswrapper[4861]: I0310 18:49:35.809425 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:49:35 crc kubenswrapper[4861]: I0310 18:49:35.809445 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:49:35 crc kubenswrapper[4861]: I0310 18:49:35.809469 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 10 18:49:35 crc kubenswrapper[4861]: I0310 18:49:35.809487 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:49:35Z","lastTransitionTime":"2026-03-10T18:49:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 10 18:49:35 crc kubenswrapper[4861]: I0310 18:49:35.912846 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:49:35 crc kubenswrapper[4861]: I0310 18:49:35.912893 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:49:35 crc kubenswrapper[4861]: I0310 18:49:35.912909 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:49:35 crc kubenswrapper[4861]: I0310 18:49:35.912931 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 10 18:49:35 crc kubenswrapper[4861]: I0310 18:49:35.912971 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:49:35Z","lastTransitionTime":"2026-03-10T18:49:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 10 18:49:35 crc kubenswrapper[4861]: I0310 18:49:35.958677 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 10 18:49:35 crc kubenswrapper[4861]: I0310 18:49:35.958745 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 10 18:49:35 crc kubenswrapper[4861]: I0310 18:49:35.958757 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 10 18:49:35 crc kubenswrapper[4861]: I0310 18:49:35.958882 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-2rvxn" Mar 10 18:49:35 crc kubenswrapper[4861]: E0310 18:49:35.959075 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 10 18:49:35 crc kubenswrapper[4861]: E0310 18:49:35.959236 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 10 18:49:35 crc kubenswrapper[4861]: E0310 18:49:35.959279 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2rvxn" podUID="c06e51d0-e817-41ac-9d69-3ef2099f8ba8" Mar 10 18:49:35 crc kubenswrapper[4861]: E0310 18:49:35.959374 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 10 18:49:36 crc kubenswrapper[4861]: I0310 18:49:36.016927 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:49:36 crc kubenswrapper[4861]: I0310 18:49:36.016972 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:49:36 crc kubenswrapper[4861]: I0310 18:49:36.016988 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:49:36 crc kubenswrapper[4861]: I0310 18:49:36.017009 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 10 18:49:36 crc kubenswrapper[4861]: I0310 18:49:36.017026 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:49:36Z","lastTransitionTime":"2026-03-10T18:49:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 10 18:49:36 crc kubenswrapper[4861]: I0310 18:49:36.059598 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c06e51d0-e817-41ac-9d69-3ef2099f8ba8-metrics-certs\") pod \"network-metrics-daemon-2rvxn\" (UID: \"c06e51d0-e817-41ac-9d69-3ef2099f8ba8\") " pod="openshift-multus/network-metrics-daemon-2rvxn" Mar 10 18:49:36 crc kubenswrapper[4861]: E0310 18:49:36.059863 4861 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Mar 10 18:49:36 crc kubenswrapper[4861]: E0310 18:49:36.059939 4861 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c06e51d0-e817-41ac-9d69-3ef2099f8ba8-metrics-certs podName:c06e51d0-e817-41ac-9d69-3ef2099f8ba8 nodeName:}" failed. No retries permitted until 2026-03-10 18:49:38.059918815 +0000 UTC m=+121.823354805 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/c06e51d0-e817-41ac-9d69-3ef2099f8ba8-metrics-certs") pod "network-metrics-daemon-2rvxn" (UID: "c06e51d0-e817-41ac-9d69-3ef2099f8ba8") : object "openshift-multus"/"metrics-daemon-secret" not registered Mar 10 18:49:36 crc kubenswrapper[4861]: I0310 18:49:36.119252 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:49:36 crc kubenswrapper[4861]: I0310 18:49:36.119288 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:49:36 crc kubenswrapper[4861]: I0310 18:49:36.119298 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:49:36 crc kubenswrapper[4861]: I0310 18:49:36.119313 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 10 18:49:36 crc kubenswrapper[4861]: I0310 18:49:36.119324 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:49:36Z","lastTransitionTime":"2026-03-10T18:49:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 10 18:49:36 crc kubenswrapper[4861]: I0310 18:49:36.221849 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:49:36 crc kubenswrapper[4861]: I0310 18:49:36.222484 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:49:36 crc kubenswrapper[4861]: I0310 18:49:36.222511 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:49:36 crc kubenswrapper[4861]: I0310 18:49:36.222535 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 10 18:49:36 crc kubenswrapper[4861]: I0310 18:49:36.222571 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:49:36Z","lastTransitionTime":"2026-03-10T18:49:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 10 18:49:36 crc kubenswrapper[4861]: I0310 18:49:36.326143 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:49:36 crc kubenswrapper[4861]: I0310 18:49:36.326197 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:49:36 crc kubenswrapper[4861]: I0310 18:49:36.326215 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:49:36 crc kubenswrapper[4861]: I0310 18:49:36.326237 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 10 18:49:36 crc kubenswrapper[4861]: I0310 18:49:36.326255 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:49:36Z","lastTransitionTime":"2026-03-10T18:49:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 10 18:49:36 crc kubenswrapper[4861]: I0310 18:49:36.429408 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:49:36 crc kubenswrapper[4861]: I0310 18:49:36.429453 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:49:36 crc kubenswrapper[4861]: I0310 18:49:36.429469 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:49:36 crc kubenswrapper[4861]: I0310 18:49:36.429492 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 10 18:49:36 crc kubenswrapper[4861]: I0310 18:49:36.429508 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:49:36Z","lastTransitionTime":"2026-03-10T18:49:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 10 18:49:36 crc kubenswrapper[4861]: I0310 18:49:36.451533 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-j2s27" event={"ID":"391f4bfa-b94c-4b25-8f06-a2f19f912194","Type":"ContainerStarted","Data":"02cd3948c6ba0e290e32d64b2390da20510d66f20771b9bb0b98957e8900b124"} Mar 10 18:49:36 crc kubenswrapper[4861]: I0310 18:49:36.474155 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-2rvxn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c06e51d0-e817-41ac-9d69-3ef2099f8ba8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7cjz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7cjz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:49:34Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-2rvxn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 10 18:49:36 crc kubenswrapper[4861]: I0310 18:49:36.491004 4861 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c65aa66-5db2-421b-ad46-0ff1e2b1cb22\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://52e87225b434b0800764a5c2306d8079c44bff105d02de78ab085b434f56031f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"
name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a8a6f58ea1d180f50a7ffde2b16f470901281a847bd85cc0bc8a62bbf9f8e70\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://484df0ad2e71b2faec0ed53537512b115e6d4ab7cd3212cd29309538bd013c51\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2dcc6ee4908d27fc63eb33149c6db9570b8524aab71a27fbb1e5c2fe4e97c52\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"qua
y.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1665ca49c2c451e187b70bfc13ce0034d2c07b92943b18e77ae09cd6e5505557\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-10T18:48:50Z\\\",\\\"message\\\":\\\"le observer\\\\nW0310 18:48:50.587139 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0310 18:48:50.587315 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0310 18:48:50.588407 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4176947974/tls.crt::/tmp/serving-cert-4176947974/tls.key\\\\\\\"\\\\nI0310 18:48:50.773986 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0310 18:48:50.776438 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0310 18:48:50.776455 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0310 18:48:50.776477 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0310 18:48:50.776482 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0310 18:48:50.783076 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0310 18:48:50.783088 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0310 18:48:50.783118 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0310 18:48:50.783134 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0310 18:48:50.783146 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0310 18:48:50.783157 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0310 18:48:50.783167 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0310 18:48:50.783173 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0310 18:48:50.784467 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-10T18:48:50Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44dde00a3ae562bbb5504d299475795cc38b22c2b6decba2ff15067bce7436df\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:39Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a14137bdfec242e37af20a572af2edea25fb1d8a1f9708f8d0193d1a7675b1bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imag
eID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a14137bdfec242e37af20a572af2edea25fb1d8a1f9708f8d0193d1a7675b1bc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:47:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-10T18:47:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:47:37Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 10 18:49:36 crc kubenswrapper[4861]: I0310 18:49:36.502982 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"message\\\":\\\"containers with 
unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 10 18:49:36 crc kubenswrapper[4861]: I0310 18:49:36.516971 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 10 18:49:36 crc kubenswrapper[4861]: I0310 18:49:36.531760 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:49:36 crc kubenswrapper[4861]: I0310 18:49:36.531829 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:49:36 crc kubenswrapper[4861]: I0310 18:49:36.531855 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:49:36 crc kubenswrapper[4861]: I0310 18:49:36.531887 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 10 18:49:36 crc kubenswrapper[4861]: I0310 18:49:36.531912 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:49:36Z","lastTransitionTime":"2026-03-10T18:49:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 10 18:49:36 crc kubenswrapper[4861]: I0310 18:49:36.533703 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-s2l62" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"be820cd7-b3a7-4183-a408-67151247b6ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:22Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:22Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:22Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwtwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwtwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwtwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwtwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwtwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwtwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwtwj\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwtwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwtwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:49:22Z\\\"}}\" for 
pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-s2l62\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 10 18:49:36 crc kubenswrapper[4861]: I0310 18:49:36.549016 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 10 18:49:36 crc kubenswrapper[4861]: I0310 18:49:36.561445 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 10 18:49:36 crc kubenswrapper[4861]: I0310 18:49:36.571534 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-6lblg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d1c251f4-6539-4aa1-8979-47e74495aca3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2gvk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:49:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-6lblg\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 10 18:49:36 crc kubenswrapper[4861]: I0310 18:49:36.580089 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pzmsp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"66631e59-ce5c-44de-8a4d-37eb82acf997\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:27Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:27Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pm4t5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:49:27Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pzmsp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 10 18:49:36 crc kubenswrapper[4861]: I0310 18:49:36.593258 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rmmgv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e4302e6-1187-4371-8523-a96ec8032e74\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:33Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:33Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vh2b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vh2b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:49:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-rmmgv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 10 18:49:36 crc kubenswrapper[4861]: I0310 18:49:36.625177 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"952bf490-6587-4240-a832-feac082e7775\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf6fe1422055e59455ca71c1a22213f55312c80667526cf13dbf61f7ccea7c75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd7523ddf0ac38e3b59b767d7ef95f452f24c5914734d6f1ac0187f1bede5fcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b1d8f9a97293ae86fe0b8c9ab76600caf04291ec3ddd2e47a071805bef675da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://efd5039046658f4b595c747b6434cb0ef3befbf75874a6dab629cdbd6634524e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6234508294e7f87d3a8da0d3a1d8ecb96fffc3dbd5974e40a3bb7ceeee0d2a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34f62205737b2bb279cc7a0e2ffee213392c8135cca742b76e6f7ed44ddca754\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34f62205737b2bb279cc7a0e2ffee213392c8135cca742b76e6f7ed44ddca754\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-03-10T18:47:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-10T18:47:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://49cc08d4ed6e0fd87f4c957414e510f99511b9654a72324b707c165206c4979e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49cc08d4ed6e0fd87f4c957414e510f99511b9654a72324b707c165206c4979e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:47:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-10T18:47:39Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0995183765c106ba8369d3c05ac572596329eb1978876b770e327a430963ba9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0995183765c106ba8369d3c05ac572596329eb1978876b770e327a430963ba9b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:47:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-10T18:47:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:47:37Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 10 18:49:36 crc kubenswrapper[4861]: I0310 18:49:36.634827 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:49:36 crc kubenswrapper[4861]: I0310 18:49:36.634876 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:49:36 crc kubenswrapper[4861]: I0310 18:49:36.634892 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:49:36 crc kubenswrapper[4861]: I0310 18:49:36.634916 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 10 18:49:36 crc kubenswrapper[4861]: I0310 18:49:36.634934 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:49:36Z","lastTransitionTime":"2026-03-10T18:49:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 10 18:49:36 crc kubenswrapper[4861]: I0310 18:49:36.640310 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 10 18:49:36 crc kubenswrapper[4861]: I0310 18:49:36.653160 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qttbr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"771189c2-452d-4204-a0b7-abfe9ba62bd0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:21Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:21Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tng72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tng72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:49:21Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qttbr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 10 18:49:36 crc kubenswrapper[4861]: I0310 18:49:36.672130 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-j2s27" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"391f4bfa-b94c-4b25-8f06-a2f19f912194\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:21Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlddg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02cd3948c6ba0e290e32d64b2390da20510d66f20771b9bb0b98957e8900b124\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\
":\\\"2026-03-10T18:49:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlddg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlddg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\"
:\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlddg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlddg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPa
th\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlddg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlddg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:49:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-j2s27\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 10 18:49:36 crc kubenswrapper[4861]: I0310 18:49:36.687808 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 10 18:49:36 crc kubenswrapper[4861]: I0310 18:49:36.699888 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-b87lw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e474cdc9-b374-49a6-aece-afa19f8d5ee6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:21Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5t7pf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:49:21Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-b87lw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 10 18:49:36 crc kubenswrapper[4861]: I0310 18:49:36.737112 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:49:36 crc kubenswrapper[4861]: I0310 18:49:36.737203 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Mar 10 18:49:36 crc kubenswrapper[4861]: I0310 18:49:36.737221 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:49:36 crc kubenswrapper[4861]: I0310 18:49:36.737277 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 10 18:49:36 crc kubenswrapper[4861]: I0310 18:49:36.737297 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:49:36Z","lastTransitionTime":"2026-03-10T18:49:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 10 18:49:36 crc kubenswrapper[4861]: I0310 18:49:36.840852 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:49:36 crc kubenswrapper[4861]: I0310 18:49:36.840928 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:49:36 crc kubenswrapper[4861]: I0310 18:49:36.840947 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:49:36 crc kubenswrapper[4861]: I0310 18:49:36.840974 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 10 18:49:36 crc kubenswrapper[4861]: I0310 18:49:36.840993 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:49:36Z","lastTransitionTime":"2026-03-10T18:49:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 10 18:49:36 crc kubenswrapper[4861]: E0310 18:49:36.941356 4861 kubelet_node_status.go:497] "Node not becoming ready in time after startup" Mar 10 18:49:36 crc kubenswrapper[4861]: I0310 18:49:36.971455 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-crc"] Mar 10 18:49:36 crc kubenswrapper[4861]: I0310 18:49:36.990476 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"952bf490-6587-4240-a832-feac082e7775\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf6fe1422055e59455ca71c1a22213f55312c80667526cf13dbf61f7ccea7c75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"starte
dAt\\\":\\\"2026-03-10T18:47:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd7523ddf0ac38e3b59b767d7ef95f452f24c5914734d6f1ac0187f1bede5fcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b1d8f9a97293ae86fe0b8c9ab76600caf04291ec3ddd2e47a071805bef675da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubern
etes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://efd5039046658f4b595c747b6434cb0ef3befbf75874a6dab629cdbd6634524e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6234508294e7f87d3a8da0d3a1d8ecb96fffc3dbd5974e40a3bb7ceeee0d2a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34f62205737b2bb279cc7a0e2ffee213392c8135cca742b76e6f7ed44dd
ca754\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34f62205737b2bb279cc7a0e2ffee213392c8135cca742b76e6f7ed44ddca754\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:47:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-10T18:47:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://49cc08d4ed6e0fd87f4c957414e510f99511b9654a72324b707c165206c4979e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49cc08d4ed6e0fd87f4c957414e510f99511b9654a72324b707c165206c4979e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:47:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-10T18:47:39Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0995183765c106ba8369d3c05ac572596329eb1978876b770e327a430963ba9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name
\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0995183765c106ba8369d3c05ac572596329eb1978876b770e327a430963ba9b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:47:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-10T18:47:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:47:37Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 10 18:49:37 crc kubenswrapper[4861]: I0310 18:49:37.005962 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 10 18:49:37 crc kubenswrapper[4861]: I0310 18:49:37.019264 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qttbr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"771189c2-452d-4204-a0b7-abfe9ba62bd0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:21Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:21Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tng72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tng72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:49:21Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qttbr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 10 18:49:37 crc kubenswrapper[4861]: I0310 18:49:37.037468 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-j2s27" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"391f4bfa-b94c-4b25-8f06-a2f19f912194\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:21Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlddg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02cd3948c6ba0e290e32d64b2390da20510d66f20771b9bb0b98957e8900b124\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlddg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"ima
ge\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlddg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlddg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69
b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlddg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlddg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volum
eMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlddg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:49:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-j2s27\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 10 18:49:37 crc kubenswrapper[4861]: I0310 18:49:37.104249 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 10 18:49:37 crc kubenswrapper[4861]: E0310 18:49:37.104504 4861 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Mar 10 18:49:37 crc kubenswrapper[4861]: I0310 18:49:37.115821 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-b87lw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e474cdc9-b374-49a6-aece-afa19f8d5ee6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:21Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:21Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5t7pf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:49:21Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-b87lw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 10 18:49:37 crc kubenswrapper[4861]: I0310 18:49:37.131615 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c65aa66-5db2-421b-ad46-0ff1e2b1cb22\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://52e87225b434b0800764a5c2306d8079c44bff105d02de78ab085b434f56031f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a8a6f58ea1d180f50a7ffde2b16f470901281a847bd85cc0bc8a62bbf9f8e70\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://484df0ad2e71b2faec0ed53537512b115e6d4ab7cd3212cd29309538bd013c51\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2dcc6ee4908d27fc63eb33149c6db9570b8524aab71a27fbb1e5c2fe4e97c52\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1665ca49c2c451e187b70bfc13ce0034d2c07b92943b18e77ae09cd6e5505557\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-10T18:48:50Z\\\",\\\"message\\\":\\\"le observer\\\\nW0310 18:48:50.587139 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0310 18:48:50.587315 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0310 18:48:50.588407 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-4176947974/tls.crt::/tmp/serving-cert-4176947974/tls.key\\\\\\\"\\\\nI0310 18:48:50.773986 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0310 18:48:50.776438 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0310 18:48:50.776455 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0310 18:48:50.776477 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0310 18:48:50.776482 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0310 18:48:50.783076 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0310 18:48:50.783088 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0310 18:48:50.783118 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0310 18:48:50.783134 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0310 18:48:50.783146 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0310 18:48:50.783157 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0310 18:48:50.783167 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0310 18:48:50.783173 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0310 18:48:50.784467 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-10T18:48:50Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44dde00a3ae562bbb5504d299475795cc38b22c2b6decba2ff15067bce7436df\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:39Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a14137bdfec242e37af20a572af2edea25fb1d8a1f9708f8d0193d1a7675b1bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a14137bdfec242e37af20a572af2edea25fb1d8a1f9708f8d0193d1a7675b1bc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:47:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-03-10T18:47:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:47:37Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 10 18:49:37 crc kubenswrapper[4861]: I0310 18:49:37.147015 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the 
pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 10 18:49:37 crc kubenswrapper[4861]: I0310 18:49:37.160629 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 10 18:49:37 crc kubenswrapper[4861]: I0310 18:49:37.186894 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-s2l62" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"be820cd7-b3a7-4183-a408-67151247b6ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:22Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:22Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:22Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwtwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwtwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwtwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwtwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwtwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwtwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwtwj\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwtwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwtwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:49:22Z\\\"}}\" for 
pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-s2l62\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 10 18:49:37 crc kubenswrapper[4861]: I0310 18:49:37.198782 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-2rvxn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c06e51d0-e817-41ac-9d69-3ef2099f8ba8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7cjz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7cjz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:49:34Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-2rvxn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 10 18:49:37 crc kubenswrapper[4861]: I0310 18:49:37.212967 4861 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 10 18:49:37 crc kubenswrapper[4861]: I0310 18:49:37.224897 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 10 18:49:37 crc kubenswrapper[4861]: I0310 18:49:37.239237 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-6lblg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d1c251f4-6539-4aa1-8979-47e74495aca3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2gvk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:49:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-6lblg\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 10 18:49:37 crc kubenswrapper[4861]: I0310 18:49:37.251009 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pzmsp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"66631e59-ce5c-44de-8a4d-37eb82acf997\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:27Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:27Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pm4t5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:49:27Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pzmsp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 10 18:49:37 crc kubenswrapper[4861]: I0310 18:49:37.263889 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rmmgv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e4302e6-1187-4371-8523-a96ec8032e74\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:33Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:33Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vh2b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vh2b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:49:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-rmmgv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 10 18:49:37 crc kubenswrapper[4861]: I0310 18:49:37.457847 4861 generic.go:334] "Generic (PLEG): container finished" podID="391f4bfa-b94c-4b25-8f06-a2f19f912194" containerID="02cd3948c6ba0e290e32d64b2390da20510d66f20771b9bb0b98957e8900b124" exitCode=0 Mar 10 18:49:37 crc kubenswrapper[4861]: I0310 18:49:37.457912 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-j2s27" event={"ID":"391f4bfa-b94c-4b25-8f06-a2f19f912194","Type":"ContainerDied","Data":"02cd3948c6ba0e290e32d64b2390da20510d66f20771b9bb0b98957e8900b124"} Mar 10 18:49:37 crc kubenswrapper[4861]: I0310 18:49:37.472484 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-2rvxn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c06e51d0-e817-41ac-9d69-3ef2099f8ba8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7cjz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7cjz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:49:34Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-2rvxn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 10 18:49:37 crc kubenswrapper[4861]: I0310 18:49:37.494420 4861 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c65aa66-5db2-421b-ad46-0ff1e2b1cb22\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://52e87225b434b0800764a5c2306d8079c44bff105d02de78ab085b434f56031f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"
name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a8a6f58ea1d180f50a7ffde2b16f470901281a847bd85cc0bc8a62bbf9f8e70\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://484df0ad2e71b2faec0ed53537512b115e6d4ab7cd3212cd29309538bd013c51\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2dcc6ee4908d27fc63eb33149c6db9570b8524aab71a27fbb1e5c2fe4e97c52\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"qua
y.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1665ca49c2c451e187b70bfc13ce0034d2c07b92943b18e77ae09cd6e5505557\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-10T18:48:50Z\\\",\\\"message\\\":\\\"le observer\\\\nW0310 18:48:50.587139 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0310 18:48:50.587315 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0310 18:48:50.588407 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4176947974/tls.crt::/tmp/serving-cert-4176947974/tls.key\\\\\\\"\\\\nI0310 18:48:50.773986 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0310 18:48:50.776438 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0310 18:48:50.776455 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0310 18:48:50.776477 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0310 18:48:50.776482 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0310 18:48:50.783076 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0310 18:48:50.783088 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0310 18:48:50.783118 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0310 18:48:50.783134 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0310 18:48:50.783146 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0310 18:48:50.783157 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0310 18:48:50.783167 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0310 18:48:50.783173 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0310 18:48:50.784467 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-10T18:48:50Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44dde00a3ae562bbb5504d299475795cc38b22c2b6decba2ff15067bce7436df\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:39Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a14137bdfec242e37af20a572af2edea25fb1d8a1f9708f8d0193d1a7675b1bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imag
eID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a14137bdfec242e37af20a572af2edea25fb1d8a1f9708f8d0193d1a7675b1bc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:47:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-10T18:47:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:47:37Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 10 18:49:37 crc kubenswrapper[4861]: I0310 18:49:37.513967 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"message\\\":\\\"containers with 
unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 10 18:49:37 crc kubenswrapper[4861]: I0310 18:49:37.526926 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 10 18:49:37 crc kubenswrapper[4861]: I0310 18:49:37.553943 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-s2l62" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"be820cd7-b3a7-4183-a408-67151247b6ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:22Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:22Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging 
kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:22Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwtwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\
"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwtwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwtwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwtwj\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwtwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\
\"name\\\":\\\"kube-api-access-fwtwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":
\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwtwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwtwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwtwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:49:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-s2l62\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 10 18:49:37 crc kubenswrapper[4861]: I0310 18:49:37.568696 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 10 18:49:37 crc kubenswrapper[4861]: I0310 18:49:37.582412 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 10 18:49:37 crc kubenswrapper[4861]: I0310 18:49:37.595510 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-6lblg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d1c251f4-6539-4aa1-8979-47e74495aca3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2gvk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:49:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-6lblg\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 10 18:49:37 crc kubenswrapper[4861]: I0310 18:49:37.607087 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pzmsp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"66631e59-ce5c-44de-8a4d-37eb82acf997\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:27Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:27Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pm4t5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:49:27Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pzmsp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 10 18:49:37 crc kubenswrapper[4861]: I0310 18:49:37.620983 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rmmgv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e4302e6-1187-4371-8523-a96ec8032e74\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:33Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:33Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vh2b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vh2b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:49:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-rmmgv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 10 18:49:37 crc kubenswrapper[4861]: I0310 18:49:37.646224 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"952bf490-6587-4240-a832-feac082e7775\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf6fe1422055e59455ca71c1a22213f55312c80667526cf13dbf61f7ccea7c75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd7523ddf0ac38e3b59b767d7ef95f452f24c5914734d6f1ac0187f1bede5fcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b1d8f9a97293ae86fe0b8c9ab76600caf04291ec3ddd2e47a071805bef675da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://efd5039046658f4b595c747b6434cb0ef3befbf75874a6dab629cdbd6634524e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6234508294e7f87d3a8da0d3a1d8ecb96fffc3dbd5974e40a3bb7ceeee0d2a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34f62205737b2bb279cc7a0e2ffee213392c8135cca742b76e6f7ed44ddca754\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34f62205737b2bb279cc7a0e2ffee213392c8135cca742b76e6f7ed44ddca754\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-03-10T18:47:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-10T18:47:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://49cc08d4ed6e0fd87f4c957414e510f99511b9654a72324b707c165206c4979e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49cc08d4ed6e0fd87f4c957414e510f99511b9654a72324b707c165206c4979e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:47:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-10T18:47:39Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0995183765c106ba8369d3c05ac572596329eb1978876b770e327a430963ba9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0995183765c106ba8369d3c05ac572596329eb1978876b770e327a430963ba9b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:47:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-10T18:47:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:47:37Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 10 18:49:37 crc kubenswrapper[4861]: I0310 18:49:37.660431 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 10 18:49:37 crc kubenswrapper[4861]: I0310 18:49:37.674191 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qttbr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"771189c2-452d-4204-a0b7-abfe9ba62bd0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:21Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:21Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tng72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tng72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:49:21Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qttbr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 10 18:49:37 crc kubenswrapper[4861]: I0310 18:49:37.691589 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-j2s27" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"391f4bfa-b94c-4b25-8f06-a2f19f912194\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:21Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlddg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02cd3948c6ba0e290e32d64b2390da20510d66f20771b9bb0b98957e8900b124\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"container
ID\\\":\\\"cri-o://02cd3948c6ba0e290e32d64b2390da20510d66f20771b9bb0b98957e8900b124\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:49:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-10T18:49:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlddg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlddg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"rea
dy\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlddg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlddg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host
/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlddg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlddg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:49:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-j2s27\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 10 18:49:37 crc kubenswrapper[4861]: I0310 18:49:37.708500 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"046b6ea2-2e19-4917-953a-eb8aca6d80cb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df0d545a88d84a8015bafb1061f16f446f63c97065dc88c869a33a24b8e3c22e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da2dbdc794693bb8da08c0fcc84531d65b468d135904b7055fb926fbd0cce95d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-10T18:48:03Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager 
-v=2\\\\nI0310 18:47:39.105265 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0310 18:47:39.107559 1 observer_polling.go:159] Starting file observer\\\\nI0310 18:47:39.140119 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0310 18:47:39.145777 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0310 18:48:03.686072 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0310 18:48:03.686208 1 cmd.go:179] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:48:03Z is after 
2026-02-23T05:33:13Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-10T18:47:38Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:48:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://73d88019bcd40296d2d693dfb1ce3bacd2e94ca10a114a00c75392df04099b33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://31bbed2d81ace88f31b763f3b4bed57db657bf9e78413b57b2f279b408e0b848\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:38Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b7156e2106372814a5e2b2816352ee308bb4fdffef5efe8da5bc39d3bc29398\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:47:37Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 10 18:49:37 crc kubenswrapper[4861]: I0310 18:49:37.720305 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 10 18:49:37 crc kubenswrapper[4861]: I0310 18:49:37.728233 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-b87lw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e474cdc9-b374-49a6-aece-afa19f8d5ee6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:21Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5t7pf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:49:21Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-b87lw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 10 18:49:37 crc kubenswrapper[4861]: I0310 18:49:37.958184 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 10 18:49:37 crc kubenswrapper[4861]: I0310 18:49:37.958244 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 10 18:49:37 crc kubenswrapper[4861]: I0310 18:49:37.958256 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2rvxn" Mar 10 18:49:37 crc kubenswrapper[4861]: I0310 18:49:37.958210 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 10 18:49:37 crc kubenswrapper[4861]: E0310 18:49:37.958352 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 10 18:49:37 crc kubenswrapper[4861]: E0310 18:49:37.958479 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2rvxn" podUID="c06e51d0-e817-41ac-9d69-3ef2099f8ba8" Mar 10 18:49:37 crc kubenswrapper[4861]: E0310 18:49:37.958635 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 10 18:49:37 crc kubenswrapper[4861]: E0310 18:49:37.958816 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 10 18:49:38 crc kubenswrapper[4861]: I0310 18:49:38.111130 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c06e51d0-e817-41ac-9d69-3ef2099f8ba8-metrics-certs\") pod \"network-metrics-daemon-2rvxn\" (UID: \"c06e51d0-e817-41ac-9d69-3ef2099f8ba8\") " pod="openshift-multus/network-metrics-daemon-2rvxn" Mar 10 18:49:38 crc kubenswrapper[4861]: E0310 18:49:38.111381 4861 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Mar 10 18:49:38 crc kubenswrapper[4861]: E0310 18:49:38.111502 4861 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c06e51d0-e817-41ac-9d69-3ef2099f8ba8-metrics-certs podName:c06e51d0-e817-41ac-9d69-3ef2099f8ba8 nodeName:}" failed. No retries permitted until 2026-03-10 18:49:42.111471821 +0000 UTC m=+125.874907821 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/c06e51d0-e817-41ac-9d69-3ef2099f8ba8-metrics-certs") pod "network-metrics-daemon-2rvxn" (UID: "c06e51d0-e817-41ac-9d69-3ef2099f8ba8") : object "openshift-multus"/"metrics-daemon-secret" not registered Mar 10 18:49:38 crc kubenswrapper[4861]: I0310 18:49:38.463772 4861 generic.go:334] "Generic (PLEG): container finished" podID="391f4bfa-b94c-4b25-8f06-a2f19f912194" containerID="5c010603b11cab781dd3c14d84121d60b8115bb415df567ef1c267406b9e6988" exitCode=0 Mar 10 18:49:38 crc kubenswrapper[4861]: I0310 18:49:38.463817 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-j2s27" event={"ID":"391f4bfa-b94c-4b25-8f06-a2f19f912194","Type":"ContainerDied","Data":"5c010603b11cab781dd3c14d84121d60b8115bb415df567ef1c267406b9e6988"} Mar 10 18:49:38 crc kubenswrapper[4861]: I0310 18:49:38.482090 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c65aa66-5db2-421b-ad46-0ff1e2b1cb22\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://52e87225b434b0800764a5c2306d8079c44bff105d02de78ab085b434f56031f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a8a6f58ea1d180f50a7ffde2b16f470901281a847bd85cc0bc8a62bbf9f8e70\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://484df0ad2e71b2faec0ed53537512b115e6d4ab7cd3212cd29309538bd013c51\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2dcc6ee4908d27fc63eb33149c6db9570b8524aab71a27fbb1e5c2fe4e97c52\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1665ca49c2c451e187b70bfc13ce0034d2c07b92943b18e77ae09cd6e5505557\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-10T18:48:50Z\\\",\\\"message\\\":\\\"le observer\\\\nW0310 18:48:50.587139 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0310 18:48:50.587315 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0310 18:48:50.588407 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-4176947974/tls.crt::/tmp/serving-cert-4176947974/tls.key\\\\\\\"\\\\nI0310 18:48:50.773986 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0310 18:48:50.776438 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0310 18:48:50.776455 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0310 18:48:50.776477 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0310 18:48:50.776482 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0310 18:48:50.783076 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0310 18:48:50.783088 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0310 18:48:50.783118 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0310 18:48:50.783134 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0310 18:48:50.783146 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0310 18:48:50.783157 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0310 18:48:50.783167 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0310 18:48:50.783173 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0310 18:48:50.784467 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-10T18:48:50Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44dde00a3ae562bbb5504d299475795cc38b22c2b6decba2ff15067bce7436df\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:39Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a14137bdfec242e37af20a572af2edea25fb1d8a1f9708f8d0193d1a7675b1bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a14137bdfec242e37af20a572af2edea25fb1d8a1f9708f8d0193d1a7675b1bc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:47:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-03-10T18:47:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:47:37Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 10 18:49:38 crc kubenswrapper[4861]: I0310 18:49:38.501191 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the 
pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 10 18:49:38 crc kubenswrapper[4861]: I0310 18:49:38.516345 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 10 18:49:38 crc kubenswrapper[4861]: I0310 18:49:38.543948 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-s2l62" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"be820cd7-b3a7-4183-a408-67151247b6ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:22Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:22Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:22Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwtwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwtwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwtwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwtwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwtwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwtwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwtwj\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwtwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwtwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:49:22Z\\\"}}\" for 
pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-s2l62\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 10 18:49:38 crc kubenswrapper[4861]: I0310 18:49:38.556412 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-2rvxn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c06e51d0-e817-41ac-9d69-3ef2099f8ba8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7cjz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7cjz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:49:34Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-2rvxn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 10 18:49:38 crc kubenswrapper[4861]: I0310 18:49:38.570961 4861 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 10 18:49:38 crc kubenswrapper[4861]: I0310 18:49:38.584241 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 10 18:49:38 crc kubenswrapper[4861]: I0310 18:49:38.599221 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-6lblg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d1c251f4-6539-4aa1-8979-47e74495aca3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2gvk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:49:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-6lblg\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 10 18:49:38 crc kubenswrapper[4861]: I0310 18:49:38.608924 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pzmsp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"66631e59-ce5c-44de-8a4d-37eb82acf997\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:27Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:27Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pm4t5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:49:27Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pzmsp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 10 18:49:38 crc kubenswrapper[4861]: I0310 18:49:38.620510 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rmmgv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e4302e6-1187-4371-8523-a96ec8032e74\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:33Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:33Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vh2b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vh2b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:49:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-rmmgv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 10 18:49:38 crc kubenswrapper[4861]: I0310 18:49:38.646889 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"952bf490-6587-4240-a832-feac082e7775\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf6fe1422055e59455ca71c1a22213f55312c80667526cf13dbf61f7ccea7c75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd7523ddf0ac38e3b59b767d7ef95f452f24c5914734d6f1ac0187f1bede5fcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b1d8f9a97293ae86fe0b8c9ab76600caf04291ec3ddd2e47a071805bef675da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://efd5039046658f4b595c747b6434cb0ef3befbf75874a6dab629cdbd6634524e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6234508294e7f87d3a8da0d3a1d8ecb96fffc3dbd5974e40a3bb7ceeee0d2a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34f62205737b2bb279cc7a0e2ffee213392c8135cca742b76e6f7ed44ddca754\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34f62205737b2bb279cc7a0e2ffee213392c8135cca742b76e6f7ed44ddca754\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-03-10T18:47:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-10T18:47:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://49cc08d4ed6e0fd87f4c957414e510f99511b9654a72324b707c165206c4979e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49cc08d4ed6e0fd87f4c957414e510f99511b9654a72324b707c165206c4979e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:47:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-10T18:47:39Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0995183765c106ba8369d3c05ac572596329eb1978876b770e327a430963ba9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0995183765c106ba8369d3c05ac572596329eb1978876b770e327a430963ba9b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:47:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-10T18:47:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:47:37Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 10 18:49:38 crc kubenswrapper[4861]: I0310 18:49:38.661419 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 10 18:49:38 crc kubenswrapper[4861]: I0310 18:49:38.672198 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qttbr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"771189c2-452d-4204-a0b7-abfe9ba62bd0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:21Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:21Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tng72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tng72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:49:21Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qttbr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 10 18:49:38 crc kubenswrapper[4861]: I0310 18:49:38.684817 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-j2s27" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"391f4bfa-b94c-4b25-8f06-a2f19f912194\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:21Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlddg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02cd3948c6ba0e290e32d64b2390da20510d66f20771b9bb0b98957e8900b124\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"container
ID\\\":\\\"cri-o://02cd3948c6ba0e290e32d64b2390da20510d66f20771b9bb0b98957e8900b124\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:49:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-10T18:49:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlddg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c010603b11cab781dd3c14d84121d60b8115bb415df567ef1c267406b9e6988\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c010603b11cab781dd3c14d84121d60b8115bb415df567ef1c267406b9e6988\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:49:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-10T18:49:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-a
llowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlddg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlddg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlddg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlddg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlddg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:49:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-j2s27\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 10 18:49:38 crc 
kubenswrapper[4861]: I0310 18:49:38.700664 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"046b6ea2-2e19-4917-953a-eb8aca6d80cb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df0d545a88d84a8015bafb1061f16f446f63c97065dc88c869a33a24b8e3c22e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da2dbdc794693bb8da08c0fcc84531d65b468d135904b7055fb926fbd0cce95d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-10T18:48:03Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start 
--config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0310 18:47:39.105265 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0310 18:47:39.107559 1 observer_polling.go:159] Starting file observer\\\\nI0310 18:47:39.140119 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0310 18:47:39.145777 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0310 18:48:03.686072 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0310 18:48:03.686208 1 cmd.go:179] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:48:03Z is after 
2026-02-23T05:33:13Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-10T18:47:38Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:48:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://73d88019bcd40296d2d693dfb1ce3bacd2e94ca10a114a00c75392df04099b33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://31bbed2d81ace88f31b763f3b4bed57db657bf9e78413b57b2f279b408e0b848\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:38Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b7156e2106372814a5e2b2816352ee308bb4fdffef5efe8da5bc39d3bc29398\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:47:37Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 10 18:49:38 crc kubenswrapper[4861]: I0310 18:49:38.718186 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 10 18:49:38 crc kubenswrapper[4861]: I0310 18:49:38.729318 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-b87lw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e474cdc9-b374-49a6-aece-afa19f8d5ee6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:21Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5t7pf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:49:21Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-b87lw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 10 18:49:39 crc kubenswrapper[4861]: I0310 18:49:39.470619 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-pzmsp" event={"ID":"66631e59-ce5c-44de-8a4d-37eb82acf997","Type":"ContainerStarted","Data":"a1293bf47ec5a041886f7065f624cec3b882bb62394a2c448e138d5f7bc3a1c2"} Mar 10 
18:49:39 crc kubenswrapper[4861]: I0310 18:49:39.474118 4861 generic.go:334] "Generic (PLEG): container finished" podID="391f4bfa-b94c-4b25-8f06-a2f19f912194" containerID="c1ba322572d96fe13a88815151416495decdd5f486eb30b3fd4345040c5b9d1d" exitCode=0 Mar 10 18:49:39 crc kubenswrapper[4861]: I0310 18:49:39.474229 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-j2s27" event={"ID":"391f4bfa-b94c-4b25-8f06-a2f19f912194","Type":"ContainerDied","Data":"c1ba322572d96fe13a88815151416495decdd5f486eb30b3fd4345040c5b9d1d"} Mar 10 18:49:39 crc kubenswrapper[4861]: I0310 18:49:39.495426 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-j2s27" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"391f4bfa-b94c-4b25-8f06-a2f19f912194\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:21Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlddg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02cd3948c6ba0e290e32d64b2390da20510d66f20771b9bb0b98957e8900b124\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://02cd3948c6ba0e290e32d64b2390da20510d66f20771b9bb0b98957e8900b124\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:49:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-10T18:49:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlddg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c010603b11cab781dd3c14d84121d60b8115bb415df567ef1c267406b9e6988\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c010603b11cab781dd3c14d84121d60b8115bb415df567ef1c267406b9e6988\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:49:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-10T18:49:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlddg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha2
56:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlddg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlddg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"re
ason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlddg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlddg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:49:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-j2s27\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 10 18:49:39 crc kubenswrapper[4861]: I0310 18:49:39.523513 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"952bf490-6587-4240-a832-feac082e7775\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf6fe1422055e59455ca71c1a22213f55312c80667526cf13dbf61f7ccea7c75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd7523ddf0ac38e3b59b767d7ef95f452f24c5914734d6f1ac0187f1bede5fcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b1d8f9a97293ae86fe0b8c9ab76600caf04291ec3ddd2e47a071805bef675da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://efd5039046658f4b595c747b6434cb0ef3befbf75874a6dab629cdbd6634524e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6234508294e7f87d3a8da0d3a1d8ecb96fffc3dbd5974e40a3bb7ceeee0d2a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34f62205737b2bb279cc7a0e2ffee213392c8135cca742b76e6f7ed44ddca754\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34f62205737b2bb279cc7a0e2ffee213392c8135cca742b76e6f7ed44ddca754\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-03-10T18:47:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-10T18:47:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://49cc08d4ed6e0fd87f4c957414e510f99511b9654a72324b707c165206c4979e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49cc08d4ed6e0fd87f4c957414e510f99511b9654a72324b707c165206c4979e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:47:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-10T18:47:39Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0995183765c106ba8369d3c05ac572596329eb1978876b770e327a430963ba9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0995183765c106ba8369d3c05ac572596329eb1978876b770e327a430963ba9b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:47:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-10T18:47:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:47:37Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 10 18:49:39 crc kubenswrapper[4861]: I0310 18:49:39.541144 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 10 18:49:39 crc kubenswrapper[4861]: I0310 18:49:39.554640 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qttbr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"771189c2-452d-4204-a0b7-abfe9ba62bd0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:21Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:21Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tng72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tng72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:49:21Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qttbr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 10 18:49:39 crc kubenswrapper[4861]: I0310 18:49:39.566037 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"046b6ea2-2e19-4917-953a-eb8aca6d80cb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df0d545a88d84a8015bafb1061f16f446f63c97065dc88c869a33a24b8e3c22e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da2dbdc794693bb8da08c0fcc84531d65b468d135904b7055fb926fbd0cce95d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-10T18:48:03Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager 
-v=2\\\\nI0310 18:47:39.105265 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0310 18:47:39.107559 1 observer_polling.go:159] Starting file observer\\\\nI0310 18:47:39.140119 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0310 18:47:39.145777 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0310 18:48:03.686072 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0310 18:48:03.686208 1 cmd.go:179] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:48:03Z is after 
2026-02-23T05:33:13Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-10T18:47:38Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:48:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://73d88019bcd40296d2d693dfb1ce3bacd2e94ca10a114a00c75392df04099b33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://31bbed2d81ace88f31b763f3b4bed57db657bf9e78413b57b2f279b408e0b848\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:38Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b7156e2106372814a5e2b2816352ee308bb4fdffef5efe8da5bc39d3bc29398\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:47:37Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 10 18:49:39 crc kubenswrapper[4861]: I0310 18:49:39.577743 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 10 18:49:39 crc kubenswrapper[4861]: I0310 18:49:39.585247 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-b87lw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e474cdc9-b374-49a6-aece-afa19f8d5ee6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:21Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5t7pf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:49:21Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-b87lw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 10 18:49:39 crc kubenswrapper[4861]: I0310 18:49:39.609229 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-s2l62" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"be820cd7-b3a7-4183-a408-67151247b6ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:22Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:22Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:22Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwtwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwtwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwtwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwtwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwtwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwtwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwtwj\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwtwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwtwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:49:22Z\\\"}}\" for 
pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-s2l62\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 10 18:49:39 crc kubenswrapper[4861]: I0310 18:49:39.620568 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-2rvxn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c06e51d0-e817-41ac-9d69-3ef2099f8ba8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7cjz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7cjz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:49:34Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-2rvxn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 10 18:49:39 crc kubenswrapper[4861]: I0310 18:49:39.634232 4861 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c65aa66-5db2-421b-ad46-0ff1e2b1cb22\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://52e87225b434b0800764a5c2306d8079c44bff105d02de78ab085b434f56031f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"
name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a8a6f58ea1d180f50a7ffde2b16f470901281a847bd85cc0bc8a62bbf9f8e70\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://484df0ad2e71b2faec0ed53537512b115e6d4ab7cd3212cd29309538bd013c51\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2dcc6ee4908d27fc63eb33149c6db9570b8524aab71a27fbb1e5c2fe4e97c52\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"qua
y.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1665ca49c2c451e187b70bfc13ce0034d2c07b92943b18e77ae09cd6e5505557\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-10T18:48:50Z\\\",\\\"message\\\":\\\"le observer\\\\nW0310 18:48:50.587139 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0310 18:48:50.587315 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0310 18:48:50.588407 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4176947974/tls.crt::/tmp/serving-cert-4176947974/tls.key\\\\\\\"\\\\nI0310 18:48:50.773986 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0310 18:48:50.776438 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0310 18:48:50.776455 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0310 18:48:50.776477 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0310 18:48:50.776482 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0310 18:48:50.783076 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0310 18:48:50.783088 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0310 18:48:50.783118 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0310 18:48:50.783134 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0310 18:48:50.783146 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0310 18:48:50.783157 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0310 18:48:50.783167 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0310 18:48:50.783173 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0310 18:48:50.784467 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-10T18:48:50Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44dde00a3ae562bbb5504d299475795cc38b22c2b6decba2ff15067bce7436df\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:39Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a14137bdfec242e37af20a572af2edea25fb1d8a1f9708f8d0193d1a7675b1bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imag
eID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a14137bdfec242e37af20a572af2edea25fb1d8a1f9708f8d0193d1a7675b1bc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:47:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-10T18:47:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:47:37Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 10 18:49:39 crc kubenswrapper[4861]: I0310 18:49:39.645923 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"message\\\":\\\"containers with 
unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 10 18:49:39 crc kubenswrapper[4861]: I0310 18:49:39.660482 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 10 18:49:39 crc kubenswrapper[4861]: I0310 18:49:39.672092 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rmmgv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e4302e6-1187-4371-8523-a96ec8032e74\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:33Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:33Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vh2b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vh2b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:49:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-rmmgv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 10 18:49:39 crc kubenswrapper[4861]: I0310 18:49:39.683968 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 10 18:49:39 crc kubenswrapper[4861]: I0310 18:49:39.693967 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 10 18:49:39 crc kubenswrapper[4861]: I0310 18:49:39.706830 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-6lblg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d1c251f4-6539-4aa1-8979-47e74495aca3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2gvk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:49:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-6lblg\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 10 18:49:39 crc kubenswrapper[4861]: I0310 18:49:39.716463 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pzmsp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"66631e59-ce5c-44de-8a4d-37eb82acf997\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1293bf47ec5a041886f7065f624cec3b882bb62394a2c448e138d5f7bc3a1c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\"
,\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pm4t5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:49:27Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pzmsp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 10 18:49:39 crc kubenswrapper[4861]: I0310 18:49:39.724935 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 10 18:49:39 crc kubenswrapper[4861]: I0310 18:49:39.742009 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-s2l62" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"be820cd7-b3a7-4183-a408-67151247b6ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:22Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:22Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:22Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwtwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwtwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwtwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwtwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwtwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwtwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwtwj\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwtwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwtwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:49:22Z\\\"}}\" for 
pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-s2l62\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 10 18:49:39 crc kubenswrapper[4861]: I0310 18:49:39.751928 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-2rvxn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c06e51d0-e817-41ac-9d69-3ef2099f8ba8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7cjz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7cjz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:49:34Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-2rvxn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 10 18:49:39 crc kubenswrapper[4861]: I0310 18:49:39.772077 4861 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c65aa66-5db2-421b-ad46-0ff1e2b1cb22\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://52e87225b434b0800764a5c2306d8079c44bff105d02de78ab085b434f56031f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"
name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a8a6f58ea1d180f50a7ffde2b16f470901281a847bd85cc0bc8a62bbf9f8e70\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://484df0ad2e71b2faec0ed53537512b115e6d4ab7cd3212cd29309538bd013c51\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2dcc6ee4908d27fc63eb33149c6db9570b8524aab71a27fbb1e5c2fe4e97c52\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"qua
y.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1665ca49c2c451e187b70bfc13ce0034d2c07b92943b18e77ae09cd6e5505557\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-10T18:48:50Z\\\",\\\"message\\\":\\\"le observer\\\\nW0310 18:48:50.587139 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0310 18:48:50.587315 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0310 18:48:50.588407 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4176947974/tls.crt::/tmp/serving-cert-4176947974/tls.key\\\\\\\"\\\\nI0310 18:48:50.773986 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0310 18:48:50.776438 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0310 18:48:50.776455 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0310 18:48:50.776477 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0310 18:48:50.776482 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0310 18:48:50.783076 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0310 18:48:50.783088 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0310 18:48:50.783118 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0310 18:48:50.783134 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0310 18:48:50.783146 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0310 18:48:50.783157 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0310 18:48:50.783167 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0310 18:48:50.783173 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0310 18:48:50.784467 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-10T18:48:50Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44dde00a3ae562bbb5504d299475795cc38b22c2b6decba2ff15067bce7436df\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:39Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a14137bdfec242e37af20a572af2edea25fb1d8a1f9708f8d0193d1a7675b1bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imag
eID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a14137bdfec242e37af20a572af2edea25fb1d8a1f9708f8d0193d1a7675b1bc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:47:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-10T18:47:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:47:37Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 10 18:49:39 crc kubenswrapper[4861]: I0310 18:49:39.784805 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"message\\\":\\\"containers with 
unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 10 18:49:39 crc kubenswrapper[4861]: I0310 18:49:39.795181 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pzmsp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"66631e59-ce5c-44de-8a4d-37eb82acf997\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1293bf47ec5a041886f7065f624cec3b882bb62394a2c448e138d5f7bc3a1c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pm4t5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:49:27Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pzmsp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 10 18:49:39 crc kubenswrapper[4861]: I0310 18:49:39.805569 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rmmgv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e4302e6-1187-4371-8523-a96ec8032e74\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:33Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:33Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vh2b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vh2b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:49:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-rmmgv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 10 18:49:39 crc kubenswrapper[4861]: I0310 18:49:39.815261 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 10 18:49:39 crc kubenswrapper[4861]: I0310 18:49:39.824297 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 10 18:49:39 crc kubenswrapper[4861]: I0310 18:49:39.833860 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-6lblg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d1c251f4-6539-4aa1-8979-47e74495aca3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2gvk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:49:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-6lblg\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 10 18:49:39 crc kubenswrapper[4861]: I0310 18:49:39.842374 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qttbr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"771189c2-452d-4204-a0b7-abfe9ba62bd0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:21Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:21Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tng72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tng72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:49:21Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-qttbr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 10 18:49:39 crc kubenswrapper[4861]: I0310 18:49:39.856538 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-j2s27" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"391f4bfa-b94c-4b25-8f06-a2f19f912194\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:21Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlddg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02cd3948c6ba0e290e32d64b2390da20510d66f20771b9bb0b98957e8900b124\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://02cd3948c6ba0e290e32d64b2390da20510d66f20771b9bb0b98957e8900b124\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:49:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-10T18:49:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlddg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c010603b11cab781dd3c14d84121d60b8115bb415df567ef1c267406b9e6988\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c010603b11cab781dd3c14d84121d60b8115bb415df567ef1c267406b9e6988\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:49:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-10T18:49:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlddg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1ba322572d96fe13a88815151416495decdd5f486eb30b3fd4345040c5b9d1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c1ba322572d96fe13a88815151416495decdd5f486eb30b3fd4345040c5b9d1d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:49:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-10T18:49:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlddg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlddg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\
\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlddg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlddg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:49:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-j2s27\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 10 18:49:39 crc kubenswrapper[4861]: I0310 18:49:39.875766 4861 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"952bf490-6587-4240-a832-feac082e7775\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf6fe1422055e59455ca71c1a22213f55312c80667526cf13dbf61f7ccea7c75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd7523ddf0ac38e3b59b767d7ef95f452f24c5914734d6f1ac0187f1bede5fcf\\\",
\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b1d8f9a97293ae86fe0b8c9ab76600caf04291ec3ddd2e47a071805bef675da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://efd5039046658f4b595c747b6434cb0ef3befbf75874a6dab629cdbd6634524e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\
"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6234508294e7f87d3a8da0d3a1d8ecb96fffc3dbd5974e40a3bb7ceeee0d2a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34f62205737b2bb279cc7a0e2ffee213392c8135cca742b76e6f7ed44ddca754\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34f62205737b2bb279cc7a0e2ffee213392c8135cca74
2b76e6f7ed44ddca754\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:47:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-10T18:47:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://49cc08d4ed6e0fd87f4c957414e510f99511b9654a72324b707c165206c4979e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49cc08d4ed6e0fd87f4c957414e510f99511b9654a72324b707c165206c4979e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:47:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-10T18:47:39Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0995183765c106ba8369d3c05ac572596329eb1978876b770e327a430963ba9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0995183765c106ba8369d3c05ac572596329eb1978876b770e327a430963ba9b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:47:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-10T18:47:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\
\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:47:37Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 10 18:49:39 crc kubenswrapper[4861]: I0310 18:49:39.885877 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The 
container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 10 18:49:39 crc kubenswrapper[4861]: I0310 18:49:39.895375 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"046b6ea2-2e19-4917-953a-eb8aca6d80cb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df0d545a88d84a8015bafb1061f16f446f63c97065dc88c869a33a24b8e3c22e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da2dbdc794693bb8da08c0fcc84531d65b468d135904b7055fb926fbd0cce95d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-10T18:48:03Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager 
-v=2\\\\nI0310 18:47:39.105265 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0310 18:47:39.107559 1 observer_polling.go:159] Starting file observer\\\\nI0310 18:47:39.140119 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0310 18:47:39.145777 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0310 18:48:03.686072 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0310 18:48:03.686208 1 cmd.go:179] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:48:03Z is after 
2026-02-23T05:33:13Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-10T18:47:38Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:48:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://73d88019bcd40296d2d693dfb1ce3bacd2e94ca10a114a00c75392df04099b33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://31bbed2d81ace88f31b763f3b4bed57db657bf9e78413b57b2f279b408e0b848\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:38Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b7156e2106372814a5e2b2816352ee308bb4fdffef5efe8da5bc39d3bc29398\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:47:37Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 10 18:49:39 crc kubenswrapper[4861]: I0310 18:49:39.904752 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 10 18:49:39 crc kubenswrapper[4861]: I0310 18:49:39.912274 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-b87lw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e474cdc9-b374-49a6-aece-afa19f8d5ee6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:21Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5t7pf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:49:21Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-b87lw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 10 18:49:39 crc kubenswrapper[4861]: I0310 18:49:39.957995 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2rvxn" Mar 10 18:49:39 crc kubenswrapper[4861]: I0310 18:49:39.958034 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 10 18:49:39 crc kubenswrapper[4861]: I0310 18:49:39.958019 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 10 18:49:39 crc kubenswrapper[4861]: I0310 18:49:39.958074 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 10 18:49:39 crc kubenswrapper[4861]: E0310 18:49:39.958205 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2rvxn" podUID="c06e51d0-e817-41ac-9d69-3ef2099f8ba8" Mar 10 18:49:39 crc kubenswrapper[4861]: E0310 18:49:39.958282 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 10 18:49:39 crc kubenswrapper[4861]: E0310 18:49:39.958358 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 10 18:49:39 crc kubenswrapper[4861]: E0310 18:49:39.958410 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 10 18:49:40 crc kubenswrapper[4861]: I0310 18:49:40.480880 4861 generic.go:334] "Generic (PLEG): container finished" podID="391f4bfa-b94c-4b25-8f06-a2f19f912194" containerID="f24e74b8fb41e5e765f1ba84b4f1efce28077537df837e4cc308e70b16139732" exitCode=0 Mar 10 18:49:40 crc kubenswrapper[4861]: I0310 18:49:40.480954 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-j2s27" event={"ID":"391f4bfa-b94c-4b25-8f06-a2f19f912194","Type":"ContainerDied","Data":"f24e74b8fb41e5e765f1ba84b4f1efce28077537df837e4cc308e70b16139732"} Mar 10 18:49:40 crc kubenswrapper[4861]: I0310 18:49:40.495788 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-2rvxn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c06e51d0-e817-41ac-9d69-3ef2099f8ba8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7cjz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7cjz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:49:34Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-2rvxn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 10 18:49:40 crc kubenswrapper[4861]: I0310 18:49:40.510504 4861 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c65aa66-5db2-421b-ad46-0ff1e2b1cb22\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://52e87225b434b0800764a5c2306d8079c44bff105d02de78ab085b434f56031f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"
name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a8a6f58ea1d180f50a7ffde2b16f470901281a847bd85cc0bc8a62bbf9f8e70\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://484df0ad2e71b2faec0ed53537512b115e6d4ab7cd3212cd29309538bd013c51\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2dcc6ee4908d27fc63eb33149c6db9570b8524aab71a27fbb1e5c2fe4e97c52\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"qua
y.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1665ca49c2c451e187b70bfc13ce0034d2c07b92943b18e77ae09cd6e5505557\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-10T18:48:50Z\\\",\\\"message\\\":\\\"le observer\\\\nW0310 18:48:50.587139 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0310 18:48:50.587315 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0310 18:48:50.588407 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4176947974/tls.crt::/tmp/serving-cert-4176947974/tls.key\\\\\\\"\\\\nI0310 18:48:50.773986 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0310 18:48:50.776438 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0310 18:48:50.776455 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0310 18:48:50.776477 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0310 18:48:50.776482 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0310 18:48:50.783076 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0310 18:48:50.783088 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0310 18:48:50.783118 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0310 18:48:50.783134 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0310 18:48:50.783146 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0310 18:48:50.783157 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0310 18:48:50.783167 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0310 18:48:50.783173 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0310 18:48:50.784467 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-10T18:48:50Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44dde00a3ae562bbb5504d299475795cc38b22c2b6decba2ff15067bce7436df\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:39Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a14137bdfec242e37af20a572af2edea25fb1d8a1f9708f8d0193d1a7675b1bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imag
eID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a14137bdfec242e37af20a572af2edea25fb1d8a1f9708f8d0193d1a7675b1bc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:47:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-10T18:47:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:47:37Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 10 18:49:40 crc kubenswrapper[4861]: I0310 18:49:40.529188 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"message\\\":\\\"containers with 
unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 10 18:49:40 crc kubenswrapper[4861]: I0310 18:49:40.544482 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 10 18:49:40 crc kubenswrapper[4861]: I0310 18:49:40.565086 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-s2l62" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"be820cd7-b3a7-4183-a408-67151247b6ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:22Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:22Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging 
kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:22Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwtwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\
"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwtwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwtwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwtwj\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwtwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\
\"name\\\":\\\"kube-api-access-fwtwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":
\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwtwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwtwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwtwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:49:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-s2l62\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 10 18:49:40 crc kubenswrapper[4861]: I0310 18:49:40.580776 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 10 18:49:40 crc kubenswrapper[4861]: I0310 18:49:40.595176 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 10 18:49:40 crc kubenswrapper[4861]: I0310 18:49:40.610864 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-6lblg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d1c251f4-6539-4aa1-8979-47e74495aca3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2gvk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:49:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-6lblg\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 10 18:49:40 crc kubenswrapper[4861]: I0310 18:49:40.621984 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pzmsp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"66631e59-ce5c-44de-8a4d-37eb82acf997\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1293bf47ec5a041886f7065f624cec3b882bb62394a2c448e138d5f7bc3a1c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\"
,\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pm4t5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:49:27Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pzmsp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 10 18:49:40 crc kubenswrapper[4861]: I0310 18:49:40.631981 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rmmgv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e4302e6-1187-4371-8523-a96ec8032e74\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:33Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:33Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vh2b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vh2b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:49:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-rmmgv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 10 18:49:40 crc kubenswrapper[4861]: I0310 18:49:40.657886 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"952bf490-6587-4240-a832-feac082e7775\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf6fe1422055e59455ca71c1a22213f55312c80667526cf13dbf61f7ccea7c75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd7523ddf0ac38e3b59b767d7ef95f452f24c5914734d6f1ac0187f1bede5fcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b1d8f9a97293ae86fe0b8c9ab76600caf04291ec3ddd2e47a071805bef675da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://efd5039046658f4b595c747b6434cb0ef3befbf75874a6dab629cdbd6634524e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6234508294e7f87d3a8da0d3a1d8ecb96fffc3dbd5974e40a3bb7ceeee0d2a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34f62205737b2bb279cc7a0e2ffee213392c8135cca742b76e6f7ed44ddca754\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34f62205737b2bb279cc7a0e2ffee213392c8135cca742b76e6f7ed44ddca754\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-03-10T18:47:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-10T18:47:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://49cc08d4ed6e0fd87f4c957414e510f99511b9654a72324b707c165206c4979e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49cc08d4ed6e0fd87f4c957414e510f99511b9654a72324b707c165206c4979e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:47:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-10T18:47:39Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0995183765c106ba8369d3c05ac572596329eb1978876b770e327a430963ba9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0995183765c106ba8369d3c05ac572596329eb1978876b770e327a430963ba9b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:47:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-10T18:47:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:47:37Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 10 18:49:40 crc kubenswrapper[4861]: I0310 18:49:40.671028 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 10 18:49:40 crc kubenswrapper[4861]: I0310 18:49:40.683328 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qttbr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"771189c2-452d-4204-a0b7-abfe9ba62bd0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:21Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:21Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tng72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tng72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:49:21Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qttbr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 10 18:49:40 crc kubenswrapper[4861]: I0310 18:49:40.699937 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-j2s27" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"391f4bfa-b94c-4b25-8f06-a2f19f912194\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:21Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:21Z\\\",\\\"message\\\":\\\"containers with 
unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlddg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02cd3948c6ba0e290e32d64b2390da20510d66f20771b9bb0b98957e8900b124\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://02cd3948c6ba0e290e32d64b2390da20510d66f20771b9bb0b98957e8900b124\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:49:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-10T18:49:36Z\\\"}},\\\"volumeMount
s\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlddg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c010603b11cab781dd3c14d84121d60b8115bb415df567ef1c267406b9e6988\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c010603b11cab781dd3c14d84121d60b8115bb415df567ef1c267406b9e6988\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:49:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-10T18:49:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlddg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1ba322572d96fe13a888
15151416495decdd5f486eb30b3fd4345040c5b9d1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c1ba322572d96fe13a88815151416495decdd5f486eb30b3fd4345040c5b9d1d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:49:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-10T18:49:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlddg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f24e74b8fb41e5e765f1ba84b4f1efce28077537df837e4cc308e70b16139732\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f24e74b8fb41e5e765f1ba84b4f1efce28077537df837e4cc308e70b16139732\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:49:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedA
t\\\":\\\"2026-03-10T18:49:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlddg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlddg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kub
ernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlddg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:49:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-j2s27\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 10 18:49:40 crc kubenswrapper[4861]: I0310 18:49:40.714018 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"046b6ea2-2e19-4917-953a-eb8aca6d80cb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df0d545a88d84a8015bafb1061f16f446f63c97065dc88c869a33a24b8e3c22e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9
da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da2dbdc794693bb8da08c0fcc84531d65b468d135904b7055fb926fbd0cce95d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-10T18:48:03Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0310 18:47:39.105265 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. 
Worst graceful lease acquisition is {26s}.\\\\nI0310 18:47:39.107559 1 observer_polling.go:159] Starting file observer\\\\nI0310 18:47:39.140119 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0310 18:47:39.145777 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0310 18:48:03.686072 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0310 18:48:03.686208 1 cmd.go:179] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:48:03Z is after 
2026-02-23T05:33:13Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-10T18:47:38Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:48:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://73d88019bcd40296d2d693dfb1ce3bacd2e94ca10a114a00c75392df04099b33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://31bbed2d81ace88f31b763f3b4bed57db657bf9e78413b57b2f279b408e0b848\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:38Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b7156e2106372814a5e2b2816352ee308bb4fdffef5efe8da5bc39d3bc29398\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:47:37Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 10 18:49:40 crc kubenswrapper[4861]: I0310 18:49:40.727764 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 10 18:49:40 crc kubenswrapper[4861]: I0310 18:49:40.737072 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-b87lw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e474cdc9-b374-49a6-aece-afa19f8d5ee6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:21Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5t7pf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:49:21Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-b87lw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 10 18:49:41 crc kubenswrapper[4861]: I0310 18:49:41.489174 4861 generic.go:334] "Generic (PLEG): container finished" podID="391f4bfa-b94c-4b25-8f06-a2f19f912194" containerID="5f43980d19ca7b9caf61de8f3257dff536ff78a4cf530f6ec3cb7c91f1a3b5af" exitCode=0 Mar 10 18:49:41 crc kubenswrapper[4861]: I0310 18:49:41.489260 4861 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-j2s27" event={"ID":"391f4bfa-b94c-4b25-8f06-a2f19f912194","Type":"ContainerDied","Data":"5f43980d19ca7b9caf61de8f3257dff536ff78a4cf530f6ec3cb7c91f1a3b5af"} Mar 10 18:49:41 crc kubenswrapper[4861]: I0310 18:49:41.506137 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 10 18:49:41 crc kubenswrapper[4861]: I0310 18:49:41.522238 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 10 18:49:41 crc kubenswrapper[4861]: I0310 18:49:41.546207 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-s2l62" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"be820cd7-b3a7-4183-a408-67151247b6ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:22Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:22Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:22Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwtwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwtwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwtwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwtwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwtwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwtwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwtwj\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwtwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwtwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:49:22Z\\\"}}\" for 
pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-s2l62\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 10 18:49:41 crc kubenswrapper[4861]: I0310 18:49:41.558266 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-2rvxn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c06e51d0-e817-41ac-9d69-3ef2099f8ba8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7cjz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7cjz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:49:34Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-2rvxn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 10 18:49:41 crc kubenswrapper[4861]: I0310 18:49:41.576114 4861 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c65aa66-5db2-421b-ad46-0ff1e2b1cb22\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://52e87225b434b0800764a5c2306d8079c44bff105d02de78ab085b434f56031f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"
name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a8a6f58ea1d180f50a7ffde2b16f470901281a847bd85cc0bc8a62bbf9f8e70\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://484df0ad2e71b2faec0ed53537512b115e6d4ab7cd3212cd29309538bd013c51\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2dcc6ee4908d27fc63eb33149c6db9570b8524aab71a27fbb1e5c2fe4e97c52\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"qua
y.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1665ca49c2c451e187b70bfc13ce0034d2c07b92943b18e77ae09cd6e5505557\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-10T18:48:50Z\\\",\\\"message\\\":\\\"le observer\\\\nW0310 18:48:50.587139 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0310 18:48:50.587315 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0310 18:48:50.588407 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4176947974/tls.crt::/tmp/serving-cert-4176947974/tls.key\\\\\\\"\\\\nI0310 18:48:50.773986 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0310 18:48:50.776438 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0310 18:48:50.776455 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0310 18:48:50.776477 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0310 18:48:50.776482 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0310 18:48:50.783076 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0310 18:48:50.783088 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0310 18:48:50.783118 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0310 18:48:50.783134 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0310 18:48:50.783146 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0310 18:48:50.783157 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0310 18:48:50.783167 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0310 18:48:50.783173 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0310 18:48:50.784467 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-10T18:48:50Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44dde00a3ae562bbb5504d299475795cc38b22c2b6decba2ff15067bce7436df\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:39Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a14137bdfec242e37af20a572af2edea25fb1d8a1f9708f8d0193d1a7675b1bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imag
eID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a14137bdfec242e37af20a572af2edea25fb1d8a1f9708f8d0193d1a7675b1bc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:47:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-10T18:47:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:47:37Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 10 18:49:41 crc kubenswrapper[4861]: I0310 18:49:41.596653 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-6lblg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d1c251f4-6539-4aa1-8979-47e74495aca3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2gvk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:49:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-6lblg\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 10 18:49:41 crc kubenswrapper[4861]: I0310 18:49:41.608080 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pzmsp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"66631e59-ce5c-44de-8a4d-37eb82acf997\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1293bf47ec5a041886f7065f624cec3b882bb62394a2c448e138d5f7bc3a1c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\"
,\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pm4t5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:49:27Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pzmsp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 10 18:49:41 crc kubenswrapper[4861]: I0310 18:49:41.621539 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rmmgv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e4302e6-1187-4371-8523-a96ec8032e74\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:33Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:33Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vh2b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vh2b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:49:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-rmmgv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 10 18:49:41 crc kubenswrapper[4861]: I0310 18:49:41.637007 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 10 18:49:41 crc kubenswrapper[4861]: I0310 18:49:41.649862 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 10 18:49:41 crc kubenswrapper[4861]: I0310 18:49:41.665330 4861 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 10 18:49:41 crc kubenswrapper[4861]: I0310 18:49:41.677284 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qttbr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"771189c2-452d-4204-a0b7-abfe9ba62bd0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:21Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:21Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tng72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tng72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:49:21Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qttbr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 10 18:49:41 crc kubenswrapper[4861]: I0310 18:49:41.695104 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-j2s27" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"391f4bfa-b94c-4b25-8f06-a2f19f912194\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:21Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlddg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02cd3948c6ba0e290e32d64b2390da20510d66f20771b9bb0b98957e8900b124\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://02cd3948c6ba0e290e32d64b2390da20510d66f20771b9bb0b98957e8900b124\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:49:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-10T18:49:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlddg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c010603b11cab781dd3c14d84121d60b8115bb415df567ef1c267406b9e6988\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c010603b11cab781dd3c14d84121d60b8115bb415df567ef1c267406b9e6988\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:49:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-10T18:49:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlddg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1ba322572d96fe13a88815151416495decdd
5f486eb30b3fd4345040c5b9d1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c1ba322572d96fe13a88815151416495decdd5f486eb30b3fd4345040c5b9d1d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:49:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-10T18:49:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlddg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f24e74b8fb41e5e765f1ba84b4f1efce28077537df837e4cc308e70b16139732\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f24e74b8fb41e5e765f1ba84b4f1efce28077537df837e4cc308e70b16139732\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:49:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-0
3-10T18:49:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlddg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f43980d19ca7b9caf61de8f3257dff536ff78a4cf530f6ec3cb7c91f1a3b5af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5f43980d19ca7b9caf61de8f3257dff536ff78a4cf530f6ec3cb7c91f1a3b5af\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:49:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-10T18:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlddg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"l
astState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlddg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:49:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-j2s27\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 10 18:49:41 crc kubenswrapper[4861]: I0310 18:49:41.722596 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"952bf490-6587-4240-a832-feac082e7775\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf6fe1422055e59455ca71c1a22213f55312c80667526cf13dbf61f7ccea7c75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd7523ddf0ac38e3b59b767d7ef95f452f24c5914734d6f1ac0187f1bede5fcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b1d8f9a97293ae86fe0b8c9ab76600caf04291ec3ddd2e47a071805bef675da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://efd5039046658f4b595c747b6434cb0ef3befbf75874a6dab629cdbd6634524e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6234508294e7f87d3a8da0d3a1d8ecb96fffc3dbd5974e40a3bb7ceeee0d2a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34f62205737b2bb279cc7a0e2ffee213392c8135cca742b76e6f7ed44ddca754\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34f62205737b2bb279cc7a0e2ffee213392c8135cca742b76e6f7ed44ddca754\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-03-10T18:47:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-10T18:47:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://49cc08d4ed6e0fd87f4c957414e510f99511b9654a72324b707c165206c4979e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49cc08d4ed6e0fd87f4c957414e510f99511b9654a72324b707c165206c4979e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:47:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-10T18:47:39Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0995183765c106ba8369d3c05ac572596329eb1978876b770e327a430963ba9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0995183765c106ba8369d3c05ac572596329eb1978876b770e327a430963ba9b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:47:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-10T18:47:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:47:37Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 10 18:49:41 crc kubenswrapper[4861]: I0310 18:49:41.735120 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-b87lw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e474cdc9-b374-49a6-aece-afa19f8d5ee6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:21Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5t7pf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:49:21Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-b87lw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 10 18:49:41 crc kubenswrapper[4861]: I0310 18:49:41.749995 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"046b6ea2-2e19-4917-953a-eb8aca6d80cb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df0d545a88d84a8015bafb1061f16f446f63c97065dc88c869a33a24b8e3c22e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da2dbdc794693bb8da08c0fcc84531d65b468d135904b7055fb926fbd0cce95d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-10T18:48:03Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager 
-v=2\\\\nI0310 18:47:39.105265 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0310 18:47:39.107559 1 observer_polling.go:159] Starting file observer\\\\nI0310 18:47:39.140119 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0310 18:47:39.145777 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0310 18:48:03.686072 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0310 18:48:03.686208 1 cmd.go:179] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:48:03Z is after 
2026-02-23T05:33:13Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-10T18:47:38Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:48:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://73d88019bcd40296d2d693dfb1ce3bacd2e94ca10a114a00c75392df04099b33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://31bbed2d81ace88f31b763f3b4bed57db657bf9e78413b57b2f279b408e0b848\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:38Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b7156e2106372814a5e2b2816352ee308bb4fdffef5efe8da5bc39d3bc29398\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:47:37Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 10 18:49:41 crc kubenswrapper[4861]: I0310 18:49:41.763481 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 10 18:49:41 crc kubenswrapper[4861]: I0310 18:49:41.957767 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 10 18:49:41 crc kubenswrapper[4861]: I0310 18:49:41.957815 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2rvxn" Mar 10 18:49:41 crc kubenswrapper[4861]: I0310 18:49:41.957775 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 10 18:49:41 crc kubenswrapper[4861]: E0310 18:49:41.957941 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 10 18:49:41 crc kubenswrapper[4861]: I0310 18:49:41.958038 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 10 18:49:41 crc kubenswrapper[4861]: E0310 18:49:41.958134 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 10 18:49:41 crc kubenswrapper[4861]: E0310 18:49:41.958055 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2rvxn" podUID="c06e51d0-e817-41ac-9d69-3ef2099f8ba8" Mar 10 18:49:41 crc kubenswrapper[4861]: E0310 18:49:41.958323 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 10 18:49:42 crc kubenswrapper[4861]: E0310 18:49:42.106141 4861 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Mar 10 18:49:42 crc kubenswrapper[4861]: I0310 18:49:42.154012 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c06e51d0-e817-41ac-9d69-3ef2099f8ba8-metrics-certs\") pod \"network-metrics-daemon-2rvxn\" (UID: \"c06e51d0-e817-41ac-9d69-3ef2099f8ba8\") " pod="openshift-multus/network-metrics-daemon-2rvxn" Mar 10 18:49:42 crc kubenswrapper[4861]: E0310 18:49:42.154213 4861 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Mar 10 18:49:42 crc kubenswrapper[4861]: E0310 18:49:42.154293 4861 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c06e51d0-e817-41ac-9d69-3ef2099f8ba8-metrics-certs podName:c06e51d0-e817-41ac-9d69-3ef2099f8ba8 nodeName:}" failed. No retries permitted until 2026-03-10 18:49:50.154271399 +0000 UTC m=+133.917707389 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/c06e51d0-e817-41ac-9d69-3ef2099f8ba8-metrics-certs") pod "network-metrics-daemon-2rvxn" (UID: "c06e51d0-e817-41ac-9d69-3ef2099f8ba8") : object "openshift-multus"/"metrics-daemon-secret" not registered Mar 10 18:49:42 crc kubenswrapper[4861]: I0310 18:49:42.499420 4861 generic.go:334] "Generic (PLEG): container finished" podID="391f4bfa-b94c-4b25-8f06-a2f19f912194" containerID="2779899b3339467fdfac0db5ea7b8e4f306ca22205f2c44cc85a2df7fc2cc066" exitCode=0 Mar 10 18:49:42 crc kubenswrapper[4861]: I0310 18:49:42.499539 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-j2s27" event={"ID":"391f4bfa-b94c-4b25-8f06-a2f19f912194","Type":"ContainerDied","Data":"2779899b3339467fdfac0db5ea7b8e4f306ca22205f2c44cc85a2df7fc2cc066"} Mar 10 18:49:42 crc kubenswrapper[4861]: I0310 18:49:42.518921 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 10 18:49:42 crc kubenswrapper[4861]: I0310 18:49:42.548334 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-s2l62" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"be820cd7-b3a7-4183-a408-67151247b6ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:22Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:22Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:22Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwtwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwtwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwtwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwtwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwtwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwtwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwtwj\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwtwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwtwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:49:22Z\\\"}}\" for 
pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-s2l62\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 10 18:49:42 crc kubenswrapper[4861]: I0310 18:49:42.561571 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-2rvxn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c06e51d0-e817-41ac-9d69-3ef2099f8ba8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7cjz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7cjz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:49:34Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-2rvxn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 10 18:49:42 crc kubenswrapper[4861]: I0310 18:49:42.581126 4861 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c65aa66-5db2-421b-ad46-0ff1e2b1cb22\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://52e87225b434b0800764a5c2306d8079c44bff105d02de78ab085b434f56031f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"
name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a8a6f58ea1d180f50a7ffde2b16f470901281a847bd85cc0bc8a62bbf9f8e70\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://484df0ad2e71b2faec0ed53537512b115e6d4ab7cd3212cd29309538bd013c51\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2dcc6ee4908d27fc63eb33149c6db9570b8524aab71a27fbb1e5c2fe4e97c52\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"qua
y.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1665ca49c2c451e187b70bfc13ce0034d2c07b92943b18e77ae09cd6e5505557\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-10T18:48:50Z\\\",\\\"message\\\":\\\"le observer\\\\nW0310 18:48:50.587139 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0310 18:48:50.587315 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0310 18:48:50.588407 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4176947974/tls.crt::/tmp/serving-cert-4176947974/tls.key\\\\\\\"\\\\nI0310 18:48:50.773986 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0310 18:48:50.776438 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0310 18:48:50.776455 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0310 18:48:50.776477 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0310 18:48:50.776482 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0310 18:48:50.783076 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0310 18:48:50.783088 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0310 18:48:50.783118 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0310 18:48:50.783134 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0310 18:48:50.783146 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0310 18:48:50.783157 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0310 18:48:50.783167 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0310 18:48:50.783173 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0310 18:48:50.784467 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-10T18:48:50Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44dde00a3ae562bbb5504d299475795cc38b22c2b6decba2ff15067bce7436df\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:39Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a14137bdfec242e37af20a572af2edea25fb1d8a1f9708f8d0193d1a7675b1bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imag
eID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a14137bdfec242e37af20a572af2edea25fb1d8a1f9708f8d0193d1a7675b1bc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:47:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-10T18:47:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:47:37Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 10 18:49:42 crc kubenswrapper[4861]: I0310 18:49:42.601396 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"message\\\":\\\"containers with 
unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 10 18:49:42 crc kubenswrapper[4861]: I0310 18:49:42.613821 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pzmsp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"66631e59-ce5c-44de-8a4d-37eb82acf997\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1293bf47ec5a041886f7065f624cec3b882bb62394a2c448e138d5f7bc3a1c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pm4t5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:49:27Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pzmsp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 10 18:49:42 crc kubenswrapper[4861]: I0310 18:49:42.626485 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rmmgv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e4302e6-1187-4371-8523-a96ec8032e74\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:33Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:33Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vh2b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vh2b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:49:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-rmmgv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 10 18:49:42 crc kubenswrapper[4861]: I0310 18:49:42.642896 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 10 18:49:42 crc kubenswrapper[4861]: I0310 18:49:42.656156 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 10 18:49:42 crc kubenswrapper[4861]: I0310 18:49:42.670468 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-6lblg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d1c251f4-6539-4aa1-8979-47e74495aca3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2gvk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:49:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-6lblg\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 10 18:49:42 crc kubenswrapper[4861]: I0310 18:49:42.683039 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qttbr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"771189c2-452d-4204-a0b7-abfe9ba62bd0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:21Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:21Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tng72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tng72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:49:21Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-qttbr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 10 18:49:42 crc kubenswrapper[4861]: I0310 18:49:42.702816 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-j2s27" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"391f4bfa-b94c-4b25-8f06-a2f19f912194\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlddg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02cd3948c6ba0e290e32d64b2390da20510d66f20771b9bb0b98957e8900b124\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://02cd3948c6ba0e290e32d64b2390da20510d66f20771b9bb0b98957e8900b124\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:49:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-10T18:49:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlddg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c010603b11cab781dd3c14d84121d60b8115bb415df567ef1c267406b9e6988\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c010603b11cab781dd3c14d84121d60b8115bb415df567ef1c267406b9e6988\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:49:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-10T18:49:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlddg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1ba322572d96fe13a88815151416495decdd5f486eb30b3fd4345040c5b9d1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c1ba322572d96fe13a88815151416495decdd5f486eb30b3fd4345040c5b9d1d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:49:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-10T18:49:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlddg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f24e74b8fb41e5e765f1ba84b4f1efce28077537df837e4cc308e70b16139732\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f24e74b8fb41e5e765f1ba84b4f1efce28077537df837e4cc308e70b16139732\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:49:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-10T18:49:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlddg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f43980d19ca7b9caf61de8f3257dff536ff78a4cf530f6ec3cb7c91f1a3b5af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5f43980d19ca7b9caf61de8f3257dff536ff78a4cf530f6ec3cb7c91f1a3b5af\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:49:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-10T18:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlddg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2779899b3339467fdfac0db5ea7b8e4f306ca22205f2c44cc85a2df7fc2cc066\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"
ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2779899b3339467fdfac0db5ea7b8e4f306ca22205f2c44cc85a2df7fc2cc066\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:49:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-10T18:49:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlddg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:49:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-j2s27\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 10 18:49:42 crc kubenswrapper[4861]: I0310 18:49:42.732351 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"952bf490-6587-4240-a832-feac082e7775\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf6fe1422055e59455ca71c1a22213f55312c80667526cf13dbf61f7ccea7c75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd7523ddf0ac38e3b59b767d7ef95f452f24c5914734d6f1ac0187f1bede5fcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b1d8f9a97293ae86fe0b8c9ab76600caf04291ec3ddd2e47a071805bef675da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://efd5039046658f4b595c747b6434cb0ef3befbf75874a6dab629cdbd6634524e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6234508294e7f87d3a8da0d3a1d8ecb96fffc3dbd5974e40a3bb7ceeee0d2a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34f62205737b2bb279cc7a0e2ffee213392c8135cca742b76e6f7ed44ddca754\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34f62205737b2bb279cc7a0e2ffee213392c8135cca742b76e6f7ed44ddca754\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-03-10T18:47:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-10T18:47:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://49cc08d4ed6e0fd87f4c957414e510f99511b9654a72324b707c165206c4979e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49cc08d4ed6e0fd87f4c957414e510f99511b9654a72324b707c165206c4979e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:47:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-10T18:47:39Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0995183765c106ba8369d3c05ac572596329eb1978876b770e327a430963ba9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0995183765c106ba8369d3c05ac572596329eb1978876b770e327a430963ba9b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:47:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-10T18:47:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:47:37Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 10 18:49:42 crc kubenswrapper[4861]: I0310 18:49:42.746452 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 10 18:49:42 crc kubenswrapper[4861]: I0310 18:49:42.757813 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"046b6ea2-2e19-4917-953a-eb8aca6d80cb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df0d545a88d84a8015bafb1061f16f446f63c97065dc88c869a33a24b8e3c22e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da2dbdc794693bb8da08c0fcc84531d65b468d135904b7055fb926fbd0cce95d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-10T18:48:03Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager 
-v=2\\\\nI0310 18:47:39.105265 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0310 18:47:39.107559 1 observer_polling.go:159] Starting file observer\\\\nI0310 18:47:39.140119 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0310 18:47:39.145777 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0310 18:48:03.686072 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0310 18:48:03.686208 1 cmd.go:179] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:48:03Z is after 
2026-02-23T05:33:13Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-10T18:47:38Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:48:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://73d88019bcd40296d2d693dfb1ce3bacd2e94ca10a114a00c75392df04099b33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://31bbed2d81ace88f31b763f3b4bed57db657bf9e78413b57b2f279b408e0b848\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:38Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b7156e2106372814a5e2b2816352ee308bb4fdffef5efe8da5bc39d3bc29398\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:47:37Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 10 18:49:42 crc kubenswrapper[4861]: I0310 18:49:42.769581 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 10 18:49:42 crc kubenswrapper[4861]: I0310 18:49:42.778489 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-b87lw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e474cdc9-b374-49a6-aece-afa19f8d5ee6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:21Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5t7pf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:49:21Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-b87lw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 10 18:49:43 crc kubenswrapper[4861]: I0310 18:49:43.508852 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-j2s27" 
event={"ID":"391f4bfa-b94c-4b25-8f06-a2f19f912194","Type":"ContainerStarted","Data":"bbf0e311a00fea576e9f2c739cd2d87aedfc06aaef5cf0c68374bd67241888b1"} Mar 10 18:49:43 crc kubenswrapper[4861]: I0310 18:49:43.527562 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c65aa66-5db2-421b-ad46-0ff1e2b1cb22\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://52e87225b434b0800764a5c2306d8079c44bff105d02de78ab085b434f56031f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a8a6f58ea1d180f50a7ffde2b16f470901281a847bd85cc0bc8a62bbf9f8e70\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://484df0ad2e71b2faec0ed53537512b115e6d4ab7cd3212cd29309538bd013c51\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2dcc6ee4908d27fc63eb33149c6db9570b8524aab71a27fbb1e5c2fe4e97c52\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1665ca49c2c451e187b70bfc13ce0034d2c07b92943b18e77ae09cd6e5505557\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-10T18:48:50Z\\\",\\\"message\\\":\\\"le observer\\\\nW0310 18:48:50.587139 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0310 18:48:50.587315 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0310 18:48:50.588407 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-4176947974/tls.crt::/tmp/serving-cert-4176947974/tls.key\\\\\\\"\\\\nI0310 18:48:50.773986 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0310 18:48:50.776438 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0310 18:48:50.776455 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0310 18:48:50.776477 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0310 18:48:50.776482 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0310 18:48:50.783076 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0310 18:48:50.783088 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0310 18:48:50.783118 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0310 18:48:50.783134 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0310 18:48:50.783146 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0310 18:48:50.783157 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0310 18:48:50.783167 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0310 18:48:50.783173 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0310 18:48:50.784467 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-10T18:48:50Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44dde00a3ae562bbb5504d299475795cc38b22c2b6decba2ff15067bce7436df\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:39Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a14137bdfec242e37af20a572af2edea25fb1d8a1f9708f8d0193d1a7675b1bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a14137bdfec242e37af20a572af2edea25fb1d8a1f9708f8d0193d1a7675b1bc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:47:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-03-10T18:47:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:47:37Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 10 18:49:43 crc kubenswrapper[4861]: I0310 18:49:43.542937 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the 
pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 10 18:49:43 crc kubenswrapper[4861]: I0310 18:49:43.557479 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 10 18:49:43 crc kubenswrapper[4861]: I0310 18:49:43.586551 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-s2l62" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"be820cd7-b3a7-4183-a408-67151247b6ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:22Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:22Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:22Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwtwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwtwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwtwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwtwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwtwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwtwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwtwj\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwtwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwtwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:49:22Z\\\"}}\" for 
pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-s2l62\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 10 18:49:43 crc kubenswrapper[4861]: I0310 18:49:43.598488 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-2rvxn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c06e51d0-e817-41ac-9d69-3ef2099f8ba8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7cjz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7cjz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:49:34Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-2rvxn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 10 18:49:43 crc kubenswrapper[4861]: I0310 18:49:43.613425 4861 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 10 18:49:43 crc kubenswrapper[4861]: I0310 18:49:43.626186 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 10 18:49:43 crc kubenswrapper[4861]: I0310 18:49:43.640649 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-6lblg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d1c251f4-6539-4aa1-8979-47e74495aca3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2gvk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:49:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-6lblg\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 10 18:49:43 crc kubenswrapper[4861]: I0310 18:49:43.650404 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pzmsp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"66631e59-ce5c-44de-8a4d-37eb82acf997\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1293bf47ec5a041886f7065f624cec3b882bb62394a2c448e138d5f7bc3a1c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\"
,\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pm4t5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:49:27Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pzmsp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 10 18:49:43 crc kubenswrapper[4861]: I0310 18:49:43.662774 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rmmgv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e4302e6-1187-4371-8523-a96ec8032e74\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:33Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:33Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vh2b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vh2b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:49:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-rmmgv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 10 18:49:43 crc kubenswrapper[4861]: I0310 18:49:43.688797 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"952bf490-6587-4240-a832-feac082e7775\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf6fe1422055e59455ca71c1a22213f55312c80667526cf13dbf61f7ccea7c75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd7523ddf0ac38e3b59b767d7ef95f452f24c5914734d6f1ac0187f1bede5fcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b1d8f9a97293ae86fe0b8c9ab76600caf04291ec3ddd2e47a071805bef675da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://efd5039046658f4b595c747b6434cb0ef3befbf75874a6dab629cdbd6634524e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6234508294e7f87d3a8da0d3a1d8ecb96fffc3dbd5974e40a3bb7ceeee0d2a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34f62205737b2bb279cc7a0e2ffee213392c8135cca742b76e6f7ed44ddca754\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34f62205737b2bb279cc7a0e2ffee213392c8135cca742b76e6f7ed44ddca754\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-03-10T18:47:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-10T18:47:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://49cc08d4ed6e0fd87f4c957414e510f99511b9654a72324b707c165206c4979e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49cc08d4ed6e0fd87f4c957414e510f99511b9654a72324b707c165206c4979e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:47:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-10T18:47:39Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0995183765c106ba8369d3c05ac572596329eb1978876b770e327a430963ba9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0995183765c106ba8369d3c05ac572596329eb1978876b770e327a430963ba9b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:47:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-10T18:47:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:47:37Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 10 18:49:43 crc kubenswrapper[4861]: I0310 18:49:43.703596 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 10 18:49:43 crc kubenswrapper[4861]: I0310 18:49:43.719027 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qttbr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"771189c2-452d-4204-a0b7-abfe9ba62bd0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:21Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:21Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tng72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tng72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:49:21Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qttbr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 10 18:49:43 crc kubenswrapper[4861]: I0310 18:49:43.738909 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-j2s27" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"391f4bfa-b94c-4b25-8f06-a2f19f912194\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bbf0e311a00fea576e9f2c739cd2d87aedfc06aaef5cf0c68374bd67241888b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlddg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02cd3948c6ba0e290e32d64b2390da20510d66f20771b9bb0b98957e8900b124\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://02cd3948c6ba0e290e32d64b2390da20510d66f20771b9bb0b98957e8900b124\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:49:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-10T18:49:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlddg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c010603b11cab781dd3c14d84121d60b8115bb415df567ef1c267406b9e6988\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c010603b11cab781dd3c14d84121d60b8115bb415df567ef1c267406b9e6988\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:49:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-10T18:49:37Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlddg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1ba322572d96fe13a88815151416495decdd5f486eb30b3fd4345040c5b9d1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c1ba322572d96fe13a88815151416495decdd5f486eb30b3fd4345040c5b9d1d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:49:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-10T18:49:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlddg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f24e7
4b8fb41e5e765f1ba84b4f1efce28077537df837e4cc308e70b16139732\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f24e74b8fb41e5e765f1ba84b4f1efce28077537df837e4cc308e70b16139732\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:49:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-10T18:49:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlddg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f43980d19ca7b9caf61de8f3257dff536ff78a4cf530f6ec3cb7c91f1a3b5af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5f43980d19ca7b9caf61de8f3257dff536ff78a4cf530f6ec3cb7c91f1a3b5af\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:49:41Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-03-10T18:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlddg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2779899b3339467fdfac0db5ea7b8e4f306ca22205f2c44cc85a2df7fc2cc066\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2779899b3339467fdfac0db5ea7b8e4f306ca22205f2c44cc85a2df7fc2cc066\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:49:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-10T18:49:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlddg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:49:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-j2s27\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 10 18:49:43 crc kubenswrapper[4861]: I0310 18:49:43.753794 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"046b6ea2-2e19-4917-953a-eb8aca6d80cb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df0d545a88d84a8015bafb1061f16f446f63c97065dc88c869a33a24b8e3c22e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da2dbdc794693bb8da08c0fcc84531d65b468d135904b7055fb926fbd0cce95d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-10T18:48:03Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport 
= 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0310 18:47:39.105265 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0310 18:47:39.107559 1 observer_polling.go:159] Starting file observer\\\\nI0310 18:47:39.140119 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0310 18:47:39.145777 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0310 18:48:03.686072 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0310 18:48:03.686208 1 cmd.go:179] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:48:03Z is after 
2026-02-23T05:33:13Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-10T18:47:38Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:48:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://73d88019bcd40296d2d693dfb1ce3bacd2e94ca10a114a00c75392df04099b33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://31bbed2d81ace88f31b763f3b4bed57db657bf9e78413b57b2f279b408e0b848\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:38Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b7156e2106372814a5e2b2816352ee308bb4fdffef5efe8da5bc39d3bc29398\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:47:37Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 10 18:49:43 crc kubenswrapper[4861]: I0310 18:49:43.768610 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 10 18:49:43 crc kubenswrapper[4861]: I0310 18:49:43.779860 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-b87lw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e474cdc9-b374-49a6-aece-afa19f8d5ee6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:21Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5t7pf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:49:21Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-b87lw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 10 18:49:43 crc kubenswrapper[4861]: I0310 18:49:43.957640 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2rvxn" Mar 10 18:49:43 crc kubenswrapper[4861]: I0310 18:49:43.957657 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 10 18:49:43 crc kubenswrapper[4861]: E0310 18:49:43.958079 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2rvxn" podUID="c06e51d0-e817-41ac-9d69-3ef2099f8ba8" Mar 10 18:49:43 crc kubenswrapper[4861]: I0310 18:49:43.957866 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 10 18:49:43 crc kubenswrapper[4861]: E0310 18:49:43.958610 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 10 18:49:43 crc kubenswrapper[4861]: E0310 18:49:43.958261 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 10 18:49:43 crc kubenswrapper[4861]: I0310 18:49:43.957800 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 10 18:49:43 crc kubenswrapper[4861]: E0310 18:49:43.959170 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 10 18:49:45 crc kubenswrapper[4861]: I0310 18:49:45.498413 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:49:45 crc kubenswrapper[4861]: I0310 18:49:45.498498 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:49:45 crc kubenswrapper[4861]: I0310 18:49:45.498518 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:49:45 crc kubenswrapper[4861]: I0310 18:49:45.498543 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 10 18:49:45 crc kubenswrapper[4861]: I0310 18:49:45.498562 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:49:45Z","lastTransitionTime":"2026-03-10T18:49:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 10 18:49:45 crc kubenswrapper[4861]: E0310 18:49:45.516633 4861 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-10T18:49:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-10T18:49:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:45Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-10T18:49:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-10T18:49:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:45Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"19532032-9073-404f-bda8-c4343aa30670\\\",\\\"systemUUID\\\":\\\"b4ef8d49-23f5-4cae-bbac-08586c607b9d\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 10 18:49:45 crc kubenswrapper[4861]: I0310 18:49:45.521061 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"e3fabebda74f7f7605bc2f261bbf64acac1fe4ec65a267beef027413b454bee3"} Mar 10 18:49:45 crc kubenswrapper[4861]: I0310 18:49:45.521143 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"dbb86a65e5c4fda1cd29396eb9ad02739f5123519d07aef2eea3354f4a65571d"} Mar 10 18:49:45 crc kubenswrapper[4861]: I0310 18:49:45.524810 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:49:45 crc kubenswrapper[4861]: I0310 18:49:45.524868 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:49:45 crc kubenswrapper[4861]: I0310 18:49:45.524893 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:49:45 crc kubenswrapper[4861]: I0310 18:49:45.524922 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 10 18:49:45 crc kubenswrapper[4861]: I0310 18:49:45.524945 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:49:45Z","lastTransitionTime":"2026-03-10T18:49:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI 
configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 10 18:49:45 crc kubenswrapper[4861]: E0310 18:49:45.545986 4861 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-10T18:49:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-10T18:49:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:45Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-10T18:49:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-10T18:49:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:45Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"19532032-9073-404f-bda8-c4343aa30670\\\",\\\"systemUUID\\\":\\\"b4ef8d49-23f5-4cae-bbac-08586c607b9d\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 10 18:49:45 crc kubenswrapper[4861]: I0310 18:49:45.546505 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"046b6ea2-2e19-4917-953a-eb8aca6d80cb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df0d545a88d84a8015bafb1061f16f446f63c97065dc88c869a33a24b8e3c22e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da2dbdc794693bb8da08c0fcc84531d65b468d135904b7055fb926fbd0cce95d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-10T18:48:03Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while 
[ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0310 18:47:39.105265 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0310 18:47:39.107559 1 observer_polling.go:159] Starting file observer\\\\nI0310 18:47:39.140119 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0310 18:47:39.145777 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0310 18:48:03.686072 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0310 18:48:03.686208 1 cmd.go:179] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:48:03Z is after 
2026-02-23T05:33:13Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-10T18:47:38Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:48:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://73d88019bcd40296d2d693dfb1ce3bacd2e94ca10a114a00c75392df04099b33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://31bbed2d81ace88f31b763f3b4bed57db657bf9e78413b57b2f279b408e0b848\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:38Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b7156e2106372814a5e2b2816352ee308bb4fdffef5efe8da5bc39d3bc29398\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:47:37Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 10 18:49:45 crc kubenswrapper[4861]: I0310 18:49:45.552369 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:49:45 crc kubenswrapper[4861]: I0310 18:49:45.552428 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:49:45 crc 
kubenswrapper[4861]: I0310 18:49:45.552446 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:49:45 crc kubenswrapper[4861]: I0310 18:49:45.552470 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 10 18:49:45 crc kubenswrapper[4861]: I0310 18:49:45.552500 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:49:45Z","lastTransitionTime":"2026-03-10T18:49:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 10 18:49:45 crc kubenswrapper[4861]: I0310 18:49:45.579350 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 10 18:49:45 crc kubenswrapper[4861]: E0310 18:49:45.596102 4861 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-10T18:49:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-10T18:49:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:45Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-10T18:49:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-10T18:49:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:45Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"19532032-9073-404f-bda8-c4343aa30670\\\",\\\"systemUUID\\\":\\\"b4ef8d49-23f5-4cae-bbac-08586c607b9d\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 10 18:49:45 crc kubenswrapper[4861]: I0310 18:49:45.598383 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-b87lw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e474cdc9-b374-49a6-aece-afa19f8d5ee6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:21Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:21Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been 
read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5t7pf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:49:21Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-b87lw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 10 18:49:45 crc kubenswrapper[4861]: I0310 18:49:45.601889 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:49:45 crc kubenswrapper[4861]: I0310 18:49:45.601943 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:49:45 crc kubenswrapper[4861]: I0310 18:49:45.601961 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:49:45 crc kubenswrapper[4861]: I0310 18:49:45.601988 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 10 18:49:45 crc kubenswrapper[4861]: I0310 18:49:45.602007 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:49:45Z","lastTransitionTime":"2026-03-10T18:49:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file 
in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 10 18:49:45 crc kubenswrapper[4861]: E0310 18:49:45.634425 4861 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-10T18:49:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-10T18:49:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:45Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-10T18:49:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-10T18:49:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:45Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"19532032-9073-404f-bda8-c4343aa30670\\\",\\\"systemUUID\\\":\\\"b4ef8d49-23f5-4cae-bbac-08586c607b9d\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 10 18:49:45 crc kubenswrapper[4861]: I0310 18:49:45.637407 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-s2l62" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"be820cd7-b3a7-4183-a408-67151247b6ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:22Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:22Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:22Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwtwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwtwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwtwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwtwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwtwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwtwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwtwj\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwtwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwtwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:49:22Z\\\"}}\" for 
pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-s2l62\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 10 18:49:45 crc kubenswrapper[4861]: I0310 18:49:45.638630 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:49:45 crc kubenswrapper[4861]: I0310 18:49:45.638650 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:49:45 crc kubenswrapper[4861]: I0310 18:49:45.638660 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:49:45 crc kubenswrapper[4861]: I0310 18:49:45.638674 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 10 18:49:45 crc kubenswrapper[4861]: I0310 18:49:45.638683 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:49:45Z","lastTransitionTime":"2026-03-10T18:49:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 10 18:49:45 crc kubenswrapper[4861]: I0310 18:49:45.645313 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-2rvxn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c06e51d0-e817-41ac-9d69-3ef2099f8ba8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7cjz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7cjz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:49:34Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-2rvxn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 10 18:49:45 crc kubenswrapper[4861]: E0310 18:49:45.654012 4861 kubelet_node_status.go:585] "Error updating node status, 
will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-10T18:49:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-10T18:49:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:45Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-10T18:49:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-10T18:49:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:45Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"19532032-9073-404f-bda8-c4343aa30670\\\",\\\"systemUUID\\\":\\\"b4ef8d49-23f5-4cae-bbac-08586c607b9d\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:49:45Z is after 2025-08-24T17:21:41Z" Mar 10 18:49:45 crc kubenswrapper[4861]: E0310 18:49:45.654117 4861 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Mar 10 18:49:45 crc kubenswrapper[4861]: I0310 18:49:45.656269 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c65aa66-5db2-421b-ad46-0ff1e2b1cb22\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://52e87225b434b0800764a5c2306d8079c44bff105d02de78ab085b434f56031f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a8a6f58ea1d180f50a7ffde2b16f470901281a847bd85cc0bc8a62bbf9f8e70\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://484df0ad2e71b2faec0ed53537512b115e6d4ab7cd3212cd29309538bd013c51\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2dcc6ee4908d27fc63eb33149c6db9570b8524aab71a27fbb1e5c2fe4e97c52\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1665ca49c2c451e187b70bfc13ce0034d2c07b92943b18e77ae09cd6e5505557\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-10T18:48:50Z\\\",\\\"message\\\":\\\"le observer\\\\nW0310 18:48:50.587139 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0310 18:48:50.587315 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0310 18:48:50.588407 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-4176947974/tls.crt::/tmp/serving-cert-4176947974/tls.key\\\\\\\"\\\\nI0310 18:48:50.773986 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0310 18:48:50.776438 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0310 18:48:50.776455 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0310 18:48:50.776477 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0310 18:48:50.776482 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0310 18:48:50.783076 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0310 18:48:50.783088 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0310 18:48:50.783118 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0310 18:48:50.783134 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0310 18:48:50.783146 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0310 18:48:50.783157 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0310 18:48:50.783167 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0310 18:48:50.783173 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0310 18:48:50.784467 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-10T18:48:50Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44dde00a3ae562bbb5504d299475795cc38b22c2b6decba2ff15067bce7436df\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:39Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a14137bdfec242e37af20a572af2edea25fb1d8a1f9708f8d0193d1a7675b1bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a14137bdfec242e37af20a572af2edea25fb1d8a1f9708f8d0193d1a7675b1bc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:47:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-03-10T18:47:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:47:37Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:49:45Z is after 2025-08-24T17:21:41Z" Mar 10 18:49:45 crc kubenswrapper[4861]: I0310 18:49:45.672903 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:49:45Z is after 2025-08-24T17:21:41Z" Mar 10 18:49:45 crc kubenswrapper[4861]: I0310 18:49:45.685201 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:49:45Z is after 2025-08-24T17:21:41Z" Mar 10 18:49:45 crc kubenswrapper[4861]: I0310 18:49:45.700321 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rmmgv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e4302e6-1187-4371-8523-a96ec8032e74\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:33Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:33Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vh2b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vh2b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:49:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-rmmgv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:49:45Z is after 2025-08-24T17:21:41Z" Mar 10 18:49:45 crc kubenswrapper[4861]: I0310 18:49:45.714335 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3fabebda74f7f7605bc2f261bbf64acac1fe4ec65a267beef027413b454bee3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dbb86a65e5c4fda1cd29396eb9ad02739f5123519d07aef2eea3354f4a65571d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:49:45Z is after 2025-08-24T17:21:41Z" Mar 10 18:49:45 crc kubenswrapper[4861]: I0310 18:49:45.730352 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:49:45Z is after 2025-08-24T17:21:41Z" Mar 10 18:49:45 crc kubenswrapper[4861]: I0310 18:49:45.743780 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-6lblg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d1c251f4-6539-4aa1-8979-47e74495aca3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2gvk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:49:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-6lblg\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:49:45Z is after 2025-08-24T17:21:41Z" Mar 10 18:49:45 crc kubenswrapper[4861]: I0310 18:49:45.755154 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pzmsp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"66631e59-ce5c-44de-8a4d-37eb82acf997\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1293bf47ec5a041886f7065f624cec3b882bb62394a2c448e138d5f7bc3a1c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\
\":\\\"2026-03-10T18:49:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pm4t5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:49:27Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pzmsp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:49:45Z is after 2025-08-24T17:21:41Z" Mar 10 18:49:45 crc kubenswrapper[4861]: I0310 18:49:45.776648 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-j2s27" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"391f4bfa-b94c-4b25-8f06-a2f19f912194\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bbf0e311a00fea576e9f2c739cd2d87aedfc06aaef5cf0c68374bd67241888b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlddg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02cd3948c6ba0e290e32d64b2390da20510d66f20771b9bb0b98957e8900b124\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://02cd3948c6ba0e290e32d64b2390da20510d66f20771b9bb0b98957e8900b124\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:49:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-10T18:49:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlddg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c010603b11cab781dd3c14d84121d60b8115bb415df567ef1c267406b9e6988\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c010603b11cab781dd3c14d84121d60b8115bb415df567ef1c267406b9e6988\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:49:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-10T18:49:37Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlddg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1ba322572d96fe13a88815151416495decdd5f486eb30b3fd4345040c5b9d1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c1ba322572d96fe13a88815151416495decdd5f486eb30b3fd4345040c5b9d1d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:49:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-10T18:49:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlddg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f24e7
4b8fb41e5e765f1ba84b4f1efce28077537df837e4cc308e70b16139732\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f24e74b8fb41e5e765f1ba84b4f1efce28077537df837e4cc308e70b16139732\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:49:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-10T18:49:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlddg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f43980d19ca7b9caf61de8f3257dff536ff78a4cf530f6ec3cb7c91f1a3b5af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5f43980d19ca7b9caf61de8f3257dff536ff78a4cf530f6ec3cb7c91f1a3b5af\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:49:41Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-03-10T18:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlddg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2779899b3339467fdfac0db5ea7b8e4f306ca22205f2c44cc85a2df7fc2cc066\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2779899b3339467fdfac0db5ea7b8e4f306ca22205f2c44cc85a2df7fc2cc066\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:49:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-10T18:49:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlddg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:49:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-j2s27\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:49:45Z is after 2025-08-24T17:21:41Z" Mar 10 18:49:45 crc kubenswrapper[4861]: I0310 18:49:45.803039 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"952bf490-6587-4240-a832-feac082e7775\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf6fe1422055e59455ca71c1a22213f55312c80667526cf13dbf61f7ccea7c75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests
\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd7523ddf0ac38e3b59b767d7ef95f452f24c5914734d6f1ac0187f1bede5fcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b1d8f9a97293ae86fe0b8c9ab76600caf04291ec3ddd2e47a071805bef675da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://efd5039046658f4b5
95c747b6434cb0ef3befbf75874a6dab629cdbd6634524e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6234508294e7f87d3a8da0d3a1d8ecb96fffc3dbd5974e40a3bb7ceeee0d2a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34f62205737b2bb279cc7a0e2ffee213392c8135cca742b76e6f7ed44ddca754\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e3
3e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34f62205737b2bb279cc7a0e2ffee213392c8135cca742b76e6f7ed44ddca754\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:47:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-10T18:47:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://49cc08d4ed6e0fd87f4c957414e510f99511b9654a72324b707c165206c4979e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49cc08d4ed6e0fd87f4c957414e510f99511b9654a72324b707c165206c4979e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:47:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-10T18:47:39Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0995183765c106ba8369d3c05ac572596329eb1978876b770e327a430963ba9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\
":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0995183765c106ba8369d3c05ac572596329eb1978876b770e327a430963ba9b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:47:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-10T18:47:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:47:37Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:49:45Z is after 2025-08-24T17:21:41Z" Mar 10 18:49:45 crc kubenswrapper[4861]: I0310 18:49:45.819820 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:49:45Z is after 2025-08-24T17:21:41Z" Mar 10 18:49:45 crc kubenswrapper[4861]: I0310 18:49:45.837920 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qttbr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"771189c2-452d-4204-a0b7-abfe9ba62bd0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:21Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:21Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tng72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tng72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:49:21Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qttbr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:49:45Z is after 2025-08-24T17:21:41Z" Mar 10 18:49:45 crc kubenswrapper[4861]: I0310 18:49:45.959111 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 10 18:49:45 crc kubenswrapper[4861]: E0310 18:49:45.959249 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 10 18:49:45 crc kubenswrapper[4861]: I0310 18:49:45.960204 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2rvxn" Mar 10 18:49:45 crc kubenswrapper[4861]: E0310 18:49:45.960330 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2rvxn" podUID="c06e51d0-e817-41ac-9d69-3ef2099f8ba8" Mar 10 18:49:45 crc kubenswrapper[4861]: I0310 18:49:45.961464 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 10 18:49:45 crc kubenswrapper[4861]: E0310 18:49:45.961574 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 10 18:49:45 crc kubenswrapper[4861]: I0310 18:49:45.961782 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 10 18:49:45 crc kubenswrapper[4861]: E0310 18:49:45.962218 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 10 18:49:46 crc kubenswrapper[4861]: I0310 18:49:46.526826 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-6lblg" event={"ID":"d1c251f4-6539-4aa1-8979-47e74495aca3","Type":"ContainerStarted","Data":"3736c6e9da4ea1e91d7046c054bf885a5319e339f205e71fde5cc9cfa5d630ee"} Mar 10 18:49:46 crc kubenswrapper[4861]: I0310 18:49:46.530960 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-b87lw" event={"ID":"e474cdc9-b374-49a6-aece-afa19f8d5ee6","Type":"ContainerStarted","Data":"1f06ce6a80200c39385f53697c2f3fbbec862effc04ddf9e0499d9c20761a54e"} Mar 10 18:49:46 crc kubenswrapper[4861]: I0310 18:49:46.534813 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rmmgv" event={"ID":"2e4302e6-1187-4371-8523-a96ec8032e74","Type":"ContainerStarted","Data":"9439c03c3e58276508f7abe48482c4c69521049dbad5ea7a2e7e7379c8c58914"} Mar 10 18:49:46 crc kubenswrapper[4861]: I0310 18:49:46.537137 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qttbr" 
event={"ID":"771189c2-452d-4204-a0b7-abfe9ba62bd0","Type":"ContainerStarted","Data":"10b29575454354d2a034781fdd40e9972ddd08328eea1a3bb89377fcab4181cd"} Mar 10 18:49:46 crc kubenswrapper[4861]: I0310 18:49:46.537183 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qttbr" event={"ID":"771189c2-452d-4204-a0b7-abfe9ba62bd0","Type":"ContainerStarted","Data":"11c0ae40f0d210a82350ba0ada7a3c9f35595826a4e6f3d5619230238d00111b"} Mar 10 18:49:46 crc kubenswrapper[4861]: I0310 18:49:46.539263 4861 generic.go:334] "Generic (PLEG): container finished" podID="be820cd7-b3a7-4183-a408-67151247b6ee" containerID="09c620ba70d91a84cf6910e413d790eee8d6427dec9a39be3b706400fcaab656" exitCode=0 Mar 10 18:49:46 crc kubenswrapper[4861]: I0310 18:49:46.539315 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-s2l62" event={"ID":"be820cd7-b3a7-4183-a408-67151247b6ee","Type":"ContainerDied","Data":"09c620ba70d91a84cf6910e413d790eee8d6427dec9a39be3b706400fcaab656"} Mar 10 18:49:46 crc kubenswrapper[4861]: I0310 18:49:46.543935 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qttbr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"771189c2-452d-4204-a0b7-abfe9ba62bd0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:21Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:21Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tng72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tng72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:49:21Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qttbr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:49:46Z is after 2025-08-24T17:21:41Z" Mar 10 18:49:46 crc kubenswrapper[4861]: I0310 18:49:46.568102 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-j2s27" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"391f4bfa-b94c-4b25-8f06-a2f19f912194\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bbf0e311a00fea576e9f2c739cd2d87aedfc06aaef5cf0c68374bd67241888b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlddg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02cd3948c6ba0e290e32d64b2390da20510d6
6f20771b9bb0b98957e8900b124\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://02cd3948c6ba0e290e32d64b2390da20510d66f20771b9bb0b98957e8900b124\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:49:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-10T18:49:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlddg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c010603b11cab781dd3c14d84121d60b8115bb415df567ef1c267406b9e6988\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c010603b11cab781dd3c14d84121d60b8115bb415df567ef1c267406b9e6988\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:49:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"20
26-03-10T18:49:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlddg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1ba322572d96fe13a88815151416495decdd5f486eb30b3fd4345040c5b9d1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c1ba322572d96fe13a88815151416495decdd5f486eb30b3fd4345040c5b9d1d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:49:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-10T18:49:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlddg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"conta
inerID\\\":\\\"cri-o://f24e74b8fb41e5e765f1ba84b4f1efce28077537df837e4cc308e70b16139732\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f24e74b8fb41e5e765f1ba84b4f1efce28077537df837e4cc308e70b16139732\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:49:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-10T18:49:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlddg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f43980d19ca7b9caf61de8f3257dff536ff78a4cf530f6ec3cb7c91f1a3b5af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5f43980d19ca7b9caf61de8f3257dff536ff78a4cf530f6ec3cb7c91f1a3b5af\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:49:4
1Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-10T18:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlddg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2779899b3339467fdfac0db5ea7b8e4f306ca22205f2c44cc85a2df7fc2cc066\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2779899b3339467fdfac0db5ea7b8e4f306ca22205f2c44cc85a2df7fc2cc066\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:49:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-10T18:49:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlddg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:49:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-j2s27\": Internal 
error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:49:46Z is after 2025-08-24T17:21:41Z" Mar 10 18:49:46 crc kubenswrapper[4861]: I0310 18:49:46.597863 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"952bf490-6587-4240-a832-feac082e7775\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf6fe1422055e59455ca71c1a22213f55312c80667526cf13dbf61f7ccea7c75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/e
tc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd7523ddf0ac38e3b59b767d7ef95f452f24c5914734d6f1ac0187f1bede5fcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b1d8f9a97293ae86fe0b8c9ab76600caf04291ec3ddd2e47a071805bef675da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cr
i-o://efd5039046658f4b595c747b6434cb0ef3befbf75874a6dab629cdbd6634524e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6234508294e7f87d3a8da0d3a1d8ecb96fffc3dbd5974e40a3bb7ceeee0d2a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34f62205737b2bb279cc7a0e2ffee213392c8135cca742b76e6f7ed44ddca754\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7
c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34f62205737b2bb279cc7a0e2ffee213392c8135cca742b76e6f7ed44ddca754\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:47:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-10T18:47:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://49cc08d4ed6e0fd87f4c957414e510f99511b9654a72324b707c165206c4979e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49cc08d4ed6e0fd87f4c957414e510f99511b9654a72324b707c165206c4979e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:47:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-10T18:47:39Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0995183765c106ba8369d3c05ac572596329eb1978876b770e327a430963ba9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started
\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0995183765c106ba8369d3c05ac572596329eb1978876b770e327a430963ba9b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:47:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-10T18:47:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:47:37Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:49:46Z is after 2025-08-24T17:21:41Z" Mar 10 18:49:46 crc kubenswrapper[4861]: I0310 18:49:46.620062 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:49:46Z is after 2025-08-24T17:21:41Z" Mar 10 18:49:46 crc kubenswrapper[4861]: I0310 18:49:46.643092 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"046b6ea2-2e19-4917-953a-eb8aca6d80cb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df0d545a88d84a8015bafb1061f16f446f63c97065dc88c869a33a24b8e3c22e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da2dbdc794693bb8da08c0fcc84531d65b468d135904b7055fb926fbd0cce95d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-10T18:48:03Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager 
-v=2\\\\nI0310 18:47:39.105265 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0310 18:47:39.107559 1 observer_polling.go:159] Starting file observer\\\\nI0310 18:47:39.140119 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0310 18:47:39.145777 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0310 18:48:03.686072 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0310 18:48:03.686208 1 cmd.go:179] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:48:03Z is after 
2026-02-23T05:33:13Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-10T18:47:38Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:48:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://73d88019bcd40296d2d693dfb1ce3bacd2e94ca10a114a00c75392df04099b33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://31bbed2d81ace88f31b763f3b4bed57db657bf9e78413b57b2f279b408e0b848\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:38Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b7156e2106372814a5e2b2816352ee308bb4fdffef5efe8da5bc39d3bc29398\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:47:37Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:49:46Z is after 2025-08-24T17:21:41Z" Mar 10 18:49:46 crc kubenswrapper[4861]: I0310 18:49:46.662441 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:49:46Z is after 2025-08-24T17:21:41Z" Mar 10 18:49:46 crc kubenswrapper[4861]: I0310 18:49:46.680291 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-b87lw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e474cdc9-b374-49a6-aece-afa19f8d5ee6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:21Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5t7pf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:49:21Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-b87lw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:49:46Z is after 2025-08-24T17:21:41Z" Mar 10 18:49:46 crc kubenswrapper[4861]: I0310 18:49:46.703257 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:49:46Z is after 2025-08-24T17:21:41Z" Mar 10 18:49:46 crc kubenswrapper[4861]: I0310 18:49:46.741927 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-s2l62" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"be820cd7-b3a7-4183-a408-67151247b6ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:22Z\\\",\\\"message\\\":\\\"containers with incomplete status: 
[kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:22Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:22Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwtwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":fals
e,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwtwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwtwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/v
ar/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwtwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwtwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-op
envswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwtwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvs
witch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwtwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwtwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"las
tState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwtwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:49:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-s2l62\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:49:46Z is after 2025-08-24T17:21:41Z" Mar 10 18:49:46 crc kubenswrapper[4861]: I0310 18:49:46.759834 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-2rvxn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c06e51d0-e817-41ac-9d69-3ef2099f8ba8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7cjz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7cjz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:49:34Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-2rvxn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:49:46Z is after 2025-08-24T17:21:41Z" Mar 10 18:49:46 crc 
kubenswrapper[4861]: I0310 18:49:46.782586 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c65aa66-5db2-421b-ad46-0ff1e2b1cb22\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://52e87225b434b0800764a5c2306d8079c44bff105d02de78ab085b434f56031f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a8a6f58ea1d180f50a7ffde2b16f470901281a847bd85cc0bc8a62bbf9f8e70\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://484df0ad2e71b2faec0ed53537512b115e6d4ab7cd3212cd29309538bd013c51\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2dcc6ee4908d27fc63eb33149c6db9570b8524aab71a27fbb1e5c2fe4e97c52\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1665ca49c2c451e187b70bfc13ce0034d2c07b92943b18e77ae09cd6e5505557\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-10T18:48:50Z\\\",\\\"message\\\":\\\"le observer\\\\nW0310 18:48:50.587139 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0310 18:48:50.587315 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0310 18:48:50.588407 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-4176947974/tls.crt::/tmp/serving-cert-4176947974/tls.key\\\\\\\"\\\\nI0310 18:48:50.773986 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0310 18:48:50.776438 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0310 18:48:50.776455 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0310 18:48:50.776477 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0310 18:48:50.776482 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0310 18:48:50.783076 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0310 18:48:50.783088 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0310 18:48:50.783118 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0310 18:48:50.783134 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0310 18:48:50.783146 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0310 18:48:50.783157 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0310 18:48:50.783167 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0310 18:48:50.783173 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0310 18:48:50.784467 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-10T18:48:50Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44dde00a3ae562bbb5504d299475795cc38b22c2b6decba2ff15067bce7436df\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:39Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a14137bdfec242e37af20a572af2edea25fb1d8a1f9708f8d0193d1a7675b1bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a14137bdfec242e37af20a572af2edea25fb1d8a1f9708f8d0193d1a7675b1bc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:47:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-03-10T18:47:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:47:37Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:49:46Z is after 2025-08-24T17:21:41Z" Mar 10 18:49:46 crc kubenswrapper[4861]: I0310 18:49:46.801066 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:49:46Z is after 2025-08-24T17:21:41Z" Mar 10 18:49:46 crc kubenswrapper[4861]: I0310 18:49:46.816443 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pzmsp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"66631e59-ce5c-44de-8a4d-37eb82acf997\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1293bf47ec5a041886f7065f624cec3b882bb62394a2c448e138d5f7bc3a1c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pm4t5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:49:27Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pzmsp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:49:46Z is after 2025-08-24T17:21:41Z" Mar 10 18:49:46 crc kubenswrapper[4861]: I0310 18:49:46.830195 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rmmgv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e4302e6-1187-4371-8523-a96ec8032e74\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:33Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:33Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vh2b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vh2b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:49:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-rmmgv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:49:46Z is after 2025-08-24T17:21:41Z" Mar 10 18:49:46 crc kubenswrapper[4861]: I0310 18:49:46.845428 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3fabebda74f7f7605bc2f261bbf64acac1fe4ec65a267beef027413b454bee3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dbb86a65e5c4fda1cd29396eb9ad02739f5123519d07aef2eea3354f4a65571d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:49:46Z is after 2025-08-24T17:21:41Z" Mar 10 18:49:46 crc kubenswrapper[4861]: I0310 18:49:46.861987 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:49:46Z is after 2025-08-24T17:21:41Z" Mar 10 18:49:46 crc kubenswrapper[4861]: I0310 18:49:46.883887 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-6lblg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d1c251f4-6539-4aa1-8979-47e74495aca3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3736c6e9da4ea1e91d7046c054bf885a5319e339f205e71fde5cc9cfa5d630ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2gvk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:49:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-6lblg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:49:46Z is after 2025-08-24T17:21:41Z" Mar 10 18:49:46 crc kubenswrapper[4861]: I0310 18:49:46.905424 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c65aa66-5db2-421b-ad46-0ff1e2b1cb22\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://52e87225b434b0800764a5c2306d8079c44bff105d02de78ab085b434f56031f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserv
er\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a8a6f58ea1d180f50a7ffde2b16f470901281a847bd85cc0bc8a62bbf9f8e70\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://484df0ad2e71b2faec0ed53537512b115e6d4ab7cd3212cd29309538bd013c51\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2dcc6ee4908d27fc63eb33149c6db9570b8524aab71a27fbb1e5c2fe4e97c52\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc
276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1665ca49c2c451e187b70bfc13ce0034d2c07b92943b18e77ae09cd6e5505557\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-10T18:48:50Z\\\",\\\"message\\\":\\\"le observer\\\\nW0310 18:48:50.587139 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0310 18:48:50.587315 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0310 18:48:50.588407 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4176947974/tls.crt::/tmp/serving-cert-4176947974/tls.key\\\\\\\"\\\\nI0310 18:48:50.773986 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0310 18:48:50.776438 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0310 18:48:50.776455 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0310 18:48:50.776477 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0310 18:48:50.776482 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0310 18:48:50.783076 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0310 18:48:50.783088 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0310 18:48:50.783118 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0310 18:48:50.783134 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0310 18:48:50.783146 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0310 
18:48:50.783157 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0310 18:48:50.783167 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0310 18:48:50.783173 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0310 18:48:50.784467 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-10T18:48:50Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44dde00a3ae562bbb5504d299475795cc38b22c2b6decba2ff15067bce7436df\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:39Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a14137bdfec242e37af20a572af2edea25fb1d8a1f9708f8d0193d1a7675b1bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc
35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a14137bdfec242e37af20a572af2edea25fb1d8a1f9708f8d0193d1a7675b1bc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:47:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-10T18:47:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:47:37Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:49:46Z is after 2025-08-24T17:21:41Z" Mar 10 18:49:46 crc kubenswrapper[4861]: I0310 18:49:46.942780 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:49:46Z is after 2025-08-24T17:21:41Z" Mar 10 18:49:46 crc kubenswrapper[4861]: I0310 18:49:46.965091 4861 
status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:49:46Z is after 2025-08-24T17:21:41Z" Mar 10 18:49:46 crc kubenswrapper[4861]: I0310 18:49:46.987187 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-s2l62" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"be820cd7-b3a7-4183-a408-67151247b6ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:22Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node 
kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:22Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwtwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\
":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwtwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwtwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwtwj\\\",\\\"readOnly\\\":true,\
\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwtwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-fwtwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\
"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwtwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwtwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09c620ba70d91a84cf6910e413d790eee8d6427dec9a39be3b706400fcaab656\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"starte
d\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09c620ba70d91a84cf6910e413d790eee8d6427dec9a39be3b706400fcaab656\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:49:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-10T18:49:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwtwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:49:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-s2l62\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:49:46Z is after 2025-08-24T17:21:41Z" Mar 10 18:49:47 crc kubenswrapper[4861]: I0310 18:49:47.000007 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-2rvxn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c06e51d0-e817-41ac-9d69-3ef2099f8ba8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7cjz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7cjz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:49:34Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-2rvxn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:49:46Z is after 2025-08-24T17:21:41Z" Mar 10 18:49:47 crc 
kubenswrapper[4861]: I0310 18:49:47.014993 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3fabebda74f7f7605bc2f261bbf64acac1fe4ec65a267beef027413b454bee3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dbb86a65e5c4fda1cd29396eb9ad02739f5123519d07aef2eea3354f4a65571d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:49:47Z is after 2025-08-24T17:21:41Z" Mar 10 18:49:47 crc kubenswrapper[4861]: I0310 18:49:47.034673 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:49:47Z is after 2025-08-24T17:21:41Z" Mar 10 18:49:47 crc kubenswrapper[4861]: I0310 18:49:47.052153 4861 status_manager.go:875] "Failed 
to update status for pod" pod="openshift-multus/multus-6lblg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d1c251f4-6539-4aa1-8979-47e74495aca3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3736c6e9da4ea1e91d7046c054bf885a5319e339f205e71fde5cc9cfa5d630ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"m
ountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2gvk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:49:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-6lblg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:49:47Z is after 2025-08-24T17:21:41Z" Mar 10 18:49:47 crc kubenswrapper[4861]: I0310 18:49:47.066773 4861 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-image-registry/node-ca-pzmsp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"66631e59-ce5c-44de-8a4d-37eb82acf997\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1293bf47ec5a041886f7065f624cec3b882bb62394a2c448e138d5f7bc3a1c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pm4t5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"
hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:49:27Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pzmsp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:49:47Z is after 2025-08-24T17:21:41Z" Mar 10 18:49:47 crc kubenswrapper[4861]: I0310 18:49:47.079194 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rmmgv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e4302e6-1187-4371-8523-a96ec8032e74\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9439c03c3e58276508f7abe48482c4c69521049dbad5ea7a2e7e7379c8c58914\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d6
6438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vh2b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://77513d5a93f447545a8b2aef460c75d3d73de32163650349268f203e1ade4b4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vh2b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:49:33Z\\\"}}
\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-rmmgv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:49:47Z is after 2025-08-24T17:21:41Z" Mar 10 18:49:47 crc kubenswrapper[4861]: I0310 18:49:47.102099 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"952bf490-6587-4240-a832-feac082e7775\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf6fe1422055e59455ca71c1a22213f55312c80667526cf13dbf61f7ccea7c75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\
"startedAt\\\":\\\"2026-03-10T18:47:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd7523ddf0ac38e3b59b767d7ef95f452f24c5914734d6f1ac0187f1bede5fcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b1d8f9a97293ae86fe0b8c9ab76600caf04291ec3ddd2e47a071805bef675da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc
/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://efd5039046658f4b595c747b6434cb0ef3befbf75874a6dab629cdbd6634524e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6234508294e7f87d3a8da0d3a1d8ecb96fffc3dbd5974e40a3bb7ceeee0d2a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34f62205737b2bb279cc7a0e2ffee213392c8135cca742b76e6f
7ed44ddca754\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34f62205737b2bb279cc7a0e2ffee213392c8135cca742b76e6f7ed44ddca754\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:47:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-10T18:47:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://49cc08d4ed6e0fd87f4c957414e510f99511b9654a72324b707c165206c4979e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49cc08d4ed6e0fd87f4c957414e510f99511b9654a72324b707c165206c4979e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:47:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-10T18:47:39Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0995183765c106ba8369d3c05ac572596329eb1978876b770e327a430963ba9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\
\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0995183765c106ba8369d3c05ac572596329eb1978876b770e327a430963ba9b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:47:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-10T18:47:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:47:37Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:49:47Z is after 2025-08-24T17:21:41Z" Mar 10 18:49:47 crc kubenswrapper[4861]: I0310 18:49:47.113202 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:49:47Z is after 2025-08-24T17:21:41Z" Mar 10 18:49:47 crc kubenswrapper[4861]: E0310 18:49:47.115270 4861 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Mar 10 18:49:47 crc kubenswrapper[4861]: I0310 18:49:47.125144 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qttbr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"771189c2-452d-4204-a0b7-abfe9ba62bd0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10b29575454354d2a034781fdd40e9972ddd08328eea1a3bb89377fcab4181cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/
kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tng72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11c0ae40f0d210a82350ba0ada7a3c9f35595826a4e6f3d5619230238d00111b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tng72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:49:21Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qttbr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:49:47Z is after 2025-08-24T17:21:41Z" Mar 10 18:49:47 crc kubenswrapper[4861]: I0310 18:49:47.140949 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-j2s27" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"391f4bfa-b94c-4b25-8f06-a2f19f912194\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bbf0e311a00fea576e9f2c739cd2d87aedfc06aaef5cf0c68374bd67241888b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlddg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02cd3948c6ba0e290e32d64b2390da20510d66f20771b9bb0b98957e8900b124\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://02cd3948c6ba0e290e32d64b2390da20510d66f20771b9bb0b98957e8900b124\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:49:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-10T18:49:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlddg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c010603b11cab781dd3c14d84121d60b8115bb415df567ef1c267406b9e6988\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c010603b11cab781dd3c14d84121d60b8115bb415df567ef1c267406b9e6988\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:49:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-10T18:49:37Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlddg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1ba322572d96fe13a88815151416495decdd5f486eb30b3fd4345040c5b9d1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c1ba322572d96fe13a88815151416495decdd5f486eb30b3fd4345040c5b9d1d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:49:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-10T18:49:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlddg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f24e7
4b8fb41e5e765f1ba84b4f1efce28077537df837e4cc308e70b16139732\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f24e74b8fb41e5e765f1ba84b4f1efce28077537df837e4cc308e70b16139732\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:49:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-10T18:49:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlddg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f43980d19ca7b9caf61de8f3257dff536ff78a4cf530f6ec3cb7c91f1a3b5af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5f43980d19ca7b9caf61de8f3257dff536ff78a4cf530f6ec3cb7c91f1a3b5af\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:49:41Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-03-10T18:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlddg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2779899b3339467fdfac0db5ea7b8e4f306ca22205f2c44cc85a2df7fc2cc066\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2779899b3339467fdfac0db5ea7b8e4f306ca22205f2c44cc85a2df7fc2cc066\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:49:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-10T18:49:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlddg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:49:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-j2s27\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:49:47Z is after 2025-08-24T17:21:41Z" Mar 10 18:49:47 crc kubenswrapper[4861]: I0310 18:49:47.158128 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"046b6ea2-2e19-4917-953a-eb8aca6d80cb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df0d545a88d84a8015bafb1061f16f446f63c97065dc88c869a33a24b8e3c22e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da2dbdc794693bb8da08c0fcc84531d65b468d135904b7055fb926fbd0cce95d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-10T18:48:03Z\\\",\\\"message\\\
":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0310 18:47:39.105265 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0310 18:47:39.107559 1 observer_polling.go:159] Starting file observer\\\\nI0310 18:47:39.140119 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0310 18:47:39.145777 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0310 18:48:03.686072 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0310 18:48:03.686208 1 cmd.go:179] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:48:03Z is after 
2026-02-23T05:33:13Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-10T18:47:38Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:48:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://73d88019bcd40296d2d693dfb1ce3bacd2e94ca10a114a00c75392df04099b33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://31bbed2d81ace88f31b763f3b4bed57db657bf9e78413b57b2f279b408e0b848\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:38Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b7156e2106372814a5e2b2816352ee308bb4fdffef5efe8da5bc39d3bc29398\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:47:37Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:49:47Z is after 2025-08-24T17:21:41Z" Mar 10 18:49:47 crc kubenswrapper[4861]: I0310 18:49:47.170286 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:49:47Z is after 2025-08-24T17:21:41Z" Mar 10 18:49:47 crc kubenswrapper[4861]: I0310 18:49:47.186956 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-b87lw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e474cdc9-b374-49a6-aece-afa19f8d5ee6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f06ce6a80200c39385f53697c2f3fbbec862effc04ddf9e0499d9c20761a54e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5t7pf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:49:21Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-b87lw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:49:47Z is after 2025-08-24T17:21:41Z" Mar 10 18:49:47 crc kubenswrapper[4861]: I0310 18:49:47.201113 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:49:47Z is after 2025-08-24T17:21:41Z" Mar 10 18:49:47 crc kubenswrapper[4861]: I0310 18:49:47.217225 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qttbr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"771189c2-452d-4204-a0b7-abfe9ba62bd0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10b29575454354d2a034781fdd40e9972ddd08328eea1a3bb89377fcab4181cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tng72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11c0ae40f0d210a82350ba0ada7a3c9f35595826
a4e6f3d5619230238d00111b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tng72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:49:21Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qttbr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:49:47Z is after 2025-08-24T17:21:41Z" Mar 10 18:49:47 crc kubenswrapper[4861]: I0310 18:49:47.234301 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-j2s27" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"391f4bfa-b94c-4b25-8f06-a2f19f912194\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bbf0e311a00fea576e9f2c739cd2d87aedfc06aaef5cf0c68374bd67241888b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlddg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02cd3948c6ba0e290e32d64b2390da20510d66f20771b9bb0b98957e8900b124\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://02cd3948c6ba0e290e32d64b2390da20510d66f20771b9bb0b98957e8900b124\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:49:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-10T18:49:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlddg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c010603b11cab781dd3c14d84121d60b8115bb415df567ef1c267406b9e6988\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c010603b11cab781dd3c14d84121d60b8115bb415df567ef1c267406b9e6988\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:49:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-10T18:49:37Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlddg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1ba322572d96fe13a88815151416495decdd5f486eb30b3fd4345040c5b9d1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c1ba322572d96fe13a88815151416495decdd5f486eb30b3fd4345040c5b9d1d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:49:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-10T18:49:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlddg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f24e7
4b8fb41e5e765f1ba84b4f1efce28077537df837e4cc308e70b16139732\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f24e74b8fb41e5e765f1ba84b4f1efce28077537df837e4cc308e70b16139732\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:49:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-10T18:49:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlddg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f43980d19ca7b9caf61de8f3257dff536ff78a4cf530f6ec3cb7c91f1a3b5af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5f43980d19ca7b9caf61de8f3257dff536ff78a4cf530f6ec3cb7c91f1a3b5af\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:49:41Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-03-10T18:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlddg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2779899b3339467fdfac0db5ea7b8e4f306ca22205f2c44cc85a2df7fc2cc066\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2779899b3339467fdfac0db5ea7b8e4f306ca22205f2c44cc85a2df7fc2cc066\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:49:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-10T18:49:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlddg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:49:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-j2s27\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:49:47Z is after 2025-08-24T17:21:41Z" Mar 10 18:49:47 crc kubenswrapper[4861]: I0310 18:49:47.255873 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"952bf490-6587-4240-a832-feac082e7775\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf6fe1422055e59455ca71c1a22213f55312c80667526cf13dbf61f7ccea7c75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests
\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd7523ddf0ac38e3b59b767d7ef95f452f24c5914734d6f1ac0187f1bede5fcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b1d8f9a97293ae86fe0b8c9ab76600caf04291ec3ddd2e47a071805bef675da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://efd5039046658f4b5
95c747b6434cb0ef3befbf75874a6dab629cdbd6634524e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6234508294e7f87d3a8da0d3a1d8ecb96fffc3dbd5974e40a3bb7ceeee0d2a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34f62205737b2bb279cc7a0e2ffee213392c8135cca742b76e6f7ed44ddca754\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e3
3e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34f62205737b2bb279cc7a0e2ffee213392c8135cca742b76e6f7ed44ddca754\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:47:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-10T18:47:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://49cc08d4ed6e0fd87f4c957414e510f99511b9654a72324b707c165206c4979e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49cc08d4ed6e0fd87f4c957414e510f99511b9654a72324b707c165206c4979e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:47:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-10T18:47:39Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0995183765c106ba8369d3c05ac572596329eb1978876b770e327a430963ba9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\
":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0995183765c106ba8369d3c05ac572596329eb1978876b770e327a430963ba9b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:47:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-10T18:47:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:47:37Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:49:47Z is after 2025-08-24T17:21:41Z" Mar 10 18:49:47 crc kubenswrapper[4861]: I0310 18:49:47.276241 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-b87lw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e474cdc9-b374-49a6-aece-afa19f8d5ee6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f06ce6a80200c39385f53697c2f3fbbec862effc04ddf9e0499d9c20761a54e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5t7pf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:49:21Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-b87lw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:49:47Z is after 2025-08-24T17:21:41Z" Mar 10 18:49:47 crc kubenswrapper[4861]: I0310 18:49:47.291182 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"046b6ea2-2e19-4917-953a-eb8aca6d80cb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df0d545a88d84a8015bafb1061f16f446f63c97065dc88c869a33a24b8e3c22e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"c
ontainerID\\\":\\\"cri-o://da2dbdc794693bb8da08c0fcc84531d65b468d135904b7055fb926fbd0cce95d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-10T18:48:03Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0310 18:47:39.105265 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0310 18:47:39.107559 1 observer_polling.go:159] Starting file observer\\\\nI0310 18:47:39.140119 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0310 18:47:39.145777 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0310 18:48:03.686072 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0310 18:48:03.686208 1 cmd.go:179] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:48:03Z is after 
2026-02-23T05:33:13Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-10T18:47:38Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:48:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://73d88019bcd40296d2d693dfb1ce3bacd2e94ca10a114a00c75392df04099b33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://31bbed2d81ace88f31b763f3b4bed57db657bf9e78413b57b2f279b408e0b848\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:38Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b7156e2106372814a5e2b2816352ee308bb4fdffef5efe8da5bc39d3bc29398\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:47:37Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:49:47Z is after 2025-08-24T17:21:41Z" Mar 10 18:49:47 crc kubenswrapper[4861]: I0310 18:49:47.309616 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:49:47Z is after 2025-08-24T17:21:41Z" Mar 10 18:49:47 crc kubenswrapper[4861]: I0310 18:49:47.323971 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:49:47Z is after 2025-08-24T17:21:41Z" Mar 10 18:49:47 crc kubenswrapper[4861]: I0310 18:49:47.338227 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:49:47Z is after 2025-08-24T17:21:41Z" Mar 10 18:49:47 crc kubenswrapper[4861]: I0310 18:49:47.357098 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-s2l62" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"be820cd7-b3a7-4183-a408-67151247b6ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:22Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node 
kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:22Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwtwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\
":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwtwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwtwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwtwj\\\",\\\"readOnly\\\":true,\
\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwtwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-fwtwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\
"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwtwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwtwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09c620ba70d91a84cf6910e413d790eee8d6427dec9a39be3b706400fcaab656\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"starte
d\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09c620ba70d91a84cf6910e413d790eee8d6427dec9a39be3b706400fcaab656\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:49:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-10T18:49:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwtwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:49:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-s2l62\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:49:47Z is after 2025-08-24T17:21:41Z" Mar 10 18:49:47 crc kubenswrapper[4861]: I0310 18:49:47.368971 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-2rvxn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c06e51d0-e817-41ac-9d69-3ef2099f8ba8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7cjz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7cjz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:49:34Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-2rvxn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:49:47Z is after 2025-08-24T17:21:41Z" Mar 10 18:49:47 crc 
kubenswrapper[4861]: I0310 18:49:47.382624 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c65aa66-5db2-421b-ad46-0ff1e2b1cb22\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://52e87225b434b0800764a5c2306d8079c44bff105d02de78ab085b434f56031f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a8a6f58ea1d180f50a7ffde2b16f470901281a847bd85cc0bc8a62bbf9f8e70\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://484df0ad2e71b2faec0ed53537512b115e6d4ab7cd3212cd29309538bd013c51\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2dcc6ee4908d27fc63eb33149c6db9570b8524aab71a27fbb1e5c2fe4e97c52\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1665ca49c2c451e187b70bfc13ce0034d2c07b92943b18e77ae09cd6e5505557\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-10T18:48:50Z\\\",\\\"message\\\":\\\"le observer\\\\nW0310 18:48:50.587139 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0310 18:48:50.587315 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0310 18:48:50.588407 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-4176947974/tls.crt::/tmp/serving-cert-4176947974/tls.key\\\\\\\"\\\\nI0310 18:48:50.773986 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0310 18:48:50.776438 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0310 18:48:50.776455 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0310 18:48:50.776477 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0310 18:48:50.776482 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0310 18:48:50.783076 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0310 18:48:50.783088 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0310 18:48:50.783118 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0310 18:48:50.783134 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0310 18:48:50.783146 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0310 18:48:50.783157 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0310 18:48:50.783167 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0310 18:48:50.783173 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0310 18:48:50.784467 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-10T18:48:50Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44dde00a3ae562bbb5504d299475795cc38b22c2b6decba2ff15067bce7436df\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:39Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a14137bdfec242e37af20a572af2edea25fb1d8a1f9708f8d0193d1a7675b1bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a14137bdfec242e37af20a572af2edea25fb1d8a1f9708f8d0193d1a7675b1bc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:47:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-03-10T18:47:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:47:37Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:49:47Z is after 2025-08-24T17:21:41Z" Mar 10 18:49:47 crc kubenswrapper[4861]: I0310 18:49:47.401310 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-6lblg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d1c251f4-6539-4aa1-8979-47e74495aca3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3736c6e9da4ea1e91d7046c054bf885a5319e339f205e71fde5cc9cfa5d630ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-d
ev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2gvk\\\",\\\
"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:49:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-6lblg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:49:47Z is after 2025-08-24T17:21:41Z" Mar 10 18:49:47 crc kubenswrapper[4861]: I0310 18:49:47.413063 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pzmsp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"66631e59-ce5c-44de-8a4d-37eb82acf997\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1293bf47ec5a041886f7065f624cec3b882bb62394a2c448e138d5f7bc3a1c2\\\",\\\"image\\\":\\\"quay.io/openshift-release
-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pm4t5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:49:27Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pzmsp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:49:47Z is after 2025-08-24T17:21:41Z" Mar 10 18:49:47 crc kubenswrapper[4861]: I0310 18:49:47.424263 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rmmgv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e4302e6-1187-4371-8523-a96ec8032e74\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9439c03c3e58276508f7abe48482c4c69521049dbad5ea7a2e7e7379c8c58914\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vh2b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://77513d5a93f447545a8b2aef460c75d3d73de
32163650349268f203e1ade4b4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vh2b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:49:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-rmmgv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:49:47Z is after 2025-08-24T17:21:41Z" Mar 10 18:49:47 crc kubenswrapper[4861]: I0310 18:49:47.440633 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3fabebda74f7f7605bc2f261bbf64acac1fe4ec65a267beef027413b454bee3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dbb86a65e5c4fda1cd29396eb9ad02739f5123519d07aef2eea3354f4a65571d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:49:47Z is after 2025-08-24T17:21:41Z" Mar 10 18:49:47 crc kubenswrapper[4861]: I0310 18:49:47.455694 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:49:47Z is after 2025-08-24T17:21:41Z" Mar 10 18:49:47 crc kubenswrapper[4861]: I0310 18:49:47.548380 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-s2l62" 
event={"ID":"be820cd7-b3a7-4183-a408-67151247b6ee","Type":"ContainerStarted","Data":"e44e4837f8dba12dfb18ef2200a19f696222668eb9b10819d1c3b442a28f5e32"} Mar 10 18:49:47 crc kubenswrapper[4861]: I0310 18:49:47.548457 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-s2l62" event={"ID":"be820cd7-b3a7-4183-a408-67151247b6ee","Type":"ContainerStarted","Data":"e7bd3993ae4ddabc6c06a91127afc341760a07401ce3a409612824c0045bb6f2"} Mar 10 18:49:47 crc kubenswrapper[4861]: I0310 18:49:47.548476 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-s2l62" event={"ID":"be820cd7-b3a7-4183-a408-67151247b6ee","Type":"ContainerStarted","Data":"22c9526135e4d6c3ef5cdf25b06f60556a876ace6c81593534be08cfd6a54cdf"} Mar 10 18:49:47 crc kubenswrapper[4861]: I0310 18:49:47.548494 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-s2l62" event={"ID":"be820cd7-b3a7-4183-a408-67151247b6ee","Type":"ContainerStarted","Data":"0ae3cd6b9ef5ede85a70dd7bcf4eb260cc357bfceeb571904e68788eaba0709c"} Mar 10 18:49:47 crc kubenswrapper[4861]: I0310 18:49:47.548511 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-s2l62" event={"ID":"be820cd7-b3a7-4183-a408-67151247b6ee","Type":"ContainerStarted","Data":"be4f9c8096f4981a65522e8ee451980e580153c1f5c65c736655fb94593dbd97"} Mar 10 18:49:47 crc kubenswrapper[4861]: I0310 18:49:47.551571 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rmmgv" event={"ID":"2e4302e6-1187-4371-8523-a96ec8032e74","Type":"ContainerStarted","Data":"77513d5a93f447545a8b2aef460c75d3d73de32163650349268f203e1ade4b4e"} Mar 10 18:49:47 crc kubenswrapper[4861]: I0310 18:49:47.958043 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-2rvxn" Mar 10 18:49:47 crc kubenswrapper[4861]: I0310 18:49:47.958073 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 10 18:49:47 crc kubenswrapper[4861]: I0310 18:49:47.958087 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 10 18:49:47 crc kubenswrapper[4861]: E0310 18:49:47.958646 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 10 18:49:47 crc kubenswrapper[4861]: E0310 18:49:47.958418 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2rvxn" podUID="c06e51d0-e817-41ac-9d69-3ef2099f8ba8" Mar 10 18:49:47 crc kubenswrapper[4861]: I0310 18:49:47.958181 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 10 18:49:47 crc kubenswrapper[4861]: E0310 18:49:47.958884 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 10 18:49:47 crc kubenswrapper[4861]: E0310 18:49:47.958957 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 10 18:49:48 crc kubenswrapper[4861]: I0310 18:49:48.560781 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-s2l62" event={"ID":"be820cd7-b3a7-4183-a408-67151247b6ee","Type":"ContainerStarted","Data":"edf854dc22368e4b8f76e0111b30784f22832fc69661b4db8d2c9c33aa553773"} Mar 10 18:49:49 crc kubenswrapper[4861]: I0310 18:49:49.569042 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"22ae4cf4d7c980d5db6180fb412216b50b2ddb8f25ea2a5fb1e034d47ddfafac"} Mar 10 18:49:49 crc kubenswrapper[4861]: I0310 18:49:49.574038 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"6a665f7a5f04a80f43377d87b895e0d95adae296d12f0826c3d97323e145d602"} Mar 10 18:49:49 crc kubenswrapper[4861]: I0310 18:49:49.594322 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3fabebda74f7f7605bc2f261bbf64acac1fe4ec65a267beef027413b454bee3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dbb86a65e5c4fda1cd29396eb9ad02739f5123519d07aef2eea3354f4a65571d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:49:49Z is after 2025-08-24T17:21:41Z" Mar 10 18:49:49 crc kubenswrapper[4861]: I0310 18:49:49.614388 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:49:49Z is after 2025-08-24T17:21:41Z" Mar 10 18:49:49 crc kubenswrapper[4861]: I0310 18:49:49.636381 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-6lblg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d1c251f4-6539-4aa1-8979-47e74495aca3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3736c6e9da4ea1e91d7046c054bf885a5319e339f205e71fde5cc9cfa5d630ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2gvk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:49:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-6lblg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:49:49Z is after 2025-08-24T17:21:41Z" Mar 10 18:49:49 crc kubenswrapper[4861]: I0310 18:49:49.653959 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pzmsp" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"66631e59-ce5c-44de-8a4d-37eb82acf997\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1293bf47ec5a041886f7065f624cec3b882bb62394a2c448e138d5f7bc3a1c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pm4t5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\
\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:49:27Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pzmsp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:49:49Z is after 2025-08-24T17:21:41Z" Mar 10 18:49:49 crc kubenswrapper[4861]: I0310 18:49:49.672400 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rmmgv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e4302e6-1187-4371-8523-a96ec8032e74\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9439c03c3e58276508f7abe48482c4c69521049dbad5ea7a2e7e7379c8c58914\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io
/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vh2b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://77513d5a93f447545a8b2aef460c75d3d73de32163650349268f203e1ade4b4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vh2b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:49:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-rmmgv\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:49:49Z is after 2025-08-24T17:21:41Z" Mar 10 18:49:49 crc kubenswrapper[4861]: I0310 18:49:49.705671 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"952bf490-6587-4240-a832-feac082e7775\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf6fe1422055e59455ca71c1a22213f55312c80667526cf13dbf61f7ccea7c75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\
\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd7523ddf0ac38e3b59b767d7ef95f452f24c5914734d6f1ac0187f1bede5fcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b1d8f9a97293ae86fe0b8c9ab76600caf04291ec3ddd2e47a071805bef675da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\
\":\\\"cri-o://efd5039046658f4b595c747b6434cb0ef3befbf75874a6dab629cdbd6634524e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6234508294e7f87d3a8da0d3a1d8ecb96fffc3dbd5974e40a3bb7ceeee0d2a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34f62205737b2bb279cc7a0e2ffee213392c8135cca742b76e6f7ed44ddca754\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34f62205737b2bb279cc7a0e2ffee213392c8135cca742b76e6f7ed44ddca754\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:47:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-10T18:47:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://49cc08d4ed6e0fd87f4c957414e510f99511b9654a72324b707c165206c4979e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49cc08d4ed6e0fd87f4c957414e510f99511b9654a72324b707c165206c4979e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:47:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-10T18:47:39Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0995183765c106ba8369d3c05ac572596329eb1978876b770e327a430963ba9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\
\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0995183765c106ba8369d3c05ac572596329eb1978876b770e327a430963ba9b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:47:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-10T18:47:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:47:37Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:49:49Z is after 2025-08-24T17:21:41Z" Mar 10 18:49:49 crc kubenswrapper[4861]: I0310 18:49:49.725979 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:49:49Z is after 2025-08-24T17:21:41Z" Mar 10 18:49:49 crc kubenswrapper[4861]: I0310 18:49:49.743145 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qttbr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"771189c2-452d-4204-a0b7-abfe9ba62bd0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10b29575454354d2a034781fdd40e9972ddd08328eea1a3bb89377fcab4181cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tng72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11c0ae40f0d210a82350ba0ada7a3c9f35595826
a4e6f3d5619230238d00111b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tng72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:49:21Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qttbr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:49:49Z is after 2025-08-24T17:21:41Z" Mar 10 18:49:49 crc kubenswrapper[4861]: I0310 18:49:49.766378 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-j2s27" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"391f4bfa-b94c-4b25-8f06-a2f19f912194\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bbf0e311a00fea576e9f2c739cd2d87aedfc06aaef5cf0c68374bd67241888b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlddg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02cd3948c6ba0e290e32d64b2390da20510d66f20771b9bb0b98957e8900b124\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://02cd3948c6ba0e290e32d64b2390da20510d66f20771b9bb0b98957e8900b124\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:49:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-10T18:49:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlddg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c010603b11cab781dd3c14d84121d60b8115bb415df567ef1c267406b9e6988\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c010603b11cab781dd3c14d84121d60b8115bb415df567ef1c267406b9e6988\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:49:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-10T18:49:37Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlddg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1ba322572d96fe13a88815151416495decdd5f486eb30b3fd4345040c5b9d1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c1ba322572d96fe13a88815151416495decdd5f486eb30b3fd4345040c5b9d1d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:49:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-10T18:49:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlddg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f24e7
4b8fb41e5e765f1ba84b4f1efce28077537df837e4cc308e70b16139732\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f24e74b8fb41e5e765f1ba84b4f1efce28077537df837e4cc308e70b16139732\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:49:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-10T18:49:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlddg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f43980d19ca7b9caf61de8f3257dff536ff78a4cf530f6ec3cb7c91f1a3b5af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5f43980d19ca7b9caf61de8f3257dff536ff78a4cf530f6ec3cb7c91f1a3b5af\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:49:41Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-03-10T18:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlddg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2779899b3339467fdfac0db5ea7b8e4f306ca22205f2c44cc85a2df7fc2cc066\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2779899b3339467fdfac0db5ea7b8e4f306ca22205f2c44cc85a2df7fc2cc066\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:49:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-10T18:49:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlddg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:49:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-j2s27\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:49:49Z is after 2025-08-24T17:21:41Z" Mar 10 18:49:49 crc kubenswrapper[4861]: I0310 18:49:49.786020 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"046b6ea2-2e19-4917-953a-eb8aca6d80cb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df0d545a88d84a8015bafb1061f16f446f63c97065dc88c869a33a24b8e3c22e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da2dbdc794693bb8da08c0fcc84531d65b468d135904b7055fb926fbd0cce95d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-10T18:48:03Z\\\",\\\"message\\\
":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0310 18:47:39.105265 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0310 18:47:39.107559 1 observer_polling.go:159] Starting file observer\\\\nI0310 18:47:39.140119 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0310 18:47:39.145777 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0310 18:48:03.686072 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0310 18:48:03.686208 1 cmd.go:179] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:48:03Z is after 
2026-02-23T05:33:13Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-10T18:47:38Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:48:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://73d88019bcd40296d2d693dfb1ce3bacd2e94ca10a114a00c75392df04099b33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://31bbed2d81ace88f31b763f3b4bed57db657bf9e78413b57b2f279b408e0b848\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:38Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b7156e2106372814a5e2b2816352ee308bb4fdffef5efe8da5bc39d3bc29398\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:47:37Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:49:49Z is after 2025-08-24T17:21:41Z" Mar 10 18:49:49 crc kubenswrapper[4861]: I0310 18:49:49.806661 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:49:49Z is after 2025-08-24T17:21:41Z" Mar 10 18:49:49 crc kubenswrapper[4861]: I0310 18:49:49.824539 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-b87lw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e474cdc9-b374-49a6-aece-afa19f8d5ee6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f06ce6a80200c39385f53697c2f3fbbec862effc04ddf9e0499d9c20761a54e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5t7pf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:49:21Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-b87lw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:49:49Z is after 2025-08-24T17:21:41Z" Mar 10 18:49:49 crc kubenswrapper[4861]: I0310 18:49:49.847621 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c65aa66-5db2-421b-ad46-0ff1e2b1cb22\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://52e87225b434b0800764a5c2306d8079c44bff105d02de78ab085b434f56031f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a8a6f58ea1d180f50a7ffde2b16f470901281a847bd85cc0bc8a62bbf9f8e70\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://484df0ad2e71b2faec0ed53537512b115e6d4ab7cd3212cd29309538bd013c51\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2dcc6ee4908d27fc63eb33149c6db9570b8524aab71a27fbb1e5c2fe4e97c52\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1665ca49c2c451e187b70bfc13ce0034d2c07b92943b18e77ae09cd6e5505557\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-10T18:48:50Z\\\",\\\"message\\\":\\\"le observer\\\\nW0310 18:48:50.587139 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0310 18:48:50.587315 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0310 18:48:50.588407 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-4176947974/tls.crt::/tmp/serving-cert-4176947974/tls.key\\\\\\\"\\\\nI0310 18:48:50.773986 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0310 18:48:50.776438 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0310 18:48:50.776455 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0310 18:48:50.776477 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0310 18:48:50.776482 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0310 18:48:50.783076 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0310 18:48:50.783088 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0310 18:48:50.783118 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0310 18:48:50.783134 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0310 18:48:50.783146 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0310 18:48:50.783157 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0310 18:48:50.783167 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0310 18:48:50.783173 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0310 18:48:50.784467 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-10T18:48:50Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44dde00a3ae562bbb5504d299475795cc38b22c2b6decba2ff15067bce7436df\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:39Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a14137bdfec242e37af20a572af2edea25fb1d8a1f9708f8d0193d1a7675b1bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a14137bdfec242e37af20a572af2edea25fb1d8a1f9708f8d0193d1a7675b1bc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:47:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-03-10T18:47:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:47:37Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:49:49Z is after 2025-08-24T17:21:41Z" Mar 10 18:49:49 crc kubenswrapper[4861]: I0310 18:49:49.869441 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22ae4cf4d7c980d5db6180fb412216b50b2ddb8f25ea2a5fb1e034d47ddfafac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState
\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:49:49Z is after 2025-08-24T17:21:41Z" Mar 10 18:49:49 crc kubenswrapper[4861]: I0310 18:49:49.895380 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:49:49Z is after 2025-08-24T17:21:41Z" Mar 10 18:49:49 crc kubenswrapper[4861]: I0310 18:49:49.920129 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-s2l62" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"be820cd7-b3a7-4183-a408-67151247b6ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:22Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:22Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwtwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwtwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwtwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwtwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwtwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwtwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwtwj\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwtwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09c620ba70d91a84cf6910e413d790eee8d6427dec9a39be3b706400fcaab656\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09c620ba70d91a84cf6910e413d790eee8d6427dec9a39be3b706400fcaab656\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:49:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-10T18:49:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwtwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:49:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-s2l62\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:49:49Z is after 2025-08-24T17:21:41Z" Mar 10 18:49:49 crc kubenswrapper[4861]: I0310 18:49:49.938058 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-2rvxn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c06e51d0-e817-41ac-9d69-3ef2099f8ba8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7cjz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7cjz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:49:34Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-2rvxn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:49:49Z is after 2025-08-24T17:21:41Z" Mar 10 18:49:49 crc 
kubenswrapper[4861]: I0310 18:49:49.957825 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 10 18:49:49 crc kubenswrapper[4861]: I0310 18:49:49.957894 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 10 18:49:49 crc kubenswrapper[4861]: I0310 18:49:49.957950 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 10 18:49:49 crc kubenswrapper[4861]: I0310 18:49:49.957825 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2rvxn" Mar 10 18:49:49 crc kubenswrapper[4861]: E0310 18:49:49.957994 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 10 18:49:49 crc kubenswrapper[4861]: E0310 18:49:49.958121 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 10 18:49:49 crc kubenswrapper[4861]: E0310 18:49:49.958409 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 10 18:49:49 crc kubenswrapper[4861]: E0310 18:49:49.958635 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2rvxn" podUID="c06e51d0-e817-41ac-9d69-3ef2099f8ba8" Mar 10 18:49:49 crc kubenswrapper[4861]: I0310 18:49:49.963119 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3fabebda74f7f7605bc2f261bbf64acac1fe4ec65a267beef027413b454bee3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dbb86a65e5c4fda1cd29396eb9ad02739f5123519d07aef2eea3354f4a65571d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:49:49Z is after 2025-08-24T17:21:41Z" Mar 10 18:49:49 crc kubenswrapper[4861]: I0310 18:49:49.971822 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Mar 10 18:49:49 crc kubenswrapper[4861]: I0310 18:49:49.986234 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a665f7a5f04a80f43377d87b895e0d95adae296d12f0826c3d97323e145d602\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-03-10T18:49:49Z is after 2025-08-24T17:21:41Z" Mar 10 18:49:50 crc kubenswrapper[4861]: I0310 18:49:50.004463 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-6lblg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d1c251f4-6539-4aa1-8979-47e74495aca3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3736c6e9da4ea1e91d7046c054bf885a5319e339f205e71fde5cc9cfa5d630ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-r
elease\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2gvk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:49:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-6lblg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-03-10T18:49:50Z is after 2025-08-24T17:21:41Z" Mar 10 18:49:50 crc kubenswrapper[4861]: I0310 18:49:50.015416 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pzmsp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"66631e59-ce5c-44de-8a4d-37eb82acf997\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1293bf47ec5a041886f7065f624cec3b882bb62394a2c448e138d5f7bc3a1c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"hos
t\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pm4t5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:49:27Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pzmsp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:49:50Z is after 2025-08-24T17:21:41Z" Mar 10 18:49:50 crc kubenswrapper[4861]: I0310 18:49:50.028942 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rmmgv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e4302e6-1187-4371-8523-a96ec8032e74\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9439c03c3e58276508f7abe48482c4c69521049dbad5ea7a2e7e7379c8c58914\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vh2b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://77513d5a93f447545a8b2aef460c75d3d73de
32163650349268f203e1ade4b4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vh2b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:49:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-rmmgv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:49:50Z is after 2025-08-24T17:21:41Z" Mar 10 18:49:50 crc kubenswrapper[4861]: I0310 18:49:50.056213 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"952bf490-6587-4240-a832-feac082e7775\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf6fe1422055e59455ca71c1a22213f55312c80667526cf13dbf61f7ccea7c75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd7523ddf0ac38e3b59b767d7ef95f452f24c5914734d6f1ac0187f1bede5fcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b1d8f9a97293ae86fe0b8c9ab76600caf04291ec3ddd2e47a071805bef675da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://efd5039046658f4b595c747b6434cb0ef3befbf75874a6dab629cdbd6634524e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6234508294e7f87d3a8da0d3a1d8ecb96fffc3dbd5974e40a3bb7ceeee0d2a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34f62205737b2bb279cc7a0e2ffee213392c8135cca742b76e6f7ed44ddca754\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34f62205737b2bb279cc7a0e2ffee213392c8135cca742b76e6f7ed44ddca754\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-03-10T18:47:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-10T18:47:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://49cc08d4ed6e0fd87f4c957414e510f99511b9654a72324b707c165206c4979e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49cc08d4ed6e0fd87f4c957414e510f99511b9654a72324b707c165206c4979e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:47:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-10T18:47:39Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0995183765c106ba8369d3c05ac572596329eb1978876b770e327a430963ba9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0995183765c106ba8369d3c05ac572596329eb1978876b770e327a430963ba9b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:47:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-10T18:47:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:47:37Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:49:50Z is after 2025-08-24T17:21:41Z" Mar 10 18:49:50 crc kubenswrapper[4861]: I0310 18:49:50.070693 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:49:50Z is after 2025-08-24T17:21:41Z" Mar 10 18:49:50 crc kubenswrapper[4861]: I0310 18:49:50.083209 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qttbr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"771189c2-452d-4204-a0b7-abfe9ba62bd0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10b29575454354d2a034781fdd40e9972ddd08328eea1a3bb89377fcab4181cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tng72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11c0ae40f0d210a82350ba0ada7a3c9f35595826
a4e6f3d5619230238d00111b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tng72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:49:21Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qttbr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:49:50Z is after 2025-08-24T17:21:41Z" Mar 10 18:49:50 crc kubenswrapper[4861]: I0310 18:49:50.097990 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-j2s27" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"391f4bfa-b94c-4b25-8f06-a2f19f912194\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bbf0e311a00fea576e9f2c739cd2d87aedfc06aaef5cf0c68374bd67241888b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlddg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02cd3948c6ba0e290e32d64b2390da20510d66f20771b9bb0b98957e8900b124\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://02cd3948c6ba0e290e32d64b2390da20510d66f20771b9bb0b98957e8900b124\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:49:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-10T18:49:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlddg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c010603b11cab781dd3c14d84121d60b8115bb415df567ef1c267406b9e6988\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c010603b11cab781dd3c14d84121d60b8115bb415df567ef1c267406b9e6988\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:49:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-10T18:49:37Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlddg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1ba322572d96fe13a88815151416495decdd5f486eb30b3fd4345040c5b9d1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c1ba322572d96fe13a88815151416495decdd5f486eb30b3fd4345040c5b9d1d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:49:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-10T18:49:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlddg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f24e7
4b8fb41e5e765f1ba84b4f1efce28077537df837e4cc308e70b16139732\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f24e74b8fb41e5e765f1ba84b4f1efce28077537df837e4cc308e70b16139732\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:49:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-10T18:49:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlddg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f43980d19ca7b9caf61de8f3257dff536ff78a4cf530f6ec3cb7c91f1a3b5af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5f43980d19ca7b9caf61de8f3257dff536ff78a4cf530f6ec3cb7c91f1a3b5af\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:49:41Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-03-10T18:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlddg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2779899b3339467fdfac0db5ea7b8e4f306ca22205f2c44cc85a2df7fc2cc066\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2779899b3339467fdfac0db5ea7b8e4f306ca22205f2c44cc85a2df7fc2cc066\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:49:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-10T18:49:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlddg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:49:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-j2s27\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:49:50Z is after 2025-08-24T17:21:41Z" Mar 10 18:49:50 crc kubenswrapper[4861]: I0310 18:49:50.114504 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"046b6ea2-2e19-4917-953a-eb8aca6d80cb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df0d545a88d84a8015bafb1061f16f446f63c97065dc88c869a33a24b8e3c22e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da2dbdc794693bb8da08c0fcc84531d65b468d135904b7055fb926fbd0cce95d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-10T18:48:03Z\\\",\\\"message\\\
":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0310 18:47:39.105265 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0310 18:47:39.107559 1 observer_polling.go:159] Starting file observer\\\\nI0310 18:47:39.140119 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0310 18:47:39.145777 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0310 18:48:03.686072 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0310 18:48:03.686208 1 cmd.go:179] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:48:03Z is after 
2026-02-23T05:33:13Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-10T18:47:38Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:48:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://73d88019bcd40296d2d693dfb1ce3bacd2e94ca10a114a00c75392df04099b33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://31bbed2d81ace88f31b763f3b4bed57db657bf9e78413b57b2f279b408e0b848\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:38Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b7156e2106372814a5e2b2816352ee308bb4fdffef5efe8da5bc39d3bc29398\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:47:37Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:49:50Z is after 2025-08-24T17:21:41Z" Mar 10 18:49:50 crc kubenswrapper[4861]: I0310 18:49:50.128636 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:49:50Z is after 2025-08-24T17:21:41Z" Mar 10 18:49:50 crc kubenswrapper[4861]: I0310 18:49:50.137038 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-b87lw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e474cdc9-b374-49a6-aece-afa19f8d5ee6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f06ce6a80200c39385f53697c2f3fbbec862effc04ddf9e0499d9c20761a54e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5t7pf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:49:21Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-b87lw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:49:50Z is after 2025-08-24T17:21:41Z" Mar 10 18:49:50 crc kubenswrapper[4861]: I0310 18:49:50.155400 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c65aa66-5db2-421b-ad46-0ff1e2b1cb22\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://52e87225b434b0800764a5c2306d8079c44bff105d02de78ab085b434f56031f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a8a6f58ea1d180f50a7ffde2b16f470901281a847bd85cc0bc8a62bbf9f8e70\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://484df0ad2e71b2faec0ed53537512b115e6d4ab7cd3212cd29309538bd013c51\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2dcc6ee4908d27fc63eb33149c6db9570b8524aab71a27fbb1e5c2fe4e97c52\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1665ca49c2c451e187b70bfc13ce0034d2c07b92943b18e77ae09cd6e5505557\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-10T18:48:50Z\\\",\\\"message\\\":\\\"le observer\\\\nW0310 18:48:50.587139 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0310 18:48:50.587315 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0310 18:48:50.588407 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-4176947974/tls.crt::/tmp/serving-cert-4176947974/tls.key\\\\\\\"\\\\nI0310 18:48:50.773986 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0310 18:48:50.776438 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0310 18:48:50.776455 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0310 18:48:50.776477 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0310 18:48:50.776482 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0310 18:48:50.783076 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0310 18:48:50.783088 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0310 18:48:50.783118 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0310 18:48:50.783134 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0310 18:48:50.783146 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0310 18:48:50.783157 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0310 18:48:50.783167 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0310 18:48:50.783173 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0310 18:48:50.784467 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-10T18:48:50Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44dde00a3ae562bbb5504d299475795cc38b22c2b6decba2ff15067bce7436df\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:39Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a14137bdfec242e37af20a572af2edea25fb1d8a1f9708f8d0193d1a7675b1bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a14137bdfec242e37af20a572af2edea25fb1d8a1f9708f8d0193d1a7675b1bc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:47:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-03-10T18:47:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:47:37Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:49:50Z is after 2025-08-24T17:21:41Z" Mar 10 18:49:50 crc kubenswrapper[4861]: I0310 18:49:50.168909 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22ae4cf4d7c980d5db6180fb412216b50b2ddb8f25ea2a5fb1e034d47ddfafac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState
\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:49:50Z is after 2025-08-24T17:21:41Z" Mar 10 18:49:50 crc kubenswrapper[4861]: I0310 18:49:50.184395 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:49:50Z is after 2025-08-24T17:21:41Z" Mar 10 18:49:50 crc kubenswrapper[4861]: I0310 18:49:50.211392 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-s2l62" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"be820cd7-b3a7-4183-a408-67151247b6ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:22Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:22Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwtwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwtwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwtwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwtwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwtwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwtwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwtwj\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwtwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09c620ba70d91a84cf6910e413d790eee8d6427dec9a39be3b706400fcaab656\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09c620ba70d91a84cf6910e413d790eee8d6427dec9a39be3b706400fcaab656\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:49:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-10T18:49:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwtwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:49:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-s2l62\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:49:50Z is after 2025-08-24T17:21:41Z" Mar 10 18:49:50 crc kubenswrapper[4861]: I0310 18:49:50.225098 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-2rvxn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c06e51d0-e817-41ac-9d69-3ef2099f8ba8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7cjz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7cjz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:49:34Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-2rvxn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:49:50Z is after 2025-08-24T17:21:41Z" Mar 10 18:49:50 crc 
kubenswrapper[4861]: I0310 18:49:50.248800 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c06e51d0-e817-41ac-9d69-3ef2099f8ba8-metrics-certs\") pod \"network-metrics-daemon-2rvxn\" (UID: \"c06e51d0-e817-41ac-9d69-3ef2099f8ba8\") " pod="openshift-multus/network-metrics-daemon-2rvxn" Mar 10 18:49:50 crc kubenswrapper[4861]: E0310 18:49:50.248989 4861 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Mar 10 18:49:50 crc kubenswrapper[4861]: E0310 18:49:50.249066 4861 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c06e51d0-e817-41ac-9d69-3ef2099f8ba8-metrics-certs podName:c06e51d0-e817-41ac-9d69-3ef2099f8ba8 nodeName:}" failed. No retries permitted until 2026-03-10 18:50:06.24904488 +0000 UTC m=+150.012480870 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/c06e51d0-e817-41ac-9d69-3ef2099f8ba8-metrics-certs") pod "network-metrics-daemon-2rvxn" (UID: "c06e51d0-e817-41ac-9d69-3ef2099f8ba8") : object "openshift-multus"/"metrics-daemon-secret" not registered Mar 10 18:49:50 crc kubenswrapper[4861]: I0310 18:49:50.582210 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-s2l62" event={"ID":"be820cd7-b3a7-4183-a408-67151247b6ee","Type":"ContainerStarted","Data":"2cf2a1cfc3438718bb50a53445443bc0251f2cc83f903fca1cccd1048c8f1c20"} Mar 10 18:49:51 crc kubenswrapper[4861]: I0310 18:49:51.604552 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 10 18:49:51 crc kubenswrapper[4861]: I0310 18:49:51.637322 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3fabebda74f7f7605bc2f261bbf64acac1fe4ec65a267beef027413b454bee3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dbb86a65e5c4fda1cd29396eb9ad02739f5123519d07aef2eea3354f4a65571d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:49:51Z is after 2025-08-24T17:21:41Z" Mar 10 18:49:51 crc kubenswrapper[4861]: I0310 18:49:51.657460 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a665f7a5f04a80f43377d87b895e0d95adae296d12f0826c3d97323e145d602\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-03-10T18:49:51Z is after 2025-08-24T17:21:41Z" Mar 10 18:49:51 crc kubenswrapper[4861]: I0310 18:49:51.677162 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-6lblg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d1c251f4-6539-4aa1-8979-47e74495aca3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3736c6e9da4ea1e91d7046c054bf885a5319e339f205e71fde5cc9cfa5d630ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-r
elease\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2gvk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:49:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-6lblg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-03-10T18:49:51Z is after 2025-08-24T17:21:41Z" Mar 10 18:49:51 crc kubenswrapper[4861]: I0310 18:49:51.696952 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pzmsp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"66631e59-ce5c-44de-8a4d-37eb82acf997\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1293bf47ec5a041886f7065f624cec3b882bb62394a2c448e138d5f7bc3a1c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"hos
t\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pm4t5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:49:27Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pzmsp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:49:51Z is after 2025-08-24T17:21:41Z" Mar 10 18:49:51 crc kubenswrapper[4861]: I0310 18:49:51.715151 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rmmgv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e4302e6-1187-4371-8523-a96ec8032e74\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9439c03c3e58276508f7abe48482c4c69521049dbad5ea7a2e7e7379c8c58914\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vh2b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://77513d5a93f447545a8b2aef460c75d3d73de
32163650349268f203e1ade4b4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vh2b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:49:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-rmmgv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:49:51Z is after 2025-08-24T17:21:41Z" Mar 10 18:49:51 crc kubenswrapper[4861]: I0310 18:49:51.734489 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"745fc7ce-a025-4236-a613-2d2c1661aa2e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://098109abecb73d2ad721a447bdfff2c9e9c2d24b969013d6108b0ac9d1d61e01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc83538f8ad05533765a1730072969d529579cd76ff33a77dc49b0bff15caad5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://02e4029a918a6ce3cba59cacb37169816c0ce5b734d46605756ede5985ccd686\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3cb1fb5ba7aa6d8d7184d85c870898f3b51dd4910da9346b24b7d63f952467d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://3cb1fb5ba7aa6d8d7184d85c870898f3b51dd4910da9346b24b7d63f952467d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:47:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-10T18:47:38Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:47:37Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:49:51Z is after 2025-08-24T17:21:41Z" Mar 10 18:49:51 crc kubenswrapper[4861]: I0310 18:49:51.767957 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"952bf490-6587-4240-a832-feac082e7775\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf6fe1422055e59455ca71c1a22213f55312c80667526cf13dbf61f7ccea7c75\\\",\\\"image\\\":\\\"quay.io/op
enshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd7523ddf0ac38e3b59b767d7ef95f452f24c5914734d6f1ac0187f1bede5fcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b1d8f9a97293ae86fe0b8c9ab76600caf04291ec3ddd2e47a071805bef675da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://efd5039046658f4b595c747b6434cb0ef3befbf75874a6dab629cdbd6634524e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6234508294e7f87d3a8da0d3a1d8ecb96fffc3dbd5974e40a3bb7ceeee0d2a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\
\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34f62205737b2bb279cc7a0e2ffee213392c8135cca742b76e6f7ed44ddca754\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34f62205737b2bb279cc7a0e2ffee213392c8135cca742b76e6f7ed44ddca754\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:47:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-10T18:47:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://49cc08d4ed6e0fd87f4c957414e510f99511b9654a72324b707c165206c4979e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49cc08d4ed6e0fd87f4c957414e510f99511b9654a72324b707c165206c4979e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:47:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-10T18:47:39Z\\\"}}},{\\\"co
ntainerID\\\":\\\"cri-o://0995183765c106ba8369d3c05ac572596329eb1978876b770e327a430963ba9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0995183765c106ba8369d3c05ac572596329eb1978876b770e327a430963ba9b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:47:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-10T18:47:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:47:37Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:49:51Z is after 2025-08-24T17:21:41Z" Mar 10 18:49:51 crc kubenswrapper[4861]: I0310 18:49:51.788492 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:49:51Z is after 2025-08-24T17:21:41Z" Mar 10 18:49:51 crc kubenswrapper[4861]: I0310 18:49:51.808059 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qttbr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"771189c2-452d-4204-a0b7-abfe9ba62bd0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10b29575454354d2a034781fdd40e9972ddd08328eea1a3bb89377fcab4181cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tng72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11c0ae40f0d210a82350ba0ada7a3c9f35595826
a4e6f3d5619230238d00111b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tng72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:49:21Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qttbr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:49:51Z is after 2025-08-24T17:21:41Z" Mar 10 18:49:51 crc kubenswrapper[4861]: I0310 18:49:51.832238 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-j2s27" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"391f4bfa-b94c-4b25-8f06-a2f19f912194\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bbf0e311a00fea576e9f2c739cd2d87aedfc06aaef5cf0c68374bd67241888b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlddg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02cd3948c6ba0e290e32d64b2390da20510d66f20771b9bb0b98957e8900b124\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://02cd3948c6ba0e290e32d64b2390da20510d66f20771b9bb0b98957e8900b124\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:49:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-10T18:49:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlddg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c010603b11cab781dd3c14d84121d60b8115bb415df567ef1c267406b9e6988\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c010603b11cab781dd3c14d84121d60b8115bb415df567ef1c267406b9e6988\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:49:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-10T18:49:37Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlddg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1ba322572d96fe13a88815151416495decdd5f486eb30b3fd4345040c5b9d1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c1ba322572d96fe13a88815151416495decdd5f486eb30b3fd4345040c5b9d1d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:49:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-10T18:49:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlddg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f24e7
4b8fb41e5e765f1ba84b4f1efce28077537df837e4cc308e70b16139732\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f24e74b8fb41e5e765f1ba84b4f1efce28077537df837e4cc308e70b16139732\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:49:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-10T18:49:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlddg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f43980d19ca7b9caf61de8f3257dff536ff78a4cf530f6ec3cb7c91f1a3b5af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5f43980d19ca7b9caf61de8f3257dff536ff78a4cf530f6ec3cb7c91f1a3b5af\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:49:41Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-03-10T18:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlddg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2779899b3339467fdfac0db5ea7b8e4f306ca22205f2c44cc85a2df7fc2cc066\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2779899b3339467fdfac0db5ea7b8e4f306ca22205f2c44cc85a2df7fc2cc066\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:49:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-10T18:49:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlddg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:49:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-j2s27\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:49:51Z is after 2025-08-24T17:21:41Z" Mar 10 18:49:51 crc kubenswrapper[4861]: I0310 18:49:51.853251 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"046b6ea2-2e19-4917-953a-eb8aca6d80cb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df0d545a88d84a8015bafb1061f16f446f63c97065dc88c869a33a24b8e3c22e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da2dbdc794693bb8da08c0fcc84531d65b468d135904b7055fb926fbd0cce95d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-10T18:48:03Z\\\",\\\"message\\\
":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0310 18:47:39.105265 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0310 18:47:39.107559 1 observer_polling.go:159] Starting file observer\\\\nI0310 18:47:39.140119 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0310 18:47:39.145777 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0310 18:48:03.686072 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0310 18:48:03.686208 1 cmd.go:179] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:48:03Z is after 
2026-02-23T05:33:13Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-10T18:47:38Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:48:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://73d88019bcd40296d2d693dfb1ce3bacd2e94ca10a114a00c75392df04099b33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://31bbed2d81ace88f31b763f3b4bed57db657bf9e78413b57b2f279b408e0b848\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:38Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b7156e2106372814a5e2b2816352ee308bb4fdffef5efe8da5bc39d3bc29398\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:47:37Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:49:51Z is after 2025-08-24T17:21:41Z" Mar 10 18:49:51 crc kubenswrapper[4861]: I0310 18:49:51.877776 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:49:51Z is after 2025-08-24T17:21:41Z" Mar 10 18:49:51 crc kubenswrapper[4861]: I0310 18:49:51.891993 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-b87lw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e474cdc9-b374-49a6-aece-afa19f8d5ee6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f06ce6a80200c39385f53697c2f3fbbec862effc04ddf9e0499d9c20761a54e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5t7pf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:49:21Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-b87lw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:49:51Z is after 2025-08-24T17:21:41Z" Mar 10 18:49:51 crc kubenswrapper[4861]: I0310 18:49:51.914021 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c65aa66-5db2-421b-ad46-0ff1e2b1cb22\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://52e87225b434b0800764a5c2306d8079c44bff105d02de78ab085b434f56031f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"r
eady\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a8a6f58ea1d180f50a7ffde2b16f470901281a847bd85cc0bc8a62bbf9f8e70\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://484df0ad2e71b2faec0ed53537512b115e6d4ab7cd3212cd29309538bd013c51\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod
-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2dcc6ee4908d27fc63eb33149c6db9570b8524aab71a27fbb1e5c2fe4e97c52\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1665ca49c2c451e187b70bfc13ce0034d2c07b92943b18e77ae09cd6e5505557\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-10T18:48:50Z\\\",\\\"message\\\":\\\"le observer\\\\nW0310 18:48:50.587139 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0310 18:48:50.587315 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0310 18:48:50.588407 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4176947974/tls.crt::/tmp/serving-cert-4176947974/tls.key\\\\\\\"\\\\nI0310 18:48:50.773986 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0310 18:48:50.776438 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0310 18:48:50.776455 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0310 18:48:50.776477 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0310 18:48:50.776482 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0310 18:48:50.783076 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0310 18:48:50.783088 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery 
information is complete\\\\nW0310 18:48:50.783118 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0310 18:48:50.783134 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0310 18:48:50.783146 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0310 18:48:50.783157 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0310 18:48:50.783167 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0310 18:48:50.783173 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0310 18:48:50.784467 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-10T18:48:50Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44dde00a3ae562bbb5504d299475795cc38b22c2b6decba2ff15067bce7436df\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:39Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"host
IPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a14137bdfec242e37af20a572af2edea25fb1d8a1f9708f8d0193d1a7675b1bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a14137bdfec242e37af20a572af2edea25fb1d8a1f9708f8d0193d1a7675b1bc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:47:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-10T18:47:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:47:37Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:49:51Z is after 2025-08-24T17:21:41Z" Mar 10 18:49:51 crc kubenswrapper[4861]: I0310 18:49:51.934058 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22ae4cf4d7c980d5db6180fb412216b50b2ddb8f25ea2a5fb1e034d47ddfafac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-03-10T18:49:51Z is after 2025-08-24T17:21:41Z" Mar 10 18:49:51 crc kubenswrapper[4861]: I0310 18:49:51.953679 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:49:51Z is after 2025-08-24T17:21:41Z" Mar 10 18:49:51 crc kubenswrapper[4861]: I0310 18:49:51.958080 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 10 18:49:51 crc kubenswrapper[4861]: I0310 18:49:51.958157 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 10 18:49:51 crc kubenswrapper[4861]: E0310 18:49:51.958235 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 10 18:49:51 crc kubenswrapper[4861]: I0310 18:49:51.958092 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 10 18:49:51 crc kubenswrapper[4861]: I0310 18:49:51.958304 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2rvxn" Mar 10 18:49:51 crc kubenswrapper[4861]: E0310 18:49:51.958371 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 10 18:49:51 crc kubenswrapper[4861]: E0310 18:49:51.958466 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 10 18:49:51 crc kubenswrapper[4861]: E0310 18:49:51.958555 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-2rvxn" podUID="c06e51d0-e817-41ac-9d69-3ef2099f8ba8" Mar 10 18:49:51 crc kubenswrapper[4861]: I0310 18:49:51.989886 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-s2l62" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"be820cd7-b3a7-4183-a408-67151247b6ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:22Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:22Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwtwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwtwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwtwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwtwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwtwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwtwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwtwj\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwtwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09c620ba70d91a84cf6910e413d790eee8d6427dec9a39be3b706400fcaab656\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09c620ba70d91a84cf6910e413d790eee8d6427dec9a39be3b706400fcaab656\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:49:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-10T18:49:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwtwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:49:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-s2l62\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:49:51Z is after 2025-08-24T17:21:41Z" Mar 10 18:49:52 crc kubenswrapper[4861]: I0310 18:49:52.004037 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-2rvxn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c06e51d0-e817-41ac-9d69-3ef2099f8ba8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7cjz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7cjz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:49:34Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-2rvxn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:49:52Z is after 2025-08-24T17:21:41Z" Mar 10 18:49:52 crc 
kubenswrapper[4861]: E0310 18:49:52.117090 4861 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Mar 10 18:49:52 crc kubenswrapper[4861]: I0310 18:49:52.598549 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-s2l62" event={"ID":"be820cd7-b3a7-4183-a408-67151247b6ee","Type":"ContainerStarted","Data":"745f532b03dad251e58eac660fb1381069e051fdf99d2797df098ce130bb5ea0"} Mar 10 18:49:52 crc kubenswrapper[4861]: I0310 18:49:52.599102 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-s2l62" Mar 10 18:49:52 crc kubenswrapper[4861]: I0310 18:49:52.619751 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:49:52Z is after 2025-08-24T17:21:41Z" Mar 10 18:49:52 crc kubenswrapper[4861]: I0310 18:49:52.636350 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-b87lw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e474cdc9-b374-49a6-aece-afa19f8d5ee6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f06ce6a80200c39385f53697c2f3fbbec862effc04ddf9e0499d9c20761a54e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5t7pf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:49:21Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-b87lw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:49:52Z is after 2025-08-24T17:21:41Z" Mar 10 18:49:52 crc kubenswrapper[4861]: I0310 18:49:52.640638 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-s2l62" Mar 10 18:49:52 crc kubenswrapper[4861]: I0310 18:49:52.656067 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"046b6ea2-2e19-4917-953a-eb8aca6d80cb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df0d545a88d84a8015bafb1061f16f446f63c97065dc88c869a33a24b8e3c22e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",
\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da2dbdc794693bb8da08c0fcc84531d65b468d135904b7055fb926fbd0cce95d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-10T18:48:03Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0310 18:47:39.105265 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. 
Worst graceful lease acquisition is {26s}.\\\\nI0310 18:47:39.107559 1 observer_polling.go:159] Starting file observer\\\\nI0310 18:47:39.140119 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0310 18:47:39.145777 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0310 18:48:03.686072 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0310 18:48:03.686208 1 cmd.go:179] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:48:03Z is after 
2026-02-23T05:33:13Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-10T18:47:38Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:48:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://73d88019bcd40296d2d693dfb1ce3bacd2e94ca10a114a00c75392df04099b33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://31bbed2d81ace88f31b763f3b4bed57db657bf9e78413b57b2f279b408e0b848\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:38Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b7156e2106372814a5e2b2816352ee308bb4fdffef5efe8da5bc39d3bc29398\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:47:37Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:49:52Z is after 2025-08-24T17:21:41Z" Mar 10 18:49:52 crc kubenswrapper[4861]: I0310 18:49:52.678254 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c65aa66-5db2-421b-ad46-0ff1e2b1cb22\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://52e87225b434b0800764a5c2306d8079c44bff105d02de78ab085b434f56031f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a8a6f58ea1d180f50a7ffde2b16f470901281a847bd85cc0bc8a62bbf9f8e70\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://484df0ad2e71b2faec0ed53537512b115e6d4ab7cd3212cd29309538bd013c51\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2dcc6ee4908d27fc63eb33149c6db9570b8524aab71a27fbb1e5c2fe4e97c52\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1665ca49c2c451e187b70bfc13ce0034d2c07b92943b18e77ae09cd6e5505557\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-10T18:48:50Z\\\"
,\\\"message\\\":\\\"le observer\\\\nW0310 18:48:50.587139 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0310 18:48:50.587315 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0310 18:48:50.588407 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4176947974/tls.crt::/tmp/serving-cert-4176947974/tls.key\\\\\\\"\\\\nI0310 18:48:50.773986 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0310 18:48:50.776438 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0310 18:48:50.776455 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0310 18:48:50.776477 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0310 18:48:50.776482 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0310 18:48:50.783076 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0310 18:48:50.783088 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0310 18:48:50.783118 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0310 18:48:50.783134 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0310 18:48:50.783146 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0310 18:48:50.783157 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0310 18:48:50.783167 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0310 18:48:50.783173 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0310 18:48:50.784467 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-10T18:48:50Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44dde00a3ae562bbb5504d299475795cc38b22c2b6decba2ff15067bce7436df\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:39Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a14137bdfec242e37af20a572af2edea25fb1d8a1f9708f8d0193d1a7675b1bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a14137bdfec242e37af20a572af2edea25f
b1d8a1f9708f8d0193d1a7675b1bc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:47:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-10T18:47:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:47:37Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:49:52Z is after 2025-08-24T17:21:41Z" Mar 10 18:49:52 crc kubenswrapper[4861]: I0310 18:49:52.698962 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22ae4cf4d7c980d5db6180fb412216b50b2ddb8f25ea2a5fb1e034d47ddfafac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-03-10T18:49:52Z is after 2025-08-24T17:21:41Z" Mar 10 18:49:52 crc kubenswrapper[4861]: I0310 18:49:52.719309 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:49:52Z is after 2025-08-24T17:21:41Z" Mar 10 18:49:52 crc kubenswrapper[4861]: I0310 18:49:52.755700 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-s2l62" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"be820cd7-b3a7-4183-a408-67151247b6ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:22Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:22Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22c9526135e4d6c3ef5cdf25b06f60556a876ace6c81593534be08cfd6a54cdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwtwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7bd3993ae4ddabc6c06a91127afc341760a07401ce3a409612824c0045bb6f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"stat
e\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwtwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://edf854dc22368e4b8f76e0111b30784f22832fc69661b4db8d2c9c33aa553773\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwtwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e44e4837f8dba12dfb18ef2200a19f696222668eb9b10819d1c3b442a28f5e32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d
1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwtwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ae3cd6b9ef5ede85a70dd7bcf4eb260cc357bfceeb571904e68788eaba0709c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwtwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be4f9c8096f4981a65522e8ee451980e580153c1f5c65c736655fb94593dbd97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwtwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://745f532b03dad251e58eac660fb1381069e051fdf99d2797df098ce130bb5ea0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"
},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwtwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2cf2a1cfc3438718bb50a53445443bc0251f2cc83f903fca1cccd1048c8f
1c20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwtwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09c620ba70d91a84cf6910e413d790eee8d6427dec9a39be3b706400fcaab656\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09c620ba70d91a84cf6910e413d790eee8d6427dec9a39be3b706400fcaab656\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:49:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-10T18:49:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mount
Path\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwtwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:49:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-s2l62\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:49:52Z is after 2025-08-24T17:21:41Z" Mar 10 18:49:52 crc kubenswrapper[4861]: I0310 18:49:52.777164 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-2rvxn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c06e51d0-e817-41ac-9d69-3ef2099f8ba8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7cjz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7cjz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:49:34Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-2rvxn\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:49:52Z is after 2025-08-24T17:21:41Z" Mar 10 18:49:52 crc kubenswrapper[4861]: I0310 18:49:52.794240 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a665f7a5f04a80f43377d87b895e0d95adae296d12f0826c3d97323e145d602\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:49:52Z is after 2025-08-24T17:21:41Z" Mar 10 18:49:52 crc kubenswrapper[4861]: I0310 18:49:52.817022 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-6lblg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d1c251f4-6539-4aa1-8979-47e74495aca3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3736c6e9da4ea1e91d7046c054bf885a5319e339f205e71fde5cc9cfa5d630ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2gvk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\
\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:49:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-6lblg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:49:52Z is after 2025-08-24T17:21:41Z" Mar 10 18:49:52 crc kubenswrapper[4861]: I0310 18:49:52.834661 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pzmsp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"66631e59-ce5c-44de-8a4d-37eb82acf997\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1293bf47ec5a041886f7065f624cec3b882bb62394a2c448e138d5f7bc3a1c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift
-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pm4t5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:49:27Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pzmsp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:49:52Z is after 2025-08-24T17:21:41Z" Mar 10 18:49:52 crc kubenswrapper[4861]: I0310 18:49:52.853383 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rmmgv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e4302e6-1187-4371-8523-a96ec8032e74\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9439c03c3e58276508f7abe48482c4c69521049dbad5ea7a2e7e7379c8c58914\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vh2b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://77513d5a93f447545a8b2aef460c75d3d73de
32163650349268f203e1ade4b4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vh2b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:49:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-rmmgv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:49:52Z is after 2025-08-24T17:21:41Z" Mar 10 18:49:52 crc kubenswrapper[4861]: I0310 18:49:52.872470 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3fabebda74f7f7605bc2f261bbf64acac1fe4ec65a267beef027413b454bee3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dbb86a65e5c4fda1cd29396eb9ad02739f5123519d07aef2eea3354f4a65571d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:49:52Z is after 2025-08-24T17:21:41Z" Mar 10 18:49:52 crc kubenswrapper[4861]: I0310 18:49:52.906822 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"952bf490-6587-4240-a832-feac082e7775\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf6fe1422055e59455ca71c1a22213f55312c80667526cf13dbf61f7ccea7c75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd7523ddf0ac38e3b59b767d7ef95f452f24c5914734d6f1ac0187f1bede5fcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b1d8f9a97293ae86fe0b8c9ab76600caf04291ec3ddd2e47a071805bef675da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://efd5039046658f4b595c747b6434cb0ef3befbf75874a6dab629cdbd6634524e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6234508294e7f87d3a8da0d3a1d8ecb96fffc3dbd5974e40a3bb7ceeee0d2a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34f62205737b2bb279cc7a0e2ffee213392c8135cca742b76e6f7ed44ddca754\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34f62205737b2bb279cc7a0e2ffee213392c8135cca742b76e6f7ed44ddca754\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-03-10T18:47:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-10T18:47:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://49cc08d4ed6e0fd87f4c957414e510f99511b9654a72324b707c165206c4979e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49cc08d4ed6e0fd87f4c957414e510f99511b9654a72324b707c165206c4979e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:47:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-10T18:47:39Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0995183765c106ba8369d3c05ac572596329eb1978876b770e327a430963ba9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0995183765c106ba8369d3c05ac572596329eb1978876b770e327a430963ba9b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:47:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-10T18:47:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:47:37Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:49:52Z is after 2025-08-24T17:21:41Z" Mar 10 18:49:52 crc kubenswrapper[4861]: I0310 18:49:52.924956 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:49:52Z is after 2025-08-24T17:21:41Z" Mar 10 18:49:52 crc kubenswrapper[4861]: I0310 18:49:52.942660 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qttbr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"771189c2-452d-4204-a0b7-abfe9ba62bd0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10b29575454354d2a034781fdd40e9972ddd08328eea1a3bb89377fcab4181cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tng72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11c0ae40f0d210a82350ba0ada7a3c9f35595826
a4e6f3d5619230238d00111b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tng72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:49:21Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qttbr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:49:52Z is after 2025-08-24T17:21:41Z" Mar 10 18:49:52 crc kubenswrapper[4861]: I0310 18:49:52.964929 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-j2s27" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"391f4bfa-b94c-4b25-8f06-a2f19f912194\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bbf0e311a00fea576e9f2c739cd2d87aedfc06aaef5cf0c68374bd67241888b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlddg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02cd3948c6ba0e290e32d64b2390da20510d66f20771b9bb0b98957e8900b124\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://02cd3948c6ba0e290e32d64b2390da20510d66f20771b9bb0b98957e8900b124\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:49:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-10T18:49:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlddg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c010603b11cab781dd3c14d84121d60b8115bb415df567ef1c267406b9e6988\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c010603b11cab781dd3c14d84121d60b8115bb415df567ef1c267406b9e6988\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:49:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-10T18:49:37Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlddg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1ba322572d96fe13a88815151416495decdd5f486eb30b3fd4345040c5b9d1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c1ba322572d96fe13a88815151416495decdd5f486eb30b3fd4345040c5b9d1d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:49:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-10T18:49:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlddg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f24e7
4b8fb41e5e765f1ba84b4f1efce28077537df837e4cc308e70b16139732\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f24e74b8fb41e5e765f1ba84b4f1efce28077537df837e4cc308e70b16139732\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:49:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-10T18:49:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlddg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f43980d19ca7b9caf61de8f3257dff536ff78a4cf530f6ec3cb7c91f1a3b5af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5f43980d19ca7b9caf61de8f3257dff536ff78a4cf530f6ec3cb7c91f1a3b5af\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:49:41Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-03-10T18:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlddg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2779899b3339467fdfac0db5ea7b8e4f306ca22205f2c44cc85a2df7fc2cc066\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2779899b3339467fdfac0db5ea7b8e4f306ca22205f2c44cc85a2df7fc2cc066\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:49:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-10T18:49:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlddg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:49:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-j2s27\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:49:52Z is after 2025-08-24T17:21:41Z" Mar 10 18:49:52 crc kubenswrapper[4861]: I0310 18:49:52.984628 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"745fc7ce-a025-4236-a613-2d2c1661aa2e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://098109abecb73d2ad721a447bdfff2c9e9c2d24b969013d6108b0ac9d1d61e01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mou
ntPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc83538f8ad05533765a1730072969d529579cd76ff33a77dc49b0bff15caad5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://02e4029a918a6ce3cba59cacb37169816c0ce5b734d46605756ede5985ccd686\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3cb1fb5ba7aa6d8d7184d85c8
70898f3b51dd4910da9346b24b7d63f952467d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3cb1fb5ba7aa6d8d7184d85c870898f3b51dd4910da9346b24b7d63f952467d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:47:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-10T18:47:38Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:47:37Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:49:52Z is after 2025-08-24T17:21:41Z" Mar 10 18:49:53 crc kubenswrapper[4861]: I0310 18:49:53.002518 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3fabebda74f7f7605bc2f261bbf64acac1fe4ec65a267beef027413b454bee3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dbb86a65e5c4fda1cd29396eb9ad02739f5123519d07aef2eea3354f4a65571d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:49:53Z is after 2025-08-24T17:21:41Z" Mar 10 18:49:53 crc kubenswrapper[4861]: I0310 18:49:53.025352 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a665f7a5f04a80f43377d87b895e0d95adae296d12f0826c3d97323e145d602\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-03-10T18:49:53Z is after 2025-08-24T17:21:41Z" Mar 10 18:49:53 crc kubenswrapper[4861]: I0310 18:49:53.045929 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-6lblg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d1c251f4-6539-4aa1-8979-47e74495aca3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3736c6e9da4ea1e91d7046c054bf885a5319e339f205e71fde5cc9cfa5d630ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-r
elease\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2gvk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:49:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-6lblg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-03-10T18:49:53Z is after 2025-08-24T17:21:41Z" Mar 10 18:49:53 crc kubenswrapper[4861]: I0310 18:49:53.062147 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pzmsp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"66631e59-ce5c-44de-8a4d-37eb82acf997\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1293bf47ec5a041886f7065f624cec3b882bb62394a2c448e138d5f7bc3a1c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"hos
t\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pm4t5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:49:27Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pzmsp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:49:53Z is after 2025-08-24T17:21:41Z" Mar 10 18:49:53 crc kubenswrapper[4861]: I0310 18:49:53.079283 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rmmgv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e4302e6-1187-4371-8523-a96ec8032e74\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9439c03c3e58276508f7abe48482c4c69521049dbad5ea7a2e7e7379c8c58914\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vh2b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://77513d5a93f447545a8b2aef460c75d3d73de
32163650349268f203e1ade4b4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vh2b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:49:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-rmmgv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:49:53Z is after 2025-08-24T17:21:41Z" Mar 10 18:49:53 crc kubenswrapper[4861]: I0310 18:49:53.097078 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"745fc7ce-a025-4236-a613-2d2c1661aa2e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://098109abecb73d2ad721a447bdfff2c9e9c2d24b969013d6108b0ac9d1d61e01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc83538f8ad05533765a1730072969d529579cd76ff33a77dc49b0bff15caad5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://02e4029a918a6ce3cba59cacb37169816c0ce5b734d46605756ede5985ccd686\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3cb1fb5ba7aa6d8d7184d85c870898f3b51dd4910da9346b24b7d63f952467d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://3cb1fb5ba7aa6d8d7184d85c870898f3b51dd4910da9346b24b7d63f952467d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:47:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-10T18:47:38Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:47:37Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:49:53Z is after 2025-08-24T17:21:41Z" Mar 10 18:49:53 crc kubenswrapper[4861]: I0310 18:49:53.128420 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"952bf490-6587-4240-a832-feac082e7775\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf6fe1422055e59455ca71c1a22213f55312c80667526cf13dbf61f7ccea7c75\\\",\\\"image\\\":\\\"quay.io/op
enshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd7523ddf0ac38e3b59b767d7ef95f452f24c5914734d6f1ac0187f1bede5fcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b1d8f9a97293ae86fe0b8c9ab76600caf04291ec3ddd2e47a071805bef675da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://efd5039046658f4b595c747b6434cb0ef3befbf75874a6dab629cdbd6634524e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6234508294e7f87d3a8da0d3a1d8ecb96fffc3dbd5974e40a3bb7ceeee0d2a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\
\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34f62205737b2bb279cc7a0e2ffee213392c8135cca742b76e6f7ed44ddca754\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34f62205737b2bb279cc7a0e2ffee213392c8135cca742b76e6f7ed44ddca754\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:47:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-10T18:47:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://49cc08d4ed6e0fd87f4c957414e510f99511b9654a72324b707c165206c4979e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49cc08d4ed6e0fd87f4c957414e510f99511b9654a72324b707c165206c4979e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:47:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-10T18:47:39Z\\\"}}},{\\\"co
ntainerID\\\":\\\"cri-o://0995183765c106ba8369d3c05ac572596329eb1978876b770e327a430963ba9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0995183765c106ba8369d3c05ac572596329eb1978876b770e327a430963ba9b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:47:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-10T18:47:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:47:37Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:49:53Z is after 2025-08-24T17:21:41Z" Mar 10 18:49:53 crc kubenswrapper[4861]: I0310 18:49:53.149738 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:49:53Z is after 2025-08-24T17:21:41Z" Mar 10 18:49:53 crc kubenswrapper[4861]: I0310 18:49:53.171587 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qttbr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"771189c2-452d-4204-a0b7-abfe9ba62bd0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10b29575454354d2a034781fdd40e9972ddd08328eea1a3bb89377fcab4181cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tng72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11c0ae40f0d210a82350ba0ada7a3c9f35595826
a4e6f3d5619230238d00111b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tng72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:49:21Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qttbr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:49:53Z is after 2025-08-24T17:21:41Z" Mar 10 18:49:53 crc kubenswrapper[4861]: I0310 18:49:53.198604 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-j2s27" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"391f4bfa-b94c-4b25-8f06-a2f19f912194\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bbf0e311a00fea576e9f2c739cd2d87aedfc06aaef5cf0c68374bd67241888b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlddg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02cd3948c6ba0e290e32d64b2390da20510d66f20771b9bb0b98957e8900b124\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://02cd3948c6ba0e290e32d64b2390da20510d66f20771b9bb0b98957e8900b124\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:49:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-10T18:49:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlddg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c010603b11cab781dd3c14d84121d60b8115bb415df567ef1c267406b9e6988\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c010603b11cab781dd3c14d84121d60b8115bb415df567ef1c267406b9e6988\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:49:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-10T18:49:37Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlddg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1ba322572d96fe13a88815151416495decdd5f486eb30b3fd4345040c5b9d1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c1ba322572d96fe13a88815151416495decdd5f486eb30b3fd4345040c5b9d1d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:49:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-10T18:49:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlddg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f24e7
4b8fb41e5e765f1ba84b4f1efce28077537df837e4cc308e70b16139732\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f24e74b8fb41e5e765f1ba84b4f1efce28077537df837e4cc308e70b16139732\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:49:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-10T18:49:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlddg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f43980d19ca7b9caf61de8f3257dff536ff78a4cf530f6ec3cb7c91f1a3b5af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5f43980d19ca7b9caf61de8f3257dff536ff78a4cf530f6ec3cb7c91f1a3b5af\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:49:41Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-03-10T18:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlddg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2779899b3339467fdfac0db5ea7b8e4f306ca22205f2c44cc85a2df7fc2cc066\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2779899b3339467fdfac0db5ea7b8e4f306ca22205f2c44cc85a2df7fc2cc066\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:49:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-10T18:49:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlddg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:49:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-j2s27\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:49:53Z is after 2025-08-24T17:21:41Z" Mar 10 18:49:53 crc kubenswrapper[4861]: I0310 18:49:53.219469 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"046b6ea2-2e19-4917-953a-eb8aca6d80cb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df0d545a88d84a8015bafb1061f16f446f63c97065dc88c869a33a24b8e3c22e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da2dbdc794693bb8da08c0fcc84531d65b468d135904b7055fb926fbd0cce95d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-10T18:48:03Z\\\",\\\"message\\\
":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0310 18:47:39.105265 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0310 18:47:39.107559 1 observer_polling.go:159] Starting file observer\\\\nI0310 18:47:39.140119 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0310 18:47:39.145777 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0310 18:48:03.686072 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0310 18:48:03.686208 1 cmd.go:179] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:48:03Z is after 
2026-02-23T05:33:13Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-10T18:47:38Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:48:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://73d88019bcd40296d2d693dfb1ce3bacd2e94ca10a114a00c75392df04099b33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://31bbed2d81ace88f31b763f3b4bed57db657bf9e78413b57b2f279b408e0b848\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:38Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b7156e2106372814a5e2b2816352ee308bb4fdffef5efe8da5bc39d3bc29398\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:47:37Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:49:53Z is after 2025-08-24T17:21:41Z" Mar 10 18:49:53 crc kubenswrapper[4861]: I0310 18:49:53.238793 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:49:53Z is after 2025-08-24T17:21:41Z" Mar 10 18:49:53 crc kubenswrapper[4861]: I0310 18:49:53.254288 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-b87lw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e474cdc9-b374-49a6-aece-afa19f8d5ee6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f06ce6a80200c39385f53697c2f3fbbec862effc04ddf9e0499d9c20761a54e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5t7pf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:49:21Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-b87lw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:49:53Z is after 2025-08-24T17:21:41Z" Mar 10 18:49:53 crc kubenswrapper[4861]: I0310 18:49:53.276940 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c65aa66-5db2-421b-ad46-0ff1e2b1cb22\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://52e87225b434b0800764a5c2306d8079c44bff105d02de78ab085b434f56031f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"r
eady\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a8a6f58ea1d180f50a7ffde2b16f470901281a847bd85cc0bc8a62bbf9f8e70\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://484df0ad2e71b2faec0ed53537512b115e6d4ab7cd3212cd29309538bd013c51\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod
-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2dcc6ee4908d27fc63eb33149c6db9570b8524aab71a27fbb1e5c2fe4e97c52\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1665ca49c2c451e187b70bfc13ce0034d2c07b92943b18e77ae09cd6e5505557\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-10T18:48:50Z\\\",\\\"message\\\":\\\"le observer\\\\nW0310 18:48:50.587139 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0310 18:48:50.587315 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0310 18:48:50.588407 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4176947974/tls.crt::/tmp/serving-cert-4176947974/tls.key\\\\\\\"\\\\nI0310 18:48:50.773986 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0310 18:48:50.776438 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0310 18:48:50.776455 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0310 18:48:50.776477 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0310 18:48:50.776482 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0310 18:48:50.783076 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0310 18:48:50.783088 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery 
information is complete\\\\nW0310 18:48:50.783118 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0310 18:48:50.783134 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0310 18:48:50.783146 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0310 18:48:50.783157 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0310 18:48:50.783167 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0310 18:48:50.783173 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0310 18:48:50.784467 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-10T18:48:50Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44dde00a3ae562bbb5504d299475795cc38b22c2b6decba2ff15067bce7436df\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:39Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"host
IPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a14137bdfec242e37af20a572af2edea25fb1d8a1f9708f8d0193d1a7675b1bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a14137bdfec242e37af20a572af2edea25fb1d8a1f9708f8d0193d1a7675b1bc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:47:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-10T18:47:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:47:37Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:49:53Z is after 2025-08-24T17:21:41Z" Mar 10 18:49:53 crc kubenswrapper[4861]: I0310 18:49:53.297740 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22ae4cf4d7c980d5db6180fb412216b50b2ddb8f25ea2a5fb1e034d47ddfafac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-03-10T18:49:53Z is after 2025-08-24T17:21:41Z" Mar 10 18:49:53 crc kubenswrapper[4861]: I0310 18:49:53.316355 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:49:53Z is after 2025-08-24T17:21:41Z" Mar 10 18:49:53 crc kubenswrapper[4861]: I0310 18:49:53.346770 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-s2l62" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"be820cd7-b3a7-4183-a408-67151247b6ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:22Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:22Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22c9526135e4d6c3ef5cdf25b06f60556a876ace6c81593534be08cfd6a54cdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwtwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7bd3993ae4ddabc6c06a91127afc341760a07401ce3a409612824c0045bb6f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\"
:{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwtwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://edf854dc22368e4b8f76e0111b30784f22832fc69661b4db8d2c9c33aa553773\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwtwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e44e4837f8dba12dfb18ef2200a19f696222668eb9b10819d1c3b442a28f5e32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d209
9482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwtwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ae3cd6b9ef5ede85a70dd7bcf4eb260cc357bfceeb571904e68788eaba0709c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwtwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be4f9c8096f4981a65522e8ee451980e580153c1f5c65c736655fb94593dbd97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174
f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwtwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://745f532b03dad251e58eac660fb1381069e051fdf99d2797df098ce130bb5ea0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\
\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwtwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2cf2a1cfc3438718bb50a53445443bc0251f2cc83f903fca1cccd1048c8f1c20\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwtwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09c620ba70d91a84cf6910e413d790eee8d6427dec9a39be3b706400fcaab656\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09c620ba70d91a84cf6910e413d790eee8d6427dec9a39be3b706400fcaab656\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:49:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-10T18:49:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\
\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwtwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:49:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-s2l62\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:49:53Z is after 2025-08-24T17:21:41Z" Mar 10 18:49:53 crc kubenswrapper[4861]: I0310 18:49:53.363016 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-2rvxn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c06e51d0-e817-41ac-9d69-3ef2099f8ba8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7cjz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7cjz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:49:34Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-2rvxn\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:49:53Z is after 2025-08-24T17:21:41Z" Mar 10 18:49:53 crc kubenswrapper[4861]: I0310 18:49:53.603163 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-s2l62" Mar 10 18:49:53 crc kubenswrapper[4861]: I0310 18:49:53.603226 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-s2l62" Mar 10 18:49:53 crc kubenswrapper[4861]: I0310 18:49:53.635689 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-s2l62" Mar 10 18:49:53 crc kubenswrapper[4861]: I0310 18:49:53.653871 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"046b6ea2-2e19-4917-953a-eb8aca6d80cb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df0d545a88d84a8015bafb1061f16f446f63c97065dc88c869a33a24b8e3c22e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da2dbdc794693bb8da08c0fcc84531d65b468d135904b7055fb926fbd0cce95d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-10T18:48:03Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager 
-v=2\\\\nI0310 18:47:39.105265 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0310 18:47:39.107559 1 observer_polling.go:159] Starting file observer\\\\nI0310 18:47:39.140119 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0310 18:47:39.145777 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0310 18:48:03.686072 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0310 18:48:03.686208 1 cmd.go:179] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:48:03Z is after 
2026-02-23T05:33:13Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-10T18:47:38Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:48:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://73d88019bcd40296d2d693dfb1ce3bacd2e94ca10a114a00c75392df04099b33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://31bbed2d81ace88f31b763f3b4bed57db657bf9e78413b57b2f279b408e0b848\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:38Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b7156e2106372814a5e2b2816352ee308bb4fdffef5efe8da5bc39d3bc29398\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:47:37Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:49:53Z is after 2025-08-24T17:21:41Z" Mar 10 18:49:53 crc kubenswrapper[4861]: I0310 18:49:53.668412 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:49:53Z is after 2025-08-24T17:21:41Z" Mar 10 18:49:53 crc kubenswrapper[4861]: I0310 18:49:53.676804 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-b87lw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e474cdc9-b374-49a6-aece-afa19f8d5ee6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f06ce6a80200c39385f53697c2f3fbbec862effc04ddf9e0499d9c20761a54e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5t7pf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:49:21Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-b87lw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:49:53Z is after 2025-08-24T17:21:41Z" Mar 10 18:49:53 crc kubenswrapper[4861]: I0310 18:49:53.690044 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-2rvxn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c06e51d0-e817-41ac-9d69-3ef2099f8ba8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7cjz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7cjz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:49:34Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-2rvxn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:49:53Z is after 2025-08-24T17:21:41Z" Mar 10 18:49:53 crc 
kubenswrapper[4861]: I0310 18:49:53.708037 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c65aa66-5db2-421b-ad46-0ff1e2b1cb22\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://52e87225b434b0800764a5c2306d8079c44bff105d02de78ab085b434f56031f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a8a6f58ea1d18
0f50a7ffde2b16f470901281a847bd85cc0bc8a62bbf9f8e70\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://484df0ad2e71b2faec0ed53537512b115e6d4ab7cd3212cd29309538bd013c51\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2dcc6ee4908d27fc63eb33149c6db9570b8524aab71a27fbb1e5c2fe4e97c52\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"te
rminated\\\":{\\\"containerID\\\":\\\"cri-o://1665ca49c2c451e187b70bfc13ce0034d2c07b92943b18e77ae09cd6e5505557\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-10T18:48:50Z\\\",\\\"message\\\":\\\"le observer\\\\nW0310 18:48:50.587139 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0310 18:48:50.587315 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0310 18:48:50.588407 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4176947974/tls.crt::/tmp/serving-cert-4176947974/tls.key\\\\\\\"\\\\nI0310 18:48:50.773986 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0310 18:48:50.776438 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0310 18:48:50.776455 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0310 18:48:50.776477 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0310 18:48:50.776482 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0310 18:48:50.783076 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0310 18:48:50.783088 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0310 18:48:50.783118 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0310 18:48:50.783134 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0310 18:48:50.783146 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0310 18:48:50.783157 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0310 18:48:50.783167 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0310 18:48:50.783173 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0310 18:48:50.784467 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-10T18:48:50Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44dde00a3ae562bbb5504d299475795cc38b22c2b6decba2ff15067bce7436df\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:39Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a14137bdfec242e37af20a572af2edea25fb1d8a1f9708f8d0193d1a7675b1bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"
,\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a14137bdfec242e37af20a572af2edea25fb1d8a1f9708f8d0193d1a7675b1bc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:47:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-10T18:47:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:47:37Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:49:53Z is after 2025-08-24T17:21:41Z" Mar 10 18:49:53 crc kubenswrapper[4861]: I0310 18:49:53.724343 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22ae4cf4d7c980d5db6180fb412216b50b2ddb8f25ea2a5fb1e034d47ddfafac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-03-10T18:49:53Z is after 2025-08-24T17:21:41Z" Mar 10 18:49:53 crc kubenswrapper[4861]: I0310 18:49:53.737519 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:49:53Z is after 2025-08-24T17:21:41Z" Mar 10 18:49:53 crc kubenswrapper[4861]: I0310 18:49:53.764866 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-s2l62" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"be820cd7-b3a7-4183-a408-67151247b6ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:22Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22c9526135e4d6c3ef5cdf25b06f60556a876ace6c81593534be08cfd6a54cdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwtwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7bd3993ae4ddabc6c06a91127afc341760a07401ce3a409612824c0045bb6f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwtwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://edf854dc22368e4b8f76e0111b30784f22832fc69661b4db8d2c9c33aa553773\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwtwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e44e4837f8dba12dfb18ef2200a19f696222668eb9b10819d1c3b442a28f5e32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwtwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ae3cd6b9ef5ede85a70dd7bcf4eb260cc357bfceeb571904e68788eaba0709c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwtwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be4f9c8096f4981a65522e8ee451980e580153c1f5c65c736655fb94593dbd97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwtwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://745f532b03dad251e58eac660fb1381069e051fdf99d2797df098ce130bb5ea0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mou
ntPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwtwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2cf2a1cfc3438718bb50a53445443bc0251f2cc83f903fca1cccd1048c8f1c20\\\",\
\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwtwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09c620ba70d91a84cf6910e413d790eee8d6427dec9a39be3b706400fcaab656\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09c620ba70d91a84cf6910e413d790eee8d6427dec9a39be3b706400fcaab656\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:49:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-10T18:49:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\
\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwtwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:49:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-s2l62\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:49:53Z is after 2025-08-24T17:21:41Z" Mar 10 18:49:53 crc kubenswrapper[4861]: I0310 18:49:53.782334 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3fabebda74f7f7605bc2f261bbf64acac1fe4ec65a267beef027413b454bee3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dbb86a65e5c4fda1cd29396eb9ad02739f5123519d07aef2eea3354f4a65571d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:49:53Z is after 2025-08-24T17:21:41Z" Mar 10 18:49:53 crc kubenswrapper[4861]: I0310 18:49:53.798008 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a665f7a5f04a80f43377d87b895e0d95adae296d12f0826c3d97323e145d602\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-03-10T18:49:53Z is after 2025-08-24T17:21:41Z" Mar 10 18:49:53 crc kubenswrapper[4861]: I0310 18:49:53.817477 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-6lblg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d1c251f4-6539-4aa1-8979-47e74495aca3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3736c6e9da4ea1e91d7046c054bf885a5319e339f205e71fde5cc9cfa5d630ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-r
elease\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2gvk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:49:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-6lblg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-03-10T18:49:53Z is after 2025-08-24T17:21:41Z" Mar 10 18:49:53 crc kubenswrapper[4861]: I0310 18:49:53.833046 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pzmsp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"66631e59-ce5c-44de-8a4d-37eb82acf997\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1293bf47ec5a041886f7065f624cec3b882bb62394a2c448e138d5f7bc3a1c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"hos
t\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pm4t5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:49:27Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pzmsp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:49:53Z is after 2025-08-24T17:21:41Z" Mar 10 18:49:53 crc kubenswrapper[4861]: I0310 18:49:53.857996 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rmmgv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e4302e6-1187-4371-8523-a96ec8032e74\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9439c03c3e58276508f7abe48482c4c69521049dbad5ea7a2e7e7379c8c58914\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vh2b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://77513d5a93f447545a8b2aef460c75d3d73de
32163650349268f203e1ade4b4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vh2b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:49:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-rmmgv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:49:53Z is after 2025-08-24T17:21:41Z" Mar 10 18:49:53 crc kubenswrapper[4861]: I0310 18:49:53.873643 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"745fc7ce-a025-4236-a613-2d2c1661aa2e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://098109abecb73d2ad721a447bdfff2c9e9c2d24b969013d6108b0ac9d1d61e01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc83538f8ad05533765a1730072969d529579cd76ff33a77dc49b0bff15caad5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://02e4029a918a6ce3cba59cacb37169816c0ce5b734d46605756ede5985ccd686\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3cb1fb5ba7aa6d8d7184d85c870898f3b51dd4910da9346b24b7d63f952467d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://3cb1fb5ba7aa6d8d7184d85c870898f3b51dd4910da9346b24b7d63f952467d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:47:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-10T18:47:38Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:47:37Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:49:53Z is after 2025-08-24T17:21:41Z" Mar 10 18:49:53 crc kubenswrapper[4861]: I0310 18:49:53.906350 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"952bf490-6587-4240-a832-feac082e7775\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf6fe1422055e59455ca71c1a22213f55312c80667526cf13dbf61f7ccea7c75\\\",\\\"image\\\":\\\"quay.io/op
enshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd7523ddf0ac38e3b59b767d7ef95f452f24c5914734d6f1ac0187f1bede5fcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b1d8f9a97293ae86fe0b8c9ab76600caf04291ec3ddd2e47a071805bef675da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://efd5039046658f4b595c747b6434cb0ef3befbf75874a6dab629cdbd6634524e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6234508294e7f87d3a8da0d3a1d8ecb96fffc3dbd5974e40a3bb7ceeee0d2a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\
\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34f62205737b2bb279cc7a0e2ffee213392c8135cca742b76e6f7ed44ddca754\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34f62205737b2bb279cc7a0e2ffee213392c8135cca742b76e6f7ed44ddca754\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:47:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-10T18:47:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://49cc08d4ed6e0fd87f4c957414e510f99511b9654a72324b707c165206c4979e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49cc08d4ed6e0fd87f4c957414e510f99511b9654a72324b707c165206c4979e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:47:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-10T18:47:39Z\\\"}}},{\\\"co
ntainerID\\\":\\\"cri-o://0995183765c106ba8369d3c05ac572596329eb1978876b770e327a430963ba9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0995183765c106ba8369d3c05ac572596329eb1978876b770e327a430963ba9b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:47:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-10T18:47:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:47:37Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:49:53Z is after 2025-08-24T17:21:41Z" Mar 10 18:49:53 crc kubenswrapper[4861]: I0310 18:49:53.919864 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:49:53Z is after 2025-08-24T17:21:41Z" Mar 10 18:49:53 crc kubenswrapper[4861]: I0310 18:49:53.940228 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qttbr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"771189c2-452d-4204-a0b7-abfe9ba62bd0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10b29575454354d2a034781fdd40e9972ddd08328eea1a3bb89377fcab4181cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tng72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11c0ae40f0d210a82350ba0ada7a3c9f35595826
a4e6f3d5619230238d00111b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tng72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:49:21Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qttbr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:49:53Z is after 2025-08-24T17:21:41Z" Mar 10 18:49:53 crc kubenswrapper[4861]: I0310 18:49:53.957633 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 10 18:49:53 crc kubenswrapper[4861]: I0310 18:49:53.957705 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 10 18:49:53 crc kubenswrapper[4861]: I0310 18:49:53.957766 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-2rvxn" Mar 10 18:49:53 crc kubenswrapper[4861]: I0310 18:49:53.957727 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 10 18:49:53 crc kubenswrapper[4861]: E0310 18:49:53.957825 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 10 18:49:53 crc kubenswrapper[4861]: E0310 18:49:53.957979 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2rvxn" podUID="c06e51d0-e817-41ac-9d69-3ef2099f8ba8" Mar 10 18:49:53 crc kubenswrapper[4861]: E0310 18:49:53.958056 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 10 18:49:53 crc kubenswrapper[4861]: E0310 18:49:53.958196 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 10 18:49:53 crc kubenswrapper[4861]: I0310 18:49:53.966901 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-j2s27" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"391f4bfa-b94c-4b25-8f06-a2f19f912194\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bbf0e311a00fea576e9f2c739cd2d87aedfc06aaef5cf0c68374bd67241888b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95
b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlddg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02cd3948c6ba0e290e32d64b2390da20510d66f20771b9bb0b98957e8900b124\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://02cd3948c6ba0e290e32d64b2390da20510d66f20771b9bb0b98957e8900b124\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:49:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-10T18:49:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlddg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disab
led\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c010603b11cab781dd3c14d84121d60b8115bb415df567ef1c267406b9e6988\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c010603b11cab781dd3c14d84121d60b8115bb415df567ef1c267406b9e6988\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:49:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-10T18:49:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlddg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1ba322572d96fe13a88815151416495decdd5f486eb30b3fd4345040c5b9d1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"term
inated\\\":{\\\"containerID\\\":\\\"cri-o://c1ba322572d96fe13a88815151416495decdd5f486eb30b3fd4345040c5b9d1d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:49:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-10T18:49:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlddg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f24e74b8fb41e5e765f1ba84b4f1efce28077537df837e4cc308e70b16139732\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f24e74b8fb41e5e765f1ba84b4f1efce28077537df837e4cc308e70b16139732\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:49:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-10T18:49:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlddg\\\",\\\"r
eadOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f43980d19ca7b9caf61de8f3257dff536ff78a4cf530f6ec3cb7c91f1a3b5af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5f43980d19ca7b9caf61de8f3257dff536ff78a4cf530f6ec3cb7c91f1a3b5af\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:49:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-10T18:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlddg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2779899b3339467fdfac0db5ea7b8e4f306ca22205f2c44cc85a2df7fc2cc066\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2779899b3339467fdfac0db5ea7b8e4f306ca22205f2c44cc85a2df7fc
2cc066\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:49:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-10T18:49:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlddg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:49:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-j2s27\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:49:53Z is after 2025-08-24T17:21:41Z" Mar 10 18:49:55 crc kubenswrapper[4861]: I0310 18:49:55.612908 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-s2l62_be820cd7-b3a7-4183-a408-67151247b6ee/ovnkube-controller/0.log" Mar 10 18:49:55 crc kubenswrapper[4861]: I0310 18:49:55.617089 4861 generic.go:334] "Generic (PLEG): container finished" podID="be820cd7-b3a7-4183-a408-67151247b6ee" containerID="745f532b03dad251e58eac660fb1381069e051fdf99d2797df098ce130bb5ea0" exitCode=1 Mar 10 18:49:55 crc kubenswrapper[4861]: I0310 18:49:55.617155 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-s2l62" event={"ID":"be820cd7-b3a7-4183-a408-67151247b6ee","Type":"ContainerDied","Data":"745f532b03dad251e58eac660fb1381069e051fdf99d2797df098ce130bb5ea0"} Mar 10 18:49:55 crc kubenswrapper[4861]: I0310 18:49:55.618131 4861 scope.go:117] "RemoveContainer" 
containerID="745f532b03dad251e58eac660fb1381069e051fdf99d2797df098ce130bb5ea0" Mar 10 18:49:55 crc kubenswrapper[4861]: I0310 18:49:55.639928 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3fabebda74f7f7605bc2f261bbf64acac1fe4ec65a267beef027413b454bee3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dbb86a65e5c4fda1cd29396eb9ad02
739f5123519d07aef2eea3354f4a65571d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:49:55Z is after 2025-08-24T17:21:41Z" Mar 10 18:49:55 crc kubenswrapper[4861]: I0310 18:49:55.659478 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a665f7a5f04a80f43377d87b895e0d95adae296d12f0826c3d97323e145d602\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-03-10T18:49:55Z is after 2025-08-24T17:21:41Z" Mar 10 18:49:55 crc kubenswrapper[4861]: I0310 18:49:55.682804 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-6lblg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d1c251f4-6539-4aa1-8979-47e74495aca3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3736c6e9da4ea1e91d7046c054bf885a5319e339f205e71fde5cc9cfa5d630ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-r
elease\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2gvk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:49:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-6lblg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-03-10T18:49:55Z is after 2025-08-24T17:21:41Z" Mar 10 18:49:55 crc kubenswrapper[4861]: I0310 18:49:55.700523 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pzmsp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"66631e59-ce5c-44de-8a4d-37eb82acf997\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1293bf47ec5a041886f7065f624cec3b882bb62394a2c448e138d5f7bc3a1c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"hos
t\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pm4t5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:49:27Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pzmsp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:49:55Z is after 2025-08-24T17:21:41Z" Mar 10 18:49:55 crc kubenswrapper[4861]: I0310 18:49:55.717960 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rmmgv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e4302e6-1187-4371-8523-a96ec8032e74\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9439c03c3e58276508f7abe48482c4c69521049dbad5ea7a2e7e7379c8c58914\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vh2b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://77513d5a93f447545a8b2aef460c75d3d73de
32163650349268f203e1ade4b4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vh2b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:49:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-rmmgv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:49:55Z is after 2025-08-24T17:21:41Z" Mar 10 18:49:55 crc kubenswrapper[4861]: I0310 18:49:55.735641 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"745fc7ce-a025-4236-a613-2d2c1661aa2e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://098109abecb73d2ad721a447bdfff2c9e9c2d24b969013d6108b0ac9d1d61e01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc83538f8ad05533765a1730072969d529579cd76ff33a77dc49b0bff15caad5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://02e4029a918a6ce3cba59cacb37169816c0ce5b734d46605756ede5985ccd686\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3cb1fb5ba7aa6d8d7184d85c870898f3b51dd4910da9346b24b7d63f952467d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://3cb1fb5ba7aa6d8d7184d85c870898f3b51dd4910da9346b24b7d63f952467d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:47:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-10T18:47:38Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:47:37Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:49:55Z is after 2025-08-24T17:21:41Z" Mar 10 18:49:55 crc kubenswrapper[4861]: I0310 18:49:55.769802 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"952bf490-6587-4240-a832-feac082e7775\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf6fe1422055e59455ca71c1a22213f55312c80667526cf13dbf61f7ccea7c75\\\",\\\"image\\\":\\\"quay.io/op
enshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd7523ddf0ac38e3b59b767d7ef95f452f24c5914734d6f1ac0187f1bede5fcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b1d8f9a97293ae86fe0b8c9ab76600caf04291ec3ddd2e47a071805bef675da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://efd5039046658f4b595c747b6434cb0ef3befbf75874a6dab629cdbd6634524e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6234508294e7f87d3a8da0d3a1d8ecb96fffc3dbd5974e40a3bb7ceeee0d2a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\
\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34f62205737b2bb279cc7a0e2ffee213392c8135cca742b76e6f7ed44ddca754\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34f62205737b2bb279cc7a0e2ffee213392c8135cca742b76e6f7ed44ddca754\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:47:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-10T18:47:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://49cc08d4ed6e0fd87f4c957414e510f99511b9654a72324b707c165206c4979e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49cc08d4ed6e0fd87f4c957414e510f99511b9654a72324b707c165206c4979e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:47:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-10T18:47:39Z\\\"}}},{\\\"co
ntainerID\\\":\\\"cri-o://0995183765c106ba8369d3c05ac572596329eb1978876b770e327a430963ba9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0995183765c106ba8369d3c05ac572596329eb1978876b770e327a430963ba9b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:47:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-10T18:47:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:47:37Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:49:55Z is after 2025-08-24T17:21:41Z" Mar 10 18:49:55 crc kubenswrapper[4861]: I0310 18:49:55.794788 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:49:55Z is after 2025-08-24T17:21:41Z" Mar 10 18:49:55 crc kubenswrapper[4861]: I0310 18:49:55.812652 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:49:55 crc kubenswrapper[4861]: I0310 18:49:55.812698 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:49:55 crc kubenswrapper[4861]: I0310 18:49:55.812751 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:49:55 crc kubenswrapper[4861]: I0310 18:49:55.812778 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 10 18:49:55 crc kubenswrapper[4861]: I0310 18:49:55.812799 4861 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:49:55Z","lastTransitionTime":"2026-03-10T18:49:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 10 18:49:55 crc kubenswrapper[4861]: I0310 18:49:55.816107 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qttbr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"771189c2-452d-4204-a0b7-abfe9ba62bd0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10b29575454354d2a034781fdd40e9972ddd08328eea1a3bb89377fcab4181cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-
rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tng72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11c0ae40f0d210a82350ba0ada7a3c9f35595826a4e6f3d5619230238d00111b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tng72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:49:21Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qttbr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current 
time 2026-03-10T18:49:55Z is after 2025-08-24T17:21:41Z" Mar 10 18:49:55 crc kubenswrapper[4861]: E0310 18:49:55.831808 4861 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-10T18:49:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-10T18:49:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:55Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-10T18:49:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-10T18:49:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:55Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"19532032-9073-404f-bda8-c4343aa30670\\\",\\\"systemUUID\\\":\\\"b4ef8d49-23f5-4cae-bbac-08586c607b9d\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:49:55Z is after 2025-08-24T17:21:41Z" Mar 10 18:49:55 crc kubenswrapper[4861]: I0310 18:49:55.836340 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:49:55 crc kubenswrapper[4861]: I0310 18:49:55.836399 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:49:55 crc kubenswrapper[4861]: I0310 18:49:55.836417 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:49:55 crc kubenswrapper[4861]: I0310 18:49:55.836441 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 10 18:49:55 crc kubenswrapper[4861]: I0310 18:49:55.836461 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:49:55Z","lastTransitionTime":"2026-03-10T18:49:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 10 18:49:55 crc kubenswrapper[4861]: I0310 18:49:55.851367 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-j2s27" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"391f4bfa-b94c-4b25-8f06-a2f19f912194\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bbf0e311a00fea576e9f2c739cd2d87aedfc06aaef5cf0c68374bd67241888b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlddg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02cd3948c6ba0e290e32d64b2390da20510d66f20771b9bb0b98957e8900b124\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://02cd3948c6ba0e290e32d64b2390da20510d66f20771b9bb0b98957e8900b124\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:49:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-10T18:49:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlddg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c010603b11cab781dd3c14d84121d60b8115bb415df567ef1c267406b9e6988\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://5c010603b11cab781dd3c14d84121d60b8115bb415df567ef1c267406b9e6988\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:49:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-10T18:49:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlddg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1ba322572d96fe13a88815151416495decdd5f486eb30b3fd4345040c5b9d1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c1ba322572d96fe13a88815151416495decdd5f486eb30b3fd4345040c5b9d1d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:49:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-10T18:49:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlddg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f24e74b8fb41e5e765f1ba84b4f1efce28077537df837e4cc308e70b16139732\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f24e74b8fb41e5e765f1ba84b4f1efce28077537df837e4cc308e70b16139732\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:49:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-10T18:49:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlddg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f43980d19ca7b9caf61de8f3257dff536ff78a4cf530f6ec3cb7c91f1a3b5af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5f43980d19ca7b9caf61de8f3257dff536ff78a4cf530f6ec3cb7c91f1a3b5af\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:49:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-10T18:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlddg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2779899b3339467fdfac0db5ea7b8e4f306ca22205f2c44cc85a2df7fc2cc066\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2779899b3339467fdfac0db5ea7b8e4f306ca22205f2c44cc85a2df7fc2cc066\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:49:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-10T18:49:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlddg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:49:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-j2s27\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:49:55Z is after 2025-08-24T17:21:41Z" Mar 10 18:49:55 crc kubenswrapper[4861]: E0310 18:49:55.857050 4861 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-10T18:49:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-10T18:49:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:55Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-10T18:49:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-10T18:49:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:55Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb617
3ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"reg
istry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"19532032-9073-404f-bda8-c4343aa30670\\\",\\\"systemUUID\\\":\\\"b4ef8d49-23f5-4cae-bbac-08586c607b9d\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:49:55Z is after 2025-08-24T17:21:41Z" Mar 10 18:49:55 crc kubenswrapper[4861]: I0310 18:49:55.864063 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:49:55 crc kubenswrapper[4861]: I0310 18:49:55.864112 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:49:55 crc kubenswrapper[4861]: I0310 18:49:55.864134 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:49:55 crc kubenswrapper[4861]: I0310 18:49:55.864159 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 10 18:49:55 crc kubenswrapper[4861]: I0310 18:49:55.864178 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:49:55Z","lastTransitionTime":"2026-03-10T18:49:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 10 18:49:55 crc kubenswrapper[4861]: I0310 18:49:55.871376 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"046b6ea2-2e19-4917-953a-eb8aca6d80cb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df0d545a88d84a8015bafb1061f16f446f63c97065dc88c869a33a24b8e3c22e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da2dbdc794693bb8da08c0fcc84531d65b468d135904b7055fb926fbd0cce95d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-10T18:48:03Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start 
--config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0310 18:47:39.105265 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0310 18:47:39.107559 1 observer_polling.go:159] Starting file observer\\\\nI0310 18:47:39.140119 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0310 18:47:39.145777 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0310 18:48:03.686072 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0310 18:48:03.686208 1 cmd.go:179] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:48:03Z is after 
2026-02-23T05:33:13Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-10T18:47:38Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:48:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://73d88019bcd40296d2d693dfb1ce3bacd2e94ca10a114a00c75392df04099b33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://31bbed2d81ace88f31b763f3b4bed57db657bf9e78413b57b2f279b408e0b848\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:38Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b7156e2106372814a5e2b2816352ee308bb4fdffef5efe8da5bc39d3bc29398\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:47:37Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:49:55Z is after 2025-08-24T17:21:41Z" Mar 10 18:49:55 crc kubenswrapper[4861]: E0310 18:49:55.882787 4861 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-10T18:49:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-10T18:49:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:55Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-10T18:49:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-10T18:49:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:55Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"19532032-9073-404f-bda8-c4343aa30670\\\",\\\"systemUUID\\\":\\\"b4ef8d49-23f5-4cae-bbac-08586c607b9d\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:49:55Z is after 2025-08-24T17:21:41Z" Mar 10 18:49:55 crc kubenswrapper[4861]: I0310 18:49:55.889606 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:49:55Z is after 2025-08-24T17:21:41Z" Mar 10 18:49:55 crc kubenswrapper[4861]: I0310 18:49:55.890922 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:49:55 crc kubenswrapper[4861]: I0310 18:49:55.890954 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:49:55 crc kubenswrapper[4861]: I0310 18:49:55.890966 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:49:55 crc kubenswrapper[4861]: I0310 18:49:55.890985 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 10 18:49:55 crc kubenswrapper[4861]: I0310 18:49:55.890996 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:49:55Z","lastTransitionTime":"2026-03-10T18:49:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 10 18:49:55 crc kubenswrapper[4861]: I0310 18:49:55.905358 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-b87lw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e474cdc9-b374-49a6-aece-afa19f8d5ee6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f06ce6a80200c39385f53697c2f3fbbec862effc04ddf9e0499d9c20761a54e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var
/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5t7pf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:49:21Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-b87lw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:49:55Z is after 2025-08-24T17:21:41Z" Mar 10 18:49:55 crc kubenswrapper[4861]: E0310 18:49:55.909291 4861 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-10T18:49:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-10T18:49:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:55Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-10T18:49:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-10T18:49:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:55Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2
ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9810067
4616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.
io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a07
2c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa73
83b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"19532032-9073-404f-bda8-c4343aa30670\\\",\\\"systemUUID\\\":\\\"b4ef8d49-23f5-4cae-bbac-08586c607b9d\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:49:55Z is after 2025-08-24T17:21:41Z" Mar 10 18:49:55 crc kubenswrapper[4861]: I0310 18:49:55.916873 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:49:55 crc kubenswrapper[4861]: I0310 18:49:55.916935 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:49:55 crc kubenswrapper[4861]: I0310 18:49:55.916955 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:49:55 crc kubenswrapper[4861]: I0310 18:49:55.916980 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 10 18:49:55 crc kubenswrapper[4861]: I0310 18:49:55.916997 4861 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:49:55Z","lastTransitionTime":"2026-03-10T18:49:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 10 18:49:55 crc kubenswrapper[4861]: I0310 18:49:55.926544 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c65aa66-5db2-421b-ad46-0ff1e2b1cb22\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://52e87225b434b0800764a5c2306d8079c44bff105d02de78ab085b434f56031f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"r
unning\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a8a6f58ea1d180f50a7ffde2b16f470901281a847bd85cc0bc8a62bbf9f8e70\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://484df0ad2e71b2faec0ed53537512b115e6d4ab7cd3212cd29309538bd013c51\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b
2dcc6ee4908d27fc63eb33149c6db9570b8524aab71a27fbb1e5c2fe4e97c52\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1665ca49c2c451e187b70bfc13ce0034d2c07b92943b18e77ae09cd6e5505557\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-10T18:48:50Z\\\",\\\"message\\\":\\\"le observer\\\\nW0310 18:48:50.587139 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0310 18:48:50.587315 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0310 18:48:50.588407 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4176947974/tls.crt::/tmp/serving-cert-4176947974/tls.key\\\\\\\"\\\\nI0310 18:48:50.773986 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0310 18:48:50.776438 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0310 18:48:50.776455 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0310 18:48:50.776477 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0310 18:48:50.776482 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0310 18:48:50.783076 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0310 18:48:50.783088 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0310 18:48:50.783118 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0310 18:48:50.783134 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0310 18:48:50.783146 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0310 18:48:50.783157 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0310 18:48:50.783167 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0310 18:48:50.783173 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0310 18:48:50.784467 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-10T18:48:50Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44dde00a3ae562bbb5504d299475795cc38b22c2b6decba2ff15067bce7436df\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:39Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\
"containerID\\\":\\\"cri-o://a14137bdfec242e37af20a572af2edea25fb1d8a1f9708f8d0193d1a7675b1bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a14137bdfec242e37af20a572af2edea25fb1d8a1f9708f8d0193d1a7675b1bc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:47:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-10T18:47:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:47:37Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:49:55Z is after 2025-08-24T17:21:41Z" Mar 10 18:49:55 crc kubenswrapper[4861]: E0310 18:49:55.943185 4861 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-10T18:49:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-10T18:49:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:55Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-10T18:49:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-10T18:49:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:55Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"19532032-9073-404f-bda8-c4343aa30670\\\",\\\"systemUUID\\\":\\\"b4ef8d49-23f5-4cae-bbac-08586c607b9d\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:49:55Z is after 2025-08-24T17:21:41Z" Mar 10 18:49:55 crc kubenswrapper[4861]: E0310 18:49:55.943420 4861 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Mar 10 18:49:55 crc kubenswrapper[4861]: I0310 18:49:55.946548 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22ae4cf4d7c980d5db6180fb412216b50b2ddb8f25ea2a5fb1e034d47ddfafac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:49Z\\\"}},\\\"volume
Mounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:49:55Z is after 2025-08-24T17:21:41Z" Mar 10 18:49:55 crc kubenswrapper[4861]: I0310 18:49:55.957742 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 10 18:49:55 crc kubenswrapper[4861]: I0310 18:49:55.957830 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2rvxn" Mar 10 18:49:55 crc kubenswrapper[4861]: E0310 18:49:55.957909 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 10 18:49:55 crc kubenswrapper[4861]: I0310 18:49:55.957955 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 10 18:49:55 crc kubenswrapper[4861]: I0310 18:49:55.958172 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 10 18:49:55 crc kubenswrapper[4861]: E0310 18:49:55.958154 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2rvxn" podUID="c06e51d0-e817-41ac-9d69-3ef2099f8ba8" Mar 10 18:49:55 crc kubenswrapper[4861]: E0310 18:49:55.958263 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 10 18:49:55 crc kubenswrapper[4861]: E0310 18:49:55.958351 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 10 18:49:55 crc kubenswrapper[4861]: I0310 18:49:55.964874 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:49:55Z is after 2025-08-24T17:21:41Z" Mar 10 18:49:55 crc kubenswrapper[4861]: I0310 18:49:55.994982 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-s2l62" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"be820cd7-b3a7-4183-a408-67151247b6ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:22Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22c9526135e4d6c3ef5cdf25b06f60556a876ace6c81593534be08cfd6a54cdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwtwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7bd3993ae4ddabc6c06a91127afc341760a07401ce3a409612824c0045bb6f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwtwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://edf854dc22368e4b8f76e0111b30784f22832fc69661b4db8d2c9c33aa553773\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwtwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e44e4837f8dba12dfb18ef2200a19f696222668eb9b10819d1c3b442a28f5e32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwtwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ae3cd6b9ef5ede85a70dd7bcf4eb260cc357bfceeb571904e68788eaba0709c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwtwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be4f9c8096f4981a65522e8ee451980e580153c1f5c65c736655fb94593dbd97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwtwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://745f532b03dad251e58eac660fb1381069e051fdf99d2797df098ce130bb5ea0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://745f532b03dad251e58eac660fb1381069e051fdf99d2797df098ce130bb5ea0\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-10T18
:49:54Z\\\",\\\"message\\\":\\\"7\\\\nI0310 18:49:54.900530 6839 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0310 18:49:54.900565 6839 reflector.go:311] Stopping reflector *v1.NetworkAttachmentDefinition (0s) from github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117\\\\nI0310 18:49:54.900595 6839 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0310 18:49:54.900689 6839 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0310 18:49:54.900789 6839 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0310 18:49:54.901040 6839 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0310 18:49:54.901287 6839 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from 
github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-10T18:49:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overr
ides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwtwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2cf2a1cfc3438718bb50a53445443bc0251f2cc83f903fca1cccd1048c8f1c20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwtwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09c620ba70d91a84cf6910e413d790eee8d6427dec9a39be3b706400fcaab656\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\
\":{\\\"containerID\\\":\\\"cri-o://09c620ba70d91a84cf6910e413d790eee8d6427dec9a39be3b706400fcaab656\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:49:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-10T18:49:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwtwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:49:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-s2l62\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:49:55Z is after 2025-08-24T17:21:41Z" Mar 10 18:49:56 crc kubenswrapper[4861]: I0310 18:49:56.008676 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-2rvxn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c06e51d0-e817-41ac-9d69-3ef2099f8ba8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7cjz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7cjz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:49:34Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-2rvxn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:49:56Z is after 2025-08-24T17:21:41Z" Mar 10 18:49:56 crc 
kubenswrapper[4861]: I0310 18:49:56.623213 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-s2l62_be820cd7-b3a7-4183-a408-67151247b6ee/ovnkube-controller/0.log" Mar 10 18:49:56 crc kubenswrapper[4861]: I0310 18:49:56.626818 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-s2l62" event={"ID":"be820cd7-b3a7-4183-a408-67151247b6ee","Type":"ContainerStarted","Data":"e5ab01d21c22678fbaad3bec98312cca6e30cf9ca8d37da3636dda56304f2628"} Mar 10 18:49:56 crc kubenswrapper[4861]: I0310 18:49:56.627497 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-s2l62" Mar 10 18:49:56 crc kubenswrapper[4861]: I0310 18:49:56.648580 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3fabebda74f7f7605bc2f261bbf64acac1fe4ec65a267beef027413b454bee3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dbb86a65e5c4fda1cd29396eb9ad02739f5123519d07aef2eea3354f4a65571d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:49:56Z is after 
2025-08-24T17:21:41Z" Mar 10 18:49:56 crc kubenswrapper[4861]: I0310 18:49:56.666012 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a665f7a5f04a80f43377d87b895e0d95adae296d12f0826c3d97323e145d602\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:49:56Z is after 2025-08-24T17:21:41Z" Mar 10 18:49:56 crc kubenswrapper[4861]: I0310 18:49:56.685978 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-6lblg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d1c251f4-6539-4aa1-8979-47e74495aca3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3736c6e9da4ea1e91d7046c054bf885a5319e339f205e71fde5cc9cfa5d630ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2gvk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:49:21Z\\\"
}}\" for pod \"openshift-multus\"/\"multus-6lblg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:49:56Z is after 2025-08-24T17:21:41Z" Mar 10 18:49:56 crc kubenswrapper[4861]: I0310 18:49:56.699978 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pzmsp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"66631e59-ce5c-44de-8a4d-37eb82acf997\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1293bf47ec5a041886f7065f624cec3b882bb62394a2c448e138d5f7bc3a1c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCou
nt\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pm4t5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:49:27Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pzmsp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:49:56Z is after 2025-08-24T17:21:41Z" Mar 10 18:49:56 crc kubenswrapper[4861]: I0310 18:49:56.722524 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rmmgv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e4302e6-1187-4371-8523-a96ec8032e74\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9439c03c3e58276508f7abe48482c4c69521049dbad5ea7a2e7e7379c8c58914\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vh2b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://77513d5a93f447545a8b2aef460c75d3d73de
32163650349268f203e1ade4b4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vh2b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:49:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-rmmgv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:49:56Z is after 2025-08-24T17:21:41Z" Mar 10 18:49:56 crc kubenswrapper[4861]: I0310 18:49:56.751873 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"745fc7ce-a025-4236-a613-2d2c1661aa2e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://098109abecb73d2ad721a447bdfff2c9e9c2d24b969013d6108b0ac9d1d61e01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc83538f8ad05533765a1730072969d529579cd76ff33a77dc49b0bff15caad5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://02e4029a918a6ce3cba59cacb37169816c0ce5b734d46605756ede5985ccd686\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3cb1fb5ba7aa6d8d7184d85c870898f3b51dd4910da9346b24b7d63f952467d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://3cb1fb5ba7aa6d8d7184d85c870898f3b51dd4910da9346b24b7d63f952467d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:47:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-10T18:47:38Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:47:37Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:49:56Z is after 2025-08-24T17:21:41Z" Mar 10 18:49:56 crc kubenswrapper[4861]: I0310 18:49:56.778364 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"952bf490-6587-4240-a832-feac082e7775\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf6fe1422055e59455ca71c1a22213f55312c80667526cf13dbf61f7ccea7c75\\\",\\\"image\\\":\\\"quay.io/op
enshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd7523ddf0ac38e3b59b767d7ef95f452f24c5914734d6f1ac0187f1bede5fcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b1d8f9a97293ae86fe0b8c9ab76600caf04291ec3ddd2e47a071805bef675da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://efd5039046658f4b595c747b6434cb0ef3befbf75874a6dab629cdbd6634524e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6234508294e7f87d3a8da0d3a1d8ecb96fffc3dbd5974e40a3bb7ceeee0d2a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\
\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34f62205737b2bb279cc7a0e2ffee213392c8135cca742b76e6f7ed44ddca754\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34f62205737b2bb279cc7a0e2ffee213392c8135cca742b76e6f7ed44ddca754\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:47:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-10T18:47:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://49cc08d4ed6e0fd87f4c957414e510f99511b9654a72324b707c165206c4979e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49cc08d4ed6e0fd87f4c957414e510f99511b9654a72324b707c165206c4979e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:47:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-10T18:47:39Z\\\"}}},{\\\"co
ntainerID\\\":\\\"cri-o://0995183765c106ba8369d3c05ac572596329eb1978876b770e327a430963ba9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0995183765c106ba8369d3c05ac572596329eb1978876b770e327a430963ba9b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:47:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-10T18:47:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:47:37Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:49:56Z is after 2025-08-24T17:21:41Z" Mar 10 18:49:56 crc kubenswrapper[4861]: I0310 18:49:56.801228 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:49:56Z is after 2025-08-24T17:21:41Z" Mar 10 18:49:56 crc kubenswrapper[4861]: I0310 18:49:56.814457 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qttbr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"771189c2-452d-4204-a0b7-abfe9ba62bd0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10b29575454354d2a034781fdd40e9972ddd08328eea1a3bb89377fcab4181cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tng72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11c0ae40f0d210a82350ba0ada7a3c9f35595826
a4e6f3d5619230238d00111b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tng72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:49:21Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qttbr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:49:56Z is after 2025-08-24T17:21:41Z" Mar 10 18:49:56 crc kubenswrapper[4861]: I0310 18:49:56.835353 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-j2s27" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"391f4bfa-b94c-4b25-8f06-a2f19f912194\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bbf0e311a00fea576e9f2c739cd2d87aedfc06aaef5cf0c68374bd67241888b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlddg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02cd3948c6ba0e290e32d64b2390da20510d66f20771b9bb0b98957e8900b124\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://02cd3948c6ba0e290e32d64b2390da20510d66f20771b9bb0b98957e8900b124\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:49:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-10T18:49:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlddg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c010603b11cab781dd3c14d84121d60b8115bb415df567ef1c267406b9e6988\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c010603b11cab781dd3c14d84121d60b8115bb415df567ef1c267406b9e6988\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:49:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-10T18:49:37Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlddg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1ba322572d96fe13a88815151416495decdd5f486eb30b3fd4345040c5b9d1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c1ba322572d96fe13a88815151416495decdd5f486eb30b3fd4345040c5b9d1d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:49:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-10T18:49:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlddg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f24e7
4b8fb41e5e765f1ba84b4f1efce28077537df837e4cc308e70b16139732\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f24e74b8fb41e5e765f1ba84b4f1efce28077537df837e4cc308e70b16139732\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:49:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-10T18:49:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlddg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f43980d19ca7b9caf61de8f3257dff536ff78a4cf530f6ec3cb7c91f1a3b5af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5f43980d19ca7b9caf61de8f3257dff536ff78a4cf530f6ec3cb7c91f1a3b5af\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:49:41Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-03-10T18:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlddg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2779899b3339467fdfac0db5ea7b8e4f306ca22205f2c44cc85a2df7fc2cc066\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2779899b3339467fdfac0db5ea7b8e4f306ca22205f2c44cc85a2df7fc2cc066\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:49:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-10T18:49:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlddg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:49:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-j2s27\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:49:56Z is after 2025-08-24T17:21:41Z" Mar 10 18:49:56 crc kubenswrapper[4861]: I0310 18:49:56.852995 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"046b6ea2-2e19-4917-953a-eb8aca6d80cb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df0d545a88d84a8015bafb1061f16f446f63c97065dc88c869a33a24b8e3c22e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da2dbdc794693bb8da08c0fcc84531d65b468d135904b7055fb926fbd0cce95d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-10T18:48:03Z\\\",\\\"message\\\
":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0310 18:47:39.105265 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0310 18:47:39.107559 1 observer_polling.go:159] Starting file observer\\\\nI0310 18:47:39.140119 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0310 18:47:39.145777 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0310 18:48:03.686072 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0310 18:48:03.686208 1 cmd.go:179] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:48:03Z is after 
2026-02-23T05:33:13Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-10T18:47:38Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:48:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://73d88019bcd40296d2d693dfb1ce3bacd2e94ca10a114a00c75392df04099b33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://31bbed2d81ace88f31b763f3b4bed57db657bf9e78413b57b2f279b408e0b848\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:38Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b7156e2106372814a5e2b2816352ee308bb4fdffef5efe8da5bc39d3bc29398\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:47:37Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:49:56Z is after 2025-08-24T17:21:41Z" Mar 10 18:49:56 crc kubenswrapper[4861]: I0310 18:49:56.874527 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:49:56Z is after 2025-08-24T17:21:41Z" Mar 10 18:49:56 crc kubenswrapper[4861]: I0310 18:49:56.889136 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-b87lw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e474cdc9-b374-49a6-aece-afa19f8d5ee6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f06ce6a80200c39385f53697c2f3fbbec862effc04ddf9e0499d9c20761a54e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5t7pf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:49:21Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-b87lw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:49:56Z is after 2025-08-24T17:21:41Z" Mar 10 18:49:56 crc kubenswrapper[4861]: I0310 18:49:56.906253 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c65aa66-5db2-421b-ad46-0ff1e2b1cb22\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://52e87225b434b0800764a5c2306d8079c44bff105d02de78ab085b434f56031f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"r
eady\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a8a6f58ea1d180f50a7ffde2b16f470901281a847bd85cc0bc8a62bbf9f8e70\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://484df0ad2e71b2faec0ed53537512b115e6d4ab7cd3212cd29309538bd013c51\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod
-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2dcc6ee4908d27fc63eb33149c6db9570b8524aab71a27fbb1e5c2fe4e97c52\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1665ca49c2c451e187b70bfc13ce0034d2c07b92943b18e77ae09cd6e5505557\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-10T18:48:50Z\\\",\\\"message\\\":\\\"le observer\\\\nW0310 18:48:50.587139 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0310 18:48:50.587315 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0310 18:48:50.588407 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4176947974/tls.crt::/tmp/serving-cert-4176947974/tls.key\\\\\\\"\\\\nI0310 18:48:50.773986 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0310 18:48:50.776438 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0310 18:48:50.776455 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0310 18:48:50.776477 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0310 18:48:50.776482 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0310 18:48:50.783076 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0310 18:48:50.783088 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery 
information is complete\\\\nW0310 18:48:50.783118 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0310 18:48:50.783134 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0310 18:48:50.783146 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0310 18:48:50.783157 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0310 18:48:50.783167 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0310 18:48:50.783173 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0310 18:48:50.784467 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-10T18:48:50Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44dde00a3ae562bbb5504d299475795cc38b22c2b6decba2ff15067bce7436df\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:39Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"host
IPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a14137bdfec242e37af20a572af2edea25fb1d8a1f9708f8d0193d1a7675b1bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a14137bdfec242e37af20a572af2edea25fb1d8a1f9708f8d0193d1a7675b1bc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:47:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-10T18:47:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:47:37Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:49:56Z is after 2025-08-24T17:21:41Z" Mar 10 18:49:56 crc kubenswrapper[4861]: I0310 18:49:56.924502 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22ae4cf4d7c980d5db6180fb412216b50b2ddb8f25ea2a5fb1e034d47ddfafac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-03-10T18:49:56Z is after 2025-08-24T17:21:41Z" Mar 10 18:49:56 crc kubenswrapper[4861]: I0310 18:49:56.942788 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:49:56Z is after 2025-08-24T17:21:41Z" Mar 10 18:49:56 crc kubenswrapper[4861]: I0310 18:49:56.964947 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-s2l62" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"be820cd7-b3a7-4183-a408-67151247b6ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:22Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22c9526135e4d6c3ef5cdf25b06f60556a876ace6c81593534be08cfd6a54cdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwtwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7bd3993ae4ddabc6c06a91127afc341760a07401ce3a409612824c0045bb6f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwtwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://edf854dc22368e4b8f76e0111b30784f22832fc69661b4db8d2c9c33aa553773\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwtwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e44e4837f8dba12dfb18ef2200a19f696222668eb9b10819d1c3b442a28f5e32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwtwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ae3cd6b9ef5ede85a70dd7bcf4eb260cc357bfceeb571904e68788eaba0709c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwtwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be4f9c8096f4981a65522e8ee451980e580153c1f5c65c736655fb94593dbd97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwtwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5ab01d21c22678fbaad3bec98312cca6e30cf9ca8d37da3636dda56304f2628\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://745f532b03dad251e58eac660fb1381069e051fdf99d2797df098ce130bb5ea0\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-10T18:49:54Z\\\",\\\"message\\\":\\\"7\\\\nI0310 18:49:54.900530 6839 reflector.go:311] Stopping reflector *v1.EgressFirewall 
(0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0310 18:49:54.900565 6839 reflector.go:311] Stopping reflector *v1.NetworkAttachmentDefinition (0s) from github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117\\\\nI0310 18:49:54.900595 6839 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0310 18:49:54.900689 6839 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0310 18:49:54.900789 6839 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0310 18:49:54.901040 6839 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0310 18:49:54.901287 6839 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from 
github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-10T18:49:52Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",
\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwtwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2cf2a1cfc3438718bb50a53445443bc0251f2cc83f903fca1cccd1048c8f1c20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwtwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09c620ba70d91a84cf6910e413d790eee8d6427dec9a39be3b706400fcaab656\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d7732574
53265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09c620ba70d91a84cf6910e413d790eee8d6427dec9a39be3b706400fcaab656\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:49:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-10T18:49:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwtwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:49:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-s2l62\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:49:56Z is after 2025-08-24T17:21:41Z" Mar 10 18:49:56 crc kubenswrapper[4861]: I0310 18:49:56.982125 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-2rvxn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c06e51d0-e817-41ac-9d69-3ef2099f8ba8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7cjz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7cjz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:49:34Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-2rvxn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:49:56Z is after 2025-08-24T17:21:41Z" Mar 10 18:49:57 crc 
kubenswrapper[4861]: I0310 18:49:57.010015 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c65aa66-5db2-421b-ad46-0ff1e2b1cb22\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://52e87225b434b0800764a5c2306d8079c44bff105d02de78ab085b434f56031f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a8a6f58ea1d18
0f50a7ffde2b16f470901281a847bd85cc0bc8a62bbf9f8e70\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://484df0ad2e71b2faec0ed53537512b115e6d4ab7cd3212cd29309538bd013c51\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2dcc6ee4908d27fc63eb33149c6db9570b8524aab71a27fbb1e5c2fe4e97c52\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"te
rminated\\\":{\\\"containerID\\\":\\\"cri-o://1665ca49c2c451e187b70bfc13ce0034d2c07b92943b18e77ae09cd6e5505557\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-10T18:48:50Z\\\",\\\"message\\\":\\\"le observer\\\\nW0310 18:48:50.587139 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0310 18:48:50.587315 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0310 18:48:50.588407 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4176947974/tls.crt::/tmp/serving-cert-4176947974/tls.key\\\\\\\"\\\\nI0310 18:48:50.773986 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0310 18:48:50.776438 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0310 18:48:50.776455 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0310 18:48:50.776477 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0310 18:48:50.776482 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0310 18:48:50.783076 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0310 18:48:50.783088 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0310 18:48:50.783118 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0310 18:48:50.783134 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0310 18:48:50.783146 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0310 18:48:50.783157 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0310 18:48:50.783167 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0310 18:48:50.783173 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0310 18:48:50.784467 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-10T18:48:50Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44dde00a3ae562bbb5504d299475795cc38b22c2b6decba2ff15067bce7436df\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:39Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a14137bdfec242e37af20a572af2edea25fb1d8a1f9708f8d0193d1a7675b1bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"
,\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a14137bdfec242e37af20a572af2edea25fb1d8a1f9708f8d0193d1a7675b1bc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:47:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-10T18:47:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:47:37Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:49:57Z is after 2025-08-24T17:21:41Z" Mar 10 18:49:57 crc kubenswrapper[4861]: I0310 18:49:57.033684 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22ae4cf4d7c980d5db6180fb412216b50b2ddb8f25ea2a5fb1e034d47ddfafac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-03-10T18:49:57Z is after 2025-08-24T17:21:41Z" Mar 10 18:49:57 crc kubenswrapper[4861]: I0310 18:49:57.058371 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:49:57Z is after 2025-08-24T17:21:41Z" Mar 10 18:49:57 crc kubenswrapper[4861]: I0310 18:49:57.085518 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-s2l62" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"be820cd7-b3a7-4183-a408-67151247b6ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:22Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22c9526135e4d6c3ef5cdf25b06f60556a876ace6c81593534be08cfd6a54cdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwtwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7bd3993ae4ddabc6c06a91127afc341760a07401ce3a409612824c0045bb6f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwtwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://edf854dc22368e4b8f76e0111b30784f22832fc69661b4db8d2c9c33aa553773\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwtwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e44e4837f8dba12dfb18ef2200a19f696222668eb9b10819d1c3b442a28f5e32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwtwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ae3cd6b9ef5ede85a70dd7bcf4eb260cc357bfceeb571904e68788eaba0709c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwtwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be4f9c8096f4981a65522e8ee451980e580153c1f5c65c736655fb94593dbd97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwtwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5ab01d21c22678fbaad3bec98312cca6e30cf9ca8d37da3636dda56304f2628\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://745f532b03dad251e58eac660fb1381069e051fdf99d2797df098ce130bb5ea0\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-10T18:49:54Z\\\",\\\"message\\\":\\\"7\\\\nI0310 18:49:54.900530 6839 reflector.go:311] Stopping reflector *v1.EgressFirewall 
(0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0310 18:49:54.900565 6839 reflector.go:311] Stopping reflector *v1.NetworkAttachmentDefinition (0s) from github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117\\\\nI0310 18:49:54.900595 6839 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0310 18:49:54.900689 6839 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0310 18:49:54.900789 6839 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0310 18:49:54.901040 6839 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0310 18:49:54.901287 6839 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from 
github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-10T18:49:52Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",
\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwtwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2cf2a1cfc3438718bb50a53445443bc0251f2cc83f903fca1cccd1048c8f1c20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwtwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09c620ba70d91a84cf6910e413d790eee8d6427dec9a39be3b706400fcaab656\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d7732574
53265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09c620ba70d91a84cf6910e413d790eee8d6427dec9a39be3b706400fcaab656\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:49:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-10T18:49:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwtwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:49:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-s2l62\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:49:57Z is after 2025-08-24T17:21:41Z" Mar 10 18:49:57 crc kubenswrapper[4861]: I0310 18:49:57.101385 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-2rvxn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c06e51d0-e817-41ac-9d69-3ef2099f8ba8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7cjz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7cjz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:49:34Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-2rvxn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:49:57Z is after 2025-08-24T17:21:41Z" Mar 10 18:49:57 crc 
kubenswrapper[4861]: E0310 18:49:57.117951 4861 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Mar 10 18:49:57 crc kubenswrapper[4861]: I0310 18:49:57.124287 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3fabebda74f7f7605bc2f261bbf64acac1fe4ec65a267beef027413b454bee3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/ru
n/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dbb86a65e5c4fda1cd29396eb9ad02739f5123519d07aef2eea3354f4a65571d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:49:57Z is after 2025-08-24T17:21:41Z" Mar 10 18:49:57 crc kubenswrapper[4861]: I0310 18:49:57.141919 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a665f7a5f04a80f43377d87b895e0d95adae296d12f0826c3d97323e145d602\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-03-10T18:49:57Z is after 2025-08-24T17:21:41Z" Mar 10 18:49:57 crc kubenswrapper[4861]: I0310 18:49:57.163219 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-6lblg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d1c251f4-6539-4aa1-8979-47e74495aca3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3736c6e9da4ea1e91d7046c054bf885a5319e339f205e71fde5cc9cfa5d630ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-r
elease\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2gvk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:49:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-6lblg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-03-10T18:49:57Z is after 2025-08-24T17:21:41Z" Mar 10 18:49:57 crc kubenswrapper[4861]: I0310 18:49:57.180983 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pzmsp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"66631e59-ce5c-44de-8a4d-37eb82acf997\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1293bf47ec5a041886f7065f624cec3b882bb62394a2c448e138d5f7bc3a1c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"hos
t\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pm4t5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:49:27Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pzmsp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:49:57Z is after 2025-08-24T17:21:41Z" Mar 10 18:49:57 crc kubenswrapper[4861]: I0310 18:49:57.195676 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rmmgv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e4302e6-1187-4371-8523-a96ec8032e74\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9439c03c3e58276508f7abe48482c4c69521049dbad5ea7a2e7e7379c8c58914\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vh2b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://77513d5a93f447545a8b2aef460c75d3d73de
32163650349268f203e1ade4b4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vh2b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:49:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-rmmgv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:49:57Z is after 2025-08-24T17:21:41Z" Mar 10 18:49:57 crc kubenswrapper[4861]: I0310 18:49:57.209471 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"745fc7ce-a025-4236-a613-2d2c1661aa2e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://098109abecb73d2ad721a447bdfff2c9e9c2d24b969013d6108b0ac9d1d61e01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc83538f8ad05533765a1730072969d529579cd76ff33a77dc49b0bff15caad5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://02e4029a918a6ce3cba59cacb37169816c0ce5b734d46605756ede5985ccd686\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3cb1fb5ba7aa6d8d7184d85c870898f3b51dd4910da9346b24b7d63f952467d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://3cb1fb5ba7aa6d8d7184d85c870898f3b51dd4910da9346b24b7d63f952467d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:47:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-10T18:47:38Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:47:37Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:49:57Z is after 2025-08-24T17:21:41Z" Mar 10 18:49:57 crc kubenswrapper[4861]: I0310 18:49:57.229283 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"952bf490-6587-4240-a832-feac082e7775\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf6fe1422055e59455ca71c1a22213f55312c80667526cf13dbf61f7ccea7c75\\\",\\\"image\\\":\\\"quay.io/op
enshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd7523ddf0ac38e3b59b767d7ef95f452f24c5914734d6f1ac0187f1bede5fcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b1d8f9a97293ae86fe0b8c9ab76600caf04291ec3ddd2e47a071805bef675da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://efd5039046658f4b595c747b6434cb0ef3befbf75874a6dab629cdbd6634524e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6234508294e7f87d3a8da0d3a1d8ecb96fffc3dbd5974e40a3bb7ceeee0d2a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\
\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34f62205737b2bb279cc7a0e2ffee213392c8135cca742b76e6f7ed44ddca754\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34f62205737b2bb279cc7a0e2ffee213392c8135cca742b76e6f7ed44ddca754\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:47:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-10T18:47:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://49cc08d4ed6e0fd87f4c957414e510f99511b9654a72324b707c165206c4979e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49cc08d4ed6e0fd87f4c957414e510f99511b9654a72324b707c165206c4979e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:47:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-10T18:47:39Z\\\"}}},{\\\"co
ntainerID\\\":\\\"cri-o://0995183765c106ba8369d3c05ac572596329eb1978876b770e327a430963ba9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0995183765c106ba8369d3c05ac572596329eb1978876b770e327a430963ba9b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:47:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-10T18:47:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:47:37Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:49:57Z is after 2025-08-24T17:21:41Z" Mar 10 18:49:57 crc kubenswrapper[4861]: I0310 18:49:57.249330 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:49:57Z is after 2025-08-24T17:21:41Z" Mar 10 18:49:57 crc kubenswrapper[4861]: I0310 18:49:57.265496 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qttbr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"771189c2-452d-4204-a0b7-abfe9ba62bd0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10b29575454354d2a034781fdd40e9972ddd08328eea1a3bb89377fcab4181cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tng72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11c0ae40f0d210a82350ba0ada7a3c9f35595826
a4e6f3d5619230238d00111b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tng72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:49:21Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qttbr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:49:57Z is after 2025-08-24T17:21:41Z" Mar 10 18:49:57 crc kubenswrapper[4861]: I0310 18:49:57.282135 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-j2s27" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"391f4bfa-b94c-4b25-8f06-a2f19f912194\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bbf0e311a00fea576e9f2c739cd2d87aedfc06aaef5cf0c68374bd67241888b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlddg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02cd3948c6ba0e290e32d64b2390da20510d66f20771b9bb0b98957e8900b124\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://02cd3948c6ba0e290e32d64b2390da20510d66f20771b9bb0b98957e8900b124\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:49:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-10T18:49:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlddg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c010603b11cab781dd3c14d84121d60b8115bb415df567ef1c267406b9e6988\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c010603b11cab781dd3c14d84121d60b8115bb415df567ef1c267406b9e6988\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:49:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-10T18:49:37Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlddg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1ba322572d96fe13a88815151416495decdd5f486eb30b3fd4345040c5b9d1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c1ba322572d96fe13a88815151416495decdd5f486eb30b3fd4345040c5b9d1d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:49:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-10T18:49:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlddg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f24e7
4b8fb41e5e765f1ba84b4f1efce28077537df837e4cc308e70b16139732\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f24e74b8fb41e5e765f1ba84b4f1efce28077537df837e4cc308e70b16139732\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:49:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-10T18:49:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlddg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f43980d19ca7b9caf61de8f3257dff536ff78a4cf530f6ec3cb7c91f1a3b5af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5f43980d19ca7b9caf61de8f3257dff536ff78a4cf530f6ec3cb7c91f1a3b5af\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:49:41Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-03-10T18:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlddg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2779899b3339467fdfac0db5ea7b8e4f306ca22205f2c44cc85a2df7fc2cc066\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2779899b3339467fdfac0db5ea7b8e4f306ca22205f2c44cc85a2df7fc2cc066\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:49:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-10T18:49:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlddg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:49:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-j2s27\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:49:57Z is after 2025-08-24T17:21:41Z" Mar 10 18:49:57 crc kubenswrapper[4861]: I0310 18:49:57.300164 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"046b6ea2-2e19-4917-953a-eb8aca6d80cb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df0d545a88d84a8015bafb1061f16f446f63c97065dc88c869a33a24b8e3c22e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da2dbdc794693bb8da08c0fcc84531d65b468d135904b7055fb926fbd0cce95d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-10T18:48:03Z\\\",\\\"message\\\
":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0310 18:47:39.105265 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0310 18:47:39.107559 1 observer_polling.go:159] Starting file observer\\\\nI0310 18:47:39.140119 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0310 18:47:39.145777 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0310 18:48:03.686072 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0310 18:48:03.686208 1 cmd.go:179] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:48:03Z is after 
2026-02-23T05:33:13Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-10T18:47:38Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:48:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://73d88019bcd40296d2d693dfb1ce3bacd2e94ca10a114a00c75392df04099b33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://31bbed2d81ace88f31b763f3b4bed57db657bf9e78413b57b2f279b408e0b848\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:38Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b7156e2106372814a5e2b2816352ee308bb4fdffef5efe8da5bc39d3bc29398\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:47:37Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:49:57Z is after 2025-08-24T17:21:41Z" Mar 10 18:49:57 crc kubenswrapper[4861]: I0310 18:49:57.317234 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:49:57Z is after 2025-08-24T17:21:41Z" Mar 10 18:49:57 crc kubenswrapper[4861]: I0310 18:49:57.333671 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-b87lw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e474cdc9-b374-49a6-aece-afa19f8d5ee6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f06ce6a80200c39385f53697c2f3fbbec862effc04ddf9e0499d9c20761a54e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5t7pf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:49:21Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-b87lw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:49:57Z is after 2025-08-24T17:21:41Z" Mar 10 18:49:57 crc kubenswrapper[4861]: I0310 18:49:57.633956 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-s2l62_be820cd7-b3a7-4183-a408-67151247b6ee/ovnkube-controller/1.log" Mar 10 18:49:57 crc kubenswrapper[4861]: I0310 18:49:57.635515 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-s2l62_be820cd7-b3a7-4183-a408-67151247b6ee/ovnkube-controller/0.log" Mar 10 18:49:57 crc kubenswrapper[4861]: I0310 18:49:57.639549 4861 generic.go:334] "Generic (PLEG): container finished" podID="be820cd7-b3a7-4183-a408-67151247b6ee" containerID="e5ab01d21c22678fbaad3bec98312cca6e30cf9ca8d37da3636dda56304f2628" exitCode=1 Mar 10 18:49:57 crc kubenswrapper[4861]: I0310 18:49:57.639604 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-s2l62" event={"ID":"be820cd7-b3a7-4183-a408-67151247b6ee","Type":"ContainerDied","Data":"e5ab01d21c22678fbaad3bec98312cca6e30cf9ca8d37da3636dda56304f2628"} Mar 10 18:49:57 crc kubenswrapper[4861]: I0310 18:49:57.639651 4861 scope.go:117] "RemoveContainer" containerID="745f532b03dad251e58eac660fb1381069e051fdf99d2797df098ce130bb5ea0" Mar 10 18:49:57 crc kubenswrapper[4861]: I0310 18:49:57.641192 4861 scope.go:117] "RemoveContainer" containerID="e5ab01d21c22678fbaad3bec98312cca6e30cf9ca8d37da3636dda56304f2628" Mar 10 18:49:57 crc kubenswrapper[4861]: E0310 18:49:57.641439 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-s2l62_openshift-ovn-kubernetes(be820cd7-b3a7-4183-a408-67151247b6ee)\"" pod="openshift-ovn-kubernetes/ovnkube-node-s2l62" podUID="be820cd7-b3a7-4183-a408-67151247b6ee" Mar 10 18:49:57 crc kubenswrapper[4861]: I0310 18:49:57.665248 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"745fc7ce-a025-4236-a613-2d2c1661aa2e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://098109abecb73d2ad721a447bdfff2c9e9c2d24b969013d6108b0ac9d1d61e01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:39Z\\\"}
},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc83538f8ad05533765a1730072969d529579cd76ff33a77dc49b0bff15caad5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://02e4029a918a6ce3cba59cacb37169816c0ce5b734d46605756ede5985ccd686\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"c
ri-o://3cb1fb5ba7aa6d8d7184d85c870898f3b51dd4910da9346b24b7d63f952467d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3cb1fb5ba7aa6d8d7184d85c870898f3b51dd4910da9346b24b7d63f952467d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:47:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-10T18:47:38Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:47:37Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:49:57Z is after 2025-08-24T17:21:41Z" Mar 10 18:49:57 crc kubenswrapper[4861]: I0310 18:49:57.699594 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"952bf490-6587-4240-a832-feac082e7775\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf6fe1422055e59455ca71c1a22213f55312c80667526cf13dbf61f7ccea7c75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd7523ddf0ac38e3b59b767d7ef95f452f24c5914734d6f1ac0187f1bede5fcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b1d8f9a97293ae86fe0b8c9ab76600caf04291ec3ddd2e47a071805bef675da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://efd5039046658f4b595c747b6434cb0ef3befbf75874a6dab629cdbd6634524e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6234508294e7f87d3a8da0d3a1d8ecb96fffc3dbd5974e40a3bb7ceeee0d2a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34f62205737b2bb279cc7a0e2ffee213392c8135cca742b76e6f7ed44ddca754\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34f62205737b2bb279cc7a0e2ffee213392c8135cca742b76e6f7ed44ddca754\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-03-10T18:47:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-10T18:47:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://49cc08d4ed6e0fd87f4c957414e510f99511b9654a72324b707c165206c4979e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49cc08d4ed6e0fd87f4c957414e510f99511b9654a72324b707c165206c4979e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:47:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-10T18:47:39Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0995183765c106ba8369d3c05ac572596329eb1978876b770e327a430963ba9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0995183765c106ba8369d3c05ac572596329eb1978876b770e327a430963ba9b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:47:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-10T18:47:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:47:37Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:49:57Z is after 2025-08-24T17:21:41Z" Mar 10 18:49:57 crc kubenswrapper[4861]: I0310 18:49:57.719878 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:49:57Z is after 2025-08-24T17:21:41Z" Mar 10 18:49:57 crc kubenswrapper[4861]: I0310 18:49:57.740592 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qttbr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"771189c2-452d-4204-a0b7-abfe9ba62bd0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10b29575454354d2a034781fdd40e9972ddd08328eea1a3bb89377fcab4181cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tng72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11c0ae40f0d210a82350ba0ada7a3c9f35595826
a4e6f3d5619230238d00111b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tng72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:49:21Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qttbr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:49:57Z is after 2025-08-24T17:21:41Z" Mar 10 18:49:57 crc kubenswrapper[4861]: I0310 18:49:57.765280 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-j2s27" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"391f4bfa-b94c-4b25-8f06-a2f19f912194\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bbf0e311a00fea576e9f2c739cd2d87aedfc06aaef5cf0c68374bd67241888b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlddg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02cd3948c6ba0e290e32d64b2390da20510d66f20771b9bb0b98957e8900b124\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://02cd3948c6ba0e290e32d64b2390da20510d66f20771b9bb0b98957e8900b124\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:49:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-10T18:49:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlddg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c010603b11cab781dd3c14d84121d60b8115bb415df567ef1c267406b9e6988\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c010603b11cab781dd3c14d84121d60b8115bb415df567ef1c267406b9e6988\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:49:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-10T18:49:37Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlddg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1ba322572d96fe13a88815151416495decdd5f486eb30b3fd4345040c5b9d1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c1ba322572d96fe13a88815151416495decdd5f486eb30b3fd4345040c5b9d1d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:49:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-10T18:49:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlddg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f24e7
4b8fb41e5e765f1ba84b4f1efce28077537df837e4cc308e70b16139732\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f24e74b8fb41e5e765f1ba84b4f1efce28077537df837e4cc308e70b16139732\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:49:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-10T18:49:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlddg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f43980d19ca7b9caf61de8f3257dff536ff78a4cf530f6ec3cb7c91f1a3b5af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5f43980d19ca7b9caf61de8f3257dff536ff78a4cf530f6ec3cb7c91f1a3b5af\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:49:41Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-03-10T18:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlddg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2779899b3339467fdfac0db5ea7b8e4f306ca22205f2c44cc85a2df7fc2cc066\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2779899b3339467fdfac0db5ea7b8e4f306ca22205f2c44cc85a2df7fc2cc066\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:49:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-10T18:49:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlddg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:49:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-j2s27\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:49:57Z is after 2025-08-24T17:21:41Z" Mar 10 18:49:57 crc kubenswrapper[4861]: I0310 18:49:57.788976 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"046b6ea2-2e19-4917-953a-eb8aca6d80cb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df0d545a88d84a8015bafb1061f16f446f63c97065dc88c869a33a24b8e3c22e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da2dbdc794693bb8da08c0fcc84531d65b468d135904b7055fb926fbd0cce95d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-10T18:48:03Z\\\",\\\"message\\\
":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0310 18:47:39.105265 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0310 18:47:39.107559 1 observer_polling.go:159] Starting file observer\\\\nI0310 18:47:39.140119 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0310 18:47:39.145777 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0310 18:48:03.686072 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0310 18:48:03.686208 1 cmd.go:179] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:48:03Z is after 
2026-02-23T05:33:13Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-10T18:47:38Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:48:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://73d88019bcd40296d2d693dfb1ce3bacd2e94ca10a114a00c75392df04099b33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://31bbed2d81ace88f31b763f3b4bed57db657bf9e78413b57b2f279b408e0b848\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:38Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b7156e2106372814a5e2b2816352ee308bb4fdffef5efe8da5bc39d3bc29398\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:47:37Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:49:57Z is after 2025-08-24T17:21:41Z" Mar 10 18:49:57 crc kubenswrapper[4861]: I0310 18:49:57.811500 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:49:57Z is after 2025-08-24T17:21:41Z" Mar 10 18:49:57 crc kubenswrapper[4861]: I0310 18:49:57.828799 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-b87lw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e474cdc9-b374-49a6-aece-afa19f8d5ee6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f06ce6a80200c39385f53697c2f3fbbec862effc04ddf9e0499d9c20761a54e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5t7pf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:49:21Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-b87lw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:49:57Z is after 2025-08-24T17:21:41Z" Mar 10 18:49:57 crc kubenswrapper[4861]: I0310 18:49:57.848284 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-2rvxn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c06e51d0-e817-41ac-9d69-3ef2099f8ba8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7cjz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7cjz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:49:34Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-2rvxn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:49:57Z is after 2025-08-24T17:21:41Z" Mar 10 18:49:57 crc 
kubenswrapper[4861]: I0310 18:49:57.871204 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c65aa66-5db2-421b-ad46-0ff1e2b1cb22\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://52e87225b434b0800764a5c2306d8079c44bff105d02de78ab085b434f56031f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a8a6f58ea1d18
0f50a7ffde2b16f470901281a847bd85cc0bc8a62bbf9f8e70\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://484df0ad2e71b2faec0ed53537512b115e6d4ab7cd3212cd29309538bd013c51\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2dcc6ee4908d27fc63eb33149c6db9570b8524aab71a27fbb1e5c2fe4e97c52\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"te
rminated\\\":{\\\"containerID\\\":\\\"cri-o://1665ca49c2c451e187b70bfc13ce0034d2c07b92943b18e77ae09cd6e5505557\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-10T18:48:50Z\\\",\\\"message\\\":\\\"le observer\\\\nW0310 18:48:50.587139 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0310 18:48:50.587315 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0310 18:48:50.588407 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4176947974/tls.crt::/tmp/serving-cert-4176947974/tls.key\\\\\\\"\\\\nI0310 18:48:50.773986 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0310 18:48:50.776438 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0310 18:48:50.776455 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0310 18:48:50.776477 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0310 18:48:50.776482 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0310 18:48:50.783076 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0310 18:48:50.783088 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0310 18:48:50.783118 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0310 18:48:50.783134 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0310 18:48:50.783146 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0310 18:48:50.783157 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0310 18:48:50.783167 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0310 18:48:50.783173 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0310 18:48:50.784467 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-10T18:48:50Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44dde00a3ae562bbb5504d299475795cc38b22c2b6decba2ff15067bce7436df\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:39Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a14137bdfec242e37af20a572af2edea25fb1d8a1f9708f8d0193d1a7675b1bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"
,\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a14137bdfec242e37af20a572af2edea25fb1d8a1f9708f8d0193d1a7675b1bc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:47:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-10T18:47:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:47:37Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:49:57Z is after 2025-08-24T17:21:41Z" Mar 10 18:49:57 crc kubenswrapper[4861]: I0310 18:49:57.892417 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22ae4cf4d7c980d5db6180fb412216b50b2ddb8f25ea2a5fb1e034d47ddfafac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-03-10T18:49:57Z is after 2025-08-24T17:21:41Z" Mar 10 18:49:57 crc kubenswrapper[4861]: I0310 18:49:57.931380 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:49:57Z is after 2025-08-24T17:21:41Z" Mar 10 18:49:57 crc kubenswrapper[4861]: I0310 18:49:57.957915 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 10 18:49:57 crc kubenswrapper[4861]: I0310 18:49:57.958012 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2rvxn" Mar 10 18:49:57 crc kubenswrapper[4861]: I0310 18:49:57.958054 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 10 18:49:57 crc kubenswrapper[4861]: E0310 18:49:57.958070 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 10 18:49:57 crc kubenswrapper[4861]: I0310 18:49:57.957984 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 10 18:49:57 crc kubenswrapper[4861]: E0310 18:49:57.958163 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2rvxn" podUID="c06e51d0-e817-41ac-9d69-3ef2099f8ba8" Mar 10 18:49:57 crc kubenswrapper[4861]: E0310 18:49:57.958479 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 10 18:49:57 crc kubenswrapper[4861]: E0310 18:49:57.958371 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 10 18:49:57 crc kubenswrapper[4861]: I0310 18:49:57.970835 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-s2l62" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"be820cd7-b3a7-4183-a408-67151247b6ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:22Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22c9526135e4d6c3ef5cdf25b06f60556a876ace6c81593534be08cfd6a54cdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwtwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7bd3993ae4ddabc6c06a91127afc341760a07401ce3a409612824c0045bb6f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwtwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://edf854dc22368e4b8f76e0111b30784f22832fc69661b4db8d2c9c33aa553773\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwtwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e44e4837f8dba12dfb18ef2200a19f696222668eb9b10819d1c3b442a28f5e32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:47Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwtwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ae3cd6b9ef5ede85a70dd7bcf4eb260cc357bfceeb571904e68788eaba0709c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwtwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be4f9c8096f4981a65522e8ee451980e580153c1f5c65c736655fb94593dbd97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwtwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5ab01d21c22678fbaad3bec98312cca6e30cf9ca8d37da3636dda56304f2628\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://745f532b03dad251e58eac660fb1381069e051fdf99d2797df098ce130bb5ea0\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-10T18:49:54Z\\\",\\\"message\\\":\\\"7\\\\nI0310 18:49:54.900530 6839 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0310 18:49:54.900565 6839 reflector.go:311] Stopping reflector *v1.NetworkAttachmentDefinition (0s) from 
github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117\\\\nI0310 18:49:54.900595 6839 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0310 18:49:54.900689 6839 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0310 18:49:54.900789 6839 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0310 18:49:54.901040 6839 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0310 18:49:54.901287 6839 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-10T18:49:52Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e5ab01d21c22678fbaad3bec98312cca6e30cf9ca8d37da3636dda56304f2628\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-10T18:49:56Z\\\",\\\"message\\\":\\\"ient/informers/externalversions/factory.go:141\\\\nI0310 18:49:56.771330 6959 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0310 18:49:56.771547 6959 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0310 18:49:56.772300 6959 handler.go:190] Sending *v1.EgressIP event handler 8 for 
removal\\\\nI0310 18:49:56.772334 6959 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0310 18:49:56.772341 6959 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0310 18:49:56.772383 6959 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0310 18:49:56.772389 6959 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0310 18:49:56.772403 6959 factory.go:656] Stopping watch factory\\\\nI0310 18:49:56.772421 6959 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0310 18:49:56.772431 6959 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0310 18:49:56.772439 6959 handler.go:208] Removed *v1.Node event handler 2\\\\nI0310 18:49:56.772447 6959 handler.go:208] Removed *v1.Node event handler 7\\\\nI0310 18:49:56.772455 6959 handler.go:208] Removed *v1.Pod ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-10T18:49:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/v
ar/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwtwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2cf2a1cfc3438718bb50a53445443bc0251f2cc83f903fca1cccd1048c8f1c20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwtwj\\\",
\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09c620ba70d91a84cf6910e413d790eee8d6427dec9a39be3b706400fcaab656\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09c620ba70d91a84cf6910e413d790eee8d6427dec9a39be3b706400fcaab656\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:49:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-10T18:49:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwtwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:49:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-s2l62\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:49:57Z is after 2025-08-24T17:21:41Z" Mar 10 18:49:57 crc kubenswrapper[4861]: I0310 18:49:57.991379 4861 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3fabebda74f7f7605bc2f261bbf64acac1fe4ec65a267beef027413b454bee3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dbb86a65e5c4fda1cd29396eb9ad02739f5123519d07aef2eea3354f4a65571d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay
.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:49:57Z is after 2025-08-24T17:21:41Z" Mar 10 18:49:58 crc kubenswrapper[4861]: I0310 18:49:58.004602 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a665f7a5f04a80f43377d87b895e0d95adae296d12f0826c3d97323e145d602\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-03-10T18:49:58Z is after 2025-08-24T17:21:41Z" Mar 10 18:49:58 crc kubenswrapper[4861]: I0310 18:49:58.019394 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-6lblg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d1c251f4-6539-4aa1-8979-47e74495aca3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3736c6e9da4ea1e91d7046c054bf885a5319e339f205e71fde5cc9cfa5d630ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-r
elease\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2gvk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:49:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-6lblg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-03-10T18:49:58Z is after 2025-08-24T17:21:41Z" Mar 10 18:49:58 crc kubenswrapper[4861]: I0310 18:49:58.032846 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pzmsp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"66631e59-ce5c-44de-8a4d-37eb82acf997\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1293bf47ec5a041886f7065f624cec3b882bb62394a2c448e138d5f7bc3a1c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"hos
t\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pm4t5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:49:27Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pzmsp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:49:58Z is after 2025-08-24T17:21:41Z" Mar 10 18:49:58 crc kubenswrapper[4861]: I0310 18:49:58.045216 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rmmgv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e4302e6-1187-4371-8523-a96ec8032e74\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9439c03c3e58276508f7abe48482c4c69521049dbad5ea7a2e7e7379c8c58914\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vh2b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://77513d5a93f447545a8b2aef460c75d3d73de
32163650349268f203e1ade4b4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vh2b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:49:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-rmmgv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:49:58Z is after 2025-08-24T17:21:41Z" Mar 10 18:49:58 crc kubenswrapper[4861]: I0310 18:49:58.645377 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-s2l62_be820cd7-b3a7-4183-a408-67151247b6ee/ovnkube-controller/1.log" Mar 10 18:49:58 crc kubenswrapper[4861]: I0310 18:49:58.650338 4861 scope.go:117] "RemoveContainer" containerID="e5ab01d21c22678fbaad3bec98312cca6e30cf9ca8d37da3636dda56304f2628" Mar 10 18:49:58 crc kubenswrapper[4861]: 
E0310 18:49:58.650586 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-s2l62_openshift-ovn-kubernetes(be820cd7-b3a7-4183-a408-67151247b6ee)\"" pod="openshift-ovn-kubernetes/ovnkube-node-s2l62" podUID="be820cd7-b3a7-4183-a408-67151247b6ee" Mar 10 18:49:58 crc kubenswrapper[4861]: I0310 18:49:58.679184 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-s2l62" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"be820cd7-b3a7-4183-a408-67151247b6ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:22Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22c9526135e4d6c3ef5cdf25b06f60556a876ace6c81593534be08cfd6a54cdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwtwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7bd3993ae4ddabc6c06a91127afc341760a07401ce3a409612824c0045bb6f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwtwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://edf854dc22368e4b8f76e0111b30784f22832fc69661b4db8d2c9c33aa553773\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwtwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e44e4837f8dba12dfb18ef2200a19f696222668eb9b10819d1c3b442a28f5e32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:47Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwtwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ae3cd6b9ef5ede85a70dd7bcf4eb260cc357bfceeb571904e68788eaba0709c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwtwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be4f9c8096f4981a65522e8ee451980e580153c1f5c65c736655fb94593dbd97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwtwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5ab01d21c22678fbaad3bec98312cca6e30cf9ca8d37da3636dda56304f2628\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e5ab01d21c22678fbaad3bec98312cca6e30cf9ca8d37da3636dda56304f2628\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-10T18:49:56Z\\\",\\\"message\\\":\\\"ient/informers/externalversions/factory.go:141\\\\nI0310 18:49:56.771330 6959 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0310 18:49:56.771547 6959 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from 
github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0310 18:49:56.772300 6959 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0310 18:49:56.772334 6959 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0310 18:49:56.772341 6959 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0310 18:49:56.772383 6959 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0310 18:49:56.772389 6959 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0310 18:49:56.772403 6959 factory.go:656] Stopping watch factory\\\\nI0310 18:49:56.772421 6959 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0310 18:49:56.772431 6959 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0310 18:49:56.772439 6959 handler.go:208] Removed *v1.Node event handler 2\\\\nI0310 18:49:56.772447 6959 handler.go:208] Removed *v1.Node event handler 7\\\\nI0310 18:49:56.772455 6959 handler.go:208] Removed *v1.Pod ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-10T18:49:55Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-s2l62_openshift-ovn-kubernetes(be820cd7-b3a7-4183-a408-67151247b6ee)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwtwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2cf2a1cfc3438718bb50a53445443bc0251f2cc83f903fca1cccd1048c8f1c20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwtwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09c620ba70d91a84cf6910e413d790eee8d6427dec9a39be3b706400fcaab656\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09c620ba70d91a84cf
6910e413d790eee8d6427dec9a39be3b706400fcaab656\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:49:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-10T18:49:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwtwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:49:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-s2l62\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:49:58Z is after 2025-08-24T17:21:41Z" Mar 10 18:49:58 crc kubenswrapper[4861]: I0310 18:49:58.697359 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-2rvxn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c06e51d0-e817-41ac-9d69-3ef2099f8ba8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:34Z\\\",\\\"message\\\":\\\"containers 
with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7cjz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7cjz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:49:34Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-2rvxn\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:49:58Z is after 2025-08-24T17:21:41Z" Mar 10 18:49:58 crc kubenswrapper[4861]: I0310 18:49:58.717007 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c65aa66-5db2-421b-ad46-0ff1e2b1cb22\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://52e87225b434b0800764a5c2306d8079c44bff105d02de78ab085b434f56031f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:39Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a8a6f58ea1d180f50a7ffde2b16f470901281a847bd85cc0bc8a62bbf9f8e70\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://484df0ad2e71b2faec0ed53537512b115e6d4ab7cd3212cd29309538bd013c51\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2dcc6ee4908d27fc63eb33149c6db9570b8524aab71a27fbb1e5c2fe4e97c52\\\",\\\"image\\\":\\
\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1665ca49c2c451e187b70bfc13ce0034d2c07b92943b18e77ae09cd6e5505557\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-10T18:48:50Z\\\",\\\"message\\\":\\\"le observer\\\\nW0310 18:48:50.587139 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0310 18:48:50.587315 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0310 18:48:50.588407 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4176947974/tls.crt::/tmp/serving-cert-4176947974/tls.key\\\\\\\"\\\\nI0310 18:48:50.773986 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0310 18:48:50.776438 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0310 18:48:50.776455 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0310 18:48:50.776477 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0310 18:48:50.776482 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0310 18:48:50.783076 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0310 18:48:50.783088 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0310 18:48:50.783118 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0310 
18:48:50.783134 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0310 18:48:50.783146 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0310 18:48:50.783157 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0310 18:48:50.783167 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0310 18:48:50.783173 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0310 18:48:50.784467 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-10T18:48:50Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44dde00a3ae562bbb5504d299475795cc38b22c2b6decba2ff15067bce7436df\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:39Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a14137bdfec242e37af20a572af2edea25fb1d8a1f9708f8d
0193d1a7675b1bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a14137bdfec242e37af20a572af2edea25fb1d8a1f9708f8d0193d1a7675b1bc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:47:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-10T18:47:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:47:37Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:49:58Z is after 2025-08-24T17:21:41Z" Mar 10 18:49:58 crc kubenswrapper[4861]: I0310 18:49:58.733874 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22ae4cf4d7c980d5db6180fb412216b50b2ddb8f25ea2a5fb1e034d47ddfafac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-03-10T18:49:58Z is after 2025-08-24T17:21:41Z" Mar 10 18:49:58 crc kubenswrapper[4861]: I0310 18:49:58.753479 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:49:58Z is after 2025-08-24T17:21:41Z" Mar 10 18:49:58 crc kubenswrapper[4861]: I0310 18:49:58.771948 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rmmgv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e4302e6-1187-4371-8523-a96ec8032e74\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9439c03c3e58276508f7abe48482c4c69521049dbad5ea7a2e7e7379c8c58914\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vh2b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://77513d5a93f447545a8b2aef460c75d3d73de
32163650349268f203e1ade4b4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vh2b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:49:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-rmmgv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:49:58Z is after 2025-08-24T17:21:41Z" Mar 10 18:49:58 crc kubenswrapper[4861]: I0310 18:49:58.792631 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3fabebda74f7f7605bc2f261bbf64acac1fe4ec65a267beef027413b454bee3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dbb86a65e5c4fda1cd29396eb9ad02739f5123519d07aef2eea3354f4a65571d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:49:58Z is after 2025-08-24T17:21:41Z" Mar 10 18:49:58 crc kubenswrapper[4861]: I0310 18:49:58.812264 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a665f7a5f04a80f43377d87b895e0d95adae296d12f0826c3d97323e145d602\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-03-10T18:49:58Z is after 2025-08-24T17:21:41Z" Mar 10 18:49:58 crc kubenswrapper[4861]: I0310 18:49:58.832944 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-6lblg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d1c251f4-6539-4aa1-8979-47e74495aca3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3736c6e9da4ea1e91d7046c054bf885a5319e339f205e71fde5cc9cfa5d630ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-r
elease\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2gvk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:49:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-6lblg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-03-10T18:49:58Z is after 2025-08-24T17:21:41Z" Mar 10 18:49:58 crc kubenswrapper[4861]: I0310 18:49:58.847868 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pzmsp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"66631e59-ce5c-44de-8a4d-37eb82acf997\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1293bf47ec5a041886f7065f624cec3b882bb62394a2c448e138d5f7bc3a1c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"hos
t\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pm4t5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:49:27Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pzmsp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:49:58Z is after 2025-08-24T17:21:41Z" Mar 10 18:49:58 crc kubenswrapper[4861]: I0310 18:49:58.871316 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-j2s27" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"391f4bfa-b94c-4b25-8f06-a2f19f912194\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bbf0e311a00fea576e9f2c739cd2d87aedfc06aaef5cf0c68374bd67241888b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlddg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02cd3948c6ba0e290e32d64b2390da20510d66f20771b9bb0b98957e8900b124\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://02cd3948c6ba0e290e32d64b2390da20510d66f20771b9bb0b98957e8900b124\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:49:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-10T18:49:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlddg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c010603b11cab781dd3c14d84121d60b8115bb415df567ef1c267406b9e6988\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c010603b11cab781dd3c14d84121d60b8115bb415df567ef1c267406b9e6988\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:49:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-10T18:49:37Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlddg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1ba322572d96fe13a88815151416495decdd5f486eb30b3fd4345040c5b9d1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c1ba322572d96fe13a88815151416495decdd5f486eb30b3fd4345040c5b9d1d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:49:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-10T18:49:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlddg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f24e7
4b8fb41e5e765f1ba84b4f1efce28077537df837e4cc308e70b16139732\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f24e74b8fb41e5e765f1ba84b4f1efce28077537df837e4cc308e70b16139732\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:49:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-10T18:49:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlddg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f43980d19ca7b9caf61de8f3257dff536ff78a4cf530f6ec3cb7c91f1a3b5af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5f43980d19ca7b9caf61de8f3257dff536ff78a4cf530f6ec3cb7c91f1a3b5af\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:49:41Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-03-10T18:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlddg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2779899b3339467fdfac0db5ea7b8e4f306ca22205f2c44cc85a2df7fc2cc066\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2779899b3339467fdfac0db5ea7b8e4f306ca22205f2c44cc85a2df7fc2cc066\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:49:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-10T18:49:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlddg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:49:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-j2s27\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:49:58Z is after 2025-08-24T17:21:41Z" Mar 10 18:49:58 crc kubenswrapper[4861]: I0310 18:49:58.888603 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"745fc7ce-a025-4236-a613-2d2c1661aa2e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://098109abecb73d2ad721a447bdfff2c9e9c2d24b969013d6108b0ac9d1d61e01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mou
ntPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc83538f8ad05533765a1730072969d529579cd76ff33a77dc49b0bff15caad5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://02e4029a918a6ce3cba59cacb37169816c0ce5b734d46605756ede5985ccd686\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3cb1fb5ba7aa6d8d7184d85c8
70898f3b51dd4910da9346b24b7d63f952467d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3cb1fb5ba7aa6d8d7184d85c870898f3b51dd4910da9346b24b7d63f952467d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:47:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-10T18:47:38Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:47:37Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:49:58Z is after 2025-08-24T17:21:41Z" Mar 10 18:49:58 crc kubenswrapper[4861]: I0310 18:49:58.918967 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"952bf490-6587-4240-a832-feac082e7775\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf6fe1422055e59455ca71c1a22213f55312c80667526cf13dbf61f7ccea7c75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd7523ddf0ac38e3b59b767d7ef95f452f24c5914734d6f1ac0187f1bede5fcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b1d8f9a97293ae86fe0b8c9ab76600caf04291ec3ddd2e47a071805bef675da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://efd5039046658f4b595c747b6434cb0ef3befbf75874a6dab629cdbd6634524e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6234508294e7f87d3a8da0d3a1d8ecb96fffc3dbd5974e40a3bb7ceeee0d2a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34f62205737b2bb279cc7a0e2ffee213392c8135cca742b76e6f7ed44ddca754\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34f62205737b2bb279cc7a0e2ffee213392c8135cca742b76e6f7ed44ddca754\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-03-10T18:47:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-10T18:47:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://49cc08d4ed6e0fd87f4c957414e510f99511b9654a72324b707c165206c4979e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49cc08d4ed6e0fd87f4c957414e510f99511b9654a72324b707c165206c4979e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:47:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-10T18:47:39Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0995183765c106ba8369d3c05ac572596329eb1978876b770e327a430963ba9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0995183765c106ba8369d3c05ac572596329eb1978876b770e327a430963ba9b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:47:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-10T18:47:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:47:37Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:49:58Z is after 2025-08-24T17:21:41Z" Mar 10 18:49:58 crc kubenswrapper[4861]: I0310 18:49:58.939315 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:49:58Z is after 2025-08-24T17:21:41Z" Mar 10 18:49:58 crc kubenswrapper[4861]: I0310 18:49:58.955293 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qttbr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"771189c2-452d-4204-a0b7-abfe9ba62bd0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10b29575454354d2a034781fdd40e9972ddd08328eea1a3bb89377fcab4181cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tng72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11c0ae40f0d210a82350ba0ada7a3c9f35595826
a4e6f3d5619230238d00111b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tng72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:49:21Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qttbr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:49:58Z is after 2025-08-24T17:21:41Z" Mar 10 18:49:58 crc kubenswrapper[4861]: I0310 18:49:58.975560 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"046b6ea2-2e19-4917-953a-eb8aca6d80cb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df0d545a88d84a8015bafb1061f16f446f63c97065dc88c869a33a24b8e3c22e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da2dbdc794693bb8da08c0fcc84531d65b468d135904b7055fb926fbd0cce95d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-10T18:48:03Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager 
-v=2\\\\nI0310 18:47:39.105265 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0310 18:47:39.107559 1 observer_polling.go:159] Starting file observer\\\\nI0310 18:47:39.140119 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0310 18:47:39.145777 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0310 18:48:03.686072 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0310 18:48:03.686208 1 cmd.go:179] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:48:03Z is after 
2026-02-23T05:33:13Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-10T18:47:38Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:48:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://73d88019bcd40296d2d693dfb1ce3bacd2e94ca10a114a00c75392df04099b33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://31bbed2d81ace88f31b763f3b4bed57db657bf9e78413b57b2f279b408e0b848\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:38Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b7156e2106372814a5e2b2816352ee308bb4fdffef5efe8da5bc39d3bc29398\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:47:37Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:49:58Z is after 2025-08-24T17:21:41Z" Mar 10 18:49:58 crc kubenswrapper[4861]: I0310 18:49:58.994439 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:49:58Z is after 2025-08-24T17:21:41Z" Mar 10 18:49:59 crc kubenswrapper[4861]: I0310 18:49:59.009509 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-b87lw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e474cdc9-b374-49a6-aece-afa19f8d5ee6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f06ce6a80200c39385f53697c2f3fbbec862effc04ddf9e0499d9c20761a54e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5t7pf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:49:21Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-b87lw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:49:59Z is after 2025-08-24T17:21:41Z" Mar 10 18:49:59 crc kubenswrapper[4861]: I0310 18:49:59.857420 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 10 18:49:59 crc kubenswrapper[4861]: I0310 18:49:59.857588 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 10 18:49:59 crc kubenswrapper[4861]: E0310 18:49:59.857769 4861 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Mar 10 18:49:59 crc kubenswrapper[4861]: E0310 18:49:59.857845 4861 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-03-10 18:51:03.857822531 +0000 UTC m=+207.621258531 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Mar 10 18:49:59 crc kubenswrapper[4861]: E0310 18:49:59.858082 4861 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-10 18:51:03.858039595 +0000 UTC m=+207.621475595 (durationBeforeRetry 1m4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 10 18:49:59 crc kubenswrapper[4861]: I0310 18:49:59.958005 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2rvxn" Mar 10 18:49:59 crc kubenswrapper[4861]: I0310 18:49:59.958102 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 10 18:49:59 crc kubenswrapper[4861]: I0310 18:49:59.958226 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 10 18:49:59 crc kubenswrapper[4861]: E0310 18:49:59.958232 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2rvxn" podUID="c06e51d0-e817-41ac-9d69-3ef2099f8ba8" Mar 10 18:49:59 crc kubenswrapper[4861]: E0310 18:49:59.958353 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 10 18:49:59 crc kubenswrapper[4861]: I0310 18:49:59.958415 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 10 18:49:59 crc kubenswrapper[4861]: I0310 18:49:59.958449 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 10 18:49:59 crc kubenswrapper[4861]: E0310 18:49:59.958496 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 10 18:49:59 crc kubenswrapper[4861]: I0310 18:49:59.958567 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 10 18:49:59 crc kubenswrapper[4861]: I0310 18:49:59.958628 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 10 18:49:59 crc kubenswrapper[4861]: E0310 18:49:59.958704 4861 secret.go:188] Couldn't get secret 
openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Mar 10 18:49:59 crc kubenswrapper[4861]: E0310 18:49:59.958811 4861 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-03-10 18:51:03.95879067 +0000 UTC m=+207.722226670 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Mar 10 18:49:59 crc kubenswrapper[4861]: E0310 18:49:59.958566 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 10 18:49:59 crc kubenswrapper[4861]: E0310 18:49:59.958852 4861 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 10 18:49:59 crc kubenswrapper[4861]: E0310 18:49:59.958879 4861 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 10 18:49:59 crc kubenswrapper[4861]: E0310 18:49:59.958891 4861 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 10 18:49:59 crc kubenswrapper[4861]: E0310 18:49:59.958933 4861 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 10 18:49:59 crc kubenswrapper[4861]: E0310 18:49:59.958899 4861 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 10 18:49:59 crc kubenswrapper[4861]: E0310 18:49:59.958955 4861 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 10 18:49:59 crc kubenswrapper[4861]: E0310 18:49:59.959025 4861 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-03-10 18:51:03.959002334 +0000 UTC m=+207.722438334 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 10 18:49:59 crc kubenswrapper[4861]: E0310 18:49:59.959054 4861 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-03-10 18:51:03.959040615 +0000 UTC m=+207.722476615 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 10 18:50:01 crc kubenswrapper[4861]: I0310 18:50:01.957651 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 10 18:50:01 crc kubenswrapper[4861]: I0310 18:50:01.957745 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 10 18:50:01 crc kubenswrapper[4861]: I0310 18:50:01.957757 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2rvxn" Mar 10 18:50:01 crc kubenswrapper[4861]: I0310 18:50:01.957662 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 10 18:50:01 crc kubenswrapper[4861]: E0310 18:50:01.957899 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 10 18:50:01 crc kubenswrapper[4861]: E0310 18:50:01.958039 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 10 18:50:01 crc kubenswrapper[4861]: E0310 18:50:01.958298 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 10 18:50:01 crc kubenswrapper[4861]: E0310 18:50:01.958507 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2rvxn" podUID="c06e51d0-e817-41ac-9d69-3ef2099f8ba8" Mar 10 18:50:02 crc kubenswrapper[4861]: E0310 18:50:02.119837 4861 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Mar 10 18:50:03 crc kubenswrapper[4861]: I0310 18:50:03.957592 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2rvxn" Mar 10 18:50:03 crc kubenswrapper[4861]: I0310 18:50:03.957647 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 10 18:50:03 crc kubenswrapper[4861]: I0310 18:50:03.957652 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 10 18:50:03 crc kubenswrapper[4861]: I0310 18:50:03.957606 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 10 18:50:03 crc kubenswrapper[4861]: E0310 18:50:03.957832 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2rvxn" podUID="c06e51d0-e817-41ac-9d69-3ef2099f8ba8" Mar 10 18:50:03 crc kubenswrapper[4861]: E0310 18:50:03.957971 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 10 18:50:03 crc kubenswrapper[4861]: E0310 18:50:03.958088 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 10 18:50:03 crc kubenswrapper[4861]: E0310 18:50:03.958172 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 10 18:50:05 crc kubenswrapper[4861]: I0310 18:50:05.957083 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2rvxn" Mar 10 18:50:05 crc kubenswrapper[4861]: I0310 18:50:05.957132 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 10 18:50:05 crc kubenswrapper[4861]: I0310 18:50:05.957104 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 10 18:50:05 crc kubenswrapper[4861]: I0310 18:50:05.957079 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 10 18:50:05 crc kubenswrapper[4861]: E0310 18:50:05.957258 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2rvxn" podUID="c06e51d0-e817-41ac-9d69-3ef2099f8ba8" Mar 10 18:50:05 crc kubenswrapper[4861]: E0310 18:50:05.957381 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 10 18:50:05 crc kubenswrapper[4861]: E0310 18:50:05.957476 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 10 18:50:05 crc kubenswrapper[4861]: E0310 18:50:05.957550 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 10 18:50:06 crc kubenswrapper[4861]: I0310 18:50:06.240676 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:50:06 crc kubenswrapper[4861]: I0310 18:50:06.240785 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:50:06 crc kubenswrapper[4861]: I0310 18:50:06.240810 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:50:06 crc kubenswrapper[4861]: I0310 18:50:06.240838 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 10 18:50:06 crc kubenswrapper[4861]: I0310 18:50:06.240863 4861 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:50:06Z","lastTransitionTime":"2026-03-10T18:50:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 10 18:50:06 crc kubenswrapper[4861]: E0310 18:50:06.262528 4861 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-10T18:50:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-10T18:50:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-10T18:50:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-10T18:50:06Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-10T18:50:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-10T18:50:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-10T18:50:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-10T18:50:06Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb617
3ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"reg
istry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"19532032-9073-404f-bda8-c4343aa30670\\\",\\\"systemUUID\\\":\\\"b4ef8d49-23f5-4cae-bbac-08586c607b9d\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:50:06Z is after 2025-08-24T17:21:41Z" Mar 10 18:50:06 crc kubenswrapper[4861]: I0310 18:50:06.267439 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:50:06 crc kubenswrapper[4861]: I0310 18:50:06.267488 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:50:06 crc kubenswrapper[4861]: I0310 18:50:06.267505 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:50:06 crc kubenswrapper[4861]: I0310 18:50:06.267529 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 10 18:50:06 crc kubenswrapper[4861]: I0310 18:50:06.267546 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:50:06Z","lastTransitionTime":"2026-03-10T18:50:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 10 18:50:06 crc kubenswrapper[4861]: E0310 18:50:06.287365 4861 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-10T18:50:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-10T18:50:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-10T18:50:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-10T18:50:06Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-10T18:50:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-10T18:50:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-10T18:50:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-10T18:50:06Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"19532032-9073-404f-bda8-c4343aa30670\\\",\\\"systemUUID\\\":\\\"b4ef8d49-23f5-4cae-bbac-08586c607b9d\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:50:06Z is after 2025-08-24T17:21:41Z" Mar 10 18:50:06 crc kubenswrapper[4861]: I0310 18:50:06.291575 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:50:06 crc kubenswrapper[4861]: I0310 18:50:06.291636 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:50:06 crc kubenswrapper[4861]: I0310 18:50:06.291660 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:50:06 crc kubenswrapper[4861]: I0310 18:50:06.291688 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 10 18:50:06 crc kubenswrapper[4861]: I0310 18:50:06.291740 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:50:06Z","lastTransitionTime":"2026-03-10T18:50:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 10 18:50:06 crc kubenswrapper[4861]: E0310 18:50:06.314497 4861 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-10T18:50:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-10T18:50:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-10T18:50:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-10T18:50:06Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-10T18:50:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-10T18:50:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-10T18:50:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-10T18:50:06Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"19532032-9073-404f-bda8-c4343aa30670\\\",\\\"systemUUID\\\":\\\"b4ef8d49-23f5-4cae-bbac-08586c607b9d\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:50:06Z is after 2025-08-24T17:21:41Z" Mar 10 18:50:06 crc kubenswrapper[4861]: I0310 18:50:06.319931 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:50:06 crc kubenswrapper[4861]: I0310 18:50:06.319984 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:50:06 crc kubenswrapper[4861]: I0310 18:50:06.320001 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:50:06 crc kubenswrapper[4861]: I0310 18:50:06.320025 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 10 18:50:06 crc kubenswrapper[4861]: I0310 18:50:06.320042 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:50:06Z","lastTransitionTime":"2026-03-10T18:50:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 10 18:50:06 crc kubenswrapper[4861]: I0310 18:50:06.333330 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c06e51d0-e817-41ac-9d69-3ef2099f8ba8-metrics-certs\") pod \"network-metrics-daemon-2rvxn\" (UID: \"c06e51d0-e817-41ac-9d69-3ef2099f8ba8\") " pod="openshift-multus/network-metrics-daemon-2rvxn" Mar 10 18:50:06 crc kubenswrapper[4861]: E0310 18:50:06.333684 4861 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Mar 10 18:50:06 crc kubenswrapper[4861]: E0310 18:50:06.333952 4861 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c06e51d0-e817-41ac-9d69-3ef2099f8ba8-metrics-certs podName:c06e51d0-e817-41ac-9d69-3ef2099f8ba8 nodeName:}" failed. No retries permitted until 2026-03-10 18:50:38.333905524 +0000 UTC m=+182.097341614 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/c06e51d0-e817-41ac-9d69-3ef2099f8ba8-metrics-certs") pod "network-metrics-daemon-2rvxn" (UID: "c06e51d0-e817-41ac-9d69-3ef2099f8ba8") : object "openshift-multus"/"metrics-daemon-secret" not registered Mar 10 18:50:06 crc kubenswrapper[4861]: E0310 18:50:06.342093 4861 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-10T18:50:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-10T18:50:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-10T18:50:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-10T18:50:06Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-10T18:50:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-10T18:50:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-10T18:50:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-10T18:50:06Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb4
9c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\"
:[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d4
6c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\
\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"19532032-9073-404f-bda8-c
4343aa30670\\\",\\\"systemUUID\\\":\\\"b4ef8d49-23f5-4cae-bbac-08586c607b9d\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:50:06Z is after 2025-08-24T17:21:41Z" Mar 10 18:50:06 crc kubenswrapper[4861]: I0310 18:50:06.348543 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:50:06 crc kubenswrapper[4861]: I0310 18:50:06.348600 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:50:06 crc kubenswrapper[4861]: I0310 18:50:06.348618 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:50:06 crc kubenswrapper[4861]: I0310 18:50:06.348647 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 10 18:50:06 crc kubenswrapper[4861]: I0310 18:50:06.348667 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:50:06Z","lastTransitionTime":"2026-03-10T18:50:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 10 18:50:06 crc kubenswrapper[4861]: E0310 18:50:06.369020 4861 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-10T18:50:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-10T18:50:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-10T18:50:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-10T18:50:06Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-10T18:50:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-10T18:50:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-10T18:50:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-10T18:50:06Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"19532032-9073-404f-bda8-c4343aa30670\\\",\\\"systemUUID\\\":\\\"b4ef8d49-23f5-4cae-bbac-08586c607b9d\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:50:06Z is after 2025-08-24T17:21:41Z" Mar 10 18:50:06 crc kubenswrapper[4861]: E0310 18:50:06.369327 4861 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Mar 10 18:50:06 crc kubenswrapper[4861]: I0310 18:50:06.979669 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Mar 10 18:50:06 crc kubenswrapper[4861]: I0310 18:50:06.992880 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-s2l62" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"be820cd7-b3a7-4183-a408-67151247b6ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:22Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22c9526135e4d6c3ef5cdf25b06f60556a876ace6c81593534be08cfd6a54cdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwtwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7bd3993ae4ddabc6c06a91127afc341760a07401ce3a409612824c0045bb6f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwtwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://edf854dc22368e4b8f76e0111b30784f22832fc69661b4db8d2c9c33aa553773\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwtwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e44e4837f8dba12dfb18ef2200a19f696222668eb9b10819d1c3b442a28f5e32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:47Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwtwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ae3cd6b9ef5ede85a70dd7bcf4eb260cc357bfceeb571904e68788eaba0709c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwtwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be4f9c8096f4981a65522e8ee451980e580153c1f5c65c736655fb94593dbd97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwtwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5ab01d21c22678fbaad3bec98312cca6e30cf9ca8d37da3636dda56304f2628\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e5ab01d21c22678fbaad3bec98312cca6e30cf9ca8d37da3636dda56304f2628\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-10T18:49:56Z\\\",\\\"message\\\":\\\"ient/informers/externalversions/factory.go:141\\\\nI0310 18:49:56.771330 6959 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0310 18:49:56.771547 6959 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from 
github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0310 18:49:56.772300 6959 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0310 18:49:56.772334 6959 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0310 18:49:56.772341 6959 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0310 18:49:56.772383 6959 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0310 18:49:56.772389 6959 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0310 18:49:56.772403 6959 factory.go:656] Stopping watch factory\\\\nI0310 18:49:56.772421 6959 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0310 18:49:56.772431 6959 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0310 18:49:56.772439 6959 handler.go:208] Removed *v1.Node event handler 2\\\\nI0310 18:49:56.772447 6959 handler.go:208] Removed *v1.Node event handler 7\\\\nI0310 18:49:56.772455 6959 handler.go:208] Removed *v1.Pod ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-10T18:49:55Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-s2l62_openshift-ovn-kubernetes(be820cd7-b3a7-4183-a408-67151247b6ee)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwtwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2cf2a1cfc3438718bb50a53445443bc0251f2cc83f903fca1cccd1048c8f1c20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwtwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09c620ba70d91a84cf6910e413d790eee8d6427dec9a39be3b706400fcaab656\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09c620ba70d91a84cf
6910e413d790eee8d6427dec9a39be3b706400fcaab656\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:49:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-10T18:49:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwtwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:49:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-s2l62\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:50:06Z is after 2025-08-24T17:21:41Z" Mar 10 18:50:07 crc kubenswrapper[4861]: I0310 18:50:07.008106 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-2rvxn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c06e51d0-e817-41ac-9d69-3ef2099f8ba8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:34Z\\\",\\\"message\\\":\\\"containers 
with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7cjz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7cjz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:49:34Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-2rvxn\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:50:07Z is after 2025-08-24T17:21:41Z" Mar 10 18:50:07 crc kubenswrapper[4861]: I0310 18:50:07.031005 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c65aa66-5db2-421b-ad46-0ff1e2b1cb22\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://52e87225b434b0800764a5c2306d8079c44bff105d02de78ab085b434f56031f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:39Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a8a6f58ea1d180f50a7ffde2b16f470901281a847bd85cc0bc8a62bbf9f8e70\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://484df0ad2e71b2faec0ed53537512b115e6d4ab7cd3212cd29309538bd013c51\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2dcc6ee4908d27fc63eb33149c6db9570b8524aab71a27fbb1e5c2fe4e97c52\\\",\\\"image\\\":\\
\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1665ca49c2c451e187b70bfc13ce0034d2c07b92943b18e77ae09cd6e5505557\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-10T18:48:50Z\\\",\\\"message\\\":\\\"le observer\\\\nW0310 18:48:50.587139 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0310 18:48:50.587315 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0310 18:48:50.588407 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4176947974/tls.crt::/tmp/serving-cert-4176947974/tls.key\\\\\\\"\\\\nI0310 18:48:50.773986 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0310 18:48:50.776438 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0310 18:48:50.776455 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0310 18:48:50.776477 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0310 18:48:50.776482 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0310 18:48:50.783076 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0310 18:48:50.783088 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0310 18:48:50.783118 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0310 
18:48:50.783134 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0310 18:48:50.783146 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0310 18:48:50.783157 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0310 18:48:50.783167 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0310 18:48:50.783173 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0310 18:48:50.784467 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-10T18:48:50Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44dde00a3ae562bbb5504d299475795cc38b22c2b6decba2ff15067bce7436df\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:39Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a14137bdfec242e37af20a572af2edea25fb1d8a1f9708f8d
0193d1a7675b1bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a14137bdfec242e37af20a572af2edea25fb1d8a1f9708f8d0193d1a7675b1bc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:47:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-10T18:47:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:47:37Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:50:07Z is after 2025-08-24T17:21:41Z" Mar 10 18:50:07 crc kubenswrapper[4861]: I0310 18:50:07.055930 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22ae4cf4d7c980d5db6180fb412216b50b2ddb8f25ea2a5fb1e034d47ddfafac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-03-10T18:50:07Z is after 2025-08-24T17:21:41Z" Mar 10 18:50:07 crc kubenswrapper[4861]: I0310 18:50:07.074225 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:50:07Z is after 2025-08-24T17:21:41Z" Mar 10 18:50:07 crc kubenswrapper[4861]: I0310 18:50:07.091969 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rmmgv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e4302e6-1187-4371-8523-a96ec8032e74\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9439c03c3e58276508f7abe48482c4c69521049dbad5ea7a2e7e7379c8c58914\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vh2b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://77513d5a93f447545a8b2aef460c75d3d73de
32163650349268f203e1ade4b4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vh2b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:49:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-rmmgv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:50:07Z is after 2025-08-24T17:21:41Z" Mar 10 18:50:07 crc kubenswrapper[4861]: I0310 18:50:07.111450 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3fabebda74f7f7605bc2f261bbf64acac1fe4ec65a267beef027413b454bee3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dbb86a65e5c4fda1cd29396eb9ad02739f5123519d07aef2eea3354f4a65571d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:50:07Z is after 2025-08-24T17:21:41Z" Mar 10 18:50:07 crc kubenswrapper[4861]: E0310 18:50:07.121073 4861 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Mar 10 18:50:07 crc kubenswrapper[4861]: I0310 18:50:07.130374 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a665f7a5f04a80f43377d87b895e0d95adae296d12f0826c3d97323e145d602\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:50:07Z is after 2025-08-24T17:21:41Z" Mar 10 18:50:07 crc kubenswrapper[4861]: I0310 18:50:07.150399 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-6lblg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d1c251f4-6539-4aa1-8979-47e74495aca3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3736c6e9da4ea1e91d7046c054bf885a5319e339f205e71fde5cc9cfa5d630ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"starte
dAt\\\":\\\"2026-03-10T18:49:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2gvk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:49:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-6lblg\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:50:07Z is after 2025-08-24T17:21:41Z" Mar 10 18:50:07 crc kubenswrapper[4861]: I0310 18:50:07.165991 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pzmsp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"66631e59-ce5c-44de-8a4d-37eb82acf997\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1293bf47ec5a041886f7065f624cec3b882bb62394a2c448e138d5f7bc3a1c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"s
tartedAt\\\":\\\"2026-03-10T18:49:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pm4t5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:49:27Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pzmsp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:50:07Z is after 2025-08-24T17:21:41Z" Mar 10 18:50:07 crc kubenswrapper[4861]: I0310 18:50:07.189355 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-j2s27" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"391f4bfa-b94c-4b25-8f06-a2f19f912194\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bbf0e311a00fea576e9f2c739cd2d87aedfc06aaef5cf0c68374bd67241888b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlddg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02cd3948c6ba0e290e32d64b2390da20510d66f20771b9bb0b98957e8900b124\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://02cd3948c6ba0e290e32d64b2390da20510d66f20771b9bb0b98957e8900b124\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:49:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-10T18:49:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlddg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c010603b11cab781dd3c14d84121d60b8115bb415df567ef1c267406b9e6988\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c010603b11cab781dd3c14d84121d60b8115bb415df567ef1c267406b9e6988\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:49:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-10T18:49:37Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlddg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1ba322572d96fe13a88815151416495decdd5f486eb30b3fd4345040c5b9d1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c1ba322572d96fe13a88815151416495decdd5f486eb30b3fd4345040c5b9d1d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:49:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-10T18:49:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlddg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f24e7
4b8fb41e5e765f1ba84b4f1efce28077537df837e4cc308e70b16139732\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f24e74b8fb41e5e765f1ba84b4f1efce28077537df837e4cc308e70b16139732\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:49:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-10T18:49:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlddg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f43980d19ca7b9caf61de8f3257dff536ff78a4cf530f6ec3cb7c91f1a3b5af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5f43980d19ca7b9caf61de8f3257dff536ff78a4cf530f6ec3cb7c91f1a3b5af\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:49:41Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-03-10T18:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlddg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2779899b3339467fdfac0db5ea7b8e4f306ca22205f2c44cc85a2df7fc2cc066\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2779899b3339467fdfac0db5ea7b8e4f306ca22205f2c44cc85a2df7fc2cc066\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:49:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-10T18:49:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlddg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:49:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-j2s27\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:50:07Z is after 2025-08-24T17:21:41Z" Mar 10 18:50:07 crc kubenswrapper[4861]: I0310 18:50:07.206939 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"745fc7ce-a025-4236-a613-2d2c1661aa2e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://098109abecb73d2ad721a447bdfff2c9e9c2d24b969013d6108b0ac9d1d61e01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mou
ntPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc83538f8ad05533765a1730072969d529579cd76ff33a77dc49b0bff15caad5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://02e4029a918a6ce3cba59cacb37169816c0ce5b734d46605756ede5985ccd686\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3cb1fb5ba7aa6d8d7184d85c8
70898f3b51dd4910da9346b24b7d63f952467d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3cb1fb5ba7aa6d8d7184d85c870898f3b51dd4910da9346b24b7d63f952467d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:47:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-10T18:47:38Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:47:37Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:50:07Z is after 2025-08-24T17:21:41Z" Mar 10 18:50:07 crc kubenswrapper[4861]: I0310 18:50:07.227586 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"952bf490-6587-4240-a832-feac082e7775\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf6fe1422055e59455ca71c1a22213f55312c80667526cf13dbf61f7ccea7c75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd7523ddf0ac38e3b59b767d7ef95f452f24c5914734d6f1ac0187f1bede5fcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b1d8f9a97293ae86fe0b8c9ab76600caf04291ec3ddd2e47a071805bef675da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://efd5039046658f4b595c747b6434cb0ef3befbf75874a6dab629cdbd6634524e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6234508294e7f87d3a8da0d3a1d8ecb96fffc3dbd5974e40a3bb7ceeee0d2a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34f62205737b2bb279cc7a0e2ffee213392c8135cca742b76e6f7ed44ddca754\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34f62205737b2bb279cc7a0e2ffee213392c8135cca742b76e6f7ed44ddca754\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-03-10T18:47:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-10T18:47:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://49cc08d4ed6e0fd87f4c957414e510f99511b9654a72324b707c165206c4979e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49cc08d4ed6e0fd87f4c957414e510f99511b9654a72324b707c165206c4979e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:47:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-10T18:47:39Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0995183765c106ba8369d3c05ac572596329eb1978876b770e327a430963ba9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0995183765c106ba8369d3c05ac572596329eb1978876b770e327a430963ba9b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:47:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-10T18:47:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:47:37Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:50:07Z is after 2025-08-24T17:21:41Z" Mar 10 18:50:07 crc kubenswrapper[4861]: I0310 18:50:07.243111 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:50:07Z is after 2025-08-24T17:21:41Z" Mar 10 18:50:07 crc kubenswrapper[4861]: I0310 18:50:07.259363 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qttbr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"771189c2-452d-4204-a0b7-abfe9ba62bd0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10b29575454354d2a034781fdd40e9972ddd08328eea1a3bb89377fcab4181cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tng72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11c0ae40f0d210a82350ba0ada7a3c9f35595826
a4e6f3d5619230238d00111b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tng72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:49:21Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qttbr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:50:07Z is after 2025-08-24T17:21:41Z" Mar 10 18:50:07 crc kubenswrapper[4861]: I0310 18:50:07.279300 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"046b6ea2-2e19-4917-953a-eb8aca6d80cb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df0d545a88d84a8015bafb1061f16f446f63c97065dc88c869a33a24b8e3c22e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da2dbdc794693bb8da08c0fcc84531d65b468d135904b7055fb926fbd0cce95d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-10T18:48:03Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager 
-v=2\\\\nI0310 18:47:39.105265 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0310 18:47:39.107559 1 observer_polling.go:159] Starting file observer\\\\nI0310 18:47:39.140119 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0310 18:47:39.145777 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0310 18:48:03.686072 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0310 18:48:03.686208 1 cmd.go:179] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:48:03Z is after 
2026-02-23T05:33:13Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-10T18:47:38Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:48:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://73d88019bcd40296d2d693dfb1ce3bacd2e94ca10a114a00c75392df04099b33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://31bbed2d81ace88f31b763f3b4bed57db657bf9e78413b57b2f279b408e0b848\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:38Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b7156e2106372814a5e2b2816352ee308bb4fdffef5efe8da5bc39d3bc29398\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:47:37Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:50:07Z is after 2025-08-24T17:21:41Z" Mar 10 18:50:07 crc kubenswrapper[4861]: I0310 18:50:07.297458 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:50:07Z is after 2025-08-24T17:21:41Z" Mar 10 18:50:07 crc kubenswrapper[4861]: I0310 18:50:07.312086 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-b87lw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e474cdc9-b374-49a6-aece-afa19f8d5ee6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f06ce6a80200c39385f53697c2f3fbbec862effc04ddf9e0499d9c20761a54e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5t7pf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:49:21Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-b87lw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:50:07Z is after 2025-08-24T17:21:41Z" Mar 10 18:50:07 crc kubenswrapper[4861]: I0310 18:50:07.957598 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 10 18:50:07 crc kubenswrapper[4861]: I0310 18:50:07.957668 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 10 18:50:07 crc kubenswrapper[4861]: E0310 18:50:07.958431 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 10 18:50:07 crc kubenswrapper[4861]: I0310 18:50:07.957769 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2rvxn" Mar 10 18:50:07 crc kubenswrapper[4861]: E0310 18:50:07.958498 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 10 18:50:07 crc kubenswrapper[4861]: I0310 18:50:07.957676 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 10 18:50:07 crc kubenswrapper[4861]: E0310 18:50:07.958583 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2rvxn" podUID="c06e51d0-e817-41ac-9d69-3ef2099f8ba8" Mar 10 18:50:07 crc kubenswrapper[4861]: E0310 18:50:07.958703 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 10 18:50:09 crc kubenswrapper[4861]: I0310 18:50:09.957470 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 10 18:50:09 crc kubenswrapper[4861]: I0310 18:50:09.957487 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2rvxn" Mar 10 18:50:09 crc kubenswrapper[4861]: I0310 18:50:09.957564 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 10 18:50:09 crc kubenswrapper[4861]: I0310 18:50:09.957488 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 10 18:50:09 crc kubenswrapper[4861]: E0310 18:50:09.957634 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 10 18:50:09 crc kubenswrapper[4861]: E0310 18:50:09.957775 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 10 18:50:09 crc kubenswrapper[4861]: E0310 18:50:09.957864 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2rvxn" podUID="c06e51d0-e817-41ac-9d69-3ef2099f8ba8" Mar 10 18:50:09 crc kubenswrapper[4861]: E0310 18:50:09.957993 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 10 18:50:11 crc kubenswrapper[4861]: I0310 18:50:11.957917 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 10 18:50:11 crc kubenswrapper[4861]: I0310 18:50:11.957986 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2rvxn" Mar 10 18:50:11 crc kubenswrapper[4861]: E0310 18:50:11.959088 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 10 18:50:11 crc kubenswrapper[4861]: I0310 18:50:11.958029 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 10 18:50:11 crc kubenswrapper[4861]: E0310 18:50:11.959147 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2rvxn" podUID="c06e51d0-e817-41ac-9d69-3ef2099f8ba8" Mar 10 18:50:11 crc kubenswrapper[4861]: I0310 18:50:11.958017 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 10 18:50:11 crc kubenswrapper[4861]: I0310 18:50:11.959217 4861 scope.go:117] "RemoveContainer" containerID="e5ab01d21c22678fbaad3bec98312cca6e30cf9ca8d37da3636dda56304f2628" Mar 10 18:50:11 crc kubenswrapper[4861]: E0310 18:50:11.959246 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 10 18:50:11 crc kubenswrapper[4861]: E0310 18:50:11.959403 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 10 18:50:12 crc kubenswrapper[4861]: E0310 18:50:12.122394 4861 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Mar 10 18:50:12 crc kubenswrapper[4861]: I0310 18:50:12.709603 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-s2l62_be820cd7-b3a7-4183-a408-67151247b6ee/ovnkube-controller/1.log" Mar 10 18:50:12 crc kubenswrapper[4861]: I0310 18:50:12.712613 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-s2l62" event={"ID":"be820cd7-b3a7-4183-a408-67151247b6ee","Type":"ContainerStarted","Data":"c80b530ddd55aac69b923f9a8fa0028c2220243566a0d8e111ccc746aab692c8"} Mar 10 18:50:12 crc kubenswrapper[4861]: I0310 18:50:12.713903 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-s2l62" Mar 10 18:50:12 crc kubenswrapper[4861]: I0310 18:50:12.736200 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c65aa66-5db2-421b-ad46-0ff1e2b1cb22\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://52e87225b434b0800764a5c2306d8079c44bff105d02de78ab085b434f56031f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0
6bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a8a6f58ea1d180f50a7ffde2b16f470901281a847bd85cc0bc8a62bbf9f8e70\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://484df0ad2e71b2faec0ed53537512b115e6d4ab7cd3212cd29309538bd013c51\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0
,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2dcc6ee4908d27fc63eb33149c6db9570b8524aab71a27fbb1e5c2fe4e97c52\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1665ca49c2c451e187b70bfc13ce0034d2c07b92943b18e77ae09cd6e5505557\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-10T18:48:50Z\\\",\\\"message\\\":\\\"le observer\\\\nW0310 18:48:50.587139 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0310 18:48:50.587315 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0310 18:48:50.588407 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4176947974/tls.crt::/tmp/serving-cert-4176947974/tls.key\\\\\\\"\\\\nI0310 18:48:50.773986 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0310 18:48:50.776438 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0310 18:48:50.776455 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0310 18:48:50.776477 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0310 18:48:50.776482 1 maxinflight.go:120] 
\\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0310 18:48:50.783076 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0310 18:48:50.783088 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0310 18:48:50.783118 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0310 18:48:50.783134 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0310 18:48:50.783146 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0310 18:48:50.783157 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0310 18:48:50.783167 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0310 18:48:50.783173 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0310 18:48:50.784467 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-10T18:48:50Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44dde00a3ae562bbb5504d299475795cc38b22c2b6decba2ff15067bce7436df\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:39Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a14137bdfec242e37af20a572af2edea25fb1d8a1f9708f8d0193d1a7675b1bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a14137bdfec242e37af20a572af2edea25fb1d8a1f9708f8d0193d1a7675b1bc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:47:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"sta
rtedAt\\\":\\\"2026-03-10T18:47:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:47:37Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:50:12Z is after 2025-08-24T17:21:41Z" Mar 10 18:50:12 crc kubenswrapper[4861]: I0310 18:50:12.755228 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22ae4cf4d7c980d5db6180fb412216b50b2ddb8f25ea2a5fb1e034d47ddfafac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\
\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:50:12Z is after 2025-08-24T17:21:41Z" Mar 10 18:50:12 crc kubenswrapper[4861]: I0310 18:50:12.768573 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:50:12Z is after 2025-08-24T17:21:41Z" Mar 10 18:50:12 crc kubenswrapper[4861]: I0310 18:50:12.792494 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-s2l62" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"be820cd7-b3a7-4183-a408-67151247b6ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:22Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22c9526135e4d6c3ef5cdf25b06f60556a876ace6c81593534be08cfd6a54cdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwtwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7bd3993ae4ddabc6c06a91127afc341760a07401ce3a409612824c0045bb6f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwtwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://edf854dc22368e4b8f76e0111b30784f22832fc69661b4db8d2c9c33aa553773\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwtwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e44e4837f8dba12dfb18ef2200a19f696222668eb9b10819d1c3b442a28f5e32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:47Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwtwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ae3cd6b9ef5ede85a70dd7bcf4eb260cc357bfceeb571904e68788eaba0709c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwtwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be4f9c8096f4981a65522e8ee451980e580153c1f5c65c736655fb94593dbd97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwtwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c80b530ddd55aac69b923f9a8fa0028c2220243566a0d8e111ccc746aab692c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e5ab01d21c22678fbaad3bec98312cca6e30cf9ca8d37da3636dda56304f2628\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-10T18:49:56Z\\\",\\\"message\\\":\\\"ient/informers/externalversions/factory.go:141\\\\nI0310 18:49:56.771330 6959 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0310 18:49:56.771547 6959 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from 
github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0310 18:49:56.772300 6959 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0310 18:49:56.772334 6959 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0310 18:49:56.772341 6959 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0310 18:49:56.772383 6959 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0310 18:49:56.772389 6959 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0310 18:49:56.772403 6959 factory.go:656] Stopping watch factory\\\\nI0310 18:49:56.772421 6959 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0310 18:49:56.772431 6959 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0310 18:49:56.772439 6959 handler.go:208] Removed *v1.Node event handler 2\\\\nI0310 18:49:56.772447 6959 handler.go:208] Removed *v1.Node event handler 7\\\\nI0310 18:49:56.772455 6959 handler.go:208] Removed *v1.Pod 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-10T18:49:55Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:50:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\
":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwtwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2cf2a1cfc3438718bb50a53445443bc0251f2cc83f903fca1cccd1048c8f1c20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwtwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09c620ba70d91a84cf6910e413d790eee8d6427dec9a39be3b706400fcaab656\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\
\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09c620ba70d91a84cf6910e413d790eee8d6427dec9a39be3b706400fcaab656\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:49:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-10T18:49:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwtwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:49:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-s2l62\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:50:12Z is after 2025-08-24T17:21:41Z" Mar 10 18:50:12 crc kubenswrapper[4861]: I0310 18:50:12.806388 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-2rvxn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c06e51d0-e817-41ac-9d69-3ef2099f8ba8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7cjz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7cjz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:49:34Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-2rvxn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:50:12Z is after 2025-08-24T17:21:41Z" Mar 10 18:50:12 crc 
kubenswrapper[4861]: I0310 18:50:12.818534 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a964dbf-d067-436c-9a8b-496fc69b0584\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://716832664d1be93a0faa28c9e6260cbda820fe51b3295faf5dc0a852b242d62d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\
":\\\"cri-o://a060129f36e562bcd541215d99123f79a1b0216c8c864b21b72c98fda5069040\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a060129f36e562bcd541215d99123f79a1b0216c8c864b21b72c98fda5069040\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:47:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-10T18:47:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:47:37Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:50:12Z is after 2025-08-24T17:21:41Z" Mar 10 18:50:12 crc kubenswrapper[4861]: I0310 18:50:12.830679 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a665f7a5f04a80f43377d87b895e0d95adae296d12f0826c3d97323e145d602\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-03-10T18:50:12Z is after 2025-08-24T17:21:41Z" Mar 10 18:50:12 crc kubenswrapper[4861]: I0310 18:50:12.844145 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-6lblg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d1c251f4-6539-4aa1-8979-47e74495aca3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3736c6e9da4ea1e91d7046c054bf885a5319e339f205e71fde5cc9cfa5d630ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-r
elease\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2gvk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:49:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-6lblg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-03-10T18:50:12Z is after 2025-08-24T17:21:41Z" Mar 10 18:50:12 crc kubenswrapper[4861]: I0310 18:50:12.854888 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pzmsp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"66631e59-ce5c-44de-8a4d-37eb82acf997\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1293bf47ec5a041886f7065f624cec3b882bb62394a2c448e138d5f7bc3a1c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"hos
t\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pm4t5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:49:27Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pzmsp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:50:12Z is after 2025-08-24T17:21:41Z" Mar 10 18:50:12 crc kubenswrapper[4861]: I0310 18:50:12.871695 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rmmgv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e4302e6-1187-4371-8523-a96ec8032e74\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9439c03c3e58276508f7abe48482c4c69521049dbad5ea7a2e7e7379c8c58914\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vh2b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://77513d5a93f447545a8b2aef460c75d3d73de
32163650349268f203e1ade4b4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vh2b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:49:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-rmmgv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:50:12Z is after 2025-08-24T17:21:41Z" Mar 10 18:50:12 crc kubenswrapper[4861]: I0310 18:50:12.891034 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3fabebda74f7f7605bc2f261bbf64acac1fe4ec65a267beef027413b454bee3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dbb86a65e5c4fda1cd29396eb9ad02739f5123519d07aef2eea3354f4a65571d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:50:12Z is after 2025-08-24T17:21:41Z" Mar 10 18:50:12 crc kubenswrapper[4861]: I0310 18:50:12.913294 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"952bf490-6587-4240-a832-feac082e7775\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf6fe1422055e59455ca71c1a22213f55312c80667526cf13dbf61f7ccea7c75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd7523ddf0ac38e3b59b767d7ef95f452f24c5914734d6f1ac0187f1bede5fcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b1d8f9a97293ae86fe0b8c9ab76600caf04291ec3ddd2e47a071805bef675da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://efd5039046658f4b595c747b6434cb0ef3befbf75874a6dab629cdbd6634524e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6234508294e7f87d3a8da0d3a1d8ecb96fffc3dbd5974e40a3bb7ceeee0d2a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34f62205737b2bb279cc7a0e2ffee213392c8135cca742b76e6f7ed44ddca754\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34f62205737b2bb279cc7a0e2ffee213392c8135cca742b76e6f7ed44ddca754\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-03-10T18:47:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-10T18:47:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://49cc08d4ed6e0fd87f4c957414e510f99511b9654a72324b707c165206c4979e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49cc08d4ed6e0fd87f4c957414e510f99511b9654a72324b707c165206c4979e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:47:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-10T18:47:39Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0995183765c106ba8369d3c05ac572596329eb1978876b770e327a430963ba9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0995183765c106ba8369d3c05ac572596329eb1978876b770e327a430963ba9b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:47:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-10T18:47:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:47:37Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:50:12Z is after 2025-08-24T17:21:41Z" Mar 10 18:50:12 crc kubenswrapper[4861]: I0310 18:50:12.929083 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:50:12Z is after 2025-08-24T17:21:41Z" Mar 10 18:50:12 crc kubenswrapper[4861]: I0310 18:50:12.945463 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qttbr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"771189c2-452d-4204-a0b7-abfe9ba62bd0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10b29575454354d2a034781fdd40e9972ddd08328eea1a3bb89377fcab4181cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tng72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11c0ae40f0d210a82350ba0ada7a3c9f35595826
a4e6f3d5619230238d00111b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tng72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:49:21Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qttbr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:50:12Z is after 2025-08-24T17:21:41Z" Mar 10 18:50:12 crc kubenswrapper[4861]: I0310 18:50:12.962702 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-j2s27" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"391f4bfa-b94c-4b25-8f06-a2f19f912194\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bbf0e311a00fea576e9f2c739cd2d87aedfc06aaef5cf0c68374bd67241888b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlddg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02cd3948c6ba0e290e32d64b2390da20510d66f20771b9bb0b98957e8900b124\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://02cd3948c6ba0e290e32d64b2390da20510d66f20771b9bb0b98957e8900b124\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:49:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-10T18:49:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlddg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c010603b11cab781dd3c14d84121d60b8115bb415df567ef1c267406b9e6988\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c010603b11cab781dd3c14d84121d60b8115bb415df567ef1c267406b9e6988\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:49:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-10T18:49:37Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlddg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1ba322572d96fe13a88815151416495decdd5f486eb30b3fd4345040c5b9d1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c1ba322572d96fe13a88815151416495decdd5f486eb30b3fd4345040c5b9d1d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:49:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-10T18:49:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlddg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f24e7
4b8fb41e5e765f1ba84b4f1efce28077537df837e4cc308e70b16139732\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f24e74b8fb41e5e765f1ba84b4f1efce28077537df837e4cc308e70b16139732\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:49:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-10T18:49:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlddg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f43980d19ca7b9caf61de8f3257dff536ff78a4cf530f6ec3cb7c91f1a3b5af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5f43980d19ca7b9caf61de8f3257dff536ff78a4cf530f6ec3cb7c91f1a3b5af\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:49:41Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-03-10T18:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlddg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2779899b3339467fdfac0db5ea7b8e4f306ca22205f2c44cc85a2df7fc2cc066\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2779899b3339467fdfac0db5ea7b8e4f306ca22205f2c44cc85a2df7fc2cc066\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:49:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-10T18:49:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlddg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:49:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-j2s27\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:50:12Z is after 2025-08-24T17:21:41Z" Mar 10 18:50:12 crc kubenswrapper[4861]: I0310 18:50:12.986408 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"745fc7ce-a025-4236-a613-2d2c1661aa2e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://098109abecb73d2ad721a447bdfff2c9e9c2d24b969013d6108b0ac9d1d61e01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mou
ntPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc83538f8ad05533765a1730072969d529579cd76ff33a77dc49b0bff15caad5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://02e4029a918a6ce3cba59cacb37169816c0ce5b734d46605756ede5985ccd686\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3cb1fb5ba7aa6d8d7184d85c8
70898f3b51dd4910da9346b24b7d63f952467d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3cb1fb5ba7aa6d8d7184d85c870898f3b51dd4910da9346b24b7d63f952467d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:47:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-10T18:47:38Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:47:37Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:50:12Z is after 2025-08-24T17:21:41Z" Mar 10 18:50:13 crc kubenswrapper[4861]: I0310 18:50:13.004062 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:50:13Z is after 2025-08-24T17:21:41Z" Mar 10 18:50:13 crc kubenswrapper[4861]: I0310 18:50:13.018250 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-b87lw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e474cdc9-b374-49a6-aece-afa19f8d5ee6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f06ce6a80200c39385f53697c2f3fbbec862effc04ddf9e0499d9c20761a54e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5t7pf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:49:21Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-b87lw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:50:13Z is after 2025-08-24T17:21:41Z" Mar 10 18:50:13 crc kubenswrapper[4861]: I0310 18:50:13.037236 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"046b6ea2-2e19-4917-953a-eb8aca6d80cb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df0d545a88d84a8015bafb1061f16f446f63c97065dc88c869a33a24b8e3c22e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"c
ontainerID\\\":\\\"cri-o://da2dbdc794693bb8da08c0fcc84531d65b468d135904b7055fb926fbd0cce95d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-10T18:48:03Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0310 18:47:39.105265 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0310 18:47:39.107559 1 observer_polling.go:159] Starting file observer\\\\nI0310 18:47:39.140119 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0310 18:47:39.145777 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0310 18:48:03.686072 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0310 18:48:03.686208 1 cmd.go:179] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:48:03Z is after 
2026-02-23T05:33:13Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-10T18:47:38Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:48:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://73d88019bcd40296d2d693dfb1ce3bacd2e94ca10a114a00c75392df04099b33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://31bbed2d81ace88f31b763f3b4bed57db657bf9e78413b57b2f279b408e0b848\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:38Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b7156e2106372814a5e2b2816352ee308bb4fdffef5efe8da5bc39d3bc29398\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:47:37Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:50:13Z is after 2025-08-24T17:21:41Z" Mar 10 18:50:13 crc kubenswrapper[4861]: I0310 18:50:13.719842 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-s2l62_be820cd7-b3a7-4183-a408-67151247b6ee/ovnkube-controller/2.log" Mar 10 18:50:13 crc 
kubenswrapper[4861]: I0310 18:50:13.720702 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-s2l62_be820cd7-b3a7-4183-a408-67151247b6ee/ovnkube-controller/1.log" Mar 10 18:50:13 crc kubenswrapper[4861]: I0310 18:50:13.725112 4861 generic.go:334] "Generic (PLEG): container finished" podID="be820cd7-b3a7-4183-a408-67151247b6ee" containerID="c80b530ddd55aac69b923f9a8fa0028c2220243566a0d8e111ccc746aab692c8" exitCode=1 Mar 10 18:50:13 crc kubenswrapper[4861]: I0310 18:50:13.725166 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-s2l62" event={"ID":"be820cd7-b3a7-4183-a408-67151247b6ee","Type":"ContainerDied","Data":"c80b530ddd55aac69b923f9a8fa0028c2220243566a0d8e111ccc746aab692c8"} Mar 10 18:50:13 crc kubenswrapper[4861]: I0310 18:50:13.725215 4861 scope.go:117] "RemoveContainer" containerID="e5ab01d21c22678fbaad3bec98312cca6e30cf9ca8d37da3636dda56304f2628" Mar 10 18:50:13 crc kubenswrapper[4861]: I0310 18:50:13.726955 4861 scope.go:117] "RemoveContainer" containerID="c80b530ddd55aac69b923f9a8fa0028c2220243566a0d8e111ccc746aab692c8" Mar 10 18:50:13 crc kubenswrapper[4861]: E0310 18:50:13.728167 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-s2l62_openshift-ovn-kubernetes(be820cd7-b3a7-4183-a408-67151247b6ee)\"" pod="openshift-ovn-kubernetes/ovnkube-node-s2l62" podUID="be820cd7-b3a7-4183-a408-67151247b6ee" Mar 10 18:50:13 crc kubenswrapper[4861]: I0310 18:50:13.753285 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rmmgv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e4302e6-1187-4371-8523-a96ec8032e74\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9439c03c3e58276508f7abe48482c4c69521049dbad5ea7a2e7e7379c8c58914\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vh2b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://77513d5a93f447545a8b2aef460c75d3d73de
32163650349268f203e1ade4b4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vh2b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:49:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-rmmgv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:50:13Z is after 2025-08-24T17:21:41Z" Mar 10 18:50:13 crc kubenswrapper[4861]: I0310 18:50:13.773748 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3fabebda74f7f7605bc2f261bbf64acac1fe4ec65a267beef027413b454bee3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dbb86a65e5c4fda1cd29396eb9ad02739f5123519d07aef2eea3354f4a65571d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:50:13Z is after 2025-08-24T17:21:41Z" Mar 10 18:50:13 crc kubenswrapper[4861]: I0310 18:50:13.791955 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a665f7a5f04a80f43377d87b895e0d95adae296d12f0826c3d97323e145d602\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-03-10T18:50:13Z is after 2025-08-24T17:21:41Z" Mar 10 18:50:13 crc kubenswrapper[4861]: I0310 18:50:13.812220 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-6lblg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d1c251f4-6539-4aa1-8979-47e74495aca3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3736c6e9da4ea1e91d7046c054bf885a5319e339f205e71fde5cc9cfa5d630ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-r
elease\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2gvk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:49:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-6lblg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-03-10T18:50:13Z is after 2025-08-24T17:21:41Z" Mar 10 18:50:13 crc kubenswrapper[4861]: I0310 18:50:13.828098 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pzmsp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"66631e59-ce5c-44de-8a4d-37eb82acf997\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1293bf47ec5a041886f7065f624cec3b882bb62394a2c448e138d5f7bc3a1c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"hos
t\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pm4t5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:49:27Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pzmsp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:50:13Z is after 2025-08-24T17:21:41Z" Mar 10 18:50:13 crc kubenswrapper[4861]: I0310 18:50:13.850489 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-j2s27" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"391f4bfa-b94c-4b25-8f06-a2f19f912194\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bbf0e311a00fea576e9f2c739cd2d87aedfc06aaef5cf0c68374bd67241888b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlddg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02cd3948c6ba0e290e32d64b2390da20510d66f20771b9bb0b98957e8900b124\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://02cd3948c6ba0e290e32d64b2390da20510d66f20771b9bb0b98957e8900b124\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:49:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-10T18:49:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlddg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c010603b11cab781dd3c14d84121d60b8115bb415df567ef1c267406b9e6988\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c010603b11cab781dd3c14d84121d60b8115bb415df567ef1c267406b9e6988\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:49:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-10T18:49:37Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlddg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1ba322572d96fe13a88815151416495decdd5f486eb30b3fd4345040c5b9d1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c1ba322572d96fe13a88815151416495decdd5f486eb30b3fd4345040c5b9d1d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:49:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-10T18:49:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlddg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f24e7
4b8fb41e5e765f1ba84b4f1efce28077537df837e4cc308e70b16139732\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f24e74b8fb41e5e765f1ba84b4f1efce28077537df837e4cc308e70b16139732\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:49:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-10T18:49:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlddg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f43980d19ca7b9caf61de8f3257dff536ff78a4cf530f6ec3cb7c91f1a3b5af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5f43980d19ca7b9caf61de8f3257dff536ff78a4cf530f6ec3cb7c91f1a3b5af\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:49:41Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-03-10T18:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlddg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2779899b3339467fdfac0db5ea7b8e4f306ca22205f2c44cc85a2df7fc2cc066\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2779899b3339467fdfac0db5ea7b8e4f306ca22205f2c44cc85a2df7fc2cc066\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:49:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-10T18:49:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlddg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:49:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-j2s27\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:50:13Z is after 2025-08-24T17:21:41Z" Mar 10 18:50:13 crc kubenswrapper[4861]: I0310 18:50:13.868542 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"745fc7ce-a025-4236-a613-2d2c1661aa2e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://098109abecb73d2ad721a447bdfff2c9e9c2d24b969013d6108b0ac9d1d61e01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mou
ntPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc83538f8ad05533765a1730072969d529579cd76ff33a77dc49b0bff15caad5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://02e4029a918a6ce3cba59cacb37169816c0ce5b734d46605756ede5985ccd686\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3cb1fb5ba7aa6d8d7184d85c8
70898f3b51dd4910da9346b24b7d63f952467d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3cb1fb5ba7aa6d8d7184d85c870898f3b51dd4910da9346b24b7d63f952467d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:47:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-10T18:47:38Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:47:37Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:50:13Z is after 2025-08-24T17:21:41Z" Mar 10 18:50:13 crc kubenswrapper[4861]: I0310 18:50:13.899559 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"952bf490-6587-4240-a832-feac082e7775\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf6fe1422055e59455ca71c1a22213f55312c80667526cf13dbf61f7ccea7c75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd7523ddf0ac38e3b59b767d7ef95f452f24c5914734d6f1ac0187f1bede5fcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b1d8f9a97293ae86fe0b8c9ab76600caf04291ec3ddd2e47a071805bef675da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://efd5039046658f4b595c747b6434cb0ef3befbf75874a6dab629cdbd6634524e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6234508294e7f87d3a8da0d3a1d8ecb96fffc3dbd5974e40a3bb7ceeee0d2a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34f62205737b2bb279cc7a0e2ffee213392c8135cca742b76e6f7ed44ddca754\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34f62205737b2bb279cc7a0e2ffee213392c8135cca742b76e6f7ed44ddca754\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-03-10T18:47:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-10T18:47:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://49cc08d4ed6e0fd87f4c957414e510f99511b9654a72324b707c165206c4979e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49cc08d4ed6e0fd87f4c957414e510f99511b9654a72324b707c165206c4979e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:47:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-10T18:47:39Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0995183765c106ba8369d3c05ac572596329eb1978876b770e327a430963ba9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0995183765c106ba8369d3c05ac572596329eb1978876b770e327a430963ba9b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:47:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-10T18:47:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:47:37Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:50:13Z is after 2025-08-24T17:21:41Z" Mar 10 18:50:13 crc kubenswrapper[4861]: I0310 18:50:13.918576 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:50:13Z is after 2025-08-24T17:21:41Z" Mar 10 18:50:13 crc kubenswrapper[4861]: I0310 18:50:13.937023 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qttbr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"771189c2-452d-4204-a0b7-abfe9ba62bd0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10b29575454354d2a034781fdd40e9972ddd08328eea1a3bb89377fcab4181cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tng72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11c0ae40f0d210a82350ba0ada7a3c9f35595826
a4e6f3d5619230238d00111b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tng72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:49:21Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qttbr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:50:13Z is after 2025-08-24T17:21:41Z" Mar 10 18:50:13 crc kubenswrapper[4861]: I0310 18:50:13.956664 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"046b6ea2-2e19-4917-953a-eb8aca6d80cb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df0d545a88d84a8015bafb1061f16f446f63c97065dc88c869a33a24b8e3c22e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da2dbdc794693bb8da08c0fcc84531d65b468d135904b7055fb926fbd0cce95d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-10T18:48:03Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager 
-v=2\\\\nI0310 18:47:39.105265 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0310 18:47:39.107559 1 observer_polling.go:159] Starting file observer\\\\nI0310 18:47:39.140119 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0310 18:47:39.145777 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0310 18:48:03.686072 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0310 18:48:03.686208 1 cmd.go:179] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:48:03Z is after 
2026-02-23T05:33:13Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-10T18:47:38Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:48:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://73d88019bcd40296d2d693dfb1ce3bacd2e94ca10a114a00c75392df04099b33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://31bbed2d81ace88f31b763f3b4bed57db657bf9e78413b57b2f279b408e0b848\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:38Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b7156e2106372814a5e2b2816352ee308bb4fdffef5efe8da5bc39d3bc29398\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:47:37Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:50:13Z is after 2025-08-24T17:21:41Z" Mar 10 18:50:13 crc kubenswrapper[4861]: I0310 18:50:13.957209 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 10 18:50:13 crc kubenswrapper[4861]: I0310 18:50:13.957361 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 10 18:50:13 crc kubenswrapper[4861]: I0310 18:50:13.957641 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 10 18:50:13 crc kubenswrapper[4861]: I0310 18:50:13.957914 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2rvxn" Mar 10 18:50:13 crc kubenswrapper[4861]: E0310 18:50:13.957912 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 10 18:50:13 crc kubenswrapper[4861]: E0310 18:50:13.958017 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 10 18:50:13 crc kubenswrapper[4861]: E0310 18:50:13.958817 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 10 18:50:13 crc kubenswrapper[4861]: E0310 18:50:13.959169 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2rvxn" podUID="c06e51d0-e817-41ac-9d69-3ef2099f8ba8" Mar 10 18:50:13 crc kubenswrapper[4861]: I0310 18:50:13.975630 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:50:13Z is after 2025-08-24T17:21:41Z" Mar 10 18:50:13 crc kubenswrapper[4861]: I0310 18:50:13.990541 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-b87lw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e474cdc9-b374-49a6-aece-afa19f8d5ee6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f06ce6a80200c39385f53697c2f3fbbec862effc04ddf9e0499d9c20761a54e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5t7pf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:49:21Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-b87lw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:50:13Z is after 2025-08-24T17:21:41Z" Mar 10 18:50:14 crc kubenswrapper[4861]: I0310 18:50:14.021306 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-s2l62" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"be820cd7-b3a7-4183-a408-67151247b6ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:22Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22c9526135e4d6c3ef5cdf25b06f60556a876ace6c81593534be08cfd6a54cdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwtwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7bd3993ae4ddabc6c06a91127afc341760a07401ce3a409612824c0045bb6f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwtwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://edf854dc22368e4b8f76e0111b30784f22832fc69661b4db8d2c9c33aa553773\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwtwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e44e4837f8dba12dfb18ef2200a19f696222668eb9b10819d1c3b442a28f5e32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:47Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwtwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ae3cd6b9ef5ede85a70dd7bcf4eb260cc357bfceeb571904e68788eaba0709c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwtwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be4f9c8096f4981a65522e8ee451980e580153c1f5c65c736655fb94593dbd97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwtwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c80b530ddd55aac69b923f9a8fa0028c2220243566a0d8e111ccc746aab692c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e5ab01d21c22678fbaad3bec98312cca6e30cf9ca8d37da3636dda56304f2628\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-10T18:49:56Z\\\",\\\"message\\\":\\\"ient/informers/externalversions/factory.go:141\\\\nI0310 18:49:56.771330 6959 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0310 18:49:56.771547 6959 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from 
github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0310 18:49:56.772300 6959 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0310 18:49:56.772334 6959 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0310 18:49:56.772341 6959 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0310 18:49:56.772383 6959 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0310 18:49:56.772389 6959 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0310 18:49:56.772403 6959 factory.go:656] Stopping watch factory\\\\nI0310 18:49:56.772421 6959 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0310 18:49:56.772431 6959 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0310 18:49:56.772439 6959 handler.go:208] Removed *v1.Node event handler 2\\\\nI0310 18:49:56.772447 6959 handler.go:208] Removed *v1.Node event handler 7\\\\nI0310 18:49:56.772455 6959 handler.go:208] Removed *v1.Pod ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-10T18:49:55Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c80b530ddd55aac69b923f9a8fa0028c2220243566a0d8e111ccc746aab692c8\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-10T18:50:13Z\\\",\\\"message\\\":\\\"/informers/factory.go:160\\\\nI0310 18:50:12.991923 7138 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0310 18:50:12.992087 7138 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0310 18:50:12.992244 7138 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0310 18:50:12.992318 7138 reflector.go:311] Stopping reflector *v1.Node (0s) from 
k8s.io/client-go/informers/factory.go:160\\\\nI0310 18:50:12.992475 7138 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0310 18:50:12.992661 7138 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0310 18:50:12.992877 7138 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0310 18:50:12.993154 7138 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0310 18:50:12.993240 7138 factory.go:656] Stopping watch factory\\\\nI0310 18:50:12.993262 7138 ovnkube.go:599] Stopped ovnkube\\\\nI0310 18:50:1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-10T18:50:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\
\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwtwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2cf2a1cfc3438718bb50a53445443bc0251f2cc83f903fca1cccd1048c8f1c20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":
\\\"kube-api-access-fwtwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09c620ba70d91a84cf6910e413d790eee8d6427dec9a39be3b706400fcaab656\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09c620ba70d91a84cf6910e413d790eee8d6427dec9a39be3b706400fcaab656\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:49:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-10T18:49:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwtwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:49:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-s2l62\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:50:14Z is after 2025-08-24T17:21:41Z" Mar 10 18:50:14 crc kubenswrapper[4861]: I0310 18:50:14.038327 4861 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-multus/network-metrics-daemon-2rvxn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c06e51d0-e817-41ac-9d69-3ef2099f8ba8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7cjz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7cjz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:49:34Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-2rvxn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:50:14Z is after 2025-08-24T17:21:41Z" Mar 10 18:50:14 crc 
kubenswrapper[4861]: I0310 18:50:14.054417 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a964dbf-d067-436c-9a8b-496fc69b0584\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://716832664d1be93a0faa28c9e6260cbda820fe51b3295faf5dc0a852b242d62d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\
":\\\"cri-o://a060129f36e562bcd541215d99123f79a1b0216c8c864b21b72c98fda5069040\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a060129f36e562bcd541215d99123f79a1b0216c8c864b21b72c98fda5069040\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:47:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-10T18:47:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:47:37Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:50:14Z is after 2025-08-24T17:21:41Z" Mar 10 18:50:14 crc kubenswrapper[4861]: I0310 18:50:14.077524 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c65aa66-5db2-421b-ad46-0ff1e2b1cb22\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://52e87225b434b0800764a5c2306d8079c44bff105d02de78ab085b434f56031f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a8a6f58ea1d180f50a7ffde2b16f470901281a847bd85cc0bc8a62bbf9f8e70\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://484df0ad2e71b2faec0ed53537512b115e6d4ab7cd3212cd29309538bd013c51\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2dcc6ee4908d27fc63eb33149c6db9570b8524aab71a27fbb1e5c2fe4e97c52\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1665ca49c2c451e187b70bfc13ce0034d2c07b92943b18e77ae09cd6e5505557\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-10T18:48:50Z\\\"
,\\\"message\\\":\\\"le observer\\\\nW0310 18:48:50.587139 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0310 18:48:50.587315 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0310 18:48:50.588407 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4176947974/tls.crt::/tmp/serving-cert-4176947974/tls.key\\\\\\\"\\\\nI0310 18:48:50.773986 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0310 18:48:50.776438 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0310 18:48:50.776455 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0310 18:48:50.776477 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0310 18:48:50.776482 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0310 18:48:50.783076 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0310 18:48:50.783088 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0310 18:48:50.783118 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0310 18:48:50.783134 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0310 18:48:50.783146 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0310 18:48:50.783157 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0310 18:48:50.783167 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0310 18:48:50.783173 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0310 18:48:50.784467 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-10T18:48:50Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44dde00a3ae562bbb5504d299475795cc38b22c2b6decba2ff15067bce7436df\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:39Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a14137bdfec242e37af20a572af2edea25fb1d8a1f9708f8d0193d1a7675b1bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a14137bdfec242e37af20a572af2edea25f
b1d8a1f9708f8d0193d1a7675b1bc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:47:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-10T18:47:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:47:37Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:50:14Z is after 2025-08-24T17:21:41Z" Mar 10 18:50:14 crc kubenswrapper[4861]: I0310 18:50:14.100893 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22ae4cf4d7c980d5db6180fb412216b50b2ddb8f25ea2a5fb1e034d47ddfafac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-03-10T18:50:14Z is after 2025-08-24T17:21:41Z" Mar 10 18:50:14 crc kubenswrapper[4861]: I0310 18:50:14.121467 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:50:14Z is after 2025-08-24T17:21:41Z" Mar 10 18:50:14 crc kubenswrapper[4861]: I0310 18:50:14.731225 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-s2l62_be820cd7-b3a7-4183-a408-67151247b6ee/ovnkube-controller/2.log" Mar 10 18:50:14 crc kubenswrapper[4861]: I0310 18:50:14.736621 4861 scope.go:117] "RemoveContainer" containerID="c80b530ddd55aac69b923f9a8fa0028c2220243566a0d8e111ccc746aab692c8" Mar 10 18:50:14 crc kubenswrapper[4861]: E0310 18:50:14.736946 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-s2l62_openshift-ovn-kubernetes(be820cd7-b3a7-4183-a408-67151247b6ee)\"" pod="openshift-ovn-kubernetes/ovnkube-node-s2l62" podUID="be820cd7-b3a7-4183-a408-67151247b6ee" Mar 10 18:50:14 crc kubenswrapper[4861]: I0310 18:50:14.755532 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a665f7a5f04a80f43377d87b895e0d95adae296d12f0826c3d97323e145d602\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2026-03-10T18:50:14Z is after 2025-08-24T17:21:41Z" Mar 10 18:50:14 crc kubenswrapper[4861]: I0310 18:50:14.774920 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-6lblg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d1c251f4-6539-4aa1-8979-47e74495aca3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3736c6e9da4ea1e91d7046c054bf885a5319e339f205e71fde5cc9cfa5d630ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"n
ame\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2gvk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:49:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-6lblg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2026-03-10T18:50:14Z is after 2025-08-24T17:21:41Z" Mar 10 18:50:14 crc kubenswrapper[4861]: I0310 18:50:14.790233 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pzmsp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"66631e59-ce5c-44de-8a4d-37eb82acf997\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1293bf47ec5a041886f7065f624cec3b882bb62394a2c448e138d5f7bc3a1c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"
name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pm4t5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:49:27Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pzmsp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:50:14Z is after 2025-08-24T17:21:41Z" Mar 10 18:50:14 crc kubenswrapper[4861]: I0310 18:50:14.808264 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rmmgv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e4302e6-1187-4371-8523-a96ec8032e74\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9439c03c3e58276508f7abe48482c4c69521049dbad5ea7a2e7e7379c8c58914\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vh2b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://77513d5a93f447545a8b2aef460c75d3d73de
32163650349268f203e1ade4b4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vh2b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:49:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-rmmgv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:50:14Z is after 2025-08-24T17:21:41Z" Mar 10 18:50:14 crc kubenswrapper[4861]: I0310 18:50:14.827141 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3fabebda74f7f7605bc2f261bbf64acac1fe4ec65a267beef027413b454bee3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dbb86a65e5c4fda1cd29396eb9ad02739f5123519d07aef2eea3354f4a65571d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:50:14Z is after 2025-08-24T17:21:41Z" Mar 10 18:50:14 crc kubenswrapper[4861]: I0310 18:50:14.859878 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"952bf490-6587-4240-a832-feac082e7775\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf6fe1422055e59455ca71c1a22213f55312c80667526cf13dbf61f7ccea7c75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd7523ddf0ac38e3b59b767d7ef95f452f24c5914734d6f1ac0187f1bede5fcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b1d8f9a97293ae86fe0b8c9ab76600caf04291ec3ddd2e47a071805bef675da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://efd5039046658f4b595c747b6434cb0ef3befbf75874a6dab629cdbd6634524e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6234508294e7f87d3a8da0d3a1d8ecb96fffc3dbd5974e40a3bb7ceeee0d2a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34f62205737b2bb279cc7a0e2ffee213392c8135cca742b76e6f7ed44ddca754\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34f62205737b2bb279cc7a0e2ffee213392c8135cca742b76e6f7ed44ddca754\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-03-10T18:47:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-10T18:47:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://49cc08d4ed6e0fd87f4c957414e510f99511b9654a72324b707c165206c4979e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49cc08d4ed6e0fd87f4c957414e510f99511b9654a72324b707c165206c4979e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:47:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-10T18:47:39Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0995183765c106ba8369d3c05ac572596329eb1978876b770e327a430963ba9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0995183765c106ba8369d3c05ac572596329eb1978876b770e327a430963ba9b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:47:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-10T18:47:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:47:37Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:50:14Z is after 2025-08-24T17:21:41Z" Mar 10 18:50:14 crc kubenswrapper[4861]: I0310 18:50:14.880333 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:50:14Z is after 2025-08-24T17:21:41Z" Mar 10 18:50:14 crc kubenswrapper[4861]: I0310 18:50:14.897099 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qttbr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"771189c2-452d-4204-a0b7-abfe9ba62bd0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10b29575454354d2a034781fdd40e9972ddd08328eea1a3bb89377fcab4181cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tng72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11c0ae40f0d210a82350ba0ada7a3c9f35595826
a4e6f3d5619230238d00111b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tng72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:49:21Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qttbr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:50:14Z is after 2025-08-24T17:21:41Z" Mar 10 18:50:14 crc kubenswrapper[4861]: I0310 18:50:14.919180 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-j2s27" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"391f4bfa-b94c-4b25-8f06-a2f19f912194\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bbf0e311a00fea576e9f2c739cd2d87aedfc06aaef5cf0c68374bd67241888b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlddg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02cd3948c6ba0e290e32d64b2390da20510d66f20771b9bb0b98957e8900b124\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://02cd3948c6ba0e290e32d64b2390da20510d66f20771b9bb0b98957e8900b124\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:49:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-10T18:49:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlddg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c010603b11cab781dd3c14d84121d60b8115bb415df567ef1c267406b9e6988\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c010603b11cab781dd3c14d84121d60b8115bb415df567ef1c267406b9e6988\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:49:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-10T18:49:37Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlddg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1ba322572d96fe13a88815151416495decdd5f486eb30b3fd4345040c5b9d1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c1ba322572d96fe13a88815151416495decdd5f486eb30b3fd4345040c5b9d1d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:49:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-10T18:49:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlddg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f24e7
4b8fb41e5e765f1ba84b4f1efce28077537df837e4cc308e70b16139732\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f24e74b8fb41e5e765f1ba84b4f1efce28077537df837e4cc308e70b16139732\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:49:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-10T18:49:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlddg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f43980d19ca7b9caf61de8f3257dff536ff78a4cf530f6ec3cb7c91f1a3b5af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5f43980d19ca7b9caf61de8f3257dff536ff78a4cf530f6ec3cb7c91f1a3b5af\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:49:41Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-03-10T18:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlddg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2779899b3339467fdfac0db5ea7b8e4f306ca22205f2c44cc85a2df7fc2cc066\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2779899b3339467fdfac0db5ea7b8e4f306ca22205f2c44cc85a2df7fc2cc066\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:49:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-10T18:49:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlddg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:49:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-j2s27\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:50:14Z is after 2025-08-24T17:21:41Z" Mar 10 18:50:14 crc kubenswrapper[4861]: I0310 18:50:14.936897 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"745fc7ce-a025-4236-a613-2d2c1661aa2e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://098109abecb73d2ad721a447bdfff2c9e9c2d24b969013d6108b0ac9d1d61e01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mou
ntPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc83538f8ad05533765a1730072969d529579cd76ff33a77dc49b0bff15caad5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://02e4029a918a6ce3cba59cacb37169816c0ce5b734d46605756ede5985ccd686\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3cb1fb5ba7aa6d8d7184d85c8
70898f3b51dd4910da9346b24b7d63f952467d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3cb1fb5ba7aa6d8d7184d85c870898f3b51dd4910da9346b24b7d63f952467d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:47:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-10T18:47:38Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:47:37Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:50:14Z is after 2025-08-24T17:21:41Z" Mar 10 18:50:14 crc kubenswrapper[4861]: I0310 18:50:14.956487 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:50:14Z is after 2025-08-24T17:21:41Z" Mar 10 18:50:14 crc kubenswrapper[4861]: I0310 18:50:14.974197 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-b87lw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e474cdc9-b374-49a6-aece-afa19f8d5ee6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f06ce6a80200c39385f53697c2f3fbbec862effc04ddf9e0499d9c20761a54e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5t7pf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:49:21Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-b87lw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:50:14Z is after 2025-08-24T17:21:41Z" Mar 10 18:50:14 crc kubenswrapper[4861]: I0310 18:50:14.996389 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"046b6ea2-2e19-4917-953a-eb8aca6d80cb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df0d545a88d84a8015bafb1061f16f446f63c97065dc88c869a33a24b8e3c22e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"c
ontainerID\\\":\\\"cri-o://da2dbdc794693bb8da08c0fcc84531d65b468d135904b7055fb926fbd0cce95d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-10T18:48:03Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0310 18:47:39.105265 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0310 18:47:39.107559 1 observer_polling.go:159] Starting file observer\\\\nI0310 18:47:39.140119 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0310 18:47:39.145777 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0310 18:48:03.686072 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0310 18:48:03.686208 1 cmd.go:179] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:48:03Z is after 
2026-02-23T05:33:13Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-10T18:47:38Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:48:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://73d88019bcd40296d2d693dfb1ce3bacd2e94ca10a114a00c75392df04099b33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://31bbed2d81ace88f31b763f3b4bed57db657bf9e78413b57b2f279b408e0b848\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:38Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b7156e2106372814a5e2b2816352ee308bb4fdffef5efe8da5bc39d3bc29398\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:47:37Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:50:14Z is after 2025-08-24T17:21:41Z" Mar 10 18:50:15 crc kubenswrapper[4861]: I0310 18:50:15.017655 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c65aa66-5db2-421b-ad46-0ff1e2b1cb22\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://52e87225b434b0800764a5c2306d8079c44bff105d02de78ab085b434f56031f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a8a6f58ea1d180f50a7ffde2b16f470901281a847bd85cc0bc8a62bbf9f8e70\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://484df0ad2e71b2faec0ed53537512b115e6d4ab7cd3212cd29309538bd013c51\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2dcc6ee4908d27fc63eb33149c6db9570b8524aab71a27fbb1e5c2fe4e97c52\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1665ca49c2c451e187b70bfc13ce0034d2c07b92943b18e77ae09cd6e5505557\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-10T18:48:50Z\\\"
,\\\"message\\\":\\\"le observer\\\\nW0310 18:48:50.587139 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0310 18:48:50.587315 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0310 18:48:50.588407 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4176947974/tls.crt::/tmp/serving-cert-4176947974/tls.key\\\\\\\"\\\\nI0310 18:48:50.773986 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0310 18:48:50.776438 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0310 18:48:50.776455 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0310 18:48:50.776477 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0310 18:48:50.776482 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0310 18:48:50.783076 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0310 18:48:50.783088 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0310 18:48:50.783118 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0310 18:48:50.783134 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0310 18:48:50.783146 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0310 18:48:50.783157 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0310 18:48:50.783167 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0310 18:48:50.783173 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0310 18:48:50.784467 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-10T18:48:50Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44dde00a3ae562bbb5504d299475795cc38b22c2b6decba2ff15067bce7436df\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:39Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a14137bdfec242e37af20a572af2edea25fb1d8a1f9708f8d0193d1a7675b1bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a14137bdfec242e37af20a572af2edea25f
b1d8a1f9708f8d0193d1a7675b1bc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:47:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-10T18:47:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:47:37Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:50:15Z is after 2025-08-24T17:21:41Z" Mar 10 18:50:15 crc kubenswrapper[4861]: I0310 18:50:15.037444 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22ae4cf4d7c980d5db6180fb412216b50b2ddb8f25ea2a5fb1e034d47ddfafac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-03-10T18:50:15Z is after 2025-08-24T17:21:41Z" Mar 10 18:50:15 crc kubenswrapper[4861]: I0310 18:50:15.056578 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:50:15Z is after 2025-08-24T17:21:41Z" Mar 10 18:50:15 crc kubenswrapper[4861]: I0310 18:50:15.089114 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-s2l62" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"be820cd7-b3a7-4183-a408-67151247b6ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:22Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22c9526135e4d6c3ef5cdf25b06f60556a876ace6c81593534be08cfd6a54cdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwtwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7bd3993ae4ddabc6c06a91127afc341760a07401ce3a409612824c0045bb6f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwtwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://edf854dc22368e4b8f76e0111b30784f22832fc69661b4db8d2c9c33aa553773\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwtwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e44e4837f8dba12dfb18ef2200a19f696222668eb9b10819d1c3b442a28f5e32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwtwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ae3cd6b9ef5ede85a70dd7bcf4eb260cc357bfceeb571904e68788eaba0709c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwtwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be4f9c8096f4981a65522e8ee451980e580153c1f5c65c736655fb94593dbd97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwtwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c80b530ddd55aac69b923f9a8fa0028c2220243566a0d8e111ccc746aab692c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c80b530ddd55aac69b923f9a8fa0028c2220243566a0d8e111ccc746aab692c8\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-10T18:50:13Z\\\",\\\"message\\\":\\\"/informers/factory.go:160\\\\nI0310 18:50:12.991923 7138 reflector.go:311] Stopping 
reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0310 18:50:12.992087 7138 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0310 18:50:12.992244 7138 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0310 18:50:12.992318 7138 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0310 18:50:12.992475 7138 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0310 18:50:12.992661 7138 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0310 18:50:12.992877 7138 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0310 18:50:12.993154 7138 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0310 18:50:12.993240 7138 factory.go:656] Stopping watch factory\\\\nI0310 18:50:12.993262 7138 ovnkube.go:599] Stopped ovnkube\\\\nI0310 18:50:1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-10T18:50:12Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-s2l62_openshift-ovn-kubernetes(be820cd7-b3a7-4183-a408-67151247b6ee)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwtwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2cf2a1cfc3438718bb50a53445443bc0251f2cc83f903fca1cccd1048c8f1c20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwtwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09c620ba70d91a84cf6910e413d790eee8d6427dec9a39be3b706400fcaab656\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09c620ba70d91a84cf
6910e413d790eee8d6427dec9a39be3b706400fcaab656\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:49:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-10T18:49:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwtwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:49:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-s2l62\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:50:15Z is after 2025-08-24T17:21:41Z" Mar 10 18:50:15 crc kubenswrapper[4861]: I0310 18:50:15.106045 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-2rvxn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c06e51d0-e817-41ac-9d69-3ef2099f8ba8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:34Z\\\",\\\"message\\\":\\\"containers 
with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7cjz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7cjz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:49:34Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-2rvxn\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:50:15Z is after 2025-08-24T17:21:41Z" Mar 10 18:50:15 crc kubenswrapper[4861]: I0310 18:50:15.122344 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a964dbf-d067-436c-9a8b-496fc69b0584\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://716832664d1be93a0faa28c9e6260cbda820fe51b3295faf5dc0a852b242d62d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:39Z\\\"}},\\
\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a060129f36e562bcd541215d99123f79a1b0216c8c864b21b72c98fda5069040\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a060129f36e562bcd541215d99123f79a1b0216c8c864b21b72c98fda5069040\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:47:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-10T18:47:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:47:37Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:50:15Z is after 2025-08-24T17:21:41Z" Mar 10 18:50:15 crc kubenswrapper[4861]: I0310 18:50:15.957425 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 10 18:50:15 crc kubenswrapper[4861]: I0310 18:50:15.957564 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 10 18:50:15 crc kubenswrapper[4861]: E0310 18:50:15.957626 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 10 18:50:15 crc kubenswrapper[4861]: E0310 18:50:15.957821 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 10 18:50:15 crc kubenswrapper[4861]: I0310 18:50:15.957924 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 10 18:50:15 crc kubenswrapper[4861]: E0310 18:50:15.958010 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 10 18:50:15 crc kubenswrapper[4861]: I0310 18:50:15.958215 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2rvxn" Mar 10 18:50:15 crc kubenswrapper[4861]: E0310 18:50:15.958510 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2rvxn" podUID="c06e51d0-e817-41ac-9d69-3ef2099f8ba8" Mar 10 18:50:16 crc kubenswrapper[4861]: I0310 18:50:16.421345 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:50:16 crc kubenswrapper[4861]: I0310 18:50:16.421408 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:50:16 crc kubenswrapper[4861]: I0310 18:50:16.421427 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:50:16 crc kubenswrapper[4861]: I0310 18:50:16.421453 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 10 18:50:16 crc kubenswrapper[4861]: I0310 18:50:16.421472 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:50:16Z","lastTransitionTime":"2026-03-10T18:50:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 10 18:50:16 crc kubenswrapper[4861]: E0310 18:50:16.441494 4861 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-10T18:50:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-10T18:50:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-10T18:50:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-10T18:50:16Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-10T18:50:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-10T18:50:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-10T18:50:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-10T18:50:16Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"19532032-9073-404f-bda8-c4343aa30670\\\",\\\"systemUUID\\\":\\\"b4ef8d49-23f5-4cae-bbac-08586c607b9d\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:50:16Z is after 2025-08-24T17:21:41Z" Mar 10 18:50:16 crc kubenswrapper[4861]: I0310 18:50:16.447089 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:50:16 crc kubenswrapper[4861]: I0310 18:50:16.447237 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:50:16 crc kubenswrapper[4861]: I0310 18:50:16.447263 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:50:16 crc kubenswrapper[4861]: I0310 18:50:16.447335 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 10 18:50:16 crc kubenswrapper[4861]: I0310 18:50:16.447362 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:50:16Z","lastTransitionTime":"2026-03-10T18:50:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 10 18:50:16 crc kubenswrapper[4861]: E0310 18:50:16.468384 4861 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-10T18:50:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-10T18:50:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-10T18:50:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-10T18:50:16Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-10T18:50:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-10T18:50:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-10T18:50:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-10T18:50:16Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"19532032-9073-404f-bda8-c4343aa30670\\\",\\\"systemUUID\\\":\\\"b4ef8d49-23f5-4cae-bbac-08586c607b9d\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:50:16Z is after 2025-08-24T17:21:41Z" Mar 10 18:50:16 crc kubenswrapper[4861]: I0310 18:50:16.474292 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:50:16 crc kubenswrapper[4861]: I0310 18:50:16.474343 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:50:16 crc kubenswrapper[4861]: I0310 18:50:16.474360 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:50:16 crc kubenswrapper[4861]: I0310 18:50:16.474385 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 10 18:50:16 crc kubenswrapper[4861]: I0310 18:50:16.474403 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:50:16Z","lastTransitionTime":"2026-03-10T18:50:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 10 18:50:16 crc kubenswrapper[4861]: E0310 18:50:16.494423 4861 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-10T18:50:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-10T18:50:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-10T18:50:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-10T18:50:16Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-10T18:50:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-10T18:50:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-10T18:50:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-10T18:50:16Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"19532032-9073-404f-bda8-c4343aa30670\\\",\\\"systemUUID\\\":\\\"b4ef8d49-23f5-4cae-bbac-08586c607b9d\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:50:16Z is after 2025-08-24T17:21:41Z" Mar 10 18:50:16 crc kubenswrapper[4861]: I0310 18:50:16.500366 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:50:16 crc kubenswrapper[4861]: I0310 18:50:16.500426 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:50:16 crc kubenswrapper[4861]: I0310 18:50:16.500452 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:50:16 crc kubenswrapper[4861]: I0310 18:50:16.500482 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 10 18:50:16 crc kubenswrapper[4861]: I0310 18:50:16.500503 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:50:16Z","lastTransitionTime":"2026-03-10T18:50:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 10 18:50:16 crc kubenswrapper[4861]: E0310 18:50:16.551462 4861 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-10T18:50:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-10T18:50:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-10T18:50:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-10T18:50:16Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-10T18:50:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-10T18:50:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-10T18:50:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-10T18:50:16Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"19532032-9073-404f-bda8-c4343aa30670\\\",\\\"systemUUID\\\":\\\"b4ef8d49-23f5-4cae-bbac-08586c607b9d\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:50:16Z is after 2025-08-24T17:21:41Z" Mar 10 18:50:16 crc kubenswrapper[4861]: I0310 18:50:16.556779 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:50:16 crc kubenswrapper[4861]: I0310 18:50:16.556825 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:50:16 crc kubenswrapper[4861]: I0310 18:50:16.556841 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:50:16 crc kubenswrapper[4861]: I0310 18:50:16.556861 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 10 18:50:16 crc kubenswrapper[4861]: I0310 18:50:16.556876 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:50:16Z","lastTransitionTime":"2026-03-10T18:50:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 10 18:50:16 crc kubenswrapper[4861]: E0310 18:50:16.587787 4861 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-10T18:50:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-10T18:50:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-10T18:50:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-10T18:50:16Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-10T18:50:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-10T18:50:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-10T18:50:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-10T18:50:16Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"19532032-9073-404f-bda8-c4343aa30670\\\",\\\"systemUUID\\\":\\\"b4ef8d49-23f5-4cae-bbac-08586c607b9d\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:50:16Z is after 2025-08-24T17:21:41Z" Mar 10 18:50:16 crc kubenswrapper[4861]: E0310 18:50:16.588019 4861 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Mar 10 18:50:16 crc kubenswrapper[4861]: I0310 18:50:16.980862 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22ae4cf4d7c980d5db6180fb412216b50b2ddb8f25ea2a5fb1e034d47ddfafac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:49Z\\\"}},\\\"volume
Mounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:50:16Z is after 2025-08-24T17:21:41Z" Mar 10 18:50:17 crc kubenswrapper[4861]: I0310 18:50:17.005512 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:50:17Z is after 2025-08-24T17:21:41Z" Mar 10 18:50:17 crc kubenswrapper[4861]: I0310 18:50:17.042401 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-s2l62" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"be820cd7-b3a7-4183-a408-67151247b6ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:22Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22c9526135e4d6c3ef5cdf25b06f60556a876ace6c81593534be08cfd6a54cdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwtwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7bd3993ae4ddabc6c06a91127afc341760a07401ce3a409612824c0045bb6f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwtwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://edf854dc22368e4b8f76e0111b30784f22832fc69661b4db8d2c9c33aa553773\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwtwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e44e4837f8dba12dfb18ef2200a19f696222668eb9b10819d1c3b442a28f5e32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:47Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwtwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ae3cd6b9ef5ede85a70dd7bcf4eb260cc357bfceeb571904e68788eaba0709c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwtwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be4f9c8096f4981a65522e8ee451980e580153c1f5c65c736655fb94593dbd97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwtwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c80b530ddd55aac69b923f9a8fa0028c2220243566a0d8e111ccc746aab692c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c80b530ddd55aac69b923f9a8fa0028c2220243566a0d8e111ccc746aab692c8\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-10T18:50:13Z\\\",\\\"message\\\":\\\"/informers/factory.go:160\\\\nI0310 18:50:12.991923 7138 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0310 18:50:12.992087 7138 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0310 18:50:12.992244 7138 reflector.go:311] Stopping 
reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0310 18:50:12.992318 7138 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0310 18:50:12.992475 7138 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0310 18:50:12.992661 7138 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0310 18:50:12.992877 7138 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0310 18:50:12.993154 7138 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0310 18:50:12.993240 7138 factory.go:656] Stopping watch factory\\\\nI0310 18:50:12.993262 7138 ovnkube.go:599] Stopped ovnkube\\\\nI0310 18:50:1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-10T18:50:12Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-s2l62_openshift-ovn-kubernetes(be820cd7-b3a7-4183-a408-67151247b6ee)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwtwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2cf2a1cfc3438718bb50a53445443bc0251f2cc83f903fca1cccd1048c8f1c20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwtwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09c620ba70d91a84cf6910e413d790eee8d6427dec9a39be3b706400fcaab656\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09c620ba70d91a84cf
6910e413d790eee8d6427dec9a39be3b706400fcaab656\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:49:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-10T18:49:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwtwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:49:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-s2l62\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:50:17Z is after 2025-08-24T17:21:41Z" Mar 10 18:50:17 crc kubenswrapper[4861]: I0310 18:50:17.059650 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-2rvxn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c06e51d0-e817-41ac-9d69-3ef2099f8ba8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:34Z\\\",\\\"message\\\":\\\"containers 
with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7cjz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7cjz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:49:34Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-2rvxn\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:50:17Z is after 2025-08-24T17:21:41Z" Mar 10 18:50:17 crc kubenswrapper[4861]: I0310 18:50:17.079762 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a964dbf-d067-436c-9a8b-496fc69b0584\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://716832664d1be93a0faa28c9e6260cbda820fe51b3295faf5dc0a852b242d62d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:39Z\\\"}},\\
\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a060129f36e562bcd541215d99123f79a1b0216c8c864b21b72c98fda5069040\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a060129f36e562bcd541215d99123f79a1b0216c8c864b21b72c98fda5069040\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:47:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-10T18:47:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:47:37Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:50:17Z is after 2025-08-24T17:21:41Z" Mar 10 18:50:17 crc kubenswrapper[4861]: I0310 18:50:17.103463 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c65aa66-5db2-421b-ad46-0ff1e2b1cb22\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://52e87225b434b0800764a5c2306d8079c44bff105d02de78ab085b434f56031f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a8a6f58ea1d180f50a7ffde2b16f470901281a847bd85cc0bc8a62bbf9f8e70\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://484df0ad2e71b2faec0ed53537512b115e6d4ab7cd3212cd29309538bd013c51\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2dcc6ee4908d27fc63eb33149c6db9570b8524aab71a27fbb1e5c2fe4e97c52\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1665ca49c2c451e187b70bfc13ce0034d2c07b92943b18e77ae09cd6e5505557\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-10T18:48:50Z\\\"
,\\\"message\\\":\\\"le observer\\\\nW0310 18:48:50.587139 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0310 18:48:50.587315 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0310 18:48:50.588407 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4176947974/tls.crt::/tmp/serving-cert-4176947974/tls.key\\\\\\\"\\\\nI0310 18:48:50.773986 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0310 18:48:50.776438 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0310 18:48:50.776455 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0310 18:48:50.776477 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0310 18:48:50.776482 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0310 18:48:50.783076 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0310 18:48:50.783088 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0310 18:48:50.783118 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0310 18:48:50.783134 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0310 18:48:50.783146 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0310 18:48:50.783157 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0310 18:48:50.783167 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0310 18:48:50.783173 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0310 18:48:50.784467 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-10T18:48:50Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44dde00a3ae562bbb5504d299475795cc38b22c2b6decba2ff15067bce7436df\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:39Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a14137bdfec242e37af20a572af2edea25fb1d8a1f9708f8d0193d1a7675b1bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a14137bdfec242e37af20a572af2edea25f
b1d8a1f9708f8d0193d1a7675b1bc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:47:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-10T18:47:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:47:37Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:50:17Z is after 2025-08-24T17:21:41Z" Mar 10 18:50:17 crc kubenswrapper[4861]: E0310 18:50:17.124060 4861 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Mar 10 18:50:17 crc kubenswrapper[4861]: I0310 18:50:17.125132 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-6lblg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d1c251f4-6539-4aa1-8979-47e74495aca3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3736c6e9da4ea1e91d7046c054bf885a5319e339f205e71fde5cc9cfa5d630ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"
},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2gvk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:49:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-6lblg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:50:17Z is after 2025-08-24T17:21:41Z" Mar 10 
18:50:17 crc kubenswrapper[4861]: I0310 18:50:17.142112 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pzmsp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"66631e59-ce5c-44de-8a4d-37eb82acf997\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1293bf47ec5a041886f7065f624cec3b882bb62394a2c448e138d5f7bc3a1c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube
-api-access-pm4t5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:49:27Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pzmsp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:50:17Z is after 2025-08-24T17:21:41Z" Mar 10 18:50:17 crc kubenswrapper[4861]: I0310 18:50:17.158187 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rmmgv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e4302e6-1187-4371-8523-a96ec8032e74\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9439c03c3e58276508f7abe48482c4c69521049dbad5ea7a2e7e7
379c8c58914\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vh2b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://77513d5a93f447545a8b2aef460c75d3d73de32163650349268f203e1ade4b4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vh2b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"po
dIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:49:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-rmmgv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:50:17Z is after 2025-08-24T17:21:41Z" Mar 10 18:50:17 crc kubenswrapper[4861]: I0310 18:50:17.174924 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3fabebda74f7f7605bc2f261bbf64acac1fe4ec65a267beef027413b454bee3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:45Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dbb86a65e5c4fda1cd29396eb9ad02739f5123519d07aef2eea3354f4a65571d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:50:17Z is after 2025-08-24T17:21:41Z" Mar 10 18:50:17 crc kubenswrapper[4861]: I0310 18:50:17.189885 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a665f7a5f04a80f43377d87b895e0d95adae296d12f0826c3d97323e145d602\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-03-10T18:50:17Z is after 2025-08-24T17:21:41Z" Mar 10 18:50:17 crc kubenswrapper[4861]: I0310 18:50:17.210296 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:50:17Z is after 2025-08-24T17:21:41Z" Mar 10 18:50:17 crc kubenswrapper[4861]: I0310 18:50:17.228208 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qttbr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"771189c2-452d-4204-a0b7-abfe9ba62bd0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10b29575454354d2a034781fdd40e9972ddd08328eea1a3bb89377fcab4181cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tng72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11c0ae40f0d210a82350ba0ada7a3c9f35595826
a4e6f3d5619230238d00111b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tng72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:49:21Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qttbr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:50:17Z is after 2025-08-24T17:21:41Z" Mar 10 18:50:17 crc kubenswrapper[4861]: I0310 18:50:17.252519 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-j2s27" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"391f4bfa-b94c-4b25-8f06-a2f19f912194\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bbf0e311a00fea576e9f2c739cd2d87aedfc06aaef5cf0c68374bd67241888b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlddg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02cd3948c6ba0e290e32d64b2390da20510d66f20771b9bb0b98957e8900b124\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://02cd3948c6ba0e290e32d64b2390da20510d66f20771b9bb0b98957e8900b124\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:49:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-10T18:49:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlddg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c010603b11cab781dd3c14d84121d60b8115bb415df567ef1c267406b9e6988\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c010603b11cab781dd3c14d84121d60b8115bb415df567ef1c267406b9e6988\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:49:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-10T18:49:37Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlddg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1ba322572d96fe13a88815151416495decdd5f486eb30b3fd4345040c5b9d1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c1ba322572d96fe13a88815151416495decdd5f486eb30b3fd4345040c5b9d1d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:49:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-10T18:49:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlddg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f24e7
4b8fb41e5e765f1ba84b4f1efce28077537df837e4cc308e70b16139732\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f24e74b8fb41e5e765f1ba84b4f1efce28077537df837e4cc308e70b16139732\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:49:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-10T18:49:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlddg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f43980d19ca7b9caf61de8f3257dff536ff78a4cf530f6ec3cb7c91f1a3b5af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5f43980d19ca7b9caf61de8f3257dff536ff78a4cf530f6ec3cb7c91f1a3b5af\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:49:41Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-03-10T18:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlddg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2779899b3339467fdfac0db5ea7b8e4f306ca22205f2c44cc85a2df7fc2cc066\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2779899b3339467fdfac0db5ea7b8e4f306ca22205f2c44cc85a2df7fc2cc066\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:49:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-10T18:49:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlddg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:49:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-j2s27\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:50:17Z is after 2025-08-24T17:21:41Z" Mar 10 18:50:17 crc kubenswrapper[4861]: I0310 18:50:17.270502 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"745fc7ce-a025-4236-a613-2d2c1661aa2e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://098109abecb73d2ad721a447bdfff2c9e9c2d24b969013d6108b0ac9d1d61e01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mou
ntPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc83538f8ad05533765a1730072969d529579cd76ff33a77dc49b0bff15caad5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://02e4029a918a6ce3cba59cacb37169816c0ce5b734d46605756ede5985ccd686\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3cb1fb5ba7aa6d8d7184d85c8
70898f3b51dd4910da9346b24b7d63f952467d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3cb1fb5ba7aa6d8d7184d85c870898f3b51dd4910da9346b24b7d63f952467d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:47:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-10T18:47:38Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:47:37Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:50:17Z is after 2025-08-24T17:21:41Z" Mar 10 18:50:17 crc kubenswrapper[4861]: I0310 18:50:17.303375 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"952bf490-6587-4240-a832-feac082e7775\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf6fe1422055e59455ca71c1a22213f55312c80667526cf13dbf61f7ccea7c75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd7523ddf0ac38e3b59b767d7ef95f452f24c5914734d6f1ac0187f1bede5fcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b1d8f9a97293ae86fe0b8c9ab76600caf04291ec3ddd2e47a071805bef675da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://efd5039046658f4b595c747b6434cb0ef3befbf75874a6dab629cdbd6634524e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6234508294e7f87d3a8da0d3a1d8ecb96fffc3dbd5974e40a3bb7ceeee0d2a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34f62205737b2bb279cc7a0e2ffee213392c8135cca742b76e6f7ed44ddca754\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34f62205737b2bb279cc7a0e2ffee213392c8135cca742b76e6f7ed44ddca754\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-03-10T18:47:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-10T18:47:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://49cc08d4ed6e0fd87f4c957414e510f99511b9654a72324b707c165206c4979e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49cc08d4ed6e0fd87f4c957414e510f99511b9654a72324b707c165206c4979e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:47:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-10T18:47:39Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0995183765c106ba8369d3c05ac572596329eb1978876b770e327a430963ba9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0995183765c106ba8369d3c05ac572596329eb1978876b770e327a430963ba9b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:47:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-10T18:47:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:47:37Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:50:17Z is after 2025-08-24T17:21:41Z" Mar 10 18:50:17 crc kubenswrapper[4861]: I0310 18:50:17.319113 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-b87lw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e474cdc9-b374-49a6-aece-afa19f8d5ee6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f06ce6a80200c39385f53697c2f3fbbec862effc04ddf9e0499d9c20761a54e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1
b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5t7pf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:49:21Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-b87lw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:50:17Z is after 2025-08-24T17:21:41Z" Mar 10 18:50:17 crc kubenswrapper[4861]: I0310 18:50:17.335602 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"046b6ea2-2e19-4917-953a-eb8aca6d80cb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df0d545a88d84a8015bafb1061f16f446f63c97065dc88c869a33a24b8e3c22e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da2dbdc794693bb8da08c0fcc84531d65b468d135904b7055fb926fbd0cce95d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-10T18:48:03Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager 
-v=2\\\\nI0310 18:47:39.105265 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0310 18:47:39.107559 1 observer_polling.go:159] Starting file observer\\\\nI0310 18:47:39.140119 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0310 18:47:39.145777 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0310 18:48:03.686072 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0310 18:48:03.686208 1 cmd.go:179] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:48:03Z is after 
2026-02-23T05:33:13Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-10T18:47:38Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:48:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://73d88019bcd40296d2d693dfb1ce3bacd2e94ca10a114a00c75392df04099b33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://31bbed2d81ace88f31b763f3b4bed57db657bf9e78413b57b2f279b408e0b848\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:38Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b7156e2106372814a5e2b2816352ee308bb4fdffef5efe8da5bc39d3bc29398\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:47:37Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:50:17Z is after 2025-08-24T17:21:41Z" Mar 10 18:50:17 crc kubenswrapper[4861]: I0310 18:50:17.354610 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:50:17Z is after 2025-08-24T17:21:41Z" Mar 10 18:50:17 crc kubenswrapper[4861]: I0310 18:50:17.957530 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2rvxn" Mar 10 18:50:17 crc kubenswrapper[4861]: I0310 18:50:17.957541 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 10 18:50:17 crc kubenswrapper[4861]: I0310 18:50:17.957676 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 10 18:50:17 crc kubenswrapper[4861]: E0310 18:50:17.957865 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-2rvxn" podUID="c06e51d0-e817-41ac-9d69-3ef2099f8ba8" Mar 10 18:50:17 crc kubenswrapper[4861]: I0310 18:50:17.957887 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 10 18:50:17 crc kubenswrapper[4861]: E0310 18:50:17.958153 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 10 18:50:17 crc kubenswrapper[4861]: E0310 18:50:17.958196 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 10 18:50:17 crc kubenswrapper[4861]: E0310 18:50:17.958278 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 10 18:50:19 crc kubenswrapper[4861]: I0310 18:50:19.957258 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 10 18:50:19 crc kubenswrapper[4861]: I0310 18:50:19.957276 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 10 18:50:19 crc kubenswrapper[4861]: I0310 18:50:19.957348 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 10 18:50:19 crc kubenswrapper[4861]: I0310 18:50:19.957431 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2rvxn" Mar 10 18:50:19 crc kubenswrapper[4861]: E0310 18:50:19.957430 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 10 18:50:19 crc kubenswrapper[4861]: E0310 18:50:19.957647 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 10 18:50:19 crc kubenswrapper[4861]: E0310 18:50:19.957796 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2rvxn" podUID="c06e51d0-e817-41ac-9d69-3ef2099f8ba8" Mar 10 18:50:19 crc kubenswrapper[4861]: E0310 18:50:19.957919 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 10 18:50:21 crc kubenswrapper[4861]: I0310 18:50:21.958160 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2rvxn" Mar 10 18:50:21 crc kubenswrapper[4861]: I0310 18:50:21.958192 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 10 18:50:21 crc kubenswrapper[4861]: I0310 18:50:21.958178 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 10 18:50:21 crc kubenswrapper[4861]: E0310 18:50:21.958361 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2rvxn" podUID="c06e51d0-e817-41ac-9d69-3ef2099f8ba8" Mar 10 18:50:21 crc kubenswrapper[4861]: I0310 18:50:21.958419 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 10 18:50:21 crc kubenswrapper[4861]: E0310 18:50:21.958572 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 10 18:50:21 crc kubenswrapper[4861]: E0310 18:50:21.958690 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 10 18:50:21 crc kubenswrapper[4861]: E0310 18:50:21.958813 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 10 18:50:22 crc kubenswrapper[4861]: E0310 18:50:22.125363 4861 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Mar 10 18:50:23 crc kubenswrapper[4861]: I0310 18:50:23.957466 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 10 18:50:23 crc kubenswrapper[4861]: I0310 18:50:23.957533 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 10 18:50:23 crc kubenswrapper[4861]: I0310 18:50:23.957597 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 10 18:50:23 crc kubenswrapper[4861]: E0310 18:50:23.957652 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 10 18:50:23 crc kubenswrapper[4861]: I0310 18:50:23.957688 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2rvxn" Mar 10 18:50:23 crc kubenswrapper[4861]: E0310 18:50:23.957889 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 10 18:50:23 crc kubenswrapper[4861]: E0310 18:50:23.958042 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2rvxn" podUID="c06e51d0-e817-41ac-9d69-3ef2099f8ba8" Mar 10 18:50:23 crc kubenswrapper[4861]: E0310 18:50:23.958166 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 10 18:50:25 crc kubenswrapper[4861]: I0310 18:50:25.957284 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 10 18:50:25 crc kubenswrapper[4861]: I0310 18:50:25.957347 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2rvxn" Mar 10 18:50:25 crc kubenswrapper[4861]: E0310 18:50:25.957897 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 10 18:50:25 crc kubenswrapper[4861]: I0310 18:50:25.957360 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 10 18:50:25 crc kubenswrapper[4861]: I0310 18:50:25.957261 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 10 18:50:25 crc kubenswrapper[4861]: E0310 18:50:25.958057 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2rvxn" podUID="c06e51d0-e817-41ac-9d69-3ef2099f8ba8" Mar 10 18:50:25 crc kubenswrapper[4861]: E0310 18:50:25.958178 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 10 18:50:25 crc kubenswrapper[4861]: E0310 18:50:25.958310 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 10 18:50:26 crc kubenswrapper[4861]: I0310 18:50:26.849754 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:50:26 crc kubenswrapper[4861]: I0310 18:50:26.849823 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:50:26 crc kubenswrapper[4861]: I0310 18:50:26.849844 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:50:26 crc kubenswrapper[4861]: I0310 18:50:26.849871 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 10 18:50:26 crc kubenswrapper[4861]: I0310 18:50:26.849891 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:50:26Z","lastTransitionTime":"2026-03-10T18:50:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 10 18:50:26 crc kubenswrapper[4861]: E0310 18:50:26.871220 4861 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-10T18:50:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-10T18:50:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-10T18:50:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-10T18:50:26Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-10T18:50:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-10T18:50:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-10T18:50:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-10T18:50:26Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"19532032-9073-404f-bda8-c4343aa30670\\\",\\\"systemUUID\\\":\\\"b4ef8d49-23f5-4cae-bbac-08586c607b9d\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:50:26Z is after 2025-08-24T17:21:41Z" Mar 10 18:50:26 crc kubenswrapper[4861]: I0310 18:50:26.875891 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:50:26 crc kubenswrapper[4861]: I0310 18:50:26.875948 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:50:26 crc kubenswrapper[4861]: I0310 18:50:26.875965 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:50:26 crc kubenswrapper[4861]: I0310 18:50:26.875989 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 10 18:50:26 crc kubenswrapper[4861]: I0310 18:50:26.876006 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:50:26Z","lastTransitionTime":"2026-03-10T18:50:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 10 18:50:26 crc kubenswrapper[4861]: E0310 18:50:26.895896 4861 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-10T18:50:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-10T18:50:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-10T18:50:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-10T18:50:26Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-10T18:50:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-10T18:50:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-10T18:50:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-10T18:50:26Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"19532032-9073-404f-bda8-c4343aa30670\\\",\\\"systemUUID\\\":\\\"b4ef8d49-23f5-4cae-bbac-08586c607b9d\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:50:26Z is after 2025-08-24T17:21:41Z" Mar 10 18:50:26 crc kubenswrapper[4861]: I0310 18:50:26.901746 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:50:26 crc kubenswrapper[4861]: I0310 18:50:26.901802 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:50:26 crc kubenswrapper[4861]: I0310 18:50:26.901822 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:50:26 crc kubenswrapper[4861]: I0310 18:50:26.901845 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 10 18:50:26 crc kubenswrapper[4861]: I0310 18:50:26.901862 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:50:26Z","lastTransitionTime":"2026-03-10T18:50:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 10 18:50:26 crc kubenswrapper[4861]: E0310 18:50:26.921422 4861 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-10T18:50:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-10T18:50:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-10T18:50:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-10T18:50:26Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-10T18:50:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-10T18:50:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-10T18:50:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-10T18:50:26Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"19532032-9073-404f-bda8-c4343aa30670\\\",\\\"systemUUID\\\":\\\"b4ef8d49-23f5-4cae-bbac-08586c607b9d\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:50:26Z is after 2025-08-24T17:21:41Z" Mar 10 18:50:26 crc kubenswrapper[4861]: I0310 18:50:26.926010 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:50:26 crc kubenswrapper[4861]: I0310 18:50:26.926112 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:50:26 crc kubenswrapper[4861]: I0310 18:50:26.926130 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:50:26 crc kubenswrapper[4861]: I0310 18:50:26.926155 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 10 18:50:26 crc kubenswrapper[4861]: I0310 18:50:26.926173 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:50:26Z","lastTransitionTime":"2026-03-10T18:50:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 10 18:50:26 crc kubenswrapper[4861]: E0310 18:50:26.945552 4861 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-10T18:50:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-10T18:50:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-10T18:50:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-10T18:50:26Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-10T18:50:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-10T18:50:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-10T18:50:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-10T18:50:26Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"19532032-9073-404f-bda8-c4343aa30670\\\",\\\"systemUUID\\\":\\\"b4ef8d49-23f5-4cae-bbac-08586c607b9d\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:50:26Z is after 2025-08-24T17:21:41Z" Mar 10 18:50:26 crc kubenswrapper[4861]: I0310 18:50:26.950401 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:50:26 crc kubenswrapper[4861]: I0310 18:50:26.950448 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:50:26 crc kubenswrapper[4861]: I0310 18:50:26.950465 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:50:26 crc kubenswrapper[4861]: I0310 18:50:26.950488 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 10 18:50:26 crc kubenswrapper[4861]: I0310 18:50:26.950504 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:50:26Z","lastTransitionTime":"2026-03-10T18:50:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 10 18:50:26 crc kubenswrapper[4861]: I0310 18:50:26.962322 4861 scope.go:117] "RemoveContainer" containerID="c80b530ddd55aac69b923f9a8fa0028c2220243566a0d8e111ccc746aab692c8" Mar 10 18:50:26 crc kubenswrapper[4861]: E0310 18:50:26.962850 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-s2l62_openshift-ovn-kubernetes(be820cd7-b3a7-4183-a408-67151247b6ee)\"" pod="openshift-ovn-kubernetes/ovnkube-node-s2l62" podUID="be820cd7-b3a7-4183-a408-67151247b6ee" Mar 10 18:50:26 crc kubenswrapper[4861]: E0310 18:50:26.969533 4861 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-10T18:50:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-10T18:50:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-10T18:50:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-10T18:50:26Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-10T18:50:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-10T18:50:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-10T18:50:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-10T18:50:26Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2
ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9810067
4616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.
io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a07
2c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa73
83b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"19532032-9073-404f-bda8-c4343aa30670\\\",\\\"systemUUID\\\":\\\"b4ef8d49-23f5-4cae-bbac-08586c607b9d\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:50:26Z is after 2025-08-24T17:21:41Z" Mar 10 18:50:26 crc kubenswrapper[4861]: E0310 18:50:26.969778 4861 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Mar 10 18:50:26 crc kubenswrapper[4861]: I0310 18:50:26.979138 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22ae4cf4d7c980d5db6180fb412216b50b2ddb8f25ea2a5fb1e034d47ddfafac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-03-10T18:50:26Z is after 2025-08-24T17:21:41Z" Mar 10 18:50:27 crc kubenswrapper[4861]: I0310 18:50:27.001944 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:50:26Z is after 2025-08-24T17:21:41Z" Mar 10 18:50:27 crc kubenswrapper[4861]: I0310 18:50:27.031855 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-s2l62" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"be820cd7-b3a7-4183-a408-67151247b6ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:22Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22c9526135e4d6c3ef5cdf25b06f60556a876ace6c81593534be08cfd6a54cdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwtwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7bd3993ae4ddabc6c06a91127afc341760a07401ce3a409612824c0045bb6f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwtwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://edf854dc22368e4b8f76e0111b30784f22832fc69661b4db8d2c9c33aa553773\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwtwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e44e4837f8dba12dfb18ef2200a19f696222668eb9b10819d1c3b442a28f5e32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwtwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ae3cd6b9ef5ede85a70dd7bcf4eb260cc357bfceeb571904e68788eaba0709c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwtwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be4f9c8096f4981a65522e8ee451980e580153c1f5c65c736655fb94593dbd97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwtwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c80b530ddd55aac69b923f9a8fa0028c2220243566a0d8e111ccc746aab692c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c80b530ddd55aac69b923f9a8fa0028c2220243566a0d8e111ccc746aab692c8\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-10T18:50:13Z\\\",\\\"message\\\":\\\"/informers/factory.go:160\\\\nI0310 18:50:12.991923 7138 reflector.go:311] Stopping 
reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0310 18:50:12.992087 7138 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0310 18:50:12.992244 7138 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0310 18:50:12.992318 7138 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0310 18:50:12.992475 7138 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0310 18:50:12.992661 7138 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0310 18:50:12.992877 7138 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0310 18:50:12.993154 7138 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0310 18:50:12.993240 7138 factory.go:656] Stopping watch factory\\\\nI0310 18:50:12.993262 7138 ovnkube.go:599] Stopped ovnkube\\\\nI0310 18:50:1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-10T18:50:12Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-s2l62_openshift-ovn-kubernetes(be820cd7-b3a7-4183-a408-67151247b6ee)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwtwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2cf2a1cfc3438718bb50a53445443bc0251f2cc83f903fca1cccd1048c8f1c20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwtwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09c620ba70d91a84cf6910e413d790eee8d6427dec9a39be3b706400fcaab656\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09c620ba70d91a84cf
6910e413d790eee8d6427dec9a39be3b706400fcaab656\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:49:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-10T18:49:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwtwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:49:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-s2l62\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:50:27Z is after 2025-08-24T17:21:41Z" Mar 10 18:50:27 crc kubenswrapper[4861]: I0310 18:50:27.048193 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-2rvxn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c06e51d0-e817-41ac-9d69-3ef2099f8ba8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:34Z\\\",\\\"message\\\":\\\"containers 
with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7cjz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7cjz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:49:34Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-2rvxn\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:50:27Z is after 2025-08-24T17:21:41Z" Mar 10 18:50:27 crc kubenswrapper[4861]: I0310 18:50:27.064701 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a964dbf-d067-436c-9a8b-496fc69b0584\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://716832664d1be93a0faa28c9e6260cbda820fe51b3295faf5dc0a852b242d62d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:39Z\\\"}},\\
\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a060129f36e562bcd541215d99123f79a1b0216c8c864b21b72c98fda5069040\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a060129f36e562bcd541215d99123f79a1b0216c8c864b21b72c98fda5069040\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:47:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-10T18:47:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:47:37Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:50:27Z is after 2025-08-24T17:21:41Z" Mar 10 18:50:27 crc kubenswrapper[4861]: I0310 18:50:27.081550 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c65aa66-5db2-421b-ad46-0ff1e2b1cb22\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://52e87225b434b0800764a5c2306d8079c44bff105d02de78ab085b434f56031f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a8a6f58ea1d180f50a7ffde2b16f470901281a847bd85cc0bc8a62bbf9f8e70\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://484df0ad2e71b2faec0ed53537512b115e6d4ab7cd3212cd29309538bd013c51\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2dcc6ee4908d27fc63eb33149c6db9570b8524aab71a27fbb1e5c2fe4e97c52\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1665ca49c2c451e187b70bfc13ce0034d2c07b92943b18e77ae09cd6e5505557\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-10T18:48:50Z\\\"
,\\\"message\\\":\\\"le observer\\\\nW0310 18:48:50.587139 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0310 18:48:50.587315 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0310 18:48:50.588407 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4176947974/tls.crt::/tmp/serving-cert-4176947974/tls.key\\\\\\\"\\\\nI0310 18:48:50.773986 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0310 18:48:50.776438 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0310 18:48:50.776455 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0310 18:48:50.776477 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0310 18:48:50.776482 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0310 18:48:50.783076 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0310 18:48:50.783088 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0310 18:48:50.783118 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0310 18:48:50.783134 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0310 18:48:50.783146 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0310 18:48:50.783157 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0310 18:48:50.783167 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0310 18:48:50.783173 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0310 18:48:50.784467 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-10T18:48:50Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44dde00a3ae562bbb5504d299475795cc38b22c2b6decba2ff15067bce7436df\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:39Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a14137bdfec242e37af20a572af2edea25fb1d8a1f9708f8d0193d1a7675b1bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a14137bdfec242e37af20a572af2edea25f
b1d8a1f9708f8d0193d1a7675b1bc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:47:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-10T18:47:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:47:37Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:50:27Z is after 2025-08-24T17:21:41Z" Mar 10 18:50:27 crc kubenswrapper[4861]: I0310 18:50:27.101663 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-6lblg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d1c251f4-6539-4aa1-8979-47e74495aca3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3736c6e9da4ea1e91d7046c054bf885a5319e339f205e71fde5cc9cfa5d630ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2gvk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:49:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-6lblg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:50:27Z is after 2025-08-24T17:21:41Z" Mar 10 18:50:27 crc kubenswrapper[4861]: I0310 18:50:27.117065 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pzmsp" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"66631e59-ce5c-44de-8a4d-37eb82acf997\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1293bf47ec5a041886f7065f624cec3b882bb62394a2c448e138d5f7bc3a1c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pm4t5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\
\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:49:27Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pzmsp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:50:27Z is after 2025-08-24T17:21:41Z" Mar 10 18:50:27 crc kubenswrapper[4861]: E0310 18:50:27.126228 4861 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Mar 10 18:50:27 crc kubenswrapper[4861]: I0310 18:50:27.136111 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rmmgv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e4302e6-1187-4371-8523-a96ec8032e74\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9439c03c3e58276508f7abe48482c4c69521049dbad5ea7a2e7e7379c8c58914\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vh2b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://77513d5a93f447545a8b2aef460c75d3d73de
32163650349268f203e1ade4b4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vh2b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:49:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-rmmgv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:50:27Z is after 2025-08-24T17:21:41Z" Mar 10 18:50:27 crc kubenswrapper[4861]: I0310 18:50:27.156843 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3fabebda74f7f7605bc2f261bbf64acac1fe4ec65a267beef027413b454bee3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dbb86a65e5c4fda1cd29396eb9ad02739f5123519d07aef2eea3354f4a65571d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:50:27Z is after 2025-08-24T17:21:41Z" Mar 10 18:50:27 crc kubenswrapper[4861]: I0310 18:50:27.173577 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a665f7a5f04a80f43377d87b895e0d95adae296d12f0826c3d97323e145d602\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-03-10T18:50:27Z is after 2025-08-24T17:21:41Z" Mar 10 18:50:27 crc kubenswrapper[4861]: I0310 18:50:27.190824 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:50:27Z is after 2025-08-24T17:21:41Z" Mar 10 18:50:27 crc kubenswrapper[4861]: I0310 18:50:27.206954 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qttbr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"771189c2-452d-4204-a0b7-abfe9ba62bd0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10b29575454354d2a034781fdd40e9972ddd08328eea1a3bb89377fcab4181cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tng72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11c0ae40f0d210a82350ba0ada7a3c9f35595826
a4e6f3d5619230238d00111b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tng72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:49:21Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qttbr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:50:27Z is after 2025-08-24T17:21:41Z" Mar 10 18:50:27 crc kubenswrapper[4861]: I0310 18:50:27.228352 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-j2s27" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"391f4bfa-b94c-4b25-8f06-a2f19f912194\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bbf0e311a00fea576e9f2c739cd2d87aedfc06aaef5cf0c68374bd67241888b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlddg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02cd3948c6ba0e290e32d64b2390da20510d66f20771b9bb0b98957e8900b124\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://02cd3948c6ba0e290e32d64b2390da20510d66f20771b9bb0b98957e8900b124\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:49:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-10T18:49:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlddg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c010603b11cab781dd3c14d84121d60b8115bb415df567ef1c267406b9e6988\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c010603b11cab781dd3c14d84121d60b8115bb415df567ef1c267406b9e6988\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:49:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-10T18:49:37Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlddg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1ba322572d96fe13a88815151416495decdd5f486eb30b3fd4345040c5b9d1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c1ba322572d96fe13a88815151416495decdd5f486eb30b3fd4345040c5b9d1d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:49:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-10T18:49:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlddg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f24e7
4b8fb41e5e765f1ba84b4f1efce28077537df837e4cc308e70b16139732\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f24e74b8fb41e5e765f1ba84b4f1efce28077537df837e4cc308e70b16139732\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:49:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-10T18:49:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlddg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f43980d19ca7b9caf61de8f3257dff536ff78a4cf530f6ec3cb7c91f1a3b5af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5f43980d19ca7b9caf61de8f3257dff536ff78a4cf530f6ec3cb7c91f1a3b5af\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:49:41Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-03-10T18:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlddg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2779899b3339467fdfac0db5ea7b8e4f306ca22205f2c44cc85a2df7fc2cc066\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2779899b3339467fdfac0db5ea7b8e4f306ca22205f2c44cc85a2df7fc2cc066\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:49:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-10T18:49:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlddg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:49:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-j2s27\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:50:27Z is after 2025-08-24T17:21:41Z" Mar 10 18:50:27 crc kubenswrapper[4861]: I0310 18:50:27.245282 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"745fc7ce-a025-4236-a613-2d2c1661aa2e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://098109abecb73d2ad721a447bdfff2c9e9c2d24b969013d6108b0ac9d1d61e01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mou
ntPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc83538f8ad05533765a1730072969d529579cd76ff33a77dc49b0bff15caad5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://02e4029a918a6ce3cba59cacb37169816c0ce5b734d46605756ede5985ccd686\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3cb1fb5ba7aa6d8d7184d85c8
70898f3b51dd4910da9346b24b7d63f952467d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3cb1fb5ba7aa6d8d7184d85c870898f3b51dd4910da9346b24b7d63f952467d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:47:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-10T18:47:38Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:47:37Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:50:27Z is after 2025-08-24T17:21:41Z" Mar 10 18:50:27 crc kubenswrapper[4861]: I0310 18:50:27.275670 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"952bf490-6587-4240-a832-feac082e7775\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf6fe1422055e59455ca71c1a22213f55312c80667526cf13dbf61f7ccea7c75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd7523ddf0ac38e3b59b767d7ef95f452f24c5914734d6f1ac0187f1bede5fcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b1d8f9a97293ae86fe0b8c9ab76600caf04291ec3ddd2e47a071805bef675da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://efd5039046658f4b595c747b6434cb0ef3befbf75874a6dab629cdbd6634524e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6234508294e7f87d3a8da0d3a1d8ecb96fffc3dbd5974e40a3bb7ceeee0d2a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34f62205737b2bb279cc7a0e2ffee213392c8135cca742b76e6f7ed44ddca754\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34f62205737b2bb279cc7a0e2ffee213392c8135cca742b76e6f7ed44ddca754\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-03-10T18:47:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-10T18:47:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://49cc08d4ed6e0fd87f4c957414e510f99511b9654a72324b707c165206c4979e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49cc08d4ed6e0fd87f4c957414e510f99511b9654a72324b707c165206c4979e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:47:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-10T18:47:39Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0995183765c106ba8369d3c05ac572596329eb1978876b770e327a430963ba9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0995183765c106ba8369d3c05ac572596329eb1978876b770e327a430963ba9b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:47:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-10T18:47:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:47:37Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:50:27Z is after 2025-08-24T17:21:41Z" Mar 10 18:50:27 crc kubenswrapper[4861]: I0310 18:50:27.290660 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-b87lw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e474cdc9-b374-49a6-aece-afa19f8d5ee6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f06ce6a80200c39385f53697c2f3fbbec862effc04ddf9e0499d9c20761a54e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1
b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5t7pf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:49:21Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-b87lw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:50:27Z is after 2025-08-24T17:21:41Z" Mar 10 18:50:27 crc kubenswrapper[4861]: I0310 18:50:27.350866 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"046b6ea2-2e19-4917-953a-eb8aca6d80cb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df0d545a88d84a8015bafb1061f16f446f63c97065dc88c869a33a24b8e3c22e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da2dbdc794693bb8da08c0fcc84531d65b468d135904b7055fb926fbd0cce95d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-10T18:48:03Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager 
-v=2\\\\nI0310 18:47:39.105265 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0310 18:47:39.107559 1 observer_polling.go:159] Starting file observer\\\\nI0310 18:47:39.140119 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0310 18:47:39.145777 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0310 18:48:03.686072 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0310 18:48:03.686208 1 cmd.go:179] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:48:03Z is after 
2026-02-23T05:33:13Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-10T18:47:38Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:48:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://73d88019bcd40296d2d693dfb1ce3bacd2e94ca10a114a00c75392df04099b33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://31bbed2d81ace88f31b763f3b4bed57db657bf9e78413b57b2f279b408e0b848\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:38Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b7156e2106372814a5e2b2816352ee308bb4fdffef5efe8da5bc39d3bc29398\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:47:37Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:50:27Z is after 2025-08-24T17:21:41Z" Mar 10 18:50:27 crc kubenswrapper[4861]: I0310 18:50:27.369281 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:50:27Z is after 2025-08-24T17:21:41Z" Mar 10 18:50:27 crc kubenswrapper[4861]: I0310 18:50:27.957776 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 10 18:50:27 crc kubenswrapper[4861]: I0310 18:50:27.957836 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2rvxn" Mar 10 18:50:27 crc kubenswrapper[4861]: I0310 18:50:27.957896 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 10 18:50:27 crc kubenswrapper[4861]: I0310 18:50:27.957781 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 10 18:50:27 crc kubenswrapper[4861]: E0310 18:50:27.957969 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 10 18:50:27 crc kubenswrapper[4861]: E0310 18:50:27.958177 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 10 18:50:27 crc kubenswrapper[4861]: E0310 18:50:27.958316 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 10 18:50:27 crc kubenswrapper[4861]: E0310 18:50:27.958529 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-2rvxn" podUID="c06e51d0-e817-41ac-9d69-3ef2099f8ba8" Mar 10 18:50:29 crc kubenswrapper[4861]: I0310 18:50:29.957910 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 10 18:50:29 crc kubenswrapper[4861]: I0310 18:50:29.957984 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2rvxn" Mar 10 18:50:29 crc kubenswrapper[4861]: E0310 18:50:29.958108 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 10 18:50:29 crc kubenswrapper[4861]: I0310 18:50:29.958151 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 10 18:50:29 crc kubenswrapper[4861]: I0310 18:50:29.958225 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 10 18:50:29 crc kubenswrapper[4861]: E0310 18:50:29.958424 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-2rvxn" podUID="c06e51d0-e817-41ac-9d69-3ef2099f8ba8" Mar 10 18:50:29 crc kubenswrapper[4861]: E0310 18:50:29.958601 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 10 18:50:29 crc kubenswrapper[4861]: E0310 18:50:29.958757 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 10 18:50:31 crc kubenswrapper[4861]: I0310 18:50:31.958082 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 10 18:50:31 crc kubenswrapper[4861]: I0310 18:50:31.958127 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2rvxn" Mar 10 18:50:31 crc kubenswrapper[4861]: I0310 18:50:31.958205 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 10 18:50:31 crc kubenswrapper[4861]: I0310 18:50:31.958102 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 10 18:50:31 crc kubenswrapper[4861]: E0310 18:50:31.958287 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 10 18:50:31 crc kubenswrapper[4861]: E0310 18:50:31.958423 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 10 18:50:31 crc kubenswrapper[4861]: E0310 18:50:31.958569 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 10 18:50:31 crc kubenswrapper[4861]: E0310 18:50:31.958756 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-2rvxn" podUID="c06e51d0-e817-41ac-9d69-3ef2099f8ba8" Mar 10 18:50:32 crc kubenswrapper[4861]: E0310 18:50:32.128228 4861 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Mar 10 18:50:32 crc kubenswrapper[4861]: I0310 18:50:32.800942 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-6lblg_d1c251f4-6539-4aa1-8979-47e74495aca3/kube-multus/0.log" Mar 10 18:50:32 crc kubenswrapper[4861]: I0310 18:50:32.801022 4861 generic.go:334] "Generic (PLEG): container finished" podID="d1c251f4-6539-4aa1-8979-47e74495aca3" containerID="3736c6e9da4ea1e91d7046c054bf885a5319e339f205e71fde5cc9cfa5d630ee" exitCode=1 Mar 10 18:50:32 crc kubenswrapper[4861]: I0310 18:50:32.801084 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-6lblg" event={"ID":"d1c251f4-6539-4aa1-8979-47e74495aca3","Type":"ContainerDied","Data":"3736c6e9da4ea1e91d7046c054bf885a5319e339f205e71fde5cc9cfa5d630ee"} Mar 10 18:50:32 crc kubenswrapper[4861]: I0310 18:50:32.802045 4861 scope.go:117] "RemoveContainer" containerID="3736c6e9da4ea1e91d7046c054bf885a5319e339f205e71fde5cc9cfa5d630ee" Mar 10 18:50:32 crc kubenswrapper[4861]: I0310 18:50:32.824223 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"745fc7ce-a025-4236-a613-2d2c1661aa2e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://098109abecb73d2ad721a447bdfff2c9e9c2d24b969013d6108b0ac9d1d61e01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc83538f8ad05533765a1730072969d529579cd76ff33a77dc49b0bff15caad5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://02e4029a918a6ce3cba59cacb37169816c0ce5b734d46605756ede5985ccd686\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3cb1fb5ba7aa6d8d7184d85c870898f3b51dd4910da9346b24b7d63f952467d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://3cb1fb5ba7aa6d8d7184d85c870898f3b51dd4910da9346b24b7d63f952467d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:47:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-10T18:47:38Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:47:37Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:50:32Z is after 2025-08-24T17:21:41Z" Mar 10 18:50:32 crc kubenswrapper[4861]: I0310 18:50:32.873625 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"952bf490-6587-4240-a832-feac082e7775\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf6fe1422055e59455ca71c1a22213f55312c80667526cf13dbf61f7ccea7c75\\\",\\\"image\\\":\\\"quay.io/op
enshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd7523ddf0ac38e3b59b767d7ef95f452f24c5914734d6f1ac0187f1bede5fcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b1d8f9a97293ae86fe0b8c9ab76600caf04291ec3ddd2e47a071805bef675da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://efd5039046658f4b595c747b6434cb0ef3befbf75874a6dab629cdbd6634524e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6234508294e7f87d3a8da0d3a1d8ecb96fffc3dbd5974e40a3bb7ceeee0d2a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\
\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34f62205737b2bb279cc7a0e2ffee213392c8135cca742b76e6f7ed44ddca754\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34f62205737b2bb279cc7a0e2ffee213392c8135cca742b76e6f7ed44ddca754\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:47:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-10T18:47:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://49cc08d4ed6e0fd87f4c957414e510f99511b9654a72324b707c165206c4979e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49cc08d4ed6e0fd87f4c957414e510f99511b9654a72324b707c165206c4979e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:47:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-10T18:47:39Z\\\"}}},{\\\"co
ntainerID\\\":\\\"cri-o://0995183765c106ba8369d3c05ac572596329eb1978876b770e327a430963ba9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0995183765c106ba8369d3c05ac572596329eb1978876b770e327a430963ba9b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:47:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-10T18:47:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:47:37Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:50:32Z is after 2025-08-24T17:21:41Z" Mar 10 18:50:32 crc kubenswrapper[4861]: I0310 18:50:32.895203 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:50:32Z is after 2025-08-24T17:21:41Z" Mar 10 18:50:32 crc kubenswrapper[4861]: I0310 18:50:32.911843 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qttbr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"771189c2-452d-4204-a0b7-abfe9ba62bd0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10b29575454354d2a034781fdd40e9972ddd08328eea1a3bb89377fcab4181cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tng72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11c0ae40f0d210a82350ba0ada7a3c9f35595826
a4e6f3d5619230238d00111b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tng72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:49:21Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qttbr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:50:32Z is after 2025-08-24T17:21:41Z" Mar 10 18:50:32 crc kubenswrapper[4861]: I0310 18:50:32.936814 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-j2s27" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"391f4bfa-b94c-4b25-8f06-a2f19f912194\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bbf0e311a00fea576e9f2c739cd2d87aedfc06aaef5cf0c68374bd67241888b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlddg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02cd3948c6ba0e290e32d64b2390da20510d66f20771b9bb0b98957e8900b124\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://02cd3948c6ba0e290e32d64b2390da20510d66f20771b9bb0b98957e8900b124\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:49:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-10T18:49:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlddg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c010603b11cab781dd3c14d84121d60b8115bb415df567ef1c267406b9e6988\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c010603b11cab781dd3c14d84121d60b8115bb415df567ef1c267406b9e6988\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:49:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-10T18:49:37Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlddg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1ba322572d96fe13a88815151416495decdd5f486eb30b3fd4345040c5b9d1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c1ba322572d96fe13a88815151416495decdd5f486eb30b3fd4345040c5b9d1d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:49:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-10T18:49:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlddg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f24e7
4b8fb41e5e765f1ba84b4f1efce28077537df837e4cc308e70b16139732\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f24e74b8fb41e5e765f1ba84b4f1efce28077537df837e4cc308e70b16139732\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:49:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-10T18:49:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlddg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f43980d19ca7b9caf61de8f3257dff536ff78a4cf530f6ec3cb7c91f1a3b5af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5f43980d19ca7b9caf61de8f3257dff536ff78a4cf530f6ec3cb7c91f1a3b5af\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:49:41Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-03-10T18:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlddg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2779899b3339467fdfac0db5ea7b8e4f306ca22205f2c44cc85a2df7fc2cc066\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2779899b3339467fdfac0db5ea7b8e4f306ca22205f2c44cc85a2df7fc2cc066\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:49:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-10T18:49:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlddg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:49:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-j2s27\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:50:32Z is after 2025-08-24T17:21:41Z" Mar 10 18:50:32 crc kubenswrapper[4861]: I0310 18:50:32.958040 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"046b6ea2-2e19-4917-953a-eb8aca6d80cb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df0d545a88d84a8015bafb1061f16f446f63c97065dc88c869a33a24b8e3c22e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da2dbdc794693bb8da08c0fcc84531d65b468d135904b7055fb926fbd0cce95d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-10T18:48:03Z\\\",\\\"message\\\
":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0310 18:47:39.105265 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0310 18:47:39.107559 1 observer_polling.go:159] Starting file observer\\\\nI0310 18:47:39.140119 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0310 18:47:39.145777 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0310 18:48:03.686072 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0310 18:48:03.686208 1 cmd.go:179] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:48:03Z is after 
2026-02-23T05:33:13Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-10T18:47:38Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:48:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://73d88019bcd40296d2d693dfb1ce3bacd2e94ca10a114a00c75392df04099b33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://31bbed2d81ace88f31b763f3b4bed57db657bf9e78413b57b2f279b408e0b848\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:38Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b7156e2106372814a5e2b2816352ee308bb4fdffef5efe8da5bc39d3bc29398\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:47:37Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:50:32Z is after 2025-08-24T17:21:41Z" Mar 10 18:50:32 crc kubenswrapper[4861]: I0310 18:50:32.977519 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:50:32Z is after 2025-08-24T17:21:41Z" Mar 10 18:50:32 crc kubenswrapper[4861]: I0310 18:50:32.990849 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-b87lw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e474cdc9-b374-49a6-aece-afa19f8d5ee6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f06ce6a80200c39385f53697c2f3fbbec862effc04ddf9e0499d9c20761a54e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5t7pf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:49:21Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-b87lw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:50:32Z is after 2025-08-24T17:21:41Z" Mar 10 18:50:33 crc kubenswrapper[4861]: I0310 18:50:33.003516 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a964dbf-d067-436c-9a8b-496fc69b0584\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://716832664d1be93a0faa28c9e6260cbda820fe51b3295faf5dc0a852b242d62d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac
-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a060129f36e562bcd541215d99123f79a1b0216c8c864b21b72c98fda5069040\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a060129f36e562bcd541215d99123f79a1b0216c8c864b21b72c98fda5069040\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:47:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-10T18:47:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:47:37Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:50:33Z is after 2025-08-24T17:21:41Z" Mar 10 18:50:33 crc kubenswrapper[4861]: I0310 18:50:33.026383 4861 status_manager.go:875] 
"Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c65aa66-5db2-421b-ad46-0ff1e2b1cb22\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://52e87225b434b0800764a5c2306d8079c44bff105d02de78ab085b434f56031f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a8a6f58ea1d180f50a7ffde2b16f470901281a847bd85cc0bc8a62bbf9f8e70\\\",\\\"image\\\":\\
\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://484df0ad2e71b2faec0ed53537512b115e6d4ab7cd3212cd29309538bd013c51\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2dcc6ee4908d27fc63eb33149c6db9570b8524aab71a27fbb1e5c2fe4e97c52\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1665ca49c2c451e187b70bfc1
3ce0034d2c07b92943b18e77ae09cd6e5505557\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-10T18:48:50Z\\\",\\\"message\\\":\\\"le observer\\\\nW0310 18:48:50.587139 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0310 18:48:50.587315 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0310 18:48:50.588407 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4176947974/tls.crt::/tmp/serving-cert-4176947974/tls.key\\\\\\\"\\\\nI0310 18:48:50.773986 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0310 18:48:50.776438 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0310 18:48:50.776455 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0310 18:48:50.776477 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0310 18:48:50.776482 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0310 18:48:50.783076 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0310 18:48:50.783088 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0310 18:48:50.783118 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0310 18:48:50.783134 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0310 18:48:50.783146 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0310 18:48:50.783157 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0310 18:48:50.783167 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0310 18:48:50.783173 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0310 18:48:50.784467 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-10T18:48:50Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44dde00a3ae562bbb5504d299475795cc38b22c2b6decba2ff15067bce7436df\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:39Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a14137bdfec242e37af20a572af2edea25fb1d8a1f9708f8d0193d1a7675b1bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"starte
d\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a14137bdfec242e37af20a572af2edea25fb1d8a1f9708f8d0193d1a7675b1bc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:47:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-10T18:47:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:47:37Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:50:33Z is after 2025-08-24T17:21:41Z" Mar 10 18:50:33 crc kubenswrapper[4861]: I0310 18:50:33.047261 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22ae4cf4d7c980d5db6180fb412216b50b2ddb8f25ea2a5fb1e034d47ddfafac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-03-10T18:50:33Z is after 2025-08-24T17:21:41Z" Mar 10 18:50:33 crc kubenswrapper[4861]: I0310 18:50:33.067404 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:50:33Z is after 2025-08-24T17:21:41Z" Mar 10 18:50:33 crc kubenswrapper[4861]: I0310 18:50:33.101629 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-s2l62" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"be820cd7-b3a7-4183-a408-67151247b6ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:22Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22c9526135e4d6c3ef5cdf25b06f60556a876ace6c81593534be08cfd6a54cdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwtwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7bd3993ae4ddabc6c06a91127afc341760a07401ce3a409612824c0045bb6f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwtwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://edf854dc22368e4b8f76e0111b30784f22832fc69661b4db8d2c9c33aa553773\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwtwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e44e4837f8dba12dfb18ef2200a19f696222668eb9b10819d1c3b442a28f5e32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwtwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ae3cd6b9ef5ede85a70dd7bcf4eb260cc357bfceeb571904e68788eaba0709c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwtwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be4f9c8096f4981a65522e8ee451980e580153c1f5c65c736655fb94593dbd97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwtwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c80b530ddd55aac69b923f9a8fa0028c2220243566a0d8e111ccc746aab692c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c80b530ddd55aac69b923f9a8fa0028c2220243566a0d8e111ccc746aab692c8\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-10T18:50:13Z\\\",\\\"message\\\":\\\"/informers/factory.go:160\\\\nI0310 18:50:12.991923 7138 reflector.go:311] Stopping 
reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0310 18:50:12.992087 7138 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0310 18:50:12.992244 7138 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0310 18:50:12.992318 7138 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0310 18:50:12.992475 7138 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0310 18:50:12.992661 7138 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0310 18:50:12.992877 7138 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0310 18:50:12.993154 7138 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0310 18:50:12.993240 7138 factory.go:656] Stopping watch factory\\\\nI0310 18:50:12.993262 7138 ovnkube.go:599] Stopped ovnkube\\\\nI0310 18:50:1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-10T18:50:12Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-s2l62_openshift-ovn-kubernetes(be820cd7-b3a7-4183-a408-67151247b6ee)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwtwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2cf2a1cfc3438718bb50a53445443bc0251f2cc83f903fca1cccd1048c8f1c20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwtwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09c620ba70d91a84cf6910e413d790eee8d6427dec9a39be3b706400fcaab656\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09c620ba70d91a84cf
6910e413d790eee8d6427dec9a39be3b706400fcaab656\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:49:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-10T18:49:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwtwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:49:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-s2l62\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:50:33Z is after 2025-08-24T17:21:41Z" Mar 10 18:50:33 crc kubenswrapper[4861]: I0310 18:50:33.118625 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-2rvxn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c06e51d0-e817-41ac-9d69-3ef2099f8ba8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:34Z\\\",\\\"message\\\":\\\"containers 
with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7cjz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7cjz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:49:34Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-2rvxn\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:50:33Z is after 2025-08-24T17:21:41Z" Mar 10 18:50:33 crc kubenswrapper[4861]: I0310 18:50:33.141358 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3fabebda74f7f7605bc2f261bbf64acac1fe4ec65a267beef027413b454bee3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/v
ar/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dbb86a65e5c4fda1cd29396eb9ad02739f5123519d07aef2eea3354f4a65571d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:50:33Z is after 2025-08-24T17:21:41Z" Mar 10 18:50:33 crc kubenswrapper[4861]: I0310 18:50:33.158749 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a665f7a5f04a80f43377d87b895e0d95adae296d12f0826c3d97323e145d602\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-03-10T18:50:33Z is after 2025-08-24T17:21:41Z" Mar 10 18:50:33 crc kubenswrapper[4861]: I0310 18:50:33.179472 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-6lblg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d1c251f4-6539-4aa1-8979-47e74495aca3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:50:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:50:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3736c6e9da4ea1e91d7046c054bf885a5319e339f205e71fde5cc9cfa5d630ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3736c6e9da4ea1e91d7046c054bf885a5319e339f205e71fde5cc9cfa5d630ee\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-10T18:50:32Z\\\",\\\"message\\\":\\\"2026-03-10T18:49:47+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_5fec7ce1-964a-40de-8918-768dde436f94\\\\n2026-03-10T18:49:47+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_5fec7ce1-964a-40de-8918-768dde436f94 to /host/opt/cni/bin/\\\\n2026-03-10T18:49:47Z [verbose] multus-daemon started\\\\n2026-03-10T18:49:47Z [verbose] Readiness Indicator file check\\\\n2026-03-10T18:50:32Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-10T18:49:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2gvk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\
\":\\\"2026-03-10T18:49:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-6lblg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:50:33Z is after 2025-08-24T17:21:41Z" Mar 10 18:50:33 crc kubenswrapper[4861]: I0310 18:50:33.196966 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pzmsp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"66631e59-ce5c-44de-8a4d-37eb82acf997\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1293bf47ec5a041886f7065f624cec3b882bb62394a2c448e138d5f7bc3a1c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\
\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pm4t5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:49:27Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pzmsp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:50:33Z is after 2025-08-24T17:21:41Z" Mar 10 18:50:33 crc kubenswrapper[4861]: I0310 18:50:33.216847 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rmmgv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e4302e6-1187-4371-8523-a96ec8032e74\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9439c03c3e58276508f7abe48482c4c69521049dbad5ea7a2e7e7379c8c58914\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vh2b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://77513d5a93f447545a8b2aef460c75d3d73de
32163650349268f203e1ade4b4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vh2b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:49:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-rmmgv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:50:33Z is after 2025-08-24T17:21:41Z" Mar 10 18:50:33 crc kubenswrapper[4861]: I0310 18:50:33.808241 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-6lblg_d1c251f4-6539-4aa1-8979-47e74495aca3/kube-multus/0.log" Mar 10 18:50:33 crc kubenswrapper[4861]: I0310 18:50:33.808343 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-6lblg" 
event={"ID":"d1c251f4-6539-4aa1-8979-47e74495aca3","Type":"ContainerStarted","Data":"691c0ec6cd22d6bd4488798ae0a67476744abea7bf6247f7176cad5d37eef07c"} Mar 10 18:50:33 crc kubenswrapper[4861]: I0310 18:50:33.827286 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"745fc7ce-a025-4236-a613-2d2c1661aa2e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://098109abecb73d2ad721a447bdfff2c9e9c2d24b969013d6108b0ac9d1d61e01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/st
atic-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc83538f8ad05533765a1730072969d529579cd76ff33a77dc49b0bff15caad5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://02e4029a918a6ce3cba59cacb37169816c0ce5b734d46605756ede5985ccd686\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3cb1fb5ba7aa6d8d7184d85c870898f3b51dd4910da9346b24b7d63f952467d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1
220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3cb1fb5ba7aa6d8d7184d85c870898f3b51dd4910da9346b24b7d63f952467d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:47:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-10T18:47:38Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:47:37Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:50:33Z is after 2025-08-24T17:21:41Z" Mar 10 18:50:33 crc kubenswrapper[4861]: I0310 18:50:33.860617 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"952bf490-6587-4240-a832-feac082e7775\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf6fe1422055e59455ca71c1a22213f55312c80667526cf13dbf61f7ccea7c75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd7523ddf0ac38e3b59b767d7ef95f452f24c5914734d6f1ac0187f1bede5fcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b1d8f9a97293ae86fe0b8c9ab76600caf04291ec3ddd2e47a071805bef675da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://efd5039046658f4b595c747b6434cb0ef3befbf75874a6dab629cdbd6634524e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6234508294e7f87d3a8da0d3a1d8ecb96fffc3dbd5974e40a3bb7ceeee0d2a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34f62205737b2bb279cc7a0e2ffee213392c8135cca742b76e6f7ed44ddca754\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34f62205737b2bb279cc7a0e2ffee213392c8135cca742b76e6f7ed44ddca754\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-03-10T18:47:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-10T18:47:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://49cc08d4ed6e0fd87f4c957414e510f99511b9654a72324b707c165206c4979e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49cc08d4ed6e0fd87f4c957414e510f99511b9654a72324b707c165206c4979e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:47:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-10T18:47:39Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0995183765c106ba8369d3c05ac572596329eb1978876b770e327a430963ba9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0995183765c106ba8369d3c05ac572596329eb1978876b770e327a430963ba9b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:47:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-10T18:47:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:47:37Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:50:33Z is after 2025-08-24T17:21:41Z" Mar 10 18:50:33 crc kubenswrapper[4861]: I0310 18:50:33.879343 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:50:33Z is after 2025-08-24T17:21:41Z" Mar 10 18:50:33 crc kubenswrapper[4861]: I0310 18:50:33.896386 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qttbr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"771189c2-452d-4204-a0b7-abfe9ba62bd0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10b29575454354d2a034781fdd40e9972ddd08328eea1a3bb89377fcab4181cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tng72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11c0ae40f0d210a82350ba0ada7a3c9f35595826
a4e6f3d5619230238d00111b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tng72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:49:21Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qttbr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:50:33Z is after 2025-08-24T17:21:41Z" Mar 10 18:50:33 crc kubenswrapper[4861]: I0310 18:50:33.918372 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-j2s27" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"391f4bfa-b94c-4b25-8f06-a2f19f912194\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bbf0e311a00fea576e9f2c739cd2d87aedfc06aaef5cf0c68374bd67241888b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlddg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02cd3948c6ba0e290e32d64b2390da20510d66f20771b9bb0b98957e8900b124\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://02cd3948c6ba0e290e32d64b2390da20510d66f20771b9bb0b98957e8900b124\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:49:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-10T18:49:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlddg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c010603b11cab781dd3c14d84121d60b8115bb415df567ef1c267406b9e6988\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c010603b11cab781dd3c14d84121d60b8115bb415df567ef1c267406b9e6988\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:49:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-10T18:49:37Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlddg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1ba322572d96fe13a88815151416495decdd5f486eb30b3fd4345040c5b9d1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c1ba322572d96fe13a88815151416495decdd5f486eb30b3fd4345040c5b9d1d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:49:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-10T18:49:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlddg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f24e7
4b8fb41e5e765f1ba84b4f1efce28077537df837e4cc308e70b16139732\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f24e74b8fb41e5e765f1ba84b4f1efce28077537df837e4cc308e70b16139732\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:49:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-10T18:49:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlddg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f43980d19ca7b9caf61de8f3257dff536ff78a4cf530f6ec3cb7c91f1a3b5af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5f43980d19ca7b9caf61de8f3257dff536ff78a4cf530f6ec3cb7c91f1a3b5af\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:49:41Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-03-10T18:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlddg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2779899b3339467fdfac0db5ea7b8e4f306ca22205f2c44cc85a2df7fc2cc066\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2779899b3339467fdfac0db5ea7b8e4f306ca22205f2c44cc85a2df7fc2cc066\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:49:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-10T18:49:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlddg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:49:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-j2s27\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:50:33Z is after 2025-08-24T17:21:41Z" Mar 10 18:50:33 crc kubenswrapper[4861]: I0310 18:50:33.943169 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"046b6ea2-2e19-4917-953a-eb8aca6d80cb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df0d545a88d84a8015bafb1061f16f446f63c97065dc88c869a33a24b8e3c22e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da2dbdc794693bb8da08c0fcc84531d65b468d135904b7055fb926fbd0cce95d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-10T18:48:03Z\\\",\\\"message\\\
":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0310 18:47:39.105265 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0310 18:47:39.107559 1 observer_polling.go:159] Starting file observer\\\\nI0310 18:47:39.140119 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0310 18:47:39.145777 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0310 18:48:03.686072 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0310 18:48:03.686208 1 cmd.go:179] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:48:03Z is after 
2026-02-23T05:33:13Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-10T18:47:38Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:48:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://73d88019bcd40296d2d693dfb1ce3bacd2e94ca10a114a00c75392df04099b33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://31bbed2d81ace88f31b763f3b4bed57db657bf9e78413b57b2f279b408e0b848\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:38Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b7156e2106372814a5e2b2816352ee308bb4fdffef5efe8da5bc39d3bc29398\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:47:37Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:50:33Z is after 2025-08-24T17:21:41Z" Mar 10 18:50:33 crc kubenswrapper[4861]: I0310 18:50:33.957676 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 10 18:50:33 crc kubenswrapper[4861]: I0310 18:50:33.957743 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 10 18:50:33 crc kubenswrapper[4861]: I0310 18:50:33.957692 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 10 18:50:33 crc kubenswrapper[4861]: E0310 18:50:33.957857 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 10 18:50:33 crc kubenswrapper[4861]: E0310 18:50:33.958028 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 10 18:50:33 crc kubenswrapper[4861]: E0310 18:50:33.958149 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 10 18:50:33 crc kubenswrapper[4861]: I0310 18:50:33.958465 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2rvxn" Mar 10 18:50:33 crc kubenswrapper[4861]: E0310 18:50:33.958599 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2rvxn" podUID="c06e51d0-e817-41ac-9d69-3ef2099f8ba8" Mar 10 18:50:33 crc kubenswrapper[4861]: I0310 18:50:33.962543 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:50:33Z is after 2025-08-24T17:21:41Z" Mar 10 18:50:33 crc kubenswrapper[4861]: I0310 18:50:33.981475 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-b87lw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e474cdc9-b374-49a6-aece-afa19f8d5ee6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f06ce6a80200c39385f53697c2f3fbbec862effc04ddf9e0499d9c20761a54e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5t7pf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:49:21Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-b87lw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:50:33Z is after 2025-08-24T17:21:41Z" Mar 10 18:50:33 crc kubenswrapper[4861]: I0310 18:50:33.997702 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a964dbf-d067-436c-9a8b-496fc69b0584\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://716832664d1be93a0faa28c9e6260cbda820fe51b3295faf5dc0a852b242d62d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac
-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a060129f36e562bcd541215d99123f79a1b0216c8c864b21b72c98fda5069040\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a060129f36e562bcd541215d99123f79a1b0216c8c864b21b72c98fda5069040\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:47:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-10T18:47:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:47:37Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:50:33Z is after 2025-08-24T17:21:41Z" Mar 10 18:50:34 crc kubenswrapper[4861]: I0310 18:50:34.018916 4861 status_manager.go:875] 
"Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c65aa66-5db2-421b-ad46-0ff1e2b1cb22\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://52e87225b434b0800764a5c2306d8079c44bff105d02de78ab085b434f56031f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a8a6f58ea1d180f50a7ffde2b16f470901281a847bd85cc0bc8a62bbf9f8e70\\\",\\\"image\\\":\\
\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://484df0ad2e71b2faec0ed53537512b115e6d4ab7cd3212cd29309538bd013c51\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2dcc6ee4908d27fc63eb33149c6db9570b8524aab71a27fbb1e5c2fe4e97c52\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1665ca49c2c451e187b70bfc1
3ce0034d2c07b92943b18e77ae09cd6e5505557\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-10T18:48:50Z\\\",\\\"message\\\":\\\"le observer\\\\nW0310 18:48:50.587139 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0310 18:48:50.587315 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0310 18:48:50.588407 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4176947974/tls.crt::/tmp/serving-cert-4176947974/tls.key\\\\\\\"\\\\nI0310 18:48:50.773986 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0310 18:48:50.776438 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0310 18:48:50.776455 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0310 18:48:50.776477 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0310 18:48:50.776482 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0310 18:48:50.783076 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0310 18:48:50.783088 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0310 18:48:50.783118 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0310 18:48:50.783134 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0310 18:48:50.783146 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0310 18:48:50.783157 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0310 18:48:50.783167 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0310 18:48:50.783173 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0310 18:48:50.784467 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-10T18:48:50Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44dde00a3ae562bbb5504d299475795cc38b22c2b6decba2ff15067bce7436df\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:39Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a14137bdfec242e37af20a572af2edea25fb1d8a1f9708f8d0193d1a7675b1bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"starte
d\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a14137bdfec242e37af20a572af2edea25fb1d8a1f9708f8d0193d1a7675b1bc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:47:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-10T18:47:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:47:37Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:50:34Z is after 2025-08-24T17:21:41Z" Mar 10 18:50:34 crc kubenswrapper[4861]: I0310 18:50:34.039910 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22ae4cf4d7c980d5db6180fb412216b50b2ddb8f25ea2a5fb1e034d47ddfafac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-03-10T18:50:34Z is after 2025-08-24T17:21:41Z" Mar 10 18:50:34 crc kubenswrapper[4861]: I0310 18:50:34.062157 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:50:34Z is after 2025-08-24T17:21:41Z" Mar 10 18:50:34 crc kubenswrapper[4861]: I0310 18:50:34.093790 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-s2l62" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"be820cd7-b3a7-4183-a408-67151247b6ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:22Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22c9526135e4d6c3ef5cdf25b06f60556a876ace6c81593534be08cfd6a54cdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwtwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7bd3993ae4ddabc6c06a91127afc341760a07401ce3a409612824c0045bb6f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwtwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://edf854dc22368e4b8f76e0111b30784f22832fc69661b4db8d2c9c33aa553773\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwtwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e44e4837f8dba12dfb18ef2200a19f696222668eb9b10819d1c3b442a28f5e32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwtwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ae3cd6b9ef5ede85a70dd7bcf4eb260cc357bfceeb571904e68788eaba0709c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwtwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be4f9c8096f4981a65522e8ee451980e580153c1f5c65c736655fb94593dbd97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwtwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c80b530ddd55aac69b923f9a8fa0028c2220243566a0d8e111ccc746aab692c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c80b530ddd55aac69b923f9a8fa0028c2220243566a0d8e111ccc746aab692c8\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-10T18:50:13Z\\\",\\\"message\\\":\\\"/informers/factory.go:160\\\\nI0310 18:50:12.991923 7138 reflector.go:311] Stopping 
reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0310 18:50:12.992087 7138 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0310 18:50:12.992244 7138 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0310 18:50:12.992318 7138 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0310 18:50:12.992475 7138 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0310 18:50:12.992661 7138 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0310 18:50:12.992877 7138 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0310 18:50:12.993154 7138 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0310 18:50:12.993240 7138 factory.go:656] Stopping watch factory\\\\nI0310 18:50:12.993262 7138 ovnkube.go:599] Stopped ovnkube\\\\nI0310 18:50:1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-10T18:50:12Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-s2l62_openshift-ovn-kubernetes(be820cd7-b3a7-4183-a408-67151247b6ee)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwtwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2cf2a1cfc3438718bb50a53445443bc0251f2cc83f903fca1cccd1048c8f1c20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwtwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09c620ba70d91a84cf6910e413d790eee8d6427dec9a39be3b706400fcaab656\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09c620ba70d91a84cf
6910e413d790eee8d6427dec9a39be3b706400fcaab656\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:49:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-10T18:49:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwtwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:49:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-s2l62\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:50:34Z is after 2025-08-24T17:21:41Z" Mar 10 18:50:34 crc kubenswrapper[4861]: I0310 18:50:34.111757 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-2rvxn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c06e51d0-e817-41ac-9d69-3ef2099f8ba8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:34Z\\\",\\\"message\\\":\\\"containers 
with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7cjz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7cjz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:49:34Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-2rvxn\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:50:34Z is after 2025-08-24T17:21:41Z" Mar 10 18:50:34 crc kubenswrapper[4861]: I0310 18:50:34.134123 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3fabebda74f7f7605bc2f261bbf64acac1fe4ec65a267beef027413b454bee3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/v
ar/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dbb86a65e5c4fda1cd29396eb9ad02739f5123519d07aef2eea3354f4a65571d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:50:34Z is after 2025-08-24T17:21:41Z" Mar 10 18:50:34 crc kubenswrapper[4861]: I0310 18:50:34.152421 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a665f7a5f04a80f43377d87b895e0d95adae296d12f0826c3d97323e145d602\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-03-10T18:50:34Z is after 2025-08-24T17:21:41Z" Mar 10 18:50:34 crc kubenswrapper[4861]: I0310 18:50:34.177879 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-6lblg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d1c251f4-6539-4aa1-8979-47e74495aca3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:50:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:50:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://691c0ec6cd22d6bd4488798ae0a67476744abea7bf6247f7176cad5d37eef07c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3736c6e9da4ea1e91d7046c054bf885a5319e339f205e71fde5cc9cfa5d630ee\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-10T18:50:32Z\\\",\\\"message\\\":\\\"2026-03-10T18:49:47+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to 
/host/opt/cni/bin/upgrade_5fec7ce1-964a-40de-8918-768dde436f94\\\\n2026-03-10T18:49:47+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_5fec7ce1-964a-40de-8918-768dde436f94 to /host/opt/cni/bin/\\\\n2026-03-10T18:49:47Z [verbose] multus-daemon started\\\\n2026-03-10T18:49:47Z [verbose] Readiness Indicator file check\\\\n2026-03-10T18:50:32Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-10T18:49:46Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:50:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.
d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2gvk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:49:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-6lblg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:50:34Z is after 2025-08-24T17:21:41Z" Mar 10 18:50:34 crc kubenswrapper[4861]: I0310 18:50:34.194044 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pzmsp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"66631e59-ce5c-44de-8a4d-37eb82acf997\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1293bf47ec5a041886f7065f624cec3b882bb62394a2c448e138d5f7bc3a1c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pm4t5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:49:27Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pzmsp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:50:34Z is after 2025-08-24T17:21:41Z" Mar 10 18:50:34 crc kubenswrapper[4861]: I0310 18:50:34.210971 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rmmgv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e4302e6-1187-4371-8523-a96ec8032e74\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9439c03c3e58276508f7abe48482c4c69521049dbad5ea7a2e7e7379c8c58914\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/opensh
ift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vh2b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://77513d5a93f447545a8b2aef460c75d3d73de32163650349268f203e1ade4b4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vh2b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:49:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-rmmgv\": Internal 
error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:50:34Z is after 2025-08-24T17:21:41Z" Mar 10 18:50:35 crc kubenswrapper[4861]: I0310 18:50:35.957098 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 10 18:50:35 crc kubenswrapper[4861]: I0310 18:50:35.957131 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 10 18:50:35 crc kubenswrapper[4861]: I0310 18:50:35.957187 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2rvxn" Mar 10 18:50:35 crc kubenswrapper[4861]: E0310 18:50:35.957280 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 10 18:50:35 crc kubenswrapper[4861]: I0310 18:50:35.957373 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 10 18:50:35 crc kubenswrapper[4861]: E0310 18:50:35.957495 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-2rvxn" podUID="c06e51d0-e817-41ac-9d69-3ef2099f8ba8" Mar 10 18:50:35 crc kubenswrapper[4861]: E0310 18:50:35.957570 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 10 18:50:35 crc kubenswrapper[4861]: E0310 18:50:35.957630 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 10 18:50:36 crc kubenswrapper[4861]: I0310 18:50:36.978258 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"046b6ea2-2e19-4917-953a-eb8aca6d80cb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df0d545a88d84a8015bafb1061f16f446f63c97065dc88c869a33a24b8e3c22e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da2dbdc794693bb8da08c0fcc84531d65b468d135904b7055fb926fbd0cce95d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-10T18:48:03Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager 
-v=2\\\\nI0310 18:47:39.105265 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0310 18:47:39.107559 1 observer_polling.go:159] Starting file observer\\\\nI0310 18:47:39.140119 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0310 18:47:39.145777 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0310 18:48:03.686072 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0310 18:48:03.686208 1 cmd.go:179] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:48:03Z is after 
2026-02-23T05:33:13Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-10T18:47:38Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:48:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://73d88019bcd40296d2d693dfb1ce3bacd2e94ca10a114a00c75392df04099b33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://31bbed2d81ace88f31b763f3b4bed57db657bf9e78413b57b2f279b408e0b848\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:38Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b7156e2106372814a5e2b2816352ee308bb4fdffef5efe8da5bc39d3bc29398\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:47:37Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:50:36Z is after 2025-08-24T17:21:41Z" Mar 10 18:50:36 crc kubenswrapper[4861]: I0310 18:50:36.997420 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:50:36Z is after 2025-08-24T17:21:41Z" Mar 10 18:50:37 crc kubenswrapper[4861]: I0310 18:50:37.012621 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-b87lw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e474cdc9-b374-49a6-aece-afa19f8d5ee6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f06ce6a80200c39385f53697c2f3fbbec862effc04ddf9e0499d9c20761a54e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5t7pf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:49:21Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-b87lw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:50:37Z is after 2025-08-24T17:21:41Z" Mar 10 18:50:37 crc kubenswrapper[4861]: I0310 18:50:37.031325 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:50:37Z is after 2025-08-24T17:21:41Z" Mar 10 18:50:37 crc kubenswrapper[4861]: I0310 18:50:37.059999 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-s2l62" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"be820cd7-b3a7-4183-a408-67151247b6ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:22Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22c9526135e4d6c3ef5cdf25b06f60556a876ace6c81593534be08cfd6a54cdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwtwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7bd3993ae4ddabc6c06a91127afc341760a07401ce3a409612824c0045bb6f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwtwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://edf854dc22368e4b8f76e0111b30784f22832fc69661b4db8d2c9c33aa553773\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwtwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e44e4837f8dba12dfb18ef2200a19f696222668eb9b10819d1c3b442a28f5e32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwtwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ae3cd6b9ef5ede85a70dd7bcf4eb260cc357bfceeb571904e68788eaba0709c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwtwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be4f9c8096f4981a65522e8ee451980e580153c1f5c65c736655fb94593dbd97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwtwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c80b530ddd55aac69b923f9a8fa0028c2220243566a0d8e111ccc746aab692c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c80b530ddd55aac69b923f9a8fa0028c2220243566a0d8e111ccc746aab692c8\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-10T18:50:13Z\\\",\\\"message\\\":\\\"/informers/factory.go:160\\\\nI0310 18:50:12.991923 7138 reflector.go:311] Stopping 
reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0310 18:50:12.992087 7138 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0310 18:50:12.992244 7138 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0310 18:50:12.992318 7138 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0310 18:50:12.992475 7138 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0310 18:50:12.992661 7138 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0310 18:50:12.992877 7138 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0310 18:50:12.993154 7138 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0310 18:50:12.993240 7138 factory.go:656] Stopping watch factory\\\\nI0310 18:50:12.993262 7138 ovnkube.go:599] Stopped ovnkube\\\\nI0310 18:50:1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-10T18:50:12Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-s2l62_openshift-ovn-kubernetes(be820cd7-b3a7-4183-a408-67151247b6ee)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwtwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2cf2a1cfc3438718bb50a53445443bc0251f2cc83f903fca1cccd1048c8f1c20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwtwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09c620ba70d91a84cf6910e413d790eee8d6427dec9a39be3b706400fcaab656\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09c620ba70d91a84cf
6910e413d790eee8d6427dec9a39be3b706400fcaab656\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:49:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-10T18:49:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwtwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:49:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-s2l62\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:50:37Z is after 2025-08-24T17:21:41Z" Mar 10 18:50:37 crc kubenswrapper[4861]: I0310 18:50:37.073659 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-2rvxn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c06e51d0-e817-41ac-9d69-3ef2099f8ba8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:34Z\\\",\\\"message\\\":\\\"containers 
with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7cjz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7cjz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:49:34Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-2rvxn\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:50:37Z is after 2025-08-24T17:21:41Z" Mar 10 18:50:37 crc kubenswrapper[4861]: I0310 18:50:37.089032 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a964dbf-d067-436c-9a8b-496fc69b0584\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://716832664d1be93a0faa28c9e6260cbda820fe51b3295faf5dc0a852b242d62d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:39Z\\\"}},\\
\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a060129f36e562bcd541215d99123f79a1b0216c8c864b21b72c98fda5069040\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a060129f36e562bcd541215d99123f79a1b0216c8c864b21b72c98fda5069040\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:47:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-10T18:47:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:47:37Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:50:37Z is after 2025-08-24T17:21:41Z" Mar 10 18:50:37 crc kubenswrapper[4861]: I0310 18:50:37.110591 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c65aa66-5db2-421b-ad46-0ff1e2b1cb22\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://52e87225b434b0800764a5c2306d8079c44bff105d02de78ab085b434f56031f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a8a6f58ea1d180f50a7ffde2b16f470901281a847bd85cc0bc8a62bbf9f8e70\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://484df0ad2e71b2faec0ed53537512b115e6d4ab7cd3212cd29309538bd013c51\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2dcc6ee4908d27fc63eb33149c6db9570b8524aab71a27fbb1e5c2fe4e97c52\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1665ca49c2c451e187b70bfc13ce0034d2c07b92943b18e77ae09cd6e5505557\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-10T18:48:50Z\\\"
,\\\"message\\\":\\\"le observer\\\\nW0310 18:48:50.587139 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0310 18:48:50.587315 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0310 18:48:50.588407 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4176947974/tls.crt::/tmp/serving-cert-4176947974/tls.key\\\\\\\"\\\\nI0310 18:48:50.773986 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0310 18:48:50.776438 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0310 18:48:50.776455 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0310 18:48:50.776477 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0310 18:48:50.776482 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0310 18:48:50.783076 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0310 18:48:50.783088 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0310 18:48:50.783118 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0310 18:48:50.783134 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0310 18:48:50.783146 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0310 18:48:50.783157 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0310 18:48:50.783167 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0310 18:48:50.783173 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0310 18:48:50.784467 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-10T18:48:50Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44dde00a3ae562bbb5504d299475795cc38b22c2b6decba2ff15067bce7436df\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:39Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a14137bdfec242e37af20a572af2edea25fb1d8a1f9708f8d0193d1a7675b1bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a14137bdfec242e37af20a572af2edea25f
b1d8a1f9708f8d0193d1a7675b1bc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:47:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-10T18:47:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:47:37Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:50:37Z is after 2025-08-24T17:21:41Z" Mar 10 18:50:37 crc kubenswrapper[4861]: I0310 18:50:37.115842 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:50:37 crc kubenswrapper[4861]: I0310 18:50:37.115887 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:50:37 crc kubenswrapper[4861]: I0310 18:50:37.115905 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:50:37 crc kubenswrapper[4861]: I0310 18:50:37.115927 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 10 18:50:37 crc kubenswrapper[4861]: I0310 18:50:37.115944 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:50:37Z","lastTransitionTime":"2026-03-10T18:50:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 10 18:50:37 crc kubenswrapper[4861]: E0310 18:50:37.129399 4861 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Mar 10 18:50:37 crc kubenswrapper[4861]: I0310 18:50:37.135887 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22ae4cf4d7c980d5db6180fb412216b50b2ddb8f25ea2a5fb1e034d47ddfafac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disa
bled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:50:37Z is after 2025-08-24T17:21:41Z" Mar 10 18:50:37 crc kubenswrapper[4861]: E0310 18:50:37.140208 4861 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-10T18:50:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-10T18:50:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-10T18:50:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-10T18:50:37Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-10T18:50:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-10T18:50:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-10T18:50:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-10T18:50:37Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2
ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9810067
4616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.
io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a07
2c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa73
83b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"19532032-9073-404f-bda8-c4343aa30670\\\",\\\"systemUUID\\\":\\\"b4ef8d49-23f5-4cae-bbac-08586c607b9d\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:50:37Z is after 2025-08-24T17:21:41Z" Mar 10 18:50:37 crc kubenswrapper[4861]: I0310 18:50:37.144758 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:50:37 crc kubenswrapper[4861]: I0310 18:50:37.144810 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:50:37 crc kubenswrapper[4861]: I0310 18:50:37.144834 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:50:37 crc kubenswrapper[4861]: I0310 18:50:37.144863 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 10 18:50:37 crc kubenswrapper[4861]: I0310 18:50:37.144888 4861 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:50:37Z","lastTransitionTime":"2026-03-10T18:50:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 10 18:50:37 crc kubenswrapper[4861]: I0310 18:50:37.154420 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pzmsp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"66631e59-ce5c-44de-8a4d-37eb82acf997\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1293bf47ec5a041886f7065f624cec3b882bb62394a2c448e138d5f7bc3a1c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":tr
ue,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pm4t5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:49:27Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pzmsp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:50:37Z is after 2025-08-24T17:21:41Z" Mar 10 18:50:37 crc kubenswrapper[4861]: E0310 18:50:37.164953 4861 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-10T18:50:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-10T18:50:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-10T18:50:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-10T18:50:37Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-10T18:50:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-10T18:50:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-10T18:50:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-10T18:50:37Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"19532032-9073-404f-bda8-c4343aa30670\\\",\\\"systemUUID\\\":\\\"b4ef8d49-23f5-4cae-bbac-08586c607b9d\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:50:37Z is after 2025-08-24T17:21:41Z" Mar 10 18:50:37 crc kubenswrapper[4861]: I0310 18:50:37.170270 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:50:37 crc kubenswrapper[4861]: I0310 18:50:37.170330 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:50:37 crc kubenswrapper[4861]: I0310 18:50:37.170351 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:50:37 crc kubenswrapper[4861]: I0310 18:50:37.170374 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 10 18:50:37 crc kubenswrapper[4861]: I0310 18:50:37.170391 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:50:37Z","lastTransitionTime":"2026-03-10T18:50:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 10 18:50:37 crc kubenswrapper[4861]: I0310 18:50:37.174167 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rmmgv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e4302e6-1187-4371-8523-a96ec8032e74\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9439c03c3e58276508f7abe48482c4c69521049dbad5ea7a2e7e7379c8c58914\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled
\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vh2b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://77513d5a93f447545a8b2aef460c75d3d73de32163650349268f203e1ade4b4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vh2b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:49:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-rmmgv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:50:37Z is after 2025-08-24T17:21:41Z" Mar 10 18:50:37 crc kubenswrapper[4861]: E0310 18:50:37.192309 4861 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-10T18:50:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-10T18:50:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-10T18:50:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-10T18:50:37Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-10T18:50:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-10T18:50:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-10T18:50:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-10T18:50:37Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"19532032-9073-404f-bda8-c4343aa30670\\\",\\\"systemUUID\\\":\\\"b4ef8d49-23f5-4cae-bbac-08586c607b9d\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:50:37Z is after 2025-08-24T17:21:41Z" Mar 10 18:50:37 crc kubenswrapper[4861]: I0310 18:50:37.197246 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:50:37 crc kubenswrapper[4861]: I0310 18:50:37.197297 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:50:37 crc kubenswrapper[4861]: I0310 18:50:37.197314 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:50:37 crc kubenswrapper[4861]: I0310 18:50:37.197275 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3fabebda74f7f7605bc2f261bbf64acac1fe4ec65a267beef027413b454bee3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dbb86a65e5c4fda1cd29396eb9ad02739f5123519d07aef2eea3354f4a65571d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:50:37Z is after 2025-08-24T17:21:41Z" Mar 10 18:50:37 crc kubenswrapper[4861]: I0310 18:50:37.197338 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 10 18:50:37 crc kubenswrapper[4861]: I0310 18:50:37.197446 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:50:37Z","lastTransitionTime":"2026-03-10T18:50:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 10 18:50:37 crc kubenswrapper[4861]: I0310 18:50:37.217129 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a665f7a5f04a80f43377d87b895e0d95adae296d12f0826c3d97323e145d602\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:50:37Z is after 2025-08-24T17:21:41Z" Mar 10 18:50:37 crc kubenswrapper[4861]: E0310 18:50:37.218110 4861 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-10T18:50:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-10T18:50:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-10T18:50:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-10T18:50:37Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-10T18:50:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-10T18:50:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-10T18:50:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-10T18:50:37Z\\\",\\\"message\\\":\\\"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redh
at/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99
d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-rele
ase-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815
\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\"
:448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"19532032-9073-404f-bda8-c4343aa30670\\\",\\\"systemUUID\\\":\\\"b4ef8d49-23f5-4cae-bbac-08586c607b9d\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:50:37Z is after 2025-08-24T17:21:41Z" Mar 10 18:50:37 crc kubenswrapper[4861]: I0310 18:50:37.223524 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:50:37 crc kubenswrapper[4861]: I0310 18:50:37.223624 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:50:37 crc kubenswrapper[4861]: I0310 18:50:37.223652 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:50:37 crc kubenswrapper[4861]: I0310 18:50:37.223767 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 10 18:50:37 crc kubenswrapper[4861]: I0310 18:50:37.223879 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:50:37Z","lastTransitionTime":"2026-03-10T18:50:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 10 18:50:37 crc kubenswrapper[4861]: I0310 18:50:37.241485 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-6lblg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d1c251f4-6539-4aa1-8979-47e74495aca3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:50:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:50:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://691c0ec6cd22d6bd4488798ae0a67476744abea7bf6247f7176cad5d37eef07c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3736c6e9da4ea1e91d7046c054bf885a5319e339f205e71fde5cc9cfa5d630ee\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-10T18:50:32Z\\\",\\\"message\\\":\\\"2026-03-10T18:49:47+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_5fec7ce1-964a-40de-8918-768dde436f94\\\\n2026-03-10T18:49:47+00:00 
[cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_5fec7ce1-964a-40de-8918-768dde436f94 to /host/opt/cni/bin/\\\\n2026-03-10T18:49:47Z [verbose] multus-daemon started\\\\n2026-03-10T18:49:47Z [verbose] Readiness Indicator file check\\\\n2026-03-10T18:50:32Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-10T18:49:46Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:50:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\
\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2gvk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:49:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-6lblg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:50:37Z is after 2025-08-24T17:21:41Z" Mar 10 18:50:37 crc kubenswrapper[4861]: E0310 18:50:37.245901 4861 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-10T18:50:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-10T18:50:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-10T18:50:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-10T18:50:37Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-10T18:50:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-10T18:50:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-10T18:50:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-10T18:50:37Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"19532032-9073-404f-bda8-c4343aa30670\\\",\\\"systemUUID\\\":\\\"b4ef8d49-23f5-4cae-bbac-08586c607b9d\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:50:37Z is after 2025-08-24T17:21:41Z" Mar 10 18:50:37 crc kubenswrapper[4861]: E0310 18:50:37.246119 4861 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Mar 10 18:50:37 crc kubenswrapper[4861]: I0310 18:50:37.259993 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qttbr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"771189c2-452d-4204-a0b7-abfe9ba62bd0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10b29575454354d2a034781fdd40e9972ddd08328eea1a3bb89377fcab4181cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tng72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11c0ae40f0d210a82350ba0ada7a3c9f35595826a4e6f3d5619230238d00111b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tng72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:49:21Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qttbr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:50:37Z is after 2025-08-24T17:21:41Z" Mar 10 18:50:37 crc kubenswrapper[4861]: I0310 18:50:37.284905 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-j2s27" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"391f4bfa-b94c-4b25-8f06-a2f19f912194\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bbf0e311a00fea576e9f2c739cd2d87aedfc06aaef5cf0c68374bd67241888b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:42Z\\\"}},\\\"volumeMounts\\\":
[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlddg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02cd3948c6ba0e290e32d64b2390da20510d66f20771b9bb0b98957e8900b124\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://02cd3948c6ba0e290e32d64b2390da20510d66f20771b9bb0b98957e8900b124\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:49:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-10T18:49:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlddg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c010603b11cab781dd3c14d84121d60b8115bb415df567ef1c267406b9e6988\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"
,\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c010603b11cab781dd3c14d84121d60b8115bb415df567ef1c267406b9e6988\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:49:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-10T18:49:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlddg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1ba322572d96fe13a88815151416495decdd5f486eb30b3fd4345040c5b9d1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c1ba322572d96fe13a88815151416495decdd5f486eb30b3fd4345040c5b9d1d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:49:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-10T18:49:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cn
i/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlddg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f24e74b8fb41e5e765f1ba84b4f1efce28077537df837e4cc308e70b16139732\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f24e74b8fb41e5e765f1ba84b4f1efce28077537df837e4cc308e70b16139732\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:49:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-10T18:49:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlddg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f43980d19ca7b9caf61de8f3257dff536ff78a4cf530f6ec3cb7c91f1a3b5af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e5431
9f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5f43980d19ca7b9caf61de8f3257dff536ff78a4cf530f6ec3cb7c91f1a3b5af\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:49:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-10T18:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlddg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2779899b3339467fdfac0db5ea7b8e4f306ca22205f2c44cc85a2df7fc2cc066\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2779899b3339467fdfac0db5ea7b8e4f306ca22205f2c44cc85a2df7fc2cc066\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:49:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-10T18:49:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/se
rviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlddg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:49:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-j2s27\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:50:37Z is after 2025-08-24T17:21:41Z" Mar 10 18:50:37 crc kubenswrapper[4861]: I0310 18:50:37.303880 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"745fc7ce-a025-4236-a613-2d2c1661aa2e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://098109abecb73d2ad721a447bdfff2c9e9c2d24b969013d6108b0ac9d1d61e01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b33
5e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc83538f8ad05533765a1730072969d529579cd76ff33a77dc49b0bff15caad5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://02e4029a918a6ce3cba59cacb37169816c0ce5b734d46605756ede5985ccd686\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:39Z\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3cb1fb5ba7aa6d8d7184d85c870898f3b51dd4910da9346b24b7d63f952467d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3cb1fb5ba7aa6d8d7184d85c870898f3b51dd4910da9346b24b7d63f952467d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:47:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-10T18:47:38Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:47:37Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:50:37Z is after 2025-08-24T17:21:41Z" Mar 10 18:50:37 crc kubenswrapper[4861]: I0310 18:50:37.337547 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"952bf490-6587-4240-a832-feac082e7775\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf6fe1422055e59455ca71c1a22213f55312c80667526cf13dbf61f7ccea7c75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd7523ddf0ac38e3b59b767d7ef95f452f24c5914734d6f1ac0187f1bede5fcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b1d8f9a97293ae86fe0b8c9ab76600caf04291ec3ddd2e47a071805bef675da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://efd5039046658f4b595c747b6434cb0ef3befbf75874a6dab629cdbd6634524e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6234508294e7f87d3a8da0d3a1d8ecb96fffc3dbd5974e40a3bb7ceeee0d2a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34f62205737b2bb279cc7a0e2ffee213392c8135cca742b76e6f7ed44ddca754\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34f62205737b2bb279cc7a0e2ffee213392c8135cca742b76e6f7ed44ddca754\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-03-10T18:47:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-10T18:47:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://49cc08d4ed6e0fd87f4c957414e510f99511b9654a72324b707c165206c4979e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49cc08d4ed6e0fd87f4c957414e510f99511b9654a72324b707c165206c4979e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:47:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-10T18:47:39Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0995183765c106ba8369d3c05ac572596329eb1978876b770e327a430963ba9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0995183765c106ba8369d3c05ac572596329eb1978876b770e327a430963ba9b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:47:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-10T18:47:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:47:37Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:50:37Z is after 2025-08-24T17:21:41Z" Mar 10 18:50:37 crc kubenswrapper[4861]: I0310 18:50:37.357506 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:50:37Z is after 2025-08-24T17:21:41Z" Mar 10 18:50:37 crc kubenswrapper[4861]: I0310 18:50:37.958023 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 10 18:50:37 crc kubenswrapper[4861]: I0310 18:50:37.958046 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 10 18:50:37 crc kubenswrapper[4861]: E0310 18:50:37.958557 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 10 18:50:37 crc kubenswrapper[4861]: I0310 18:50:37.958123 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2rvxn" Mar 10 18:50:37 crc kubenswrapper[4861]: E0310 18:50:37.959146 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2rvxn" podUID="c06e51d0-e817-41ac-9d69-3ef2099f8ba8" Mar 10 18:50:37 crc kubenswrapper[4861]: I0310 18:50:37.958062 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 10 18:50:37 crc kubenswrapper[4861]: E0310 18:50:37.959551 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 10 18:50:37 crc kubenswrapper[4861]: E0310 18:50:37.958738 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 10 18:50:38 crc kubenswrapper[4861]: I0310 18:50:38.394448 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c06e51d0-e817-41ac-9d69-3ef2099f8ba8-metrics-certs\") pod \"network-metrics-daemon-2rvxn\" (UID: \"c06e51d0-e817-41ac-9d69-3ef2099f8ba8\") " pod="openshift-multus/network-metrics-daemon-2rvxn" Mar 10 18:50:38 crc kubenswrapper[4861]: E0310 18:50:38.394622 4861 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Mar 10 18:50:38 crc kubenswrapper[4861]: E0310 18:50:38.394695 4861 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c06e51d0-e817-41ac-9d69-3ef2099f8ba8-metrics-certs podName:c06e51d0-e817-41ac-9d69-3ef2099f8ba8 nodeName:}" failed. No retries permitted until 2026-03-10 18:51:42.394671183 +0000 UTC m=+246.158107183 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/c06e51d0-e817-41ac-9d69-3ef2099f8ba8-metrics-certs") pod "network-metrics-daemon-2rvxn" (UID: "c06e51d0-e817-41ac-9d69-3ef2099f8ba8") : object "openshift-multus"/"metrics-daemon-secret" not registered Mar 10 18:50:39 crc kubenswrapper[4861]: I0310 18:50:39.957820 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 10 18:50:39 crc kubenswrapper[4861]: E0310 18:50:39.957985 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 10 18:50:39 crc kubenswrapper[4861]: I0310 18:50:39.958230 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 10 18:50:39 crc kubenswrapper[4861]: E0310 18:50:39.958320 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 10 18:50:39 crc kubenswrapper[4861]: I0310 18:50:39.958500 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2rvxn" Mar 10 18:50:39 crc kubenswrapper[4861]: E0310 18:50:39.958590 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-2rvxn" podUID="c06e51d0-e817-41ac-9d69-3ef2099f8ba8" Mar 10 18:50:39 crc kubenswrapper[4861]: I0310 18:50:39.958824 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 10 18:50:39 crc kubenswrapper[4861]: E0310 18:50:39.958919 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 10 18:50:41 crc kubenswrapper[4861]: I0310 18:50:41.957856 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 10 18:50:41 crc kubenswrapper[4861]: I0310 18:50:41.957934 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 10 18:50:41 crc kubenswrapper[4861]: I0310 18:50:41.957941 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 10 18:50:41 crc kubenswrapper[4861]: E0310 18:50:41.958028 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 10 18:50:41 crc kubenswrapper[4861]: I0310 18:50:41.958218 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-2rvxn" Mar 10 18:50:41 crc kubenswrapper[4861]: E0310 18:50:41.958202 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 10 18:50:41 crc kubenswrapper[4861]: E0310 18:50:41.958843 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2rvxn" podUID="c06e51d0-e817-41ac-9d69-3ef2099f8ba8" Mar 10 18:50:41 crc kubenswrapper[4861]: E0310 18:50:41.959007 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 10 18:50:41 crc kubenswrapper[4861]: I0310 18:50:41.959562 4861 scope.go:117] "RemoveContainer" containerID="c80b530ddd55aac69b923f9a8fa0028c2220243566a0d8e111ccc746aab692c8" Mar 10 18:50:42 crc kubenswrapper[4861]: E0310 18:50:42.130290 4861 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Mar 10 18:50:42 crc kubenswrapper[4861]: I0310 18:50:42.842475 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-s2l62_be820cd7-b3a7-4183-a408-67151247b6ee/ovnkube-controller/2.log" Mar 10 18:50:42 crc kubenswrapper[4861]: I0310 18:50:42.845738 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-s2l62" event={"ID":"be820cd7-b3a7-4183-a408-67151247b6ee","Type":"ContainerStarted","Data":"4e8e062dac4eaf569540cc811b67cbe9e8c4a53204ea173373b5cc2e315b87c9"} Mar 10 18:50:42 crc kubenswrapper[4861]: I0310 18:50:42.846151 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-s2l62" Mar 10 18:50:42 crc kubenswrapper[4861]: I0310 18:50:42.866061 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"046b6ea2-2e19-4917-953a-eb8aca6d80cb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df0d545a88d84a8015bafb1061f16f446f63c97065dc88c869a33a24b8e3c22e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da2dbdc794693bb8da08c0fcc84531d65b468d135904b7055fb926fbd0cce95d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-10T18:48:03Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0310 18:47:39.105265 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. 
Worst graceful lease acquisition is {26s}.\\\\nI0310 18:47:39.107559 1 observer_polling.go:159] Starting file observer\\\\nI0310 18:47:39.140119 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0310 18:47:39.145777 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0310 18:48:03.686072 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0310 18:48:03.686208 1 cmd.go:179] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:48:03Z is after 
2026-02-23T05:33:13Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-10T18:47:38Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:48:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://73d88019bcd40296d2d693dfb1ce3bacd2e94ca10a114a00c75392df04099b33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://31bbed2d81ace88f31b763f3b4bed57db657bf9e78413b57b2f279b408e0b848\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:38Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b7156e2106372814a5e2b2816352ee308bb4fdffef5efe8da5bc39d3bc29398\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:47:37Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:50:42Z is after 2025-08-24T17:21:41Z" Mar 10 18:50:42 crc kubenswrapper[4861]: I0310 18:50:42.885404 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:50:42Z is after 2025-08-24T17:21:41Z" Mar 10 18:50:42 crc kubenswrapper[4861]: I0310 18:50:42.900395 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-b87lw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e474cdc9-b374-49a6-aece-afa19f8d5ee6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f06ce6a80200c39385f53697c2f3fbbec862effc04ddf9e0499d9c20761a54e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5t7pf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:49:21Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-b87lw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:50:42Z is after 2025-08-24T17:21:41Z" Mar 10 18:50:42 crc kubenswrapper[4861]: I0310 18:50:42.921966 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:50:42Z is after 2025-08-24T17:21:41Z" Mar 10 18:50:42 crc kubenswrapper[4861]: I0310 18:50:42.954931 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-s2l62" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"be820cd7-b3a7-4183-a408-67151247b6ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:22Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22c9526135e4d6c3ef5cdf25b06f60556a876ace6c81593534be08cfd6a54cdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwtwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7bd3993ae4ddabc6c06a91127afc341760a07401ce3a409612824c0045bb6f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwtwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://edf854dc22368e4b8f76e0111b30784f22832fc69661b4db8d2c9c33aa553773\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwtwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e44e4837f8dba12dfb18ef2200a19f696222668eb9b10819d1c3b442a28f5e32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwtwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ae3cd6b9ef5ede85a70dd7bcf4eb260cc357bfceeb571904e68788eaba0709c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwtwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be4f9c8096f4981a65522e8ee451980e580153c1f5c65c736655fb94593dbd97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwtwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e8e062dac4eaf569540cc811b67cbe9e8c4a53204ea173373b5cc2e315b87c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c80b530ddd55aac69b923f9a8fa0028c2220243566a0d8e111ccc746aab692c8\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-10T18:50:13Z\\\",\\\"message\\\":\\\"/informers/factory.go:160\\\\nI0310 18:50:12.991923 7138 reflector.go:311] Stopping 
reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0310 18:50:12.992087 7138 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0310 18:50:12.992244 7138 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0310 18:50:12.992318 7138 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0310 18:50:12.992475 7138 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0310 18:50:12.992661 7138 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0310 18:50:12.992877 7138 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0310 18:50:12.993154 7138 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0310 18:50:12.993240 7138 factory.go:656] Stopping watch factory\\\\nI0310 18:50:12.993262 7138 ovnkube.go:599] Stopped ovnkube\\\\nI0310 
18:50:1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-10T18:50:12Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:50:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"na
me\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwtwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2cf2a1cfc3438718bb50a53445443bc0251f2cc83f903fca1cccd1048c8f1c20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwtwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09c620ba70d91a84cf6910e413d790eee8d6427dec9a39be3b706400fcaab656\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"re
ady\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09c620ba70d91a84cf6910e413d790eee8d6427dec9a39be3b706400fcaab656\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:49:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-10T18:49:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwtwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:49:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-s2l62\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:50:42Z is after 2025-08-24T17:21:41Z" Mar 10 18:50:42 crc kubenswrapper[4861]: I0310 18:50:42.972113 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-2rvxn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c06e51d0-e817-41ac-9d69-3ef2099f8ba8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7cjz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7cjz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:49:34Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-2rvxn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:50:42Z is after 2025-08-24T17:21:41Z" Mar 10 18:50:42 crc 
kubenswrapper[4861]: I0310 18:50:42.994523 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a964dbf-d067-436c-9a8b-496fc69b0584\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://716832664d1be93a0faa28c9e6260cbda820fe51b3295faf5dc0a852b242d62d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\
":\\\"cri-o://a060129f36e562bcd541215d99123f79a1b0216c8c864b21b72c98fda5069040\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a060129f36e562bcd541215d99123f79a1b0216c8c864b21b72c98fda5069040\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:47:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-10T18:47:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:47:37Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:50:42Z is after 2025-08-24T17:21:41Z" Mar 10 18:50:43 crc kubenswrapper[4861]: I0310 18:50:43.020438 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c65aa66-5db2-421b-ad46-0ff1e2b1cb22\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://52e87225b434b0800764a5c2306d8079c44bff105d02de78ab085b434f56031f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a8a6f58ea1d180f50a7ffde2b16f470901281a847bd85cc0bc8a62bbf9f8e70\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://484df0ad2e71b2faec0ed53537512b115e6d4ab7cd3212cd29309538bd013c51\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2dcc6ee4908d27fc63eb33149c6db9570b8524aab71a27fbb1e5c2fe4e97c52\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1665ca49c2c451e187b70bfc13ce0034d2c07b92943b18e77ae09cd6e5505557\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-10T18:48:50Z\\\"
,\\\"message\\\":\\\"le observer\\\\nW0310 18:48:50.587139 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0310 18:48:50.587315 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0310 18:48:50.588407 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4176947974/tls.crt::/tmp/serving-cert-4176947974/tls.key\\\\\\\"\\\\nI0310 18:48:50.773986 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0310 18:48:50.776438 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0310 18:48:50.776455 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0310 18:48:50.776477 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0310 18:48:50.776482 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0310 18:48:50.783076 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0310 18:48:50.783088 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0310 18:48:50.783118 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0310 18:48:50.783134 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0310 18:48:50.783146 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0310 18:48:50.783157 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0310 18:48:50.783167 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0310 18:48:50.783173 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0310 18:48:50.784467 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-10T18:48:50Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44dde00a3ae562bbb5504d299475795cc38b22c2b6decba2ff15067bce7436df\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:39Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a14137bdfec242e37af20a572af2edea25fb1d8a1f9708f8d0193d1a7675b1bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a14137bdfec242e37af20a572af2edea25f
b1d8a1f9708f8d0193d1a7675b1bc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:47:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-10T18:47:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:47:37Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:50:43Z is after 2025-08-24T17:21:41Z" Mar 10 18:50:43 crc kubenswrapper[4861]: I0310 18:50:43.039850 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22ae4cf4d7c980d5db6180fb412216b50b2ddb8f25ea2a5fb1e034d47ddfafac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-03-10T18:50:43Z is after 2025-08-24T17:21:41Z" Mar 10 18:50:43 crc kubenswrapper[4861]: I0310 18:50:43.057551 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pzmsp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"66631e59-ce5c-44de-8a4d-37eb82acf997\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1293bf47ec5a041886f7065f624cec3b882bb62394a2c448e138d5f7bc3a1c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\
",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pm4t5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:49:27Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pzmsp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:50:43Z is after 2025-08-24T17:21:41Z" Mar 10 18:50:43 crc kubenswrapper[4861]: I0310 18:50:43.074252 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rmmgv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e4302e6-1187-4371-8523-a96ec8032e74\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9439c03c3e58276508f7abe48482c4c69521049dbad5ea7a2e7e7379c8c58914\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vh2b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://77513d5a93f447545a8b2aef460c75d3d73de
32163650349268f203e1ade4b4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vh2b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:49:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-rmmgv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:50:43Z is after 2025-08-24T17:21:41Z" Mar 10 18:50:43 crc kubenswrapper[4861]: I0310 18:50:43.092669 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3fabebda74f7f7605bc2f261bbf64acac1fe4ec65a267beef027413b454bee3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dbb86a65e5c4fda1cd29396eb9ad02739f5123519d07aef2eea3354f4a65571d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:50:43Z is after 2025-08-24T17:21:41Z" Mar 10 18:50:43 crc kubenswrapper[4861]: I0310 18:50:43.110135 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a665f7a5f04a80f43377d87b895e0d95adae296d12f0826c3d97323e145d602\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-03-10T18:50:43Z is after 2025-08-24T17:21:41Z" Mar 10 18:50:43 crc kubenswrapper[4861]: I0310 18:50:43.129446 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-6lblg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d1c251f4-6539-4aa1-8979-47e74495aca3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:50:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:50:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://691c0ec6cd22d6bd4488798ae0a67476744abea7bf6247f7176cad5d37eef07c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3736c6e9da4ea1e91d7046c054bf885a5319e339f205e71fde5cc9cfa5d630ee\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-10T18:50:32Z\\\",\\\"message\\\":\\\"2026-03-10T18:49:47+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to 
/host/opt/cni/bin/upgrade_5fec7ce1-964a-40de-8918-768dde436f94\\\\n2026-03-10T18:49:47+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_5fec7ce1-964a-40de-8918-768dde436f94 to /host/opt/cni/bin/\\\\n2026-03-10T18:49:47Z [verbose] multus-daemon started\\\\n2026-03-10T18:49:47Z [verbose] Readiness Indicator file check\\\\n2026-03-10T18:50:32Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-10T18:49:46Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:50:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.
d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2gvk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:49:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-6lblg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:50:43Z is after 2025-08-24T17:21:41Z" Mar 10 18:50:43 crc kubenswrapper[4861]: I0310 18:50:43.144289 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qttbr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"771189c2-452d-4204-a0b7-abfe9ba62bd0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10b29575454354d2a034781fdd40e9972ddd08328eea1a3bb89377fcab4181cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tng72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11c0ae40f0d210a82350ba0ada7a3c9f35595826
a4e6f3d5619230238d00111b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tng72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:49:21Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qttbr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:50:43Z is after 2025-08-24T17:21:41Z" Mar 10 18:50:43 crc kubenswrapper[4861]: I0310 18:50:43.166453 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-j2s27" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"391f4bfa-b94c-4b25-8f06-a2f19f912194\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bbf0e311a00fea576e9f2c739cd2d87aedfc06aaef5cf0c68374bd67241888b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlddg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02cd3948c6ba0e290e32d64b2390da20510d66f20771b9bb0b98957e8900b124\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://02cd3948c6ba0e290e32d64b2390da20510d66f20771b9bb0b98957e8900b124\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:49:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-10T18:49:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlddg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c010603b11cab781dd3c14d84121d60b8115bb415df567ef1c267406b9e6988\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c010603b11cab781dd3c14d84121d60b8115bb415df567ef1c267406b9e6988\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:49:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-10T18:49:37Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlddg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1ba322572d96fe13a88815151416495decdd5f486eb30b3fd4345040c5b9d1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c1ba322572d96fe13a88815151416495decdd5f486eb30b3fd4345040c5b9d1d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:49:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-10T18:49:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlddg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f24e7
4b8fb41e5e765f1ba84b4f1efce28077537df837e4cc308e70b16139732\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f24e74b8fb41e5e765f1ba84b4f1efce28077537df837e4cc308e70b16139732\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:49:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-10T18:49:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlddg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f43980d19ca7b9caf61de8f3257dff536ff78a4cf530f6ec3cb7c91f1a3b5af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5f43980d19ca7b9caf61de8f3257dff536ff78a4cf530f6ec3cb7c91f1a3b5af\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:49:41Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-03-10T18:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlddg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2779899b3339467fdfac0db5ea7b8e4f306ca22205f2c44cc85a2df7fc2cc066\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2779899b3339467fdfac0db5ea7b8e4f306ca22205f2c44cc85a2df7fc2cc066\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:49:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-10T18:49:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlddg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:49:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-j2s27\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:50:43Z is after 2025-08-24T17:21:41Z" Mar 10 18:50:43 crc kubenswrapper[4861]: I0310 18:50:43.185484 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"745fc7ce-a025-4236-a613-2d2c1661aa2e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://098109abecb73d2ad721a447bdfff2c9e9c2d24b969013d6108b0ac9d1d61e01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mou
ntPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc83538f8ad05533765a1730072969d529579cd76ff33a77dc49b0bff15caad5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://02e4029a918a6ce3cba59cacb37169816c0ce5b734d46605756ede5985ccd686\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3cb1fb5ba7aa6d8d7184d85c8
70898f3b51dd4910da9346b24b7d63f952467d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3cb1fb5ba7aa6d8d7184d85c870898f3b51dd4910da9346b24b7d63f952467d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:47:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-10T18:47:38Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:47:37Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:50:43Z is after 2025-08-24T17:21:41Z" Mar 10 18:50:43 crc kubenswrapper[4861]: I0310 18:50:43.218358 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"952bf490-6587-4240-a832-feac082e7775\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf6fe1422055e59455ca71c1a22213f55312c80667526cf13dbf61f7ccea7c75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd7523ddf0ac38e3b59b767d7ef95f452f24c5914734d6f1ac0187f1bede5fcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b1d8f9a97293ae86fe0b8c9ab76600caf04291ec3ddd2e47a071805bef675da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://efd5039046658f4b595c747b6434cb0ef3befbf75874a6dab629cdbd6634524e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6234508294e7f87d3a8da0d3a1d8ecb96fffc3dbd5974e40a3bb7ceeee0d2a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34f62205737b2bb279cc7a0e2ffee213392c8135cca742b76e6f7ed44ddca754\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34f62205737b2bb279cc7a0e2ffee213392c8135cca742b76e6f7ed44ddca754\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-03-10T18:47:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-10T18:47:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://49cc08d4ed6e0fd87f4c957414e510f99511b9654a72324b707c165206c4979e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49cc08d4ed6e0fd87f4c957414e510f99511b9654a72324b707c165206c4979e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:47:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-10T18:47:39Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0995183765c106ba8369d3c05ac572596329eb1978876b770e327a430963ba9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0995183765c106ba8369d3c05ac572596329eb1978876b770e327a430963ba9b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:47:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-10T18:47:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:47:37Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:50:43Z is after 2025-08-24T17:21:41Z" Mar 10 18:50:43 crc kubenswrapper[4861]: I0310 18:50:43.237473 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:50:43Z is after 2025-08-24T17:21:41Z" Mar 10 18:50:43 crc kubenswrapper[4861]: I0310 18:50:43.851470 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-s2l62_be820cd7-b3a7-4183-a408-67151247b6ee/ovnkube-controller/3.log" Mar 10 18:50:43 crc kubenswrapper[4861]: I0310 18:50:43.852525 4861 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-s2l62_be820cd7-b3a7-4183-a408-67151247b6ee/ovnkube-controller/2.log" Mar 10 18:50:43 crc kubenswrapper[4861]: I0310 18:50:43.870845 4861 generic.go:334] "Generic (PLEG): container finished" podID="be820cd7-b3a7-4183-a408-67151247b6ee" containerID="4e8e062dac4eaf569540cc811b67cbe9e8c4a53204ea173373b5cc2e315b87c9" exitCode=1 Mar 10 18:50:43 crc kubenswrapper[4861]: I0310 18:50:43.870904 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-s2l62" event={"ID":"be820cd7-b3a7-4183-a408-67151247b6ee","Type":"ContainerDied","Data":"4e8e062dac4eaf569540cc811b67cbe9e8c4a53204ea173373b5cc2e315b87c9"} Mar 10 18:50:43 crc kubenswrapper[4861]: I0310 18:50:43.870965 4861 scope.go:117] "RemoveContainer" containerID="c80b530ddd55aac69b923f9a8fa0028c2220243566a0d8e111ccc746aab692c8" Mar 10 18:50:43 crc kubenswrapper[4861]: I0310 18:50:43.871877 4861 scope.go:117] "RemoveContainer" containerID="4e8e062dac4eaf569540cc811b67cbe9e8c4a53204ea173373b5cc2e315b87c9" Mar 10 18:50:43 crc kubenswrapper[4861]: E0310 18:50:43.872141 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-s2l62_openshift-ovn-kubernetes(be820cd7-b3a7-4183-a408-67151247b6ee)\"" pod="openshift-ovn-kubernetes/ovnkube-node-s2l62" podUID="be820cd7-b3a7-4183-a408-67151247b6ee" Mar 10 18:50:43 crc kubenswrapper[4861]: I0310 18:50:43.894259 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"046b6ea2-2e19-4917-953a-eb8aca6d80cb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df0d545a88d84a8015bafb1061f16f446f63c97065dc88c869a33a24b8e3c22e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da2dbdc794693bb8da08c0fcc84531d65b468d135904b7055fb926fbd0cce95d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-10T18:48:03Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager 
-v=2\\\\nI0310 18:47:39.105265 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0310 18:47:39.107559 1 observer_polling.go:159] Starting file observer\\\\nI0310 18:47:39.140119 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0310 18:47:39.145777 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0310 18:48:03.686072 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0310 18:48:03.686208 1 cmd.go:179] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:48:03Z is after 
2026-02-23T05:33:13Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-10T18:47:38Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:48:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://73d88019bcd40296d2d693dfb1ce3bacd2e94ca10a114a00c75392df04099b33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://31bbed2d81ace88f31b763f3b4bed57db657bf9e78413b57b2f279b408e0b848\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:38Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b7156e2106372814a5e2b2816352ee308bb4fdffef5efe8da5bc39d3bc29398\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:47:37Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:50:43Z is after 2025-08-24T17:21:41Z" Mar 10 18:50:43 crc kubenswrapper[4861]: I0310 18:50:43.915560 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:50:43Z is after 2025-08-24T17:21:41Z" Mar 10 18:50:43 crc kubenswrapper[4861]: I0310 18:50:43.931953 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-b87lw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e474cdc9-b374-49a6-aece-afa19f8d5ee6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f06ce6a80200c39385f53697c2f3fbbec862effc04ddf9e0499d9c20761a54e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5t7pf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:49:21Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-b87lw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:50:43Z is after 2025-08-24T17:21:41Z" Mar 10 18:50:43 crc kubenswrapper[4861]: I0310 18:50:43.957934 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 10 18:50:43 crc kubenswrapper[4861]: I0310 18:50:43.957976 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 10 18:50:43 crc kubenswrapper[4861]: I0310 18:50:43.957999 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2rvxn" Mar 10 18:50:43 crc kubenswrapper[4861]: I0310 18:50:43.957939 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 10 18:50:43 crc kubenswrapper[4861]: E0310 18:50:43.958112 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 10 18:50:43 crc kubenswrapper[4861]: E0310 18:50:43.958268 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 10 18:50:43 crc kubenswrapper[4861]: E0310 18:50:43.958375 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 10 18:50:43 crc kubenswrapper[4861]: E0310 18:50:43.958466 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-2rvxn" podUID="c06e51d0-e817-41ac-9d69-3ef2099f8ba8" Mar 10 18:50:43 crc kubenswrapper[4861]: I0310 18:50:43.970819 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-s2l62" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"be820cd7-b3a7-4183-a408-67151247b6ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:22Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22c9526135e4d6c3ef5cdf25b06f60556a876ace6c81593534be08cfd6a54cdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwtwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7bd3993ae4ddabc6c06a91127afc341760a07401ce3a409612824c0045bb6f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwtwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://edf854dc22368e4b8f76e0111b30784f22832fc69661b4db8d2c9c33aa553773\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwtwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e44e4837f8dba12dfb18ef2200a19f696222668eb9b10819d1c3b442a28f5e32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:47Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwtwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ae3cd6b9ef5ede85a70dd7bcf4eb260cc357bfceeb571904e68788eaba0709c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwtwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be4f9c8096f4981a65522e8ee451980e580153c1f5c65c736655fb94593dbd97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwtwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e8e062dac4eaf569540cc811b67cbe9e8c4a53204ea173373b5cc2e315b87c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c80b530ddd55aac69b923f9a8fa0028c2220243566a0d8e111ccc746aab692c8\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-10T18:50:13Z\\\",\\\"message\\\":\\\"/informers/factory.go:160\\\\nI0310 18:50:12.991923 7138 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0310 18:50:12.992087 7138 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0310 18:50:12.992244 7138 reflector.go:311] Stopping 
reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0310 18:50:12.992318 7138 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0310 18:50:12.992475 7138 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0310 18:50:12.992661 7138 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0310 18:50:12.992877 7138 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0310 18:50:12.993154 7138 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0310 18:50:12.993240 7138 factory.go:656] Stopping watch factory\\\\nI0310 18:50:12.993262 7138 ovnkube.go:599] Stopped ovnkube\\\\nI0310 18:50:1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-10T18:50:12Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e8e062dac4eaf569540cc811b67cbe9e8c4a53204ea173373b5cc2e315b87c9\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-10T18:50:43Z\\\",\\\"message\\\":\\\"run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook 
\\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:50:42Z is after 2025-08-24T17:21:41Z]\\\\nI0310 18:50:43.003048 7488 services_controller.go:454] Service openshift-operator-lifecycle-manager/catalog-operator-metrics for network=default has 1 cluster-wide, 0 per-node configs, 0 template configs, making 1 (cluster) 0 (per node) and 0 (template) load balancers\\\\nI0310 18:50:43.003062 7488 obj_retry.go:365] Adding new object: *v1.Pod openshift-image-registry/node-ca-pzmsp\\\\nI0310 18:50:43.003094 7488 ovn.go:134] Ensuring zone local for Pod openshift-image-registry/node-ca-pzmsp in node crc\\\\nI0310 18:50:43.001953 7488 event.go:377] Event(v1.ObjectRefere\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-10T18:50:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-c
ni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwtwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2cf2a1cfc3438718bb50a53445443bc0251f2cc83f903fca1cccd1048c8f1c20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"n
ame\\\":\\\"kube-api-access-fwtwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09c620ba70d91a84cf6910e413d790eee8d6427dec9a39be3b706400fcaab656\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09c620ba70d91a84cf6910e413d790eee8d6427dec9a39be3b706400fcaab656\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:49:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-10T18:49:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwtwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:49:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-s2l62\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:50:43Z is after 2025-08-24T17:21:41Z" Mar 10 18:50:43 crc kubenswrapper[4861]: I0310 18:50:43.986647 4861 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-multus/network-metrics-daemon-2rvxn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c06e51d0-e817-41ac-9d69-3ef2099f8ba8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7cjz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7cjz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:49:34Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-2rvxn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:50:43Z is after 2025-08-24T17:21:41Z" Mar 10 18:50:44 crc 
kubenswrapper[4861]: I0310 18:50:44.002129 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a964dbf-d067-436c-9a8b-496fc69b0584\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://716832664d1be93a0faa28c9e6260cbda820fe51b3295faf5dc0a852b242d62d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\
":\\\"cri-o://a060129f36e562bcd541215d99123f79a1b0216c8c864b21b72c98fda5069040\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a060129f36e562bcd541215d99123f79a1b0216c8c864b21b72c98fda5069040\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:47:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-10T18:47:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:47:37Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:50:44Z is after 2025-08-24T17:21:41Z" Mar 10 18:50:44 crc kubenswrapper[4861]: I0310 18:50:44.018365 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c65aa66-5db2-421b-ad46-0ff1e2b1cb22\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://52e87225b434b0800764a5c2306d8079c44bff105d02de78ab085b434f56031f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a8a6f58ea1d180f50a7ffde2b16f470901281a847bd85cc0bc8a62bbf9f8e70\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://484df0ad2e71b2faec0ed53537512b115e6d4ab7cd3212cd29309538bd013c51\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2dcc6ee4908d27fc63eb33149c6db9570b8524aab71a27fbb1e5c2fe4e97c52\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1665ca49c2c451e187b70bfc13ce0034d2c07b92943b18e77ae09cd6e5505557\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-10T18:48:50Z\\\"
,\\\"message\\\":\\\"le observer\\\\nW0310 18:48:50.587139 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0310 18:48:50.587315 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0310 18:48:50.588407 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4176947974/tls.crt::/tmp/serving-cert-4176947974/tls.key\\\\\\\"\\\\nI0310 18:48:50.773986 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0310 18:48:50.776438 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0310 18:48:50.776455 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0310 18:48:50.776477 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0310 18:48:50.776482 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0310 18:48:50.783076 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0310 18:48:50.783088 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0310 18:48:50.783118 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0310 18:48:50.783134 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0310 18:48:50.783146 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0310 18:48:50.783157 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0310 18:48:50.783167 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0310 18:48:50.783173 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0310 18:48:50.784467 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-10T18:48:50Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44dde00a3ae562bbb5504d299475795cc38b22c2b6decba2ff15067bce7436df\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:39Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a14137bdfec242e37af20a572af2edea25fb1d8a1f9708f8d0193d1a7675b1bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a14137bdfec242e37af20a572af2edea25f
b1d8a1f9708f8d0193d1a7675b1bc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:47:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-10T18:47:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:47:37Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:50:44Z is after 2025-08-24T17:21:41Z" Mar 10 18:50:44 crc kubenswrapper[4861]: I0310 18:50:44.036644 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22ae4cf4d7c980d5db6180fb412216b50b2ddb8f25ea2a5fb1e034d47ddfafac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-03-10T18:50:44Z is after 2025-08-24T17:21:41Z" Mar 10 18:50:44 crc kubenswrapper[4861]: I0310 18:50:44.056012 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:50:44Z is after 2025-08-24T17:21:41Z" Mar 10 18:50:44 crc kubenswrapper[4861]: I0310 18:50:44.072972 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rmmgv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e4302e6-1187-4371-8523-a96ec8032e74\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9439c03c3e58276508f7abe48482c4c69521049dbad5ea7a2e7e7379c8c58914\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vh2b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://77513d5a93f447545a8b2aef460c75d3d73de
32163650349268f203e1ade4b4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vh2b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:49:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-rmmgv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:50:44Z is after 2025-08-24T17:21:41Z" Mar 10 18:50:44 crc kubenswrapper[4861]: I0310 18:50:44.091885 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3fabebda74f7f7605bc2f261bbf64acac1fe4ec65a267beef027413b454bee3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dbb86a65e5c4fda1cd29396eb9ad02739f5123519d07aef2eea3354f4a65571d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:50:44Z is after 2025-08-24T17:21:41Z" Mar 10 18:50:44 crc kubenswrapper[4861]: I0310 18:50:44.108448 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a665f7a5f04a80f43377d87b895e0d95adae296d12f0826c3d97323e145d602\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-03-10T18:50:44Z is after 2025-08-24T17:21:41Z" Mar 10 18:50:44 crc kubenswrapper[4861]: I0310 18:50:44.127063 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-6lblg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d1c251f4-6539-4aa1-8979-47e74495aca3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:50:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:50:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://691c0ec6cd22d6bd4488798ae0a67476744abea7bf6247f7176cad5d37eef07c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3736c6e9da4ea1e91d7046c054bf885a5319e339f205e71fde5cc9cfa5d630ee\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-10T18:50:32Z\\\",\\\"message\\\":\\\"2026-03-10T18:49:47+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to 
/host/opt/cni/bin/upgrade_5fec7ce1-964a-40de-8918-768dde436f94\\\\n2026-03-10T18:49:47+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_5fec7ce1-964a-40de-8918-768dde436f94 to /host/opt/cni/bin/\\\\n2026-03-10T18:49:47Z [verbose] multus-daemon started\\\\n2026-03-10T18:49:47Z [verbose] Readiness Indicator file check\\\\n2026-03-10T18:50:32Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-10T18:49:46Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:50:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.
d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2gvk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:49:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-6lblg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:50:44Z is after 2025-08-24T17:21:41Z" Mar 10 18:50:44 crc kubenswrapper[4861]: I0310 18:50:44.140864 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pzmsp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"66631e59-ce5c-44de-8a4d-37eb82acf997\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1293bf47ec5a041886f7065f624cec3b882bb62394a2c448e138d5f7bc3a1c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pm4t5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:49:27Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pzmsp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:50:44Z is after 2025-08-24T17:21:41Z" Mar 10 18:50:44 crc kubenswrapper[4861]: I0310 18:50:44.163511 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-j2s27" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"391f4bfa-b94c-4b25-8f06-a2f19f912194\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bbf0e311a00fea576e9f2c739cd2d87aedfc06aaef5cf0c68374bd67241888b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release
-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlddg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02cd3948c6ba0e290e32d64b2390da20510d66f20771b9bb0b98957e8900b124\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://02cd3948c6ba0e290e32d64b2390da20510d66f20771b9bb0b98957e8900b124\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:49:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-10T18:49:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlddg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c010603b11c
ab781dd3c14d84121d60b8115bb415df567ef1c267406b9e6988\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c010603b11cab781dd3c14d84121d60b8115bb415df567ef1c267406b9e6988\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:49:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-10T18:49:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlddg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1ba322572d96fe13a88815151416495decdd5f486eb30b3fd4345040c5b9d1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c1ba322572d9
6fe13a88815151416495decdd5f486eb30b3fd4345040c5b9d1d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:49:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-10T18:49:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlddg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f24e74b8fb41e5e765f1ba84b4f1efce28077537df837e4cc308e70b16139732\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f24e74b8fb41e5e765f1ba84b4f1efce28077537df837e4cc308e70b16139732\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:49:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-10T18:49:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlddg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\
\\"}]},{\\\"containerID\\\":\\\"cri-o://5f43980d19ca7b9caf61de8f3257dff536ff78a4cf530f6ec3cb7c91f1a3b5af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5f43980d19ca7b9caf61de8f3257dff536ff78a4cf530f6ec3cb7c91f1a3b5af\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:49:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-10T18:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlddg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2779899b3339467fdfac0db5ea7b8e4f306ca22205f2c44cc85a2df7fc2cc066\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2779899b3339467fdfac0db5ea7b8e4f306ca22205f2c44cc85a2df7fc2cc066\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"202
6-03-10T18:49:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-10T18:49:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlddg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:49:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-j2s27\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:50:44Z is after 2025-08-24T17:21:41Z" Mar 10 18:50:44 crc kubenswrapper[4861]: I0310 18:50:44.181680 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"745fc7ce-a025-4236-a613-2d2c1661aa2e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://098109abecb73d2ad721a447bdfff2c9e9c2d24b969013d6108b0ac9d1d61e01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc83538f8ad05533765a1730072969d529579cd76ff33a77dc49b0bff15caad5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://02e4029a918a6ce3cba59cacb37169816c0ce5b734d46605756ede5985ccd686\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3cb1fb5ba7aa6d8d7184d85c870898f3b51dd4910da9346b24b7d63f952467d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://3cb1fb5ba7aa6d8d7184d85c870898f3b51dd4910da9346b24b7d63f952467d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:47:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-10T18:47:38Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:47:37Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:50:44Z is after 2025-08-24T17:21:41Z" Mar 10 18:50:44 crc kubenswrapper[4861]: I0310 18:50:44.214786 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"952bf490-6587-4240-a832-feac082e7775\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf6fe1422055e59455ca71c1a22213f55312c80667526cf13dbf61f7ccea7c75\\\",\\\"image\\\":\\\"quay.io/op
enshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd7523ddf0ac38e3b59b767d7ef95f452f24c5914734d6f1ac0187f1bede5fcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b1d8f9a97293ae86fe0b8c9ab76600caf04291ec3ddd2e47a071805bef675da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://efd5039046658f4b595c747b6434cb0ef3befbf75874a6dab629cdbd6634524e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6234508294e7f87d3a8da0d3a1d8ecb96fffc3dbd5974e40a3bb7ceeee0d2a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\
\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34f62205737b2bb279cc7a0e2ffee213392c8135cca742b76e6f7ed44ddca754\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34f62205737b2bb279cc7a0e2ffee213392c8135cca742b76e6f7ed44ddca754\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:47:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-10T18:47:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://49cc08d4ed6e0fd87f4c957414e510f99511b9654a72324b707c165206c4979e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49cc08d4ed6e0fd87f4c957414e510f99511b9654a72324b707c165206c4979e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:47:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-10T18:47:39Z\\\"}}},{\\\"co
ntainerID\\\":\\\"cri-o://0995183765c106ba8369d3c05ac572596329eb1978876b770e327a430963ba9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0995183765c106ba8369d3c05ac572596329eb1978876b770e327a430963ba9b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:47:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-10T18:47:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:47:37Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:50:44Z is after 2025-08-24T17:21:41Z" Mar 10 18:50:44 crc kubenswrapper[4861]: I0310 18:50:44.231539 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:50:44Z is after 2025-08-24T17:21:41Z" Mar 10 18:50:44 crc kubenswrapper[4861]: I0310 18:50:44.249975 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qttbr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"771189c2-452d-4204-a0b7-abfe9ba62bd0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10b29575454354d2a034781fdd40e9972ddd08328eea1a3bb89377fcab4181cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tng72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11c0ae40f0d210a82350ba0ada7a3c9f35595826
a4e6f3d5619230238d00111b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tng72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:49:21Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qttbr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:50:44Z is after 2025-08-24T17:21:41Z" Mar 10 18:50:44 crc kubenswrapper[4861]: I0310 18:50:44.877867 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-s2l62_be820cd7-b3a7-4183-a408-67151247b6ee/ovnkube-controller/3.log" Mar 10 18:50:44 crc kubenswrapper[4861]: I0310 18:50:44.883475 4861 scope.go:117] "RemoveContainer" containerID="4e8e062dac4eaf569540cc811b67cbe9e8c4a53204ea173373b5cc2e315b87c9" Mar 10 18:50:44 crc kubenswrapper[4861]: E0310 18:50:44.883911 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-s2l62_openshift-ovn-kubernetes(be820cd7-b3a7-4183-a408-67151247b6ee)\"" pod="openshift-ovn-kubernetes/ovnkube-node-s2l62" podUID="be820cd7-b3a7-4183-a408-67151247b6ee" Mar 10 18:50:44 crc kubenswrapper[4861]: I0310 18:50:44.904265 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"046b6ea2-2e19-4917-953a-eb8aca6d80cb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df0d545a88d84a8015bafb1061f16f446f63c97065dc88c869a33a24b8e3c22e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da2dbdc794693bb8da08c0fcc84531d65b468d135904b7055fb926fbd0cce95d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03
-10T18:48:03Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0310 18:47:39.105265 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0310 18:47:39.107559 1 observer_polling.go:159] Starting file observer\\\\nI0310 18:47:39.140119 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0310 18:47:39.145777 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0310 18:48:03.686072 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0310 18:48:03.686208 1 cmd.go:179] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:48:03Z is after 
2026-02-23T05:33:13Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-10T18:47:38Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:48:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://73d88019bcd40296d2d693dfb1ce3bacd2e94ca10a114a00c75392df04099b33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://31bbed2d81ace88f31b763f3b4bed57db657bf9e78413b57b2f279b408e0b848\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:38Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b7156e2106372814a5e2b2816352ee308bb4fdffef5efe8da5bc39d3bc29398\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:47:37Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:50:44Z is after 2025-08-24T17:21:41Z" Mar 10 18:50:44 crc kubenswrapper[4861]: I0310 18:50:44.925096 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:50:44Z is after 2025-08-24T17:21:41Z" Mar 10 18:50:44 crc kubenswrapper[4861]: I0310 18:50:44.943338 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-b87lw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e474cdc9-b374-49a6-aece-afa19f8d5ee6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f06ce6a80200c39385f53697c2f3fbbec862effc04ddf9e0499d9c20761a54e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5t7pf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:49:21Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-b87lw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:50:44Z is after 2025-08-24T17:21:41Z" Mar 10 18:50:44 crc kubenswrapper[4861]: I0310 18:50:44.991175 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:50:44Z is after 2025-08-24T17:21:41Z" Mar 10 18:50:45 crc kubenswrapper[4861]: I0310 18:50:45.024986 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-s2l62" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"be820cd7-b3a7-4183-a408-67151247b6ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:22Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22c9526135e4d6c3ef5cdf25b06f60556a876ace6c81593534be08cfd6a54cdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwtwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7bd3993ae4ddabc6c06a91127afc341760a07401ce3a409612824c0045bb6f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwtwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://edf854dc22368e4b8f76e0111b30784f22832fc69661b4db8d2c9c33aa553773\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwtwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e44e4837f8dba12dfb18ef2200a19f696222668eb9b10819d1c3b442a28f5e32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwtwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ae3cd6b9ef5ede85a70dd7bcf4eb260cc357bfceeb571904e68788eaba0709c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwtwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be4f9c8096f4981a65522e8ee451980e580153c1f5c65c736655fb94593dbd97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwtwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e8e062dac4eaf569540cc811b67cbe9e8c4a53204ea173373b5cc2e315b87c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e8e062dac4eaf569540cc811b67cbe9e8c4a53204ea173373b5cc2e315b87c9\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-10T18:50:43Z\\\",\\\"message\\\":\\\"run ovnkube: [failed to start network controller: failed to start default network 
controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:50:42Z is after 2025-08-24T17:21:41Z]\\\\nI0310 18:50:43.003048 7488 services_controller.go:454] Service openshift-operator-lifecycle-manager/catalog-operator-metrics for network=default has 1 cluster-wide, 0 per-node configs, 0 template configs, making 1 (cluster) 0 (per node) and 0 (template) load balancers\\\\nI0310 18:50:43.003062 7488 obj_retry.go:365] Adding new object: *v1.Pod openshift-image-registry/node-ca-pzmsp\\\\nI0310 18:50:43.003094 7488 ovn.go:134] Ensuring zone local for Pod openshift-image-registry/node-ca-pzmsp in node crc\\\\nI0310 18:50:43.001953 7488 event.go:377] Event(v1.ObjectRefere\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-10T18:50:42Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-s2l62_openshift-ovn-kubernetes(be820cd7-b3a7-4183-a408-67151247b6ee)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwtwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2cf2a1cfc3438718bb50a53445443bc0251f2cc83f903fca1cccd1048c8f1c20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwtwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09c620ba70d91a84cf6910e413d790eee8d6427dec9a39be3b706400fcaab656\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09c620ba70d91a84cf
6910e413d790eee8d6427dec9a39be3b706400fcaab656\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:49:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-10T18:49:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwtwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:49:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-s2l62\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:50:45Z is after 2025-08-24T17:21:41Z" Mar 10 18:50:45 crc kubenswrapper[4861]: I0310 18:50:45.041952 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-2rvxn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c06e51d0-e817-41ac-9d69-3ef2099f8ba8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:34Z\\\",\\\"message\\\":\\\"containers 
with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7cjz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7cjz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:49:34Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-2rvxn\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:50:45Z is after 2025-08-24T17:21:41Z" Mar 10 18:50:45 crc kubenswrapper[4861]: I0310 18:50:45.058000 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a964dbf-d067-436c-9a8b-496fc69b0584\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://716832664d1be93a0faa28c9e6260cbda820fe51b3295faf5dc0a852b242d62d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:39Z\\\"}},\\
\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a060129f36e562bcd541215d99123f79a1b0216c8c864b21b72c98fda5069040\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a060129f36e562bcd541215d99123f79a1b0216c8c864b21b72c98fda5069040\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:47:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-10T18:47:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:47:37Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:50:45Z is after 2025-08-24T17:21:41Z" Mar 10 18:50:45 crc kubenswrapper[4861]: I0310 18:50:45.078393 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c65aa66-5db2-421b-ad46-0ff1e2b1cb22\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://52e87225b434b0800764a5c2306d8079c44bff105d02de78ab085b434f56031f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a8a6f58ea1d180f50a7ffde2b16f470901281a847bd85cc0bc8a62bbf9f8e70\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://484df0ad2e71b2faec0ed53537512b115e6d4ab7cd3212cd29309538bd013c51\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2dcc6ee4908d27fc63eb33149c6db9570b8524aab71a27fbb1e5c2fe4e97c52\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1665ca49c2c451e187b70bfc13ce0034d2c07b92943b18e77ae09cd6e5505557\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-10T18:48:50Z\\\"
,\\\"message\\\":\\\"le observer\\\\nW0310 18:48:50.587139 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0310 18:48:50.587315 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0310 18:48:50.588407 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4176947974/tls.crt::/tmp/serving-cert-4176947974/tls.key\\\\\\\"\\\\nI0310 18:48:50.773986 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0310 18:48:50.776438 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0310 18:48:50.776455 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0310 18:48:50.776477 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0310 18:48:50.776482 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0310 18:48:50.783076 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0310 18:48:50.783088 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0310 18:48:50.783118 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0310 18:48:50.783134 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0310 18:48:50.783146 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0310 18:48:50.783157 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0310 18:48:50.783167 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0310 18:48:50.783173 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0310 18:48:50.784467 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-10T18:48:50Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44dde00a3ae562bbb5504d299475795cc38b22c2b6decba2ff15067bce7436df\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:39Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a14137bdfec242e37af20a572af2edea25fb1d8a1f9708f8d0193d1a7675b1bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a14137bdfec242e37af20a572af2edea25f
b1d8a1f9708f8d0193d1a7675b1bc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:47:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-10T18:47:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:47:37Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:50:45Z is after 2025-08-24T17:21:41Z" Mar 10 18:50:45 crc kubenswrapper[4861]: I0310 18:50:45.099411 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22ae4cf4d7c980d5db6180fb412216b50b2ddb8f25ea2a5fb1e034d47ddfafac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-03-10T18:50:45Z is after 2025-08-24T17:21:41Z" Mar 10 18:50:45 crc kubenswrapper[4861]: I0310 18:50:45.115162 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pzmsp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"66631e59-ce5c-44de-8a4d-37eb82acf997\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1293bf47ec5a041886f7065f624cec3b882bb62394a2c448e138d5f7bc3a1c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\
",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pm4t5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:49:27Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pzmsp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:50:45Z is after 2025-08-24T17:21:41Z" Mar 10 18:50:45 crc kubenswrapper[4861]: I0310 18:50:45.133199 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rmmgv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e4302e6-1187-4371-8523-a96ec8032e74\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9439c03c3e58276508f7abe48482c4c69521049dbad5ea7a2e7e7379c8c58914\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vh2b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://77513d5a93f447545a8b2aef460c75d3d73de
32163650349268f203e1ade4b4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vh2b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:49:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-rmmgv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:50:45Z is after 2025-08-24T17:21:41Z" Mar 10 18:50:45 crc kubenswrapper[4861]: I0310 18:50:45.151369 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3fabebda74f7f7605bc2f261bbf64acac1fe4ec65a267beef027413b454bee3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dbb86a65e5c4fda1cd29396eb9ad02739f5123519d07aef2eea3354f4a65571d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:50:45Z is after 2025-08-24T17:21:41Z" Mar 10 18:50:45 crc kubenswrapper[4861]: I0310 18:50:45.168894 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a665f7a5f04a80f43377d87b895e0d95adae296d12f0826c3d97323e145d602\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-03-10T18:50:45Z is after 2025-08-24T17:21:41Z" Mar 10 18:50:45 crc kubenswrapper[4861]: I0310 18:50:45.185879 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-6lblg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d1c251f4-6539-4aa1-8979-47e74495aca3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:50:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:50:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://691c0ec6cd22d6bd4488798ae0a67476744abea7bf6247f7176cad5d37eef07c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3736c6e9da4ea1e91d7046c054bf885a5319e339f205e71fde5cc9cfa5d630ee\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-10T18:50:32Z\\\",\\\"message\\\":\\\"2026-03-10T18:49:47+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to 
/host/opt/cni/bin/upgrade_5fec7ce1-964a-40de-8918-768dde436f94\\\\n2026-03-10T18:49:47+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_5fec7ce1-964a-40de-8918-768dde436f94 to /host/opt/cni/bin/\\\\n2026-03-10T18:49:47Z [verbose] multus-daemon started\\\\n2026-03-10T18:49:47Z [verbose] Readiness Indicator file check\\\\n2026-03-10T18:50:32Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-10T18:49:46Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:50:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.
d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2gvk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:49:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-6lblg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:50:45Z is after 2025-08-24T17:21:41Z" Mar 10 18:50:45 crc kubenswrapper[4861]: I0310 18:50:45.201495 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qttbr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"771189c2-452d-4204-a0b7-abfe9ba62bd0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10b29575454354d2a034781fdd40e9972ddd08328eea1a3bb89377fcab4181cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tng72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11c0ae40f0d210a82350ba0ada7a3c9f35595826
a4e6f3d5619230238d00111b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tng72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:49:21Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qttbr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:50:45Z is after 2025-08-24T17:21:41Z" Mar 10 18:50:45 crc kubenswrapper[4861]: I0310 18:50:45.223303 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-j2s27" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"391f4bfa-b94c-4b25-8f06-a2f19f912194\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bbf0e311a00fea576e9f2c739cd2d87aedfc06aaef5cf0c68374bd67241888b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlddg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02cd3948c6ba0e290e32d64b2390da20510d66f20771b9bb0b98957e8900b124\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://02cd3948c6ba0e290e32d64b2390da20510d66f20771b9bb0b98957e8900b124\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:49:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-10T18:49:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlddg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c010603b11cab781dd3c14d84121d60b8115bb415df567ef1c267406b9e6988\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c010603b11cab781dd3c14d84121d60b8115bb415df567ef1c267406b9e6988\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:49:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-10T18:49:37Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlddg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1ba322572d96fe13a88815151416495decdd5f486eb30b3fd4345040c5b9d1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c1ba322572d96fe13a88815151416495decdd5f486eb30b3fd4345040c5b9d1d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:49:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-10T18:49:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlddg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f24e7
4b8fb41e5e765f1ba84b4f1efce28077537df837e4cc308e70b16139732\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f24e74b8fb41e5e765f1ba84b4f1efce28077537df837e4cc308e70b16139732\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:49:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-10T18:49:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlddg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f43980d19ca7b9caf61de8f3257dff536ff78a4cf530f6ec3cb7c91f1a3b5af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5f43980d19ca7b9caf61de8f3257dff536ff78a4cf530f6ec3cb7c91f1a3b5af\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:49:41Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-03-10T18:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlddg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2779899b3339467fdfac0db5ea7b8e4f306ca22205f2c44cc85a2df7fc2cc066\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2779899b3339467fdfac0db5ea7b8e4f306ca22205f2c44cc85a2df7fc2cc066\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:49:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-10T18:49:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlddg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:49:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-j2s27\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:50:45Z is after 2025-08-24T17:21:41Z" Mar 10 18:50:45 crc kubenswrapper[4861]: I0310 18:50:45.238017 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"745fc7ce-a025-4236-a613-2d2c1661aa2e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://098109abecb73d2ad721a447bdfff2c9e9c2d24b969013d6108b0ac9d1d61e01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mou
ntPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc83538f8ad05533765a1730072969d529579cd76ff33a77dc49b0bff15caad5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://02e4029a918a6ce3cba59cacb37169816c0ce5b734d46605756ede5985ccd686\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3cb1fb5ba7aa6d8d7184d85c8
70898f3b51dd4910da9346b24b7d63f952467d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3cb1fb5ba7aa6d8d7184d85c870898f3b51dd4910da9346b24b7d63f952467d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:47:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-10T18:47:38Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:47:37Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:50:45Z is after 2025-08-24T17:21:41Z" Mar 10 18:50:45 crc kubenswrapper[4861]: I0310 18:50:45.267683 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"952bf490-6587-4240-a832-feac082e7775\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf6fe1422055e59455ca71c1a22213f55312c80667526cf13dbf61f7ccea7c75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd7523ddf0ac38e3b59b767d7ef95f452f24c5914734d6f1ac0187f1bede5fcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b1d8f9a97293ae86fe0b8c9ab76600caf04291ec3ddd2e47a071805bef675da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://efd5039046658f4b595c747b6434cb0ef3befbf75874a6dab629cdbd6634524e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6234508294e7f87d3a8da0d3a1d8ecb96fffc3dbd5974e40a3bb7ceeee0d2a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34f62205737b2bb279cc7a0e2ffee213392c8135cca742b76e6f7ed44ddca754\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34f62205737b2bb279cc7a0e2ffee213392c8135cca742b76e6f7ed44ddca754\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-03-10T18:47:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-10T18:47:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://49cc08d4ed6e0fd87f4c957414e510f99511b9654a72324b707c165206c4979e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49cc08d4ed6e0fd87f4c957414e510f99511b9654a72324b707c165206c4979e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:47:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-10T18:47:39Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0995183765c106ba8369d3c05ac572596329eb1978876b770e327a430963ba9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0995183765c106ba8369d3c05ac572596329eb1978876b770e327a430963ba9b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:47:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-10T18:47:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:47:37Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:50:45Z is after 2025-08-24T17:21:41Z" Mar 10 18:50:45 crc kubenswrapper[4861]: I0310 18:50:45.288530 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:50:45Z is after 2025-08-24T17:21:41Z" Mar 10 18:50:45 crc kubenswrapper[4861]: I0310 18:50:45.957586 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 10 18:50:45 crc kubenswrapper[4861]: I0310 18:50:45.957702 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-2rvxn" Mar 10 18:50:45 crc kubenswrapper[4861]: I0310 18:50:45.957744 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 10 18:50:45 crc kubenswrapper[4861]: E0310 18:50:45.957834 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 10 18:50:45 crc kubenswrapper[4861]: E0310 18:50:45.957990 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2rvxn" podUID="c06e51d0-e817-41ac-9d69-3ef2099f8ba8" Mar 10 18:50:45 crc kubenswrapper[4861]: I0310 18:50:45.958080 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 10 18:50:45 crc kubenswrapper[4861]: E0310 18:50:45.958157 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 10 18:50:45 crc kubenswrapper[4861]: E0310 18:50:45.958238 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 10 18:50:46 crc kubenswrapper[4861]: I0310 18:50:46.983093 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22ae4cf4d7c980d5db6180fb412216b50b2ddb8f25ea2a5fb1e034d47ddfafac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true
,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:50:46Z is after 2025-08-24T17:21:41Z" Mar 10 18:50:47 crc kubenswrapper[4861]: I0310 18:50:47.002814 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:50:47Z is after 2025-08-24T17:21:41Z" Mar 10 18:50:47 crc kubenswrapper[4861]: I0310 18:50:47.033129 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-s2l62" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"be820cd7-b3a7-4183-a408-67151247b6ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:22Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22c9526135e4d6c3ef5cdf25b06f60556a876ace6c81593534be08cfd6a54cdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwtwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7bd3993ae4ddabc6c06a91127afc341760a07401ce3a409612824c0045bb6f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwtwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://edf854dc22368e4b8f76e0111b30784f22832fc69661b4db8d2c9c33aa553773\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwtwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e44e4837f8dba12dfb18ef2200a19f696222668eb9b10819d1c3b442a28f5e32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:47Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwtwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ae3cd6b9ef5ede85a70dd7bcf4eb260cc357bfceeb571904e68788eaba0709c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwtwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be4f9c8096f4981a65522e8ee451980e580153c1f5c65c736655fb94593dbd97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwtwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e8e062dac4eaf569540cc811b67cbe9e8c4a53204ea173373b5cc2e315b87c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e8e062dac4eaf569540cc811b67cbe9e8c4a53204ea173373b5cc2e315b87c9\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-10T18:50:43Z\\\",\\\"message\\\":\\\"run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped 
already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:50:42Z is after 2025-08-24T17:21:41Z]\\\\nI0310 18:50:43.003048 7488 services_controller.go:454] Service openshift-operator-lifecycle-manager/catalog-operator-metrics for network=default has 1 cluster-wide, 0 per-node configs, 0 template configs, making 1 (cluster) 0 (per node) and 0 (template) load balancers\\\\nI0310 18:50:43.003062 7488 obj_retry.go:365] Adding new object: *v1.Pod openshift-image-registry/node-ca-pzmsp\\\\nI0310 18:50:43.003094 7488 ovn.go:134] Ensuring zone local for Pod openshift-image-registry/node-ca-pzmsp in node crc\\\\nI0310 18:50:43.001953 7488 event.go:377] Event(v1.ObjectRefere\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-10T18:50:42Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-s2l62_openshift-ovn-kubernetes(be820cd7-b3a7-4183-a408-67151247b6ee)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwtwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2cf2a1cfc3438718bb50a53445443bc0251f2cc83f903fca1cccd1048c8f1c20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwtwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09c620ba70d91a84cf6910e413d790eee8d6427dec9a39be3b706400fcaab656\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09c620ba70d91a84cf
6910e413d790eee8d6427dec9a39be3b706400fcaab656\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:49:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-10T18:49:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwtwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:49:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-s2l62\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:50:47Z is after 2025-08-24T17:21:41Z" Mar 10 18:50:47 crc kubenswrapper[4861]: I0310 18:50:47.045978 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-2rvxn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c06e51d0-e817-41ac-9d69-3ef2099f8ba8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:34Z\\\",\\\"message\\\":\\\"containers 
with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7cjz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7cjz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:49:34Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-2rvxn\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:50:47Z is after 2025-08-24T17:21:41Z" Mar 10 18:50:47 crc kubenswrapper[4861]: I0310 18:50:47.062085 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a964dbf-d067-436c-9a8b-496fc69b0584\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://716832664d1be93a0faa28c9e6260cbda820fe51b3295faf5dc0a852b242d62d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:39Z\\\"}},\\
\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a060129f36e562bcd541215d99123f79a1b0216c8c864b21b72c98fda5069040\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a060129f36e562bcd541215d99123f79a1b0216c8c864b21b72c98fda5069040\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:47:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-10T18:47:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:47:37Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:50:47Z is after 2025-08-24T17:21:41Z" Mar 10 18:50:47 crc kubenswrapper[4861]: I0310 18:50:47.083910 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c65aa66-5db2-421b-ad46-0ff1e2b1cb22\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://52e87225b434b0800764a5c2306d8079c44bff105d02de78ab085b434f56031f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a8a6f58ea1d180f50a7ffde2b16f470901281a847bd85cc0bc8a62bbf9f8e70\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://484df0ad2e71b2faec0ed53537512b115e6d4ab7cd3212cd29309538bd013c51\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2dcc6ee4908d27fc63eb33149c6db9570b8524aab71a27fbb1e5c2fe4e97c52\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1665ca49c2c451e187b70bfc13ce0034d2c07b92943b18e77ae09cd6e5505557\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-10T18:48:50Z\\\"
,\\\"message\\\":\\\"le observer\\\\nW0310 18:48:50.587139 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0310 18:48:50.587315 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0310 18:48:50.588407 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4176947974/tls.crt::/tmp/serving-cert-4176947974/tls.key\\\\\\\"\\\\nI0310 18:48:50.773986 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0310 18:48:50.776438 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0310 18:48:50.776455 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0310 18:48:50.776477 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0310 18:48:50.776482 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0310 18:48:50.783076 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0310 18:48:50.783088 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0310 18:48:50.783118 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0310 18:48:50.783134 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0310 18:48:50.783146 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0310 18:48:50.783157 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0310 18:48:50.783167 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0310 18:48:50.783173 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0310 18:48:50.784467 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-10T18:48:50Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44dde00a3ae562bbb5504d299475795cc38b22c2b6decba2ff15067bce7436df\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:39Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a14137bdfec242e37af20a572af2edea25fb1d8a1f9708f8d0193d1a7675b1bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a14137bdfec242e37af20a572af2edea25f
b1d8a1f9708f8d0193d1a7675b1bc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:47:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-10T18:47:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:47:37Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:50:47Z is after 2025-08-24T17:21:41Z" Mar 10 18:50:47 crc kubenswrapper[4861]: I0310 18:50:47.106249 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-6lblg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d1c251f4-6539-4aa1-8979-47e74495aca3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:50:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:50:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://691c0ec6cd22d6bd4488798ae0a67476744abea7bf6247f7176cad5d37eef07c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3736c6e9da4ea1e91d7046c054bf885a5319e339f205e71fde5cc9cfa5d630ee\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-10T18:50:32Z\\\",\\\"message\\\":\\\"2026-03-10T18:49:47+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_5fec7ce1-964a-40de-8918-768dde436f94\\\\n2026-03-10T18:49:47+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_5fec7ce1-964a-40de-8918-768dde436f94 to /host/opt/cni/bin/\\\\n2026-03-10T18:49:47Z [verbose] multus-daemon started\\\\n2026-03-10T18:49:47Z [verbose] 
Readiness Indicator file check\\\\n2026-03-10T18:50:32Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-10T18:49:46Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:50:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2gvk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:49:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-6lblg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:50:47Z is after 2025-08-24T17:21:41Z" Mar 10 18:50:47 crc kubenswrapper[4861]: I0310 18:50:47.123874 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pzmsp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"66631e59-ce5c-44de-8a4d-37eb82acf997\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1293bf47ec5a0418
86f7065f624cec3b882bb62394a2c448e138d5f7bc3a1c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pm4t5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:49:27Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pzmsp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:50:47Z is after 2025-08-24T17:21:41Z" Mar 10 18:50:47 crc kubenswrapper[4861]: E0310 18:50:47.131415 4861 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Mar 10 18:50:47 crc kubenswrapper[4861]: I0310 18:50:47.143384 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rmmgv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e4302e6-1187-4371-8523-a96ec8032e74\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9439c03c3e58276508f7abe48482c4c69521049dbad5ea7a2e7e7379c8c58914\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run
/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vh2b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://77513d5a93f447545a8b2aef460c75d3d73de32163650349268f203e1ade4b4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vh2b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:49:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-rmmgv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:50:47Z is after 2025-08-24T17:21:41Z" Mar 10 18:50:47 crc kubenswrapper[4861]: I0310 18:50:47.165790 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3fabebda74f7f7605bc2f261bbf64acac1fe4ec65a267beef027413b454bee3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dbb86a65e5c4fda1cd29396eb9ad02739f5123519d07aef2eea3354f4a65571d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:50:47Z is after 2025-08-24T17:21:41Z" Mar 10 18:50:47 crc kubenswrapper[4861]: I0310 18:50:47.184791 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a665f7a5f04a80f43377d87b895e0d95adae296d12f0826c3d97323e145d602\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-03-10T18:50:47Z is after 2025-08-24T17:21:41Z" Mar 10 18:50:47 crc kubenswrapper[4861]: I0310 18:50:47.204036 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:50:47Z is after 2025-08-24T17:21:41Z" Mar 10 18:50:47 crc kubenswrapper[4861]: I0310 18:50:47.224485 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qttbr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"771189c2-452d-4204-a0b7-abfe9ba62bd0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10b29575454354d2a034781fdd40e9972ddd08328eea1a3bb89377fcab4181cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tng72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11c0ae40f0d210a82350ba0ada7a3c9f35595826
a4e6f3d5619230238d00111b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tng72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:49:21Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qttbr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:50:47Z is after 2025-08-24T17:21:41Z" Mar 10 18:50:47 crc kubenswrapper[4861]: I0310 18:50:47.249853 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-j2s27" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"391f4bfa-b94c-4b25-8f06-a2f19f912194\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bbf0e311a00fea576e9f2c739cd2d87aedfc06aaef5cf0c68374bd67241888b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlddg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02cd3948c6ba0e290e32d64b2390da20510d66f20771b9bb0b98957e8900b124\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://02cd3948c6ba0e290e32d64b2390da20510d66f20771b9bb0b98957e8900b124\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:49:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-10T18:49:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlddg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c010603b11cab781dd3c14d84121d60b8115bb415df567ef1c267406b9e6988\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c010603b11cab781dd3c14d84121d60b8115bb415df567ef1c267406b9e6988\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:49:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-10T18:49:37Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlddg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1ba322572d96fe13a88815151416495decdd5f486eb30b3fd4345040c5b9d1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c1ba322572d96fe13a88815151416495decdd5f486eb30b3fd4345040c5b9d1d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:49:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-10T18:49:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlddg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f24e7
4b8fb41e5e765f1ba84b4f1efce28077537df837e4cc308e70b16139732\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f24e74b8fb41e5e765f1ba84b4f1efce28077537df837e4cc308e70b16139732\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:49:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-10T18:49:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlddg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f43980d19ca7b9caf61de8f3257dff536ff78a4cf530f6ec3cb7c91f1a3b5af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5f43980d19ca7b9caf61de8f3257dff536ff78a4cf530f6ec3cb7c91f1a3b5af\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:49:41Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-03-10T18:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlddg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2779899b3339467fdfac0db5ea7b8e4f306ca22205f2c44cc85a2df7fc2cc066\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2779899b3339467fdfac0db5ea7b8e4f306ca22205f2c44cc85a2df7fc2cc066\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:49:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-10T18:49:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlddg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:49:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-j2s27\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:50:47Z is after 2025-08-24T17:21:41Z" Mar 10 18:50:47 crc kubenswrapper[4861]: I0310 18:50:47.271074 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"745fc7ce-a025-4236-a613-2d2c1661aa2e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://098109abecb73d2ad721a447bdfff2c9e9c2d24b969013d6108b0ac9d1d61e01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mou
ntPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc83538f8ad05533765a1730072969d529579cd76ff33a77dc49b0bff15caad5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://02e4029a918a6ce3cba59cacb37169816c0ce5b734d46605756ede5985ccd686\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3cb1fb5ba7aa6d8d7184d85c8
70898f3b51dd4910da9346b24b7d63f952467d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3cb1fb5ba7aa6d8d7184d85c870898f3b51dd4910da9346b24b7d63f952467d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:47:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-10T18:47:38Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:47:37Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:50:47Z is after 2025-08-24T17:21:41Z" Mar 10 18:50:47 crc kubenswrapper[4861]: I0310 18:50:47.281515 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:50:47 crc kubenswrapper[4861]: I0310 18:50:47.281557 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:50:47 crc kubenswrapper[4861]: I0310 18:50:47.281571 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:50:47 crc kubenswrapper[4861]: I0310 18:50:47.281587 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 10 18:50:47 crc 
kubenswrapper[4861]: I0310 18:50:47.281599 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:50:47Z","lastTransitionTime":"2026-03-10T18:50:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 10 18:50:47 crc kubenswrapper[4861]: E0310 18:50:47.300558 4861 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-10T18:50:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-10T18:50:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-10T18:50:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-10T18:50:47Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-10T18:50:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-10T18:50:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-10T18:50:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-10T18:50:47Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb617
3ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"reg
istry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"19532032-9073-404f-bda8-c4343aa30670\\\",\\\"systemUUID\\\":\\\"b4ef8d49-23f5-4cae-bbac-08586c607b9d\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:50:47Z is after 2025-08-24T17:21:41Z" Mar 10 18:50:47 crc kubenswrapper[4861]: I0310 18:50:47.304955 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"952bf490-6587-4240-a832-feac082e7775\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf6fe1422055e59455ca71c1a22213f55312c80667526cf13dbf61f7ccea7c75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd7523ddf0ac38e3b59b767d7ef95f452f24c5914734d6f1ac0187f1bede5fcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b1d8f9a97293ae86fe0b8c9ab76600caf04291ec3ddd2e47a071805bef675da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b2670
2f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://efd5039046658f4b595c747b6434cb0ef3befbf75874a6dab629cdbd6634524e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6234508294e7f87d3a8da0d3a1d8ecb96fffc3dbd5974e40a3bb7ceeee0d2a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPat
h\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34f62205737b2bb279cc7a0e2ffee213392c8135cca742b76e6f7ed44ddca754\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34f62205737b2bb279cc7a0e2ffee213392c8135cca742b76e6f7ed44ddca754\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:47:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-10T18:47:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://49cc08d4ed6e0fd87f4c957414e510f99511b9654a72324b707c165206c4979e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49cc08d4ed6e0fd87f4c957414e510f99511b9654a72324b707c165206c4979e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:47:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-10T18:47:39Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0995183765c10
6ba8369d3c05ac572596329eb1978876b770e327a430963ba9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0995183765c106ba8369d3c05ac572596329eb1978876b770e327a430963ba9b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:47:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-10T18:47:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:47:37Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:50:47Z is after 2025-08-24T17:21:41Z" Mar 10 18:50:47 crc kubenswrapper[4861]: I0310 18:50:47.306219 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:50:47 crc kubenswrapper[4861]: I0310 18:50:47.306299 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:50:47 crc kubenswrapper[4861]: I0310 18:50:47.306325 4861 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:50:47 crc kubenswrapper[4861]: I0310 18:50:47.306358 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 10 18:50:47 crc kubenswrapper[4861]: I0310 18:50:47.306381 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:50:47Z","lastTransitionTime":"2026-03-10T18:50:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 10 18:50:47 crc kubenswrapper[4861]: I0310 18:50:47.323566 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-b87lw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e474cdc9-b374-49a6-aece-afa19f8d5ee6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f06ce6a80200c39385f53697c2f3fbbec862ef
fc04ddf9e0499d9c20761a54e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5t7pf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:49:21Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-b87lw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:50:47Z is after 2025-08-24T17:21:41Z" Mar 10 18:50:47 crc kubenswrapper[4861]: E0310 18:50:47.328194 4861 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-10T18:50:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-10T18:50:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-10T18:50:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-10T18:50:47Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-10T18:50:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-10T18:50:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-10T18:50:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-10T18:50:47Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"19532032-9073-404f-bda8-c4343aa30670\\\",\\\"systemUUID\\\":\\\"b4ef8d49-23f5-4cae-bbac-08586c607b9d\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:50:47Z is after 2025-08-24T17:21:41Z" Mar 10 18:50:47 crc kubenswrapper[4861]: I0310 18:50:47.333254 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:50:47 crc kubenswrapper[4861]: I0310 18:50:47.333291 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:50:47 crc kubenswrapper[4861]: I0310 18:50:47.333302 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:50:47 crc kubenswrapper[4861]: I0310 18:50:47.333318 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 10 18:50:47 crc kubenswrapper[4861]: I0310 18:50:47.333330 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:50:47Z","lastTransitionTime":"2026-03-10T18:50:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 10 18:50:47 crc kubenswrapper[4861]: I0310 18:50:47.344055 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"046b6ea2-2e19-4917-953a-eb8aca6d80cb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df0d545a88d84a8015bafb1061f16f446f63c97065dc88c869a33a24b8e3c22e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da2dbdc794693bb8da08c0fcc84531d65b468d135904b7055fb926fbd0cce95d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-10T18:48:03Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start 
--config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0310 18:47:39.105265 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0310 18:47:39.107559 1 observer_polling.go:159] Starting file observer\\\\nI0310 18:47:39.140119 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0310 18:47:39.145777 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0310 18:48:03.686072 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0310 18:48:03.686208 1 cmd.go:179] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:48:03Z is after 
2026-02-23T05:33:13Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-10T18:47:38Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:48:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://73d88019bcd40296d2d693dfb1ce3bacd2e94ca10a114a00c75392df04099b33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://31bbed2d81ace88f31b763f3b4bed57db657bf9e78413b57b2f279b408e0b848\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:38Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b7156e2106372814a5e2b2816352ee308bb4fdffef5efe8da5bc39d3bc29398\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:47:37Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:50:47Z is after 2025-08-24T17:21:41Z" Mar 10 18:50:47 crc kubenswrapper[4861]: E0310 18:50:47.351110 4861 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-10T18:50:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-10T18:50:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-10T18:50:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-10T18:50:47Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-10T18:50:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-10T18:50:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-10T18:50:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-10T18:50:47Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"19532032-9073-404f-bda8-c4343aa30670\\\",\\\"systemUUID\\\":\\\"b4ef8d49-23f5-4cae-bbac-08586c607b9d\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:50:47Z is after 2025-08-24T17:21:41Z" Mar 10 18:50:47 crc kubenswrapper[4861]: I0310 18:50:47.355488 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:50:47 crc kubenswrapper[4861]: I0310 18:50:47.355539 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:50:47 crc kubenswrapper[4861]: I0310 18:50:47.355557 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:50:47 crc kubenswrapper[4861]: I0310 18:50:47.355581 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 10 18:50:47 crc kubenswrapper[4861]: I0310 18:50:47.355613 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:50:47Z","lastTransitionTime":"2026-03-10T18:50:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 10 18:50:47 crc kubenswrapper[4861]: I0310 18:50:47.361875 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:50:47Z is after 2025-08-24T17:21:41Z" Mar 10 18:50:47 crc kubenswrapper[4861]: E0310 18:50:47.375097 4861 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-10T18:50:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-10T18:50:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-10T18:50:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-10T18:50:47Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-10T18:50:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-10T18:50:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-10T18:50:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-10T18:50:47Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"19532032-9073-404f-bda8-c4343aa30670\\\",\\\"systemUUID\\\":\\\"b4ef8d49-23f5-4cae-bbac-08586c607b9d\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:50:47Z is after 2025-08-24T17:21:41Z" Mar 10 18:50:47 crc kubenswrapper[4861]: I0310 18:50:47.379950 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:50:47 crc kubenswrapper[4861]: I0310 18:50:47.380023 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:50:47 crc kubenswrapper[4861]: I0310 18:50:47.380041 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:50:47 crc kubenswrapper[4861]: I0310 18:50:47.380069 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 10 18:50:47 crc kubenswrapper[4861]: I0310 18:50:47.380090 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:50:47Z","lastTransitionTime":"2026-03-10T18:50:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 10 18:50:47 crc kubenswrapper[4861]: E0310 18:50:47.400618 4861 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-10T18:50:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-10T18:50:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-10T18:50:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-10T18:50:47Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-10T18:50:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-10T18:50:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-10T18:50:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-10T18:50:47Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"19532032-9073-404f-bda8-c4343aa30670\\\",\\\"systemUUID\\\":\\\"b4ef8d49-23f5-4cae-bbac-08586c607b9d\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:50:47Z is after 2025-08-24T17:21:41Z" Mar 10 18:50:47 crc kubenswrapper[4861]: E0310 18:50:47.400992 4861 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Mar 10 18:50:47 crc kubenswrapper[4861]: I0310 18:50:47.957943 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 10 18:50:47 crc kubenswrapper[4861]: I0310 18:50:47.958037 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2rvxn" Mar 10 18:50:47 crc kubenswrapper[4861]: E0310 18:50:47.958140 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 10 18:50:47 crc kubenswrapper[4861]: I0310 18:50:47.958161 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 10 18:50:47 crc kubenswrapper[4861]: I0310 18:50:47.958182 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 10 18:50:47 crc kubenswrapper[4861]: E0310 18:50:47.958294 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 10 18:50:47 crc kubenswrapper[4861]: E0310 18:50:47.958867 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2rvxn" podUID="c06e51d0-e817-41ac-9d69-3ef2099f8ba8" Mar 10 18:50:47 crc kubenswrapper[4861]: E0310 18:50:47.959104 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 10 18:50:49 crc kubenswrapper[4861]: I0310 18:50:49.957980 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 10 18:50:49 crc kubenswrapper[4861]: I0310 18:50:49.958037 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 10 18:50:49 crc kubenswrapper[4861]: I0310 18:50:49.958038 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2rvxn" Mar 10 18:50:49 crc kubenswrapper[4861]: I0310 18:50:49.958095 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 10 18:50:49 crc kubenswrapper[4861]: E0310 18:50:49.958428 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 10 18:50:49 crc kubenswrapper[4861]: E0310 18:50:49.958550 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 10 18:50:49 crc kubenswrapper[4861]: E0310 18:50:49.958660 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 10 18:50:49 crc kubenswrapper[4861]: E0310 18:50:49.958808 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2rvxn" podUID="c06e51d0-e817-41ac-9d69-3ef2099f8ba8" Mar 10 18:50:51 crc kubenswrapper[4861]: I0310 18:50:51.957968 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 10 18:50:51 crc kubenswrapper[4861]: I0310 18:50:51.958030 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 10 18:50:51 crc kubenswrapper[4861]: I0310 18:50:51.958008 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2rvxn" Mar 10 18:50:51 crc kubenswrapper[4861]: I0310 18:50:51.958140 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 10 18:50:51 crc kubenswrapper[4861]: E0310 18:50:51.958139 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 10 18:50:51 crc kubenswrapper[4861]: E0310 18:50:51.958273 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2rvxn" podUID="c06e51d0-e817-41ac-9d69-3ef2099f8ba8" Mar 10 18:50:51 crc kubenswrapper[4861]: E0310 18:50:51.958398 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 10 18:50:51 crc kubenswrapper[4861]: E0310 18:50:51.958542 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 10 18:50:52 crc kubenswrapper[4861]: E0310 18:50:52.133293 4861 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Mar 10 18:50:53 crc kubenswrapper[4861]: I0310 18:50:53.957283 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 10 18:50:53 crc kubenswrapper[4861]: I0310 18:50:53.957349 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2rvxn" Mar 10 18:50:53 crc kubenswrapper[4861]: I0310 18:50:53.957372 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 10 18:50:53 crc kubenswrapper[4861]: I0310 18:50:53.957621 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 10 18:50:53 crc kubenswrapper[4861]: E0310 18:50:53.958419 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 10 18:50:53 crc kubenswrapper[4861]: E0310 18:50:53.958985 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 10 18:50:53 crc kubenswrapper[4861]: E0310 18:50:53.958766 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2rvxn" podUID="c06e51d0-e817-41ac-9d69-3ef2099f8ba8" Mar 10 18:50:53 crc kubenswrapper[4861]: E0310 18:50:53.958883 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 10 18:50:55 crc kubenswrapper[4861]: I0310 18:50:55.958000 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 10 18:50:55 crc kubenswrapper[4861]: I0310 18:50:55.958055 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2rvxn" Mar 10 18:50:55 crc kubenswrapper[4861]: I0310 18:50:55.958055 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 10 18:50:55 crc kubenswrapper[4861]: I0310 18:50:55.958006 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 10 18:50:55 crc kubenswrapper[4861]: E0310 18:50:55.958229 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 10 18:50:55 crc kubenswrapper[4861]: E0310 18:50:55.958339 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 10 18:50:55 crc kubenswrapper[4861]: E0310 18:50:55.958493 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2rvxn" podUID="c06e51d0-e817-41ac-9d69-3ef2099f8ba8" Mar 10 18:50:55 crc kubenswrapper[4861]: E0310 18:50:55.958643 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 10 18:50:56 crc kubenswrapper[4861]: I0310 18:50:56.959040 4861 scope.go:117] "RemoveContainer" containerID="4e8e062dac4eaf569540cc811b67cbe9e8c4a53204ea173373b5cc2e315b87c9" Mar 10 18:50:56 crc kubenswrapper[4861]: E0310 18:50:56.959541 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-s2l62_openshift-ovn-kubernetes(be820cd7-b3a7-4183-a408-67151247b6ee)\"" pod="openshift-ovn-kubernetes/ovnkube-node-s2l62" podUID="be820cd7-b3a7-4183-a408-67151247b6ee" Mar 10 18:50:56 crc kubenswrapper[4861]: I0310 18:50:56.985666 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pzmsp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"66631e59-ce5c-44de-8a4d-37eb82acf997\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1293bf47ec5a041886f7065f624cec3b882bb62394a2c448e138d
5f7bc3a1c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pm4t5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:49:27Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pzmsp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:50:56Z is after 2025-08-24T17:21:41Z" Mar 10 18:50:57 crc kubenswrapper[4861]: I0310 18:50:57.005304 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rmmgv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e4302e6-1187-4371-8523-a96ec8032e74\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9439c03c3e58276508f7abe48482c4c69521049dbad5ea7a2e7e7379c8c58914\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vh2b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://77513d5a93f447545a8b2aef460c75d3d73de
32163650349268f203e1ade4b4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vh2b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:49:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-rmmgv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:50:57Z is after 2025-08-24T17:21:41Z" Mar 10 18:50:57 crc kubenswrapper[4861]: I0310 18:50:57.023524 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3fabebda74f7f7605bc2f261bbf64acac1fe4ec65a267beef027413b454bee3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dbb86a65e5c4fda1cd29396eb9ad02739f5123519d07aef2eea3354f4a65571d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:50:57Z is after 2025-08-24T17:21:41Z" Mar 10 18:50:57 crc kubenswrapper[4861]: I0310 18:50:57.041491 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a665f7a5f04a80f43377d87b895e0d95adae296d12f0826c3d97323e145d602\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-03-10T18:50:57Z is after 2025-08-24T17:21:41Z" Mar 10 18:50:57 crc kubenswrapper[4861]: I0310 18:50:57.057678 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-6lblg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d1c251f4-6539-4aa1-8979-47e74495aca3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:50:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:50:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://691c0ec6cd22d6bd4488798ae0a67476744abea7bf6247f7176cad5d37eef07c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3736c6e9da4ea1e91d7046c054bf885a5319e339f205e71fde5cc9cfa5d630ee\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-10T18:50:32Z\\\",\\\"message\\\":\\\"2026-03-10T18:49:47+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to 
/host/opt/cni/bin/upgrade_5fec7ce1-964a-40de-8918-768dde436f94\\\\n2026-03-10T18:49:47+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_5fec7ce1-964a-40de-8918-768dde436f94 to /host/opt/cni/bin/\\\\n2026-03-10T18:49:47Z [verbose] multus-daemon started\\\\n2026-03-10T18:49:47Z [verbose] Readiness Indicator file check\\\\n2026-03-10T18:50:32Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-10T18:49:46Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:50:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.
d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2gvk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:49:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-6lblg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:50:57Z is after 2025-08-24T17:21:41Z" Mar 10 18:50:57 crc kubenswrapper[4861]: I0310 18:50:57.072379 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qttbr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"771189c2-452d-4204-a0b7-abfe9ba62bd0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10b29575454354d2a034781fdd40e9972ddd08328eea1a3bb89377fcab4181cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tng72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11c0ae40f0d210a82350ba0ada7a3c9f35595826
a4e6f3d5619230238d00111b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tng72\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:49:21Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qttbr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:50:57Z is after 2025-08-24T17:21:41Z" Mar 10 18:50:57 crc kubenswrapper[4861]: I0310 18:50:57.094391 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-j2s27" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"391f4bfa-b94c-4b25-8f06-a2f19f912194\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bbf0e311a00fea576e9f2c739cd2d87aedfc06aaef5cf0c68374bd67241888b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlddg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02cd3948c6ba0e290e32d64b2390da20510d66f20771b9bb0b98957e8900b124\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://02cd3948c6ba0e290e32d64b2390da20510d66f20771b9bb0b98957e8900b124\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:49:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-10T18:49:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlddg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c010603b11cab781dd3c14d84121d60b8115bb415df567ef1c267406b9e6988\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c010603b11cab781dd3c14d84121d60b8115bb415df567ef1c267406b9e6988\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:49:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-10T18:49:37Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlddg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1ba322572d96fe13a88815151416495decdd5f486eb30b3fd4345040c5b9d1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c1ba322572d96fe13a88815151416495decdd5f486eb30b3fd4345040c5b9d1d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:49:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-10T18:49:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlddg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f24e7
4b8fb41e5e765f1ba84b4f1efce28077537df837e4cc308e70b16139732\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f24e74b8fb41e5e765f1ba84b4f1efce28077537df837e4cc308e70b16139732\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:49:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-10T18:49:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlddg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f43980d19ca7b9caf61de8f3257dff536ff78a4cf530f6ec3cb7c91f1a3b5af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5f43980d19ca7b9caf61de8f3257dff536ff78a4cf530f6ec3cb7c91f1a3b5af\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:49:41Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-03-10T18:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlddg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2779899b3339467fdfac0db5ea7b8e4f306ca22205f2c44cc85a2df7fc2cc066\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2779899b3339467fdfac0db5ea7b8e4f306ca22205f2c44cc85a2df7fc2cc066\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:49:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-10T18:49:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlddg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:49:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-j2s27\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:50:57Z is after 2025-08-24T17:21:41Z" Mar 10 18:50:57 crc kubenswrapper[4861]: I0310 18:50:57.111836 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"745fc7ce-a025-4236-a613-2d2c1661aa2e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://098109abecb73d2ad721a447bdfff2c9e9c2d24b969013d6108b0ac9d1d61e01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mou
ntPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc83538f8ad05533765a1730072969d529579cd76ff33a77dc49b0bff15caad5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://02e4029a918a6ce3cba59cacb37169816c0ce5b734d46605756ede5985ccd686\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3cb1fb5ba7aa6d8d7184d85c8
70898f3b51dd4910da9346b24b7d63f952467d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3cb1fb5ba7aa6d8d7184d85c870898f3b51dd4910da9346b24b7d63f952467d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:47:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-10T18:47:38Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:47:37Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:50:57Z is after 2025-08-24T17:21:41Z" Mar 10 18:50:57 crc kubenswrapper[4861]: E0310 18:50:57.133992 4861 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Mar 10 18:50:57 crc kubenswrapper[4861]: I0310 18:50:57.153040 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"952bf490-6587-4240-a832-feac082e7775\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf6fe1422055e59455ca71c1a22213f55312c80667526cf13dbf61f7ccea7c75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\
\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd7523ddf0ac38e3b59b767d7ef95f452f24c5914734d6f1ac0187f1bede5fcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b1d8f9a97293ae86fe0b8c9ab76600caf04291ec3ddd2e47a071805bef675da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://efd5039046658f4b595c747b6434cb0ef3befbf75874a6dab629cdbd6634524e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee
1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6234508294e7f87d3a8da0d3a1d8ecb96fffc3dbd5974e40a3bb7ceeee0d2a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34f62205737b2bb279cc7a0e2ffee213392c8135cca742b76e6f7ed44ddca754\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"start
ed\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34f62205737b2bb279cc7a0e2ffee213392c8135cca742b76e6f7ed44ddca754\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:47:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-10T18:47:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://49cc08d4ed6e0fd87f4c957414e510f99511b9654a72324b707c165206c4979e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49cc08d4ed6e0fd87f4c957414e510f99511b9654a72324b707c165206c4979e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:47:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-10T18:47:39Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0995183765c106ba8369d3c05ac572596329eb1978876b770e327a430963ba9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0995183765c106ba8369d3c05ac572596329eb1978876b770e327a430963ba9b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:47:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-10T18:47:40Z\\\"}},\\\"volumeM
ounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:47:37Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:50:57Z is after 2025-08-24T17:21:41Z" Mar 10 18:50:57 crc kubenswrapper[4861]: I0310 18:50:57.173899 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:50:57Z is after 2025-08-24T17:21:41Z" Mar 10 18:50:57 crc kubenswrapper[4861]: I0310 18:50:57.194320 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"046b6ea2-2e19-4917-953a-eb8aca6d80cb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df0d545a88d84a8015bafb1061f16f446f63c97065dc88c869a33a24b8e3c22e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da2dbdc794693bb8da08c0fcc84531d65b468d135904b7055fb926fbd0cce95d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-10T18:48:03Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager 
-v=2\\\\nI0310 18:47:39.105265 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0310 18:47:39.107559 1 observer_polling.go:159] Starting file observer\\\\nI0310 18:47:39.140119 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0310 18:47:39.145777 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0310 18:48:03.686072 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0310 18:48:03.686208 1 cmd.go:179] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:48:03Z is after 
2026-02-23T05:33:13Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-10T18:47:38Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:48:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://73d88019bcd40296d2d693dfb1ce3bacd2e94ca10a114a00c75392df04099b33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://31bbed2d81ace88f31b763f3b4bed57db657bf9e78413b57b2f279b408e0b848\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:38Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b7156e2106372814a5e2b2816352ee308bb4fdffef5efe8da5bc39d3bc29398\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:47:37Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:50:57Z is after 2025-08-24T17:21:41Z" Mar 10 18:50:57 crc kubenswrapper[4861]: I0310 18:50:57.214617 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:50:57Z is after 2025-08-24T17:21:41Z" Mar 10 18:50:57 crc kubenswrapper[4861]: I0310 18:50:57.231106 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-b87lw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e474cdc9-b374-49a6-aece-afa19f8d5ee6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f06ce6a80200c39385f53697c2f3fbbec862effc04ddf9e0499d9c20761a54e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5t7pf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:49:21Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-b87lw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:50:57Z is after 2025-08-24T17:21:41Z" Mar 10 18:50:57 crc kubenswrapper[4861]: I0310 18:50:57.251354 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:55Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:50:57Z is after 2025-08-24T17:21:41Z" Mar 10 18:50:57 crc kubenswrapper[4861]: I0310 18:50:57.286843 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-s2l62" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"be820cd7-b3a7-4183-a408-67151247b6ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:22Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22c9526135e4d6c3ef5cdf25b06f60556a876ace6c81593534be08cfd6a54cdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwtwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7bd3993ae4ddabc6c06a91127afc341760a07401ce3a409612824c0045bb6f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwtwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://edf854dc22368e4b8f76e0111b30784f22832fc69661b4db8d2c9c33aa553773\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwtwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e44e4837f8dba12dfb18ef2200a19f696222668eb9b10819d1c3b442a28f5e32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwtwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ae3cd6b9ef5ede85a70dd7bcf4eb260cc357bfceeb571904e68788eaba0709c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwtwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be4f9c8096f4981a65522e8ee451980e580153c1f5c65c736655fb94593dbd97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwtwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e8e062dac4eaf569540cc811b67cbe9e8c4a53204ea173373b5cc2e315b87c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e8e062dac4eaf569540cc811b67cbe9e8c4a53204ea173373b5cc2e315b87c9\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-10T18:50:43Z\\\",\\\"message\\\":\\\"run ovnkube: [failed to start network controller: failed to start default network 
controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:50:42Z is after 2025-08-24T17:21:41Z]\\\\nI0310 18:50:43.003048 7488 services_controller.go:454] Service openshift-operator-lifecycle-manager/catalog-operator-metrics for network=default has 1 cluster-wide, 0 per-node configs, 0 template configs, making 1 (cluster) 0 (per node) and 0 (template) load balancers\\\\nI0310 18:50:43.003062 7488 obj_retry.go:365] Adding new object: *v1.Pod openshift-image-registry/node-ca-pzmsp\\\\nI0310 18:50:43.003094 7488 ovn.go:134] Ensuring zone local for Pod openshift-image-registry/node-ca-pzmsp in node crc\\\\nI0310 18:50:43.001953 7488 event.go:377] Event(v1.ObjectRefere\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-10T18:50:42Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-s2l62_openshift-ovn-kubernetes(be820cd7-b3a7-4183-a408-67151247b6ee)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwtwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2cf2a1cfc3438718bb50a53445443bc0251f2cc83f903fca1cccd1048c8f1c20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwtwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09c620ba70d91a84cf6910e413d790eee8d6427dec9a39be3b706400fcaab656\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09c620ba70d91a84cf
6910e413d790eee8d6427dec9a39be3b706400fcaab656\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:49:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-10T18:49:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwtwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:49:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-s2l62\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:50:57Z is after 2025-08-24T17:21:41Z" Mar 10 18:50:57 crc kubenswrapper[4861]: I0310 18:50:57.313394 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-2rvxn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c06e51d0-e817-41ac-9d69-3ef2099f8ba8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:34Z\\\",\\\"message\\\":\\\"containers 
with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7cjz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7cjz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:49:34Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-2rvxn\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:50:57Z is after 2025-08-24T17:21:41Z" Mar 10 18:50:57 crc kubenswrapper[4861]: I0310 18:50:57.332202 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a964dbf-d067-436c-9a8b-496fc69b0584\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://716832664d1be93a0faa28c9e6260cbda820fe51b3295faf5dc0a852b242d62d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:39Z\\\"}},\\
\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a060129f36e562bcd541215d99123f79a1b0216c8c864b21b72c98fda5069040\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a060129f36e562bcd541215d99123f79a1b0216c8c864b21b72c98fda5069040\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:47:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-10T18:47:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:47:37Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:50:57Z is after 2025-08-24T17:21:41Z" Mar 10 18:50:57 crc kubenswrapper[4861]: I0310 18:50:57.354192 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c65aa66-5db2-421b-ad46-0ff1e2b1cb22\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-10T18:47:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://52e87225b434b0800764a5c2306d8079c44bff105d02de78ab085b434f56031f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a8a6f58ea1d180f50a7ffde2b16f470901281a847bd85cc0bc8a62bbf9f8e70\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://484df0ad2e71b2faec0ed53537512b115e6d4ab7cd3212cd29309538bd013c51\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2dcc6ee4908d27fc63eb33149c6db9570b8524aab71a27fbb1e5c2fe4e97c52\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1665ca49c2c451e187b70bfc13ce0034d2c07b92943b18e77ae09cd6e5505557\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-10T18:48:50Z\\\"
,\\\"message\\\":\\\"le observer\\\\nW0310 18:48:50.587139 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0310 18:48:50.587315 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0310 18:48:50.588407 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4176947974/tls.crt::/tmp/serving-cert-4176947974/tls.key\\\\\\\"\\\\nI0310 18:48:50.773986 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0310 18:48:50.776438 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0310 18:48:50.776455 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0310 18:48:50.776477 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0310 18:48:50.776482 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0310 18:48:50.783076 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0310 18:48:50.783088 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0310 18:48:50.783118 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0310 18:48:50.783134 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0310 18:48:50.783146 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0310 18:48:50.783157 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0310 18:48:50.783167 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0310 18:48:50.783173 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0310 18:48:50.784467 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-10T18:48:50Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44dde00a3ae562bbb5504d299475795cc38b22c2b6decba2ff15067bce7436df\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:47:39Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a14137bdfec242e37af20a572af2edea25fb1d8a1f9708f8d0193d1a7675b1bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a14137bdfec242e37af20a572af2edea25f
b1d8a1f9708f8d0193d1a7675b1bc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-10T18:47:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-10T18:47:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-10T18:47:37Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:50:57Z is after 2025-08-24T17:21:41Z" Mar 10 18:50:57 crc kubenswrapper[4861]: I0310 18:50:57.376968 4861 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:48:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-10T18:49:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22ae4cf4d7c980d5db6180fb412216b50b2ddb8f25ea2a5fb1e034d47ddfafac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-10T18:49:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-03-10T18:50:57Z is after 2025-08-24T17:21:41Z" Mar 10 18:50:57 crc kubenswrapper[4861]: I0310 18:50:57.631678 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:50:57 crc kubenswrapper[4861]: I0310 18:50:57.631775 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:50:57 crc kubenswrapper[4861]: I0310 18:50:57.631797 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:50:57 crc kubenswrapper[4861]: I0310 18:50:57.631822 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 10 18:50:57 crc kubenswrapper[4861]: I0310 18:50:57.631841 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:50:57Z","lastTransitionTime":"2026-03-10T18:50:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 10 18:50:57 crc kubenswrapper[4861]: E0310 18:50:57.655817 4861 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-10T18:50:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-10T18:50:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-10T18:50:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-10T18:50:57Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-10T18:50:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-10T18:50:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-10T18:50:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-10T18:50:57Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"19532032-9073-404f-bda8-c4343aa30670\\\",\\\"systemUUID\\\":\\\"b4ef8d49-23f5-4cae-bbac-08586c607b9d\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:50:57Z is after 2025-08-24T17:21:41Z" Mar 10 18:50:57 crc kubenswrapper[4861]: I0310 18:50:57.660524 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:50:57 crc kubenswrapper[4861]: I0310 18:50:57.660572 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:50:57 crc kubenswrapper[4861]: I0310 18:50:57.660590 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:50:57 crc kubenswrapper[4861]: I0310 18:50:57.660611 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 10 18:50:57 crc kubenswrapper[4861]: I0310 18:50:57.660627 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:50:57Z","lastTransitionTime":"2026-03-10T18:50:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 10 18:50:57 crc kubenswrapper[4861]: E0310 18:50:57.680913 4861 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-10T18:50:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-10T18:50:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-10T18:50:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-10T18:50:57Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-10T18:50:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-10T18:50:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-10T18:50:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-10T18:50:57Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"19532032-9073-404f-bda8-c4343aa30670\\\",\\\"systemUUID\\\":\\\"b4ef8d49-23f5-4cae-bbac-08586c607b9d\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:50:57Z is after 2025-08-24T17:21:41Z" Mar 10 18:50:57 crc kubenswrapper[4861]: I0310 18:50:57.686273 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:50:57 crc kubenswrapper[4861]: I0310 18:50:57.686343 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:50:57 crc kubenswrapper[4861]: I0310 18:50:57.686364 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:50:57 crc kubenswrapper[4861]: I0310 18:50:57.686389 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 10 18:50:57 crc kubenswrapper[4861]: I0310 18:50:57.686409 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:50:57Z","lastTransitionTime":"2026-03-10T18:50:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 10 18:50:57 crc kubenswrapper[4861]: E0310 18:50:57.706429 4861 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-10T18:50:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-10T18:50:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-10T18:50:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-10T18:50:57Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-10T18:50:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-10T18:50:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-10T18:50:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-10T18:50:57Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"19532032-9073-404f-bda8-c4343aa30670\\\",\\\"systemUUID\\\":\\\"b4ef8d49-23f5-4cae-bbac-08586c607b9d\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:50:57Z is after 2025-08-24T17:21:41Z" Mar 10 18:50:57 crc kubenswrapper[4861]: I0310 18:50:57.711036 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:50:57 crc kubenswrapper[4861]: I0310 18:50:57.711113 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:50:57 crc kubenswrapper[4861]: I0310 18:50:57.711136 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:50:57 crc kubenswrapper[4861]: I0310 18:50:57.711167 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 10 18:50:57 crc kubenswrapper[4861]: I0310 18:50:57.711190 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:50:57Z","lastTransitionTime":"2026-03-10T18:50:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 10 18:50:57 crc kubenswrapper[4861]: E0310 18:50:57.734077 4861 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-10T18:50:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-10T18:50:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-10T18:50:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-10T18:50:57Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-10T18:50:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-10T18:50:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-10T18:50:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-10T18:50:57Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"19532032-9073-404f-bda8-c4343aa30670\\\",\\\"systemUUID\\\":\\\"b4ef8d49-23f5-4cae-bbac-08586c607b9d\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:50:57Z is after 2025-08-24T17:21:41Z" Mar 10 18:50:57 crc kubenswrapper[4861]: I0310 18:50:57.738822 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:50:57 crc kubenswrapper[4861]: I0310 18:50:57.738872 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:50:57 crc kubenswrapper[4861]: I0310 18:50:57.738891 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 10 18:50:57 crc kubenswrapper[4861]: I0310 18:50:57.738914 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 10 18:50:57 crc kubenswrapper[4861]: I0310 18:50:57.738931 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:50:57Z","lastTransitionTime":"2026-03-10T18:50:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 10 18:50:57 crc kubenswrapper[4861]: E0310 18:50:57.759588 4861 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-10T18:50:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-10T18:50:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-10T18:50:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-10T18:50:57Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-10T18:50:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-10T18:50:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-10T18:50:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-10T18:50:57Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"19532032-9073-404f-bda8-c4343aa30670\\\",\\\"systemUUID\\\":\\\"b4ef8d49-23f5-4cae-bbac-08586c607b9d\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-10T18:50:57Z is after 2025-08-24T17:21:41Z" Mar 10 18:50:57 crc kubenswrapper[4861]: E0310 18:50:57.759845 4861 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Mar 10 18:50:57 crc kubenswrapper[4861]: I0310 18:50:57.957157 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 10 18:50:57 crc kubenswrapper[4861]: I0310 18:50:57.957246 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 10 18:50:57 crc kubenswrapper[4861]: I0310 18:50:57.957272 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 10 18:50:57 crc kubenswrapper[4861]: I0310 18:50:57.957182 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2rvxn" Mar 10 18:50:57 crc kubenswrapper[4861]: E0310 18:50:57.957379 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 10 18:50:57 crc kubenswrapper[4861]: E0310 18:50:57.957518 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 10 18:50:57 crc kubenswrapper[4861]: E0310 18:50:57.957626 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2rvxn" podUID="c06e51d0-e817-41ac-9d69-3ef2099f8ba8" Mar 10 18:50:57 crc kubenswrapper[4861]: E0310 18:50:57.957749 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 10 18:50:59 crc kubenswrapper[4861]: I0310 18:50:59.957375 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 10 18:50:59 crc kubenswrapper[4861]: I0310 18:50:59.957382 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 10 18:50:59 crc kubenswrapper[4861]: I0310 18:50:59.957428 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2rvxn" Mar 10 18:50:59 crc kubenswrapper[4861]: E0310 18:50:59.958842 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 10 18:50:59 crc kubenswrapper[4861]: I0310 18:50:59.957935 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 10 18:50:59 crc kubenswrapper[4861]: E0310 18:50:59.959037 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2rvxn" podUID="c06e51d0-e817-41ac-9d69-3ef2099f8ba8" Mar 10 18:50:59 crc kubenswrapper[4861]: E0310 18:50:59.959107 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 10 18:50:59 crc kubenswrapper[4861]: E0310 18:50:59.959407 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 10 18:51:01 crc kubenswrapper[4861]: I0310 18:51:01.957387 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 10 18:51:01 crc kubenswrapper[4861]: I0310 18:51:01.957451 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2rvxn" Mar 10 18:51:01 crc kubenswrapper[4861]: I0310 18:51:01.957494 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 10 18:51:01 crc kubenswrapper[4861]: I0310 18:51:01.957494 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 10 18:51:01 crc kubenswrapper[4861]: E0310 18:51:01.957566 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 10 18:51:01 crc kubenswrapper[4861]: E0310 18:51:01.957677 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 10 18:51:01 crc kubenswrapper[4861]: E0310 18:51:01.957855 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2rvxn" podUID="c06e51d0-e817-41ac-9d69-3ef2099f8ba8" Mar 10 18:51:01 crc kubenswrapper[4861]: E0310 18:51:01.958001 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 10 18:51:02 crc kubenswrapper[4861]: E0310 18:51:02.136317 4861 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Mar 10 18:51:03 crc kubenswrapper[4861]: I0310 18:51:03.892922 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 10 18:51:03 crc kubenswrapper[4861]: I0310 18:51:03.893084 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 10 18:51:03 crc kubenswrapper[4861]: E0310 18:51:03.893229 4861 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Mar 10 18:51:03 crc kubenswrapper[4861]: E0310 18:51:03.893252 4861 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-10 18:53:05.893209399 +0000 UTC m=+329.656645389 (durationBeforeRetry 2m2s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 10 18:51:03 crc kubenswrapper[4861]: E0310 18:51:03.893304 4861 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-03-10 18:53:05.89328228 +0000 UTC m=+329.656718270 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Mar 10 18:51:03 crc kubenswrapper[4861]: I0310 18:51:03.957377 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 10 18:51:03 crc kubenswrapper[4861]: I0310 18:51:03.957488 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 10 18:51:03 crc kubenswrapper[4861]: I0310 18:51:03.957399 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 10 18:51:03 crc kubenswrapper[4861]: I0310 18:51:03.957377 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-2rvxn" Mar 10 18:51:03 crc kubenswrapper[4861]: E0310 18:51:03.957590 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 10 18:51:03 crc kubenswrapper[4861]: E0310 18:51:03.957666 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 10 18:51:03 crc kubenswrapper[4861]: E0310 18:51:03.957886 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2rvxn" podUID="c06e51d0-e817-41ac-9d69-3ef2099f8ba8" Mar 10 18:51:03 crc kubenswrapper[4861]: E0310 18:51:03.957999 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 10 18:51:03 crc kubenswrapper[4861]: I0310 18:51:03.994267 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 10 18:51:03 crc kubenswrapper[4861]: I0310 18:51:03.994352 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 10 18:51:03 crc kubenswrapper[4861]: I0310 18:51:03.994415 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 10 18:51:03 crc kubenswrapper[4861]: E0310 18:51:03.994446 4861 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 10 18:51:03 crc kubenswrapper[4861]: E0310 18:51:03.994485 4861 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 10 18:51:03 crc kubenswrapper[4861]: E0310 18:51:03.994507 4861 projected.go:194] Error 
preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 10 18:51:03 crc kubenswrapper[4861]: E0310 18:51:03.994568 4861 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Mar 10 18:51:03 crc kubenswrapper[4861]: E0310 18:51:03.994575 4861 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-03-10 18:53:05.994549711 +0000 UTC m=+329.757985701 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 10 18:51:03 crc kubenswrapper[4861]: E0310 18:51:03.994634 4861 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 10 18:51:03 crc kubenswrapper[4861]: E0310 18:51:03.994659 4861 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-03-10 18:53:05.994636993 +0000 UTC m=+329.758073003 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Mar 10 18:51:03 crc kubenswrapper[4861]: E0310 18:51:03.994693 4861 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 10 18:51:03 crc kubenswrapper[4861]: E0310 18:51:03.994759 4861 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 10 18:51:03 crc kubenswrapper[4861]: E0310 18:51:03.994870 4861 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-03-10 18:53:05.994836376 +0000 UTC m=+329.758272376 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 10 18:51:05 crc kubenswrapper[4861]: I0310 18:51:05.958134 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 10 18:51:05 crc kubenswrapper[4861]: I0310 18:51:05.958227 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 10 18:51:05 crc kubenswrapper[4861]: E0310 18:51:05.958355 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 10 18:51:05 crc kubenswrapper[4861]: I0310 18:51:05.958378 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 10 18:51:05 crc kubenswrapper[4861]: I0310 18:51:05.958397 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2rvxn" Mar 10 18:51:05 crc kubenswrapper[4861]: E0310 18:51:05.958786 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 10 18:51:05 crc kubenswrapper[4861]: E0310 18:51:05.959052 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2rvxn" podUID="c06e51d0-e817-41ac-9d69-3ef2099f8ba8" Mar 10 18:51:05 crc kubenswrapper[4861]: E0310 18:51:05.959206 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 10 18:51:07 crc kubenswrapper[4861]: I0310 18:51:07.014353 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-qttbr" podStartSLOduration=152.014334002 podStartE2EDuration="2m32.014334002s" podCreationTimestamp="2026-03-10 18:48:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-10 18:51:06.987495242 +0000 UTC m=+210.750931272" watchObservedRunningTime="2026-03-10 18:51:07.014334002 +0000 UTC m=+210.777769962" Mar 10 18:51:07 crc kubenswrapper[4861]: I0310 18:51:07.034847 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-j2s27" podStartSLOduration=152.034782619 podStartE2EDuration="2m32.034782619s" podCreationTimestamp="2026-03-10 18:48:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-10 18:51:07.014672997 +0000 UTC m=+210.778109047" watchObservedRunningTime="2026-03-10 18:51:07.034782619 +0000 UTC m=+210.798218609" Mar 10 18:51:07 crc kubenswrapper[4861]: I0310 18:51:07.035365 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" 
podStartSLOduration=78.035353688 podStartE2EDuration="1m18.035353688s" podCreationTimestamp="2026-03-10 18:49:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-10 18:51:07.034692188 +0000 UTC m=+210.798128218" watchObservedRunningTime="2026-03-10 18:51:07.035353688 +0000 UTC m=+210.798789688" Mar 10 18:51:07 crc kubenswrapper[4861]: I0310 18:51:07.077395 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=124.077363601 podStartE2EDuration="2m4.077363601s" podCreationTimestamp="2026-03-10 18:49:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-10 18:51:07.07665767 +0000 UTC m=+210.840093710" watchObservedRunningTime="2026-03-10 18:51:07.077363601 +0000 UTC m=+210.840799601" Mar 10 18:51:07 crc kubenswrapper[4861]: I0310 18:51:07.131868 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=91.131800613 podStartE2EDuration="1m31.131800613s" podCreationTimestamp="2026-03-10 18:49:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-10 18:51:07.129013239 +0000 UTC m=+210.892449279" watchObservedRunningTime="2026-03-10 18:51:07.131800613 +0000 UTC m=+210.895236613" Mar 10 18:51:07 crc kubenswrapper[4861]: E0310 18:51:07.137137 4861 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Mar 10 18:51:07 crc kubenswrapper[4861]: I0310 18:51:07.169930 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-b87lw" podStartSLOduration=152.169890213 podStartE2EDuration="2m32.169890213s" podCreationTimestamp="2026-03-10 18:48:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-10 18:51:07.1684357 +0000 UTC m=+210.931871700" watchObservedRunningTime="2026-03-10 18:51:07.169890213 +0000 UTC m=+210.933326213" Mar 10 18:51:07 crc kubenswrapper[4861]: I0310 18:51:07.257237 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=61.257217782 podStartE2EDuration="1m1.257217782s" podCreationTimestamp="2026-03-10 18:50:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-10 18:51:07.256946577 +0000 UTC m=+211.020382607" watchObservedRunningTime="2026-03-10 18:51:07.257217782 +0000 UTC m=+211.020653752" Mar 10 18:51:07 crc kubenswrapper[4861]: I0310 18:51:07.275987 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=124.275953981 podStartE2EDuration="2m4.275953981s" podCreationTimestamp="2026-03-10 18:49:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-10 18:51:07.275593726 +0000 UTC m=+211.039029766" watchObservedRunningTime="2026-03-10 18:51:07.275953981 +0000 UTC m=+211.039389991" Mar 10 18:51:07 crc kubenswrapper[4861]: I0310 18:51:07.317539 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-pzmsp" podStartSLOduration=152.317501707 podStartE2EDuration="2m32.317501707s" 
podCreationTimestamp="2026-03-10 18:48:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-10 18:51:07.315644777 +0000 UTC m=+211.079080777" watchObservedRunningTime="2026-03-10 18:51:07.317501707 +0000 UTC m=+211.080937697" Mar 10 18:51:07 crc kubenswrapper[4861]: I0310 18:51:07.333879 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rmmgv" podStartSLOduration=151.333853779 podStartE2EDuration="2m31.333853779s" podCreationTimestamp="2026-03-10 18:48:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-10 18:51:07.333091907 +0000 UTC m=+211.096527937" watchObservedRunningTime="2026-03-10 18:51:07.333853779 +0000 UTC m=+211.097289749" Mar 10 18:51:07 crc kubenswrapper[4861]: I0310 18:51:07.393134 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-6lblg" podStartSLOduration=152.393109358 podStartE2EDuration="2m32.393109358s" podCreationTimestamp="2026-03-10 18:48:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-10 18:51:07.392998396 +0000 UTC m=+211.156434436" watchObservedRunningTime="2026-03-10 18:51:07.393109358 +0000 UTC m=+211.156545358" Mar 10 18:51:07 crc kubenswrapper[4861]: I0310 18:51:07.947507 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 10 18:51:07 crc kubenswrapper[4861]: I0310 18:51:07.947569 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 10 18:51:07 crc kubenswrapper[4861]: I0310 18:51:07.947591 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" 
Mar 10 18:51:07 crc kubenswrapper[4861]: I0310 18:51:07.947617 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 10 18:51:07 crc kubenswrapper[4861]: I0310 18:51:07.947638 4861 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T18:51:07Z","lastTransitionTime":"2026-03-10T18:51:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 10 18:51:07 crc kubenswrapper[4861]: I0310 18:51:07.958107 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 10 18:51:07 crc kubenswrapper[4861]: I0310 18:51:07.958188 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 10 18:51:07 crc kubenswrapper[4861]: I0310 18:51:07.958156 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 10 18:51:07 crc kubenswrapper[4861]: E0310 18:51:07.958363 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 10 18:51:07 crc kubenswrapper[4861]: I0310 18:51:07.958483 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-2rvxn" Mar 10 18:51:07 crc kubenswrapper[4861]: E0310 18:51:07.958695 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 10 18:51:07 crc kubenswrapper[4861]: E0310 18:51:07.958848 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2rvxn" podUID="c06e51d0-e817-41ac-9d69-3ef2099f8ba8" Mar 10 18:51:07 crc kubenswrapper[4861]: E0310 18:51:07.958966 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 10 18:51:08 crc kubenswrapper[4861]: I0310 18:51:08.016229 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-5d9k7"] Mar 10 18:51:08 crc kubenswrapper[4861]: I0310 18:51:08.016911 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-5d9k7" Mar 10 18:51:08 crc kubenswrapper[4861]: I0310 18:51:08.020298 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Mar 10 18:51:08 crc kubenswrapper[4861]: I0310 18:51:08.020653 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Mar 10 18:51:08 crc kubenswrapper[4861]: I0310 18:51:08.020917 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Mar 10 18:51:08 crc kubenswrapper[4861]: I0310 18:51:08.021205 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Mar 10 18:51:08 crc kubenswrapper[4861]: I0310 18:51:08.034025 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c2d0feff-2847-4bb6-af87-0e26ffb5b6b1-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-5d9k7\" (UID: \"c2d0feff-2847-4bb6-af87-0e26ffb5b6b1\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-5d9k7" Mar 10 18:51:08 crc kubenswrapper[4861]: I0310 18:51:08.034145 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/c2d0feff-2847-4bb6-af87-0e26ffb5b6b1-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-5d9k7\" (UID: \"c2d0feff-2847-4bb6-af87-0e26ffb5b6b1\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-5d9k7" Mar 10 18:51:08 crc kubenswrapper[4861]: I0310 18:51:08.034190 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: 
\"kubernetes.io/host-path/c2d0feff-2847-4bb6-af87-0e26ffb5b6b1-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-5d9k7\" (UID: \"c2d0feff-2847-4bb6-af87-0e26ffb5b6b1\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-5d9k7" Mar 10 18:51:08 crc kubenswrapper[4861]: I0310 18:51:08.034247 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c2d0feff-2847-4bb6-af87-0e26ffb5b6b1-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-5d9k7\" (UID: \"c2d0feff-2847-4bb6-af87-0e26ffb5b6b1\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-5d9k7" Mar 10 18:51:08 crc kubenswrapper[4861]: I0310 18:51:08.034293 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/c2d0feff-2847-4bb6-af87-0e26ffb5b6b1-service-ca\") pod \"cluster-version-operator-5c965bbfc6-5d9k7\" (UID: \"c2d0feff-2847-4bb6-af87-0e26ffb5b6b1\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-5d9k7" Mar 10 18:51:08 crc kubenswrapper[4861]: I0310 18:51:08.057956 4861 certificate_manager.go:356] kubernetes.io/kubelet-serving: Rotating certificates Mar 10 18:51:08 crc kubenswrapper[4861]: I0310 18:51:08.066974 4861 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Mar 10 18:51:08 crc kubenswrapper[4861]: I0310 18:51:08.135463 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/c2d0feff-2847-4bb6-af87-0e26ffb5b6b1-service-ca\") pod \"cluster-version-operator-5c965bbfc6-5d9k7\" (UID: \"c2d0feff-2847-4bb6-af87-0e26ffb5b6b1\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-5d9k7" Mar 10 18:51:08 crc kubenswrapper[4861]: I0310 18:51:08.135598 4861 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c2d0feff-2847-4bb6-af87-0e26ffb5b6b1-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-5d9k7\" (UID: \"c2d0feff-2847-4bb6-af87-0e26ffb5b6b1\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-5d9k7" Mar 10 18:51:08 crc kubenswrapper[4861]: I0310 18:51:08.135780 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/c2d0feff-2847-4bb6-af87-0e26ffb5b6b1-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-5d9k7\" (UID: \"c2d0feff-2847-4bb6-af87-0e26ffb5b6b1\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-5d9k7" Mar 10 18:51:08 crc kubenswrapper[4861]: I0310 18:51:08.135865 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/c2d0feff-2847-4bb6-af87-0e26ffb5b6b1-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-5d9k7\" (UID: \"c2d0feff-2847-4bb6-af87-0e26ffb5b6b1\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-5d9k7" Mar 10 18:51:08 crc kubenswrapper[4861]: I0310 18:51:08.135921 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c2d0feff-2847-4bb6-af87-0e26ffb5b6b1-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-5d9k7\" (UID: \"c2d0feff-2847-4bb6-af87-0e26ffb5b6b1\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-5d9k7" Mar 10 18:51:08 crc kubenswrapper[4861]: I0310 18:51:08.135953 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/c2d0feff-2847-4bb6-af87-0e26ffb5b6b1-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-5d9k7\" (UID: 
\"c2d0feff-2847-4bb6-af87-0e26ffb5b6b1\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-5d9k7" Mar 10 18:51:08 crc kubenswrapper[4861]: I0310 18:51:08.136073 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/c2d0feff-2847-4bb6-af87-0e26ffb5b6b1-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-5d9k7\" (UID: \"c2d0feff-2847-4bb6-af87-0e26ffb5b6b1\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-5d9k7" Mar 10 18:51:08 crc kubenswrapper[4861]: I0310 18:51:08.137154 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/c2d0feff-2847-4bb6-af87-0e26ffb5b6b1-service-ca\") pod \"cluster-version-operator-5c965bbfc6-5d9k7\" (UID: \"c2d0feff-2847-4bb6-af87-0e26ffb5b6b1\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-5d9k7" Mar 10 18:51:08 crc kubenswrapper[4861]: I0310 18:51:08.145696 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c2d0feff-2847-4bb6-af87-0e26ffb5b6b1-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-5d9k7\" (UID: \"c2d0feff-2847-4bb6-af87-0e26ffb5b6b1\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-5d9k7" Mar 10 18:51:08 crc kubenswrapper[4861]: I0310 18:51:08.165019 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c2d0feff-2847-4bb6-af87-0e26ffb5b6b1-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-5d9k7\" (UID: \"c2d0feff-2847-4bb6-af87-0e26ffb5b6b1\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-5d9k7" Mar 10 18:51:08 crc kubenswrapper[4861]: I0310 18:51:08.341793 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-5d9k7" Mar 10 18:51:08 crc kubenswrapper[4861]: I0310 18:51:08.977191 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-5d9k7" event={"ID":"c2d0feff-2847-4bb6-af87-0e26ffb5b6b1","Type":"ContainerStarted","Data":"c9213d9e7fa455ff405109f971b1404a4319dde33aac308d6b31303fe298843b"} Mar 10 18:51:08 crc kubenswrapper[4861]: I0310 18:51:08.977256 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-5d9k7" event={"ID":"c2d0feff-2847-4bb6-af87-0e26ffb5b6b1","Type":"ContainerStarted","Data":"6dfaf36725648ca06a9ea1094162ceadfcc596a6e8d43c47abe7f73b9de24863"} Mar 10 18:51:09 crc kubenswrapper[4861]: I0310 18:51:09.003144 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-5d9k7" podStartSLOduration=154.003113621 podStartE2EDuration="2m34.003113621s" podCreationTimestamp="2026-03-10 18:48:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-10 18:51:09.001395603 +0000 UTC m=+212.764831613" watchObservedRunningTime="2026-03-10 18:51:09.003113621 +0000 UTC m=+212.766549611" Mar 10 18:51:09 crc kubenswrapper[4861]: I0310 18:51:09.957747 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 10 18:51:09 crc kubenswrapper[4861]: I0310 18:51:09.957854 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-2rvxn" Mar 10 18:51:09 crc kubenswrapper[4861]: E0310 18:51:09.957930 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 10 18:51:09 crc kubenswrapper[4861]: I0310 18:51:09.957947 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 10 18:51:09 crc kubenswrapper[4861]: I0310 18:51:09.958012 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 10 18:51:09 crc kubenswrapper[4861]: E0310 18:51:09.958130 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2rvxn" podUID="c06e51d0-e817-41ac-9d69-3ef2099f8ba8" Mar 10 18:51:09 crc kubenswrapper[4861]: E0310 18:51:09.958224 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 10 18:51:09 crc kubenswrapper[4861]: E0310 18:51:09.958396 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 10 18:51:11 crc kubenswrapper[4861]: I0310 18:51:11.957074 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2rvxn" Mar 10 18:51:11 crc kubenswrapper[4861]: I0310 18:51:11.957099 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 10 18:51:11 crc kubenswrapper[4861]: I0310 18:51:11.957227 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 10 18:51:11 crc kubenswrapper[4861]: E0310 18:51:11.957392 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2rvxn" podUID="c06e51d0-e817-41ac-9d69-3ef2099f8ba8" Mar 10 18:51:11 crc kubenswrapper[4861]: I0310 18:51:11.957418 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 10 18:51:11 crc kubenswrapper[4861]: E0310 18:51:11.957622 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 10 18:51:11 crc kubenswrapper[4861]: E0310 18:51:11.957740 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 10 18:51:11 crc kubenswrapper[4861]: E0310 18:51:11.959516 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 10 18:51:11 crc kubenswrapper[4861]: I0310 18:51:11.960291 4861 scope.go:117] "RemoveContainer" containerID="4e8e062dac4eaf569540cc811b67cbe9e8c4a53204ea173373b5cc2e315b87c9" Mar 10 18:51:11 crc kubenswrapper[4861]: E0310 18:51:11.960781 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-s2l62_openshift-ovn-kubernetes(be820cd7-b3a7-4183-a408-67151247b6ee)\"" pod="openshift-ovn-kubernetes/ovnkube-node-s2l62" podUID="be820cd7-b3a7-4183-a408-67151247b6ee" Mar 10 18:51:12 crc kubenswrapper[4861]: E0310 18:51:12.138443 4861 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Mar 10 18:51:13 crc kubenswrapper[4861]: I0310 18:51:13.957293 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 10 18:51:13 crc kubenswrapper[4861]: I0310 18:51:13.957312 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 10 18:51:13 crc kubenswrapper[4861]: E0310 18:51:13.957436 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 10 18:51:13 crc kubenswrapper[4861]: E0310 18:51:13.957551 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 10 18:51:13 crc kubenswrapper[4861]: I0310 18:51:13.958021 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 10 18:51:13 crc kubenswrapper[4861]: E0310 18:51:13.958231 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 10 18:51:13 crc kubenswrapper[4861]: I0310 18:51:13.958061 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2rvxn" Mar 10 18:51:13 crc kubenswrapper[4861]: E0310 18:51:13.958578 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-2rvxn" podUID="c06e51d0-e817-41ac-9d69-3ef2099f8ba8" Mar 10 18:51:15 crc kubenswrapper[4861]: I0310 18:51:15.957878 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 10 18:51:15 crc kubenswrapper[4861]: I0310 18:51:15.957951 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2rvxn" Mar 10 18:51:15 crc kubenswrapper[4861]: I0310 18:51:15.957996 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 10 18:51:15 crc kubenswrapper[4861]: E0310 18:51:15.959051 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2rvxn" podUID="c06e51d0-e817-41ac-9d69-3ef2099f8ba8" Mar 10 18:51:15 crc kubenswrapper[4861]: I0310 18:51:15.958026 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 10 18:51:15 crc kubenswrapper[4861]: E0310 18:51:15.959201 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 10 18:51:15 crc kubenswrapper[4861]: E0310 18:51:15.959379 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 10 18:51:15 crc kubenswrapper[4861]: E0310 18:51:15.959612 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 10 18:51:17 crc kubenswrapper[4861]: E0310 18:51:17.139088 4861 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Mar 10 18:51:17 crc kubenswrapper[4861]: I0310 18:51:17.957422 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 10 18:51:17 crc kubenswrapper[4861]: I0310 18:51:17.957495 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2rvxn" Mar 10 18:51:17 crc kubenswrapper[4861]: I0310 18:51:17.957521 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 10 18:51:17 crc kubenswrapper[4861]: I0310 18:51:17.957463 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 10 18:51:17 crc kubenswrapper[4861]: E0310 18:51:17.957657 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 10 18:51:17 crc kubenswrapper[4861]: E0310 18:51:17.957848 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 10 18:51:17 crc kubenswrapper[4861]: E0310 18:51:17.957974 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-2rvxn" podUID="c06e51d0-e817-41ac-9d69-3ef2099f8ba8" Mar 10 18:51:17 crc kubenswrapper[4861]: E0310 18:51:17.958160 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 10 18:51:19 crc kubenswrapper[4861]: I0310 18:51:19.015072 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-6lblg_d1c251f4-6539-4aa1-8979-47e74495aca3/kube-multus/1.log" Mar 10 18:51:19 crc kubenswrapper[4861]: I0310 18:51:19.016894 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-6lblg_d1c251f4-6539-4aa1-8979-47e74495aca3/kube-multus/0.log" Mar 10 18:51:19 crc kubenswrapper[4861]: I0310 18:51:19.017032 4861 generic.go:334] "Generic (PLEG): container finished" podID="d1c251f4-6539-4aa1-8979-47e74495aca3" containerID="691c0ec6cd22d6bd4488798ae0a67476744abea7bf6247f7176cad5d37eef07c" exitCode=1 Mar 10 18:51:19 crc kubenswrapper[4861]: I0310 18:51:19.017114 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-6lblg" event={"ID":"d1c251f4-6539-4aa1-8979-47e74495aca3","Type":"ContainerDied","Data":"691c0ec6cd22d6bd4488798ae0a67476744abea7bf6247f7176cad5d37eef07c"} Mar 10 18:51:19 crc kubenswrapper[4861]: I0310 18:51:19.017224 4861 scope.go:117] "RemoveContainer" containerID="3736c6e9da4ea1e91d7046c054bf885a5319e339f205e71fde5cc9cfa5d630ee" Mar 10 18:51:19 crc kubenswrapper[4861]: I0310 18:51:19.018039 4861 scope.go:117] "RemoveContainer" containerID="691c0ec6cd22d6bd4488798ae0a67476744abea7bf6247f7176cad5d37eef07c" Mar 10 18:51:19 crc kubenswrapper[4861]: E0310 18:51:19.018376 4861 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-multus pod=multus-6lblg_openshift-multus(d1c251f4-6539-4aa1-8979-47e74495aca3)\"" pod="openshift-multus/multus-6lblg" podUID="d1c251f4-6539-4aa1-8979-47e74495aca3" Mar 10 18:51:19 crc kubenswrapper[4861]: I0310 18:51:19.957260 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 10 18:51:19 crc kubenswrapper[4861]: I0310 18:51:19.957335 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 10 18:51:19 crc kubenswrapper[4861]: I0310 18:51:19.957509 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 10 18:51:19 crc kubenswrapper[4861]: E0310 18:51:19.958156 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 10 18:51:19 crc kubenswrapper[4861]: E0310 18:51:19.957952 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 10 18:51:19 crc kubenswrapper[4861]: I0310 18:51:19.957524 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2rvxn" Mar 10 18:51:19 crc kubenswrapper[4861]: E0310 18:51:19.958267 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 10 18:51:19 crc kubenswrapper[4861]: E0310 18:51:19.958418 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2rvxn" podUID="c06e51d0-e817-41ac-9d69-3ef2099f8ba8" Mar 10 18:51:20 crc kubenswrapper[4861]: I0310 18:51:20.024338 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-6lblg_d1c251f4-6539-4aa1-8979-47e74495aca3/kube-multus/1.log" Mar 10 18:51:21 crc kubenswrapper[4861]: I0310 18:51:21.957690 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 10 18:51:21 crc kubenswrapper[4861]: I0310 18:51:21.957826 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 10 18:51:21 crc kubenswrapper[4861]: I0310 18:51:21.957690 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 10 18:51:21 crc kubenswrapper[4861]: E0310 18:51:21.957926 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 10 18:51:21 crc kubenswrapper[4861]: E0310 18:51:21.958035 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 10 18:51:21 crc kubenswrapper[4861]: E0310 18:51:21.958212 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 10 18:51:21 crc kubenswrapper[4861]: I0310 18:51:21.958843 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-2rvxn" Mar 10 18:51:21 crc kubenswrapper[4861]: E0310 18:51:21.958971 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2rvxn" podUID="c06e51d0-e817-41ac-9d69-3ef2099f8ba8" Mar 10 18:51:22 crc kubenswrapper[4861]: E0310 18:51:22.140919 4861 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Mar 10 18:51:23 crc kubenswrapper[4861]: I0310 18:51:23.958149 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 10 18:51:23 crc kubenswrapper[4861]: I0310 18:51:23.958164 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2rvxn" Mar 10 18:51:23 crc kubenswrapper[4861]: I0310 18:51:23.958197 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 10 18:51:23 crc kubenswrapper[4861]: E0310 18:51:23.958381 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 10 18:51:23 crc kubenswrapper[4861]: I0310 18:51:23.958472 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 10 18:51:23 crc kubenswrapper[4861]: E0310 18:51:23.958669 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2rvxn" podUID="c06e51d0-e817-41ac-9d69-3ef2099f8ba8" Mar 10 18:51:23 crc kubenswrapper[4861]: E0310 18:51:23.958880 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 10 18:51:23 crc kubenswrapper[4861]: E0310 18:51:23.958965 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 10 18:51:23 crc kubenswrapper[4861]: I0310 18:51:23.959955 4861 scope.go:117] "RemoveContainer" containerID="4e8e062dac4eaf569540cc811b67cbe9e8c4a53204ea173373b5cc2e315b87c9" Mar 10 18:51:24 crc kubenswrapper[4861]: I0310 18:51:24.936919 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-2rvxn"] Mar 10 18:51:24 crc kubenswrapper[4861]: I0310 18:51:24.937327 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2rvxn" Mar 10 18:51:24 crc kubenswrapper[4861]: E0310 18:51:24.937412 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-2rvxn" podUID="c06e51d0-e817-41ac-9d69-3ef2099f8ba8" Mar 10 18:51:25 crc kubenswrapper[4861]: I0310 18:51:25.045687 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-s2l62_be820cd7-b3a7-4183-a408-67151247b6ee/ovnkube-controller/3.log" Mar 10 18:51:25 crc kubenswrapper[4861]: I0310 18:51:25.048214 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-s2l62" event={"ID":"be820cd7-b3a7-4183-a408-67151247b6ee","Type":"ContainerStarted","Data":"7688ca8e6f3f8d545481d6e95c6fc69b9e19d6eea68571a52955cdeae5bbc680"} Mar 10 18:51:25 crc kubenswrapper[4861]: I0310 18:51:25.048847 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-s2l62" Mar 10 18:51:25 crc kubenswrapper[4861]: I0310 18:51:25.088146 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-s2l62" podStartSLOduration=170.088119534 podStartE2EDuration="2m50.088119534s" podCreationTimestamp="2026-03-10 18:48:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-10 18:51:25.085618674 +0000 UTC m=+228.849054714" watchObservedRunningTime="2026-03-10 18:51:25.088119534 +0000 UTC m=+228.851555534" Mar 10 18:51:25 crc kubenswrapper[4861]: I0310 18:51:25.957120 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 10 18:51:25 crc kubenswrapper[4861]: I0310 18:51:25.957182 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 10 18:51:25 crc kubenswrapper[4861]: E0310 18:51:25.957303 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 10 18:51:25 crc kubenswrapper[4861]: I0310 18:51:25.957351 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 10 18:51:25 crc kubenswrapper[4861]: E0310 18:51:25.957503 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 10 18:51:25 crc kubenswrapper[4861]: E0310 18:51:25.957673 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 10 18:51:26 crc kubenswrapper[4861]: I0310 18:51:26.957958 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-2rvxn" Mar 10 18:51:26 crc kubenswrapper[4861]: E0310 18:51:26.960609 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2rvxn" podUID="c06e51d0-e817-41ac-9d69-3ef2099f8ba8" Mar 10 18:51:27 crc kubenswrapper[4861]: E0310 18:51:27.141510 4861 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Mar 10 18:51:27 crc kubenswrapper[4861]: I0310 18:51:27.957812 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 10 18:51:27 crc kubenswrapper[4861]: I0310 18:51:27.957846 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 10 18:51:27 crc kubenswrapper[4861]: I0310 18:51:27.957901 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 10 18:51:27 crc kubenswrapper[4861]: E0310 18:51:27.957992 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 10 18:51:27 crc kubenswrapper[4861]: E0310 18:51:27.958197 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 10 18:51:27 crc kubenswrapper[4861]: E0310 18:51:27.958357 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 10 18:51:28 crc kubenswrapper[4861]: I0310 18:51:28.957866 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2rvxn" Mar 10 18:51:28 crc kubenswrapper[4861]: E0310 18:51:28.958349 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2rvxn" podUID="c06e51d0-e817-41ac-9d69-3ef2099f8ba8" Mar 10 18:51:29 crc kubenswrapper[4861]: I0310 18:51:29.958030 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 10 18:51:29 crc kubenswrapper[4861]: I0310 18:51:29.958062 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 10 18:51:29 crc kubenswrapper[4861]: I0310 18:51:29.958154 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 10 18:51:29 crc kubenswrapper[4861]: E0310 18:51:29.958322 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 10 18:51:29 crc kubenswrapper[4861]: E0310 18:51:29.958445 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 10 18:51:29 crc kubenswrapper[4861]: E0310 18:51:29.958576 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 10 18:51:30 crc kubenswrapper[4861]: I0310 18:51:30.957274 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2rvxn" Mar 10 18:51:30 crc kubenswrapper[4861]: E0310 18:51:30.957442 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2rvxn" podUID="c06e51d0-e817-41ac-9d69-3ef2099f8ba8" Mar 10 18:51:31 crc kubenswrapper[4861]: I0310 18:51:31.957402 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 10 18:51:31 crc kubenswrapper[4861]: I0310 18:51:31.957438 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 10 18:51:31 crc kubenswrapper[4861]: I0310 18:51:31.957537 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 10 18:51:31 crc kubenswrapper[4861]: E0310 18:51:31.957698 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 10 18:51:31 crc kubenswrapper[4861]: E0310 18:51:31.957991 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 10 18:51:31 crc kubenswrapper[4861]: E0310 18:51:31.958139 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 10 18:51:31 crc kubenswrapper[4861]: I0310 18:51:31.958264 4861 scope.go:117] "RemoveContainer" containerID="691c0ec6cd22d6bd4488798ae0a67476744abea7bf6247f7176cad5d37eef07c" Mar 10 18:51:32 crc kubenswrapper[4861]: E0310 18:51:32.142757 4861 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Mar 10 18:51:32 crc kubenswrapper[4861]: I0310 18:51:32.957478 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-2rvxn" Mar 10 18:51:32 crc kubenswrapper[4861]: E0310 18:51:32.957682 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2rvxn" podUID="c06e51d0-e817-41ac-9d69-3ef2099f8ba8" Mar 10 18:51:33 crc kubenswrapper[4861]: I0310 18:51:33.083305 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-6lblg_d1c251f4-6539-4aa1-8979-47e74495aca3/kube-multus/1.log" Mar 10 18:51:33 crc kubenswrapper[4861]: I0310 18:51:33.083404 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-6lblg" event={"ID":"d1c251f4-6539-4aa1-8979-47e74495aca3","Type":"ContainerStarted","Data":"3c2c0f3e0f1bba66a24a5ab2d6d29c88d87421fe37c15f8e88503e9cd1a5d065"} Mar 10 18:51:33 crc kubenswrapper[4861]: I0310 18:51:33.957678 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 10 18:51:33 crc kubenswrapper[4861]: I0310 18:51:33.957764 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 10 18:51:33 crc kubenswrapper[4861]: I0310 18:51:33.957743 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 10 18:51:33 crc kubenswrapper[4861]: E0310 18:51:33.957910 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 10 18:51:33 crc kubenswrapper[4861]: E0310 18:51:33.958029 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 10 18:51:33 crc kubenswrapper[4861]: E0310 18:51:33.958522 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 10 18:51:34 crc kubenswrapper[4861]: I0310 18:51:34.957968 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-2rvxn" Mar 10 18:51:34 crc kubenswrapper[4861]: E0310 18:51:34.958107 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2rvxn" podUID="c06e51d0-e817-41ac-9d69-3ef2099f8ba8" Mar 10 18:51:35 crc kubenswrapper[4861]: I0310 18:51:35.957335 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 10 18:51:35 crc kubenswrapper[4861]: I0310 18:51:35.957461 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 10 18:51:35 crc kubenswrapper[4861]: E0310 18:51:35.957510 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 10 18:51:35 crc kubenswrapper[4861]: I0310 18:51:35.957359 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 10 18:51:35 crc kubenswrapper[4861]: E0310 18:51:35.957682 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 10 18:51:35 crc kubenswrapper[4861]: E0310 18:51:35.957791 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 10 18:51:36 crc kubenswrapper[4861]: I0310 18:51:36.957461 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2rvxn" Mar 10 18:51:36 crc kubenswrapper[4861]: E0310 18:51:36.960262 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2rvxn" podUID="c06e51d0-e817-41ac-9d69-3ef2099f8ba8" Mar 10 18:51:37 crc kubenswrapper[4861]: I0310 18:51:37.957764 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 10 18:51:37 crc kubenswrapper[4861]: I0310 18:51:37.957776 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 10 18:51:37 crc kubenswrapper[4861]: I0310 18:51:37.957824 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 10 18:51:37 crc kubenswrapper[4861]: I0310 18:51:37.962275 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Mar 10 18:51:37 crc kubenswrapper[4861]: I0310 18:51:37.962449 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Mar 10 18:51:37 crc kubenswrapper[4861]: I0310 18:51:37.962525 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Mar 10 18:51:37 crc kubenswrapper[4861]: I0310 18:51:37.963110 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.396193 4861 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady" Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.442513 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-9pckl"] Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.442840 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-9pckl" Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.445140 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-4d9gw"] Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.445425 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-4s2vr"] Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.445665 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-4s2vr" Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.445937 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-4d9gw" Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.447972 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-tpnft"] Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.448782 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-cffh2"] Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.449318 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-cffh2" Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.451121 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-tpnft"
Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.458384 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv"
Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.458471 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert"
Mar 10 18:51:38 crc kubenswrapper[4861]: W0310 18:51:38.458802 4861 reflector.go:561] object-"openshift-cluster-machine-approver"/"machine-approver-config": failed to list *v1.ConfigMap: configmaps "machine-approver-config" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-cluster-machine-approver": no relationship found between node 'crc' and this object
Mar 10 18:51:38 crc kubenswrapper[4861]: E0310 18:51:38.458839 4861 reflector.go:158] "Unhandled Error" err="object-\"openshift-cluster-machine-approver\"/\"machine-approver-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"machine-approver-config\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-cluster-machine-approver\": no relationship found between node 'crc' and this object" logger="UnhandledError"
Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.458942 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt"
Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.459114 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff"
Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.459243 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt"
Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.459972 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt"
Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.460170 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1"
Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.460192 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config"
Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.475503 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt"
Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.476070 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4"
Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.476281 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca"
Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.476389 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.476764 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client"
Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.476840 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle"
Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.477529 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy"
Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.477890 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt"
Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.488530 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert"
Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.488530 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.488682 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca"
Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.488763 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.488847 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.488863 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt"
Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.489121 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls"
Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.489218 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config"
Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.489272 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj"
Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.489361 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt"
Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.489369 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert"
Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.489225 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.489536 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c"
Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.489564 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1"
Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.489585 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config"
Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.490040 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-864wb"]
Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.490408 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-864wb"
Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.496418 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt"
Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.498512 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-mdswb"]
Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.499571 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle"
Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.500954 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca"
Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.501668 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-gsv8s"]
Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.501937 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-mb9ln"]
Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.502144 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-5qqcg"]
Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.502449 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-mdswb"
Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.502477 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-gsv8s"
Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.502516 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-mb9ln"
Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.502455 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-5qqcg"
Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.512047 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-g2fnd"]
Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.512461 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt"
Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.512511 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-jbxs4"]
Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.512683 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert"
Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.512844 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-zg9g7"]
Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.513177 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-zg9g7"
Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.515993 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-jbxs4"
Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.516345 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-g2fnd"
Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.517451 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca"
Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.518552 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection"
Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.518638 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit"
Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.519017 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error"
Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.519164 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data"
Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.519308 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig"
Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.519446 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt"
Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.521277 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs"
Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.521376 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session"
Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.521561 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt"
Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.522805 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-cqm4t"]
Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.523296 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-cqm4t"
Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.548914 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-rqlh4"]
Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.549893 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-j4q94"]
Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.550504 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-j4q94"
Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.551178 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-rqlh4"
Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.551563 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc"
Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.551629 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt"
Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.551800 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt"
Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.551812 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w"
Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.552015 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt"
Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.552232 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls"
Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.552359 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client"
Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.553301 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx"
Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.553423 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt"
Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.553503 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca"
Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.553549 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert"
Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.553603 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert"
Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.553692 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt"
Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.553821 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert"
Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.553930 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt"
Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.553961 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config"
Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.554016 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1"
Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.554056 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1"
Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.554098 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config"
Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.554150 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert"
Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.554167 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert"
Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.554265 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca"
Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.554283 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle"
Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.554323 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls"
Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.554357 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr"
Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.554437 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert"
Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.554565 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt"
Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.554635 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq"
Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.554109 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw"
Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.554288 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt"
Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.554961 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z"
Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.554443 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt"
Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.555127 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt"
Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.555290 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt"
Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.555604 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw"
Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.555799 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-gwxrl"]
Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.556229 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd"
Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.556265 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt"
Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.556378 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config"
Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.556486 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config"
Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.556491 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle"
Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.556561 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-9pckl"]
Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.556575 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-qm57d"]
Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.556643 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config"
Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.556837 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-gwxrl"
Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.557311 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls"
Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.557744 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-qsq94"]
Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.562287 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-qm57d"
Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.563053 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-qsq94"
Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.566351 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt"
Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.567083 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle"
Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.567231 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn"
Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.567416 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert"
Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.567549 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets"
Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.567618 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-b4s99"]
Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.568149 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt"
Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.568477 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client"
Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.591439 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login"
Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.592463 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-b4s99"
Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.595270 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/f5cbf703-ecdc-42d3-b313-f69ec71399fa-node-pullsecrets\") pod \"apiserver-76f77b778f-4d9gw\" (UID: \"f5cbf703-ecdc-42d3-b313-f69ec71399fa\") " pod="openshift-apiserver/apiserver-76f77b778f-4d9gw"
Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.595445 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/f5cbf703-ecdc-42d3-b313-f69ec71399fa-etcd-serving-ca\") pod \"apiserver-76f77b778f-4d9gw\" (UID: \"f5cbf703-ecdc-42d3-b313-f69ec71399fa\") " pod="openshift-apiserver/apiserver-76f77b778f-4d9gw"
Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.595605 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/f5cbf703-ecdc-42d3-b313-f69ec71399fa-encryption-config\") pod \"apiserver-76f77b778f-4d9gw\" (UID: \"f5cbf703-ecdc-42d3-b313-f69ec71399fa\") " pod="openshift-apiserver/apiserver-76f77b778f-4d9gw"
Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.595630 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9a07572a-a5d5-41f6-9b62-5d608b529342-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-cffh2\" (UID: \"9a07572a-a5d5-41f6-9b62-5d608b529342\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-cffh2"
Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.595834 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2b5b4c87-f6e3-4523-837d-2f45ad711489-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-tpnft\" (UID: \"2b5b4c87-f6e3-4523-837d-2f45ad711489\") " pod="openshift-controller-manager/controller-manager-879f6c89f-tpnft"
Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.595911 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/955cac3b-f6ea-4573-adc7-5271ead0cf37-auth-proxy-config\") pod \"machine-approver-56656f9798-4s2vr\" (UID: \"955cac3b-f6ea-4573-adc7-5271ead0cf37\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-4s2vr"
Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.595963 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dbhtr\" (UniqueName: \"kubernetes.io/projected/955cac3b-f6ea-4573-adc7-5271ead0cf37-kube-api-access-dbhtr\") pod \"machine-approver-56656f9798-4s2vr\" (UID: \"955cac3b-f6ea-4573-adc7-5271ead0cf37\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-4s2vr"
Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.596143 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9a07572a-a5d5-41f6-9b62-5d608b529342-config\") pod \"authentication-operator-69f744f599-cffh2\" (UID: \"9a07572a-a5d5-41f6-9b62-5d608b529342\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-cffh2"
Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.596193 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f5cbf703-ecdc-42d3-b313-f69ec71399fa-serving-cert\") pod \"apiserver-76f77b778f-4d9gw\" (UID: \"f5cbf703-ecdc-42d3-b313-f69ec71399fa\") " pod="openshift-apiserver/apiserver-76f77b778f-4d9gw"
Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.596220 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/f5cbf703-ecdc-42d3-b313-f69ec71399fa-audit\") pod \"apiserver-76f77b778f-4d9gw\" (UID: \"f5cbf703-ecdc-42d3-b313-f69ec71399fa\") " pod="openshift-apiserver/apiserver-76f77b778f-4d9gw"
Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.596235 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/955cac3b-f6ea-4573-adc7-5271ead0cf37-machine-approver-tls\") pod \"machine-approver-56656f9798-4s2vr\" (UID: \"955cac3b-f6ea-4573-adc7-5271ead0cf37\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-4s2vr"
Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.596249 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/f5cbf703-ecdc-42d3-b313-f69ec71399fa-etcd-client\") pod \"apiserver-76f77b778f-4d9gw\" (UID: \"f5cbf703-ecdc-42d3-b313-f69ec71399fa\") " pod="openshift-apiserver/apiserver-76f77b778f-4d9gw"
Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.596266 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2ndsl\" (UniqueName: \"kubernetes.io/projected/f5cbf703-ecdc-42d3-b313-f69ec71399fa-kube-api-access-2ndsl\") pod \"apiserver-76f77b778f-4d9gw\" (UID: \"f5cbf703-ecdc-42d3-b313-f69ec71399fa\") " pod="openshift-apiserver/apiserver-76f77b778f-4d9gw"
Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.596282 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9a07572a-a5d5-41f6-9b62-5d608b529342-serving-cert\") pod \"authentication-operator-69f744f599-cffh2\" (UID: \"9a07572a-a5d5-41f6-9b62-5d608b529342\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-cffh2"
Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.596248 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls"
Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.596299 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2b5b4c87-f6e3-4523-837d-2f45ad711489-config\") pod \"controller-manager-879f6c89f-tpnft\" (UID: \"2b5b4c87-f6e3-4523-837d-2f45ad711489\") " pod="openshift-controller-manager/controller-manager-879f6c89f-tpnft"
Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.596326 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2b5b4c87-f6e3-4523-837d-2f45ad711489-client-ca\") pod \"controller-manager-879f6c89f-tpnft\" (UID: \"2b5b4c87-f6e3-4523-837d-2f45ad711489\") " pod="openshift-controller-manager/controller-manager-879f6c89f-tpnft"
Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.596343 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c687b8f8-b83c-492e-833b-e3fecb23fc93-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-9pckl\" (UID: \"c687b8f8-b83c-492e-833b-e3fecb23fc93\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-9pckl"
Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.596360 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c687b8f8-b83c-492e-833b-e3fecb23fc93-config\") pod \"openshift-apiserver-operator-796bbdcf4f-9pckl\" (UID: \"c687b8f8-b83c-492e-833b-e3fecb23fc93\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-9pckl"
Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.596378 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vj67h\" (UniqueName: \"kubernetes.io/projected/c687b8f8-b83c-492e-833b-e3fecb23fc93-kube-api-access-vj67h\") pod \"openshift-apiserver-operator-796bbdcf4f-9pckl\" (UID: \"c687b8f8-b83c-492e-833b-e3fecb23fc93\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-9pckl"
Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.596395 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/f5cbf703-ecdc-42d3-b313-f69ec71399fa-image-import-ca\") pod \"apiserver-76f77b778f-4d9gw\" (UID: \"f5cbf703-ecdc-42d3-b313-f69ec71399fa\") " pod="openshift-apiserver/apiserver-76f77b778f-4d9gw"
Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.596410 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5jcs7\" (UniqueName: \"kubernetes.io/projected/9a07572a-a5d5-41f6-9b62-5d608b529342-kube-api-access-5jcs7\") pod \"authentication-operator-69f744f599-cffh2\" (UID: \"9a07572a-a5d5-41f6-9b62-5d608b529342\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-cffh2"
Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.596428 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pgvkk\" (UniqueName: \"kubernetes.io/projected/2b5b4c87-f6e3-4523-837d-2f45ad711489-kube-api-access-pgvkk\") pod \"controller-manager-879f6c89f-tpnft\" (UID: \"2b5b4c87-f6e3-4523-837d-2f45ad711489\") " pod="openshift-controller-manager/controller-manager-879f6c89f-tpnft"
Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.596446 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9a07572a-a5d5-41f6-9b62-5d608b529342-service-ca-bundle\") pod \"authentication-operator-69f744f599-cffh2\" (UID: \"9a07572a-a5d5-41f6-9b62-5d608b529342\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-cffh2"
Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.596468 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f5cbf703-ecdc-42d3-b313-f69ec71399fa-trusted-ca-bundle\") pod \"apiserver-76f77b778f-4d9gw\" (UID: \"f5cbf703-ecdc-42d3-b313-f69ec71399fa\") " pod="openshift-apiserver/apiserver-76f77b778f-4d9gw"
Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.596482 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/955cac3b-f6ea-4573-adc7-5271ead0cf37-config\") pod \"machine-approver-56656f9798-4s2vr\" (UID: \"955cac3b-f6ea-4573-adc7-5271ead0cf37\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-4s2vr"
Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.596499 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2b5b4c87-f6e3-4523-837d-2f45ad711489-serving-cert\") pod \"controller-manager-879f6c89f-tpnft\" (UID: \"2b5b4c87-f6e3-4523-837d-2f45ad711489\") " pod="openshift-controller-manager/controller-manager-879f6c89f-tpnft"
Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.596521 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f5cbf703-ecdc-42d3-b313-f69ec71399fa-config\") pod \"apiserver-76f77b778f-4d9gw\" (UID: \"f5cbf703-ecdc-42d3-b313-f69ec71399fa\") " pod="openshift-apiserver/apiserver-76f77b778f-4d9gw"
Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.596538 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f5cbf703-ecdc-42d3-b313-f69ec71399fa-audit-dir\") pod \"apiserver-76f77b778f-4d9gw\" (UID: \"f5cbf703-ecdc-42d3-b313-f69ec71399fa\") " pod="openshift-apiserver/apiserver-76f77b778f-4d9gw"
Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.597217 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5"
Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.597312 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle"
Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.597899 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle"
Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.598859 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-8wcvl"]
Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.603034 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle"
Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.609279 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca"
Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.611021 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt"
Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.611447 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca"
Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.612180 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-rzhxp"]
Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.612529 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-525cw"]
Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.612796 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-8wcvl"
Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.612882 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-tpnft"]
Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.612901 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-7qnc2"]
Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.613067 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-rzhxp"
Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.613258 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-t52tz"]
Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.613473 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-525cw"
Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.613600 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29552805-vbm4k"]
Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.613779 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-7qnc2" Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.613966 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-dwxk9"] Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.614036 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-t52tz" Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.614117 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29552805-vbm4k" Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.614424 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-dwxk9" Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.617357 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.623317 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-x2llm"] Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.623881 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-x2llm" Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.627047 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-9dcxh"] Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.627519 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-cffh2"] Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.627536 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-9g768"] Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.627985 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-9g768" Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.628479 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-9dcxh" Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.634277 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29552810-wv88x"] Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.634856 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29552810-wv88x" Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.637146 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-9khgp"] Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.637822 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-864wb"] Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.637926 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-9khgp" Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.639092 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-2m858"] Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.639572 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-2m858" Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.639614 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-jbgkh"] Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.640785 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-jbgkh" Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.642401 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-zgxhw"] Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.643055 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-zgxhw" Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.645809 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-9x9tv"] Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.650885 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.651936 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-qm6j4"] Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.652237 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-9x9tv" Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.654488 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-qm6j4" Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.670966 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.671779 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-4d9gw"] Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.671822 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-456qk"] Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.672954 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-nmc5h"] Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.673121 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-456qk" Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.673360 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-nmc5h" Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.679346 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-jbmn2"] Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.680036 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-jbmn2" Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.681049 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-gsv8s"] Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.682110 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-j4q94"] Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.683535 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-jbxs4"] Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.684518 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-mdswb"] Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.685013 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.685566 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-qm57d"] Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.687089 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-mb9ln"] Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.689625 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-cqm4t"] Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.690689 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-525cw"] Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.693607 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-operator-lifecycle-manager/collect-profiles-29552805-vbm4k"] Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.693665 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-x2llm"] Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.693677 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-5qqcg"] Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.695213 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-t52tz"] Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.696138 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-g2fnd"] Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.697125 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-gwxrl"] Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.697908 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d4ea6578-d8bd-4dbc-baf4-4e6210419574-serving-cert\") pod \"etcd-operator-b45778765-cqm4t\" (UID: \"d4ea6578-d8bd-4dbc-baf4-4e6210419574\") " pod="openshift-etcd-operator/etcd-operator-b45778765-cqm4t" Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.697938 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rgp8h\" (UniqueName: \"kubernetes.io/projected/d4ea6578-d8bd-4dbc-baf4-4e6210419574-kube-api-access-rgp8h\") pod \"etcd-operator-b45778765-cqm4t\" (UID: \"d4ea6578-d8bd-4dbc-baf4-4e6210419574\") " pod="openshift-etcd-operator/etcd-operator-b45778765-cqm4t" Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.697960 4861 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/754666ad-657a-4e98-8feb-47f74d33dc3c-bound-sa-token\") pod \"ingress-operator-5b745b69d9-b4s99\" (UID: \"754666ad-657a-4e98-8feb-47f74d33dc3c\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-b4s99" Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.697984 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k2zz2\" (UniqueName: \"kubernetes.io/projected/ccb02870-5d18-43c9-950d-042c52c092c3-kube-api-access-k2zz2\") pod \"oauth-openshift-558db77b4-864wb\" (UID: \"ccb02870-5d18-43c9-950d-042c52c092c3\") " pod="openshift-authentication/oauth-openshift-558db77b4-864wb" Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.698004 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wh48n\" (UniqueName: \"kubernetes.io/projected/81f400f6-9912-4f4a-9d98-877c303a60ec-kube-api-access-wh48n\") pod \"cluster-image-registry-operator-dc59b4c8b-g2fnd\" (UID: \"81f400f6-9912-4f4a-9d98-877c303a60ec\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-g2fnd" Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.698027 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/48290e46-c619-4673-86b7-bf96f136d693-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-5qqcg\" (UID: \"48290e46-c619-4673-86b7-bf96f136d693\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-5qqcg" Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.698045 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/ccb02870-5d18-43c9-950d-042c52c092c3-v4-0-config-system-cliconfig\") pod 
\"oauth-openshift-558db77b4-864wb\" (UID: \"ccb02870-5d18-43c9-950d-042c52c092c3\") " pod="openshift-authentication/oauth-openshift-558db77b4-864wb" Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.698062 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dd17f745-4fce-4459-a068-94313e612723-config\") pod \"machine-api-operator-5694c8668f-gwxrl\" (UID: \"dd17f745-4fce-4459-a068-94313e612723\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-gwxrl" Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.698086 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-jbgkh"] Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.698098 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c58d61da-f37b-4616-9289-adebfd1890ae-config\") pod \"console-operator-58897d9998-gsv8s\" (UID: \"c58d61da-f37b-4616-9289-adebfd1890ae\") " pod="openshift-console-operator/console-operator-58897d9998-gsv8s" Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.698128 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f5cbf703-ecdc-42d3-b313-f69ec71399fa-serving-cert\") pod \"apiserver-76f77b778f-4d9gw\" (UID: \"f5cbf703-ecdc-42d3-b313-f69ec71399fa\") " pod="openshift-apiserver/apiserver-76f77b778f-4d9gw" Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.698146 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/ccb02870-5d18-43c9-950d-042c52c092c3-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-864wb\" (UID: \"ccb02870-5d18-43c9-950d-042c52c092c3\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-864wb" Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.698164 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/ccb02870-5d18-43c9-950d-042c52c092c3-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-864wb\" (UID: \"ccb02870-5d18-43c9-950d-042c52c092c3\") " pod="openshift-authentication/oauth-openshift-558db77b4-864wb" Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.698187 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9896da3e-4505-4ced-b1e7-cd47a951971e-client-ca\") pod \"route-controller-manager-6576b87f9c-qm57d\" (UID: \"9896da3e-4505-4ced-b1e7-cd47a951971e\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-qm57d" Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.698207 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rdgwl\" (UniqueName: \"kubernetes.io/projected/48290e46-c619-4673-86b7-bf96f136d693-kube-api-access-rdgwl\") pod \"apiserver-7bbb656c7d-5qqcg\" (UID: \"48290e46-c619-4673-86b7-bf96f136d693\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-5qqcg" Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.698223 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/754666ad-657a-4e98-8feb-47f74d33dc3c-trusted-ca\") pod \"ingress-operator-5b745b69d9-b4s99\" (UID: \"754666ad-657a-4e98-8feb-47f74d33dc3c\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-b4s99" Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.698242 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"kube-api-access-p9phb\" (UniqueName: \"kubernetes.io/projected/ed642e8b-2ae6-4db9-a035-0a568c32c47b-kube-api-access-p9phb\") pod \"openshift-controller-manager-operator-756b6f6bc6-mb9ln\" (UID: \"ed642e8b-2ae6-4db9-a035-0a568c32c47b\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-mb9ln" Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.698277 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/a8e67548-adc0-4c5c-8115-efff12bad9ae-available-featuregates\") pod \"openshift-config-operator-7777fb866f-jbxs4\" (UID: \"a8e67548-adc0-4c5c-8115-efff12bad9ae\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-jbxs4" Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.698292 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nj8j2\" (UniqueName: \"kubernetes.io/projected/a8e67548-adc0-4c5c-8115-efff12bad9ae-kube-api-access-nj8j2\") pod \"openshift-config-operator-7777fb866f-jbxs4\" (UID: \"a8e67548-adc0-4c5c-8115-efff12bad9ae\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-jbxs4" Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.698311 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fprcp\" (UniqueName: \"kubernetes.io/projected/7660cda2-b1cc-43a9-b2d2-8e39f7b2e5b1-kube-api-access-fprcp\") pod \"cluster-samples-operator-665b6dd947-mdswb\" (UID: \"7660cda2-b1cc-43a9-b2d2-8e39f7b2e5b1\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-mdswb" Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.698327 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: 
\"kubernetes.io/configmap/ccb02870-5d18-43c9-950d-042c52c092c3-audit-policies\") pod \"oauth-openshift-558db77b4-864wb\" (UID: \"ccb02870-5d18-43c9-950d-042c52c092c3\") " pod="openshift-authentication/oauth-openshift-558db77b4-864wb" Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.698345 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/f5cbf703-ecdc-42d3-b313-f69ec71399fa-audit\") pod \"apiserver-76f77b778f-4d9gw\" (UID: \"f5cbf703-ecdc-42d3-b313-f69ec71399fa\") " pod="openshift-apiserver/apiserver-76f77b778f-4d9gw" Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.698363 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/955cac3b-f6ea-4573-adc7-5271ead0cf37-machine-approver-tls\") pod \"machine-approver-56656f9798-4s2vr\" (UID: \"955cac3b-f6ea-4573-adc7-5271ead0cf37\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-4s2vr" Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.698381 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9a07572a-a5d5-41f6-9b62-5d608b529342-serving-cert\") pod \"authentication-operator-69f744f599-cffh2\" (UID: \"9a07572a-a5d5-41f6-9b62-5d608b529342\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-cffh2" Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.698397 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/7660cda2-b1cc-43a9-b2d2-8e39f7b2e5b1-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-mdswb\" (UID: \"7660cda2-b1cc-43a9-b2d2-8e39f7b2e5b1\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-mdswb" Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 
18:51:38.698415 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5tt92\" (UniqueName: \"kubernetes.io/projected/78f966e8-d4fe-4a97-a04e-9bf8b1b62d6c-kube-api-access-5tt92\") pod \"dns-operator-744455d44c-rqlh4\" (UID: \"78f966e8-d4fe-4a97-a04e-9bf8b1b62d6c\") " pod="openshift-dns-operator/dns-operator-744455d44c-rqlh4" Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.698431 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/ccb02870-5d18-43c9-950d-042c52c092c3-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-864wb\" (UID: \"ccb02870-5d18-43c9-950d-042c52c092c3\") " pod="openshift-authentication/oauth-openshift-558db77b4-864wb" Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.698448 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/ccb02870-5d18-43c9-950d-042c52c092c3-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-864wb\" (UID: \"ccb02870-5d18-43c9-950d-042c52c092c3\") " pod="openshift-authentication/oauth-openshift-558db77b4-864wb" Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.698467 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/f5cbf703-ecdc-42d3-b313-f69ec71399fa-etcd-client\") pod \"apiserver-76f77b778f-4d9gw\" (UID: \"f5cbf703-ecdc-42d3-b313-f69ec71399fa\") " pod="openshift-apiserver/apiserver-76f77b778f-4d9gw" Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.698484 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2ndsl\" (UniqueName: \"kubernetes.io/projected/f5cbf703-ecdc-42d3-b313-f69ec71399fa-kube-api-access-2ndsl\") pod 
\"apiserver-76f77b778f-4d9gw\" (UID: \"f5cbf703-ecdc-42d3-b313-f69ec71399fa\") " pod="openshift-apiserver/apiserver-76f77b778f-4d9gw" Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.698503 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/dd17f745-4fce-4459-a068-94313e612723-images\") pod \"machine-api-operator-5694c8668f-gwxrl\" (UID: \"dd17f745-4fce-4459-a068-94313e612723\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-gwxrl" Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.698521 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a8e67548-adc0-4c5c-8115-efff12bad9ae-serving-cert\") pod \"openshift-config-operator-7777fb866f-jbxs4\" (UID: \"a8e67548-adc0-4c5c-8115-efff12bad9ae\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-jbxs4" Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.698541 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/48290e46-c619-4673-86b7-bf96f136d693-audit-dir\") pod \"apiserver-7bbb656c7d-5qqcg\" (UID: \"48290e46-c619-4673-86b7-bf96f136d693\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-5qqcg" Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.698559 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cwdtx\" (UniqueName: \"kubernetes.io/projected/9896da3e-4505-4ced-b1e7-cd47a951971e-kube-api-access-cwdtx\") pod \"route-controller-manager-6576b87f9c-qm57d\" (UID: \"9896da3e-4505-4ced-b1e7-cd47a951971e\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-qm57d" Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.698576 4861 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2b5b4c87-f6e3-4523-837d-2f45ad711489-config\") pod \"controller-manager-879f6c89f-tpnft\" (UID: \"2b5b4c87-f6e3-4523-837d-2f45ad711489\") " pod="openshift-controller-manager/controller-manager-879f6c89f-tpnft" Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.698594 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2b5b4c87-f6e3-4523-837d-2f45ad711489-client-ca\") pod \"controller-manager-879f6c89f-tpnft\" (UID: \"2b5b4c87-f6e3-4523-837d-2f45ad711489\") " pod="openshift-controller-manager/controller-manager-879f6c89f-tpnft" Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.698613 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c687b8f8-b83c-492e-833b-e3fecb23fc93-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-9pckl\" (UID: \"c687b8f8-b83c-492e-833b-e3fecb23fc93\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-9pckl" Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.698630 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c687b8f8-b83c-492e-833b-e3fecb23fc93-config\") pod \"openshift-apiserver-operator-796bbdcf4f-9pckl\" (UID: \"c687b8f8-b83c-492e-833b-e3fecb23fc93\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-9pckl" Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.698647 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vj67h\" (UniqueName: \"kubernetes.io/projected/c687b8f8-b83c-492e-833b-e3fecb23fc93-kube-api-access-vj67h\") pod \"openshift-apiserver-operator-796bbdcf4f-9pckl\" (UID: \"c687b8f8-b83c-492e-833b-e3fecb23fc93\") " 
pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-9pckl" Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.698663 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/f5cbf703-ecdc-42d3-b313-f69ec71399fa-image-import-ca\") pod \"apiserver-76f77b778f-4d9gw\" (UID: \"f5cbf703-ecdc-42d3-b313-f69ec71399fa\") " pod="openshift-apiserver/apiserver-76f77b778f-4d9gw" Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.698677 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5jcs7\" (UniqueName: \"kubernetes.io/projected/9a07572a-a5d5-41f6-9b62-5d608b529342-kube-api-access-5jcs7\") pod \"authentication-operator-69f744f599-cffh2\" (UID: \"9a07572a-a5d5-41f6-9b62-5d608b529342\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-cffh2" Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.698694 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/ccb02870-5d18-43c9-950d-042c52c092c3-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-864wb\" (UID: \"ccb02870-5d18-43c9-950d-042c52c092c3\") " pod="openshift-authentication/oauth-openshift-558db77b4-864wb" Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.698713 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mmxj2\" (UniqueName: \"kubernetes.io/projected/dd17f745-4fce-4459-a068-94313e612723-kube-api-access-mmxj2\") pod \"machine-api-operator-5694c8668f-gwxrl\" (UID: \"dd17f745-4fce-4459-a068-94313e612723\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-gwxrl" Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.698759 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c58d61da-f37b-4616-9289-adebfd1890ae-trusted-ca\") pod \"console-operator-58897d9998-gsv8s\" (UID: \"c58d61da-f37b-4616-9289-adebfd1890ae\") " pod="openshift-console-operator/console-operator-58897d9998-gsv8s" Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.698776 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/ccb02870-5d18-43c9-950d-042c52c092c3-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-864wb\" (UID: \"ccb02870-5d18-43c9-950d-042c52c092c3\") " pod="openshift-authentication/oauth-openshift-558db77b4-864wb" Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.698792 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/3db8cb04-007c-48f9-a986-7a503ca1c077-service-ca\") pod \"console-f9d7485db-zg9g7\" (UID: \"3db8cb04-007c-48f9-a986-7a503ca1c077\") " pod="openshift-console/console-f9d7485db-zg9g7" Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.698809 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pgvkk\" (UniqueName: \"kubernetes.io/projected/2b5b4c87-f6e3-4523-837d-2f45ad711489-kube-api-access-pgvkk\") pod \"controller-manager-879f6c89f-tpnft\" (UID: \"2b5b4c87-f6e3-4523-837d-2f45ad711489\") " pod="openshift-controller-manager/controller-manager-879f6c89f-tpnft" Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.698825 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/48290e46-c619-4673-86b7-bf96f136d693-serving-cert\") pod \"apiserver-7bbb656c7d-5qqcg\" (UID: \"48290e46-c619-4673-86b7-bf96f136d693\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-5qqcg" Mar 10 18:51:38 crc 
kubenswrapper[4861]: I0310 18:51:38.698842 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/78f966e8-d4fe-4a97-a04e-9bf8b1b62d6c-metrics-tls\") pod \"dns-operator-744455d44c-rqlh4\" (UID: \"78f966e8-d4fe-4a97-a04e-9bf8b1b62d6c\") " pod="openshift-dns-operator/dns-operator-744455d44c-rqlh4" Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.698858 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/ccb02870-5d18-43c9-950d-042c52c092c3-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-864wb\" (UID: \"ccb02870-5d18-43c9-950d-042c52c092c3\") " pod="openshift-authentication/oauth-openshift-558db77b4-864wb" Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.698875 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/3db8cb04-007c-48f9-a986-7a503ca1c077-oauth-serving-cert\") pod \"console-f9d7485db-zg9g7\" (UID: \"3db8cb04-007c-48f9-a986-7a503ca1c077\") " pod="openshift-console/console-f9d7485db-zg9g7" Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.698900 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9a07572a-a5d5-41f6-9b62-5d608b529342-service-ca-bundle\") pod \"authentication-operator-69f744f599-cffh2\" (UID: \"9a07572a-a5d5-41f6-9b62-5d608b529342\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-cffh2" Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.698920 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f5cbf703-ecdc-42d3-b313-f69ec71399fa-trusted-ca-bundle\") pod 
\"apiserver-76f77b778f-4d9gw\" (UID: \"f5cbf703-ecdc-42d3-b313-f69ec71399fa\") " pod="openshift-apiserver/apiserver-76f77b778f-4d9gw" Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.698935 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/955cac3b-f6ea-4573-adc7-5271ead0cf37-config\") pod \"machine-approver-56656f9798-4s2vr\" (UID: \"955cac3b-f6ea-4573-adc7-5271ead0cf37\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-4s2vr" Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.698952 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/81f400f6-9912-4f4a-9d98-877c303a60ec-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-g2fnd\" (UID: \"81f400f6-9912-4f4a-9d98-877c303a60ec\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-g2fnd" Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.698968 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9rsgb\" (UniqueName: \"kubernetes.io/projected/754666ad-657a-4e98-8feb-47f74d33dc3c-kube-api-access-9rsgb\") pod \"ingress-operator-5b745b69d9-b4s99\" (UID: \"754666ad-657a-4e98-8feb-47f74d33dc3c\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-b4s99" Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.698994 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2b5b4c87-f6e3-4523-837d-2f45ad711489-serving-cert\") pod \"controller-manager-879f6c89f-tpnft\" (UID: \"2b5b4c87-f6e3-4523-837d-2f45ad711489\") " pod="openshift-controller-manager/controller-manager-879f6c89f-tpnft" Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.699013 4861 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/48290e46-c619-4673-86b7-bf96f136d693-audit-policies\") pod \"apiserver-7bbb656c7d-5qqcg\" (UID: \"48290e46-c619-4673-86b7-bf96f136d693\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-5qqcg" Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.699027 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/48290e46-c619-4673-86b7-bf96f136d693-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-5qqcg\" (UID: \"48290e46-c619-4673-86b7-bf96f136d693\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-5qqcg" Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.699043 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/81f400f6-9912-4f4a-9d98-877c303a60ec-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-g2fnd\" (UID: \"81f400f6-9912-4f4a-9d98-877c303a60ec\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-g2fnd" Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.699059 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/81f400f6-9912-4f4a-9d98-877c303a60ec-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-g2fnd\" (UID: \"81f400f6-9912-4f4a-9d98-877c303a60ec\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-g2fnd" Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.699081 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f5cbf703-ecdc-42d3-b313-f69ec71399fa-config\") pod \"apiserver-76f77b778f-4d9gw\" (UID: 
\"f5cbf703-ecdc-42d3-b313-f69ec71399fa\") " pod="openshift-apiserver/apiserver-76f77b778f-4d9gw" Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.699096 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/48290e46-c619-4673-86b7-bf96f136d693-encryption-config\") pod \"apiserver-7bbb656c7d-5qqcg\" (UID: \"48290e46-c619-4673-86b7-bf96f136d693\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-5qqcg" Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.699113 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/dd17f745-4fce-4459-a068-94313e612723-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-gwxrl\" (UID: \"dd17f745-4fce-4459-a068-94313e612723\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-gwxrl" Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.699126 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-456qk"] Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.699127 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h9m8m\" (UniqueName: \"kubernetes.io/projected/c58d61da-f37b-4616-9289-adebfd1890ae-kube-api-access-h9m8m\") pod \"console-operator-58897d9998-gsv8s\" (UID: \"c58d61da-f37b-4616-9289-adebfd1890ae\") " pod="openshift-console-operator/console-operator-58897d9998-gsv8s" Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.699196 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f5cbf703-ecdc-42d3-b313-f69ec71399fa-audit-dir\") pod \"apiserver-76f77b778f-4d9gw\" (UID: \"f5cbf703-ecdc-42d3-b313-f69ec71399fa\") " pod="openshift-apiserver/apiserver-76f77b778f-4d9gw" Mar 
10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.700560 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2b5b4c87-f6e3-4523-837d-2f45ad711489-client-ca\") pod \"controller-manager-879f6c89f-tpnft\" (UID: \"2b5b4c87-f6e3-4523-837d-2f45ad711489\") " pod="openshift-controller-manager/controller-manager-879f6c89f-tpnft" Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.700605 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/d4ea6578-d8bd-4dbc-baf4-4e6210419574-etcd-ca\") pod \"etcd-operator-b45778765-cqm4t\" (UID: \"d4ea6578-d8bd-4dbc-baf4-4e6210419574\") " pod="openshift-etcd-operator/etcd-operator-b45778765-cqm4t" Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.700625 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/ccb02870-5d18-43c9-950d-042c52c092c3-audit-dir\") pod \"oauth-openshift-558db77b4-864wb\" (UID: \"ccb02870-5d18-43c9-950d-042c52c092c3\") " pod="openshift-authentication/oauth-openshift-558db77b4-864wb" Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.700641 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ccb02870-5d18-43c9-950d-042c52c092c3-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-864wb\" (UID: \"ccb02870-5d18-43c9-950d-042c52c092c3\") " pod="openshift-authentication/oauth-openshift-558db77b4-864wb" Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.700661 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-22mdp\" (UniqueName: \"kubernetes.io/projected/3db8cb04-007c-48f9-a986-7a503ca1c077-kube-api-access-22mdp\") pod 
\"console-f9d7485db-zg9g7\" (UID: \"3db8cb04-007c-48f9-a986-7a503ca1c077\") " pod="openshift-console/console-f9d7485db-zg9g7" Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.700691 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/f5cbf703-ecdc-42d3-b313-f69ec71399fa-node-pullsecrets\") pod \"apiserver-76f77b778f-4d9gw\" (UID: \"f5cbf703-ecdc-42d3-b313-f69ec71399fa\") " pod="openshift-apiserver/apiserver-76f77b778f-4d9gw" Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.700711 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/d4ea6578-d8bd-4dbc-baf4-4e6210419574-etcd-client\") pod \"etcd-operator-b45778765-cqm4t\" (UID: \"d4ea6578-d8bd-4dbc-baf4-4e6210419574\") " pod="openshift-etcd-operator/etcd-operator-b45778765-cqm4t" Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.700741 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/ccb02870-5d18-43c9-950d-042c52c092c3-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-864wb\" (UID: \"ccb02870-5d18-43c9-950d-042c52c092c3\") " pod="openshift-authentication/oauth-openshift-558db77b4-864wb" Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.700758 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9896da3e-4505-4ced-b1e7-cd47a951971e-config\") pod \"route-controller-manager-6576b87f9c-qm57d\" (UID: \"9896da3e-4505-4ced-b1e7-cd47a951971e\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-qm57d" Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.700774 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/754666ad-657a-4e98-8feb-47f74d33dc3c-metrics-tls\") pod \"ingress-operator-5b745b69d9-b4s99\" (UID: \"754666ad-657a-4e98-8feb-47f74d33dc3c\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-b4s99" Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.700791 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/3db8cb04-007c-48f9-a986-7a503ca1c077-console-oauth-config\") pod \"console-f9d7485db-zg9g7\" (UID: \"3db8cb04-007c-48f9-a986-7a503ca1c077\") " pod="openshift-console/console-f9d7485db-zg9g7" Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.700808 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/f5cbf703-ecdc-42d3-b313-f69ec71399fa-etcd-serving-ca\") pod \"apiserver-76f77b778f-4d9gw\" (UID: \"f5cbf703-ecdc-42d3-b313-f69ec71399fa\") " pod="openshift-apiserver/apiserver-76f77b778f-4d9gw" Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.700824 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/f5cbf703-ecdc-42d3-b313-f69ec71399fa-encryption-config\") pod \"apiserver-76f77b778f-4d9gw\" (UID: \"f5cbf703-ecdc-42d3-b313-f69ec71399fa\") " pod="openshift-apiserver/apiserver-76f77b778f-4d9gw" Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.700843 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9a07572a-a5d5-41f6-9b62-5d608b529342-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-cffh2\" (UID: \"9a07572a-a5d5-41f6-9b62-5d608b529342\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-cffh2" Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.700876 4861 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d4ea6578-d8bd-4dbc-baf4-4e6210419574-config\") pod \"etcd-operator-b45778765-cqm4t\" (UID: \"d4ea6578-d8bd-4dbc-baf4-4e6210419574\") " pod="openshift-etcd-operator/etcd-operator-b45778765-cqm4t" Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.700893 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/ccb02870-5d18-43c9-950d-042c52c092c3-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-864wb\" (UID: \"ccb02870-5d18-43c9-950d-042c52c092c3\") " pod="openshift-authentication/oauth-openshift-558db77b4-864wb" Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.700936 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/3db8cb04-007c-48f9-a986-7a503ca1c077-console-serving-cert\") pod \"console-f9d7485db-zg9g7\" (UID: \"3db8cb04-007c-48f9-a986-7a503ca1c077\") " pod="openshift-console/console-f9d7485db-zg9g7" Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.700954 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9896da3e-4505-4ced-b1e7-cd47a951971e-serving-cert\") pod \"route-controller-manager-6576b87f9c-qm57d\" (UID: \"9896da3e-4505-4ced-b1e7-cd47a951971e\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-qm57d" Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.700970 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ed642e8b-2ae6-4db9-a035-0a568c32c47b-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-mb9ln\" 
(UID: \"ed642e8b-2ae6-4db9-a035-0a568c32c47b\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-mb9ln" Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.700984 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ed642e8b-2ae6-4db9-a035-0a568c32c47b-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-mb9ln\" (UID: \"ed642e8b-2ae6-4db9-a035-0a568c32c47b\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-mb9ln" Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.701003 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2b5b4c87-f6e3-4523-837d-2f45ad711489-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-tpnft\" (UID: \"2b5b4c87-f6e3-4523-837d-2f45ad711489\") " pod="openshift-controller-manager/controller-manager-879f6c89f-tpnft" Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.701023 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dbhtr\" (UniqueName: \"kubernetes.io/projected/955cac3b-f6ea-4573-adc7-5271ead0cf37-kube-api-access-dbhtr\") pod \"machine-approver-56656f9798-4s2vr\" (UID: \"955cac3b-f6ea-4573-adc7-5271ead0cf37\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-4s2vr" Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.701039 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c58d61da-f37b-4616-9289-adebfd1890ae-serving-cert\") pod \"console-operator-58897d9998-gsv8s\" (UID: \"c58d61da-f37b-4616-9289-adebfd1890ae\") " pod="openshift-console-operator/console-operator-58897d9998-gsv8s" Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.701057 4861 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/955cac3b-f6ea-4573-adc7-5271ead0cf37-auth-proxy-config\") pod \"machine-approver-56656f9798-4s2vr\" (UID: \"955cac3b-f6ea-4573-adc7-5271ead0cf37\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-4s2vr" Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.701491 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/f5cbf703-ecdc-42d3-b313-f69ec71399fa-audit\") pod \"apiserver-76f77b778f-4d9gw\" (UID: \"f5cbf703-ecdc-42d3-b313-f69ec71399fa\") " pod="openshift-apiserver/apiserver-76f77b778f-4d9gw" Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.703063 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-9khgp"] Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.703091 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-qsq94"] Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.703100 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-zg9g7"] Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.706519 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-dwxk9"] Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.706549 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-8wcvl"] Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.706560 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-zgxhw"] Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.709976 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-9dcxh"] Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.710008 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-rqlh4"] Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.710021 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29552810-wv88x"] Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.713022 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c687b8f8-b83c-492e-833b-e3fecb23fc93-config\") pod \"openshift-apiserver-operator-796bbdcf4f-9pckl\" (UID: \"c687b8f8-b83c-492e-833b-e3fecb23fc93\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-9pckl" Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.713834 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-dk9rs"] Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.713836 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9a07572a-a5d5-41f6-9b62-5d608b529342-service-ca-bundle\") pod \"authentication-operator-69f744f599-cffh2\" (UID: \"9a07572a-a5d5-41f6-9b62-5d608b529342\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-cffh2" Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.714089 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/f5cbf703-ecdc-42d3-b313-f69ec71399fa-image-import-ca\") pod \"apiserver-76f77b778f-4d9gw\" (UID: \"f5cbf703-ecdc-42d3-b313-f69ec71399fa\") " pod="openshift-apiserver/apiserver-76f77b778f-4d9gw" Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.714502 4861 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-machine-api"/"machine-api-operator-tls" Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.714709 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-kvtz7"] Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.715159 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-b4s99"] Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.715244 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-kvtz7" Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.715890 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-dk9rs" Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.719562 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2b5b4c87-f6e3-4523-837d-2f45ad711489-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-tpnft\" (UID: \"2b5b4c87-f6e3-4523-837d-2f45ad711489\") " pod="openshift-controller-manager/controller-manager-879f6c89f-tpnft" Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.719841 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f5cbf703-ecdc-42d3-b313-f69ec71399fa-audit-dir\") pod \"apiserver-76f77b778f-4d9gw\" (UID: \"f5cbf703-ecdc-42d3-b313-f69ec71399fa\") " pod="openshift-apiserver/apiserver-76f77b778f-4d9gw" Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.720309 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9a07572a-a5d5-41f6-9b62-5d608b529342-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-cffh2\" (UID: \"9a07572a-a5d5-41f6-9b62-5d608b529342\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-cffh2" Mar 
10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.720370 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-9g768"] Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.720817 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f5cbf703-ecdc-42d3-b313-f69ec71399fa-trusted-ca-bundle\") pod \"apiserver-76f77b778f-4d9gw\" (UID: \"f5cbf703-ecdc-42d3-b313-f69ec71399fa\") " pod="openshift-apiserver/apiserver-76f77b778f-4d9gw" Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.720899 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-7qnc2"] Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.720913 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f5cbf703-ecdc-42d3-b313-f69ec71399fa-config\") pod \"apiserver-76f77b778f-4d9gw\" (UID: \"f5cbf703-ecdc-42d3-b313-f69ec71399fa\") " pod="openshift-apiserver/apiserver-76f77b778f-4d9gw" Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.721056 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/955cac3b-f6ea-4573-adc7-5271ead0cf37-auth-proxy-config\") pod \"machine-approver-56656f9798-4s2vr\" (UID: \"955cac3b-f6ea-4573-adc7-5271ead0cf37\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-4s2vr" Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.722170 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2b5b4c87-f6e3-4523-837d-2f45ad711489-config\") pod \"controller-manager-879f6c89f-tpnft\" (UID: \"2b5b4c87-f6e3-4523-837d-2f45ad711489\") " pod="openshift-controller-manager/controller-manager-879f6c89f-tpnft" Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 
18:51:38.722244 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/f5cbf703-ecdc-42d3-b313-f69ec71399fa-etcd-serving-ca\") pod \"apiserver-76f77b778f-4d9gw\" (UID: \"f5cbf703-ecdc-42d3-b313-f69ec71399fa\") " pod="openshift-apiserver/apiserver-76f77b778f-4d9gw" Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.722250 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3db8cb04-007c-48f9-a986-7a503ca1c077-trusted-ca-bundle\") pod \"console-f9d7485db-zg9g7\" (UID: \"3db8cb04-007c-48f9-a986-7a503ca1c077\") " pod="openshift-console/console-f9d7485db-zg9g7" Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.722295 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/f5cbf703-ecdc-42d3-b313-f69ec71399fa-node-pullsecrets\") pod \"apiserver-76f77b778f-4d9gw\" (UID: \"f5cbf703-ecdc-42d3-b313-f69ec71399fa\") " pod="openshift-apiserver/apiserver-76f77b778f-4d9gw" Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.722302 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-2m858"] Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.722361 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/48290e46-c619-4673-86b7-bf96f136d693-etcd-client\") pod \"apiserver-7bbb656c7d-5qqcg\" (UID: \"48290e46-c619-4673-86b7-bf96f136d693\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-5qqcg" Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.722404 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9a07572a-a5d5-41f6-9b62-5d608b529342-config\") pod 
\"authentication-operator-69f744f599-cffh2\" (UID: \"9a07572a-a5d5-41f6-9b62-5d608b529342\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-cffh2" Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.722448 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/3db8cb04-007c-48f9-a986-7a503ca1c077-console-config\") pod \"console-f9d7485db-zg9g7\" (UID: \"3db8cb04-007c-48f9-a986-7a503ca1c077\") " pod="openshift-console/console-f9d7485db-zg9g7" Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.722472 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/d4ea6578-d8bd-4dbc-baf4-4e6210419574-etcd-service-ca\") pod \"etcd-operator-b45778765-cqm4t\" (UID: \"d4ea6578-d8bd-4dbc-baf4-4e6210419574\") " pod="openshift-etcd-operator/etcd-operator-b45778765-cqm4t" Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.723006 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-9x9tv"] Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.723893 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c687b8f8-b83c-492e-833b-e3fecb23fc93-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-9pckl\" (UID: \"c687b8f8-b83c-492e-833b-e3fecb23fc93\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-9pckl" Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.724176 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9a07572a-a5d5-41f6-9b62-5d608b529342-config\") pod \"authentication-operator-69f744f599-cffh2\" (UID: \"9a07572a-a5d5-41f6-9b62-5d608b529342\") " 
pod="openshift-authentication-operator/authentication-operator-69f744f599-cffh2" Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.724210 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-jbmn2"] Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.725060 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.725204 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-kvtz7"] Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.750417 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9a07572a-a5d5-41f6-9b62-5d608b529342-serving-cert\") pod \"authentication-operator-69f744f599-cffh2\" (UID: \"9a07572a-a5d5-41f6-9b62-5d608b529342\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-cffh2" Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.750948 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/955cac3b-f6ea-4573-adc7-5271ead0cf37-machine-approver-tls\") pod \"machine-approver-56656f9798-4s2vr\" (UID: \"955cac3b-f6ea-4573-adc7-5271ead0cf37\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-4s2vr" Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.751345 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f5cbf703-ecdc-42d3-b313-f69ec71399fa-serving-cert\") pod \"apiserver-76f77b778f-4d9gw\" (UID: \"f5cbf703-ecdc-42d3-b313-f69ec71399fa\") " pod="openshift-apiserver/apiserver-76f77b778f-4d9gw" Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.751390 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: 
\"kubernetes.io/secret/f5cbf703-ecdc-42d3-b313-f69ec71399fa-encryption-config\") pod \"apiserver-76f77b778f-4d9gw\" (UID: \"f5cbf703-ecdc-42d3-b313-f69ec71399fa\") " pod="openshift-apiserver/apiserver-76f77b778f-4d9gw" Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.751425 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2b5b4c87-f6e3-4523-837d-2f45ad711489-serving-cert\") pod \"controller-manager-879f6c89f-tpnft\" (UID: \"2b5b4c87-f6e3-4523-837d-2f45ad711489\") " pod="openshift-controller-manager/controller-manager-879f6c89f-tpnft" Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.751810 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/f5cbf703-ecdc-42d3-b313-f69ec71399fa-etcd-client\") pod \"apiserver-76f77b778f-4d9gw\" (UID: \"f5cbf703-ecdc-42d3-b313-f69ec71399fa\") " pod="openshift-apiserver/apiserver-76f77b778f-4d9gw" Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.751872 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-nmc5h"] Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.754080 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-dk9rs"] Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.755162 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.762528 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-qm6j4"] Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.765648 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Mar 10 18:51:38 crc kubenswrapper[4861]: 
I0310 18:51:38.774915 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-md6zd"] Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.776464 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-md6zd" Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.787136 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.805153 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.823708 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/ccb02870-5d18-43c9-950d-042c52c092c3-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-864wb\" (UID: \"ccb02870-5d18-43c9-950d-042c52c092c3\") " pod="openshift-authentication/oauth-openshift-558db77b4-864wb" Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.823758 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dd17f745-4fce-4459-a068-94313e612723-config\") pod \"machine-api-operator-5694c8668f-gwxrl\" (UID: \"dd17f745-4fce-4459-a068-94313e612723\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-gwxrl" Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.823780 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c58d61da-f37b-4616-9289-adebfd1890ae-config\") pod \"console-operator-58897d9998-gsv8s\" (UID: \"c58d61da-f37b-4616-9289-adebfd1890ae\") " pod="openshift-console-operator/console-operator-58897d9998-gsv8s" Mar 10 18:51:38 
crc kubenswrapper[4861]: I0310 18:51:38.823800 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/ccb02870-5d18-43c9-950d-042c52c092c3-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-864wb\" (UID: \"ccb02870-5d18-43c9-950d-042c52c092c3\") " pod="openshift-authentication/oauth-openshift-558db77b4-864wb" Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.823819 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/ccb02870-5d18-43c9-950d-042c52c092c3-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-864wb\" (UID: \"ccb02870-5d18-43c9-950d-042c52c092c3\") " pod="openshift-authentication/oauth-openshift-558db77b4-864wb" Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.823836 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9896da3e-4505-4ced-b1e7-cd47a951971e-client-ca\") pod \"route-controller-manager-6576b87f9c-qm57d\" (UID: \"9896da3e-4505-4ced-b1e7-cd47a951971e\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-qm57d" Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.823854 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdgwl\" (UniqueName: \"kubernetes.io/projected/48290e46-c619-4673-86b7-bf96f136d693-kube-api-access-rdgwl\") pod \"apiserver-7bbb656c7d-5qqcg\" (UID: \"48290e46-c619-4673-86b7-bf96f136d693\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-5qqcg" Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.823878 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/754666ad-657a-4e98-8feb-47f74d33dc3c-trusted-ca\") pod 
\"ingress-operator-5b745b69d9-b4s99\" (UID: \"754666ad-657a-4e98-8feb-47f74d33dc3c\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-b4s99" Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.823895 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p9phb\" (UniqueName: \"kubernetes.io/projected/ed642e8b-2ae6-4db9-a035-0a568c32c47b-kube-api-access-p9phb\") pod \"openshift-controller-manager-operator-756b6f6bc6-mb9ln\" (UID: \"ed642e8b-2ae6-4db9-a035-0a568c32c47b\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-mb9ln" Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.823924 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/a8e67548-adc0-4c5c-8115-efff12bad9ae-available-featuregates\") pod \"openshift-config-operator-7777fb866f-jbxs4\" (UID: \"a8e67548-adc0-4c5c-8115-efff12bad9ae\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-jbxs4" Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.823938 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nj8j2\" (UniqueName: \"kubernetes.io/projected/a8e67548-adc0-4c5c-8115-efff12bad9ae-kube-api-access-nj8j2\") pod \"openshift-config-operator-7777fb866f-jbxs4\" (UID: \"a8e67548-adc0-4c5c-8115-efff12bad9ae\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-jbxs4" Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.823954 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fprcp\" (UniqueName: \"kubernetes.io/projected/7660cda2-b1cc-43a9-b2d2-8e39f7b2e5b1-kube-api-access-fprcp\") pod \"cluster-samples-operator-665b6dd947-mdswb\" (UID: \"7660cda2-b1cc-43a9-b2d2-8e39f7b2e5b1\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-mdswb" Mar 10 
18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.823969 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/ccb02870-5d18-43c9-950d-042c52c092c3-audit-policies\") pod \"oauth-openshift-558db77b4-864wb\" (UID: \"ccb02870-5d18-43c9-950d-042c52c092c3\") " pod="openshift-authentication/oauth-openshift-558db77b4-864wb" Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.823990 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/7660cda2-b1cc-43a9-b2d2-8e39f7b2e5b1-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-mdswb\" (UID: \"7660cda2-b1cc-43a9-b2d2-8e39f7b2e5b1\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-mdswb" Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.824007 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5tt92\" (UniqueName: \"kubernetes.io/projected/78f966e8-d4fe-4a97-a04e-9bf8b1b62d6c-kube-api-access-5tt92\") pod \"dns-operator-744455d44c-rqlh4\" (UID: \"78f966e8-d4fe-4a97-a04e-9bf8b1b62d6c\") " pod="openshift-dns-operator/dns-operator-744455d44c-rqlh4" Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.824022 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/ccb02870-5d18-43c9-950d-042c52c092c3-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-864wb\" (UID: \"ccb02870-5d18-43c9-950d-042c52c092c3\") " pod="openshift-authentication/oauth-openshift-558db77b4-864wb" Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.824039 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: 
\"kubernetes.io/secret/ccb02870-5d18-43c9-950d-042c52c092c3-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-864wb\" (UID: \"ccb02870-5d18-43c9-950d-042c52c092c3\") " pod="openshift-authentication/oauth-openshift-558db77b4-864wb" Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.824056 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/dd17f745-4fce-4459-a068-94313e612723-images\") pod \"machine-api-operator-5694c8668f-gwxrl\" (UID: \"dd17f745-4fce-4459-a068-94313e612723\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-gwxrl" Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.824072 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a8e67548-adc0-4c5c-8115-efff12bad9ae-serving-cert\") pod \"openshift-config-operator-7777fb866f-jbxs4\" (UID: \"a8e67548-adc0-4c5c-8115-efff12bad9ae\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-jbxs4" Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.824089 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/48290e46-c619-4673-86b7-bf96f136d693-audit-dir\") pod \"apiserver-7bbb656c7d-5qqcg\" (UID: \"48290e46-c619-4673-86b7-bf96f136d693\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-5qqcg" Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.824107 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cwdtx\" (UniqueName: \"kubernetes.io/projected/9896da3e-4505-4ced-b1e7-cd47a951971e-kube-api-access-cwdtx\") pod \"route-controller-manager-6576b87f9c-qm57d\" (UID: \"9896da3e-4505-4ced-b1e7-cd47a951971e\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-qm57d" Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.824136 
4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/ccb02870-5d18-43c9-950d-042c52c092c3-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-864wb\" (UID: \"ccb02870-5d18-43c9-950d-042c52c092c3\") " pod="openshift-authentication/oauth-openshift-558db77b4-864wb" Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.824153 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mmxj2\" (UniqueName: \"kubernetes.io/projected/dd17f745-4fce-4459-a068-94313e612723-kube-api-access-mmxj2\") pod \"machine-api-operator-5694c8668f-gwxrl\" (UID: \"dd17f745-4fce-4459-a068-94313e612723\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-gwxrl" Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.824170 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c58d61da-f37b-4616-9289-adebfd1890ae-trusted-ca\") pod \"console-operator-58897d9998-gsv8s\" (UID: \"c58d61da-f37b-4616-9289-adebfd1890ae\") " pod="openshift-console-operator/console-operator-58897d9998-gsv8s" Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.824187 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/ccb02870-5d18-43c9-950d-042c52c092c3-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-864wb\" (UID: \"ccb02870-5d18-43c9-950d-042c52c092c3\") " pod="openshift-authentication/oauth-openshift-558db77b4-864wb" Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.824203 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/3db8cb04-007c-48f9-a986-7a503ca1c077-service-ca\") pod \"console-f9d7485db-zg9g7\" (UID: \"3db8cb04-007c-48f9-a986-7a503ca1c077\") " 
pod="openshift-console/console-f9d7485db-zg9g7" Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.824225 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/48290e46-c619-4673-86b7-bf96f136d693-serving-cert\") pod \"apiserver-7bbb656c7d-5qqcg\" (UID: \"48290e46-c619-4673-86b7-bf96f136d693\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-5qqcg" Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.824240 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/78f966e8-d4fe-4a97-a04e-9bf8b1b62d6c-metrics-tls\") pod \"dns-operator-744455d44c-rqlh4\" (UID: \"78f966e8-d4fe-4a97-a04e-9bf8b1b62d6c\") " pod="openshift-dns-operator/dns-operator-744455d44c-rqlh4" Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.824255 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/ccb02870-5d18-43c9-950d-042c52c092c3-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-864wb\" (UID: \"ccb02870-5d18-43c9-950d-042c52c092c3\") " pod="openshift-authentication/oauth-openshift-558db77b4-864wb" Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.824271 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/3db8cb04-007c-48f9-a986-7a503ca1c077-oauth-serving-cert\") pod \"console-f9d7485db-zg9g7\" (UID: \"3db8cb04-007c-48f9-a986-7a503ca1c077\") " pod="openshift-console/console-f9d7485db-zg9g7" Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.824303 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/81f400f6-9912-4f4a-9d98-877c303a60ec-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-g2fnd\" (UID: 
\"81f400f6-9912-4f4a-9d98-877c303a60ec\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-g2fnd" Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.824319 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9rsgb\" (UniqueName: \"kubernetes.io/projected/754666ad-657a-4e98-8feb-47f74d33dc3c-kube-api-access-9rsgb\") pod \"ingress-operator-5b745b69d9-b4s99\" (UID: \"754666ad-657a-4e98-8feb-47f74d33dc3c\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-b4s99" Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.824343 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/48290e46-c619-4673-86b7-bf96f136d693-audit-policies\") pod \"apiserver-7bbb656c7d-5qqcg\" (UID: \"48290e46-c619-4673-86b7-bf96f136d693\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-5qqcg" Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.824358 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/48290e46-c619-4673-86b7-bf96f136d693-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-5qqcg\" (UID: \"48290e46-c619-4673-86b7-bf96f136d693\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-5qqcg" Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.824376 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/81f400f6-9912-4f4a-9d98-877c303a60ec-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-g2fnd\" (UID: \"81f400f6-9912-4f4a-9d98-877c303a60ec\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-g2fnd" Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.824392 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" 
(UniqueName: \"kubernetes.io/projected/81f400f6-9912-4f4a-9d98-877c303a60ec-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-g2fnd\" (UID: \"81f400f6-9912-4f4a-9d98-877c303a60ec\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-g2fnd" Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.824413 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/48290e46-c619-4673-86b7-bf96f136d693-encryption-config\") pod \"apiserver-7bbb656c7d-5qqcg\" (UID: \"48290e46-c619-4673-86b7-bf96f136d693\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-5qqcg" Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.824427 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/dd17f745-4fce-4459-a068-94313e612723-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-gwxrl\" (UID: \"dd17f745-4fce-4459-a068-94313e612723\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-gwxrl" Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.824443 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h9m8m\" (UniqueName: \"kubernetes.io/projected/c58d61da-f37b-4616-9289-adebfd1890ae-kube-api-access-h9m8m\") pod \"console-operator-58897d9998-gsv8s\" (UID: \"c58d61da-f37b-4616-9289-adebfd1890ae\") " pod="openshift-console-operator/console-operator-58897d9998-gsv8s" Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.824458 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/d4ea6578-d8bd-4dbc-baf4-4e6210419574-etcd-ca\") pod \"etcd-operator-b45778765-cqm4t\" (UID: \"d4ea6578-d8bd-4dbc-baf4-4e6210419574\") " pod="openshift-etcd-operator/etcd-operator-b45778765-cqm4t" Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 
18:51:38.824472 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/ccb02870-5d18-43c9-950d-042c52c092c3-audit-dir\") pod \"oauth-openshift-558db77b4-864wb\" (UID: \"ccb02870-5d18-43c9-950d-042c52c092c3\") " pod="openshift-authentication/oauth-openshift-558db77b4-864wb" Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.824488 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ccb02870-5d18-43c9-950d-042c52c092c3-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-864wb\" (UID: \"ccb02870-5d18-43c9-950d-042c52c092c3\") " pod="openshift-authentication/oauth-openshift-558db77b4-864wb" Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.824505 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-22mdp\" (UniqueName: \"kubernetes.io/projected/3db8cb04-007c-48f9-a986-7a503ca1c077-kube-api-access-22mdp\") pod \"console-f9d7485db-zg9g7\" (UID: \"3db8cb04-007c-48f9-a986-7a503ca1c077\") " pod="openshift-console/console-f9d7485db-zg9g7" Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.824527 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/d4ea6578-d8bd-4dbc-baf4-4e6210419574-etcd-client\") pod \"etcd-operator-b45778765-cqm4t\" (UID: \"d4ea6578-d8bd-4dbc-baf4-4e6210419574\") " pod="openshift-etcd-operator/etcd-operator-b45778765-cqm4t" Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.824541 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/ccb02870-5d18-43c9-950d-042c52c092c3-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-864wb\" (UID: \"ccb02870-5d18-43c9-950d-042c52c092c3\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-864wb" Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.824555 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9896da3e-4505-4ced-b1e7-cd47a951971e-config\") pod \"route-controller-manager-6576b87f9c-qm57d\" (UID: \"9896da3e-4505-4ced-b1e7-cd47a951971e\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-qm57d" Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.824570 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/754666ad-657a-4e98-8feb-47f74d33dc3c-metrics-tls\") pod \"ingress-operator-5b745b69d9-b4s99\" (UID: \"754666ad-657a-4e98-8feb-47f74d33dc3c\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-b4s99" Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.824585 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/3db8cb04-007c-48f9-a986-7a503ca1c077-console-oauth-config\") pod \"console-f9d7485db-zg9g7\" (UID: \"3db8cb04-007c-48f9-a986-7a503ca1c077\") " pod="openshift-console/console-f9d7485db-zg9g7" Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.824583 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c58d61da-f37b-4616-9289-adebfd1890ae-config\") pod \"console-operator-58897d9998-gsv8s\" (UID: \"c58d61da-f37b-4616-9289-adebfd1890ae\") " pod="openshift-console-operator/console-operator-58897d9998-gsv8s" Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.824602 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d4ea6578-d8bd-4dbc-baf4-4e6210419574-config\") pod \"etcd-operator-b45778765-cqm4t\" (UID: 
\"d4ea6578-d8bd-4dbc-baf4-4e6210419574\") " pod="openshift-etcd-operator/etcd-operator-b45778765-cqm4t" Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.824619 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/ccb02870-5d18-43c9-950d-042c52c092c3-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-864wb\" (UID: \"ccb02870-5d18-43c9-950d-042c52c092c3\") " pod="openshift-authentication/oauth-openshift-558db77b4-864wb" Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.824636 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/3db8cb04-007c-48f9-a986-7a503ca1c077-console-serving-cert\") pod \"console-f9d7485db-zg9g7\" (UID: \"3db8cb04-007c-48f9-a986-7a503ca1c077\") " pod="openshift-console/console-f9d7485db-zg9g7" Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.824653 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9896da3e-4505-4ced-b1e7-cd47a951971e-serving-cert\") pod \"route-controller-manager-6576b87f9c-qm57d\" (UID: \"9896da3e-4505-4ced-b1e7-cd47a951971e\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-qm57d" Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.824669 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ed642e8b-2ae6-4db9-a035-0a568c32c47b-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-mb9ln\" (UID: \"ed642e8b-2ae6-4db9-a035-0a568c32c47b\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-mb9ln" Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.824685 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" 
(UniqueName: \"kubernetes.io/configmap/ed642e8b-2ae6-4db9-a035-0a568c32c47b-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-mb9ln\" (UID: \"ed642e8b-2ae6-4db9-a035-0a568c32c47b\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-mb9ln" Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.824711 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c58d61da-f37b-4616-9289-adebfd1890ae-serving-cert\") pod \"console-operator-58897d9998-gsv8s\" (UID: \"c58d61da-f37b-4616-9289-adebfd1890ae\") " pod="openshift-console-operator/console-operator-58897d9998-gsv8s" Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.824748 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3db8cb04-007c-48f9-a986-7a503ca1c077-trusted-ca-bundle\") pod \"console-f9d7485db-zg9g7\" (UID: \"3db8cb04-007c-48f9-a986-7a503ca1c077\") " pod="openshift-console/console-f9d7485db-zg9g7" Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.824765 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/48290e46-c619-4673-86b7-bf96f136d693-etcd-client\") pod \"apiserver-7bbb656c7d-5qqcg\" (UID: \"48290e46-c619-4673-86b7-bf96f136d693\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-5qqcg" Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.824781 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/d4ea6578-d8bd-4dbc-baf4-4e6210419574-etcd-service-ca\") pod \"etcd-operator-b45778765-cqm4t\" (UID: \"d4ea6578-d8bd-4dbc-baf4-4e6210419574\") " pod="openshift-etcd-operator/etcd-operator-b45778765-cqm4t" Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.824785 4861 reflector.go:368] Caches 
populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.824798 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/3db8cb04-007c-48f9-a986-7a503ca1c077-console-config\") pod \"console-f9d7485db-zg9g7\" (UID: \"3db8cb04-007c-48f9-a986-7a503ca1c077\") " pod="openshift-console/console-f9d7485db-zg9g7" Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.825091 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/a8e67548-adc0-4c5c-8115-efff12bad9ae-available-featuregates\") pod \"openshift-config-operator-7777fb866f-jbxs4\" (UID: \"a8e67548-adc0-4c5c-8115-efff12bad9ae\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-jbxs4" Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.825112 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d4ea6578-d8bd-4dbc-baf4-4e6210419574-serving-cert\") pod \"etcd-operator-b45778765-cqm4t\" (UID: \"d4ea6578-d8bd-4dbc-baf4-4e6210419574\") " pod="openshift-etcd-operator/etcd-operator-b45778765-cqm4t" Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.825158 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rgp8h\" (UniqueName: \"kubernetes.io/projected/d4ea6578-d8bd-4dbc-baf4-4e6210419574-kube-api-access-rgp8h\") pod \"etcd-operator-b45778765-cqm4t\" (UID: \"d4ea6578-d8bd-4dbc-baf4-4e6210419574\") " pod="openshift-etcd-operator/etcd-operator-b45778765-cqm4t" Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.825197 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/754666ad-657a-4e98-8feb-47f74d33dc3c-bound-sa-token\") pod 
\"ingress-operator-5b745b69d9-b4s99\" (UID: \"754666ad-657a-4e98-8feb-47f74d33dc3c\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-b4s99" Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.825235 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k2zz2\" (UniqueName: \"kubernetes.io/projected/ccb02870-5d18-43c9-950d-042c52c092c3-kube-api-access-k2zz2\") pod \"oauth-openshift-558db77b4-864wb\" (UID: \"ccb02870-5d18-43c9-950d-042c52c092c3\") " pod="openshift-authentication/oauth-openshift-558db77b4-864wb" Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.825271 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wh48n\" (UniqueName: \"kubernetes.io/projected/81f400f6-9912-4f4a-9d98-877c303a60ec-kube-api-access-wh48n\") pod \"cluster-image-registry-operator-dc59b4c8b-g2fnd\" (UID: \"81f400f6-9912-4f4a-9d98-877c303a60ec\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-g2fnd" Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.825318 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/48290e46-c619-4673-86b7-bf96f136d693-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-5qqcg\" (UID: \"48290e46-c619-4673-86b7-bf96f136d693\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-5qqcg" Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.825403 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/ccb02870-5d18-43c9-950d-042c52c092c3-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-864wb\" (UID: \"ccb02870-5d18-43c9-950d-042c52c092c3\") " pod="openshift-authentication/oauth-openshift-558db77b4-864wb" Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.825529 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/3db8cb04-007c-48f9-a986-7a503ca1c077-console-config\") pod \"console-f9d7485db-zg9g7\" (UID: \"3db8cb04-007c-48f9-a986-7a503ca1c077\") " pod="openshift-console/console-f9d7485db-zg9g7" Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.825808 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c58d61da-f37b-4616-9289-adebfd1890ae-trusted-ca\") pod \"console-operator-58897d9998-gsv8s\" (UID: \"c58d61da-f37b-4616-9289-adebfd1890ae\") " pod="openshift-console-operator/console-operator-58897d9998-gsv8s" Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.826489 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/48290e46-c619-4673-86b7-bf96f136d693-audit-dir\") pod \"apiserver-7bbb656c7d-5qqcg\" (UID: \"48290e46-c619-4673-86b7-bf96f136d693\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-5qqcg" Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.826497 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/ccb02870-5d18-43c9-950d-042c52c092c3-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-864wb\" (UID: \"ccb02870-5d18-43c9-950d-042c52c092c3\") " pod="openshift-authentication/oauth-openshift-558db77b4-864wb" Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.827011 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/48290e46-c619-4673-86b7-bf96f136d693-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-5qqcg\" (UID: \"48290e46-c619-4673-86b7-bf96f136d693\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-5qqcg" Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.827262 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"audit-dir\" (UniqueName: \"kubernetes.io/host-path/ccb02870-5d18-43c9-950d-042c52c092c3-audit-dir\") pod \"oauth-openshift-558db77b4-864wb\" (UID: \"ccb02870-5d18-43c9-950d-042c52c092c3\") " pod="openshift-authentication/oauth-openshift-558db77b4-864wb" Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.827386 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/ccb02870-5d18-43c9-950d-042c52c092c3-audit-policies\") pod \"oauth-openshift-558db77b4-864wb\" (UID: \"ccb02870-5d18-43c9-950d-042c52c092c3\") " pod="openshift-authentication/oauth-openshift-558db77b4-864wb" Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.827472 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9896da3e-4505-4ced-b1e7-cd47a951971e-config\") pod \"route-controller-manager-6576b87f9c-qm57d\" (UID: \"9896da3e-4505-4ced-b1e7-cd47a951971e\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-qm57d" Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.827599 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/dd17f745-4fce-4459-a068-94313e612723-images\") pod \"machine-api-operator-5694c8668f-gwxrl\" (UID: \"dd17f745-4fce-4459-a068-94313e612723\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-gwxrl" Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.827911 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/d4ea6578-d8bd-4dbc-baf4-4e6210419574-etcd-ca\") pod \"etcd-operator-b45778765-cqm4t\" (UID: \"d4ea6578-d8bd-4dbc-baf4-4e6210419574\") " pod="openshift-etcd-operator/etcd-operator-b45778765-cqm4t" Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.828130 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" 
(UniqueName: \"kubernetes.io/configmap/d4ea6578-d8bd-4dbc-baf4-4e6210419574-config\") pod \"etcd-operator-b45778765-cqm4t\" (UID: \"d4ea6578-d8bd-4dbc-baf4-4e6210419574\") " pod="openshift-etcd-operator/etcd-operator-b45778765-cqm4t" Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.826333 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/ccb02870-5d18-43c9-950d-042c52c092c3-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-864wb\" (UID: \"ccb02870-5d18-43c9-950d-042c52c092c3\") " pod="openshift-authentication/oauth-openshift-558db77b4-864wb" Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.829118 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3db8cb04-007c-48f9-a986-7a503ca1c077-trusted-ca-bundle\") pod \"console-f9d7485db-zg9g7\" (UID: \"3db8cb04-007c-48f9-a986-7a503ca1c077\") " pod="openshift-console/console-f9d7485db-zg9g7" Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.829140 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/3db8cb04-007c-48f9-a986-7a503ca1c077-service-ca\") pod \"console-f9d7485db-zg9g7\" (UID: \"3db8cb04-007c-48f9-a986-7a503ca1c077\") " pod="openshift-console/console-f9d7485db-zg9g7" Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.829348 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ccb02870-5d18-43c9-950d-042c52c092c3-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-864wb\" (UID: \"ccb02870-5d18-43c9-950d-042c52c092c3\") " pod="openshift-authentication/oauth-openshift-558db77b4-864wb" Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.829651 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/81f400f6-9912-4f4a-9d98-877c303a60ec-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-g2fnd\" (UID: \"81f400f6-9912-4f4a-9d98-877c303a60ec\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-g2fnd" Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.829702 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a8e67548-adc0-4c5c-8115-efff12bad9ae-serving-cert\") pod \"openshift-config-operator-7777fb866f-jbxs4\" (UID: \"a8e67548-adc0-4c5c-8115-efff12bad9ae\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-jbxs4" Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.829971 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ed642e8b-2ae6-4db9-a035-0a568c32c47b-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-mb9ln\" (UID: \"ed642e8b-2ae6-4db9-a035-0a568c32c47b\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-mb9ln" Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.830023 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/ccb02870-5d18-43c9-950d-042c52c092c3-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-864wb\" (UID: \"ccb02870-5d18-43c9-950d-042c52c092c3\") " pod="openshift-authentication/oauth-openshift-558db77b4-864wb" Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.830040 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/3db8cb04-007c-48f9-a986-7a503ca1c077-oauth-serving-cert\") pod \"console-f9d7485db-zg9g7\" (UID: \"3db8cb04-007c-48f9-a986-7a503ca1c077\") " pod="openshift-console/console-f9d7485db-zg9g7" Mar 10 18:51:38 crc 
kubenswrapper[4861]: I0310 18:51:38.830166 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dd17f745-4fce-4459-a068-94313e612723-config\") pod \"machine-api-operator-5694c8668f-gwxrl\" (UID: \"dd17f745-4fce-4459-a068-94313e612723\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-gwxrl" Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.830533 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/d4ea6578-d8bd-4dbc-baf4-4e6210419574-etcd-service-ca\") pod \"etcd-operator-b45778765-cqm4t\" (UID: \"d4ea6578-d8bd-4dbc-baf4-4e6210419574\") " pod="openshift-etcd-operator/etcd-operator-b45778765-cqm4t" Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.830982 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/78f966e8-d4fe-4a97-a04e-9bf8b1b62d6c-metrics-tls\") pod \"dns-operator-744455d44c-rqlh4\" (UID: \"78f966e8-d4fe-4a97-a04e-9bf8b1b62d6c\") " pod="openshift-dns-operator/dns-operator-744455d44c-rqlh4" Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.831373 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/48290e46-c619-4673-86b7-bf96f136d693-audit-policies\") pod \"apiserver-7bbb656c7d-5qqcg\" (UID: \"48290e46-c619-4673-86b7-bf96f136d693\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-5qqcg" Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.831631 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/48290e46-c619-4673-86b7-bf96f136d693-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-5qqcg\" (UID: \"48290e46-c619-4673-86b7-bf96f136d693\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-5qqcg" Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 
18:51:38.832079 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/ccb02870-5d18-43c9-950d-042c52c092c3-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-864wb\" (UID: \"ccb02870-5d18-43c9-950d-042c52c092c3\") " pod="openshift-authentication/oauth-openshift-558db77b4-864wb" Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.833828 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/ccb02870-5d18-43c9-950d-042c52c092c3-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-864wb\" (UID: \"ccb02870-5d18-43c9-950d-042c52c092c3\") " pod="openshift-authentication/oauth-openshift-558db77b4-864wb" Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.833930 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d4ea6578-d8bd-4dbc-baf4-4e6210419574-serving-cert\") pod \"etcd-operator-b45778765-cqm4t\" (UID: \"d4ea6578-d8bd-4dbc-baf4-4e6210419574\") " pod="openshift-etcd-operator/etcd-operator-b45778765-cqm4t" Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.834117 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/48290e46-c619-4673-86b7-bf96f136d693-encryption-config\") pod \"apiserver-7bbb656c7d-5qqcg\" (UID: \"48290e46-c619-4673-86b7-bf96f136d693\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-5qqcg" Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.834147 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/ccb02870-5d18-43c9-950d-042c52c092c3-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-864wb\" (UID: 
\"ccb02870-5d18-43c9-950d-042c52c092c3\") " pod="openshift-authentication/oauth-openshift-558db77b4-864wb" Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.834231 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/d4ea6578-d8bd-4dbc-baf4-4e6210419574-etcd-client\") pod \"etcd-operator-b45778765-cqm4t\" (UID: \"d4ea6578-d8bd-4dbc-baf4-4e6210419574\") " pod="openshift-etcd-operator/etcd-operator-b45778765-cqm4t" Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.834266 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/ccb02870-5d18-43c9-950d-042c52c092c3-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-864wb\" (UID: \"ccb02870-5d18-43c9-950d-042c52c092c3\") " pod="openshift-authentication/oauth-openshift-558db77b4-864wb" Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.834277 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/3db8cb04-007c-48f9-a986-7a503ca1c077-console-oauth-config\") pod \"console-f9d7485db-zg9g7\" (UID: \"3db8cb04-007c-48f9-a986-7a503ca1c077\") " pod="openshift-console/console-f9d7485db-zg9g7" Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.834342 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/48290e46-c619-4673-86b7-bf96f136d693-serving-cert\") pod \"apiserver-7bbb656c7d-5qqcg\" (UID: \"48290e46-c619-4673-86b7-bf96f136d693\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-5qqcg" Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.834370 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/7660cda2-b1cc-43a9-b2d2-8e39f7b2e5b1-samples-operator-tls\") pod 
\"cluster-samples-operator-665b6dd947-mdswb\" (UID: \"7660cda2-b1cc-43a9-b2d2-8e39f7b2e5b1\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-mdswb" Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.834470 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/ccb02870-5d18-43c9-950d-042c52c092c3-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-864wb\" (UID: \"ccb02870-5d18-43c9-950d-042c52c092c3\") " pod="openshift-authentication/oauth-openshift-558db77b4-864wb" Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.834547 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9896da3e-4505-4ced-b1e7-cd47a951971e-serving-cert\") pod \"route-controller-manager-6576b87f9c-qm57d\" (UID: \"9896da3e-4505-4ced-b1e7-cd47a951971e\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-qm57d" Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.834599 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/ccb02870-5d18-43c9-950d-042c52c092c3-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-864wb\" (UID: \"ccb02870-5d18-43c9-950d-042c52c092c3\") " pod="openshift-authentication/oauth-openshift-558db77b4-864wb" Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.835689 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c58d61da-f37b-4616-9289-adebfd1890ae-serving-cert\") pod \"console-operator-58897d9998-gsv8s\" (UID: \"c58d61da-f37b-4616-9289-adebfd1890ae\") " pod="openshift-console-operator/console-operator-58897d9998-gsv8s" Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.836909 4861 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ed642e8b-2ae6-4db9-a035-0a568c32c47b-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-mb9ln\" (UID: \"ed642e8b-2ae6-4db9-a035-0a568c32c47b\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-mb9ln" Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.836915 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/81f400f6-9912-4f4a-9d98-877c303a60ec-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-g2fnd\" (UID: \"81f400f6-9912-4f4a-9d98-877c303a60ec\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-g2fnd" Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.837088 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/48290e46-c619-4673-86b7-bf96f136d693-etcd-client\") pod \"apiserver-7bbb656c7d-5qqcg\" (UID: \"48290e46-c619-4673-86b7-bf96f136d693\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-5qqcg" Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.837100 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/3db8cb04-007c-48f9-a986-7a503ca1c077-console-serving-cert\") pod \"console-f9d7485db-zg9g7\" (UID: \"3db8cb04-007c-48f9-a986-7a503ca1c077\") " pod="openshift-console/console-f9d7485db-zg9g7" Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.837593 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/dd17f745-4fce-4459-a068-94313e612723-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-gwxrl\" (UID: \"dd17f745-4fce-4459-a068-94313e612723\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-gwxrl" Mar 10 
18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.845749 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.865291 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.866294 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9896da3e-4505-4ced-b1e7-cd47a951971e-client-ca\") pod \"route-controller-manager-6576b87f9c-qm57d\" (UID: \"9896da3e-4505-4ced-b1e7-cd47a951971e\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-qm57d" Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.885915 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.910817 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.917162 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/754666ad-657a-4e98-8feb-47f74d33dc3c-trusted-ca\") pod \"ingress-operator-5b745b69d9-b4s99\" (UID: \"754666ad-657a-4e98-8feb-47f74d33dc3c\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-b4s99" Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.925920 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.945095 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.951138 4861 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/754666ad-657a-4e98-8feb-47f74d33dc3c-metrics-tls\") pod \"ingress-operator-5b745b69d9-b4s99\" (UID: \"754666ad-657a-4e98-8feb-47f74d33dc3c\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-b4s99" Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.958236 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2rvxn" Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.965776 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Mar 10 18:51:38 crc kubenswrapper[4861]: I0310 18:51:38.984570 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Mar 10 18:51:39 crc kubenswrapper[4861]: I0310 18:51:39.025210 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Mar 10 18:51:39 crc kubenswrapper[4861]: I0310 18:51:39.045327 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Mar 10 18:51:39 crc kubenswrapper[4861]: I0310 18:51:39.065114 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Mar 10 18:51:39 crc kubenswrapper[4861]: I0310 18:51:39.085059 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Mar 10 18:51:39 crc kubenswrapper[4861]: I0310 18:51:39.104777 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Mar 10 18:51:39 crc kubenswrapper[4861]: I0310 18:51:39.125688 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Mar 10 18:51:39 crc 
kubenswrapper[4861]: I0310 18:51:39.145195 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Mar 10 18:51:39 crc kubenswrapper[4861]: I0310 18:51:39.165846 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Mar 10 18:51:39 crc kubenswrapper[4861]: I0310 18:51:39.185673 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Mar 10 18:51:39 crc kubenswrapper[4861]: I0310 18:51:39.204900 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Mar 10 18:51:39 crc kubenswrapper[4861]: I0310 18:51:39.226375 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Mar 10 18:51:39 crc kubenswrapper[4861]: I0310 18:51:39.245171 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Mar 10 18:51:39 crc kubenswrapper[4861]: I0310 18:51:39.265318 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Mar 10 18:51:39 crc kubenswrapper[4861]: I0310 18:51:39.285625 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Mar 10 18:51:39 crc kubenswrapper[4861]: I0310 18:51:39.305393 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Mar 10 18:51:39 crc kubenswrapper[4861]: I0310 18:51:39.325420 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Mar 10 18:51:39 crc kubenswrapper[4861]: I0310 18:51:39.346372 4861 reflector.go:368] Caches 
populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Mar 10 18:51:39 crc kubenswrapper[4861]: I0310 18:51:39.376222 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Mar 10 18:51:39 crc kubenswrapper[4861]: I0310 18:51:39.386333 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Mar 10 18:51:39 crc kubenswrapper[4861]: I0310 18:51:39.406080 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Mar 10 18:51:39 crc kubenswrapper[4861]: I0310 18:51:39.425421 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Mar 10 18:51:39 crc kubenswrapper[4861]: I0310 18:51:39.445812 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Mar 10 18:51:39 crc kubenswrapper[4861]: I0310 18:51:39.465399 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Mar 10 18:51:39 crc kubenswrapper[4861]: I0310 18:51:39.485187 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Mar 10 18:51:39 crc kubenswrapper[4861]: I0310 18:51:39.505751 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Mar 10 18:51:39 crc kubenswrapper[4861]: I0310 18:51:39.525595 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Mar 10 18:51:39 crc kubenswrapper[4861]: I0310 18:51:39.545169 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Mar 10 18:51:39 crc kubenswrapper[4861]: I0310 18:51:39.565903 4861 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Mar 10 18:51:39 crc kubenswrapper[4861]: I0310 18:51:39.585029 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Mar 10 18:51:39 crc kubenswrapper[4861]: I0310 18:51:39.605661 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Mar 10 18:51:39 crc kubenswrapper[4861]: I0310 18:51:39.624370 4861 request.go:700] Waited for 1.000224258s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0 Mar 10 18:51:39 crc kubenswrapper[4861]: I0310 18:51:39.626254 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Mar 10 18:51:39 crc kubenswrapper[4861]: I0310 18:51:39.645960 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Mar 10 18:51:39 crc kubenswrapper[4861]: I0310 18:51:39.666323 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Mar 10 18:51:39 crc kubenswrapper[4861]: I0310 18:51:39.685368 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Mar 10 18:51:39 crc kubenswrapper[4861]: I0310 18:51:39.706395 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Mar 10 18:51:39 crc kubenswrapper[4861]: E0310 18:51:39.716294 4861 configmap.go:193] Couldn't get configMap 
openshift-cluster-machine-approver/machine-approver-config: failed to sync configmap cache: timed out waiting for the condition Mar 10 18:51:39 crc kubenswrapper[4861]: E0310 18:51:39.716407 4861 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/955cac3b-f6ea-4573-adc7-5271ead0cf37-config podName:955cac3b-f6ea-4573-adc7-5271ead0cf37 nodeName:}" failed. No retries permitted until 2026-03-10 18:51:40.216377425 +0000 UTC m=+243.979813425 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/955cac3b-f6ea-4573-adc7-5271ead0cf37-config") pod "machine-approver-56656f9798-4s2vr" (UID: "955cac3b-f6ea-4573-adc7-5271ead0cf37") : failed to sync configmap cache: timed out waiting for the condition Mar 10 18:51:39 crc kubenswrapper[4861]: I0310 18:51:39.725518 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Mar 10 18:51:39 crc kubenswrapper[4861]: I0310 18:51:39.747526 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Mar 10 18:51:39 crc kubenswrapper[4861]: I0310 18:51:39.765630 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Mar 10 18:51:39 crc kubenswrapper[4861]: I0310 18:51:39.785577 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Mar 10 18:51:39 crc kubenswrapper[4861]: I0310 18:51:39.805411 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Mar 10 18:51:39 crc kubenswrapper[4861]: I0310 18:51:39.826123 4861 reflector.go:368] Caches populated for *v1.ConfigMap 
from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Mar 10 18:51:39 crc kubenswrapper[4861]: I0310 18:51:39.868323 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 10 18:51:39 crc kubenswrapper[4861]: I0310 18:51:39.885333 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 10 18:51:39 crc kubenswrapper[4861]: I0310 18:51:39.905415 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Mar 10 18:51:39 crc kubenswrapper[4861]: I0310 18:51:39.925985 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Mar 10 18:51:39 crc kubenswrapper[4861]: I0310 18:51:39.945334 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Mar 10 18:51:39 crc kubenswrapper[4861]: I0310 18:51:39.965854 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Mar 10 18:51:39 crc kubenswrapper[4861]: I0310 18:51:39.985222 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Mar 10 18:51:40 crc kubenswrapper[4861]: I0310 18:51:40.005474 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Mar 10 18:51:40 crc kubenswrapper[4861]: I0310 18:51:40.026835 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Mar 10 18:51:40 crc kubenswrapper[4861]: I0310 18:51:40.046624 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Mar 10 18:51:40 
crc kubenswrapper[4861]: I0310 18:51:40.065560 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Mar 10 18:51:40 crc kubenswrapper[4861]: I0310 18:51:40.085791 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Mar 10 18:51:40 crc kubenswrapper[4861]: I0310 18:51:40.105396 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Mar 10 18:51:40 crc kubenswrapper[4861]: I0310 18:51:40.126346 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Mar 10 18:51:40 crc kubenswrapper[4861]: I0310 18:51:40.145981 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Mar 10 18:51:40 crc kubenswrapper[4861]: I0310 18:51:40.166502 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Mar 10 18:51:40 crc kubenswrapper[4861]: I0310 18:51:40.185546 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Mar 10 18:51:40 crc kubenswrapper[4861]: I0310 18:51:40.205757 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Mar 10 18:51:40 crc kubenswrapper[4861]: I0310 18:51:40.226330 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Mar 10 18:51:40 crc kubenswrapper[4861]: I0310 18:51:40.245331 4861 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-multus"/"multus-admission-controller-secret" Mar 10 18:51:40 crc kubenswrapper[4861]: I0310 18:51:40.245987 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/955cac3b-f6ea-4573-adc7-5271ead0cf37-config\") pod \"machine-approver-56656f9798-4s2vr\" (UID: \"955cac3b-f6ea-4573-adc7-5271ead0cf37\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-4s2vr" Mar 10 18:51:40 crc kubenswrapper[4861]: I0310 18:51:40.266535 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Mar 10 18:51:40 crc kubenswrapper[4861]: I0310 18:51:40.285683 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Mar 10 18:51:40 crc kubenswrapper[4861]: I0310 18:51:40.306478 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Mar 10 18:51:40 crc kubenswrapper[4861]: I0310 18:51:40.325748 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Mar 10 18:51:40 crc kubenswrapper[4861]: I0310 18:51:40.345793 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Mar 10 18:51:40 crc kubenswrapper[4861]: I0310 18:51:40.366375 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Mar 10 18:51:40 crc kubenswrapper[4861]: I0310 18:51:40.413433 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vj67h\" (UniqueName: \"kubernetes.io/projected/c687b8f8-b83c-492e-833b-e3fecb23fc93-kube-api-access-vj67h\") pod \"openshift-apiserver-operator-796bbdcf4f-9pckl\" (UID: \"c687b8f8-b83c-492e-833b-e3fecb23fc93\") " 
pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-9pckl" Mar 10 18:51:40 crc kubenswrapper[4861]: I0310 18:51:40.434873 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5jcs7\" (UniqueName: \"kubernetes.io/projected/9a07572a-a5d5-41f6-9b62-5d608b529342-kube-api-access-5jcs7\") pod \"authentication-operator-69f744f599-cffh2\" (UID: \"9a07572a-a5d5-41f6-9b62-5d608b529342\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-cffh2" Mar 10 18:51:40 crc kubenswrapper[4861]: I0310 18:51:40.448644 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Mar 10 18:51:40 crc kubenswrapper[4861]: I0310 18:51:40.454022 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pgvkk\" (UniqueName: \"kubernetes.io/projected/2b5b4c87-f6e3-4523-837d-2f45ad711489-kube-api-access-pgvkk\") pod \"controller-manager-879f6c89f-tpnft\" (UID: \"2b5b4c87-f6e3-4523-837d-2f45ad711489\") " pod="openshift-controller-manager/controller-manager-879f6c89f-tpnft" Mar 10 18:51:40 crc kubenswrapper[4861]: I0310 18:51:40.468498 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Mar 10 18:51:40 crc kubenswrapper[4861]: I0310 18:51:40.485851 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Mar 10 18:51:40 crc kubenswrapper[4861]: I0310 18:51:40.506649 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Mar 10 18:51:40 crc kubenswrapper[4861]: I0310 18:51:40.526585 4861 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Mar 10 18:51:40 crc kubenswrapper[4861]: I0310 18:51:40.546816 4861 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"hostpath-provisioner"/"kube-root-ca.crt" Mar 10 18:51:40 crc kubenswrapper[4861]: I0310 18:51:40.585927 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-9pckl" Mar 10 18:51:40 crc kubenswrapper[4861]: I0310 18:51:40.588669 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dbhtr\" (UniqueName: \"kubernetes.io/projected/955cac3b-f6ea-4573-adc7-5271ead0cf37-kube-api-access-dbhtr\") pod \"machine-approver-56656f9798-4s2vr\" (UID: \"955cac3b-f6ea-4573-adc7-5271ead0cf37\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-4s2vr" Mar 10 18:51:40 crc kubenswrapper[4861]: I0310 18:51:40.606382 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Mar 10 18:51:40 crc kubenswrapper[4861]: I0310 18:51:40.617197 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2ndsl\" (UniqueName: \"kubernetes.io/projected/f5cbf703-ecdc-42d3-b313-f69ec71399fa-kube-api-access-2ndsl\") pod \"apiserver-76f77b778f-4d9gw\" (UID: \"f5cbf703-ecdc-42d3-b313-f69ec71399fa\") " pod="openshift-apiserver/apiserver-76f77b778f-4d9gw" Mar 10 18:51:40 crc kubenswrapper[4861]: I0310 18:51:40.618069 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-4d9gw" Mar 10 18:51:40 crc kubenswrapper[4861]: I0310 18:51:40.627142 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Mar 10 18:51:40 crc kubenswrapper[4861]: I0310 18:51:40.644181 4861 request.go:700] Waited for 1.867448026s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/secrets?fieldSelector=metadata.name%3Dmachine-config-server-tls&limit=500&resourceVersion=0 Mar 10 18:51:40 crc kubenswrapper[4861]: I0310 18:51:40.648200 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Mar 10 18:51:40 crc kubenswrapper[4861]: I0310 18:51:40.662871 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-cffh2" Mar 10 18:51:40 crc kubenswrapper[4861]: I0310 18:51:40.680033 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-tpnft" Mar 10 18:51:40 crc kubenswrapper[4861]: I0310 18:51:40.695970 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5tt92\" (UniqueName: \"kubernetes.io/projected/78f966e8-d4fe-4a97-a04e-9bf8b1b62d6c-kube-api-access-5tt92\") pod \"dns-operator-744455d44c-rqlh4\" (UID: \"78f966e8-d4fe-4a97-a04e-9bf8b1b62d6c\") " pod="openshift-dns-operator/dns-operator-744455d44c-rqlh4" Mar 10 18:51:40 crc kubenswrapper[4861]: I0310 18:51:40.732524 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9rsgb\" (UniqueName: \"kubernetes.io/projected/754666ad-657a-4e98-8feb-47f74d33dc3c-kube-api-access-9rsgb\") pod \"ingress-operator-5b745b69d9-b4s99\" (UID: \"754666ad-657a-4e98-8feb-47f74d33dc3c\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-b4s99" Mar 10 18:51:40 crc kubenswrapper[4861]: I0310 18:51:40.749807 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mmxj2\" (UniqueName: \"kubernetes.io/projected/dd17f745-4fce-4459-a068-94313e612723-kube-api-access-mmxj2\") pod \"machine-api-operator-5694c8668f-gwxrl\" (UID: \"dd17f745-4fce-4459-a068-94313e612723\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-gwxrl" Mar 10 18:51:40 crc kubenswrapper[4861]: I0310 18:51:40.753643 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdgwl\" (UniqueName: \"kubernetes.io/projected/48290e46-c619-4673-86b7-bf96f136d693-kube-api-access-rdgwl\") pod \"apiserver-7bbb656c7d-5qqcg\" (UID: \"48290e46-c619-4673-86b7-bf96f136d693\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-5qqcg" Mar 10 18:51:40 crc kubenswrapper[4861]: I0310 18:51:40.760362 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-5qqcg" Mar 10 18:51:40 crc kubenswrapper[4861]: I0310 18:51:40.786358 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p9phb\" (UniqueName: \"kubernetes.io/projected/ed642e8b-2ae6-4db9-a035-0a568c32c47b-kube-api-access-p9phb\") pod \"openshift-controller-manager-operator-756b6f6bc6-mb9ln\" (UID: \"ed642e8b-2ae6-4db9-a035-0a568c32c47b\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-mb9ln" Mar 10 18:51:40 crc kubenswrapper[4861]: I0310 18:51:40.790575 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rgp8h\" (UniqueName: \"kubernetes.io/projected/d4ea6578-d8bd-4dbc-baf4-4e6210419574-kube-api-access-rgp8h\") pod \"etcd-operator-b45778765-cqm4t\" (UID: \"d4ea6578-d8bd-4dbc-baf4-4e6210419574\") " pod="openshift-etcd-operator/etcd-operator-b45778765-cqm4t" Mar 10 18:51:40 crc kubenswrapper[4861]: I0310 18:51:40.805077 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fprcp\" (UniqueName: \"kubernetes.io/projected/7660cda2-b1cc-43a9-b2d2-8e39f7b2e5b1-kube-api-access-fprcp\") pod \"cluster-samples-operator-665b6dd947-mdswb\" (UID: \"7660cda2-b1cc-43a9-b2d2-8e39f7b2e5b1\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-mdswb" Mar 10 18:51:40 crc kubenswrapper[4861]: I0310 18:51:40.824670 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nj8j2\" (UniqueName: \"kubernetes.io/projected/a8e67548-adc0-4c5c-8115-efff12bad9ae-kube-api-access-nj8j2\") pod \"openshift-config-operator-7777fb866f-jbxs4\" (UID: \"a8e67548-adc0-4c5c-8115-efff12bad9ae\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-jbxs4" Mar 10 18:51:40 crc kubenswrapper[4861]: I0310 18:51:40.844733 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-cwdtx\" (UniqueName: \"kubernetes.io/projected/9896da3e-4505-4ced-b1e7-cd47a951971e-kube-api-access-cwdtx\") pod \"route-controller-manager-6576b87f9c-qm57d\" (UID: \"9896da3e-4505-4ced-b1e7-cd47a951971e\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-qm57d" Mar 10 18:51:40 crc kubenswrapper[4861]: I0310 18:51:40.859136 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-cqm4t" Mar 10 18:51:40 crc kubenswrapper[4861]: I0310 18:51:40.862773 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/754666ad-657a-4e98-8feb-47f74d33dc3c-bound-sa-token\") pod \"ingress-operator-5b745b69d9-b4s99\" (UID: \"754666ad-657a-4e98-8feb-47f74d33dc3c\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-b4s99" Mar 10 18:51:40 crc kubenswrapper[4861]: I0310 18:51:40.876060 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-rqlh4" Mar 10 18:51:40 crc kubenswrapper[4861]: I0310 18:51:40.882212 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-gwxrl" Mar 10 18:51:40 crc kubenswrapper[4861]: I0310 18:51:40.884671 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k2zz2\" (UniqueName: \"kubernetes.io/projected/ccb02870-5d18-43c9-950d-042c52c092c3-kube-api-access-k2zz2\") pod \"oauth-openshift-558db77b4-864wb\" (UID: \"ccb02870-5d18-43c9-950d-042c52c092c3\") " pod="openshift-authentication/oauth-openshift-558db77b4-864wb" Mar 10 18:51:40 crc kubenswrapper[4861]: I0310 18:51:40.895405 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-qm57d" Mar 10 18:51:40 crc kubenswrapper[4861]: I0310 18:51:40.902982 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-b4s99" Mar 10 18:51:40 crc kubenswrapper[4861]: I0310 18:51:40.906622 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wh48n\" (UniqueName: \"kubernetes.io/projected/81f400f6-9912-4f4a-9d98-877c303a60ec-kube-api-access-wh48n\") pod \"cluster-image-registry-operator-dc59b4c8b-g2fnd\" (UID: \"81f400f6-9912-4f4a-9d98-877c303a60ec\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-g2fnd" Mar 10 18:51:40 crc kubenswrapper[4861]: I0310 18:51:40.925697 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h9m8m\" (UniqueName: \"kubernetes.io/projected/c58d61da-f37b-4616-9289-adebfd1890ae-kube-api-access-h9m8m\") pod \"console-operator-58897d9998-gsv8s\" (UID: \"c58d61da-f37b-4616-9289-adebfd1890ae\") " pod="openshift-console-operator/console-operator-58897d9998-gsv8s" Mar 10 18:51:40 crc kubenswrapper[4861]: I0310 18:51:40.960753 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-22mdp\" (UniqueName: \"kubernetes.io/projected/3db8cb04-007c-48f9-a986-7a503ca1c077-kube-api-access-22mdp\") pod \"console-f9d7485db-zg9g7\" (UID: \"3db8cb04-007c-48f9-a986-7a503ca1c077\") " pod="openshift-console/console-f9d7485db-zg9g7" Mar 10 18:51:40 crc kubenswrapper[4861]: I0310 18:51:40.966415 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Mar 10 18:51:40 crc kubenswrapper[4861]: I0310 18:51:40.967246 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/81f400f6-9912-4f4a-9d98-877c303a60ec-bound-sa-token\") 
pod \"cluster-image-registry-operator-dc59b4c8b-g2fnd\" (UID: \"81f400f6-9912-4f4a-9d98-877c303a60ec\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-g2fnd" Mar 10 18:51:40 crc kubenswrapper[4861]: I0310 18:51:40.967861 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-tpnft"] Mar 10 18:51:40 crc kubenswrapper[4861]: I0310 18:51:40.986635 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Mar 10 18:51:41 crc kubenswrapper[4861]: I0310 18:51:41.000544 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-864wb" Mar 10 18:51:41 crc kubenswrapper[4861]: I0310 18:51:41.012188 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-5qqcg"] Mar 10 18:51:41 crc kubenswrapper[4861]: I0310 18:51:41.012933 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-mdswb" Mar 10 18:51:41 crc kubenswrapper[4861]: I0310 18:51:41.025427 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Mar 10 18:51:41 crc kubenswrapper[4861]: I0310 18:51:41.027008 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/955cac3b-f6ea-4573-adc7-5271ead0cf37-config\") pod \"machine-approver-56656f9798-4s2vr\" (UID: \"955cac3b-f6ea-4573-adc7-5271ead0cf37\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-4s2vr" Mar 10 18:51:41 crc kubenswrapper[4861]: I0310 18:51:41.039898 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-mb9ln" Mar 10 18:51:41 crc kubenswrapper[4861]: I0310 18:51:41.060699 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/1c9f5f8c-64b8-4f10-999c-9cb2f24efef3-trusted-ca\") pod \"image-registry-697d97f7c8-j4q94\" (UID: \"1c9f5f8c-64b8-4f10-999c-9cb2f24efef3\") " pod="openshift-image-registry/image-registry-697d97f7c8-j4q94" Mar 10 18:51:41 crc kubenswrapper[4861]: I0310 18:51:41.060744 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/1c9f5f8c-64b8-4f10-999c-9cb2f24efef3-installation-pull-secrets\") pod \"image-registry-697d97f7c8-j4q94\" (UID: \"1c9f5f8c-64b8-4f10-999c-9cb2f24efef3\") " pod="openshift-image-registry/image-registry-697d97f7c8-j4q94" Mar 10 18:51:41 crc kubenswrapper[4861]: I0310 18:51:41.060808 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/1c9f5f8c-64b8-4f10-999c-9cb2f24efef3-registry-certificates\") pod \"image-registry-697d97f7c8-j4q94\" (UID: \"1c9f5f8c-64b8-4f10-999c-9cb2f24efef3\") " pod="openshift-image-registry/image-registry-697d97f7c8-j4q94" Mar 10 18:51:41 crc kubenswrapper[4861]: I0310 18:51:41.060827 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/1c9f5f8c-64b8-4f10-999c-9cb2f24efef3-registry-tls\") pod \"image-registry-697d97f7c8-j4q94\" (UID: \"1c9f5f8c-64b8-4f10-999c-9cb2f24efef3\") " pod="openshift-image-registry/image-registry-697d97f7c8-j4q94" Mar 10 18:51:41 crc kubenswrapper[4861]: I0310 18:51:41.060851 4861 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5p7g9\" (UniqueName: \"kubernetes.io/projected/1c9f5f8c-64b8-4f10-999c-9cb2f24efef3-kube-api-access-5p7g9\") pod \"image-registry-697d97f7c8-j4q94\" (UID: \"1c9f5f8c-64b8-4f10-999c-9cb2f24efef3\") " pod="openshift-image-registry/image-registry-697d97f7c8-j4q94" Mar 10 18:51:41 crc kubenswrapper[4861]: I0310 18:51:41.060872 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j4q94\" (UID: \"1c9f5f8c-64b8-4f10-999c-9cb2f24efef3\") " pod="openshift-image-registry/image-registry-697d97f7c8-j4q94" Mar 10 18:51:41 crc kubenswrapper[4861]: I0310 18:51:41.060904 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/1c9f5f8c-64b8-4f10-999c-9cb2f24efef3-ca-trust-extracted\") pod \"image-registry-697d97f7c8-j4q94\" (UID: \"1c9f5f8c-64b8-4f10-999c-9cb2f24efef3\") " pod="openshift-image-registry/image-registry-697d97f7c8-j4q94" Mar 10 18:51:41 crc kubenswrapper[4861]: I0310 18:51:41.060930 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vjjj9\" (UniqueName: \"kubernetes.io/projected/044ff649-6d8e-4b0b-bfaa-8018e00e105d-kube-api-access-vjjj9\") pod \"downloads-7954f5f757-qsq94\" (UID: \"044ff649-6d8e-4b0b-bfaa-8018e00e105d\") " pod="openshift-console/downloads-7954f5f757-qsq94" Mar 10 18:51:41 crc kubenswrapper[4861]: I0310 18:51:41.060970 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/1c9f5f8c-64b8-4f10-999c-9cb2f24efef3-bound-sa-token\") pod \"image-registry-697d97f7c8-j4q94\" (UID: 
\"1c9f5f8c-64b8-4f10-999c-9cb2f24efef3\") " pod="openshift-image-registry/image-registry-697d97f7c8-j4q94" Mar 10 18:51:41 crc kubenswrapper[4861]: E0310 18:51:41.062173 4861 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-10 18:51:41.562155398 +0000 UTC m=+245.325591358 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j4q94" (UID: "1c9f5f8c-64b8-4f10-999c-9cb2f24efef3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 10 18:51:41 crc kubenswrapper[4861]: W0310 18:51:41.070554 4861 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod48290e46_c619_4673_86b7_bf96f136d693.slice/crio-48379805d1291850c7b987bb7fe5e0d69caddb9463f5b7e2c2dac1f4d903fb28 WatchSource:0}: Error finding container 48379805d1291850c7b987bb7fe5e0d69caddb9463f5b7e2c2dac1f4d903fb28: Status 404 returned error can't find the container with id 48379805d1291850c7b987bb7fe5e0d69caddb9463f5b7e2c2dac1f4d903fb28 Mar 10 18:51:41 crc kubenswrapper[4861]: I0310 18:51:41.072327 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-gsv8s" Mar 10 18:51:41 crc kubenswrapper[4861]: I0310 18:51:41.079224 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-zg9g7" Mar 10 18:51:41 crc kubenswrapper[4861]: I0310 18:51:41.087589 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-jbxs4" Mar 10 18:51:41 crc kubenswrapper[4861]: I0310 18:51:41.089505 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-9pckl"] Mar 10 18:51:41 crc kubenswrapper[4861]: I0310 18:51:41.090540 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-4d9gw"] Mar 10 18:51:41 crc kubenswrapper[4861]: I0310 18:51:41.112579 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-g2fnd" Mar 10 18:51:41 crc kubenswrapper[4861]: I0310 18:51:41.119667 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-tpnft" event={"ID":"2b5b4c87-f6e3-4523-837d-2f45ad711489","Type":"ContainerStarted","Data":"0ead3673874a61bcc645b41a1aa80e929d586108a8d7736ba15b27807c410131"} Mar 10 18:51:41 crc kubenswrapper[4861]: I0310 18:51:41.138451 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-cffh2"] Mar 10 18:51:41 crc kubenswrapper[4861]: I0310 18:51:41.142605 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-5qqcg" event={"ID":"48290e46-c619-4673-86b7-bf96f136d693","Type":"ContainerStarted","Data":"48379805d1291850c7b987bb7fe5e0d69caddb9463f5b7e2c2dac1f4d903fb28"} Mar 10 18:51:41 crc kubenswrapper[4861]: I0310 18:51:41.153808 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-9pckl" event={"ID":"c687b8f8-b83c-492e-833b-e3fecb23fc93","Type":"ContainerStarted","Data":"018e5261e825e35f1bda20329d2a207ed73f2904e3d1741bb3a32b39c6b2eb1a"} Mar 10 18:51:41 crc kubenswrapper[4861]: I0310 18:51:41.162737 4861 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 10 18:51:41 crc kubenswrapper[4861]: I0310 18:51:41.163002 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/562c8783-06a5-4205-ab38-9aabc25c1033-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-x2llm\" (UID: \"562c8783-06a5-4205-ab38-9aabc25c1033\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-x2llm" Mar 10 18:51:41 crc kubenswrapper[4861]: E0310 18:51:41.163078 4861 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-10 18:51:41.663037623 +0000 UTC m=+245.426473583 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 10 18:51:41 crc kubenswrapper[4861]: I0310 18:51:41.163134 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b0b1942c-6cca-4d7f-8567-e5d340ee265f-profile-collector-cert\") pod \"olm-operator-6b444d44fb-2m858\" (UID: \"b0b1942c-6cca-4d7f-8567-e5d340ee265f\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-2m858" Mar 10 18:51:41 crc kubenswrapper[4861]: I0310 18:51:41.163200 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/cf512bc0-beb2-4784-ac61-7f7a22ccc3e9-default-certificate\") pod \"router-default-5444994796-rzhxp\" (UID: \"cf512bc0-beb2-4784-ac61-7f7a22ccc3e9\") " pod="openshift-ingress/router-default-5444994796-rzhxp" Mar 10 18:51:41 crc kubenswrapper[4861]: I0310 18:51:41.163217 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b0b1942c-6cca-4d7f-8567-e5d340ee265f-srv-cert\") pod \"olm-operator-6b444d44fb-2m858\" (UID: \"b0b1942c-6cca-4d7f-8567-e5d340ee265f\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-2m858" Mar 10 18:51:41 crc kubenswrapper[4861]: I0310 18:51:41.163260 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: 
\"kubernetes.io/secret/07ef9fc2-e757-48ca-b8aa-c97224434044-srv-cert\") pod \"catalog-operator-68c6474976-9khgp\" (UID: \"07ef9fc2-e757-48ca-b8aa-c97224434044\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-9khgp" Mar 10 18:51:41 crc kubenswrapper[4861]: I0310 18:51:41.163307 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7w7wl\" (UniqueName: \"kubernetes.io/projected/f7d1fb37-29a9-4548-9054-daa43301c56d-kube-api-access-7w7wl\") pod \"machine-config-server-md6zd\" (UID: \"f7d1fb37-29a9-4548-9054-daa43301c56d\") " pod="openshift-machine-config-operator/machine-config-server-md6zd" Mar 10 18:51:41 crc kubenswrapper[4861]: I0310 18:51:41.163328 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/b6781d90-ea76-44b0-b2eb-44641332f632-mountpoint-dir\") pod \"csi-hostpathplugin-dk9rs\" (UID: \"b6781d90-ea76-44b0-b2eb-44641332f632\") " pod="hostpath-provisioner/csi-hostpathplugin-dk9rs" Mar 10 18:51:41 crc kubenswrapper[4861]: I0310 18:51:41.163356 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/592a6d7a-6cf4-4a23-bc7d-2444b6387faa-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-9x9tv\" (UID: \"592a6d7a-6cf4-4a23-bc7d-2444b6387faa\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-9x9tv" Mar 10 18:51:41 crc kubenswrapper[4861]: I0310 18:51:41.163374 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/32e38d22-7f4c-4951-8eb2-befefca67916-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-525cw\" (UID: \"32e38d22-7f4c-4951-8eb2-befefca67916\") " 
pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-525cw" Mar 10 18:51:41 crc kubenswrapper[4861]: I0310 18:51:41.163409 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/f01993c1-5b0e-4dd2-a98d-8685e13b474c-proxy-tls\") pod \"machine-config-controller-84d6567774-8wcvl\" (UID: \"f01993c1-5b0e-4dd2-a98d-8685e13b474c\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-8wcvl" Mar 10 18:51:41 crc kubenswrapper[4861]: I0310 18:51:41.163425 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/562c8783-06a5-4205-ab38-9aabc25c1033-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-x2llm\" (UID: \"562c8783-06a5-4205-ab38-9aabc25c1033\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-x2llm" Mar 10 18:51:41 crc kubenswrapper[4861]: I0310 18:51:41.163443 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f1b1590-e261-4e1f-9427-039f5a9b3db7-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-7qnc2\" (UID: \"8f1b1590-e261-4e1f-9427-039f5a9b3db7\") " pod="openshift-marketplace/marketplace-operator-79b997595-7qnc2" Mar 10 18:51:41 crc kubenswrapper[4861]: I0310 18:51:41.163459 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/956b8171-6837-47bf-9f44-7f9cc43a1e30-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-dwxk9\" (UID: \"956b8171-6837-47bf-9f44-7f9cc43a1e30\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-dwxk9" Mar 10 18:51:41 crc kubenswrapper[4861]: 
I0310 18:51:41.163496 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5p7g9\" (UniqueName: \"kubernetes.io/projected/1c9f5f8c-64b8-4f10-999c-9cb2f24efef3-kube-api-access-5p7g9\") pod \"image-registry-697d97f7c8-j4q94\" (UID: \"1c9f5f8c-64b8-4f10-999c-9cb2f24efef3\") " pod="openshift-image-registry/image-registry-697d97f7c8-j4q94" Mar 10 18:51:41 crc kubenswrapper[4861]: I0310 18:51:41.163512 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jlrp2\" (UniqueName: \"kubernetes.io/projected/ea181c37-7166-4db5-92b4-9321f06c0323-kube-api-access-jlrp2\") pod \"service-ca-operator-777779d784-jbgkh\" (UID: \"ea181c37-7166-4db5-92b4-9321f06c0323\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-jbgkh" Mar 10 18:51:41 crc kubenswrapper[4861]: I0310 18:51:41.163532 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j4q94\" (UID: \"1c9f5f8c-64b8-4f10-999c-9cb2f24efef3\") " pod="openshift-image-registry/image-registry-697d97f7c8-j4q94" Mar 10 18:51:41 crc kubenswrapper[4861]: I0310 18:51:41.163555 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/d82c061b-6eea-49c1-8017-c401d3bd0f58-auth-proxy-config\") pod \"machine-config-operator-74547568cd-zgxhw\" (UID: \"d82c061b-6eea-49c1-8017-c401d3bd0f58\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-zgxhw" Mar 10 18:51:41 crc kubenswrapper[4861]: I0310 18:51:41.163572 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/0e1e95be-62b6-4ab4-b526-f9482e74ed23-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-qm6j4\" (UID: \"0e1e95be-62b6-4ab4-b526-f9482e74ed23\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-qm6j4" Mar 10 18:51:41 crc kubenswrapper[4861]: I0310 18:51:41.163586 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/cf512bc0-beb2-4784-ac61-7f7a22ccc3e9-metrics-certs\") pod \"router-default-5444994796-rzhxp\" (UID: \"cf512bc0-beb2-4784-ac61-7f7a22ccc3e9\") " pod="openshift-ingress/router-default-5444994796-rzhxp" Mar 10 18:51:41 crc kubenswrapper[4861]: I0310 18:51:41.163603 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/1c9f5f8c-64b8-4f10-999c-9cb2f24efef3-ca-trust-extracted\") pod \"image-registry-697d97f7c8-j4q94\" (UID: \"1c9f5f8c-64b8-4f10-999c-9cb2f24efef3\") " pod="openshift-image-registry/image-registry-697d97f7c8-j4q94" Mar 10 18:51:41 crc kubenswrapper[4861]: I0310 18:51:41.163622 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/f01993c1-5b0e-4dd2-a98d-8685e13b474c-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-8wcvl\" (UID: \"f01993c1-5b0e-4dd2-a98d-8685e13b474c\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-8wcvl" Mar 10 18:51:41 crc kubenswrapper[4861]: I0310 18:51:41.163668 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ea181c37-7166-4db5-92b4-9321f06c0323-serving-cert\") pod \"service-ca-operator-777779d784-jbgkh\" (UID: \"ea181c37-7166-4db5-92b4-9321f06c0323\") " 
pod="openshift-service-ca-operator/service-ca-operator-777779d784-jbgkh" Mar 10 18:51:41 crc kubenswrapper[4861]: I0310 18:51:41.163685 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4k6v2\" (UniqueName: \"kubernetes.io/projected/07ef9fc2-e757-48ca-b8aa-c97224434044-kube-api-access-4k6v2\") pod \"catalog-operator-68c6474976-9khgp\" (UID: \"07ef9fc2-e757-48ca-b8aa-c97224434044\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-9khgp" Mar 10 18:51:41 crc kubenswrapper[4861]: I0310 18:51:41.163699 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/8835b95e-842f-4c2f-ba78-970b2ef6749c-signing-cabundle\") pod \"service-ca-9c57cc56f-t52tz\" (UID: \"8835b95e-842f-4c2f-ba78-970b2ef6749c\") " pod="openshift-service-ca/service-ca-9c57cc56f-t52tz" Mar 10 18:51:41 crc kubenswrapper[4861]: I0310 18:51:41.163739 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/191bb018-6a2d-44a5-adf6-121673d85eb7-tmpfs\") pod \"packageserver-d55dfcdfc-nmc5h\" (UID: \"191bb018-6a2d-44a5-adf6-121673d85eb7\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-nmc5h" Mar 10 18:51:41 crc kubenswrapper[4861]: I0310 18:51:41.163763 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e835c42f-7b8a-45a3-a153-ffab9b5386e0-config-volume\") pod \"collect-profiles-29552805-vbm4k\" (UID: \"e835c42f-7b8a-45a3-a153-ffab9b5386e0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29552805-vbm4k" Mar 10 18:51:41 crc kubenswrapper[4861]: I0310 18:51:41.163776 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" 
(UniqueName: \"kubernetes.io/secret/e835c42f-7b8a-45a3-a153-ffab9b5386e0-secret-volume\") pod \"collect-profiles-29552805-vbm4k\" (UID: \"e835c42f-7b8a-45a3-a153-ffab9b5386e0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29552805-vbm4k" Mar 10 18:51:41 crc kubenswrapper[4861]: I0310 18:51:41.163798 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vjjj9\" (UniqueName: \"kubernetes.io/projected/044ff649-6d8e-4b0b-bfaa-8018e00e105d-kube-api-access-vjjj9\") pod \"downloads-7954f5f757-qsq94\" (UID: \"044ff649-6d8e-4b0b-bfaa-8018e00e105d\") " pod="openshift-console/downloads-7954f5f757-qsq94" Mar 10 18:51:41 crc kubenswrapper[4861]: I0310 18:51:41.163814 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ns55x\" (UniqueName: \"kubernetes.io/projected/f6f89a90-023b-4b69-8848-702a50ab522c-kube-api-access-ns55x\") pod \"multus-admission-controller-857f4d67dd-456qk\" (UID: \"f6f89a90-023b-4b69-8848-702a50ab522c\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-456qk" Mar 10 18:51:41 crc kubenswrapper[4861]: I0310 18:51:41.163829 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f8c89f84-e0e2-4259-8245-1410699e2b6c-config-volume\") pod \"dns-default-kvtz7\" (UID: \"f8c89f84-e0e2-4259-8245-1410699e2b6c\") " pod="openshift-dns/dns-default-kvtz7" Mar 10 18:51:41 crc kubenswrapper[4861]: I0310 18:51:41.163845 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-stnf9\" (UniqueName: \"kubernetes.io/projected/0e1e95be-62b6-4ab4-b526-f9482e74ed23-kube-api-access-stnf9\") pod \"kube-storage-version-migrator-operator-b67b599dd-qm6j4\" (UID: \"0e1e95be-62b6-4ab4-b526-f9482e74ed23\") " 
pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-qm6j4" Mar 10 18:51:41 crc kubenswrapper[4861]: I0310 18:51:41.163904 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v98pk\" (UniqueName: \"kubernetes.io/projected/cf512bc0-beb2-4784-ac61-7f7a22ccc3e9-kube-api-access-v98pk\") pod \"router-default-5444994796-rzhxp\" (UID: \"cf512bc0-beb2-4784-ac61-7f7a22ccc3e9\") " pod="openshift-ingress/router-default-5444994796-rzhxp" Mar 10 18:51:41 crc kubenswrapper[4861]: I0310 18:51:41.163938 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d4f453f4-2d5c-408c-8a19-74d979cd78c8-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-9dcxh\" (UID: \"d4f453f4-2d5c-408c-8a19-74d979cd78c8\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-9dcxh" Mar 10 18:51:41 crc kubenswrapper[4861]: I0310 18:51:41.163953 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0e1e95be-62b6-4ab4-b526-f9482e74ed23-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-qm6j4\" (UID: \"0e1e95be-62b6-4ab4-b526-f9482e74ed23\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-qm6j4" Mar 10 18:51:41 crc kubenswrapper[4861]: I0310 18:51:41.163970 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/32e38d22-7f4c-4951-8eb2-befefca67916-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-525cw\" (UID: \"32e38d22-7f4c-4951-8eb2-befefca67916\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-525cw" Mar 10 18:51:41 
crc kubenswrapper[4861]: I0310 18:51:41.163984 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9zfl7\" (UniqueName: \"kubernetes.io/projected/d82c061b-6eea-49c1-8017-c401d3bd0f58-kube-api-access-9zfl7\") pod \"machine-config-operator-74547568cd-zgxhw\" (UID: \"d82c061b-6eea-49c1-8017-c401d3bd0f58\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-zgxhw" Mar 10 18:51:41 crc kubenswrapper[4861]: I0310 18:51:41.164000 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rmm9c\" (UniqueName: \"kubernetes.io/projected/6922c8a2-7528-42fa-8df7-963b652296f8-kube-api-access-rmm9c\") pod \"ingress-canary-jbmn2\" (UID: \"6922c8a2-7528-42fa-8df7-963b652296f8\") " pod="openshift-ingress-canary/ingress-canary-jbmn2" Mar 10 18:51:41 crc kubenswrapper[4861]: I0310 18:51:41.164022 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/191bb018-6a2d-44a5-adf6-121673d85eb7-apiservice-cert\") pod \"packageserver-d55dfcdfc-nmc5h\" (UID: \"191bb018-6a2d-44a5-adf6-121673d85eb7\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-nmc5h" Mar 10 18:51:41 crc kubenswrapper[4861]: I0310 18:51:41.164037 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/191bb018-6a2d-44a5-adf6-121673d85eb7-webhook-cert\") pod \"packageserver-d55dfcdfc-nmc5h\" (UID: \"191bb018-6a2d-44a5-adf6-121673d85eb7\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-nmc5h" Mar 10 18:51:41 crc kubenswrapper[4861]: I0310 18:51:41.164051 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8v4j2\" (UniqueName: 
\"kubernetes.io/projected/191bb018-6a2d-44a5-adf6-121673d85eb7-kube-api-access-8v4j2\") pod \"packageserver-d55dfcdfc-nmc5h\" (UID: \"191bb018-6a2d-44a5-adf6-121673d85eb7\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-nmc5h" Mar 10 18:51:41 crc kubenswrapper[4861]: I0310 18:51:41.164078 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/1c9f5f8c-64b8-4f10-999c-9cb2f24efef3-trusted-ca\") pod \"image-registry-697d97f7c8-j4q94\" (UID: \"1c9f5f8c-64b8-4f10-999c-9cb2f24efef3\") " pod="openshift-image-registry/image-registry-697d97f7c8-j4q94" Mar 10 18:51:41 crc kubenswrapper[4861]: I0310 18:51:41.164092 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d4f453f4-2d5c-408c-8a19-74d979cd78c8-config\") pod \"kube-controller-manager-operator-78b949d7b-9dcxh\" (UID: \"d4f453f4-2d5c-408c-8a19-74d979cd78c8\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-9dcxh" Mar 10 18:51:41 crc kubenswrapper[4861]: I0310 18:51:41.164116 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/1c9f5f8c-64b8-4f10-999c-9cb2f24efef3-installation-pull-secrets\") pod \"image-registry-697d97f7c8-j4q94\" (UID: \"1c9f5f8c-64b8-4f10-999c-9cb2f24efef3\") " pod="openshift-image-registry/image-registry-697d97f7c8-j4q94" Mar 10 18:51:41 crc kubenswrapper[4861]: I0310 18:51:41.164161 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/f8c89f84-e0e2-4259-8245-1410699e2b6c-metrics-tls\") pod \"dns-default-kvtz7\" (UID: \"f8c89f84-e0e2-4259-8245-1410699e2b6c\") " pod="openshift-dns/dns-default-kvtz7" Mar 10 18:51:41 crc kubenswrapper[4861]: I0310 18:51:41.164185 4861 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d4f453f4-2d5c-408c-8a19-74d979cd78c8-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-9dcxh\" (UID: \"d4f453f4-2d5c-408c-8a19-74d979cd78c8\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-9dcxh" Mar 10 18:51:41 crc kubenswrapper[4861]: I0310 18:51:41.164211 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cf512bc0-beb2-4784-ac61-7f7a22ccc3e9-service-ca-bundle\") pod \"router-default-5444994796-rzhxp\" (UID: \"cf512bc0-beb2-4784-ac61-7f7a22ccc3e9\") " pod="openshift-ingress/router-default-5444994796-rzhxp" Mar 10 18:51:41 crc kubenswrapper[4861]: I0310 18:51:41.164236 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/cf512bc0-beb2-4784-ac61-7f7a22ccc3e9-stats-auth\") pod \"router-default-5444994796-rzhxp\" (UID: \"cf512bc0-beb2-4784-ac61-7f7a22ccc3e9\") " pod="openshift-ingress/router-default-5444994796-rzhxp" Mar 10 18:51:41 crc kubenswrapper[4861]: I0310 18:51:41.164250 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/6922c8a2-7528-42fa-8df7-963b652296f8-cert\") pod \"ingress-canary-jbmn2\" (UID: \"6922c8a2-7528-42fa-8df7-963b652296f8\") " pod="openshift-ingress-canary/ingress-canary-jbmn2" Mar 10 18:51:41 crc kubenswrapper[4861]: I0310 18:51:41.164288 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/32e38d22-7f4c-4951-8eb2-befefca67916-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-525cw\" (UID: 
\"32e38d22-7f4c-4951-8eb2-befefca67916\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-525cw" Mar 10 18:51:41 crc kubenswrapper[4861]: I0310 18:51:41.164312 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/b6781d90-ea76-44b0-b2eb-44641332f632-plugins-dir\") pod \"csi-hostpathplugin-dk9rs\" (UID: \"b6781d90-ea76-44b0-b2eb-44641332f632\") " pod="hostpath-provisioner/csi-hostpathplugin-dk9rs" Mar 10 18:51:41 crc kubenswrapper[4861]: I0310 18:51:41.164326 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/d82c061b-6eea-49c1-8017-c401d3bd0f58-proxy-tls\") pod \"machine-config-operator-74547568cd-zgxhw\" (UID: \"d82c061b-6eea-49c1-8017-c401d3bd0f58\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-zgxhw" Mar 10 18:51:41 crc kubenswrapper[4861]: I0310 18:51:41.164376 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/d82c061b-6eea-49c1-8017-c401d3bd0f58-images\") pod \"machine-config-operator-74547568cd-zgxhw\" (UID: \"d82c061b-6eea-49c1-8017-c401d3bd0f58\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-zgxhw" Mar 10 18:51:41 crc kubenswrapper[4861]: I0310 18:51:41.164403 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2t7bj\" (UniqueName: \"kubernetes.io/projected/d5401d63-54cf-4958-a402-125bbf5793f1-kube-api-access-2t7bj\") pod \"migrator-59844c95c7-9g768\" (UID: \"d5401d63-54cf-4958-a402-125bbf5793f1\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-9g768" Mar 10 18:51:41 crc kubenswrapper[4861]: I0310 18:51:41.164420 4861 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5x9pz\" (UniqueName: \"kubernetes.io/projected/19e950a0-6f71-44af-b995-f1ef1be6edbb-kube-api-access-5x9pz\") pod \"auto-csr-approver-29552810-wv88x\" (UID: \"19e950a0-6f71-44af-b995-f1ef1be6edbb\") " pod="openshift-infra/auto-csr-approver-29552810-wv88x" Mar 10 18:51:41 crc kubenswrapper[4861]: I0310 18:51:41.164445 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/562c8783-06a5-4205-ab38-9aabc25c1033-config\") pod \"kube-apiserver-operator-766d6c64bb-x2llm\" (UID: \"562c8783-06a5-4205-ab38-9aabc25c1033\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-x2llm" Mar 10 18:51:41 crc kubenswrapper[4861]: I0310 18:51:41.164461 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qvxj6\" (UniqueName: \"kubernetes.io/projected/8835b95e-842f-4c2f-ba78-970b2ef6749c-kube-api-access-qvxj6\") pod \"service-ca-9c57cc56f-t52tz\" (UID: \"8835b95e-842f-4c2f-ba78-970b2ef6749c\") " pod="openshift-service-ca/service-ca-9c57cc56f-t52tz" Mar 10 18:51:41 crc kubenswrapper[4861]: I0310 18:51:41.164479 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-247md\" (UniqueName: \"kubernetes.io/projected/b6781d90-ea76-44b0-b2eb-44641332f632-kube-api-access-247md\") pod \"csi-hostpathplugin-dk9rs\" (UID: \"b6781d90-ea76-44b0-b2eb-44641332f632\") " pod="hostpath-provisioner/csi-hostpathplugin-dk9rs" Mar 10 18:51:41 crc kubenswrapper[4861]: I0310 18:51:41.164493 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gthgc\" (UniqueName: \"kubernetes.io/projected/f8c89f84-e0e2-4259-8245-1410699e2b6c-kube-api-access-gthgc\") pod \"dns-default-kvtz7\" (UID: \"f8c89f84-e0e2-4259-8245-1410699e2b6c\") 
" pod="openshift-dns/dns-default-kvtz7" Mar 10 18:51:41 crc kubenswrapper[4861]: I0310 18:51:41.164509 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/f7d1fb37-29a9-4548-9054-daa43301c56d-certs\") pod \"machine-config-server-md6zd\" (UID: \"f7d1fb37-29a9-4548-9054-daa43301c56d\") " pod="openshift-machine-config-operator/machine-config-server-md6zd" Mar 10 18:51:41 crc kubenswrapper[4861]: I0310 18:51:41.164538 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hw87j\" (UniqueName: \"kubernetes.io/projected/592a6d7a-6cf4-4a23-bc7d-2444b6387faa-kube-api-access-hw87j\") pod \"package-server-manager-789f6589d5-9x9tv\" (UID: \"592a6d7a-6cf4-4a23-bc7d-2444b6387faa\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-9x9tv" Mar 10 18:51:41 crc kubenswrapper[4861]: I0310 18:51:41.164572 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/b6781d90-ea76-44b0-b2eb-44641332f632-socket-dir\") pod \"csi-hostpathplugin-dk9rs\" (UID: \"b6781d90-ea76-44b0-b2eb-44641332f632\") " pod="hostpath-provisioner/csi-hostpathplugin-dk9rs" Mar 10 18:51:41 crc kubenswrapper[4861]: I0310 18:51:41.164587 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/8f1b1590-e261-4e1f-9427-039f5a9b3db7-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-7qnc2\" (UID: \"8f1b1590-e261-4e1f-9427-039f5a9b3db7\") " pod="openshift-marketplace/marketplace-operator-79b997595-7qnc2" Mar 10 18:51:41 crc kubenswrapper[4861]: I0310 18:51:41.164616 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: 
\"kubernetes.io/configmap/1c9f5f8c-64b8-4f10-999c-9cb2f24efef3-registry-certificates\") pod \"image-registry-697d97f7c8-j4q94\" (UID: \"1c9f5f8c-64b8-4f10-999c-9cb2f24efef3\") " pod="openshift-image-registry/image-registry-697d97f7c8-j4q94" Mar 10 18:51:41 crc kubenswrapper[4861]: I0310 18:51:41.164642 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/1c9f5f8c-64b8-4f10-999c-9cb2f24efef3-registry-tls\") pod \"image-registry-697d97f7c8-j4q94\" (UID: \"1c9f5f8c-64b8-4f10-999c-9cb2f24efef3\") " pod="openshift-image-registry/image-registry-697d97f7c8-j4q94" Mar 10 18:51:41 crc kubenswrapper[4861]: I0310 18:51:41.164658 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-txlkm\" (UniqueName: \"kubernetes.io/projected/b0b1942c-6cca-4d7f-8567-e5d340ee265f-kube-api-access-txlkm\") pod \"olm-operator-6b444d44fb-2m858\" (UID: \"b0b1942c-6cca-4d7f-8567-e5d340ee265f\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-2m858" Mar 10 18:51:41 crc kubenswrapper[4861]: I0310 18:51:41.164694 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/07ef9fc2-e757-48ca-b8aa-c97224434044-profile-collector-cert\") pod \"catalog-operator-68c6474976-9khgp\" (UID: \"07ef9fc2-e757-48ca-b8aa-c97224434044\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-9khgp" Mar 10 18:51:41 crc kubenswrapper[4861]: I0310 18:51:41.164779 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jzlgq\" (UniqueName: \"kubernetes.io/projected/8f1b1590-e261-4e1f-9427-039f5a9b3db7-kube-api-access-jzlgq\") pod \"marketplace-operator-79b997595-7qnc2\" (UID: \"8f1b1590-e261-4e1f-9427-039f5a9b3db7\") " 
pod="openshift-marketplace/marketplace-operator-79b997595-7qnc2" Mar 10 18:51:41 crc kubenswrapper[4861]: I0310 18:51:41.164798 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/f6f89a90-023b-4b69-8848-702a50ab522c-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-456qk\" (UID: \"f6f89a90-023b-4b69-8848-702a50ab522c\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-456qk" Mar 10 18:51:41 crc kubenswrapper[4861]: I0310 18:51:41.164819 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/b6781d90-ea76-44b0-b2eb-44641332f632-csi-data-dir\") pod \"csi-hostpathplugin-dk9rs\" (UID: \"b6781d90-ea76-44b0-b2eb-44641332f632\") " pod="hostpath-provisioner/csi-hostpathplugin-dk9rs" Mar 10 18:51:41 crc kubenswrapper[4861]: I0310 18:51:41.165369 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rwdjq\" (UniqueName: \"kubernetes.io/projected/e835c42f-7b8a-45a3-a153-ffab9b5386e0-kube-api-access-rwdjq\") pod \"collect-profiles-29552805-vbm4k\" (UID: \"e835c42f-7b8a-45a3-a153-ffab9b5386e0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29552805-vbm4k" Mar 10 18:51:41 crc kubenswrapper[4861]: I0310 18:51:41.165448 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/b6781d90-ea76-44b0-b2eb-44641332f632-registration-dir\") pod \"csi-hostpathplugin-dk9rs\" (UID: \"b6781d90-ea76-44b0-b2eb-44641332f632\") " pod="hostpath-provisioner/csi-hostpathplugin-dk9rs" Mar 10 18:51:41 crc kubenswrapper[4861]: I0310 18:51:41.165472 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wd92b\" (UniqueName: 
\"kubernetes.io/projected/956b8171-6837-47bf-9f44-7f9cc43a1e30-kube-api-access-wd92b\") pod \"control-plane-machine-set-operator-78cbb6b69f-dwxk9\" (UID: \"956b8171-6837-47bf-9f44-7f9cc43a1e30\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-dwxk9" Mar 10 18:51:41 crc kubenswrapper[4861]: I0310 18:51:41.165510 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/1c9f5f8c-64b8-4f10-999c-9cb2f24efef3-bound-sa-token\") pod \"image-registry-697d97f7c8-j4q94\" (UID: \"1c9f5f8c-64b8-4f10-999c-9cb2f24efef3\") " pod="openshift-image-registry/image-registry-697d97f7c8-j4q94" Mar 10 18:51:41 crc kubenswrapper[4861]: I0310 18:51:41.165590 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ea181c37-7166-4db5-92b4-9321f06c0323-config\") pod \"service-ca-operator-777779d784-jbgkh\" (UID: \"ea181c37-7166-4db5-92b4-9321f06c0323\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-jbgkh" Mar 10 18:51:41 crc kubenswrapper[4861]: I0310 18:51:41.165611 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/f7d1fb37-29a9-4548-9054-daa43301c56d-node-bootstrap-token\") pod \"machine-config-server-md6zd\" (UID: \"f7d1fb37-29a9-4548-9054-daa43301c56d\") " pod="openshift-machine-config-operator/machine-config-server-md6zd" Mar 10 18:51:41 crc kubenswrapper[4861]: I0310 18:51:41.165627 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zjq2r\" (UniqueName: \"kubernetes.io/projected/f01993c1-5b0e-4dd2-a98d-8685e13b474c-kube-api-access-zjq2r\") pod \"machine-config-controller-84d6567774-8wcvl\" (UID: \"f01993c1-5b0e-4dd2-a98d-8685e13b474c\") " 
pod="openshift-machine-config-operator/machine-config-controller-84d6567774-8wcvl" Mar 10 18:51:41 crc kubenswrapper[4861]: I0310 18:51:41.165690 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/8835b95e-842f-4c2f-ba78-970b2ef6749c-signing-key\") pod \"service-ca-9c57cc56f-t52tz\" (UID: \"8835b95e-842f-4c2f-ba78-970b2ef6749c\") " pod="openshift-service-ca/service-ca-9c57cc56f-t52tz" Mar 10 18:51:41 crc kubenswrapper[4861]: E0310 18:51:41.166595 4861 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-10 18:51:41.66657914 +0000 UTC m=+245.430015100 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j4q94" (UID: "1c9f5f8c-64b8-4f10-999c-9cb2f24efef3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 10 18:51:41 crc kubenswrapper[4861]: I0310 18:51:41.167419 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/1c9f5f8c-64b8-4f10-999c-9cb2f24efef3-trusted-ca\") pod \"image-registry-697d97f7c8-j4q94\" (UID: \"1c9f5f8c-64b8-4f10-999c-9cb2f24efef3\") " pod="openshift-image-registry/image-registry-697d97f7c8-j4q94" Mar 10 18:51:41 crc kubenswrapper[4861]: I0310 18:51:41.173790 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/1c9f5f8c-64b8-4f10-999c-9cb2f24efef3-ca-trust-extracted\") pod \"image-registry-697d97f7c8-j4q94\" 
(UID: \"1c9f5f8c-64b8-4f10-999c-9cb2f24efef3\") " pod="openshift-image-registry/image-registry-697d97f7c8-j4q94" Mar 10 18:51:41 crc kubenswrapper[4861]: I0310 18:51:41.173961 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/1c9f5f8c-64b8-4f10-999c-9cb2f24efef3-registry-certificates\") pod \"image-registry-697d97f7c8-j4q94\" (UID: \"1c9f5f8c-64b8-4f10-999c-9cb2f24efef3\") " pod="openshift-image-registry/image-registry-697d97f7c8-j4q94" Mar 10 18:51:41 crc kubenswrapper[4861]: I0310 18:51:41.174423 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/1c9f5f8c-64b8-4f10-999c-9cb2f24efef3-installation-pull-secrets\") pod \"image-registry-697d97f7c8-j4q94\" (UID: \"1c9f5f8c-64b8-4f10-999c-9cb2f24efef3\") " pod="openshift-image-registry/image-registry-697d97f7c8-j4q94" Mar 10 18:51:41 crc kubenswrapper[4861]: I0310 18:51:41.188972 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-rqlh4"] Mar 10 18:51:41 crc kubenswrapper[4861]: I0310 18:51:41.190030 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-gwxrl"] Mar 10 18:51:41 crc kubenswrapper[4861]: I0310 18:51:41.202473 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/1c9f5f8c-64b8-4f10-999c-9cb2f24efef3-registry-tls\") pod \"image-registry-697d97f7c8-j4q94\" (UID: \"1c9f5f8c-64b8-4f10-999c-9cb2f24efef3\") " pod="openshift-image-registry/image-registry-697d97f7c8-j4q94" Mar 10 18:51:41 crc kubenswrapper[4861]: I0310 18:51:41.202897 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-4s2vr" Mar 10 18:51:41 crc kubenswrapper[4861]: I0310 18:51:41.225241 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/1c9f5f8c-64b8-4f10-999c-9cb2f24efef3-bound-sa-token\") pod \"image-registry-697d97f7c8-j4q94\" (UID: \"1c9f5f8c-64b8-4f10-999c-9cb2f24efef3\") " pod="openshift-image-registry/image-registry-697d97f7c8-j4q94" Mar 10 18:51:41 crc kubenswrapper[4861]: W0310 18:51:41.225610 4861 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9a07572a_a5d5_41f6_9b62_5d608b529342.slice/crio-7cea7966e02ced6decff3850718d00a7a954bc932489caf1924a905a3cf01996 WatchSource:0}: Error finding container 7cea7966e02ced6decff3850718d00a7a954bc932489caf1924a905a3cf01996: Status 404 returned error can't find the container with id 7cea7966e02ced6decff3850718d00a7a954bc932489caf1924a905a3cf01996 Mar 10 18:51:41 crc kubenswrapper[4861]: I0310 18:51:41.240896 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vjjj9\" (UniqueName: \"kubernetes.io/projected/044ff649-6d8e-4b0b-bfaa-8018e00e105d-kube-api-access-vjjj9\") pod \"downloads-7954f5f757-qsq94\" (UID: \"044ff649-6d8e-4b0b-bfaa-8018e00e105d\") " pod="openshift-console/downloads-7954f5f757-qsq94" Mar 10 18:51:41 crc kubenswrapper[4861]: I0310 18:51:41.261259 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5p7g9\" (UniqueName: \"kubernetes.io/projected/1c9f5f8c-64b8-4f10-999c-9cb2f24efef3-kube-api-access-5p7g9\") pod \"image-registry-697d97f7c8-j4q94\" (UID: \"1c9f5f8c-64b8-4f10-999c-9cb2f24efef3\") " pod="openshift-image-registry/image-registry-697d97f7c8-j4q94" Mar 10 18:51:41 crc kubenswrapper[4861]: I0310 18:51:41.267267 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 10 18:51:41 crc kubenswrapper[4861]: I0310 18:51:41.267439 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ea181c37-7166-4db5-92b4-9321f06c0323-config\") pod \"service-ca-operator-777779d784-jbgkh\" (UID: \"ea181c37-7166-4db5-92b4-9321f06c0323\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-jbgkh" Mar 10 18:51:41 crc kubenswrapper[4861]: E0310 18:51:41.267468 4861 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-10 18:51:41.767436465 +0000 UTC m=+245.530872425 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 10 18:51:41 crc kubenswrapper[4861]: I0310 18:51:41.267513 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/f7d1fb37-29a9-4548-9054-daa43301c56d-node-bootstrap-token\") pod \"machine-config-server-md6zd\" (UID: \"f7d1fb37-29a9-4548-9054-daa43301c56d\") " pod="openshift-machine-config-operator/machine-config-server-md6zd" Mar 10 18:51:41 crc kubenswrapper[4861]: I0310 18:51:41.267577 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zjq2r\" (UniqueName: \"kubernetes.io/projected/f01993c1-5b0e-4dd2-a98d-8685e13b474c-kube-api-access-zjq2r\") pod \"machine-config-controller-84d6567774-8wcvl\" (UID: \"f01993c1-5b0e-4dd2-a98d-8685e13b474c\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-8wcvl" Mar 10 18:51:41 crc kubenswrapper[4861]: I0310 18:51:41.267605 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/8835b95e-842f-4c2f-ba78-970b2ef6749c-signing-key\") pod \"service-ca-9c57cc56f-t52tz\" (UID: \"8835b95e-842f-4c2f-ba78-970b2ef6749c\") " pod="openshift-service-ca/service-ca-9c57cc56f-t52tz" Mar 10 18:51:41 crc kubenswrapper[4861]: I0310 18:51:41.267631 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/562c8783-06a5-4205-ab38-9aabc25c1033-serving-cert\") pod 
\"kube-apiserver-operator-766d6c64bb-x2llm\" (UID: \"562c8783-06a5-4205-ab38-9aabc25c1033\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-x2llm" Mar 10 18:51:41 crc kubenswrapper[4861]: I0310 18:51:41.267652 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b0b1942c-6cca-4d7f-8567-e5d340ee265f-profile-collector-cert\") pod \"olm-operator-6b444d44fb-2m858\" (UID: \"b0b1942c-6cca-4d7f-8567-e5d340ee265f\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-2m858" Mar 10 18:51:41 crc kubenswrapper[4861]: I0310 18:51:41.267677 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/cf512bc0-beb2-4784-ac61-7f7a22ccc3e9-default-certificate\") pod \"router-default-5444994796-rzhxp\" (UID: \"cf512bc0-beb2-4784-ac61-7f7a22ccc3e9\") " pod="openshift-ingress/router-default-5444994796-rzhxp" Mar 10 18:51:41 crc kubenswrapper[4861]: I0310 18:51:41.267701 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b0b1942c-6cca-4d7f-8567-e5d340ee265f-srv-cert\") pod \"olm-operator-6b444d44fb-2m858\" (UID: \"b0b1942c-6cca-4d7f-8567-e5d340ee265f\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-2m858" Mar 10 18:51:41 crc kubenswrapper[4861]: I0310 18:51:41.267754 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/07ef9fc2-e757-48ca-b8aa-c97224434044-srv-cert\") pod \"catalog-operator-68c6474976-9khgp\" (UID: \"07ef9fc2-e757-48ca-b8aa-c97224434044\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-9khgp" Mar 10 18:51:41 crc kubenswrapper[4861]: I0310 18:51:41.267775 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-7w7wl\" (UniqueName: \"kubernetes.io/projected/f7d1fb37-29a9-4548-9054-daa43301c56d-kube-api-access-7w7wl\") pod \"machine-config-server-md6zd\" (UID: \"f7d1fb37-29a9-4548-9054-daa43301c56d\") " pod="openshift-machine-config-operator/machine-config-server-md6zd" Mar 10 18:51:41 crc kubenswrapper[4861]: I0310 18:51:41.267804 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/b6781d90-ea76-44b0-b2eb-44641332f632-mountpoint-dir\") pod \"csi-hostpathplugin-dk9rs\" (UID: \"b6781d90-ea76-44b0-b2eb-44641332f632\") " pod="hostpath-provisioner/csi-hostpathplugin-dk9rs" Mar 10 18:51:41 crc kubenswrapper[4861]: I0310 18:51:41.267819 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/32e38d22-7f4c-4951-8eb2-befefca67916-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-525cw\" (UID: \"32e38d22-7f4c-4951-8eb2-befefca67916\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-525cw" Mar 10 18:51:41 crc kubenswrapper[4861]: I0310 18:51:41.267843 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/592a6d7a-6cf4-4a23-bc7d-2444b6387faa-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-9x9tv\" (UID: \"592a6d7a-6cf4-4a23-bc7d-2444b6387faa\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-9x9tv" Mar 10 18:51:41 crc kubenswrapper[4861]: I0310 18:51:41.267863 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/f01993c1-5b0e-4dd2-a98d-8685e13b474c-proxy-tls\") pod \"machine-config-controller-84d6567774-8wcvl\" (UID: \"f01993c1-5b0e-4dd2-a98d-8685e13b474c\") " 
pod="openshift-machine-config-operator/machine-config-controller-84d6567774-8wcvl" Mar 10 18:51:41 crc kubenswrapper[4861]: I0310 18:51:41.267905 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/956b8171-6837-47bf-9f44-7f9cc43a1e30-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-dwxk9\" (UID: \"956b8171-6837-47bf-9f44-7f9cc43a1e30\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-dwxk9" Mar 10 18:51:41 crc kubenswrapper[4861]: I0310 18:51:41.267934 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jlrp2\" (UniqueName: \"kubernetes.io/projected/ea181c37-7166-4db5-92b4-9321f06c0323-kube-api-access-jlrp2\") pod \"service-ca-operator-777779d784-jbgkh\" (UID: \"ea181c37-7166-4db5-92b4-9321f06c0323\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-jbgkh" Mar 10 18:51:41 crc kubenswrapper[4861]: I0310 18:51:41.267952 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/562c8783-06a5-4205-ab38-9aabc25c1033-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-x2llm\" (UID: \"562c8783-06a5-4205-ab38-9aabc25c1033\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-x2llm" Mar 10 18:51:41 crc kubenswrapper[4861]: I0310 18:51:41.267977 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f1b1590-e261-4e1f-9427-039f5a9b3db7-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-7qnc2\" (UID: \"8f1b1590-e261-4e1f-9427-039f5a9b3db7\") " pod="openshift-marketplace/marketplace-operator-79b997595-7qnc2" Mar 10 18:51:41 crc kubenswrapper[4861]: I0310 18:51:41.268003 4861 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j4q94\" (UID: \"1c9f5f8c-64b8-4f10-999c-9cb2f24efef3\") " pod="openshift-image-registry/image-registry-697d97f7c8-j4q94" Mar 10 18:51:41 crc kubenswrapper[4861]: I0310 18:51:41.268030 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/d82c061b-6eea-49c1-8017-c401d3bd0f58-auth-proxy-config\") pod \"machine-config-operator-74547568cd-zgxhw\" (UID: \"d82c061b-6eea-49c1-8017-c401d3bd0f58\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-zgxhw" Mar 10 18:51:41 crc kubenswrapper[4861]: I0310 18:51:41.268048 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0e1e95be-62b6-4ab4-b526-f9482e74ed23-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-qm6j4\" (UID: \"0e1e95be-62b6-4ab4-b526-f9482e74ed23\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-qm6j4" Mar 10 18:51:41 crc kubenswrapper[4861]: I0310 18:51:41.268060 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ea181c37-7166-4db5-92b4-9321f06c0323-config\") pod \"service-ca-operator-777779d784-jbgkh\" (UID: \"ea181c37-7166-4db5-92b4-9321f06c0323\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-jbgkh" Mar 10 18:51:41 crc kubenswrapper[4861]: I0310 18:51:41.268063 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/cf512bc0-beb2-4784-ac61-7f7a22ccc3e9-metrics-certs\") pod \"router-default-5444994796-rzhxp\" (UID: 
\"cf512bc0-beb2-4784-ac61-7f7a22ccc3e9\") " pod="openshift-ingress/router-default-5444994796-rzhxp" Mar 10 18:51:41 crc kubenswrapper[4861]: I0310 18:51:41.268116 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ea181c37-7166-4db5-92b4-9321f06c0323-serving-cert\") pod \"service-ca-operator-777779d784-jbgkh\" (UID: \"ea181c37-7166-4db5-92b4-9321f06c0323\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-jbgkh" Mar 10 18:51:41 crc kubenswrapper[4861]: I0310 18:51:41.268136 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/f01993c1-5b0e-4dd2-a98d-8685e13b474c-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-8wcvl\" (UID: \"f01993c1-5b0e-4dd2-a98d-8685e13b474c\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-8wcvl" Mar 10 18:51:41 crc kubenswrapper[4861]: I0310 18:51:41.268156 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/191bb018-6a2d-44a5-adf6-121673d85eb7-tmpfs\") pod \"packageserver-d55dfcdfc-nmc5h\" (UID: \"191bb018-6a2d-44a5-adf6-121673d85eb7\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-nmc5h" Mar 10 18:51:41 crc kubenswrapper[4861]: I0310 18:51:41.268172 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4k6v2\" (UniqueName: \"kubernetes.io/projected/07ef9fc2-e757-48ca-b8aa-c97224434044-kube-api-access-4k6v2\") pod \"catalog-operator-68c6474976-9khgp\" (UID: \"07ef9fc2-e757-48ca-b8aa-c97224434044\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-9khgp" Mar 10 18:51:41 crc kubenswrapper[4861]: I0310 18:51:41.268189 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: 
\"kubernetes.io/configmap/8835b95e-842f-4c2f-ba78-970b2ef6749c-signing-cabundle\") pod \"service-ca-9c57cc56f-t52tz\" (UID: \"8835b95e-842f-4c2f-ba78-970b2ef6749c\") " pod="openshift-service-ca/service-ca-9c57cc56f-t52tz" Mar 10 18:51:41 crc kubenswrapper[4861]: I0310 18:51:41.268206 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e835c42f-7b8a-45a3-a153-ffab9b5386e0-config-volume\") pod \"collect-profiles-29552805-vbm4k\" (UID: \"e835c42f-7b8a-45a3-a153-ffab9b5386e0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29552805-vbm4k" Mar 10 18:51:41 crc kubenswrapper[4861]: I0310 18:51:41.268220 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e835c42f-7b8a-45a3-a153-ffab9b5386e0-secret-volume\") pod \"collect-profiles-29552805-vbm4k\" (UID: \"e835c42f-7b8a-45a3-a153-ffab9b5386e0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29552805-vbm4k" Mar 10 18:51:41 crc kubenswrapper[4861]: I0310 18:51:41.268245 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ns55x\" (UniqueName: \"kubernetes.io/projected/f6f89a90-023b-4b69-8848-702a50ab522c-kube-api-access-ns55x\") pod \"multus-admission-controller-857f4d67dd-456qk\" (UID: \"f6f89a90-023b-4b69-8848-702a50ab522c\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-456qk" Mar 10 18:51:41 crc kubenswrapper[4861]: I0310 18:51:41.268262 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f8c89f84-e0e2-4259-8245-1410699e2b6c-config-volume\") pod \"dns-default-kvtz7\" (UID: \"f8c89f84-e0e2-4259-8245-1410699e2b6c\") " pod="openshift-dns/dns-default-kvtz7" Mar 10 18:51:41 crc kubenswrapper[4861]: I0310 18:51:41.268279 4861 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-stnf9\" (UniqueName: \"kubernetes.io/projected/0e1e95be-62b6-4ab4-b526-f9482e74ed23-kube-api-access-stnf9\") pod \"kube-storage-version-migrator-operator-b67b599dd-qm6j4\" (UID: \"0e1e95be-62b6-4ab4-b526-f9482e74ed23\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-qm6j4" Mar 10 18:51:41 crc kubenswrapper[4861]: I0310 18:51:41.268297 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v98pk\" (UniqueName: \"kubernetes.io/projected/cf512bc0-beb2-4784-ac61-7f7a22ccc3e9-kube-api-access-v98pk\") pod \"router-default-5444994796-rzhxp\" (UID: \"cf512bc0-beb2-4784-ac61-7f7a22ccc3e9\") " pod="openshift-ingress/router-default-5444994796-rzhxp" Mar 10 18:51:41 crc kubenswrapper[4861]: I0310 18:51:41.268315 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d4f453f4-2d5c-408c-8a19-74d979cd78c8-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-9dcxh\" (UID: \"d4f453f4-2d5c-408c-8a19-74d979cd78c8\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-9dcxh" Mar 10 18:51:41 crc kubenswrapper[4861]: I0310 18:51:41.268333 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0e1e95be-62b6-4ab4-b526-f9482e74ed23-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-qm6j4\" (UID: \"0e1e95be-62b6-4ab4-b526-f9482e74ed23\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-qm6j4" Mar 10 18:51:41 crc kubenswrapper[4861]: I0310 18:51:41.268328 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/b6781d90-ea76-44b0-b2eb-44641332f632-mountpoint-dir\") pod \"csi-hostpathplugin-dk9rs\" 
(UID: \"b6781d90-ea76-44b0-b2eb-44641332f632\") " pod="hostpath-provisioner/csi-hostpathplugin-dk9rs" Mar 10 18:51:41 crc kubenswrapper[4861]: I0310 18:51:41.268350 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/32e38d22-7f4c-4951-8eb2-befefca67916-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-525cw\" (UID: \"32e38d22-7f4c-4951-8eb2-befefca67916\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-525cw" Mar 10 18:51:41 crc kubenswrapper[4861]: I0310 18:51:41.272420 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9zfl7\" (UniqueName: \"kubernetes.io/projected/d82c061b-6eea-49c1-8017-c401d3bd0f58-kube-api-access-9zfl7\") pod \"machine-config-operator-74547568cd-zgxhw\" (UID: \"d82c061b-6eea-49c1-8017-c401d3bd0f58\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-zgxhw" Mar 10 18:51:41 crc kubenswrapper[4861]: I0310 18:51:41.272459 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rmm9c\" (UniqueName: \"kubernetes.io/projected/6922c8a2-7528-42fa-8df7-963b652296f8-kube-api-access-rmm9c\") pod \"ingress-canary-jbmn2\" (UID: \"6922c8a2-7528-42fa-8df7-963b652296f8\") " pod="openshift-ingress-canary/ingress-canary-jbmn2" Mar 10 18:51:41 crc kubenswrapper[4861]: I0310 18:51:41.272480 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8v4j2\" (UniqueName: \"kubernetes.io/projected/191bb018-6a2d-44a5-adf6-121673d85eb7-kube-api-access-8v4j2\") pod \"packageserver-d55dfcdfc-nmc5h\" (UID: \"191bb018-6a2d-44a5-adf6-121673d85eb7\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-nmc5h" Mar 10 18:51:41 crc kubenswrapper[4861]: I0310 18:51:41.272503 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" 
(UniqueName: \"kubernetes.io/configmap/d4f453f4-2d5c-408c-8a19-74d979cd78c8-config\") pod \"kube-controller-manager-operator-78b949d7b-9dcxh\" (UID: \"d4f453f4-2d5c-408c-8a19-74d979cd78c8\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-9dcxh" Mar 10 18:51:41 crc kubenswrapper[4861]: I0310 18:51:41.272522 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/191bb018-6a2d-44a5-adf6-121673d85eb7-apiservice-cert\") pod \"packageserver-d55dfcdfc-nmc5h\" (UID: \"191bb018-6a2d-44a5-adf6-121673d85eb7\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-nmc5h" Mar 10 18:51:41 crc kubenswrapper[4861]: I0310 18:51:41.272542 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/cf512bc0-beb2-4784-ac61-7f7a22ccc3e9-metrics-certs\") pod \"router-default-5444994796-rzhxp\" (UID: \"cf512bc0-beb2-4784-ac61-7f7a22ccc3e9\") " pod="openshift-ingress/router-default-5444994796-rzhxp" Mar 10 18:51:41 crc kubenswrapper[4861]: I0310 18:51:41.272545 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/191bb018-6a2d-44a5-adf6-121673d85eb7-webhook-cert\") pod \"packageserver-d55dfcdfc-nmc5h\" (UID: \"191bb018-6a2d-44a5-adf6-121673d85eb7\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-nmc5h" Mar 10 18:51:41 crc kubenswrapper[4861]: I0310 18:51:41.272597 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d4f453f4-2d5c-408c-8a19-74d979cd78c8-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-9dcxh\" (UID: \"d4f453f4-2d5c-408c-8a19-74d979cd78c8\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-9dcxh" Mar 10 18:51:41 crc 
kubenswrapper[4861]: I0310 18:51:41.272614 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/f8c89f84-e0e2-4259-8245-1410699e2b6c-metrics-tls\") pod \"dns-default-kvtz7\" (UID: \"f8c89f84-e0e2-4259-8245-1410699e2b6c\") " pod="openshift-dns/dns-default-kvtz7" Mar 10 18:51:41 crc kubenswrapper[4861]: I0310 18:51:41.272632 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cf512bc0-beb2-4784-ac61-7f7a22ccc3e9-service-ca-bundle\") pod \"router-default-5444994796-rzhxp\" (UID: \"cf512bc0-beb2-4784-ac61-7f7a22ccc3e9\") " pod="openshift-ingress/router-default-5444994796-rzhxp" Mar 10 18:51:41 crc kubenswrapper[4861]: I0310 18:51:41.272649 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/cf512bc0-beb2-4784-ac61-7f7a22ccc3e9-stats-auth\") pod \"router-default-5444994796-rzhxp\" (UID: \"cf512bc0-beb2-4784-ac61-7f7a22ccc3e9\") " pod="openshift-ingress/router-default-5444994796-rzhxp" Mar 10 18:51:41 crc kubenswrapper[4861]: I0310 18:51:41.272664 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/6922c8a2-7528-42fa-8df7-963b652296f8-cert\") pod \"ingress-canary-jbmn2\" (UID: \"6922c8a2-7528-42fa-8df7-963b652296f8\") " pod="openshift-ingress-canary/ingress-canary-jbmn2" Mar 10 18:51:41 crc kubenswrapper[4861]: I0310 18:51:41.272683 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/32e38d22-7f4c-4951-8eb2-befefca67916-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-525cw\" (UID: \"32e38d22-7f4c-4951-8eb2-befefca67916\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-525cw" Mar 10 18:51:41 crc kubenswrapper[4861]: I0310 
18:51:41.272744 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/b6781d90-ea76-44b0-b2eb-44641332f632-plugins-dir\") pod \"csi-hostpathplugin-dk9rs\" (UID: \"b6781d90-ea76-44b0-b2eb-44641332f632\") " pod="hostpath-provisioner/csi-hostpathplugin-dk9rs" Mar 10 18:51:41 crc kubenswrapper[4861]: I0310 18:51:41.272782 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/d82c061b-6eea-49c1-8017-c401d3bd0f58-proxy-tls\") pod \"machine-config-operator-74547568cd-zgxhw\" (UID: \"d82c061b-6eea-49c1-8017-c401d3bd0f58\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-zgxhw" Mar 10 18:51:41 crc kubenswrapper[4861]: I0310 18:51:41.272798 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/d82c061b-6eea-49c1-8017-c401d3bd0f58-images\") pod \"machine-config-operator-74547568cd-zgxhw\" (UID: \"d82c061b-6eea-49c1-8017-c401d3bd0f58\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-zgxhw" Mar 10 18:51:41 crc kubenswrapper[4861]: I0310 18:51:41.272827 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2t7bj\" (UniqueName: \"kubernetes.io/projected/d5401d63-54cf-4958-a402-125bbf5793f1-kube-api-access-2t7bj\") pod \"migrator-59844c95c7-9g768\" (UID: \"d5401d63-54cf-4958-a402-125bbf5793f1\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-9g768" Mar 10 18:51:41 crc kubenswrapper[4861]: I0310 18:51:41.272846 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5x9pz\" (UniqueName: \"kubernetes.io/projected/19e950a0-6f71-44af-b995-f1ef1be6edbb-kube-api-access-5x9pz\") pod \"auto-csr-approver-29552810-wv88x\" (UID: \"19e950a0-6f71-44af-b995-f1ef1be6edbb\") " 
pod="openshift-infra/auto-csr-approver-29552810-wv88x" Mar 10 18:51:41 crc kubenswrapper[4861]: I0310 18:51:41.272868 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/562c8783-06a5-4205-ab38-9aabc25c1033-config\") pod \"kube-apiserver-operator-766d6c64bb-x2llm\" (UID: \"562c8783-06a5-4205-ab38-9aabc25c1033\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-x2llm" Mar 10 18:51:41 crc kubenswrapper[4861]: I0310 18:51:41.272887 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qvxj6\" (UniqueName: \"kubernetes.io/projected/8835b95e-842f-4c2f-ba78-970b2ef6749c-kube-api-access-qvxj6\") pod \"service-ca-9c57cc56f-t52tz\" (UID: \"8835b95e-842f-4c2f-ba78-970b2ef6749c\") " pod="openshift-service-ca/service-ca-9c57cc56f-t52tz" Mar 10 18:51:41 crc kubenswrapper[4861]: I0310 18:51:41.272916 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-247md\" (UniqueName: \"kubernetes.io/projected/b6781d90-ea76-44b0-b2eb-44641332f632-kube-api-access-247md\") pod \"csi-hostpathplugin-dk9rs\" (UID: \"b6781d90-ea76-44b0-b2eb-44641332f632\") " pod="hostpath-provisioner/csi-hostpathplugin-dk9rs" Mar 10 18:51:41 crc kubenswrapper[4861]: I0310 18:51:41.272932 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gthgc\" (UniqueName: \"kubernetes.io/projected/f8c89f84-e0e2-4259-8245-1410699e2b6c-kube-api-access-gthgc\") pod \"dns-default-kvtz7\" (UID: \"f8c89f84-e0e2-4259-8245-1410699e2b6c\") " pod="openshift-dns/dns-default-kvtz7" Mar 10 18:51:41 crc kubenswrapper[4861]: I0310 18:51:41.272950 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hw87j\" (UniqueName: \"kubernetes.io/projected/592a6d7a-6cf4-4a23-bc7d-2444b6387faa-kube-api-access-hw87j\") pod \"package-server-manager-789f6589d5-9x9tv\" 
(UID: \"592a6d7a-6cf4-4a23-bc7d-2444b6387faa\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-9x9tv" Mar 10 18:51:41 crc kubenswrapper[4861]: I0310 18:51:41.272966 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/f7d1fb37-29a9-4548-9054-daa43301c56d-certs\") pod \"machine-config-server-md6zd\" (UID: \"f7d1fb37-29a9-4548-9054-daa43301c56d\") " pod="openshift-machine-config-operator/machine-config-server-md6zd" Mar 10 18:51:41 crc kubenswrapper[4861]: I0310 18:51:41.272987 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/b6781d90-ea76-44b0-b2eb-44641332f632-socket-dir\") pod \"csi-hostpathplugin-dk9rs\" (UID: \"b6781d90-ea76-44b0-b2eb-44641332f632\") " pod="hostpath-provisioner/csi-hostpathplugin-dk9rs" Mar 10 18:51:41 crc kubenswrapper[4861]: I0310 18:51:41.273002 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/8f1b1590-e261-4e1f-9427-039f5a9b3db7-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-7qnc2\" (UID: \"8f1b1590-e261-4e1f-9427-039f5a9b3db7\") " pod="openshift-marketplace/marketplace-operator-79b997595-7qnc2" Mar 10 18:51:41 crc kubenswrapper[4861]: I0310 18:51:41.273022 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-txlkm\" (UniqueName: \"kubernetes.io/projected/b0b1942c-6cca-4d7f-8567-e5d340ee265f-kube-api-access-txlkm\") pod \"olm-operator-6b444d44fb-2m858\" (UID: \"b0b1942c-6cca-4d7f-8567-e5d340ee265f\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-2m858" Mar 10 18:51:41 crc kubenswrapper[4861]: I0310 18:51:41.273039 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jzlgq\" (UniqueName: 
\"kubernetes.io/projected/8f1b1590-e261-4e1f-9427-039f5a9b3db7-kube-api-access-jzlgq\") pod \"marketplace-operator-79b997595-7qnc2\" (UID: \"8f1b1590-e261-4e1f-9427-039f5a9b3db7\") " pod="openshift-marketplace/marketplace-operator-79b997595-7qnc2" Mar 10 18:51:41 crc kubenswrapper[4861]: I0310 18:51:41.273055 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/f6f89a90-023b-4b69-8848-702a50ab522c-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-456qk\" (UID: \"f6f89a90-023b-4b69-8848-702a50ab522c\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-456qk" Mar 10 18:51:41 crc kubenswrapper[4861]: I0310 18:51:41.273070 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/07ef9fc2-e757-48ca-b8aa-c97224434044-profile-collector-cert\") pod \"catalog-operator-68c6474976-9khgp\" (UID: \"07ef9fc2-e757-48ca-b8aa-c97224434044\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-9khgp" Mar 10 18:51:41 crc kubenswrapper[4861]: I0310 18:51:41.273088 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/b6781d90-ea76-44b0-b2eb-44641332f632-csi-data-dir\") pod \"csi-hostpathplugin-dk9rs\" (UID: \"b6781d90-ea76-44b0-b2eb-44641332f632\") " pod="hostpath-provisioner/csi-hostpathplugin-dk9rs" Mar 10 18:51:41 crc kubenswrapper[4861]: I0310 18:51:41.273107 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rwdjq\" (UniqueName: \"kubernetes.io/projected/e835c42f-7b8a-45a3-a153-ffab9b5386e0-kube-api-access-rwdjq\") pod \"collect-profiles-29552805-vbm4k\" (UID: \"e835c42f-7b8a-45a3-a153-ffab9b5386e0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29552805-vbm4k" Mar 10 18:51:41 crc kubenswrapper[4861]: I0310 18:51:41.273125 
4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/b6781d90-ea76-44b0-b2eb-44641332f632-registration-dir\") pod \"csi-hostpathplugin-dk9rs\" (UID: \"b6781d90-ea76-44b0-b2eb-44641332f632\") " pod="hostpath-provisioner/csi-hostpathplugin-dk9rs" Mar 10 18:51:41 crc kubenswrapper[4861]: I0310 18:51:41.273143 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wd92b\" (UniqueName: \"kubernetes.io/projected/956b8171-6837-47bf-9f44-7f9cc43a1e30-kube-api-access-wd92b\") pod \"control-plane-machine-set-operator-78cbb6b69f-dwxk9\" (UID: \"956b8171-6837-47bf-9f44-7f9cc43a1e30\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-dwxk9" Mar 10 18:51:41 crc kubenswrapper[4861]: I0310 18:51:41.274025 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b0b1942c-6cca-4d7f-8567-e5d340ee265f-profile-collector-cert\") pod \"olm-operator-6b444d44fb-2m858\" (UID: \"b0b1942c-6cca-4d7f-8567-e5d340ee265f\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-2m858" Mar 10 18:51:41 crc kubenswrapper[4861]: I0310 18:51:41.274309 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f1b1590-e261-4e1f-9427-039f5a9b3db7-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-7qnc2\" (UID: \"8f1b1590-e261-4e1f-9427-039f5a9b3db7\") " pod="openshift-marketplace/marketplace-operator-79b997595-7qnc2" Mar 10 18:51:41 crc kubenswrapper[4861]: E0310 18:51:41.274673 4861 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-03-10 18:51:41.774664741 +0000 UTC m=+245.538100701 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j4q94" (UID: "1c9f5f8c-64b8-4f10-999c-9cb2f24efef3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 10 18:51:41 crc kubenswrapper[4861]: I0310 18:51:41.274977 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/592a6d7a-6cf4-4a23-bc7d-2444b6387faa-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-9x9tv\" (UID: \"592a6d7a-6cf4-4a23-bc7d-2444b6387faa\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-9x9tv" Mar 10 18:51:41 crc kubenswrapper[4861]: I0310 18:51:41.275558 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/d82c061b-6eea-49c1-8017-c401d3bd0f58-auth-proxy-config\") pod \"machine-config-operator-74547568cd-zgxhw\" (UID: \"d82c061b-6eea-49c1-8017-c401d3bd0f58\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-zgxhw" Mar 10 18:51:41 crc kubenswrapper[4861]: I0310 18:51:41.276055 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d4f453f4-2d5c-408c-8a19-74d979cd78c8-config\") pod \"kube-controller-manager-operator-78b949d7b-9dcxh\" (UID: \"d4f453f4-2d5c-408c-8a19-74d979cd78c8\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-9dcxh" Mar 10 18:51:41 crc kubenswrapper[4861]: I0310 18:51:41.279497 4861 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0e1e95be-62b6-4ab4-b526-f9482e74ed23-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-qm6j4\" (UID: \"0e1e95be-62b6-4ab4-b526-f9482e74ed23\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-qm6j4" Mar 10 18:51:41 crc kubenswrapper[4861]: I0310 18:51:41.282317 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/f01993c1-5b0e-4dd2-a98d-8685e13b474c-proxy-tls\") pod \"machine-config-controller-84d6567774-8wcvl\" (UID: \"f01993c1-5b0e-4dd2-a98d-8685e13b474c\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-8wcvl" Mar 10 18:51:41 crc kubenswrapper[4861]: I0310 18:51:41.283973 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/562c8783-06a5-4205-ab38-9aabc25c1033-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-x2llm\" (UID: \"562c8783-06a5-4205-ab38-9aabc25c1033\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-x2llm" Mar 10 18:51:41 crc kubenswrapper[4861]: I0310 18:51:41.284086 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/b6781d90-ea76-44b0-b2eb-44641332f632-plugins-dir\") pod \"csi-hostpathplugin-dk9rs\" (UID: \"b6781d90-ea76-44b0-b2eb-44641332f632\") " pod="hostpath-provisioner/csi-hostpathplugin-dk9rs" Mar 10 18:51:41 crc kubenswrapper[4861]: I0310 18:51:41.284702 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cf512bc0-beb2-4784-ac61-7f7a22ccc3e9-service-ca-bundle\") pod \"router-default-5444994796-rzhxp\" (UID: \"cf512bc0-beb2-4784-ac61-7f7a22ccc3e9\") " 
pod="openshift-ingress/router-default-5444994796-rzhxp" Mar 10 18:51:41 crc kubenswrapper[4861]: I0310 18:51:41.285091 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/b6781d90-ea76-44b0-b2eb-44641332f632-socket-dir\") pod \"csi-hostpathplugin-dk9rs\" (UID: \"b6781d90-ea76-44b0-b2eb-44641332f632\") " pod="hostpath-provisioner/csi-hostpathplugin-dk9rs" Mar 10 18:51:41 crc kubenswrapper[4861]: I0310 18:51:41.285506 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/cf512bc0-beb2-4784-ac61-7f7a22ccc3e9-default-certificate\") pod \"router-default-5444994796-rzhxp\" (UID: \"cf512bc0-beb2-4784-ac61-7f7a22ccc3e9\") " pod="openshift-ingress/router-default-5444994796-rzhxp" Mar 10 18:51:41 crc kubenswrapper[4861]: I0310 18:51:41.285639 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/f7d1fb37-29a9-4548-9054-daa43301c56d-node-bootstrap-token\") pod \"machine-config-server-md6zd\" (UID: \"f7d1fb37-29a9-4548-9054-daa43301c56d\") " pod="openshift-machine-config-operator/machine-config-server-md6zd" Mar 10 18:51:41 crc kubenswrapper[4861]: I0310 18:51:41.286332 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/8835b95e-842f-4c2f-ba78-970b2ef6749c-signing-cabundle\") pod \"service-ca-9c57cc56f-t52tz\" (UID: \"8835b95e-842f-4c2f-ba78-970b2ef6749c\") " pod="openshift-service-ca/service-ca-9c57cc56f-t52tz" Mar 10 18:51:41 crc kubenswrapper[4861]: I0310 18:51:41.286533 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/b6781d90-ea76-44b0-b2eb-44641332f632-csi-data-dir\") pod \"csi-hostpathplugin-dk9rs\" (UID: \"b6781d90-ea76-44b0-b2eb-44641332f632\") " 
pod="hostpath-provisioner/csi-hostpathplugin-dk9rs" Mar 10 18:51:41 crc kubenswrapper[4861]: I0310 18:51:41.286619 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/b6781d90-ea76-44b0-b2eb-44641332f632-registration-dir\") pod \"csi-hostpathplugin-dk9rs\" (UID: \"b6781d90-ea76-44b0-b2eb-44641332f632\") " pod="hostpath-provisioner/csi-hostpathplugin-dk9rs" Mar 10 18:51:41 crc kubenswrapper[4861]: I0310 18:51:41.286622 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/191bb018-6a2d-44a5-adf6-121673d85eb7-tmpfs\") pod \"packageserver-d55dfcdfc-nmc5h\" (UID: \"191bb018-6a2d-44a5-adf6-121673d85eb7\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-nmc5h" Mar 10 18:51:41 crc kubenswrapper[4861]: I0310 18:51:41.286841 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/562c8783-06a5-4205-ab38-9aabc25c1033-config\") pod \"kube-apiserver-operator-766d6c64bb-x2llm\" (UID: \"562c8783-06a5-4205-ab38-9aabc25c1033\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-x2llm" Mar 10 18:51:41 crc kubenswrapper[4861]: I0310 18:51:41.286923 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b0b1942c-6cca-4d7f-8567-e5d340ee265f-srv-cert\") pod \"olm-operator-6b444d44fb-2m858\" (UID: \"b0b1942c-6cca-4d7f-8567-e5d340ee265f\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-2m858" Mar 10 18:51:41 crc kubenswrapper[4861]: I0310 18:51:41.287105 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/32e38d22-7f4c-4951-8eb2-befefca67916-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-525cw\" (UID: \"32e38d22-7f4c-4951-8eb2-befefca67916\") " 
pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-525cw" Mar 10 18:51:41 crc kubenswrapper[4861]: I0310 18:51:41.287107 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/f01993c1-5b0e-4dd2-a98d-8685e13b474c-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-8wcvl\" (UID: \"f01993c1-5b0e-4dd2-a98d-8685e13b474c\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-8wcvl" Mar 10 18:51:41 crc kubenswrapper[4861]: I0310 18:51:41.288082 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e835c42f-7b8a-45a3-a153-ffab9b5386e0-config-volume\") pod \"collect-profiles-29552805-vbm4k\" (UID: \"e835c42f-7b8a-45a3-a153-ffab9b5386e0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29552805-vbm4k" Mar 10 18:51:41 crc kubenswrapper[4861]: I0310 18:51:41.288168 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/32e38d22-7f4c-4951-8eb2-befefca67916-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-525cw\" (UID: \"32e38d22-7f4c-4951-8eb2-befefca67916\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-525cw" Mar 10 18:51:41 crc kubenswrapper[4861]: I0310 18:51:41.292624 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/191bb018-6a2d-44a5-adf6-121673d85eb7-apiservice-cert\") pod \"packageserver-d55dfcdfc-nmc5h\" (UID: \"191bb018-6a2d-44a5-adf6-121673d85eb7\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-nmc5h" Mar 10 18:51:41 crc kubenswrapper[4861]: I0310 18:51:41.295396 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: 
\"kubernetes.io/secret/07ef9fc2-e757-48ca-b8aa-c97224434044-profile-collector-cert\") pod \"catalog-operator-68c6474976-9khgp\" (UID: \"07ef9fc2-e757-48ca-b8aa-c97224434044\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-9khgp" Mar 10 18:51:41 crc kubenswrapper[4861]: I0310 18:51:41.300838 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/cf512bc0-beb2-4784-ac61-7f7a22ccc3e9-stats-auth\") pod \"router-default-5444994796-rzhxp\" (UID: \"cf512bc0-beb2-4784-ac61-7f7a22ccc3e9\") " pod="openshift-ingress/router-default-5444994796-rzhxp" Mar 10 18:51:41 crc kubenswrapper[4861]: I0310 18:51:41.301110 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0e1e95be-62b6-4ab4-b526-f9482e74ed23-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-qm6j4\" (UID: \"0e1e95be-62b6-4ab4-b526-f9482e74ed23\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-qm6j4" Mar 10 18:51:41 crc kubenswrapper[4861]: I0310 18:51:41.301182 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/d82c061b-6eea-49c1-8017-c401d3bd0f58-proxy-tls\") pod \"machine-config-operator-74547568cd-zgxhw\" (UID: \"d82c061b-6eea-49c1-8017-c401d3bd0f58\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-zgxhw" Mar 10 18:51:41 crc kubenswrapper[4861]: I0310 18:51:41.304080 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-cqm4t"] Mar 10 18:51:41 crc kubenswrapper[4861]: I0310 18:51:41.304417 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/8835b95e-842f-4c2f-ba78-970b2ef6749c-signing-key\") pod \"service-ca-9c57cc56f-t52tz\" (UID: 
\"8835b95e-842f-4c2f-ba78-970b2ef6749c\") " pod="openshift-service-ca/service-ca-9c57cc56f-t52tz" Mar 10 18:51:41 crc kubenswrapper[4861]: I0310 18:51:41.305463 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/191bb018-6a2d-44a5-adf6-121673d85eb7-webhook-cert\") pod \"packageserver-d55dfcdfc-nmc5h\" (UID: \"191bb018-6a2d-44a5-adf6-121673d85eb7\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-nmc5h" Mar 10 18:51:41 crc kubenswrapper[4861]: I0310 18:51:41.305748 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/6922c8a2-7528-42fa-8df7-963b652296f8-cert\") pod \"ingress-canary-jbmn2\" (UID: \"6922c8a2-7528-42fa-8df7-963b652296f8\") " pod="openshift-ingress-canary/ingress-canary-jbmn2" Mar 10 18:51:41 crc kubenswrapper[4861]: I0310 18:51:41.306445 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/f8c89f84-e0e2-4259-8245-1410699e2b6c-metrics-tls\") pod \"dns-default-kvtz7\" (UID: \"f8c89f84-e0e2-4259-8245-1410699e2b6c\") " pod="openshift-dns/dns-default-kvtz7" Mar 10 18:51:41 crc kubenswrapper[4861]: I0310 18:51:41.307060 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ea181c37-7166-4db5-92b4-9321f06c0323-serving-cert\") pod \"service-ca-operator-777779d784-jbgkh\" (UID: \"ea181c37-7166-4db5-92b4-9321f06c0323\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-jbgkh" Mar 10 18:51:41 crc kubenswrapper[4861]: I0310 18:51:41.307147 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/f6f89a90-023b-4b69-8848-702a50ab522c-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-456qk\" (UID: \"f6f89a90-023b-4b69-8848-702a50ab522c\") " 
pod="openshift-multus/multus-admission-controller-857f4d67dd-456qk" Mar 10 18:51:41 crc kubenswrapper[4861]: I0310 18:51:41.307483 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e835c42f-7b8a-45a3-a153-ffab9b5386e0-secret-volume\") pod \"collect-profiles-29552805-vbm4k\" (UID: \"e835c42f-7b8a-45a3-a153-ffab9b5386e0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29552805-vbm4k" Mar 10 18:51:41 crc kubenswrapper[4861]: I0310 18:51:41.307802 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f8c89f84-e0e2-4259-8245-1410699e2b6c-config-volume\") pod \"dns-default-kvtz7\" (UID: \"f8c89f84-e0e2-4259-8245-1410699e2b6c\") " pod="openshift-dns/dns-default-kvtz7" Mar 10 18:51:41 crc kubenswrapper[4861]: I0310 18:51:41.308259 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/8f1b1590-e261-4e1f-9427-039f5a9b3db7-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-7qnc2\" (UID: \"8f1b1590-e261-4e1f-9427-039f5a9b3db7\") " pod="openshift-marketplace/marketplace-operator-79b997595-7qnc2" Mar 10 18:51:41 crc kubenswrapper[4861]: I0310 18:51:41.308824 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/d82c061b-6eea-49c1-8017-c401d3bd0f58-images\") pod \"machine-config-operator-74547568cd-zgxhw\" (UID: \"d82c061b-6eea-49c1-8017-c401d3bd0f58\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-zgxhw" Mar 10 18:51:41 crc kubenswrapper[4861]: I0310 18:51:41.309114 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/f7d1fb37-29a9-4548-9054-daa43301c56d-certs\") pod \"machine-config-server-md6zd\" (UID: \"f7d1fb37-29a9-4548-9054-daa43301c56d\") " 
pod="openshift-machine-config-operator/machine-config-server-md6zd" Mar 10 18:51:41 crc kubenswrapper[4861]: I0310 18:51:41.310240 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/956b8171-6837-47bf-9f44-7f9cc43a1e30-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-dwxk9\" (UID: \"956b8171-6837-47bf-9f44-7f9cc43a1e30\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-dwxk9" Mar 10 18:51:41 crc kubenswrapper[4861]: I0310 18:51:41.312268 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d4f453f4-2d5c-408c-8a19-74d979cd78c8-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-9dcxh\" (UID: \"d4f453f4-2d5c-408c-8a19-74d979cd78c8\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-9dcxh" Mar 10 18:51:41 crc kubenswrapper[4861]: I0310 18:51:41.314568 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/07ef9fc2-e757-48ca-b8aa-c97224434044-srv-cert\") pod \"catalog-operator-68c6474976-9khgp\" (UID: \"07ef9fc2-e757-48ca-b8aa-c97224434044\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-9khgp" Mar 10 18:51:41 crc kubenswrapper[4861]: I0310 18:51:41.323564 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jlrp2\" (UniqueName: \"kubernetes.io/projected/ea181c37-7166-4db5-92b4-9321f06c0323-kube-api-access-jlrp2\") pod \"service-ca-operator-777779d784-jbgkh\" (UID: \"ea181c37-7166-4db5-92b4-9321f06c0323\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-jbgkh" Mar 10 18:51:41 crc kubenswrapper[4861]: I0310 18:51:41.342974 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" 
(UniqueName: \"kubernetes.io/projected/32e38d22-7f4c-4951-8eb2-befefca67916-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-525cw\" (UID: \"32e38d22-7f4c-4951-8eb2-befefca67916\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-525cw" Mar 10 18:51:41 crc kubenswrapper[4861]: I0310 18:51:41.343041 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-jbgkh" Mar 10 18:51:41 crc kubenswrapper[4861]: I0310 18:51:41.348646 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-gsv8s"] Mar 10 18:51:41 crc kubenswrapper[4861]: I0310 18:51:41.360250 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zjq2r\" (UniqueName: \"kubernetes.io/projected/f01993c1-5b0e-4dd2-a98d-8685e13b474c-kube-api-access-zjq2r\") pod \"machine-config-controller-84d6567774-8wcvl\" (UID: \"f01993c1-5b0e-4dd2-a98d-8685e13b474c\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-8wcvl" Mar 10 18:51:41 crc kubenswrapper[4861]: I0310 18:51:41.374231 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 10 18:51:41 crc kubenswrapper[4861]: E0310 18:51:41.374855 4861 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-10 18:51:41.874838185 +0000 UTC m=+245.638274145 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 10 18:51:41 crc kubenswrapper[4861]: I0310 18:51:41.403389 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wd92b\" (UniqueName: \"kubernetes.io/projected/956b8171-6837-47bf-9f44-7f9cc43a1e30-kube-api-access-wd92b\") pod \"control-plane-machine-set-operator-78cbb6b69f-dwxk9\" (UID: \"956b8171-6837-47bf-9f44-7f9cc43a1e30\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-dwxk9" Mar 10 18:51:41 crc kubenswrapper[4861]: I0310 18:51:41.405325 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/562c8783-06a5-4205-ab38-9aabc25c1033-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-x2llm\" (UID: \"562c8783-06a5-4205-ab38-9aabc25c1033\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-x2llm" Mar 10 18:51:41 crc kubenswrapper[4861]: I0310 18:51:41.416613 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-864wb"] Mar 10 18:51:41 crc kubenswrapper[4861]: I0310 18:51:41.435681 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7w7wl\" (UniqueName: \"kubernetes.io/projected/f7d1fb37-29a9-4548-9054-daa43301c56d-kube-api-access-7w7wl\") pod \"machine-config-server-md6zd\" (UID: \"f7d1fb37-29a9-4548-9054-daa43301c56d\") " pod="openshift-machine-config-operator/machine-config-server-md6zd" Mar 10 18:51:41 crc kubenswrapper[4861]: I0310 18:51:41.439795 4861 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rmm9c\" (UniqueName: \"kubernetes.io/projected/6922c8a2-7528-42fa-8df7-963b652296f8-kube-api-access-rmm9c\") pod \"ingress-canary-jbmn2\" (UID: \"6922c8a2-7528-42fa-8df7-963b652296f8\") " pod="openshift-ingress-canary/ingress-canary-jbmn2" Mar 10 18:51:41 crc kubenswrapper[4861]: I0310 18:51:41.453671 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-qm57d"] Mar 10 18:51:41 crc kubenswrapper[4861]: I0310 18:51:41.482414 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9zfl7\" (UniqueName: \"kubernetes.io/projected/d82c061b-6eea-49c1-8017-c401d3bd0f58-kube-api-access-9zfl7\") pod \"machine-config-operator-74547568cd-zgxhw\" (UID: \"d82c061b-6eea-49c1-8017-c401d3bd0f58\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-zgxhw" Mar 10 18:51:41 crc kubenswrapper[4861]: I0310 18:51:41.482520 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j4q94\" (UID: \"1c9f5f8c-64b8-4f10-999c-9cb2f24efef3\") " pod="openshift-image-registry/image-registry-697d97f7c8-j4q94" Mar 10 18:51:41 crc kubenswrapper[4861]: E0310 18:51:41.482948 4861 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-10 18:51:41.982937606 +0000 UTC m=+245.746373566 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j4q94" (UID: "1c9f5f8c-64b8-4f10-999c-9cb2f24efef3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 10 18:51:41 crc kubenswrapper[4861]: I0310 18:51:41.486968 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qvxj6\" (UniqueName: \"kubernetes.io/projected/8835b95e-842f-4c2f-ba78-970b2ef6749c-kube-api-access-qvxj6\") pod \"service-ca-9c57cc56f-t52tz\" (UID: \"8835b95e-842f-4c2f-ba78-970b2ef6749c\") " pod="openshift-service-ca/service-ca-9c57cc56f-t52tz" Mar 10 18:51:41 crc kubenswrapper[4861]: I0310 18:51:41.490448 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-b4s99"] Mar 10 18:51:41 crc kubenswrapper[4861]: I0310 18:51:41.490550 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-qsq94" Mar 10 18:51:41 crc kubenswrapper[4861]: I0310 18:51:41.505727 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-247md\" (UniqueName: \"kubernetes.io/projected/b6781d90-ea76-44b0-b2eb-44641332f632-kube-api-access-247md\") pod \"csi-hostpathplugin-dk9rs\" (UID: \"b6781d90-ea76-44b0-b2eb-44641332f632\") " pod="hostpath-provisioner/csi-hostpathplugin-dk9rs" Mar 10 18:51:41 crc kubenswrapper[4861]: I0310 18:51:41.518151 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-8wcvl" Mar 10 18:51:41 crc kubenswrapper[4861]: I0310 18:51:41.524054 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-525cw" Mar 10 18:51:41 crc kubenswrapper[4861]: I0310 18:51:41.531785 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gthgc\" (UniqueName: \"kubernetes.io/projected/f8c89f84-e0e2-4259-8245-1410699e2b6c-kube-api-access-gthgc\") pod \"dns-default-kvtz7\" (UID: \"f8c89f84-e0e2-4259-8245-1410699e2b6c\") " pod="openshift-dns/dns-default-kvtz7" Mar 10 18:51:41 crc kubenswrapper[4861]: I0310 18:51:41.540419 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hw87j\" (UniqueName: \"kubernetes.io/projected/592a6d7a-6cf4-4a23-bc7d-2444b6387faa-kube-api-access-hw87j\") pod \"package-server-manager-789f6589d5-9x9tv\" (UID: \"592a6d7a-6cf4-4a23-bc7d-2444b6387faa\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-9x9tv" Mar 10 18:51:41 crc kubenswrapper[4861]: W0310 18:51:41.557201 4861 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod754666ad_657a_4e98_8feb_47f74d33dc3c.slice/crio-0816d0187ef841060c96dd9b74dbe417a57e6b480e1f9e2e6c4834f61b5d66ec WatchSource:0}: Error finding container 0816d0187ef841060c96dd9b74dbe417a57e6b480e1f9e2e6c4834f61b5d66ec: Status 404 returned error can't find the container with id 0816d0187ef841060c96dd9b74dbe417a57e6b480e1f9e2e6c4834f61b5d66ec Mar 10 18:51:41 crc kubenswrapper[4861]: I0310 18:51:41.560098 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8v4j2\" (UniqueName: \"kubernetes.io/projected/191bb018-6a2d-44a5-adf6-121673d85eb7-kube-api-access-8v4j2\") pod \"packageserver-d55dfcdfc-nmc5h\" (UID: \"191bb018-6a2d-44a5-adf6-121673d85eb7\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-nmc5h" Mar 10 18:51:41 crc kubenswrapper[4861]: I0310 18:51:41.569382 4861 util.go:30] "No sandbox for pod 
can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-t52tz" Mar 10 18:51:41 crc kubenswrapper[4861]: I0310 18:51:41.582189 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-dwxk9" Mar 10 18:51:41 crc kubenswrapper[4861]: I0310 18:51:41.583268 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 10 18:51:41 crc kubenswrapper[4861]: E0310 18:51:41.583625 4861 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-10 18:51:42.083605748 +0000 UTC m=+245.847041708 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 10 18:51:41 crc kubenswrapper[4861]: I0310 18:51:41.583931 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d4f453f4-2d5c-408c-8a19-74d979cd78c8-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-9dcxh\" (UID: \"d4f453f4-2d5c-408c-8a19-74d979cd78c8\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-9dcxh" Mar 10 18:51:41 crc kubenswrapper[4861]: I0310 18:51:41.588868 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-x2llm" Mar 10 18:51:41 crc kubenswrapper[4861]: I0310 18:51:41.603597 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-g2fnd"] Mar 10 18:51:41 crc kubenswrapper[4861]: I0310 18:51:41.606179 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-9dcxh" Mar 10 18:51:41 crc kubenswrapper[4861]: I0310 18:51:41.629402 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5x9pz\" (UniqueName: \"kubernetes.io/projected/19e950a0-6f71-44af-b995-f1ef1be6edbb-kube-api-access-5x9pz\") pod \"auto-csr-approver-29552810-wv88x\" (UID: \"19e950a0-6f71-44af-b995-f1ef1be6edbb\") " pod="openshift-infra/auto-csr-approver-29552810-wv88x" Mar 10 18:51:41 crc kubenswrapper[4861]: I0310 18:51:41.631272 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ns55x\" (UniqueName: \"kubernetes.io/projected/f6f89a90-023b-4b69-8848-702a50ab522c-kube-api-access-ns55x\") pod \"multus-admission-controller-857f4d67dd-456qk\" (UID: \"f6f89a90-023b-4b69-8848-702a50ab522c\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-456qk" Mar 10 18:51:41 crc kubenswrapper[4861]: I0310 18:51:41.640281 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-txlkm\" (UniqueName: \"kubernetes.io/projected/b0b1942c-6cca-4d7f-8567-e5d340ee265f-kube-api-access-txlkm\") pod \"olm-operator-6b444d44fb-2m858\" (UID: \"b0b1942c-6cca-4d7f-8567-e5d340ee265f\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-2m858" Mar 10 18:51:41 crc kubenswrapper[4861]: I0310 18:51:41.648085 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-zgxhw" Mar 10 18:51:41 crc kubenswrapper[4861]: I0310 18:51:41.659596 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-9x9tv" Mar 10 18:51:41 crc kubenswrapper[4861]: I0310 18:51:41.669407 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jzlgq\" (UniqueName: \"kubernetes.io/projected/8f1b1590-e261-4e1f-9427-039f5a9b3db7-kube-api-access-jzlgq\") pod \"marketplace-operator-79b997595-7qnc2\" (UID: \"8f1b1590-e261-4e1f-9427-039f5a9b3db7\") " pod="openshift-marketplace/marketplace-operator-79b997595-7qnc2" Mar 10 18:51:41 crc kubenswrapper[4861]: I0310 18:51:41.671859 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-456qk" Mar 10 18:51:41 crc kubenswrapper[4861]: I0310 18:51:41.677666 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-jbxs4"] Mar 10 18:51:41 crc kubenswrapper[4861]: I0310 18:51:41.677943 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-nmc5h" Mar 10 18:51:41 crc kubenswrapper[4861]: I0310 18:51:41.684630 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j4q94\" (UID: \"1c9f5f8c-64b8-4f10-999c-9cb2f24efef3\") " pod="openshift-image-registry/image-registry-697d97f7c8-j4q94" Mar 10 18:51:41 crc kubenswrapper[4861]: E0310 18:51:41.685238 4861 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-10 18:51:42.185225165 +0000 UTC m=+245.948661125 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j4q94" (UID: "1c9f5f8c-64b8-4f10-999c-9cb2f24efef3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 10 18:51:41 crc kubenswrapper[4861]: I0310 18:51:41.685635 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-jbmn2" Mar 10 18:51:41 crc kubenswrapper[4861]: I0310 18:51:41.686687 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rwdjq\" (UniqueName: \"kubernetes.io/projected/e835c42f-7b8a-45a3-a153-ffab9b5386e0-kube-api-access-rwdjq\") pod \"collect-profiles-29552805-vbm4k\" (UID: \"e835c42f-7b8a-45a3-a153-ffab9b5386e0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29552805-vbm4k" Mar 10 18:51:41 crc kubenswrapper[4861]: I0310 18:51:41.693187 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-kvtz7" Mar 10 18:51:41 crc kubenswrapper[4861]: I0310 18:51:41.702687 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4k6v2\" (UniqueName: \"kubernetes.io/projected/07ef9fc2-e757-48ca-b8aa-c97224434044-kube-api-access-4k6v2\") pod \"catalog-operator-68c6474976-9khgp\" (UID: \"07ef9fc2-e757-48ca-b8aa-c97224434044\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-9khgp" Mar 10 18:51:41 crc kubenswrapper[4861]: I0310 18:51:41.702983 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-dk9rs" Mar 10 18:51:41 crc kubenswrapper[4861]: I0310 18:51:41.719091 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-md6zd" Mar 10 18:51:41 crc kubenswrapper[4861]: I0310 18:51:41.723748 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2t7bj\" (UniqueName: \"kubernetes.io/projected/d5401d63-54cf-4958-a402-125bbf5793f1-kube-api-access-2t7bj\") pod \"migrator-59844c95c7-9g768\" (UID: \"d5401d63-54cf-4958-a402-125bbf5793f1\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-9g768" Mar 10 18:51:41 crc kubenswrapper[4861]: I0310 18:51:41.738688 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v98pk\" (UniqueName: \"kubernetes.io/projected/cf512bc0-beb2-4784-ac61-7f7a22ccc3e9-kube-api-access-v98pk\") pod \"router-default-5444994796-rzhxp\" (UID: \"cf512bc0-beb2-4784-ac61-7f7a22ccc3e9\") " pod="openshift-ingress/router-default-5444994796-rzhxp" Mar 10 18:51:41 crc kubenswrapper[4861]: I0310 18:51:41.760781 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-stnf9\" (UniqueName: \"kubernetes.io/projected/0e1e95be-62b6-4ab4-b526-f9482e74ed23-kube-api-access-stnf9\") pod \"kube-storage-version-migrator-operator-b67b599dd-qm6j4\" (UID: \"0e1e95be-62b6-4ab4-b526-f9482e74ed23\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-qm6j4" Mar 10 18:51:41 crc kubenswrapper[4861]: I0310 18:51:41.785315 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 10 18:51:41 crc kubenswrapper[4861]: E0310 18:51:41.785641 4861 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-10 18:51:42.285610453 +0000 UTC m=+246.049046423 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 10 18:51:41 crc kubenswrapper[4861]: I0310 18:51:41.785935 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j4q94\" (UID: \"1c9f5f8c-64b8-4f10-999c-9cb2f24efef3\") " pod="openshift-image-registry/image-registry-697d97f7c8-j4q94" Mar 10 18:51:41 crc kubenswrapper[4861]: E0310 18:51:41.786327 4861 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-10 18:51:42.286314224 +0000 UTC m=+246.049750184 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j4q94" (UID: "1c9f5f8c-64b8-4f10-999c-9cb2f24efef3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 10 18:51:41 crc kubenswrapper[4861]: I0310 18:51:41.818264 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-rzhxp" Mar 10 18:51:41 crc kubenswrapper[4861]: I0310 18:51:41.821915 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-jbgkh"] Mar 10 18:51:41 crc kubenswrapper[4861]: I0310 18:51:41.865109 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-7qnc2" Mar 10 18:51:41 crc kubenswrapper[4861]: I0310 18:51:41.867059 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-zg9g7"] Mar 10 18:51:41 crc kubenswrapper[4861]: I0310 18:51:41.878289 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29552805-vbm4k" Mar 10 18:51:41 crc kubenswrapper[4861]: I0310 18:51:41.887082 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-mb9ln"] Mar 10 18:51:41 crc kubenswrapper[4861]: I0310 18:51:41.893572 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 10 18:51:41 crc kubenswrapper[4861]: E0310 18:51:41.894020 4861 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-10 18:51:42.393985729 +0000 UTC m=+246.157421689 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 10 18:51:41 crc kubenswrapper[4861]: I0310 18:51:41.896582 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-9g768" Mar 10 18:51:41 crc kubenswrapper[4861]: I0310 18:51:41.918076 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29552810-wv88x" Mar 10 18:51:41 crc kubenswrapper[4861]: I0310 18:51:41.925616 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-9khgp" Mar 10 18:51:41 crc kubenswrapper[4861]: I0310 18:51:41.930260 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-mdswb"] Mar 10 18:51:41 crc kubenswrapper[4861]: I0310 18:51:41.932331 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-qsq94"] Mar 10 18:51:41 crc kubenswrapper[4861]: I0310 18:51:41.933798 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-2m858" Mar 10 18:51:41 crc kubenswrapper[4861]: I0310 18:51:41.962405 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-qm6j4" Mar 10 18:51:41 crc kubenswrapper[4861]: I0310 18:51:41.995081 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j4q94\" (UID: \"1c9f5f8c-64b8-4f10-999c-9cb2f24efef3\") " pod="openshift-image-registry/image-registry-697d97f7c8-j4q94" Mar 10 18:51:41 crc kubenswrapper[4861]: E0310 18:51:41.995470 4861 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-10 18:51:42.495439054 +0000 UTC m=+246.258875014 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j4q94" (UID: "1c9f5f8c-64b8-4f10-999c-9cb2f24efef3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 10 18:51:42 crc kubenswrapper[4861]: I0310 18:51:42.029788 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-525cw"] Mar 10 18:51:42 crc kubenswrapper[4861]: I0310 18:51:42.057832 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-dwxk9"] Mar 10 18:51:42 crc kubenswrapper[4861]: I0310 18:51:42.097215 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 10 18:51:42 crc kubenswrapper[4861]: E0310 18:51:42.097546 4861 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-10 18:51:42.597531588 +0000 UTC m=+246.360967548 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 10 18:51:42 crc kubenswrapper[4861]: I0310 18:51:42.124691 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-8wcvl"] Mar 10 18:51:42 crc kubenswrapper[4861]: I0310 18:51:42.172571 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-864wb" event={"ID":"ccb02870-5d18-43c9-950d-042c52c092c3","Type":"ContainerStarted","Data":"facbc50c32118d37900f8b4f740a799c217914f3552f90d4754de39215845681"} Mar 10 18:51:42 crc kubenswrapper[4861]: I0310 18:51:42.180901 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-cffh2" event={"ID":"9a07572a-a5d5-41f6-9b62-5d608b529342","Type":"ContainerStarted","Data":"e950aa88cce75d027b20ec401367de68292d86ade5e1034f906e89d1524d3898"} Mar 10 18:51:42 crc kubenswrapper[4861]: I0310 18:51:42.180945 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-cffh2" event={"ID":"9a07572a-a5d5-41f6-9b62-5d608b529342","Type":"ContainerStarted","Data":"7cea7966e02ced6decff3850718d00a7a954bc932489caf1924a905a3cf01996"} Mar 10 18:51:42 crc kubenswrapper[4861]: I0310 18:51:42.185333 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-qm57d" 
event={"ID":"9896da3e-4505-4ced-b1e7-cd47a951971e","Type":"ContainerStarted","Data":"c8782be5a98badef8f8069bb1fd3d7bebd08c7a7ab1a3a07acff205e32c047d7"} Mar 10 18:51:42 crc kubenswrapper[4861]: I0310 18:51:42.185375 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-qm57d" event={"ID":"9896da3e-4505-4ced-b1e7-cd47a951971e","Type":"ContainerStarted","Data":"73662ea43c248dae4b4a08247d49c89b349e189032968b75f059e9140e2e3df5"} Mar 10 18:51:42 crc kubenswrapper[4861]: I0310 18:51:42.186242 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-qm57d" Mar 10 18:51:42 crc kubenswrapper[4861]: I0310 18:51:42.195479 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-dwxk9" event={"ID":"956b8171-6837-47bf-9f44-7f9cc43a1e30","Type":"ContainerStarted","Data":"bbe5938538adc2aad67e0fa5d724ce88042bab8e0e78c3125adbc162aa4437f8"} Mar 10 18:51:42 crc kubenswrapper[4861]: I0310 18:51:42.197811 4861 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-qm57d container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.15:8443/healthz\": dial tcp 10.217.0.15:8443: connect: connection refused" start-of-body= Mar 10 18:51:42 crc kubenswrapper[4861]: I0310 18:51:42.197872 4861 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-qm57d" podUID="9896da3e-4505-4ced-b1e7-cd47a951971e" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.15:8443/healthz\": dial tcp 10.217.0.15:8443: connect: connection refused" Mar 10 18:51:42 crc kubenswrapper[4861]: I0310 18:51:42.199685 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-console-operator/console-operator-58897d9998-gsv8s" event={"ID":"c58d61da-f37b-4616-9289-adebfd1890ae","Type":"ContainerStarted","Data":"ed838d7167fce3a2f6db95d16f202c23d732d5e758f7d69607770408ed0266a4"} Mar 10 18:51:42 crc kubenswrapper[4861]: I0310 18:51:42.199753 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-gsv8s" event={"ID":"c58d61da-f37b-4616-9289-adebfd1890ae","Type":"ContainerStarted","Data":"92615aac6734d8d39bae7c9784f529f3e813f8649bbb2e8b5dba15edddce118c"} Mar 10 18:51:42 crc kubenswrapper[4861]: I0310 18:51:42.200141 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-gsv8s" Mar 10 18:51:42 crc kubenswrapper[4861]: I0310 18:51:42.206232 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-b4s99" event={"ID":"754666ad-657a-4e98-8feb-47f74d33dc3c","Type":"ContainerStarted","Data":"0816d0187ef841060c96dd9b74dbe417a57e6b480e1f9e2e6c4834f61b5d66ec"} Mar 10 18:51:42 crc kubenswrapper[4861]: I0310 18:51:42.208492 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j4q94\" (UID: \"1c9f5f8c-64b8-4f10-999c-9cb2f24efef3\") " pod="openshift-image-registry/image-registry-697d97f7c8-j4q94" Mar 10 18:51:42 crc kubenswrapper[4861]: E0310 18:51:42.209011 4861 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-10 18:51:42.708993583 +0000 UTC m=+246.472429543 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j4q94" (UID: "1c9f5f8c-64b8-4f10-999c-9cb2f24efef3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 10 18:51:42 crc kubenswrapper[4861]: I0310 18:51:42.211185 4861 patch_prober.go:28] interesting pod/console-operator-58897d9998-gsv8s container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.10:8443/readyz\": dial tcp 10.217.0.10:8443: connect: connection refused" start-of-body= Mar 10 18:51:42 crc kubenswrapper[4861]: I0310 18:51:42.211217 4861 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-gsv8s" podUID="c58d61da-f37b-4616-9289-adebfd1890ae" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.10:8443/readyz\": dial tcp 10.217.0.10:8443: connect: connection refused" Mar 10 18:51:42 crc kubenswrapper[4861]: I0310 18:51:42.214682 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-g2fnd" event={"ID":"81f400f6-9912-4f4a-9d98-877c303a60ec","Type":"ContainerStarted","Data":"364e5d4b24fd3c7c990ea51060a6dcce6a4f25b48d7070605afd4a088913faac"} Mar 10 18:51:42 crc kubenswrapper[4861]: I0310 18:51:42.224681 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-zg9g7" event={"ID":"3db8cb04-007c-48f9-a986-7a503ca1c077","Type":"ContainerStarted","Data":"0dd392ccb57f2021dad9ccfdeb0473b3525c0082ab5f36fae8bf1056dd441631"} Mar 10 18:51:42 crc kubenswrapper[4861]: W0310 18:51:42.231024 4861 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf01993c1_5b0e_4dd2_a98d_8685e13b474c.slice/crio-430ec32c11de98a9dca186f57488b4ad6da1aac22e2a31633f2d3b9a1c3f30d7 WatchSource:0}: Error finding container 430ec32c11de98a9dca186f57488b4ad6da1aac22e2a31633f2d3b9a1c3f30d7: Status 404 returned error can't find the container with id 430ec32c11de98a9dca186f57488b4ad6da1aac22e2a31633f2d3b9a1c3f30d7 Mar 10 18:51:42 crc kubenswrapper[4861]: I0310 18:51:42.231688 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-cqm4t" event={"ID":"d4ea6578-d8bd-4dbc-baf4-4e6210419574","Type":"ContainerStarted","Data":"b822dea1bdf23f591e406c9d1871b1a3df922f832f04cf8ca21d40f37b8463bb"} Mar 10 18:51:42 crc kubenswrapper[4861]: I0310 18:51:42.231761 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-cqm4t" event={"ID":"d4ea6578-d8bd-4dbc-baf4-4e6210419574","Type":"ContainerStarted","Data":"e563a9ff45346491b4c66822ae1d949c75b59fb52df92ce0c0fea6e80bac3f69"} Mar 10 18:51:42 crc kubenswrapper[4861]: I0310 18:51:42.236472 4861 generic.go:334] "Generic (PLEG): container finished" podID="f5cbf703-ecdc-42d3-b313-f69ec71399fa" containerID="f03ead3c7c33a49002ca98f50b1324d5ff2e41dadde6e97467189bd3b254678f" exitCode=0 Mar 10 18:51:42 crc kubenswrapper[4861]: I0310 18:51:42.236587 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-4d9gw" event={"ID":"f5cbf703-ecdc-42d3-b313-f69ec71399fa","Type":"ContainerDied","Data":"f03ead3c7c33a49002ca98f50b1324d5ff2e41dadde6e97467189bd3b254678f"} Mar 10 18:51:42 crc kubenswrapper[4861]: I0310 18:51:42.236673 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-4d9gw" event={"ID":"f5cbf703-ecdc-42d3-b313-f69ec71399fa","Type":"ContainerStarted","Data":"ff614231a220474a0127c90e8567b73d3830bae8df7250cf8e592e06789db0f7"} Mar 10 18:51:42 crc 
kubenswrapper[4861]: I0310 18:51:42.239754 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-gwxrl" event={"ID":"dd17f745-4fce-4459-a068-94313e612723","Type":"ContainerStarted","Data":"2cb3bdf0ab4b730baf1cad0f22fe169a892fba439e16a3b2098e1231407e6e5d"} Mar 10 18:51:42 crc kubenswrapper[4861]: I0310 18:51:42.239938 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-gwxrl" event={"ID":"dd17f745-4fce-4459-a068-94313e612723","Type":"ContainerStarted","Data":"3283d28ff2109a33118cb237a871956f81b7ee5f6af5f477166eb770f486d8bd"} Mar 10 18:51:42 crc kubenswrapper[4861]: I0310 18:51:42.244897 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-jbxs4" event={"ID":"a8e67548-adc0-4c5c-8115-efff12bad9ae","Type":"ContainerStarted","Data":"cb7dcd7bb34e3e005d3362fc7034a738a1d0415deb9f2647c07f5761038860e1"} Mar 10 18:51:42 crc kubenswrapper[4861]: I0310 18:51:42.248285 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-rqlh4" event={"ID":"78f966e8-d4fe-4a97-a04e-9bf8b1b62d6c","Type":"ContainerStarted","Data":"3f12f5591b124ba552380925741ba11212a4e64e464cb2133872aecda1e337ce"} Mar 10 18:51:42 crc kubenswrapper[4861]: I0310 18:51:42.248320 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-rqlh4" event={"ID":"78f966e8-d4fe-4a97-a04e-9bf8b1b62d6c","Type":"ContainerStarted","Data":"763c5dbed57b92c627ea6cf72517070d8409fd42720527a6c163fb225419cae5"} Mar 10 18:51:42 crc kubenswrapper[4861]: I0310 18:51:42.297834 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-qsq94" event={"ID":"044ff649-6d8e-4b0b-bfaa-8018e00e105d","Type":"ContainerStarted","Data":"07c1e7c20604a3dfe03ccd835df96758f42bcf470c8dd5f8a7cb30f5a370dda7"} Mar 10 18:51:42 crc 
kubenswrapper[4861]: I0310 18:51:42.304606 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-mb9ln" event={"ID":"ed642e8b-2ae6-4db9-a035-0a568c32c47b","Type":"ContainerStarted","Data":"690efe04f556509a89b4ac6422bebb4686905bd4a08cc581501bed6aea0e7c0c"} Mar 10 18:51:42 crc kubenswrapper[4861]: I0310 18:51:42.307224 4861 generic.go:334] "Generic (PLEG): container finished" podID="48290e46-c619-4673-86b7-bf96f136d693" containerID="71e7f1152967fa7999f248897cf37267802632c3308604cbbc05f30d9e34832d" exitCode=0 Mar 10 18:51:42 crc kubenswrapper[4861]: I0310 18:51:42.308870 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-5qqcg" event={"ID":"48290e46-c619-4673-86b7-bf96f136d693","Type":"ContainerDied","Data":"71e7f1152967fa7999f248897cf37267802632c3308604cbbc05f30d9e34832d"} Mar 10 18:51:42 crc kubenswrapper[4861]: I0310 18:51:42.309245 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 10 18:51:42 crc kubenswrapper[4861]: E0310 18:51:42.311512 4861 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-10 18:51:42.811494545 +0000 UTC m=+246.574930505 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 10 18:51:42 crc kubenswrapper[4861]: I0310 18:51:42.328787 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-9pckl" event={"ID":"c687b8f8-b83c-492e-833b-e3fecb23fc93","Type":"ContainerStarted","Data":"44e6c32a8f0dbdd292e7a19c723116421e5ad1f93bbc7264f40a756e2541b032"} Mar 10 18:51:42 crc kubenswrapper[4861]: I0310 18:51:42.339891 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-tpnft" event={"ID":"2b5b4c87-f6e3-4523-837d-2f45ad711489","Type":"ContainerStarted","Data":"086f6f16f5e2c8a160050db4f713b7308227c8a48a1053906df28411376ec15e"} Mar 10 18:51:42 crc kubenswrapper[4861]: I0310 18:51:42.340462 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-tpnft" Mar 10 18:51:42 crc kubenswrapper[4861]: I0310 18:51:42.342534 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-jbmn2"] Mar 10 18:51:42 crc kubenswrapper[4861]: I0310 18:51:42.347387 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-9dcxh"] Mar 10 18:51:42 crc kubenswrapper[4861]: I0310 18:51:42.357256 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-4s2vr" 
event={"ID":"955cac3b-f6ea-4573-adc7-5271ead0cf37","Type":"ContainerStarted","Data":"52f54e566f198cbc426472ac71601a9b7aa162ff76672a3568e9b8fef239eb5a"} Mar 10 18:51:42 crc kubenswrapper[4861]: I0310 18:51:42.357288 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-4s2vr" event={"ID":"955cac3b-f6ea-4573-adc7-5271ead0cf37","Type":"ContainerStarted","Data":"bcdd2440a2b7fdb3cabd4017a65463498392b3ef953e91c5db8285a41f4133f8"} Mar 10 18:51:42 crc kubenswrapper[4861]: I0310 18:51:42.361240 4861 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-tpnft container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.7:8443/healthz\": dial tcp 10.217.0.7:8443: connect: connection refused" start-of-body= Mar 10 18:51:42 crc kubenswrapper[4861]: I0310 18:51:42.361265 4861 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-tpnft" podUID="2b5b4c87-f6e3-4523-837d-2f45ad711489" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.7:8443/healthz\": dial tcp 10.217.0.7:8443: connect: connection refused" Mar 10 18:51:42 crc kubenswrapper[4861]: I0310 18:51:42.361459 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-525cw" event={"ID":"32e38d22-7f4c-4951-8eb2-befefca67916","Type":"ContainerStarted","Data":"18ab41449a29344ac874ff7e01bf3853c35f50b1a2f95c07d7319e0231c0b907"} Mar 10 18:51:42 crc kubenswrapper[4861]: I0310 18:51:42.382794 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-jbgkh" event={"ID":"ea181c37-7166-4db5-92b4-9321f06c0323","Type":"ContainerStarted","Data":"e399e525035c47cb17bb6448c31ec873b7302711bf7dec834b07d35660366d7e"} Mar 10 18:51:42 crc kubenswrapper[4861]: 
I0310 18:51:42.409685 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-dk9rs"] Mar 10 18:51:42 crc kubenswrapper[4861]: I0310 18:51:42.410281 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j4q94\" (UID: \"1c9f5f8c-64b8-4f10-999c-9cb2f24efef3\") " pod="openshift-image-registry/image-registry-697d97f7c8-j4q94" Mar 10 18:51:42 crc kubenswrapper[4861]: I0310 18:51:42.410322 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c06e51d0-e817-41ac-9d69-3ef2099f8ba8-metrics-certs\") pod \"network-metrics-daemon-2rvxn\" (UID: \"c06e51d0-e817-41ac-9d69-3ef2099f8ba8\") " pod="openshift-multus/network-metrics-daemon-2rvxn" Mar 10 18:51:42 crc kubenswrapper[4861]: E0310 18:51:42.411337 4861 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-10 18:51:42.911320023 +0000 UTC m=+246.674755973 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j4q94" (UID: "1c9f5f8c-64b8-4f10-999c-9cb2f24efef3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 10 18:51:42 crc kubenswrapper[4861]: I0310 18:51:42.411949 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-nmc5h"] Mar 10 18:51:42 crc kubenswrapper[4861]: I0310 18:51:42.420740 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c06e51d0-e817-41ac-9d69-3ef2099f8ba8-metrics-certs\") pod \"network-metrics-daemon-2rvxn\" (UID: \"c06e51d0-e817-41ac-9d69-3ef2099f8ba8\") " pod="openshift-multus/network-metrics-daemon-2rvxn" Mar 10 18:51:42 crc kubenswrapper[4861]: W0310 18:51:42.443631 4861 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcf512bc0_beb2_4784_ac61_7f7a22ccc3e9.slice/crio-54c753895d83e2b2b3534bcb682432b1fef91127299d27adcf0ea056488e6064 WatchSource:0}: Error finding container 54c753895d83e2b2b3534bcb682432b1fef91127299d27adcf0ea056488e6064: Status 404 returned error can't find the container with id 54c753895d83e2b2b3534bcb682432b1fef91127299d27adcf0ea056488e6064 Mar 10 18:51:42 crc kubenswrapper[4861]: I0310 18:51:42.513226 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 10 18:51:42 crc 
kubenswrapper[4861]: E0310 18:51:42.514144 4861 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-10 18:51:43.01411917 +0000 UTC m=+246.777555130 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 10 18:51:42 crc kubenswrapper[4861]: I0310 18:51:42.532370 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-t52tz"] Mar 10 18:51:42 crc kubenswrapper[4861]: W0310 18:51:42.545236 4861 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd4f453f4_2d5c_408c_8a19_74d979cd78c8.slice/crio-3619b68446e2532ca14afb601a988c14263ace1ec96f97525a865974a5a631d6 WatchSource:0}: Error finding container 3619b68446e2532ca14afb601a988c14263ace1ec96f97525a865974a5a631d6: Status 404 returned error can't find the container with id 3619b68446e2532ca14afb601a988c14263ace1ec96f97525a865974a5a631d6 Mar 10 18:51:42 crc kubenswrapper[4861]: W0310 18:51:42.548251 4861 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6922c8a2_7528_42fa_8df7_963b652296f8.slice/crio-6a82b1212868fef34d6ca63b768c824a2c65c68e3d82c9dde8b3ca6e4d9a1896 WatchSource:0}: Error finding container 6a82b1212868fef34d6ca63b768c824a2c65c68e3d82c9dde8b3ca6e4d9a1896: Status 404 returned error can't find the 
container with id 6a82b1212868fef34d6ca63b768c824a2c65c68e3d82c9dde8b3ca6e4d9a1896 Mar 10 18:51:42 crc kubenswrapper[4861]: I0310 18:51:42.617508 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j4q94\" (UID: \"1c9f5f8c-64b8-4f10-999c-9cb2f24efef3\") " pod="openshift-image-registry/image-registry-697d97f7c8-j4q94" Mar 10 18:51:42 crc kubenswrapper[4861]: E0310 18:51:42.618891 4861 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-10 18:51:43.118877808 +0000 UTC m=+246.882313768 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j4q94" (UID: "1c9f5f8c-64b8-4f10-999c-9cb2f24efef3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 10 18:51:42 crc kubenswrapper[4861]: I0310 18:51:42.620461 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-kvtz7"] Mar 10 18:51:42 crc kubenswrapper[4861]: I0310 18:51:42.625213 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-2rvxn" Mar 10 18:51:42 crc kubenswrapper[4861]: I0310 18:51:42.718613 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 10 18:51:42 crc kubenswrapper[4861]: E0310 18:51:42.719007 4861 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-10 18:51:43.21899249 +0000 UTC m=+246.982428450 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 10 18:51:42 crc kubenswrapper[4861]: I0310 18:51:42.820849 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j4q94\" (UID: \"1c9f5f8c-64b8-4f10-999c-9cb2f24efef3\") " pod="openshift-image-registry/image-registry-697d97f7c8-j4q94" Mar 10 18:51:42 crc kubenswrapper[4861]: E0310 18:51:42.821601 4861 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: 
nodeName:}" failed. No retries permitted until 2026-03-10 18:51:43.321587624 +0000 UTC m=+247.085023574 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j4q94" (UID: "1c9f5f8c-64b8-4f10-999c-9cb2f24efef3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 10 18:51:42 crc kubenswrapper[4861]: I0310 18:51:42.922108 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 10 18:51:42 crc kubenswrapper[4861]: E0310 18:51:42.922392 4861 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-10 18:51:43.422378278 +0000 UTC m=+247.185814238 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 10 18:51:43 crc kubenswrapper[4861]: W0310 18:51:43.035374 4861 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf8c89f84_e0e2_4259_8245_1410699e2b6c.slice/crio-6d3aefb36da93d82ebc020e68788b32e553251915cbe0e7a7038b8715ed57452 WatchSource:0}: Error finding container 6d3aefb36da93d82ebc020e68788b32e553251915cbe0e7a7038b8715ed57452: Status 404 returned error can't find the container with id 6d3aefb36da93d82ebc020e68788b32e553251915cbe0e7a7038b8715ed57452 Mar 10 18:51:43 crc kubenswrapper[4861]: I0310 18:51:43.037869 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j4q94\" (UID: \"1c9f5f8c-64b8-4f10-999c-9cb2f24efef3\") " pod="openshift-image-registry/image-registry-697d97f7c8-j4q94" Mar 10 18:51:43 crc kubenswrapper[4861]: E0310 18:51:43.038331 4861 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-10 18:51:43.538317044 +0000 UTC m=+247.301753014 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j4q94" (UID: "1c9f5f8c-64b8-4f10-999c-9cb2f24efef3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 10 18:51:43 crc kubenswrapper[4861]: I0310 18:51:43.070456 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-456qk"] Mar 10 18:51:43 crc kubenswrapper[4861]: I0310 18:51:43.157283 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 10 18:51:43 crc kubenswrapper[4861]: E0310 18:51:43.157860 4861 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-10 18:51:43.657843249 +0000 UTC m=+247.421279209 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 10 18:51:43 crc kubenswrapper[4861]: I0310 18:51:43.228301 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-2m858"] Mar 10 18:51:43 crc kubenswrapper[4861]: I0310 18:51:43.228731 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-zgxhw"] Mar 10 18:51:43 crc kubenswrapper[4861]: I0310 18:51:43.230673 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-x2llm"] Mar 10 18:51:43 crc kubenswrapper[4861]: I0310 18:51:43.246407 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-qm6j4"] Mar 10 18:51:43 crc kubenswrapper[4861]: I0310 18:51:43.249402 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29552810-wv88x"] Mar 10 18:51:43 crc kubenswrapper[4861]: I0310 18:51:43.270180 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-7qnc2"] Mar 10 18:51:43 crc kubenswrapper[4861]: I0310 18:51:43.271451 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j4q94\" (UID: 
\"1c9f5f8c-64b8-4f10-999c-9cb2f24efef3\") " pod="openshift-image-registry/image-registry-697d97f7c8-j4q94" Mar 10 18:51:43 crc kubenswrapper[4861]: E0310 18:51:43.271882 4861 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-10 18:51:43.771865985 +0000 UTC m=+247.535301945 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j4q94" (UID: "1c9f5f8c-64b8-4f10-999c-9cb2f24efef3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 10 18:51:43 crc kubenswrapper[4861]: I0310 18:51:43.352641 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-9x9tv"] Mar 10 18:51:43 crc kubenswrapper[4861]: I0310 18:51:43.372190 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 10 18:51:43 crc kubenswrapper[4861]: E0310 18:51:43.372411 4861 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-10 18:51:43.872375555 +0000 UTC m=+247.635811515 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 10 18:51:43 crc kubenswrapper[4861]: I0310 18:51:43.396697 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-9khgp"] Mar 10 18:51:43 crc kubenswrapper[4861]: I0310 18:51:43.399745 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-9g768"] Mar 10 18:51:43 crc kubenswrapper[4861]: I0310 18:51:43.400837 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-tpnft" podStartSLOduration=188.40081993 podStartE2EDuration="3m8.40081993s" podCreationTimestamp="2026-03-10 18:48:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-10 18:51:43.400386673 +0000 UTC m=+247.163822633" watchObservedRunningTime="2026-03-10 18:51:43.40081993 +0000 UTC m=+247.164255890" Mar 10 18:51:43 crc kubenswrapper[4861]: I0310 18:51:43.428481 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-gwxrl" event={"ID":"dd17f745-4fce-4459-a068-94313e612723","Type":"ContainerStarted","Data":"55cdcc13f43da0c136b2f838969d47a33d6858cf8fe1c07d2c5cc5a402fa66f0"} Mar 10 18:51:43 crc kubenswrapper[4861]: I0310 18:51:43.433041 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29552805-vbm4k"] Mar 10 18:51:43 crc 
kubenswrapper[4861]: I0310 18:51:43.438172 4861 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 10 18:51:43 crc kubenswrapper[4861]: I0310 18:51:43.440992 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-rzhxp" event={"ID":"cf512bc0-beb2-4784-ac61-7f7a22ccc3e9","Type":"ContainerStarted","Data":"54c753895d83e2b2b3534bcb682432b1fef91127299d27adcf0ea056488e6064"} Mar 10 18:51:43 crc kubenswrapper[4861]: I0310 18:51:43.462274 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-nmc5h" event={"ID":"191bb018-6a2d-44a5-adf6-121673d85eb7","Type":"ContainerStarted","Data":"12c17e2299f8d040c3927429f1855466d95d12d969a69dc4c6c85b63442e64f5"} Mar 10 18:51:43 crc kubenswrapper[4861]: I0310 18:51:43.470379 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-gsv8s" podStartSLOduration=188.470360314 podStartE2EDuration="3m8.470360314s" podCreationTimestamp="2026-03-10 18:48:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-10 18:51:43.467662021 +0000 UTC m=+247.231097991" watchObservedRunningTime="2026-03-10 18:51:43.470360314 +0000 UTC m=+247.233796274" Mar 10 18:51:43 crc kubenswrapper[4861]: I0310 18:51:43.473338 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j4q94\" (UID: \"1c9f5f8c-64b8-4f10-999c-9cb2f24efef3\") " pod="openshift-image-registry/image-registry-697d97f7c8-j4q94" Mar 10 18:51:43 crc kubenswrapper[4861]: E0310 18:51:43.473632 4861 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-10 18:51:43.973620706 +0000 UTC m=+247.737056666 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j4q94" (UID: "1c9f5f8c-64b8-4f10-999c-9cb2f24efef3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 10 18:51:43 crc kubenswrapper[4861]: I0310 18:51:43.474558 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-md6zd" event={"ID":"f7d1fb37-29a9-4548-9054-daa43301c56d","Type":"ContainerStarted","Data":"b7e3078bad84a91d1b9f1aa8a5ebdde7c7c3a090264486d0d545ecb834e9d807"} Mar 10 18:51:43 crc kubenswrapper[4861]: W0310 18:51:43.487904 4861 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8f1b1590_e261_4e1f_9427_039f5a9b3db7.slice/crio-d6f7cac456be6dc86f1e9f71fa76263c16ddab968655d20bc54637796fee9de1 WatchSource:0}: Error finding container d6f7cac456be6dc86f1e9f71fa76263c16ddab968655d20bc54637796fee9de1: Status 404 returned error can't find the container with id d6f7cac456be6dc86f1e9f71fa76263c16ddab968655d20bc54637796fee9de1 Mar 10 18:51:43 crc kubenswrapper[4861]: I0310 18:51:43.491445 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-dk9rs" event={"ID":"b6781d90-ea76-44b0-b2eb-44641332f632","Type":"ContainerStarted","Data":"2de0bb82cf54c5203a3da89338b3830bbeb30cc49f8e1b3d1ef458395b311c5c"} Mar 10 18:51:43 crc kubenswrapper[4861]: I0310 18:51:43.494962 4861 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-864wb" event={"ID":"ccb02870-5d18-43c9-950d-042c52c092c3","Type":"ContainerStarted","Data":"7227591a08630257cb77f81628472ee8015dfa6dd5d415976874aa9d3b2364bd"} Mar 10 18:51:43 crc kubenswrapper[4861]: I0310 18:51:43.495677 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-864wb" Mar 10 18:51:43 crc kubenswrapper[4861]: I0310 18:51:43.496881 4861 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-864wb container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.9:6443/healthz\": dial tcp 10.217.0.9:6443: connect: connection refused" start-of-body= Mar 10 18:51:43 crc kubenswrapper[4861]: I0310 18:51:43.496908 4861 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-864wb" podUID="ccb02870-5d18-43c9-950d-042c52c092c3" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.9:6443/healthz\": dial tcp 10.217.0.9:6443: connect: connection refused" Mar 10 18:51:43 crc kubenswrapper[4861]: I0310 18:51:43.502156 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-456qk" event={"ID":"f6f89a90-023b-4b69-8848-702a50ab522c","Type":"ContainerStarted","Data":"9c3d56f6625fd0fcf20ab92887ff1699cb6ef660f5b907a2503f50c9ebb6a533"} Mar 10 18:51:43 crc kubenswrapper[4861]: I0310 18:51:43.517127 4861 generic.go:334] "Generic (PLEG): container finished" podID="a8e67548-adc0-4c5c-8115-efff12bad9ae" containerID="408c1d87547ceb8425b3d51908d4dcfec3c10eb40aba2b68f6c41e7578eeef81" exitCode=0 Mar 10 18:51:43 crc kubenswrapper[4861]: I0310 18:51:43.517149 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-cqm4t" podStartSLOduration=188.517134562 
podStartE2EDuration="3m8.517134562s" podCreationTimestamp="2026-03-10 18:48:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-10 18:51:43.491820697 +0000 UTC m=+247.255256667" watchObservedRunningTime="2026-03-10 18:51:43.517134562 +0000 UTC m=+247.280570522" Mar 10 18:51:43 crc kubenswrapper[4861]: I0310 18:51:43.517179 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-jbxs4" event={"ID":"a8e67548-adc0-4c5c-8115-efff12bad9ae","Type":"ContainerDied","Data":"408c1d87547ceb8425b3d51908d4dcfec3c10eb40aba2b68f6c41e7578eeef81"} Mar 10 18:51:43 crc kubenswrapper[4861]: I0310 18:51:43.518670 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-9pckl" podStartSLOduration=188.518664597 podStartE2EDuration="3m8.518664597s" podCreationTimestamp="2026-03-10 18:48:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-10 18:51:43.518263601 +0000 UTC m=+247.281699561" watchObservedRunningTime="2026-03-10 18:51:43.518664597 +0000 UTC m=+247.282100557" Mar 10 18:51:43 crc kubenswrapper[4861]: I0310 18:51:43.520760 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-2rvxn"] Mar 10 18:51:43 crc kubenswrapper[4861]: I0310 18:51:43.567122 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-dwxk9" event={"ID":"956b8171-6837-47bf-9f44-7f9cc43a1e30","Type":"ContainerStarted","Data":"26ebf3a3b9bc177d862260efc6c7fe8eeb0e526d40837e74236c81709bc35105"} Mar 10 18:51:43 crc kubenswrapper[4861]: I0310 18:51:43.578314 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 10 18:51:43 crc kubenswrapper[4861]: E0310 18:51:43.581108 4861 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-10 18:51:44.081085737 +0000 UTC m=+247.844521697 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 10 18:51:43 crc kubenswrapper[4861]: I0310 18:51:43.649654 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-mdswb" event={"ID":"7660cda2-b1cc-43a9-b2d2-8e39f7b2e5b1","Type":"ContainerStarted","Data":"cad61d5c94e5323c6671996689b2466a55e52ad4593d7d1d5648864b5dbea979"} Mar 10 18:51:43 crc kubenswrapper[4861]: I0310 18:51:43.673910 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-b4s99" event={"ID":"754666ad-657a-4e98-8feb-47f74d33dc3c","Type":"ContainerStarted","Data":"51cc074c19c5b122d40b8cb17e11a3dfc339aae4f9823b2e42811a529020849c"} Mar 10 18:51:43 crc kubenswrapper[4861]: I0310 18:51:43.681005 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j4q94\" (UID: \"1c9f5f8c-64b8-4f10-999c-9cb2f24efef3\") " pod="openshift-image-registry/image-registry-697d97f7c8-j4q94" Mar 10 18:51:43 crc kubenswrapper[4861]: E0310 18:51:43.691311 4861 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-10 18:51:44.191296841 +0000 UTC m=+247.954732791 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j4q94" (UID: "1c9f5f8c-64b8-4f10-999c-9cb2f24efef3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 10 18:51:43 crc kubenswrapper[4861]: I0310 18:51:43.702242 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-kvtz7" event={"ID":"f8c89f84-e0e2-4259-8245-1410699e2b6c","Type":"ContainerStarted","Data":"6d3aefb36da93d82ebc020e68788b32e553251915cbe0e7a7038b8715ed57452"} Mar 10 18:51:43 crc kubenswrapper[4861]: I0310 18:51:43.703659 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-t52tz" event={"ID":"8835b95e-842f-4c2f-ba78-970b2ef6749c","Type":"ContainerStarted","Data":"189c9055fa18d7883d5a60a457fddc00059091a077e22361c08f86f2c90934f1"} Mar 10 18:51:43 crc kubenswrapper[4861]: I0310 18:51:43.705047 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-jbmn2" 
event={"ID":"6922c8a2-7528-42fa-8df7-963b652296f8","Type":"ContainerStarted","Data":"6a82b1212868fef34d6ca63b768c824a2c65c68e3d82c9dde8b3ca6e4d9a1896"} Mar 10 18:51:43 crc kubenswrapper[4861]: I0310 18:51:43.708827 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-gwxrl" podStartSLOduration=187.708812653 podStartE2EDuration="3m7.708812653s" podCreationTimestamp="2026-03-10 18:48:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-10 18:51:43.650423987 +0000 UTC m=+247.413859957" watchObservedRunningTime="2026-03-10 18:51:43.708812653 +0000 UTC m=+247.472248613" Mar 10 18:51:43 crc kubenswrapper[4861]: I0310 18:51:43.718517 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-cffh2" podStartSLOduration=188.718499977 podStartE2EDuration="3m8.718499977s" podCreationTimestamp="2026-03-10 18:48:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-10 18:51:43.714058457 +0000 UTC m=+247.477494427" watchObservedRunningTime="2026-03-10 18:51:43.718499977 +0000 UTC m=+247.481935937" Mar 10 18:51:43 crc kubenswrapper[4861]: I0310 18:51:43.724965 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-g2fnd" event={"ID":"81f400f6-9912-4f4a-9d98-877c303a60ec","Type":"ContainerStarted","Data":"3bda228eccf9fa154629d6d8efa76f8c69f5eb99c3db6b782bf6deeb4967f3fb"} Mar 10 18:51:43 crc kubenswrapper[4861]: I0310 18:51:43.730088 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-8wcvl" 
event={"ID":"f01993c1-5b0e-4dd2-a98d-8685e13b474c","Type":"ContainerStarted","Data":"430ec32c11de98a9dca186f57488b4ad6da1aac22e2a31633f2d3b9a1c3f30d7"} Mar 10 18:51:43 crc kubenswrapper[4861]: I0310 18:51:43.765531 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-9dcxh" event={"ID":"d4f453f4-2d5c-408c-8a19-74d979cd78c8","Type":"ContainerStarted","Data":"3619b68446e2532ca14afb601a988c14263ace1ec96f97525a865974a5a631d6"} Mar 10 18:51:43 crc kubenswrapper[4861]: I0310 18:51:43.782446 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-tpnft" Mar 10 18:51:43 crc kubenswrapper[4861]: I0310 18:51:43.804174 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 10 18:51:43 crc kubenswrapper[4861]: E0310 18:51:43.804361 4861 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-10 18:51:44.304337962 +0000 UTC m=+248.067773922 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 10 18:51:43 crc kubenswrapper[4861]: I0310 18:51:43.804766 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j4q94\" (UID: \"1c9f5f8c-64b8-4f10-999c-9cb2f24efef3\") " pod="openshift-image-registry/image-registry-697d97f7c8-j4q94" Mar 10 18:51:43 crc kubenswrapper[4861]: E0310 18:51:43.806795 4861 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-10 18:51:44.306784542 +0000 UTC m=+248.070220492 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j4q94" (UID: "1c9f5f8c-64b8-4f10-999c-9cb2f24efef3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 10 18:51:43 crc kubenswrapper[4861]: I0310 18:51:43.823268 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-qm57d" podStartSLOduration=187.823222064 podStartE2EDuration="3m7.823222064s" podCreationTimestamp="2026-03-10 18:48:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-10 18:51:43.822660045 +0000 UTC m=+247.586096005" watchObservedRunningTime="2026-03-10 18:51:43.823222064 +0000 UTC m=+247.586658024" Mar 10 18:51:43 crc kubenswrapper[4861]: I0310 18:51:43.855331 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-g2fnd" podStartSLOduration=188.855302368 podStartE2EDuration="3m8.855302368s" podCreationTimestamp="2026-03-10 18:48:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-10 18:51:43.854577227 +0000 UTC m=+247.618013187" watchObservedRunningTime="2026-03-10 18:51:43.855302368 +0000 UTC m=+247.618738328" Mar 10 18:51:43 crc kubenswrapper[4861]: I0310 18:51:43.904464 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-qm57d" Mar 10 18:51:43 crc kubenswrapper[4861]: I0310 18:51:43.908307 4861 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 10 18:51:43 crc kubenswrapper[4861]: E0310 18:51:43.908590 4861 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-10 18:51:44.408573401 +0000 UTC m=+248.172009361 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 10 18:51:43 crc kubenswrapper[4861]: I0310 18:51:43.967562 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-dwxk9" podStartSLOduration=187.967535265 podStartE2EDuration="3m7.967535265s" podCreationTimestamp="2026-03-10 18:48:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-10 18:51:43.925607684 +0000 UTC m=+247.689043654" watchObservedRunningTime="2026-03-10 18:51:43.967535265 +0000 UTC m=+247.730971225" Mar 10 18:51:44 crc kubenswrapper[4861]: I0310 18:51:44.017577 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-864wb" podStartSLOduration=189.017562567 
podStartE2EDuration="3m9.017562567s" podCreationTimestamp="2026-03-10 18:48:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-10 18:51:44.016596112 +0000 UTC m=+247.780032082" watchObservedRunningTime="2026-03-10 18:51:44.017562567 +0000 UTC m=+247.780998527" Mar 10 18:51:44 crc kubenswrapper[4861]: I0310 18:51:44.017824 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j4q94\" (UID: \"1c9f5f8c-64b8-4f10-999c-9cb2f24efef3\") " pod="openshift-image-registry/image-registry-697d97f7c8-j4q94" Mar 10 18:51:44 crc kubenswrapper[4861]: E0310 18:51:44.018307 4861 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-10 18:51:44.518292668 +0000 UTC m=+248.281728628 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j4q94" (UID: "1c9f5f8c-64b8-4f10-999c-9cb2f24efef3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 10 18:51:44 crc kubenswrapper[4861]: I0310 18:51:44.042358 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-jbmn2" podStartSLOduration=6.042341434 podStartE2EDuration="6.042341434s" podCreationTimestamp="2026-03-10 18:51:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-10 18:51:44.041582521 +0000 UTC m=+247.805018481" watchObservedRunningTime="2026-03-10 18:51:44.042341434 +0000 UTC m=+247.805777394" Mar 10 18:51:44 crc kubenswrapper[4861]: I0310 18:51:44.121005 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 10 18:51:44 crc kubenswrapper[4861]: E0310 18:51:44.121386 4861 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-10 18:51:44.621372799 +0000 UTC m=+248.384808759 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 10 18:51:44 crc kubenswrapper[4861]: I0310 18:51:44.221822 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j4q94\" (UID: \"1c9f5f8c-64b8-4f10-999c-9cb2f24efef3\") " pod="openshift-image-registry/image-registry-697d97f7c8-j4q94" Mar 10 18:51:44 crc kubenswrapper[4861]: E0310 18:51:44.222096 4861 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-10 18:51:44.722084722 +0000 UTC m=+248.485520682 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j4q94" (UID: "1c9f5f8c-64b8-4f10-999c-9cb2f24efef3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 10 18:51:44 crc kubenswrapper[4861]: I0310 18:51:44.322783 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 10 18:51:44 crc kubenswrapper[4861]: E0310 18:51:44.323723 4861 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-10 18:51:44.823694749 +0000 UTC m=+248.587130709 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 10 18:51:44 crc kubenswrapper[4861]: I0310 18:51:44.424680 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j4q94\" (UID: \"1c9f5f8c-64b8-4f10-999c-9cb2f24efef3\") " pod="openshift-image-registry/image-registry-697d97f7c8-j4q94" Mar 10 18:51:44 crc kubenswrapper[4861]: E0310 18:51:44.425022 4861 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-10 18:51:44.925011651 +0000 UTC m=+248.688447611 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j4q94" (UID: "1c9f5f8c-64b8-4f10-999c-9cb2f24efef3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 10 18:51:44 crc kubenswrapper[4861]: I0310 18:51:44.448181 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-gsv8s" Mar 10 18:51:44 crc kubenswrapper[4861]: I0310 18:51:44.526766 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 10 18:51:44 crc kubenswrapper[4861]: E0310 18:51:44.527118 4861 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-10 18:51:45.027097957 +0000 UTC m=+248.790533917 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 10 18:51:44 crc kubenswrapper[4861]: I0310 18:51:44.627691 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j4q94\" (UID: \"1c9f5f8c-64b8-4f10-999c-9cb2f24efef3\") " pod="openshift-image-registry/image-registry-697d97f7c8-j4q94" Mar 10 18:51:44 crc kubenswrapper[4861]: E0310 18:51:44.627984 4861 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-10 18:51:45.127973002 +0000 UTC m=+248.891408962 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j4q94" (UID: "1c9f5f8c-64b8-4f10-999c-9cb2f24efef3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 10 18:51:44 crc kubenswrapper[4861]: I0310 18:51:44.729335 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 10 18:51:44 crc kubenswrapper[4861]: E0310 18:51:44.730136 4861 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-10 18:51:45.230120838 +0000 UTC m=+248.993556798 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 10 18:51:44 crc kubenswrapper[4861]: I0310 18:51:44.787930 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-x2llm" event={"ID":"562c8783-06a5-4205-ab38-9aabc25c1033","Type":"ContainerStarted","Data":"d40d41726057cf025187712d89414ec57055695beaa81c6e2045206f208a2283"} Mar 10 18:51:44 crc kubenswrapper[4861]: I0310 18:51:44.788793 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29552805-vbm4k" event={"ID":"e835c42f-7b8a-45a3-a153-ffab9b5386e0","Type":"ContainerStarted","Data":"6aae750e0150333454cf474b9e403fcc4486e4c2e2f70baaf0056b3394e82707"} Mar 10 18:51:44 crc kubenswrapper[4861]: I0310 18:51:44.789758 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-jbmn2" event={"ID":"6922c8a2-7528-42fa-8df7-963b652296f8","Type":"ContainerStarted","Data":"9bce0d4d61bc1056171fa8fedb2fce087386b32d55a114d6339b38c884757e2a"} Mar 10 18:51:44 crc kubenswrapper[4861]: I0310 18:51:44.794881 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-9g768" event={"ID":"d5401d63-54cf-4958-a402-125bbf5793f1","Type":"ContainerStarted","Data":"301c0108e39728a5b530ba58ee866561e1f57ea305852e3003ee4d37a60d0e08"} Mar 10 18:51:44 crc kubenswrapper[4861]: I0310 18:51:44.794914 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-9g768" 
event={"ID":"d5401d63-54cf-4958-a402-125bbf5793f1","Type":"ContainerStarted","Data":"d0798c9351ade64404ceb293660850aa9971078dc508026f2272c090f7734b23"} Mar 10 18:51:44 crc kubenswrapper[4861]: I0310 18:51:44.805151 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-md6zd" event={"ID":"f7d1fb37-29a9-4548-9054-daa43301c56d","Type":"ContainerStarted","Data":"1f1c66e25108633efc2fc72dfbb97d5c490c306c46f1bd54295766d11ad70104"} Mar 10 18:51:44 crc kubenswrapper[4861]: I0310 18:51:44.833829 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j4q94\" (UID: \"1c9f5f8c-64b8-4f10-999c-9cb2f24efef3\") " pod="openshift-image-registry/image-registry-697d97f7c8-j4q94" Mar 10 18:51:44 crc kubenswrapper[4861]: E0310 18:51:44.835353 4861 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-10 18:51:45.335339583 +0000 UTC m=+249.098775543 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j4q94" (UID: "1c9f5f8c-64b8-4f10-999c-9cb2f24efef3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 10 18:51:44 crc kubenswrapper[4861]: I0310 18:51:44.845875 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-md6zd" podStartSLOduration=6.845858062 podStartE2EDuration="6.845858062s" podCreationTimestamp="2026-03-10 18:51:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-10 18:51:44.840209231 +0000 UTC m=+248.603645201" watchObservedRunningTime="2026-03-10 18:51:44.845858062 +0000 UTC m=+248.609294022" Mar 10 18:51:44 crc kubenswrapper[4861]: I0310 18:51:44.870891 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-2rvxn" event={"ID":"c06e51d0-e817-41ac-9d69-3ef2099f8ba8","Type":"ContainerStarted","Data":"3b7cab8036f537c86c219915b8dff5fee4228bce043c5334c3f13b4bdba91dec"} Mar 10 18:51:44 crc kubenswrapper[4861]: I0310 18:51:44.911429 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-qm6j4" event={"ID":"0e1e95be-62b6-4ab4-b526-f9482e74ed23","Type":"ContainerStarted","Data":"1860d75b9974d334eb1ba63d8f796921bfa89819d1b4fc5f502d10c40e7d4efa"} Mar 10 18:51:44 crc kubenswrapper[4861]: I0310 18:51:44.911773 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-qm6j4" 
event={"ID":"0e1e95be-62b6-4ab4-b526-f9482e74ed23","Type":"ContainerStarted","Data":"abd37d8d64751664c41fa8b2db706098cb9816f85873eadc2f96835606ca6af3"} Mar 10 18:51:44 crc kubenswrapper[4861]: I0310 18:51:44.924153 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-7qnc2" event={"ID":"8f1b1590-e261-4e1f-9427-039f5a9b3db7","Type":"ContainerStarted","Data":"b54b2d8fbba5bcb4a72f7ece97d10d5a9038a3a0283e5679a5f7a4ab8c015b55"} Mar 10 18:51:44 crc kubenswrapper[4861]: I0310 18:51:44.924192 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-7qnc2" event={"ID":"8f1b1590-e261-4e1f-9427-039f5a9b3db7","Type":"ContainerStarted","Data":"d6f7cac456be6dc86f1e9f71fa76263c16ddab968655d20bc54637796fee9de1"} Mar 10 18:51:44 crc kubenswrapper[4861]: I0310 18:51:44.925021 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-7qnc2" Mar 10 18:51:44 crc kubenswrapper[4861]: I0310 18:51:44.932507 4861 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-7qnc2 container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.37:8080/healthz\": dial tcp 10.217.0.37:8080: connect: connection refused" start-of-body= Mar 10 18:51:44 crc kubenswrapper[4861]: I0310 18:51:44.932556 4861 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-7qnc2" podUID="8f1b1590-e261-4e1f-9427-039f5a9b3db7" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.37:8080/healthz\": dial tcp 10.217.0.37:8080: connect: connection refused" Mar 10 18:51:44 crc kubenswrapper[4861]: I0310 18:51:44.934935 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 10 18:51:44 crc kubenswrapper[4861]: E0310 18:51:44.937305 4861 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-10 18:51:45.437287075 +0000 UTC m=+249.200723035 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 10 18:51:44 crc kubenswrapper[4861]: I0310 18:51:44.952550 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-qm6j4" podStartSLOduration=189.95253251 podStartE2EDuration="3m9.95253251s" podCreationTimestamp="2026-03-10 18:48:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-10 18:51:44.950759661 +0000 UTC m=+248.714195631" watchObservedRunningTime="2026-03-10 18:51:44.95253251 +0000 UTC m=+248.715968470" Mar 10 18:51:44 crc kubenswrapper[4861]: I0310 18:51:44.955801 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-4s2vr" 
event={"ID":"955cac3b-f6ea-4573-adc7-5271ead0cf37","Type":"ContainerStarted","Data":"4fafd495f86939c06b8a5d129ba0bcaa4303cdefa7440029f8290fda7e718140"} Mar 10 18:51:44 crc kubenswrapper[4861]: I0310 18:51:44.992297 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-rqlh4" event={"ID":"78f966e8-d4fe-4a97-a04e-9bf8b1b62d6c","Type":"ContainerStarted","Data":"472b8d182f5ec252894a31e9e830c8f909f565e9475e73b5dc3d644753e2f539"} Mar 10 18:51:45 crc kubenswrapper[4861]: I0310 18:51:45.007497 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-9khgp" event={"ID":"07ef9fc2-e757-48ca-b8aa-c97224434044","Type":"ContainerStarted","Data":"33994e451dbbec4fa614c9a1f4744df103a203e4bf8069ee204844d69650e518"} Mar 10 18:51:45 crc kubenswrapper[4861]: I0310 18:51:45.007536 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-9khgp" event={"ID":"07ef9fc2-e757-48ca-b8aa-c97224434044","Type":"ContainerStarted","Data":"c62f4dc876088199119c4cec72340c0a88890ada98cf6ff50a202b95106eef3d"} Mar 10 18:51:45 crc kubenswrapper[4861]: I0310 18:51:45.008362 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-9khgp" Mar 10 18:51:45 crc kubenswrapper[4861]: I0310 18:51:45.011016 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-zg9g7" event={"ID":"3db8cb04-007c-48f9-a986-7a503ca1c077","Type":"ContainerStarted","Data":"c5f81b5bc6a77ca3c489c1e6b13f6893dc8f7c11b4bf61e262e56934e7a644f5"} Mar 10 18:51:45 crc kubenswrapper[4861]: I0310 18:51:45.019327 4861 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-9khgp container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.23:8443/healthz\": 
dial tcp 10.217.0.23:8443: connect: connection refused" start-of-body= Mar 10 18:51:45 crc kubenswrapper[4861]: I0310 18:51:45.019381 4861 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-9khgp" podUID="07ef9fc2-e757-48ca-b8aa-c97224434044" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.23:8443/healthz\": dial tcp 10.217.0.23:8443: connect: connection refused" Mar 10 18:51:45 crc kubenswrapper[4861]: I0310 18:51:45.019384 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-kvtz7" event={"ID":"f8c89f84-e0e2-4259-8245-1410699e2b6c","Type":"ContainerStarted","Data":"63b6b1bfccd15861faf33df537d7e29950754077abbd65b6b7bd596ad432a97b"} Mar 10 18:51:45 crc kubenswrapper[4861]: I0310 18:51:45.020901 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-8wcvl" event={"ID":"f01993c1-5b0e-4dd2-a98d-8685e13b474c","Type":"ContainerStarted","Data":"7a424ae4bf94525dbd0d71fbfeab8f1b9471d370921c8917766adab6a3f229dc"} Mar 10 18:51:45 crc kubenswrapper[4861]: I0310 18:51:45.039091 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j4q94\" (UID: \"1c9f5f8c-64b8-4f10-999c-9cb2f24efef3\") " pod="openshift-image-registry/image-registry-697d97f7c8-j4q94" Mar 10 18:51:45 crc kubenswrapper[4861]: E0310 18:51:45.042634 4861 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-10 18:51:45.542621752 +0000 UTC m=+249.306057712 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j4q94" (UID: "1c9f5f8c-64b8-4f10-999c-9cb2f24efef3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 10 18:51:45 crc kubenswrapper[4861]: I0310 18:51:45.065762 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-t52tz" event={"ID":"8835b95e-842f-4c2f-ba78-970b2ef6749c","Type":"ContainerStarted","Data":"2a7fdc46430f17c9281fed2e0c35bf133c6df2ad05e4f8e5c5e276e8b1177028"} Mar 10 18:51:45 crc kubenswrapper[4861]: I0310 18:51:45.071215 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-7qnc2" podStartSLOduration=189.07119897 podStartE2EDuration="3m9.07119897s" podCreationTimestamp="2026-03-10 18:48:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-10 18:51:45.012974937 +0000 UTC m=+248.776410907" watchObservedRunningTime="2026-03-10 18:51:45.07119897 +0000 UTC m=+248.834634930" Mar 10 18:51:45 crc kubenswrapper[4861]: I0310 18:51:45.101382 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-rqlh4" podStartSLOduration=190.101367053 podStartE2EDuration="3m10.101367053s" podCreationTimestamp="2026-03-10 18:48:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-10 18:51:45.100691962 +0000 UTC m=+248.864127922" watchObservedRunningTime="2026-03-10 18:51:45.101367053 +0000 UTC m=+248.864803013" Mar 10 18:51:45 crc kubenswrapper[4861]: 
I0310 18:51:45.101543 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-9khgp" podStartSLOduration=189.101537856 podStartE2EDuration="3m9.101537856s" podCreationTimestamp="2026-03-10 18:48:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-10 18:51:45.074046685 +0000 UTC m=+248.837482655" watchObservedRunningTime="2026-03-10 18:51:45.101537856 +0000 UTC m=+248.864973816" Mar 10 18:51:45 crc kubenswrapper[4861]: I0310 18:51:45.111709 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-5qqcg" event={"ID":"48290e46-c619-4673-86b7-bf96f136d693","Type":"ContainerStarted","Data":"47a0b379de1594acf77d8eaa2bebf1f16c5695a8bc654943e45698cc3bb20bf0"} Mar 10 18:51:45 crc kubenswrapper[4861]: I0310 18:51:45.125671 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-525cw" event={"ID":"32e38d22-7f4c-4951-8eb2-befefca67916","Type":"ContainerStarted","Data":"785ae03fda7a3b02a873aaec8bf3ab80350d460fb55cb040d448071d6d9f5dc6"} Mar 10 18:51:45 crc kubenswrapper[4861]: I0310 18:51:45.127510 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-456qk" event={"ID":"f6f89a90-023b-4b69-8848-702a50ab522c","Type":"ContainerStarted","Data":"bf037e67fa93f88e012e8aafb90551f1d88f9fb4d3499f65844efeb64c7b8c12"} Mar 10 18:51:45 crc kubenswrapper[4861]: I0310 18:51:45.144287 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 10 
18:51:45 crc kubenswrapper[4861]: E0310 18:51:45.146800 4861 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-10 18:51:45.64677994 +0000 UTC m=+249.410215900 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 10 18:51:45 crc kubenswrapper[4861]: I0310 18:51:45.167110 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-2m858" event={"ID":"b0b1942c-6cca-4d7f-8567-e5d340ee265f","Type":"ContainerStarted","Data":"aa23c7280958fb991cf3268c60f0768d86fbb0e3f24216fe670d8a53421899ee"} Mar 10 18:51:45 crc kubenswrapper[4861]: I0310 18:51:45.167158 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-2m858" event={"ID":"b0b1942c-6cca-4d7f-8567-e5d340ee265f","Type":"ContainerStarted","Data":"40a7599767cff7a005af8fde310542903028745be0a030ee970f28ebeb86846b"} Mar 10 18:51:45 crc kubenswrapper[4861]: I0310 18:51:45.168049 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-2m858" Mar 10 18:51:45 crc kubenswrapper[4861]: I0310 18:51:45.178271 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-zg9g7" podStartSLOduration=190.178254844 podStartE2EDuration="3m10.178254844s" 
podCreationTimestamp="2026-03-10 18:48:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-10 18:51:45.137872748 +0000 UTC m=+248.901308728" watchObservedRunningTime="2026-03-10 18:51:45.178254844 +0000 UTC m=+248.941690804" Mar 10 18:51:45 crc kubenswrapper[4861]: I0310 18:51:45.192823 4861 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-2m858 container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.35:8443/healthz\": dial tcp 10.217.0.35:8443: connect: connection refused" start-of-body= Mar 10 18:51:45 crc kubenswrapper[4861]: I0310 18:51:45.192863 4861 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-2m858" podUID="b0b1942c-6cca-4d7f-8567-e5d340ee265f" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.35:8443/healthz\": dial tcp 10.217.0.35:8443: connect: connection refused" Mar 10 18:51:45 crc kubenswrapper[4861]: I0310 18:51:45.204480 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-9dcxh" event={"ID":"d4f453f4-2d5c-408c-8a19-74d979cd78c8","Type":"ContainerStarted","Data":"651f959cb33fb90a31b1c446a2fefa00702b60aea946e307c57b43724071f7b0"} Mar 10 18:51:45 crc kubenswrapper[4861]: I0310 18:51:45.219947 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-4s2vr" podStartSLOduration=190.219933922 podStartE2EDuration="3m10.219933922s" podCreationTimestamp="2026-03-10 18:48:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-10 18:51:45.180058393 +0000 UTC m=+248.943494353" watchObservedRunningTime="2026-03-10 
18:51:45.219933922 +0000 UTC m=+248.983369882" Mar 10 18:51:45 crc kubenswrapper[4861]: I0310 18:51:45.220363 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-5qqcg" podStartSLOduration=189.220360188 podStartE2EDuration="3m9.220360188s" podCreationTimestamp="2026-03-10 18:48:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-10 18:51:45.218697852 +0000 UTC m=+248.982133822" watchObservedRunningTime="2026-03-10 18:51:45.220360188 +0000 UTC m=+248.983796138" Mar 10 18:51:45 crc kubenswrapper[4861]: I0310 18:51:45.222834 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29552810-wv88x" event={"ID":"19e950a0-6f71-44af-b995-f1ef1be6edbb","Type":"ContainerStarted","Data":"ec5addcc268a61d9898ac2cce9e91769a58f85b2df0ec1c03a37b1c135ef45b8"} Mar 10 18:51:45 crc kubenswrapper[4861]: I0310 18:51:45.234412 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-jbgkh" event={"ID":"ea181c37-7166-4db5-92b4-9321f06c0323","Type":"ContainerStarted","Data":"a2ca397da37c65a7ff4f1834b5cb07ca53965f474283911af57fd4b55a2053e2"} Mar 10 18:51:45 crc kubenswrapper[4861]: I0310 18:51:45.249249 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j4q94\" (UID: \"1c9f5f8c-64b8-4f10-999c-9cb2f24efef3\") " pod="openshift-image-registry/image-registry-697d97f7c8-j4q94" Mar 10 18:51:45 crc kubenswrapper[4861]: E0310 18:51:45.251126 4861 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" 
failed. No retries permitted until 2026-03-10 18:51:45.751114511 +0000 UTC m=+249.514550471 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j4q94" (UID: "1c9f5f8c-64b8-4f10-999c-9cb2f24efef3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 10 18:51:45 crc kubenswrapper[4861]: I0310 18:51:45.269518 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-2m858" podStartSLOduration=189.269497596 podStartE2EDuration="3m9.269497596s" podCreationTimestamp="2026-03-10 18:48:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-10 18:51:45.251648999 +0000 UTC m=+249.015084959" watchObservedRunningTime="2026-03-10 18:51:45.269497596 +0000 UTC m=+249.032933556" Mar 10 18:51:45 crc kubenswrapper[4861]: I0310 18:51:45.275188 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-4d9gw" event={"ID":"f5cbf703-ecdc-42d3-b313-f69ec71399fa","Type":"ContainerStarted","Data":"078fec206d3830326840e432c7eb382aa869188fb8f89cd42eac15be85422d11"} Mar 10 18:51:45 crc kubenswrapper[4861]: I0310 18:51:45.284904 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-8wcvl" podStartSLOduration=190.284888522 podStartE2EDuration="3m10.284888522s" podCreationTimestamp="2026-03-10 18:48:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-10 18:51:45.282604945 
+0000 UTC m=+249.046040905" watchObservedRunningTime="2026-03-10 18:51:45.284888522 +0000 UTC m=+249.048324482" Mar 10 18:51:45 crc kubenswrapper[4861]: I0310 18:51:45.294112 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-nmc5h" event={"ID":"191bb018-6a2d-44a5-adf6-121673d85eb7","Type":"ContainerStarted","Data":"d200a4695269f5e8a48bde196fb3cfd420ae9bbcbdccd87feb74711bc0a2e64e"} Mar 10 18:51:45 crc kubenswrapper[4861]: I0310 18:51:45.295093 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-nmc5h" Mar 10 18:51:45 crc kubenswrapper[4861]: I0310 18:51:45.304002 4861 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-nmc5h container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.43:5443/healthz\": dial tcp 10.217.0.43:5443: connect: connection refused" start-of-body= Mar 10 18:51:45 crc kubenswrapper[4861]: I0310 18:51:45.304497 4861 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-nmc5h" podUID="191bb018-6a2d-44a5-adf6-121673d85eb7" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.43:5443/healthz\": dial tcp 10.217.0.43:5443: connect: connection refused" Mar 10 18:51:45 crc kubenswrapper[4861]: I0310 18:51:45.310540 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-zgxhw" event={"ID":"d82c061b-6eea-49c1-8017-c401d3bd0f58","Type":"ContainerStarted","Data":"95b105effb1875738813af2ead21ad6dc92261f778347de5ea65d47f38cf3739"} Mar 10 18:51:45 crc kubenswrapper[4861]: I0310 18:51:45.310586 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-zgxhw" 
event={"ID":"d82c061b-6eea-49c1-8017-c401d3bd0f58","Type":"ContainerStarted","Data":"06e46852b57a5e6aab05f758b30f804347bea57e5f1079d9ef26bff31c22042e"} Mar 10 18:51:45 crc kubenswrapper[4861]: I0310 18:51:45.315808 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-9x9tv" event={"ID":"592a6d7a-6cf4-4a23-bc7d-2444b6387faa","Type":"ContainerStarted","Data":"1f00f2345c963cba2aa5d56f1424afc4c8eea9d3372e5b2407c3f056c7f5607f"} Mar 10 18:51:45 crc kubenswrapper[4861]: I0310 18:51:45.315860 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-9x9tv" event={"ID":"592a6d7a-6cf4-4a23-bc7d-2444b6387faa","Type":"ContainerStarted","Data":"cd87c3f4b548af2b24f807a51531c5154031bb3310f011ab13992037f3ca0668"} Mar 10 18:51:45 crc kubenswrapper[4861]: I0310 18:51:45.316487 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-9x9tv" Mar 10 18:51:45 crc kubenswrapper[4861]: I0310 18:51:45.325493 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-t52tz" podStartSLOduration=189.325477872 podStartE2EDuration="3m9.325477872s" podCreationTimestamp="2026-03-10 18:48:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-10 18:51:45.323178885 +0000 UTC m=+249.086614845" watchObservedRunningTime="2026-03-10 18:51:45.325477872 +0000 UTC m=+249.088913822" Mar 10 18:51:45 crc kubenswrapper[4861]: I0310 18:51:45.355787 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-jbxs4" event={"ID":"a8e67548-adc0-4c5c-8115-efff12bad9ae","Type":"ContainerStarted","Data":"92b57afa7ec6171aa855f2468ad770f755c794746d468ecebac217edd452c77e"} 
Mar 10 18:51:45 crc kubenswrapper[4861]: I0310 18:51:45.356312 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 10 18:51:45 crc kubenswrapper[4861]: I0310 18:51:45.356379 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-jbxs4" Mar 10 18:51:45 crc kubenswrapper[4861]: E0310 18:51:45.356663 4861 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-10 18:51:45.856648971 +0000 UTC m=+249.620084931 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 10 18:51:45 crc kubenswrapper[4861]: I0310 18:51:45.366181 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-525cw" podStartSLOduration=190.366166433 podStartE2EDuration="3m10.366166433s" podCreationTimestamp="2026-03-10 18:48:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-10 18:51:45.356462048 +0000 UTC m=+249.119898008" 
watchObservedRunningTime="2026-03-10 18:51:45.366166433 +0000 UTC m=+249.129602393" Mar 10 18:51:45 crc kubenswrapper[4861]: I0310 18:51:45.402344 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-jbxs4" podStartSLOduration=190.402325113 podStartE2EDuration="3m10.402325113s" podCreationTimestamp="2026-03-10 18:48:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-10 18:51:45.401963467 +0000 UTC m=+249.165399447" watchObservedRunningTime="2026-03-10 18:51:45.402325113 +0000 UTC m=+249.165761073" Mar 10 18:51:45 crc kubenswrapper[4861]: I0310 18:51:45.420611 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-qsq94" event={"ID":"044ff649-6d8e-4b0b-bfaa-8018e00e105d","Type":"ContainerStarted","Data":"519b8d9f4647e23bdf81073de43c386f35f40a61161af9c5a41ff9cced7d2e82"} Mar 10 18:51:45 crc kubenswrapper[4861]: I0310 18:51:45.421490 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-qsq94" Mar 10 18:51:45 crc kubenswrapper[4861]: I0310 18:51:45.446974 4861 patch_prober.go:28] interesting pod/downloads-7954f5f757-qsq94 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.29:8080/\": dial tcp 10.217.0.29:8080: connect: connection refused" start-of-body= Mar 10 18:51:45 crc kubenswrapper[4861]: I0310 18:51:45.447033 4861 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-qsq94" podUID="044ff649-6d8e-4b0b-bfaa-8018e00e105d" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.29:8080/\": dial tcp 10.217.0.29:8080: connect: connection refused" Mar 10 18:51:45 crc kubenswrapper[4861]: I0310 18:51:45.449318 4861 pod_startup_latency_tracker.go:104] 
"Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-9dcxh" podStartSLOduration=190.449306795 podStartE2EDuration="3m10.449306795s" podCreationTimestamp="2026-03-10 18:48:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-10 18:51:45.442259822 +0000 UTC m=+249.205695792" watchObservedRunningTime="2026-03-10 18:51:45.449306795 +0000 UTC m=+249.212742755" Mar 10 18:51:45 crc kubenswrapper[4861]: I0310 18:51:45.458020 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j4q94\" (UID: \"1c9f5f8c-64b8-4f10-999c-9cb2f24efef3\") " pod="openshift-image-registry/image-registry-697d97f7c8-j4q94" Mar 10 18:51:45 crc kubenswrapper[4861]: E0310 18:51:45.459690 4861 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-10 18:51:45.959679612 +0000 UTC m=+249.723115572 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j4q94" (UID: "1c9f5f8c-64b8-4f10-999c-9cb2f24efef3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 10 18:51:45 crc kubenswrapper[4861]: I0310 18:51:45.480073 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-b4s99" event={"ID":"754666ad-657a-4e98-8feb-47f74d33dc3c","Type":"ContainerStarted","Data":"044a5dc415fa9f6214e9a7f4e081eeb2fdbc2bb55766d6ce0152f62f9f798d1c"} Mar 10 18:51:45 crc kubenswrapper[4861]: I0310 18:51:45.492845 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-9x9tv" podStartSLOduration=189.492829702 podStartE2EDuration="3m9.492829702s" podCreationTimestamp="2026-03-10 18:48:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-10 18:51:45.482803112 +0000 UTC m=+249.246239082" watchObservedRunningTime="2026-03-10 18:51:45.492829702 +0000 UTC m=+249.256265662" Mar 10 18:51:45 crc kubenswrapper[4861]: I0310 18:51:45.494518 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-mdswb" event={"ID":"7660cda2-b1cc-43a9-b2d2-8e39f7b2e5b1","Type":"ContainerStarted","Data":"9b8cbb2fb30116187feaaf4039a8a4cedada8b6d756b8aa9b76579b0f9fd1165"} Mar 10 18:51:45 crc kubenswrapper[4861]: I0310 18:51:45.519083 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-rzhxp" 
event={"ID":"cf512bc0-beb2-4784-ac61-7f7a22ccc3e9","Type":"ContainerStarted","Data":"0148dff305706f127b6232dce4ee6596460fbe2fb7718614c7e934460b44e428"} Mar 10 18:51:45 crc kubenswrapper[4861]: I0310 18:51:45.522274 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-mb9ln" event={"ID":"ed642e8b-2ae6-4db9-a035-0a568c32c47b","Type":"ContainerStarted","Data":"0640d6aaced3487d14025199b8c73c16dca7a1288935c044a1706c26ba8bf275"} Mar 10 18:51:45 crc kubenswrapper[4861]: I0310 18:51:45.551673 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-4d9gw" podStartSLOduration=190.551654904 podStartE2EDuration="3m10.551654904s" podCreationTimestamp="2026-03-10 18:48:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-10 18:51:45.530031198 +0000 UTC m=+249.293467178" watchObservedRunningTime="2026-03-10 18:51:45.551654904 +0000 UTC m=+249.315090864" Mar 10 18:51:45 crc kubenswrapper[4861]: I0310 18:51:45.560188 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 10 18:51:45 crc kubenswrapper[4861]: E0310 18:51:45.563943 4861 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-10 18:51:46.06390008 +0000 UTC m=+249.827336040 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 10 18:51:45 crc kubenswrapper[4861]: I0310 18:51:45.613894 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-nmc5h" podStartSLOduration=189.613881221 podStartE2EDuration="3m9.613881221s" podCreationTimestamp="2026-03-10 18:48:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-10 18:51:45.589112084 +0000 UTC m=+249.352548054" watchObservedRunningTime="2026-03-10 18:51:45.613881221 +0000 UTC m=+249.377317181" Mar 10 18:51:45 crc kubenswrapper[4861]: I0310 18:51:45.615102 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-jbgkh" podStartSLOduration=189.61509679 podStartE2EDuration="3m9.61509679s" podCreationTimestamp="2026-03-10 18:48:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-10 18:51:45.613073857 +0000 UTC m=+249.376509817" watchObservedRunningTime="2026-03-10 18:51:45.61509679 +0000 UTC m=+249.378532750" Mar 10 18:51:45 crc kubenswrapper[4861]: I0310 18:51:45.623089 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-4d9gw" Mar 10 18:51:45 crc kubenswrapper[4861]: I0310 18:51:45.623985 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-apiserver/apiserver-76f77b778f-4d9gw" Mar 10 18:51:45 crc kubenswrapper[4861]: I0310 18:51:45.626325 4861 patch_prober.go:28] interesting pod/apiserver-76f77b778f-4d9gw container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="Get \"https://10.217.0.6:8443/livez\": dial tcp 10.217.0.6:8443: connect: connection refused" start-of-body= Mar 10 18:51:45 crc kubenswrapper[4861]: I0310 18:51:45.626360 4861 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-76f77b778f-4d9gw" podUID="f5cbf703-ecdc-42d3-b313-f69ec71399fa" containerName="openshift-apiserver" probeResult="failure" output="Get \"https://10.217.0.6:8443/livez\": dial tcp 10.217.0.6:8443: connect: connection refused" Mar 10 18:51:45 crc kubenswrapper[4861]: I0310 18:51:45.662501 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j4q94\" (UID: \"1c9f5f8c-64b8-4f10-999c-9cb2f24efef3\") " pod="openshift-image-registry/image-registry-697d97f7c8-j4q94" Mar 10 18:51:45 crc kubenswrapper[4861]: E0310 18:51:45.666195 4861 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-10 18:51:46.166179888 +0000 UTC m=+249.929615838 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j4q94" (UID: "1c9f5f8c-64b8-4f10-999c-9cb2f24efef3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 10 18:51:45 crc kubenswrapper[4861]: I0310 18:51:45.697678 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-rzhxp" podStartSLOduration=190.697662633 podStartE2EDuration="3m10.697662633s" podCreationTimestamp="2026-03-10 18:48:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-10 18:51:45.696157828 +0000 UTC m=+249.459593788" watchObservedRunningTime="2026-03-10 18:51:45.697662633 +0000 UTC m=+249.461098593" Mar 10 18:51:45 crc kubenswrapper[4861]: I0310 18:51:45.698218 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-zgxhw" podStartSLOduration=190.698214731 podStartE2EDuration="3m10.698214731s" podCreationTimestamp="2026-03-10 18:48:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-10 18:51:45.652534729 +0000 UTC m=+249.415970699" watchObservedRunningTime="2026-03-10 18:51:45.698214731 +0000 UTC m=+249.461650691" Mar 10 18:51:45 crc kubenswrapper[4861]: I0310 18:51:45.761819 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-5qqcg" Mar 10 18:51:45 crc kubenswrapper[4861]: I0310 18:51:45.770326 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-5qqcg" Mar 10 18:51:45 crc kubenswrapper[4861]: I0310 18:51:45.770617 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 10 18:51:45 crc kubenswrapper[4861]: E0310 18:51:45.770891 4861 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-10 18:51:46.270877145 +0000 UTC m=+250.034313105 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 10 18:51:45 crc kubenswrapper[4861]: I0310 18:51:45.802009 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-qsq94" podStartSLOduration=190.801990973 podStartE2EDuration="3m10.801990973s" podCreationTimestamp="2026-03-10 18:48:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-10 18:51:45.783978024 +0000 UTC m=+249.547413994" watchObservedRunningTime="2026-03-10 18:51:45.801990973 +0000 UTC m=+249.565426933" Mar 10 18:51:45 crc kubenswrapper[4861]: I0310 18:51:45.803511 4861 pod_startup_latency_tracker.go:104] "Observed pod 
startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-mb9ln" podStartSLOduration=190.803503697 podStartE2EDuration="3m10.803503697s" podCreationTimestamp="2026-03-10 18:48:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-10 18:51:45.738898482 +0000 UTC m=+249.502334442" watchObservedRunningTime="2026-03-10 18:51:45.803503697 +0000 UTC m=+249.566939657" Mar 10 18:51:45 crc kubenswrapper[4861]: I0310 18:51:45.823748 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-rzhxp" Mar 10 18:51:45 crc kubenswrapper[4861]: I0310 18:51:45.828155 4861 patch_prober.go:28] interesting pod/router-default-5444994796-rzhxp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 10 18:51:45 crc kubenswrapper[4861]: [-]has-synced failed: reason withheld Mar 10 18:51:45 crc kubenswrapper[4861]: [+]process-running ok Mar 10 18:51:45 crc kubenswrapper[4861]: healthz check failed Mar 10 18:51:45 crc kubenswrapper[4861]: I0310 18:51:45.828203 4861 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-rzhxp" podUID="cf512bc0-beb2-4784-ac61-7f7a22ccc3e9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 10 18:51:45 crc kubenswrapper[4861]: I0310 18:51:45.871634 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j4q94\" (UID: \"1c9f5f8c-64b8-4f10-999c-9cb2f24efef3\") " pod="openshift-image-registry/image-registry-697d97f7c8-j4q94" Mar 10 
18:51:45 crc kubenswrapper[4861]: E0310 18:51:45.871990 4861 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-10 18:51:46.371978714 +0000 UTC m=+250.135414674 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j4q94" (UID: "1c9f5f8c-64b8-4f10-999c-9cb2f24efef3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 10 18:51:45 crc kubenswrapper[4861]: I0310 18:51:45.872236 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-864wb" Mar 10 18:51:45 crc kubenswrapper[4861]: I0310 18:51:45.877463 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-b4s99" podStartSLOduration=190.877449172 podStartE2EDuration="3m10.877449172s" podCreationTimestamp="2026-03-10 18:48:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-10 18:51:45.875783695 +0000 UTC m=+249.639219655" watchObservedRunningTime="2026-03-10 18:51:45.877449172 +0000 UTC m=+249.640885132" Mar 10 18:51:45 crc kubenswrapper[4861]: I0310 18:51:45.953507 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-mdswb" podStartSLOduration=190.953489989 podStartE2EDuration="3m10.953489989s" podCreationTimestamp="2026-03-10 18:48:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 
UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-10 18:51:45.947107687 +0000 UTC m=+249.710543647" watchObservedRunningTime="2026-03-10 18:51:45.953489989 +0000 UTC m=+249.716925949" Mar 10 18:51:45 crc kubenswrapper[4861]: I0310 18:51:45.972274 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 10 18:51:45 crc kubenswrapper[4861]: E0310 18:51:45.972547 4861 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-10 18:51:46.472530814 +0000 UTC m=+250.235966774 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 10 18:51:46 crc kubenswrapper[4861]: I0310 18:51:46.075520 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j4q94\" (UID: \"1c9f5f8c-64b8-4f10-999c-9cb2f24efef3\") " pod="openshift-image-registry/image-registry-697d97f7c8-j4q94" Mar 10 18:51:46 crc kubenswrapper[4861]: E0310 18:51:46.076135 4861 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-10 18:51:46.576123613 +0000 UTC m=+250.339559573 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j4q94" (UID: "1c9f5f8c-64b8-4f10-999c-9cb2f24efef3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 10 18:51:46 crc kubenswrapper[4861]: I0310 18:51:46.176848 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 10 18:51:46 crc kubenswrapper[4861]: E0310 18:51:46.177053 4861 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-10 18:51:46.677028399 +0000 UTC m=+250.440464359 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 10 18:51:46 crc kubenswrapper[4861]: I0310 18:51:46.177097 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j4q94\" (UID: \"1c9f5f8c-64b8-4f10-999c-9cb2f24efef3\") " pod="openshift-image-registry/image-registry-697d97f7c8-j4q94" Mar 10 18:51:46 crc kubenswrapper[4861]: E0310 18:51:46.177417 4861 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-10 18:51:46.677410265 +0000 UTC m=+250.440846225 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j4q94" (UID: "1c9f5f8c-64b8-4f10-999c-9cb2f24efef3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 10 18:51:46 crc kubenswrapper[4861]: I0310 18:51:46.277793 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 10 18:51:46 crc kubenswrapper[4861]: E0310 18:51:46.278064 4861 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-10 18:51:46.778040467 +0000 UTC m=+250.541476427 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 10 18:51:46 crc kubenswrapper[4861]: I0310 18:51:46.378636 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j4q94\" (UID: \"1c9f5f8c-64b8-4f10-999c-9cb2f24efef3\") " pod="openshift-image-registry/image-registry-697d97f7c8-j4q94" Mar 10 18:51:46 crc kubenswrapper[4861]: E0310 18:51:46.378925 4861 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-10 18:51:46.878913052 +0000 UTC m=+250.642349012 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j4q94" (UID: "1c9f5f8c-64b8-4f10-999c-9cb2f24efef3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 10 18:51:46 crc kubenswrapper[4861]: I0310 18:51:46.457062 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-5qqcg" Mar 10 18:51:46 crc kubenswrapper[4861]: I0310 18:51:46.479096 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 10 18:51:46 crc kubenswrapper[4861]: E0310 18:51:46.479232 4861 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-10 18:51:46.979208029 +0000 UTC m=+250.742643989 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 10 18:51:46 crc kubenswrapper[4861]: I0310 18:51:46.479278 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j4q94\" (UID: \"1c9f5f8c-64b8-4f10-999c-9cb2f24efef3\") " pod="openshift-image-registry/image-registry-697d97f7c8-j4q94" Mar 10 18:51:46 crc kubenswrapper[4861]: E0310 18:51:46.479540 4861 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-10 18:51:46.979529014 +0000 UTC m=+250.742964974 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j4q94" (UID: "1c9f5f8c-64b8-4f10-999c-9cb2f24efef3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 10 18:51:46 crc kubenswrapper[4861]: I0310 18:51:46.531726 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-x2llm" event={"ID":"562c8783-06a5-4205-ab38-9aabc25c1033","Type":"ContainerStarted","Data":"891b721aac30f9d26d62114a21e835fb99d4ec4ce0bf3e2ee3752de219131b7c"} Mar 10 18:51:46 crc kubenswrapper[4861]: I0310 18:51:46.540178 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-kvtz7" event={"ID":"f8c89f84-e0e2-4259-8245-1410699e2b6c","Type":"ContainerStarted","Data":"637b2c697cec86fa9b5180043aaadfa0842ea205f715c366015083b1ca824cec"} Mar 10 18:51:46 crc kubenswrapper[4861]: I0310 18:51:46.540398 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-kvtz7" Mar 10 18:51:46 crc kubenswrapper[4861]: I0310 18:51:46.555377 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-x2llm" podStartSLOduration=191.555364388 podStartE2EDuration="3m11.555364388s" podCreationTimestamp="2026-03-10 18:48:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-10 18:51:46.552754836 +0000 UTC m=+250.316190806" watchObservedRunningTime="2026-03-10 18:51:46.555364388 +0000 UTC m=+250.318800348" Mar 10 18:51:46 crc kubenswrapper[4861]: I0310 18:51:46.558853 4861 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-8wcvl" event={"ID":"f01993c1-5b0e-4dd2-a98d-8685e13b474c","Type":"ContainerStarted","Data":"58374b3435da506d8b04de4555a4888331461f5d7a564c9d0b510e95600c7caf"} Mar 10 18:51:46 crc kubenswrapper[4861]: I0310 18:51:46.562961 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-9g768" event={"ID":"d5401d63-54cf-4958-a402-125bbf5793f1","Type":"ContainerStarted","Data":"a5cfa584ec955fac41c6a96fa850824113735283fc60d4281f03ebfd7b31b989"} Mar 10 18:51:46 crc kubenswrapper[4861]: I0310 18:51:46.579790 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 10 18:51:46 crc kubenswrapper[4861]: E0310 18:51:46.580111 4861 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-10 18:51:47.080096974 +0000 UTC m=+250.843532934 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 10 18:51:46 crc kubenswrapper[4861]: I0310 18:51:46.593767 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-4d9gw" event={"ID":"f5cbf703-ecdc-42d3-b313-f69ec71399fa","Type":"ContainerStarted","Data":"6b7a2aaf92059b7b009ca118e8c899c651aa42c719171cd179a552caae8403d8"} Mar 10 18:51:46 crc kubenswrapper[4861]: I0310 18:51:46.600901 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-kvtz7" podStartSLOduration=8.600868847 podStartE2EDuration="8.600868847s" podCreationTimestamp="2026-03-10 18:51:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-10 18:51:46.579326841 +0000 UTC m=+250.342762821" watchObservedRunningTime="2026-03-10 18:51:46.600868847 +0000 UTC m=+250.364304807" Mar 10 18:51:46 crc kubenswrapper[4861]: I0310 18:51:46.601416 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-9g768" podStartSLOduration=191.601410555 podStartE2EDuration="3m11.601410555s" podCreationTimestamp="2026-03-10 18:48:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-10 18:51:46.596955824 +0000 UTC m=+250.360391784" watchObservedRunningTime="2026-03-10 18:51:46.601410555 +0000 UTC m=+250.364846515" Mar 10 18:51:46 crc kubenswrapper[4861]: I0310 18:51:46.629919 
4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-zgxhw" event={"ID":"d82c061b-6eea-49c1-8017-c401d3bd0f58","Type":"ContainerStarted","Data":"e141ade41fd35398f010c614dfdfd25ca3b37385b0e4ddcaedd05e14a0c63819"} Mar 10 18:51:46 crc kubenswrapper[4861]: I0310 18:51:46.642263 4861 ???:1] "http: TLS handshake error from 192.168.126.11:60434: no serving certificate available for the kubelet" Mar 10 18:51:46 crc kubenswrapper[4861]: I0310 18:51:46.652687 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-dk9rs" event={"ID":"b6781d90-ea76-44b0-b2eb-44641332f632","Type":"ContainerStarted","Data":"43bb143579ac4eb7f05dd6f2de11173220d55a4297df9142ad7fe0c5d2eca769"} Mar 10 18:51:46 crc kubenswrapper[4861]: I0310 18:51:46.660835 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29552805-vbm4k" event={"ID":"e835c42f-7b8a-45a3-a153-ffab9b5386e0","Type":"ContainerStarted","Data":"61ca3e1060d6dfb28752f8138422c83504a8608695be1459552183b0e5aa08e4"} Mar 10 18:51:46 crc kubenswrapper[4861]: I0310 18:51:46.666642 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-mdswb" event={"ID":"7660cda2-b1cc-43a9-b2d2-8e39f7b2e5b1","Type":"ContainerStarted","Data":"b8b6711938f8fc5f09705896003ec2b953a9cd7bf78ccfddd77f6c11d058bba6"} Mar 10 18:51:46 crc kubenswrapper[4861]: I0310 18:51:46.676040 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-2rvxn" event={"ID":"c06e51d0-e817-41ac-9d69-3ef2099f8ba8","Type":"ContainerStarted","Data":"d34886d03e581dc9ed9e035f80019f484253bf8b36ea3bd0c19e765db3d5d0c1"} Mar 10 18:51:46 crc kubenswrapper[4861]: I0310 18:51:46.676275 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-2rvxn" 
event={"ID":"c06e51d0-e817-41ac-9d69-3ef2099f8ba8","Type":"ContainerStarted","Data":"448ae070f68a1c585e8469aa6f596123a5de731fd7fb2d8794ff901b367c33aa"} Mar 10 18:51:46 crc kubenswrapper[4861]: I0310 18:51:46.681747 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j4q94\" (UID: \"1c9f5f8c-64b8-4f10-999c-9cb2f24efef3\") " pod="openshift-image-registry/image-registry-697d97f7c8-j4q94" Mar 10 18:51:46 crc kubenswrapper[4861]: I0310 18:51:46.681856 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29552805-vbm4k" podStartSLOduration=191.681840273 podStartE2EDuration="3m11.681840273s" podCreationTimestamp="2026-03-10 18:48:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-10 18:51:46.680896679 +0000 UTC m=+250.444332649" watchObservedRunningTime="2026-03-10 18:51:46.681840273 +0000 UTC m=+250.445276233" Mar 10 18:51:46 crc kubenswrapper[4861]: E0310 18:51:46.683587 4861 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-10 18:51:47.183574571 +0000 UTC m=+250.947010531 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j4q94" (UID: "1c9f5f8c-64b8-4f10-999c-9cb2f24efef3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 10 18:51:46 crc kubenswrapper[4861]: I0310 18:51:46.689682 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-9x9tv" event={"ID":"592a6d7a-6cf4-4a23-bc7d-2444b6387faa","Type":"ContainerStarted","Data":"b5cb758537dba6e85888f21631412b1d9249028f0df16356fea47dcdeef571e1"} Mar 10 18:51:46 crc kubenswrapper[4861]: I0310 18:51:46.693842 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-456qk" event={"ID":"f6f89a90-023b-4b69-8848-702a50ab522c","Type":"ContainerStarted","Data":"5b536b06cc22c5a4104792d4179f23fa63dd2c52155ea03603650fa02c356765"} Mar 10 18:51:46 crc kubenswrapper[4861]: I0310 18:51:46.694996 4861 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-7qnc2 container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.37:8080/healthz\": dial tcp 10.217.0.37:8080: connect: connection refused" start-of-body= Mar 10 18:51:46 crc kubenswrapper[4861]: I0310 18:51:46.695130 4861 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-7qnc2" podUID="8f1b1590-e261-4e1f-9427-039f5a9b3db7" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.37:8080/healthz\": dial tcp 10.217.0.37:8080: connect: connection refused" Mar 10 18:51:46 crc kubenswrapper[4861]: I0310 18:51:46.695594 4861 patch_prober.go:28] interesting 
pod/downloads-7954f5f757-qsq94 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.29:8080/\": dial tcp 10.217.0.29:8080: connect: connection refused" start-of-body= Mar 10 18:51:46 crc kubenswrapper[4861]: I0310 18:51:46.695633 4861 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-qsq94" podUID="044ff649-6d8e-4b0b-bfaa-8018e00e105d" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.29:8080/\": dial tcp 10.217.0.29:8080: connect: connection refused" Mar 10 18:51:46 crc kubenswrapper[4861]: I0310 18:51:46.714400 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-9khgp" Mar 10 18:51:46 crc kubenswrapper[4861]: I0310 18:51:46.714444 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-2rvxn" podStartSLOduration=191.714422116 podStartE2EDuration="3m11.714422116s" podCreationTimestamp="2026-03-10 18:48:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-10 18:51:46.706524659 +0000 UTC m=+250.469960619" watchObservedRunningTime="2026-03-10 18:51:46.714422116 +0000 UTC m=+250.477858076" Mar 10 18:51:46 crc kubenswrapper[4861]: I0310 18:51:46.717854 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-2m858" Mar 10 18:51:46 crc kubenswrapper[4861]: I0310 18:51:46.721063 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-5qqcg" Mar 10 18:51:46 crc kubenswrapper[4861]: I0310 18:51:46.745553 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-multus/multus-admission-controller-857f4d67dd-456qk" podStartSLOduration=190.745537224 podStartE2EDuration="3m10.745537224s" podCreationTimestamp="2026-03-10 18:48:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-10 18:51:46.743061264 +0000 UTC m=+250.506497224" watchObservedRunningTime="2026-03-10 18:51:46.745537224 +0000 UTC m=+250.508973184" Mar 10 18:51:46 crc kubenswrapper[4861]: I0310 18:51:46.779056 4861 ???:1] "http: TLS handshake error from 192.168.126.11:60442: no serving certificate available for the kubelet" Mar 10 18:51:46 crc kubenswrapper[4861]: I0310 18:51:46.782902 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 10 18:51:46 crc kubenswrapper[4861]: E0310 18:51:46.783825 4861 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-10 18:51:47.283811356 +0000 UTC m=+251.047247316 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 10 18:51:46 crc kubenswrapper[4861]: I0310 18:51:46.827606 4861 patch_prober.go:28] interesting pod/router-default-5444994796-rzhxp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 10 18:51:46 crc kubenswrapper[4861]: [-]has-synced failed: reason withheld Mar 10 18:51:46 crc kubenswrapper[4861]: [+]process-running ok Mar 10 18:51:46 crc kubenswrapper[4861]: healthz check failed Mar 10 18:51:46 crc kubenswrapper[4861]: I0310 18:51:46.827965 4861 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-rzhxp" podUID="cf512bc0-beb2-4784-ac61-7f7a22ccc3e9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 10 18:51:46 crc kubenswrapper[4861]: I0310 18:51:46.888994 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j4q94\" (UID: \"1c9f5f8c-64b8-4f10-999c-9cb2f24efef3\") " pod="openshift-image-registry/image-registry-697d97f7c8-j4q94" Mar 10 18:51:46 crc kubenswrapper[4861]: E0310 18:51:46.896250 4861 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-03-10 18:51:47.396235126 +0000 UTC m=+251.159671176 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j4q94" (UID: "1c9f5f8c-64b8-4f10-999c-9cb2f24efef3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 10 18:51:46 crc kubenswrapper[4861]: I0310 18:51:46.922246 4861 ???:1] "http: TLS handshake error from 192.168.126.11:60452: no serving certificate available for the kubelet" Mar 10 18:51:46 crc kubenswrapper[4861]: I0310 18:51:46.990668 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 10 18:51:46 crc kubenswrapper[4861]: E0310 18:51:46.991224 4861 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-10 18:51:47.491209667 +0000 UTC m=+251.254645627 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 10 18:51:46 crc kubenswrapper[4861]: I0310 18:51:46.999023 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-nmc5h" Mar 10 18:51:47 crc kubenswrapper[4861]: I0310 18:51:47.018717 4861 ???:1] "http: TLS handshake error from 192.168.126.11:60454: no serving certificate available for the kubelet" Mar 10 18:51:47 crc kubenswrapper[4861]: I0310 18:51:47.091910 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j4q94\" (UID: \"1c9f5f8c-64b8-4f10-999c-9cb2f24efef3\") " pod="openshift-image-registry/image-registry-697d97f7c8-j4q94" Mar 10 18:51:47 crc kubenswrapper[4861]: E0310 18:51:47.093247 4861 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-10 18:51:47.593232522 +0000 UTC m=+251.356668482 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j4q94" (UID: "1c9f5f8c-64b8-4f10-999c-9cb2f24efef3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 10 18:51:47 crc kubenswrapper[4861]: I0310 18:51:47.123688 4861 ???:1] "http: TLS handshake error from 192.168.126.11:60462: no serving certificate available for the kubelet" Mar 10 18:51:47 crc kubenswrapper[4861]: I0310 18:51:47.192829 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 10 18:51:47 crc kubenswrapper[4861]: E0310 18:51:47.193244 4861 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-10 18:51:47.693228612 +0000 UTC m=+251.456664562 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 10 18:51:47 crc kubenswrapper[4861]: I0310 18:51:47.213910 4861 ???:1] "http: TLS handshake error from 192.168.126.11:60468: no serving certificate available for the kubelet" Mar 10 18:51:47 crc kubenswrapper[4861]: I0310 18:51:47.294604 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j4q94\" (UID: \"1c9f5f8c-64b8-4f10-999c-9cb2f24efef3\") " pod="openshift-image-registry/image-registry-697d97f7c8-j4q94" Mar 10 18:51:47 crc kubenswrapper[4861]: E0310 18:51:47.295071 4861 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-10 18:51:47.795054064 +0000 UTC m=+251.558490024 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j4q94" (UID: "1c9f5f8c-64b8-4f10-999c-9cb2f24efef3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 10 18:51:47 crc kubenswrapper[4861]: I0310 18:51:47.318660 4861 ???:1] "http: TLS handshake error from 192.168.126.11:60480: no serving certificate available for the kubelet" Mar 10 18:51:47 crc kubenswrapper[4861]: I0310 18:51:47.321881 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-tpnft"] Mar 10 18:51:47 crc kubenswrapper[4861]: I0310 18:51:47.347926 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-qm57d"] Mar 10 18:51:47 crc kubenswrapper[4861]: I0310 18:51:47.397171 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 10 18:51:47 crc kubenswrapper[4861]: E0310 18:51:47.397486 4861 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-10 18:51:47.897472183 +0000 UTC m=+251.660908133 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 10 18:51:47 crc kubenswrapper[4861]: I0310 18:51:47.455359 4861 ???:1] "http: TLS handshake error from 192.168.126.11:60484: no serving certificate available for the kubelet" Mar 10 18:51:47 crc kubenswrapper[4861]: I0310 18:51:47.497991 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j4q94\" (UID: \"1c9f5f8c-64b8-4f10-999c-9cb2f24efef3\") " pod="openshift-image-registry/image-registry-697d97f7c8-j4q94" Mar 10 18:51:47 crc kubenswrapper[4861]: E0310 18:51:47.498307 4861 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-10 18:51:47.998296198 +0000 UTC m=+251.761732158 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j4q94" (UID: "1c9f5f8c-64b8-4f10-999c-9cb2f24efef3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Mar 10 18:51:47 crc kubenswrapper[4861]: I0310 18:51:47.598551 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Mar 10 18:51:47 crc kubenswrapper[4861]: E0310 18:51:47.598920 4861 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-10 18:51:48.098897549 +0000 UTC m=+251.862333509 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Mar 10 18:51:47 crc kubenswrapper[4861]: I0310 18:51:47.661974 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-pmtqc"]
Mar 10 18:51:47 crc kubenswrapper[4861]: I0310 18:51:47.662906 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-pmtqc"
Mar 10 18:51:47 crc kubenswrapper[4861]: I0310 18:51:47.667036 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl"
Mar 10 18:51:47 crc kubenswrapper[4861]: I0310 18:51:47.670645 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-jbxs4"
Mar 10 18:51:47 crc kubenswrapper[4861]: I0310 18:51:47.675404 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-pmtqc"]
Mar 10 18:51:47 crc kubenswrapper[4861]: I0310 18:51:47.699491 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j4q94\" (UID: \"1c9f5f8c-64b8-4f10-999c-9cb2f24efef3\") " pod="openshift-image-registry/image-registry-697d97f7c8-j4q94"
Mar 10 18:51:47 crc kubenswrapper[4861]: I0310 18:51:47.699545 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/edc16f3e-454b-4167-9d26-c50bba23281e-utilities\") pod \"community-operators-pmtqc\" (UID: \"edc16f3e-454b-4167-9d26-c50bba23281e\") " pod="openshift-marketplace/community-operators-pmtqc"
Mar 10 18:51:47 crc kubenswrapper[4861]: I0310 18:51:47.699593 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/edc16f3e-454b-4167-9d26-c50bba23281e-catalog-content\") pod \"community-operators-pmtqc\" (UID: \"edc16f3e-454b-4167-9d26-c50bba23281e\") " pod="openshift-marketplace/community-operators-pmtqc"
Mar 10 18:51:47 crc kubenswrapper[4861]: I0310 18:51:47.699635 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lghk9\" (UniqueName: \"kubernetes.io/projected/edc16f3e-454b-4167-9d26-c50bba23281e-kube-api-access-lghk9\") pod \"community-operators-pmtqc\" (UID: \"edc16f3e-454b-4167-9d26-c50bba23281e\") " pod="openshift-marketplace/community-operators-pmtqc"
Mar 10 18:51:47 crc kubenswrapper[4861]: E0310 18:51:47.699905 4861 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-10 18:51:48.199893886 +0000 UTC m=+251.963329846 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j4q94" (UID: "1c9f5f8c-64b8-4f10-999c-9cb2f24efef3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Mar 10 18:51:47 crc kubenswrapper[4861]: I0310 18:51:47.702039 4861 generic.go:334] "Generic (PLEG): container finished" podID="e835c42f-7b8a-45a3-a153-ffab9b5386e0" containerID="61ca3e1060d6dfb28752f8138422c83504a8608695be1459552183b0e5aa08e4" exitCode=0
Mar 10 18:51:47 crc kubenswrapper[4861]: I0310 18:51:47.702100 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29552805-vbm4k" event={"ID":"e835c42f-7b8a-45a3-a153-ffab9b5386e0","Type":"ContainerDied","Data":"61ca3e1060d6dfb28752f8138422c83504a8608695be1459552183b0e5aa08e4"}
Mar 10 18:51:47 crc kubenswrapper[4861]: I0310 18:51:47.708674 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-dk9rs" event={"ID":"b6781d90-ea76-44b0-b2eb-44641332f632","Type":"ContainerStarted","Data":"f2289fc09f7c2e75b4c620d83bc6751e34cbcf31577f88487ecac5ba41b82e46"}
Mar 10 18:51:47 crc kubenswrapper[4861]: I0310 18:51:47.709563 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-qm57d" podUID="9896da3e-4505-4ced-b1e7-cd47a951971e" containerName="route-controller-manager" containerID="cri-o://c8782be5a98badef8f8069bb1fd3d7bebd08c7a7ab1a3a07acff205e32c047d7" gracePeriod=30
Mar 10 18:51:47 crc kubenswrapper[4861]: I0310 18:51:47.709792 4861 patch_prober.go:28] interesting pod/downloads-7954f5f757-qsq94 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.29:8080/\": dial tcp 10.217.0.29:8080: connect: connection refused" start-of-body=
Mar 10 18:51:47 crc kubenswrapper[4861]: I0310 18:51:47.709819 4861 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-qsq94" podUID="044ff649-6d8e-4b0b-bfaa-8018e00e105d" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.29:8080/\": dial tcp 10.217.0.29:8080: connect: connection refused"
Mar 10 18:51:47 crc kubenswrapper[4861]: I0310 18:51:47.709911 4861 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-7qnc2 container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.37:8080/healthz\": dial tcp 10.217.0.37:8080: connect: connection refused" start-of-body=
Mar 10 18:51:47 crc kubenswrapper[4861]: I0310 18:51:47.709966 4861 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-7qnc2" podUID="8f1b1590-e261-4e1f-9427-039f5a9b3db7" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.37:8080/healthz\": dial tcp 10.217.0.37:8080: connect: connection refused"
Mar 10 18:51:47 crc kubenswrapper[4861]: I0310 18:51:47.710993 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-tpnft" podUID="2b5b4c87-f6e3-4523-837d-2f45ad711489" containerName="controller-manager" containerID="cri-o://086f6f16f5e2c8a160050db4f713b7308227c8a48a1053906df28411376ec15e" gracePeriod=30
Mar 10 18:51:47 crc kubenswrapper[4861]: I0310 18:51:47.756606 4861 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock"
Mar 10 18:51:47 crc kubenswrapper[4861]: I0310 18:51:47.805149 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Mar 10 18:51:47 crc kubenswrapper[4861]: E0310 18:51:47.805222 4861 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-10 18:51:48.305206963 +0000 UTC m=+252.068642923 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Mar 10 18:51:47 crc kubenswrapper[4861]: I0310 18:51:47.807296 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/edc16f3e-454b-4167-9d26-c50bba23281e-catalog-content\") pod \"community-operators-pmtqc\" (UID: \"edc16f3e-454b-4167-9d26-c50bba23281e\") " pod="openshift-marketplace/community-operators-pmtqc"
Mar 10 18:51:47 crc kubenswrapper[4861]: I0310 18:51:47.807447 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lghk9\" (UniqueName: \"kubernetes.io/projected/edc16f3e-454b-4167-9d26-c50bba23281e-kube-api-access-lghk9\") pod \"community-operators-pmtqc\" (UID: \"edc16f3e-454b-4167-9d26-c50bba23281e\") " pod="openshift-marketplace/community-operators-pmtqc"
Mar 10 18:51:47 crc kubenswrapper[4861]: I0310 18:51:47.807757 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j4q94\" (UID: \"1c9f5f8c-64b8-4f10-999c-9cb2f24efef3\") " pod="openshift-image-registry/image-registry-697d97f7c8-j4q94"
Mar 10 18:51:47 crc kubenswrapper[4861]: I0310 18:51:47.807894 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/edc16f3e-454b-4167-9d26-c50bba23281e-utilities\") pod \"community-operators-pmtqc\" (UID: \"edc16f3e-454b-4167-9d26-c50bba23281e\") " pod="openshift-marketplace/community-operators-pmtqc"
Mar 10 18:51:47 crc kubenswrapper[4861]: E0310 18:51:47.811415 4861 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-10 18:51:48.311400502 +0000 UTC m=+252.074836462 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j4q94" (UID: "1c9f5f8c-64b8-4f10-999c-9cb2f24efef3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Mar 10 18:51:47 crc kubenswrapper[4861]: I0310 18:51:47.816417 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/edc16f3e-454b-4167-9d26-c50bba23281e-utilities\") pod \"community-operators-pmtqc\" (UID: \"edc16f3e-454b-4167-9d26-c50bba23281e\") " pod="openshift-marketplace/community-operators-pmtqc"
Mar 10 18:51:47 crc kubenswrapper[4861]: I0310 18:51:47.820545 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/edc16f3e-454b-4167-9d26-c50bba23281e-catalog-content\") pod \"community-operators-pmtqc\" (UID: \"edc16f3e-454b-4167-9d26-c50bba23281e\") " pod="openshift-marketplace/community-operators-pmtqc"
Mar 10 18:51:47 crc kubenswrapper[4861]: I0310 18:51:47.826073 4861 patch_prober.go:28] interesting pod/router-default-5444994796-rzhxp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 10 18:51:47 crc kubenswrapper[4861]: [-]has-synced failed: reason withheld
Mar 10 18:51:47 crc kubenswrapper[4861]: [+]process-running ok
Mar 10 18:51:47 crc kubenswrapper[4861]: healthz check failed
Mar 10 18:51:47 crc kubenswrapper[4861]: I0310 18:51:47.826123 4861 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-rzhxp" podUID="cf512bc0-beb2-4784-ac61-7f7a22ccc3e9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 10 18:51:47 crc kubenswrapper[4861]: I0310 18:51:47.846341 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lghk9\" (UniqueName: \"kubernetes.io/projected/edc16f3e-454b-4167-9d26-c50bba23281e-kube-api-access-lghk9\") pod \"community-operators-pmtqc\" (UID: \"edc16f3e-454b-4167-9d26-c50bba23281e\") " pod="openshift-marketplace/community-operators-pmtqc"
Mar 10 18:51:47 crc kubenswrapper[4861]: I0310 18:51:47.855651 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-hw926"]
Mar 10 18:51:47 crc kubenswrapper[4861]: I0310 18:51:47.856580 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-hw926"
Mar 10 18:51:47 crc kubenswrapper[4861]: I0310 18:51:47.864009 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g"
Mar 10 18:51:47 crc kubenswrapper[4861]: I0310 18:51:47.873279 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-hw926"]
Mar 10 18:51:47 crc kubenswrapper[4861]: I0310 18:51:47.908829 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Mar 10 18:51:47 crc kubenswrapper[4861]: I0310 18:51:47.908989 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c76065df-dec9-4b14-bd49-8e2d134bf53f-catalog-content\") pod \"certified-operators-hw926\" (UID: \"c76065df-dec9-4b14-bd49-8e2d134bf53f\") " pod="openshift-marketplace/certified-operators-hw926"
Mar 10 18:51:47 crc kubenswrapper[4861]: I0310 18:51:47.909031 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c76065df-dec9-4b14-bd49-8e2d134bf53f-utilities\") pod \"certified-operators-hw926\" (UID: \"c76065df-dec9-4b14-bd49-8e2d134bf53f\") " pod="openshift-marketplace/certified-operators-hw926"
Mar 10 18:51:47 crc kubenswrapper[4861]: I0310 18:51:47.909048 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fckjm\" (UniqueName: \"kubernetes.io/projected/c76065df-dec9-4b14-bd49-8e2d134bf53f-kube-api-access-fckjm\") pod \"certified-operators-hw926\" (UID: \"c76065df-dec9-4b14-bd49-8e2d134bf53f\") " pod="openshift-marketplace/certified-operators-hw926"
Mar 10 18:51:47 crc kubenswrapper[4861]: E0310 18:51:47.909156 4861 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-10 18:51:48.409140488 +0000 UTC m=+252.172576448 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Mar 10 18:51:48 crc kubenswrapper[4861]: I0310 18:51:48.010424 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c76065df-dec9-4b14-bd49-8e2d134bf53f-catalog-content\") pod \"certified-operators-hw926\" (UID: \"c76065df-dec9-4b14-bd49-8e2d134bf53f\") " pod="openshift-marketplace/certified-operators-hw926"
Mar 10 18:51:48 crc kubenswrapper[4861]: I0310 18:51:48.010463 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j4q94\" (UID: \"1c9f5f8c-64b8-4f10-999c-9cb2f24efef3\") " pod="openshift-image-registry/image-registry-697d97f7c8-j4q94"
Mar 10 18:51:48 crc kubenswrapper[4861]: I0310 18:51:48.010491 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c76065df-dec9-4b14-bd49-8e2d134bf53f-utilities\") pod \"certified-operators-hw926\" (UID: \"c76065df-dec9-4b14-bd49-8e2d134bf53f\") " pod="openshift-marketplace/certified-operators-hw926"
Mar 10 18:51:48 crc kubenswrapper[4861]: I0310 18:51:48.010509 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fckjm\" (UniqueName: \"kubernetes.io/projected/c76065df-dec9-4b14-bd49-8e2d134bf53f-kube-api-access-fckjm\") pod \"certified-operators-hw926\" (UID: \"c76065df-dec9-4b14-bd49-8e2d134bf53f\") " pod="openshift-marketplace/certified-operators-hw926"
Mar 10 18:51:48 crc kubenswrapper[4861]: I0310 18:51:48.011186 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c76065df-dec9-4b14-bd49-8e2d134bf53f-catalog-content\") pod \"certified-operators-hw926\" (UID: \"c76065df-dec9-4b14-bd49-8e2d134bf53f\") " pod="openshift-marketplace/certified-operators-hw926"
Mar 10 18:51:48 crc kubenswrapper[4861]: E0310 18:51:48.011404 4861 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-10 18:51:48.511394596 +0000 UTC m=+252.274830556 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j4q94" (UID: "1c9f5f8c-64b8-4f10-999c-9cb2f24efef3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Mar 10 18:51:48 crc kubenswrapper[4861]: I0310 18:51:48.011730 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c76065df-dec9-4b14-bd49-8e2d134bf53f-utilities\") pod \"certified-operators-hw926\" (UID: \"c76065df-dec9-4b14-bd49-8e2d134bf53f\") " pod="openshift-marketplace/certified-operators-hw926"
Mar 10 18:51:48 crc kubenswrapper[4861]: I0310 18:51:48.034783 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fckjm\" (UniqueName: \"kubernetes.io/projected/c76065df-dec9-4b14-bd49-8e2d134bf53f-kube-api-access-fckjm\") pod \"certified-operators-hw926\" (UID: \"c76065df-dec9-4b14-bd49-8e2d134bf53f\") " pod="openshift-marketplace/certified-operators-hw926"
Mar 10 18:51:48 crc kubenswrapper[4861]: I0310 18:51:48.038010 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-pmtqc"
Mar 10 18:51:48 crc kubenswrapper[4861]: I0310 18:51:48.065421 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-sj857"]
Mar 10 18:51:48 crc kubenswrapper[4861]: I0310 18:51:48.067957 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-sj857"
Mar 10 18:51:48 crc kubenswrapper[4861]: I0310 18:51:48.074036 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-sj857"]
Mar 10 18:51:48 crc kubenswrapper[4861]: I0310 18:51:48.111115 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Mar 10 18:51:48 crc kubenswrapper[4861]: E0310 18:51:48.111647 4861 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-10 18:51:48.61163036 +0000 UTC m=+252.375066310 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Mar 10 18:51:48 crc kubenswrapper[4861]: I0310 18:51:48.191396 4861 ???:1] "http: TLS handshake error from 192.168.126.11:60496: no serving certificate available for the kubelet"
Mar 10 18:51:48 crc kubenswrapper[4861]: I0310 18:51:48.199761 4861 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2026-03-10T18:51:47.756861039Z","Handler":null,"Name":""}
Mar 10 18:51:48 crc kubenswrapper[4861]: I0310 18:51:48.216147 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1a9210c4-9579-4cfc-bf99-b652c3af6915-catalog-content\") pod \"community-operators-sj857\" (UID: \"1a9210c4-9579-4cfc-bf99-b652c3af6915\") " pod="openshift-marketplace/community-operators-sj857"
Mar 10 18:51:48 crc kubenswrapper[4861]: I0310 18:51:48.216180 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1a9210c4-9579-4cfc-bf99-b652c3af6915-utilities\") pod \"community-operators-sj857\" (UID: \"1a9210c4-9579-4cfc-bf99-b652c3af6915\") " pod="openshift-marketplace/community-operators-sj857"
Mar 10 18:51:48 crc kubenswrapper[4861]: I0310 18:51:48.216228 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nhjx5\" (UniqueName: \"kubernetes.io/projected/1a9210c4-9579-4cfc-bf99-b652c3af6915-kube-api-access-nhjx5\") pod \"community-operators-sj857\" (UID: \"1a9210c4-9579-4cfc-bf99-b652c3af6915\") " pod="openshift-marketplace/community-operators-sj857"
Mar 10 18:51:48 crc kubenswrapper[4861]: I0310 18:51:48.216272 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j4q94\" (UID: \"1c9f5f8c-64b8-4f10-999c-9cb2f24efef3\") " pod="openshift-image-registry/image-registry-697d97f7c8-j4q94"
Mar 10 18:51:48 crc kubenswrapper[4861]: E0310 18:51:48.216737 4861 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-10 18:51:48.716726864 +0000 UTC m=+252.480162824 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j4q94" (UID: "1c9f5f8c-64b8-4f10-999c-9cb2f24efef3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Mar 10 18:51:48 crc kubenswrapper[4861]: I0310 18:51:48.226880 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-hw926"
Mar 10 18:51:48 crc kubenswrapper[4861]: I0310 18:51:48.249882 4861 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0
Mar 10 18:51:48 crc kubenswrapper[4861]: I0310 18:51:48.249920 4861 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock
Mar 10 18:51:48 crc kubenswrapper[4861]: I0310 18:51:48.272628 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-65ndj"]
Mar 10 18:51:48 crc kubenswrapper[4861]: I0310 18:51:48.273592 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-65ndj"
Mar 10 18:51:48 crc kubenswrapper[4861]: I0310 18:51:48.278199 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-qm57d"
Mar 10 18:51:48 crc kubenswrapper[4861]: I0310 18:51:48.299929 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-65ndj"]
Mar 10 18:51:48 crc kubenswrapper[4861]: I0310 18:51:48.316922 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Mar 10 18:51:48 crc kubenswrapper[4861]: I0310 18:51:48.317129 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1a9210c4-9579-4cfc-bf99-b652c3af6915-catalog-content\") pod \"community-operators-sj857\" (UID: \"1a9210c4-9579-4cfc-bf99-b652c3af6915\") " pod="openshift-marketplace/community-operators-sj857"
Mar 10 18:51:48 crc kubenswrapper[4861]: I0310 18:51:48.317150 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1a9210c4-9579-4cfc-bf99-b652c3af6915-utilities\") pod \"community-operators-sj857\" (UID: \"1a9210c4-9579-4cfc-bf99-b652c3af6915\") " pod="openshift-marketplace/community-operators-sj857"
Mar 10 18:51:48 crc kubenswrapper[4861]: I0310 18:51:48.317196 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nhjx5\" (UniqueName: \"kubernetes.io/projected/1a9210c4-9579-4cfc-bf99-b652c3af6915-kube-api-access-nhjx5\") pod \"community-operators-sj857\" (UID: \"1a9210c4-9579-4cfc-bf99-b652c3af6915\") " pod="openshift-marketplace/community-operators-sj857"
Mar 10 18:51:48 crc kubenswrapper[4861]: I0310 18:51:48.317751 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1a9210c4-9579-4cfc-bf99-b652c3af6915-catalog-content\") pod \"community-operators-sj857\" (UID: \"1a9210c4-9579-4cfc-bf99-b652c3af6915\") " pod="openshift-marketplace/community-operators-sj857"
Mar 10 18:51:48 crc kubenswrapper[4861]: I0310 18:51:48.317962 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1a9210c4-9579-4cfc-bf99-b652c3af6915-utilities\") pod \"community-operators-sj857\" (UID: \"1a9210c4-9579-4cfc-bf99-b652c3af6915\") " pod="openshift-marketplace/community-operators-sj857"
Mar 10 18:51:48 crc kubenswrapper[4861]: I0310 18:51:48.322606 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue ""
Mar 10 18:51:48 crc kubenswrapper[4861]: I0310 18:51:48.348906 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-tpnft"
Mar 10 18:51:48 crc kubenswrapper[4861]: I0310 18:51:48.351710 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nhjx5\" (UniqueName: \"kubernetes.io/projected/1a9210c4-9579-4cfc-bf99-b652c3af6915-kube-api-access-nhjx5\") pod \"community-operators-sj857\" (UID: \"1a9210c4-9579-4cfc-bf99-b652c3af6915\") " pod="openshift-marketplace/community-operators-sj857"
Mar 10 18:51:48 crc kubenswrapper[4861]: I0310 18:51:48.418373 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9896da3e-4505-4ced-b1e7-cd47a951971e-client-ca\") pod \"9896da3e-4505-4ced-b1e7-cd47a951971e\" (UID: \"9896da3e-4505-4ced-b1e7-cd47a951971e\") "
Mar 10 18:51:48 crc kubenswrapper[4861]: I0310 18:51:48.418882 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9896da3e-4505-4ced-b1e7-cd47a951971e-serving-cert\") pod \"9896da3e-4505-4ced-b1e7-cd47a951971e\" (UID: \"9896da3e-4505-4ced-b1e7-cd47a951971e\") "
Mar 10 18:51:48 crc kubenswrapper[4861]: I0310 18:51:48.418927 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cwdtx\" (UniqueName: \"kubernetes.io/projected/9896da3e-4505-4ced-b1e7-cd47a951971e-kube-api-access-cwdtx\") pod \"9896da3e-4505-4ced-b1e7-cd47a951971e\" (UID: \"9896da3e-4505-4ced-b1e7-cd47a951971e\") "
Mar 10 18:51:48 crc kubenswrapper[4861]: I0310 18:51:48.419024 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9896da3e-4505-4ced-b1e7-cd47a951971e-config\") pod \"9896da3e-4505-4ced-b1e7-cd47a951971e\" (UID: \"9896da3e-4505-4ced-b1e7-cd47a951971e\") "
Mar 10 18:51:48 crc kubenswrapper[4861]: I0310 18:51:48.419237 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/832146f2-ed86-4794-a676-13d3df8679ad-catalog-content\") pod \"certified-operators-65ndj\" (UID: \"832146f2-ed86-4794-a676-13d3df8679ad\") " pod="openshift-marketplace/certified-operators-65ndj"
Mar 10 18:51:48 crc kubenswrapper[4861]: I0310 18:51:48.419275 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j4q94\" (UID: \"1c9f5f8c-64b8-4f10-999c-9cb2f24efef3\") " pod="openshift-image-registry/image-registry-697d97f7c8-j4q94"
Mar 10 18:51:48 crc kubenswrapper[4861]: I0310 18:51:48.419292 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/832146f2-ed86-4794-a676-13d3df8679ad-utilities\") pod \"certified-operators-65ndj\" (UID: \"832146f2-ed86-4794-a676-13d3df8679ad\") " pod="openshift-marketplace/certified-operators-65ndj"
Mar 10 18:51:48 crc kubenswrapper[4861]: I0310 18:51:48.419318 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j67kt\" (UniqueName: \"kubernetes.io/projected/832146f2-ed86-4794-a676-13d3df8679ad-kube-api-access-j67kt\") pod \"certified-operators-65ndj\" (UID: \"832146f2-ed86-4794-a676-13d3df8679ad\") " pod="openshift-marketplace/certified-operators-65ndj"
Mar 10 18:51:48 crc kubenswrapper[4861]: I0310 18:51:48.419653 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9896da3e-4505-4ced-b1e7-cd47a951971e-client-ca" (OuterVolumeSpecName: "client-ca") pod "9896da3e-4505-4ced-b1e7-cd47a951971e" (UID: "9896da3e-4505-4ced-b1e7-cd47a951971e"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 10 18:51:48 crc kubenswrapper[4861]: I0310 18:51:48.421055 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9896da3e-4505-4ced-b1e7-cd47a951971e-config" (OuterVolumeSpecName: "config") pod "9896da3e-4505-4ced-b1e7-cd47a951971e" (UID: "9896da3e-4505-4ced-b1e7-cd47a951971e"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 10 18:51:48 crc kubenswrapper[4861]: I0310 18:51:48.423214 4861 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Mar 10 18:51:48 crc kubenswrapper[4861]: I0310 18:51:48.423247 4861 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j4q94\" (UID: \"1c9f5f8c-64b8-4f10-999c-9cb2f24efef3\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-j4q94"
Mar 10 18:51:48 crc kubenswrapper[4861]: I0310 18:51:48.426097 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9896da3e-4505-4ced-b1e7-cd47a951971e-kube-api-access-cwdtx" (OuterVolumeSpecName: "kube-api-access-cwdtx") pod "9896da3e-4505-4ced-b1e7-cd47a951971e" (UID: "9896da3e-4505-4ced-b1e7-cd47a951971e"). InnerVolumeSpecName "kube-api-access-cwdtx". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 10 18:51:48 crc kubenswrapper[4861]: I0310 18:51:48.426954 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9896da3e-4505-4ced-b1e7-cd47a951971e-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9896da3e-4505-4ced-b1e7-cd47a951971e" (UID: "9896da3e-4505-4ced-b1e7-cd47a951971e"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 10 18:51:48 crc kubenswrapper[4861]: I0310 18:51:48.432672 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-pmtqc"]
Mar 10 18:51:48 crc kubenswrapper[4861]: I0310 18:51:48.437983 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-sj857"
Mar 10 18:51:48 crc kubenswrapper[4861]: I0310 18:51:48.451407 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j4q94\" (UID: \"1c9f5f8c-64b8-4f10-999c-9cb2f24efef3\") " pod="openshift-image-registry/image-registry-697d97f7c8-j4q94"
Mar 10 18:51:48 crc kubenswrapper[4861]: I0310 18:51:48.520256 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2b5b4c87-f6e3-4523-837d-2f45ad711489-proxy-ca-bundles\") pod \"2b5b4c87-f6e3-4523-837d-2f45ad711489\" (UID: \"2b5b4c87-f6e3-4523-837d-2f45ad711489\") "
Mar 10 18:51:48 crc kubenswrapper[4861]: I0310 18:51:48.520384 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2b5b4c87-f6e3-4523-837d-2f45ad711489-client-ca\") pod \"2b5b4c87-f6e3-4523-837d-2f45ad711489\" (UID: \"2b5b4c87-f6e3-4523-837d-2f45ad711489\") "
Mar 10 18:51:48 crc kubenswrapper[4861]: I0310 18:51:48.520411 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2b5b4c87-f6e3-4523-837d-2f45ad711489-config\") pod \"2b5b4c87-f6e3-4523-837d-2f45ad711489\" (UID: \"2b5b4c87-f6e3-4523-837d-2f45ad711489\") "
Mar 10 18:51:48 crc kubenswrapper[4861]: I0310 18:51:48.520431 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pgvkk\" (UniqueName: \"kubernetes.io/projected/2b5b4c87-f6e3-4523-837d-2f45ad711489-kube-api-access-pgvkk\") pod \"2b5b4c87-f6e3-4523-837d-2f45ad711489\" (UID: \"2b5b4c87-f6e3-4523-837d-2f45ad711489\") "
Mar 10 18:51:48 crc kubenswrapper[4861]: I0310 18:51:48.520456 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2b5b4c87-f6e3-4523-837d-2f45ad711489-serving-cert\") pod \"2b5b4c87-f6e3-4523-837d-2f45ad711489\" (UID: \"2b5b4c87-f6e3-4523-837d-2f45ad711489\") "
Mar 10 18:51:48 crc kubenswrapper[4861]: I0310 18:51:48.520643 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/832146f2-ed86-4794-a676-13d3df8679ad-catalog-content\") pod \"certified-operators-65ndj\" (UID: \"832146f2-ed86-4794-a676-13d3df8679ad\") " pod="openshift-marketplace/certified-operators-65ndj"
Mar 10 18:51:48 crc kubenswrapper[4861]: I0310 18:51:48.520676 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/832146f2-ed86-4794-a676-13d3df8679ad-utilities\") pod \"certified-operators-65ndj\" (UID: \"832146f2-ed86-4794-a676-13d3df8679ad\") " pod="openshift-marketplace/certified-operators-65ndj"
Mar 10 18:51:48 crc kubenswrapper[4861]: I0310 18:51:48.520770 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume
\"kube-api-access-j67kt\" (UniqueName: \"kubernetes.io/projected/832146f2-ed86-4794-a676-13d3df8679ad-kube-api-access-j67kt\") pod \"certified-operators-65ndj\" (UID: \"832146f2-ed86-4794-a676-13d3df8679ad\") " pod="openshift-marketplace/certified-operators-65ndj" Mar 10 18:51:48 crc kubenswrapper[4861]: I0310 18:51:48.520858 4861 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9896da3e-4505-4ced-b1e7-cd47a951971e-config\") on node \"crc\" DevicePath \"\"" Mar 10 18:51:48 crc kubenswrapper[4861]: I0310 18:51:48.520869 4861 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9896da3e-4505-4ced-b1e7-cd47a951971e-client-ca\") on node \"crc\" DevicePath \"\"" Mar 10 18:51:48 crc kubenswrapper[4861]: I0310 18:51:48.520881 4861 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9896da3e-4505-4ced-b1e7-cd47a951971e-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 10 18:51:48 crc kubenswrapper[4861]: I0310 18:51:48.520890 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cwdtx\" (UniqueName: \"kubernetes.io/projected/9896da3e-4505-4ced-b1e7-cd47a951971e-kube-api-access-cwdtx\") on node \"crc\" DevicePath \"\"" Mar 10 18:51:48 crc kubenswrapper[4861]: I0310 18:51:48.522254 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2b5b4c87-f6e3-4523-837d-2f45ad711489-client-ca" (OuterVolumeSpecName: "client-ca") pod "2b5b4c87-f6e3-4523-837d-2f45ad711489" (UID: "2b5b4c87-f6e3-4523-837d-2f45ad711489"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 10 18:51:48 crc kubenswrapper[4861]: I0310 18:51:48.522687 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2b5b4c87-f6e3-4523-837d-2f45ad711489-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "2b5b4c87-f6e3-4523-837d-2f45ad711489" (UID: "2b5b4c87-f6e3-4523-837d-2f45ad711489"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 10 18:51:48 crc kubenswrapper[4861]: I0310 18:51:48.523592 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2b5b4c87-f6e3-4523-837d-2f45ad711489-config" (OuterVolumeSpecName: "config") pod "2b5b4c87-f6e3-4523-837d-2f45ad711489" (UID: "2b5b4c87-f6e3-4523-837d-2f45ad711489"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 10 18:51:48 crc kubenswrapper[4861]: I0310 18:51:48.524319 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/832146f2-ed86-4794-a676-13d3df8679ad-utilities\") pod \"certified-operators-65ndj\" (UID: \"832146f2-ed86-4794-a676-13d3df8679ad\") " pod="openshift-marketplace/certified-operators-65ndj" Mar 10 18:51:48 crc kubenswrapper[4861]: I0310 18:51:48.527570 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2b5b4c87-f6e3-4523-837d-2f45ad711489-kube-api-access-pgvkk" (OuterVolumeSpecName: "kube-api-access-pgvkk") pod "2b5b4c87-f6e3-4523-837d-2f45ad711489" (UID: "2b5b4c87-f6e3-4523-837d-2f45ad711489"). InnerVolumeSpecName "kube-api-access-pgvkk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 10 18:51:48 crc kubenswrapper[4861]: I0310 18:51:48.527898 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2b5b4c87-f6e3-4523-837d-2f45ad711489-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "2b5b4c87-f6e3-4523-837d-2f45ad711489" (UID: "2b5b4c87-f6e3-4523-837d-2f45ad711489"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 10 18:51:48 crc kubenswrapper[4861]: I0310 18:51:48.541966 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j67kt\" (UniqueName: \"kubernetes.io/projected/832146f2-ed86-4794-a676-13d3df8679ad-kube-api-access-j67kt\") pod \"certified-operators-65ndj\" (UID: \"832146f2-ed86-4794-a676-13d3df8679ad\") " pod="openshift-marketplace/certified-operators-65ndj" Mar 10 18:51:48 crc kubenswrapper[4861]: I0310 18:51:48.549170 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/832146f2-ed86-4794-a676-13d3df8679ad-catalog-content\") pod \"certified-operators-65ndj\" (UID: \"832146f2-ed86-4794-a676-13d3df8679ad\") " pod="openshift-marketplace/certified-operators-65ndj" Mar 10 18:51:48 crc kubenswrapper[4861]: I0310 18:51:48.552605 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-hw926"] Mar 10 18:51:48 crc kubenswrapper[4861]: W0310 18:51:48.564471 4861 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc76065df_dec9_4b14_bd49_8e2d134bf53f.slice/crio-7d9395a664bc150c221961fb873e407c7b5b8d7dfd3535666c231d78ea285070 WatchSource:0}: Error finding container 7d9395a664bc150c221961fb873e407c7b5b8d7dfd3535666c231d78ea285070: Status 404 returned error can't find the container with id 7d9395a664bc150c221961fb873e407c7b5b8d7dfd3535666c231d78ea285070 Mar 10 
18:51:48 crc kubenswrapper[4861]: I0310 18:51:48.630162 4861 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2b5b4c87-f6e3-4523-837d-2f45ad711489-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 10 18:51:48 crc kubenswrapper[4861]: I0310 18:51:48.630220 4861 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2b5b4c87-f6e3-4523-837d-2f45ad711489-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Mar 10 18:51:48 crc kubenswrapper[4861]: I0310 18:51:48.630235 4861 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2b5b4c87-f6e3-4523-837d-2f45ad711489-client-ca\") on node \"crc\" DevicePath \"\"" Mar 10 18:51:48 crc kubenswrapper[4861]: I0310 18:51:48.630244 4861 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2b5b4c87-f6e3-4523-837d-2f45ad711489-config\") on node \"crc\" DevicePath \"\"" Mar 10 18:51:48 crc kubenswrapper[4861]: I0310 18:51:48.630256 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pgvkk\" (UniqueName: \"kubernetes.io/projected/2b5b4c87-f6e3-4523-837d-2f45ad711489-kube-api-access-pgvkk\") on node \"crc\" DevicePath \"\"" Mar 10 18:51:48 crc kubenswrapper[4861]: I0310 18:51:48.639875 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-sj857"] Mar 10 18:51:48 crc kubenswrapper[4861]: I0310 18:51:48.646479 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-65ndj" Mar 10 18:51:48 crc kubenswrapper[4861]: I0310 18:51:48.667020 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-j4q94" Mar 10 18:51:48 crc kubenswrapper[4861]: W0310 18:51:48.710426 4861 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1a9210c4_9579_4cfc_bf99_b652c3af6915.slice/crio-d4be2944a4c4857e89ac0b3151108ec0a02cd92b6967800ddcfd60509cacf795 WatchSource:0}: Error finding container d4be2944a4c4857e89ac0b3151108ec0a02cd92b6967800ddcfd60509cacf795: Status 404 returned error can't find the container with id d4be2944a4c4857e89ac0b3151108ec0a02cd92b6967800ddcfd60509cacf795 Mar 10 18:51:48 crc kubenswrapper[4861]: I0310 18:51:48.718917 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-dk9rs" event={"ID":"b6781d90-ea76-44b0-b2eb-44641332f632","Type":"ContainerStarted","Data":"1a73b5d51b58c80507953026e099b2f6b461ba43b1b47dd22ad56ffe053bdd38"} Mar 10 18:51:48 crc kubenswrapper[4861]: I0310 18:51:48.718954 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-dk9rs" event={"ID":"b6781d90-ea76-44b0-b2eb-44641332f632","Type":"ContainerStarted","Data":"6497f4ea01fe7d1d36d7496fdc7b11fc2417b99f60f6d8c243fd47294e60d909"} Mar 10 18:51:48 crc kubenswrapper[4861]: I0310 18:51:48.720495 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sj857" event={"ID":"1a9210c4-9579-4cfc-bf99-b652c3af6915","Type":"ContainerStarted","Data":"d4be2944a4c4857e89ac0b3151108ec0a02cd92b6967800ddcfd60509cacf795"} Mar 10 18:51:48 crc kubenswrapper[4861]: I0310 18:51:48.740683 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hw926" event={"ID":"c76065df-dec9-4b14-bd49-8e2d134bf53f","Type":"ContainerStarted","Data":"7d9395a664bc150c221961fb873e407c7b5b8d7dfd3535666c231d78ea285070"} Mar 10 18:51:48 crc kubenswrapper[4861]: I0310 18:51:48.743414 4861 
generic.go:334] "Generic (PLEG): container finished" podID="edc16f3e-454b-4167-9d26-c50bba23281e" containerID="e61feebc54b69aa9c495d4f493861d28e49d38ce199aefe4ec38781afe5e016b" exitCode=0 Mar 10 18:51:48 crc kubenswrapper[4861]: I0310 18:51:48.743460 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pmtqc" event={"ID":"edc16f3e-454b-4167-9d26-c50bba23281e","Type":"ContainerDied","Data":"e61feebc54b69aa9c495d4f493861d28e49d38ce199aefe4ec38781afe5e016b"} Mar 10 18:51:48 crc kubenswrapper[4861]: I0310 18:51:48.743476 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pmtqc" event={"ID":"edc16f3e-454b-4167-9d26-c50bba23281e","Type":"ContainerStarted","Data":"52b0fdacaeceadab4c2d4e75b53c01b9be4de7265f92679870598947087ff619"} Mar 10 18:51:48 crc kubenswrapper[4861]: I0310 18:51:48.744399 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-dk9rs" podStartSLOduration=10.744380243 podStartE2EDuration="10.744380243s" podCreationTimestamp="2026-03-10 18:51:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-10 18:51:48.740477361 +0000 UTC m=+252.503913321" watchObservedRunningTime="2026-03-10 18:51:48.744380243 +0000 UTC m=+252.507816203" Mar 10 18:51:48 crc kubenswrapper[4861]: I0310 18:51:48.750702 4861 generic.go:334] "Generic (PLEG): container finished" podID="9896da3e-4505-4ced-b1e7-cd47a951971e" containerID="c8782be5a98badef8f8069bb1fd3d7bebd08c7a7ab1a3a07acff205e32c047d7" exitCode=0 Mar 10 18:51:48 crc kubenswrapper[4861]: I0310 18:51:48.750768 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-qm57d" 
event={"ID":"9896da3e-4505-4ced-b1e7-cd47a951971e","Type":"ContainerDied","Data":"c8782be5a98badef8f8069bb1fd3d7bebd08c7a7ab1a3a07acff205e32c047d7"} Mar 10 18:51:48 crc kubenswrapper[4861]: I0310 18:51:48.750829 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-qm57d" event={"ID":"9896da3e-4505-4ced-b1e7-cd47a951971e","Type":"ContainerDied","Data":"73662ea43c248dae4b4a08247d49c89b349e189032968b75f059e9140e2e3df5"} Mar 10 18:51:48 crc kubenswrapper[4861]: I0310 18:51:48.750850 4861 scope.go:117] "RemoveContainer" containerID="c8782be5a98badef8f8069bb1fd3d7bebd08c7a7ab1a3a07acff205e32c047d7" Mar 10 18:51:48 crc kubenswrapper[4861]: I0310 18:51:48.750861 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-qm57d" Mar 10 18:51:48 crc kubenswrapper[4861]: I0310 18:51:48.753532 4861 generic.go:334] "Generic (PLEG): container finished" podID="2b5b4c87-f6e3-4523-837d-2f45ad711489" containerID="086f6f16f5e2c8a160050db4f713b7308227c8a48a1053906df28411376ec15e" exitCode=0 Mar 10 18:51:48 crc kubenswrapper[4861]: I0310 18:51:48.753644 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-tpnft" Mar 10 18:51:48 crc kubenswrapper[4861]: I0310 18:51:48.753754 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-tpnft" event={"ID":"2b5b4c87-f6e3-4523-837d-2f45ad711489","Type":"ContainerDied","Data":"086f6f16f5e2c8a160050db4f713b7308227c8a48a1053906df28411376ec15e"} Mar 10 18:51:48 crc kubenswrapper[4861]: I0310 18:51:48.753812 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-tpnft" event={"ID":"2b5b4c87-f6e3-4523-837d-2f45ad711489","Type":"ContainerDied","Data":"0ead3673874a61bcc645b41a1aa80e929d586108a8d7736ba15b27807c410131"} Mar 10 18:51:48 crc kubenswrapper[4861]: I0310 18:51:48.809246 4861 scope.go:117] "RemoveContainer" containerID="c8782be5a98badef8f8069bb1fd3d7bebd08c7a7ab1a3a07acff205e32c047d7" Mar 10 18:51:48 crc kubenswrapper[4861]: E0310 18:51:48.809699 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c8782be5a98badef8f8069bb1fd3d7bebd08c7a7ab1a3a07acff205e32c047d7\": container with ID starting with c8782be5a98badef8f8069bb1fd3d7bebd08c7a7ab1a3a07acff205e32c047d7 not found: ID does not exist" containerID="c8782be5a98badef8f8069bb1fd3d7bebd08c7a7ab1a3a07acff205e32c047d7" Mar 10 18:51:48 crc kubenswrapper[4861]: I0310 18:51:48.809766 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c8782be5a98badef8f8069bb1fd3d7bebd08c7a7ab1a3a07acff205e32c047d7"} err="failed to get container status \"c8782be5a98badef8f8069bb1fd3d7bebd08c7a7ab1a3a07acff205e32c047d7\": rpc error: code = NotFound desc = could not find container \"c8782be5a98badef8f8069bb1fd3d7bebd08c7a7ab1a3a07acff205e32c047d7\": container with ID starting with c8782be5a98badef8f8069bb1fd3d7bebd08c7a7ab1a3a07acff205e32c047d7 not found: ID does not 
exist" Mar 10 18:51:48 crc kubenswrapper[4861]: I0310 18:51:48.809796 4861 scope.go:117] "RemoveContainer" containerID="086f6f16f5e2c8a160050db4f713b7308227c8a48a1053906df28411376ec15e" Mar 10 18:51:48 crc kubenswrapper[4861]: I0310 18:51:48.831760 4861 patch_prober.go:28] interesting pod/router-default-5444994796-rzhxp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 10 18:51:48 crc kubenswrapper[4861]: [-]has-synced failed: reason withheld Mar 10 18:51:48 crc kubenswrapper[4861]: [+]process-running ok Mar 10 18:51:48 crc kubenswrapper[4861]: healthz check failed Mar 10 18:51:48 crc kubenswrapper[4861]: I0310 18:51:48.831821 4861 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-rzhxp" podUID="cf512bc0-beb2-4784-ac61-7f7a22ccc3e9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 10 18:51:48 crc kubenswrapper[4861]: I0310 18:51:48.848589 4861 scope.go:117] "RemoveContainer" containerID="086f6f16f5e2c8a160050db4f713b7308227c8a48a1053906df28411376ec15e" Mar 10 18:51:48 crc kubenswrapper[4861]: E0310 18:51:48.855224 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"086f6f16f5e2c8a160050db4f713b7308227c8a48a1053906df28411376ec15e\": container with ID starting with 086f6f16f5e2c8a160050db4f713b7308227c8a48a1053906df28411376ec15e not found: ID does not exist" containerID="086f6f16f5e2c8a160050db4f713b7308227c8a48a1053906df28411376ec15e" Mar 10 18:51:48 crc kubenswrapper[4861]: I0310 18:51:48.855259 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"086f6f16f5e2c8a160050db4f713b7308227c8a48a1053906df28411376ec15e"} err="failed to get container status \"086f6f16f5e2c8a160050db4f713b7308227c8a48a1053906df28411376ec15e\": 
rpc error: code = NotFound desc = could not find container \"086f6f16f5e2c8a160050db4f713b7308227c8a48a1053906df28411376ec15e\": container with ID starting with 086f6f16f5e2c8a160050db4f713b7308227c8a48a1053906df28411376ec15e not found: ID does not exist" Mar 10 18:51:48 crc kubenswrapper[4861]: I0310 18:51:48.864604 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-65ndj"] Mar 10 18:51:48 crc kubenswrapper[4861]: I0310 18:51:48.872125 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-tpnft"] Mar 10 18:51:48 crc kubenswrapper[4861]: I0310 18:51:48.874234 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-tpnft"] Mar 10 18:51:48 crc kubenswrapper[4861]: I0310 18:51:48.884094 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-qm57d"] Mar 10 18:51:48 crc kubenswrapper[4861]: I0310 18:51:48.890119 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-qm57d"] Mar 10 18:51:48 crc kubenswrapper[4861]: I0310 18:51:48.947611 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29552805-vbm4k" Mar 10 18:51:48 crc kubenswrapper[4861]: I0310 18:51:48.970496 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2b5b4c87-f6e3-4523-837d-2f45ad711489" path="/var/lib/kubelet/pods/2b5b4c87-f6e3-4523-837d-2f45ad711489/volumes" Mar 10 18:51:48 crc kubenswrapper[4861]: I0310 18:51:48.971104 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes" Mar 10 18:51:48 crc kubenswrapper[4861]: I0310 18:51:48.971727 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9896da3e-4505-4ced-b1e7-cd47a951971e" path="/var/lib/kubelet/pods/9896da3e-4505-4ced-b1e7-cd47a951971e/volumes" Mar 10 18:51:49 crc kubenswrapper[4861]: I0310 18:51:49.037638 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rwdjq\" (UniqueName: \"kubernetes.io/projected/e835c42f-7b8a-45a3-a153-ffab9b5386e0-kube-api-access-rwdjq\") pod \"e835c42f-7b8a-45a3-a153-ffab9b5386e0\" (UID: \"e835c42f-7b8a-45a3-a153-ffab9b5386e0\") " Mar 10 18:51:49 crc kubenswrapper[4861]: I0310 18:51:49.037762 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e835c42f-7b8a-45a3-a153-ffab9b5386e0-config-volume\") pod \"e835c42f-7b8a-45a3-a153-ffab9b5386e0\" (UID: \"e835c42f-7b8a-45a3-a153-ffab9b5386e0\") " Mar 10 18:51:49 crc kubenswrapper[4861]: I0310 18:51:49.037804 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e835c42f-7b8a-45a3-a153-ffab9b5386e0-secret-volume\") pod \"e835c42f-7b8a-45a3-a153-ffab9b5386e0\" (UID: \"e835c42f-7b8a-45a3-a153-ffab9b5386e0\") " Mar 10 18:51:49 crc kubenswrapper[4861]: I0310 18:51:49.038862 4861 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e835c42f-7b8a-45a3-a153-ffab9b5386e0-config-volume" (OuterVolumeSpecName: "config-volume") pod "e835c42f-7b8a-45a3-a153-ffab9b5386e0" (UID: "e835c42f-7b8a-45a3-a153-ffab9b5386e0"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 10 18:51:49 crc kubenswrapper[4861]: I0310 18:51:49.047538 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e835c42f-7b8a-45a3-a153-ffab9b5386e0-kube-api-access-rwdjq" (OuterVolumeSpecName: "kube-api-access-rwdjq") pod "e835c42f-7b8a-45a3-a153-ffab9b5386e0" (UID: "e835c42f-7b8a-45a3-a153-ffab9b5386e0"). InnerVolumeSpecName "kube-api-access-rwdjq". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 10 18:51:49 crc kubenswrapper[4861]: I0310 18:51:49.049063 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e835c42f-7b8a-45a3-a153-ffab9b5386e0-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "e835c42f-7b8a-45a3-a153-ffab9b5386e0" (UID: "e835c42f-7b8a-45a3-a153-ffab9b5386e0"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 10 18:51:49 crc kubenswrapper[4861]: I0310 18:51:49.139268 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rwdjq\" (UniqueName: \"kubernetes.io/projected/e835c42f-7b8a-45a3-a153-ffab9b5386e0-kube-api-access-rwdjq\") on node \"crc\" DevicePath \"\"" Mar 10 18:51:49 crc kubenswrapper[4861]: I0310 18:51:49.139572 4861 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e835c42f-7b8a-45a3-a153-ffab9b5386e0-config-volume\") on node \"crc\" DevicePath \"\"" Mar 10 18:51:49 crc kubenswrapper[4861]: I0310 18:51:49.139581 4861 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e835c42f-7b8a-45a3-a153-ffab9b5386e0-secret-volume\") on node \"crc\" DevicePath \"\"" Mar 10 18:51:49 crc kubenswrapper[4861]: I0310 18:51:49.141521 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-j4q94"] Mar 10 18:51:49 crc kubenswrapper[4861]: I0310 18:51:49.531846 4861 ???:1] "http: TLS handshake error from 192.168.126.11:60498: no serving certificate available for the kubelet" Mar 10 18:51:49 crc kubenswrapper[4861]: I0310 18:51:49.620616 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5f4b956bb9-4xs89"] Mar 10 18:51:49 crc kubenswrapper[4861]: E0310 18:51:49.620900 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b5b4c87-f6e3-4523-837d-2f45ad711489" containerName="controller-manager" Mar 10 18:51:49 crc kubenswrapper[4861]: I0310 18:51:49.620912 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b5b4c87-f6e3-4523-837d-2f45ad711489" containerName="controller-manager" Mar 10 18:51:49 crc kubenswrapper[4861]: E0310 18:51:49.620920 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9896da3e-4505-4ced-b1e7-cd47a951971e" 
containerName="route-controller-manager" Mar 10 18:51:49 crc kubenswrapper[4861]: I0310 18:51:49.620926 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="9896da3e-4505-4ced-b1e7-cd47a951971e" containerName="route-controller-manager" Mar 10 18:51:49 crc kubenswrapper[4861]: E0310 18:51:49.620949 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e835c42f-7b8a-45a3-a153-ffab9b5386e0" containerName="collect-profiles" Mar 10 18:51:49 crc kubenswrapper[4861]: I0310 18:51:49.620955 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="e835c42f-7b8a-45a3-a153-ffab9b5386e0" containerName="collect-profiles" Mar 10 18:51:49 crc kubenswrapper[4861]: I0310 18:51:49.621057 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="9896da3e-4505-4ced-b1e7-cd47a951971e" containerName="route-controller-manager" Mar 10 18:51:49 crc kubenswrapper[4861]: I0310 18:51:49.621079 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="2b5b4c87-f6e3-4523-837d-2f45ad711489" containerName="controller-manager" Mar 10 18:51:49 crc kubenswrapper[4861]: I0310 18:51:49.621101 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="e835c42f-7b8a-45a3-a153-ffab9b5386e0" containerName="collect-profiles" Mar 10 18:51:49 crc kubenswrapper[4861]: I0310 18:51:49.621674 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-57895bf995-rsd89"] Mar 10 18:51:49 crc kubenswrapper[4861]: I0310 18:51:49.622062 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5f4b956bb9-4xs89" Mar 10 18:51:49 crc kubenswrapper[4861]: I0310 18:51:49.623742 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-57895bf995-rsd89" Mar 10 18:51:49 crc kubenswrapper[4861]: I0310 18:51:49.629766 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Mar 10 18:51:49 crc kubenswrapper[4861]: I0310 18:51:49.630174 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Mar 10 18:51:49 crc kubenswrapper[4861]: I0310 18:51:49.630408 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Mar 10 18:51:49 crc kubenswrapper[4861]: I0310 18:51:49.630565 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Mar 10 18:51:49 crc kubenswrapper[4861]: I0310 18:51:49.630683 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Mar 10 18:51:49 crc kubenswrapper[4861]: I0310 18:51:49.630832 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Mar 10 18:51:49 crc kubenswrapper[4861]: I0310 18:51:49.631485 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Mar 10 18:51:49 crc kubenswrapper[4861]: I0310 18:51:49.631629 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Mar 10 18:51:49 crc kubenswrapper[4861]: I0310 18:51:49.631839 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Mar 10 18:51:49 crc kubenswrapper[4861]: I0310 18:51:49.631928 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Mar 10 18:51:49 crc 
kubenswrapper[4861]: I0310 18:51:49.632220 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Mar 10 18:51:49 crc kubenswrapper[4861]: I0310 18:51:49.646052 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Mar 10 18:51:49 crc kubenswrapper[4861]: I0310 18:51:49.647872 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5f4b956bb9-4xs89"] Mar 10 18:51:49 crc kubenswrapper[4861]: I0310 18:51:49.653084 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-57895bf995-rsd89"] Mar 10 18:51:49 crc kubenswrapper[4861]: I0310 18:51:49.656802 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tk6f7\" (UniqueName: \"kubernetes.io/projected/06c46a69-f46b-40ad-8ef8-0077f969d1f3-kube-api-access-tk6f7\") pod \"route-controller-manager-5f4b956bb9-4xs89\" (UID: \"06c46a69-f46b-40ad-8ef8-0077f969d1f3\") " pod="openshift-route-controller-manager/route-controller-manager-5f4b956bb9-4xs89" Mar 10 18:51:49 crc kubenswrapper[4861]: I0310 18:51:49.656856 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/568106eb-62be-48ec-80eb-ee69e26c5a06-client-ca\") pod \"controller-manager-57895bf995-rsd89\" (UID: \"568106eb-62be-48ec-80eb-ee69e26c5a06\") " pod="openshift-controller-manager/controller-manager-57895bf995-rsd89" Mar 10 18:51:49 crc kubenswrapper[4861]: I0310 18:51:49.656885 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/06c46a69-f46b-40ad-8ef8-0077f969d1f3-client-ca\") pod \"route-controller-manager-5f4b956bb9-4xs89\" (UID: 
\"06c46a69-f46b-40ad-8ef8-0077f969d1f3\") " pod="openshift-route-controller-manager/route-controller-manager-5f4b956bb9-4xs89" Mar 10 18:51:49 crc kubenswrapper[4861]: I0310 18:51:49.656901 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/568106eb-62be-48ec-80eb-ee69e26c5a06-proxy-ca-bundles\") pod \"controller-manager-57895bf995-rsd89\" (UID: \"568106eb-62be-48ec-80eb-ee69e26c5a06\") " pod="openshift-controller-manager/controller-manager-57895bf995-rsd89" Mar 10 18:51:49 crc kubenswrapper[4861]: I0310 18:51:49.657006 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/568106eb-62be-48ec-80eb-ee69e26c5a06-serving-cert\") pod \"controller-manager-57895bf995-rsd89\" (UID: \"568106eb-62be-48ec-80eb-ee69e26c5a06\") " pod="openshift-controller-manager/controller-manager-57895bf995-rsd89" Mar 10 18:51:49 crc kubenswrapper[4861]: I0310 18:51:49.657103 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/06c46a69-f46b-40ad-8ef8-0077f969d1f3-serving-cert\") pod \"route-controller-manager-5f4b956bb9-4xs89\" (UID: \"06c46a69-f46b-40ad-8ef8-0077f969d1f3\") " pod="openshift-route-controller-manager/route-controller-manager-5f4b956bb9-4xs89" Mar 10 18:51:49 crc kubenswrapper[4861]: I0310 18:51:49.657135 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/568106eb-62be-48ec-80eb-ee69e26c5a06-config\") pod \"controller-manager-57895bf995-rsd89\" (UID: \"568106eb-62be-48ec-80eb-ee69e26c5a06\") " pod="openshift-controller-manager/controller-manager-57895bf995-rsd89" Mar 10 18:51:49 crc kubenswrapper[4861]: I0310 18:51:49.657168 4861 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mxnzl\" (UniqueName: \"kubernetes.io/projected/568106eb-62be-48ec-80eb-ee69e26c5a06-kube-api-access-mxnzl\") pod \"controller-manager-57895bf995-rsd89\" (UID: \"568106eb-62be-48ec-80eb-ee69e26c5a06\") " pod="openshift-controller-manager/controller-manager-57895bf995-rsd89" Mar 10 18:51:49 crc kubenswrapper[4861]: I0310 18:51:49.657231 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/06c46a69-f46b-40ad-8ef8-0077f969d1f3-config\") pod \"route-controller-manager-5f4b956bb9-4xs89\" (UID: \"06c46a69-f46b-40ad-8ef8-0077f969d1f3\") " pod="openshift-route-controller-manager/route-controller-manager-5f4b956bb9-4xs89" Mar 10 18:51:49 crc kubenswrapper[4861]: I0310 18:51:49.657243 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Mar 10 18:51:49 crc kubenswrapper[4861]: I0310 18:51:49.684339 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-646bt"] Mar 10 18:51:49 crc kubenswrapper[4861]: I0310 18:51:49.690129 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-646bt" Mar 10 18:51:49 crc kubenswrapper[4861]: I0310 18:51:49.691723 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-646bt"] Mar 10 18:51:49 crc kubenswrapper[4861]: I0310 18:51:49.692015 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Mar 10 18:51:49 crc kubenswrapper[4861]: I0310 18:51:49.758174 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/06c46a69-f46b-40ad-8ef8-0077f969d1f3-client-ca\") pod \"route-controller-manager-5f4b956bb9-4xs89\" (UID: \"06c46a69-f46b-40ad-8ef8-0077f969d1f3\") " pod="openshift-route-controller-manager/route-controller-manager-5f4b956bb9-4xs89" Mar 10 18:51:49 crc kubenswrapper[4861]: I0310 18:51:49.758255 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/568106eb-62be-48ec-80eb-ee69e26c5a06-proxy-ca-bundles\") pod \"controller-manager-57895bf995-rsd89\" (UID: \"568106eb-62be-48ec-80eb-ee69e26c5a06\") " pod="openshift-controller-manager/controller-manager-57895bf995-rsd89" Mar 10 18:51:49 crc kubenswrapper[4861]: I0310 18:51:49.758280 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/568106eb-62be-48ec-80eb-ee69e26c5a06-serving-cert\") pod \"controller-manager-57895bf995-rsd89\" (UID: \"568106eb-62be-48ec-80eb-ee69e26c5a06\") " pod="openshift-controller-manager/controller-manager-57895bf995-rsd89" Mar 10 18:51:49 crc kubenswrapper[4861]: I0310 18:51:49.758308 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/06c46a69-f46b-40ad-8ef8-0077f969d1f3-serving-cert\") pod 
\"route-controller-manager-5f4b956bb9-4xs89\" (UID: \"06c46a69-f46b-40ad-8ef8-0077f969d1f3\") " pod="openshift-route-controller-manager/route-controller-manager-5f4b956bb9-4xs89" Mar 10 18:51:49 crc kubenswrapper[4861]: I0310 18:51:49.758323 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/568106eb-62be-48ec-80eb-ee69e26c5a06-config\") pod \"controller-manager-57895bf995-rsd89\" (UID: \"568106eb-62be-48ec-80eb-ee69e26c5a06\") " pod="openshift-controller-manager/controller-manager-57895bf995-rsd89" Mar 10 18:51:49 crc kubenswrapper[4861]: I0310 18:51:49.758342 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mxnzl\" (UniqueName: \"kubernetes.io/projected/568106eb-62be-48ec-80eb-ee69e26c5a06-kube-api-access-mxnzl\") pod \"controller-manager-57895bf995-rsd89\" (UID: \"568106eb-62be-48ec-80eb-ee69e26c5a06\") " pod="openshift-controller-manager/controller-manager-57895bf995-rsd89" Mar 10 18:51:49 crc kubenswrapper[4861]: I0310 18:51:49.758364 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1e1c83ef-91ae-4931-8e31-32890189bb47-utilities\") pod \"redhat-marketplace-646bt\" (UID: \"1e1c83ef-91ae-4931-8e31-32890189bb47\") " pod="openshift-marketplace/redhat-marketplace-646bt" Mar 10 18:51:49 crc kubenswrapper[4861]: I0310 18:51:49.758393 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1e1c83ef-91ae-4931-8e31-32890189bb47-catalog-content\") pod \"redhat-marketplace-646bt\" (UID: \"1e1c83ef-91ae-4931-8e31-32890189bb47\") " pod="openshift-marketplace/redhat-marketplace-646bt" Mar 10 18:51:49 crc kubenswrapper[4861]: I0310 18:51:49.758414 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" 
(UniqueName: \"kubernetes.io/configmap/06c46a69-f46b-40ad-8ef8-0077f969d1f3-config\") pod \"route-controller-manager-5f4b956bb9-4xs89\" (UID: \"06c46a69-f46b-40ad-8ef8-0077f969d1f3\") " pod="openshift-route-controller-manager/route-controller-manager-5f4b956bb9-4xs89" Mar 10 18:51:49 crc kubenswrapper[4861]: I0310 18:51:49.758448 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-77wsn\" (UniqueName: \"kubernetes.io/projected/1e1c83ef-91ae-4931-8e31-32890189bb47-kube-api-access-77wsn\") pod \"redhat-marketplace-646bt\" (UID: \"1e1c83ef-91ae-4931-8e31-32890189bb47\") " pod="openshift-marketplace/redhat-marketplace-646bt" Mar 10 18:51:49 crc kubenswrapper[4861]: I0310 18:51:49.758490 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tk6f7\" (UniqueName: \"kubernetes.io/projected/06c46a69-f46b-40ad-8ef8-0077f969d1f3-kube-api-access-tk6f7\") pod \"route-controller-manager-5f4b956bb9-4xs89\" (UID: \"06c46a69-f46b-40ad-8ef8-0077f969d1f3\") " pod="openshift-route-controller-manager/route-controller-manager-5f4b956bb9-4xs89" Mar 10 18:51:49 crc kubenswrapper[4861]: I0310 18:51:49.758510 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/568106eb-62be-48ec-80eb-ee69e26c5a06-client-ca\") pod \"controller-manager-57895bf995-rsd89\" (UID: \"568106eb-62be-48ec-80eb-ee69e26c5a06\") " pod="openshift-controller-manager/controller-manager-57895bf995-rsd89" Mar 10 18:51:49 crc kubenswrapper[4861]: I0310 18:51:49.759110 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/06c46a69-f46b-40ad-8ef8-0077f969d1f3-client-ca\") pod \"route-controller-manager-5f4b956bb9-4xs89\" (UID: \"06c46a69-f46b-40ad-8ef8-0077f969d1f3\") " pod="openshift-route-controller-manager/route-controller-manager-5f4b956bb9-4xs89" Mar 10 18:51:49 
crc kubenswrapper[4861]: I0310 18:51:49.759286 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/568106eb-62be-48ec-80eb-ee69e26c5a06-client-ca\") pod \"controller-manager-57895bf995-rsd89\" (UID: \"568106eb-62be-48ec-80eb-ee69e26c5a06\") " pod="openshift-controller-manager/controller-manager-57895bf995-rsd89" Mar 10 18:51:49 crc kubenswrapper[4861]: I0310 18:51:49.760544 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/06c46a69-f46b-40ad-8ef8-0077f969d1f3-config\") pod \"route-controller-manager-5f4b956bb9-4xs89\" (UID: \"06c46a69-f46b-40ad-8ef8-0077f969d1f3\") " pod="openshift-route-controller-manager/route-controller-manager-5f4b956bb9-4xs89" Mar 10 18:51:49 crc kubenswrapper[4861]: I0310 18:51:49.761418 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/568106eb-62be-48ec-80eb-ee69e26c5a06-proxy-ca-bundles\") pod \"controller-manager-57895bf995-rsd89\" (UID: \"568106eb-62be-48ec-80eb-ee69e26c5a06\") " pod="openshift-controller-manager/controller-manager-57895bf995-rsd89" Mar 10 18:51:49 crc kubenswrapper[4861]: I0310 18:51:49.762735 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/568106eb-62be-48ec-80eb-ee69e26c5a06-config\") pod \"controller-manager-57895bf995-rsd89\" (UID: \"568106eb-62be-48ec-80eb-ee69e26c5a06\") " pod="openshift-controller-manager/controller-manager-57895bf995-rsd89" Mar 10 18:51:49 crc kubenswrapper[4861]: I0310 18:51:49.768050 4861 generic.go:334] "Generic (PLEG): container finished" podID="1a9210c4-9579-4cfc-bf99-b652c3af6915" containerID="0ec1cfa07a2ce0e20a3d2660639fa17daa0414890d653d1c9d2646db439f1440" exitCode=0 Mar 10 18:51:49 crc kubenswrapper[4861]: I0310 18:51:49.768105 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/community-operators-sj857" event={"ID":"1a9210c4-9579-4cfc-bf99-b652c3af6915","Type":"ContainerDied","Data":"0ec1cfa07a2ce0e20a3d2660639fa17daa0414890d653d1c9d2646db439f1440"} Mar 10 18:51:49 crc kubenswrapper[4861]: I0310 18:51:49.771954 4861 generic.go:334] "Generic (PLEG): container finished" podID="c76065df-dec9-4b14-bd49-8e2d134bf53f" containerID="34c81fcffd6f969071efa0454ddfce425949263457655ecc1aa8a344e6f5bebc" exitCode=0 Mar 10 18:51:49 crc kubenswrapper[4861]: I0310 18:51:49.772079 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hw926" event={"ID":"c76065df-dec9-4b14-bd49-8e2d134bf53f","Type":"ContainerDied","Data":"34c81fcffd6f969071efa0454ddfce425949263457655ecc1aa8a344e6f5bebc"} Mar 10 18:51:49 crc kubenswrapper[4861]: I0310 18:51:49.774158 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-j4q94" event={"ID":"1c9f5f8c-64b8-4f10-999c-9cb2f24efef3","Type":"ContainerStarted","Data":"8960113ac5f9a4f46b83051767e3bdb6f82f18a0507b90a9e2292e59743123e5"} Mar 10 18:51:49 crc kubenswrapper[4861]: I0310 18:51:49.774203 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-j4q94" event={"ID":"1c9f5f8c-64b8-4f10-999c-9cb2f24efef3","Type":"ContainerStarted","Data":"3dadf17fe8031e3fba8c016174fa055eb09887957f26ac211cd9427d9f49920a"} Mar 10 18:51:49 crc kubenswrapper[4861]: I0310 18:51:49.774682 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-j4q94" Mar 10 18:51:49 crc kubenswrapper[4861]: I0310 18:51:49.780898 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/568106eb-62be-48ec-80eb-ee69e26c5a06-serving-cert\") pod \"controller-manager-57895bf995-rsd89\" (UID: \"568106eb-62be-48ec-80eb-ee69e26c5a06\") " 
pod="openshift-controller-manager/controller-manager-57895bf995-rsd89" Mar 10 18:51:49 crc kubenswrapper[4861]: I0310 18:51:49.788478 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29552805-vbm4k" event={"ID":"e835c42f-7b8a-45a3-a153-ffab9b5386e0","Type":"ContainerDied","Data":"6aae750e0150333454cf474b9e403fcc4486e4c2e2f70baaf0056b3394e82707"} Mar 10 18:51:49 crc kubenswrapper[4861]: I0310 18:51:49.788545 4861 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6aae750e0150333454cf474b9e403fcc4486e4c2e2f70baaf0056b3394e82707" Mar 10 18:51:49 crc kubenswrapper[4861]: I0310 18:51:49.788592 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29552805-vbm4k" Mar 10 18:51:49 crc kubenswrapper[4861]: I0310 18:51:49.789153 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tk6f7\" (UniqueName: \"kubernetes.io/projected/06c46a69-f46b-40ad-8ef8-0077f969d1f3-kube-api-access-tk6f7\") pod \"route-controller-manager-5f4b956bb9-4xs89\" (UID: \"06c46a69-f46b-40ad-8ef8-0077f969d1f3\") " pod="openshift-route-controller-manager/route-controller-manager-5f4b956bb9-4xs89" Mar 10 18:51:49 crc kubenswrapper[4861]: I0310 18:51:49.789779 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mxnzl\" (UniqueName: \"kubernetes.io/projected/568106eb-62be-48ec-80eb-ee69e26c5a06-kube-api-access-mxnzl\") pod \"controller-manager-57895bf995-rsd89\" (UID: \"568106eb-62be-48ec-80eb-ee69e26c5a06\") " pod="openshift-controller-manager/controller-manager-57895bf995-rsd89" Mar 10 18:51:49 crc kubenswrapper[4861]: I0310 18:51:49.792287 4861 generic.go:334] "Generic (PLEG): container finished" podID="832146f2-ed86-4794-a676-13d3df8679ad" containerID="736db02b83e159e02fca423b162d418b63d2312930ebcbb4562c8fda133b64e8" exitCode=0 Mar 10 18:51:49 
crc kubenswrapper[4861]: I0310 18:51:49.793313 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-65ndj" event={"ID":"832146f2-ed86-4794-a676-13d3df8679ad","Type":"ContainerDied","Data":"736db02b83e159e02fca423b162d418b63d2312930ebcbb4562c8fda133b64e8"} Mar 10 18:51:49 crc kubenswrapper[4861]: I0310 18:51:49.793341 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-65ndj" event={"ID":"832146f2-ed86-4794-a676-13d3df8679ad","Type":"ContainerStarted","Data":"c6283e834fa11becf9da9709e31afeb7540709bc3aa376835ef17bafa6d6c762"} Mar 10 18:51:49 crc kubenswrapper[4861]: I0310 18:51:49.810610 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/06c46a69-f46b-40ad-8ef8-0077f969d1f3-serving-cert\") pod \"route-controller-manager-5f4b956bb9-4xs89\" (UID: \"06c46a69-f46b-40ad-8ef8-0077f969d1f3\") " pod="openshift-route-controller-manager/route-controller-manager-5f4b956bb9-4xs89" Mar 10 18:51:49 crc kubenswrapper[4861]: I0310 18:51:49.820179 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-j4q94" podStartSLOduration=194.820158301 podStartE2EDuration="3m14.820158301s" podCreationTimestamp="2026-03-10 18:48:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-10 18:51:49.816018895 +0000 UTC m=+253.579454885" watchObservedRunningTime="2026-03-10 18:51:49.820158301 +0000 UTC m=+253.583594261" Mar 10 18:51:49 crc kubenswrapper[4861]: I0310 18:51:49.826832 4861 patch_prober.go:28] interesting pod/router-default-5444994796-rzhxp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 10 18:51:49 crc kubenswrapper[4861]: 
[-]has-synced failed: reason withheld Mar 10 18:51:49 crc kubenswrapper[4861]: [+]process-running ok Mar 10 18:51:49 crc kubenswrapper[4861]: healthz check failed Mar 10 18:51:49 crc kubenswrapper[4861]: I0310 18:51:49.826915 4861 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-rzhxp" podUID="cf512bc0-beb2-4784-ac61-7f7a22ccc3e9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 10 18:51:49 crc kubenswrapper[4861]: I0310 18:51:49.859954 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1e1c83ef-91ae-4931-8e31-32890189bb47-utilities\") pod \"redhat-marketplace-646bt\" (UID: \"1e1c83ef-91ae-4931-8e31-32890189bb47\") " pod="openshift-marketplace/redhat-marketplace-646bt" Mar 10 18:51:49 crc kubenswrapper[4861]: I0310 18:51:49.860006 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1e1c83ef-91ae-4931-8e31-32890189bb47-catalog-content\") pod \"redhat-marketplace-646bt\" (UID: \"1e1c83ef-91ae-4931-8e31-32890189bb47\") " pod="openshift-marketplace/redhat-marketplace-646bt" Mar 10 18:51:49 crc kubenswrapper[4861]: I0310 18:51:49.860088 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-77wsn\" (UniqueName: \"kubernetes.io/projected/1e1c83ef-91ae-4931-8e31-32890189bb47-kube-api-access-77wsn\") pod \"redhat-marketplace-646bt\" (UID: \"1e1c83ef-91ae-4931-8e31-32890189bb47\") " pod="openshift-marketplace/redhat-marketplace-646bt" Mar 10 18:51:49 crc kubenswrapper[4861]: I0310 18:51:49.861187 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1e1c83ef-91ae-4931-8e31-32890189bb47-catalog-content\") pod \"redhat-marketplace-646bt\" (UID: \"1e1c83ef-91ae-4931-8e31-32890189bb47\") " 
pod="openshift-marketplace/redhat-marketplace-646bt" Mar 10 18:51:49 crc kubenswrapper[4861]: I0310 18:51:49.861604 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1e1c83ef-91ae-4931-8e31-32890189bb47-utilities\") pod \"redhat-marketplace-646bt\" (UID: \"1e1c83ef-91ae-4931-8e31-32890189bb47\") " pod="openshift-marketplace/redhat-marketplace-646bt" Mar 10 18:51:49 crc kubenswrapper[4861]: I0310 18:51:49.881701 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-77wsn\" (UniqueName: \"kubernetes.io/projected/1e1c83ef-91ae-4931-8e31-32890189bb47-kube-api-access-77wsn\") pod \"redhat-marketplace-646bt\" (UID: \"1e1c83ef-91ae-4931-8e31-32890189bb47\") " pod="openshift-marketplace/redhat-marketplace-646bt" Mar 10 18:51:49 crc kubenswrapper[4861]: I0310 18:51:49.966911 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5f4b956bb9-4xs89" Mar 10 18:51:49 crc kubenswrapper[4861]: I0310 18:51:49.997051 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-57895bf995-rsd89" Mar 10 18:51:50 crc kubenswrapper[4861]: I0310 18:51:50.025964 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-646bt" Mar 10 18:51:50 crc kubenswrapper[4861]: I0310 18:51:50.054825 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-rssg7"] Mar 10 18:51:50 crc kubenswrapper[4861]: I0310 18:51:50.055880 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rssg7" Mar 10 18:51:50 crc kubenswrapper[4861]: I0310 18:51:50.064532 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-rssg7"] Mar 10 18:51:50 crc kubenswrapper[4861]: I0310 18:51:50.163529 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3db43f6e-38a4-4f5c-bb4b-ddac9e664528-catalog-content\") pod \"redhat-marketplace-rssg7\" (UID: \"3db43f6e-38a4-4f5c-bb4b-ddac9e664528\") " pod="openshift-marketplace/redhat-marketplace-rssg7" Mar 10 18:51:50 crc kubenswrapper[4861]: I0310 18:51:50.163916 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m7qn4\" (UniqueName: \"kubernetes.io/projected/3db43f6e-38a4-4f5c-bb4b-ddac9e664528-kube-api-access-m7qn4\") pod \"redhat-marketplace-rssg7\" (UID: \"3db43f6e-38a4-4f5c-bb4b-ddac9e664528\") " pod="openshift-marketplace/redhat-marketplace-rssg7" Mar 10 18:51:50 crc kubenswrapper[4861]: I0310 18:51:50.163953 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3db43f6e-38a4-4f5c-bb4b-ddac9e664528-utilities\") pod \"redhat-marketplace-rssg7\" (UID: \"3db43f6e-38a4-4f5c-bb4b-ddac9e664528\") " pod="openshift-marketplace/redhat-marketplace-rssg7" Mar 10 18:51:50 crc kubenswrapper[4861]: I0310 18:51:50.265700 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m7qn4\" (UniqueName: \"kubernetes.io/projected/3db43f6e-38a4-4f5c-bb4b-ddac9e664528-kube-api-access-m7qn4\") pod \"redhat-marketplace-rssg7\" (UID: \"3db43f6e-38a4-4f5c-bb4b-ddac9e664528\") " pod="openshift-marketplace/redhat-marketplace-rssg7" Mar 10 18:51:50 crc kubenswrapper[4861]: I0310 18:51:50.265762 4861 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3db43f6e-38a4-4f5c-bb4b-ddac9e664528-utilities\") pod \"redhat-marketplace-rssg7\" (UID: \"3db43f6e-38a4-4f5c-bb4b-ddac9e664528\") " pod="openshift-marketplace/redhat-marketplace-rssg7" Mar 10 18:51:50 crc kubenswrapper[4861]: I0310 18:51:50.266218 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3db43f6e-38a4-4f5c-bb4b-ddac9e664528-catalog-content\") pod \"redhat-marketplace-rssg7\" (UID: \"3db43f6e-38a4-4f5c-bb4b-ddac9e664528\") " pod="openshift-marketplace/redhat-marketplace-rssg7" Mar 10 18:51:50 crc kubenswrapper[4861]: I0310 18:51:50.266646 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3db43f6e-38a4-4f5c-bb4b-ddac9e664528-catalog-content\") pod \"redhat-marketplace-rssg7\" (UID: \"3db43f6e-38a4-4f5c-bb4b-ddac9e664528\") " pod="openshift-marketplace/redhat-marketplace-rssg7" Mar 10 18:51:50 crc kubenswrapper[4861]: I0310 18:51:50.266659 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3db43f6e-38a4-4f5c-bb4b-ddac9e664528-utilities\") pod \"redhat-marketplace-rssg7\" (UID: \"3db43f6e-38a4-4f5c-bb4b-ddac9e664528\") " pod="openshift-marketplace/redhat-marketplace-rssg7" Mar 10 18:51:50 crc kubenswrapper[4861]: I0310 18:51:50.283776 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m7qn4\" (UniqueName: \"kubernetes.io/projected/3db43f6e-38a4-4f5c-bb4b-ddac9e664528-kube-api-access-m7qn4\") pod \"redhat-marketplace-rssg7\" (UID: \"3db43f6e-38a4-4f5c-bb4b-ddac9e664528\") " pod="openshift-marketplace/redhat-marketplace-rssg7" Mar 10 18:51:50 crc kubenswrapper[4861]: I0310 18:51:50.330507 4861 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Mar 10 18:51:50 crc kubenswrapper[4861]: I0310 18:51:50.331140 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Mar 10 18:51:50 crc kubenswrapper[4861]: I0310 18:51:50.334980 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Mar 10 18:51:50 crc kubenswrapper[4861]: I0310 18:51:50.347484 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n" Mar 10 18:51:50 crc kubenswrapper[4861]: I0310 18:51:50.347671 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Mar 10 18:51:50 crc kubenswrapper[4861]: I0310 18:51:50.370848 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f14dc603-5347-42fb-b6c4-9e835ad09223-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"f14dc603-5347-42fb-b6c4-9e835ad09223\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Mar 10 18:51:50 crc kubenswrapper[4861]: I0310 18:51:50.370908 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f14dc603-5347-42fb-b6c4-9e835ad09223-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"f14dc603-5347-42fb-b6c4-9e835ad09223\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Mar 10 18:51:50 crc kubenswrapper[4861]: I0310 18:51:50.413571 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rssg7" Mar 10 18:51:50 crc kubenswrapper[4861]: I0310 18:51:50.471377 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f14dc603-5347-42fb-b6c4-9e835ad09223-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"f14dc603-5347-42fb-b6c4-9e835ad09223\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Mar 10 18:51:50 crc kubenswrapper[4861]: I0310 18:51:50.471561 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f14dc603-5347-42fb-b6c4-9e835ad09223-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"f14dc603-5347-42fb-b6c4-9e835ad09223\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Mar 10 18:51:50 crc kubenswrapper[4861]: I0310 18:51:50.471848 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f14dc603-5347-42fb-b6c4-9e835ad09223-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"f14dc603-5347-42fb-b6c4-9e835ad09223\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Mar 10 18:51:50 crc kubenswrapper[4861]: I0310 18:51:50.488744 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-57895bf995-rsd89"] Mar 10 18:51:50 crc kubenswrapper[4861]: W0310 18:51:50.496804 4861 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod568106eb_62be_48ec_80eb_ee69e26c5a06.slice/crio-ed43dfd37c2874a874c635e71fe1c6d813c32260adfbfad2dc9f13c6e5ec0ea9 WatchSource:0}: Error finding container ed43dfd37c2874a874c635e71fe1c6d813c32260adfbfad2dc9f13c6e5ec0ea9: Status 404 returned error can't find the container with id ed43dfd37c2874a874c635e71fe1c6d813c32260adfbfad2dc9f13c6e5ec0ea9 Mar 10 
18:51:50 crc kubenswrapper[4861]: I0310 18:51:50.513580 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f14dc603-5347-42fb-b6c4-9e835ad09223-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"f14dc603-5347-42fb-b6c4-9e835ad09223\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Mar 10 18:51:50 crc kubenswrapper[4861]: I0310 18:51:50.549780 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-646bt"] Mar 10 18:51:50 crc kubenswrapper[4861]: I0310 18:51:50.549966 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5f4b956bb9-4xs89"] Mar 10 18:51:50 crc kubenswrapper[4861]: I0310 18:51:50.625579 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-4d9gw" Mar 10 18:51:50 crc kubenswrapper[4861]: I0310 18:51:50.630518 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-4d9gw" Mar 10 18:51:50 crc kubenswrapper[4861]: I0310 18:51:50.670332 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Mar 10 18:51:50 crc kubenswrapper[4861]: I0310 18:51:50.703060 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-rssg7"] Mar 10 18:51:50 crc kubenswrapper[4861]: I0310 18:51:50.835912 4861 patch_prober.go:28] interesting pod/router-default-5444994796-rzhxp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 10 18:51:50 crc kubenswrapper[4861]: [-]has-synced failed: reason withheld Mar 10 18:51:50 crc kubenswrapper[4861]: [+]process-running ok Mar 10 18:51:50 crc kubenswrapper[4861]: healthz check failed Mar 10 18:51:50 crc kubenswrapper[4861]: I0310 18:51:50.836311 4861 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-rzhxp" podUID="cf512bc0-beb2-4784-ac61-7f7a22ccc3e9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 10 18:51:50 crc kubenswrapper[4861]: I0310 18:51:50.835982 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-57895bf995-rsd89" event={"ID":"568106eb-62be-48ec-80eb-ee69e26c5a06","Type":"ContainerStarted","Data":"ed43dfd37c2874a874c635e71fe1c6d813c32260adfbfad2dc9f13c6e5ec0ea9"} Mar 10 18:51:50 crc kubenswrapper[4861]: I0310 18:51:50.854840 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-646bt" event={"ID":"1e1c83ef-91ae-4931-8e31-32890189bb47","Type":"ContainerStarted","Data":"05f41fc4b765d6669ab51c308552e667e2296cdda8f6f72f2cc4eda39206381c"} Mar 10 18:51:50 crc kubenswrapper[4861]: I0310 18:51:50.862471 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-q2hgk"] Mar 10 18:51:50 crc kubenswrapper[4861]: I0310 18:51:50.863739 4861 util.go:30] 
"No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-q2hgk" Mar 10 18:51:50 crc kubenswrapper[4861]: I0310 18:51:50.869101 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Mar 10 18:51:50 crc kubenswrapper[4861]: I0310 18:51:50.869737 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rssg7" event={"ID":"3db43f6e-38a4-4f5c-bb4b-ddac9e664528","Type":"ContainerStarted","Data":"4be108834b542d01902ea058416fa8fb32956ce45b039a90fe06c2e4d4f414bb"} Mar 10 18:51:50 crc kubenswrapper[4861]: I0310 18:51:50.881163 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-q2hgk"] Mar 10 18:51:50 crc kubenswrapper[4861]: I0310 18:51:50.881829 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/876b4458-98a1-4dc2-af8a-3390a56cad59-utilities\") pod \"redhat-operators-q2hgk\" (UID: \"876b4458-98a1-4dc2-af8a-3390a56cad59\") " pod="openshift-marketplace/redhat-operators-q2hgk" Mar 10 18:51:50 crc kubenswrapper[4861]: I0310 18:51:50.881864 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/876b4458-98a1-4dc2-af8a-3390a56cad59-catalog-content\") pod \"redhat-operators-q2hgk\" (UID: \"876b4458-98a1-4dc2-af8a-3390a56cad59\") " pod="openshift-marketplace/redhat-operators-q2hgk" Mar 10 18:51:50 crc kubenswrapper[4861]: I0310 18:51:50.882026 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5fcmd\" (UniqueName: \"kubernetes.io/projected/876b4458-98a1-4dc2-af8a-3390a56cad59-kube-api-access-5fcmd\") pod \"redhat-operators-q2hgk\" (UID: \"876b4458-98a1-4dc2-af8a-3390a56cad59\") " 
pod="openshift-marketplace/redhat-operators-q2hgk" Mar 10 18:51:50 crc kubenswrapper[4861]: I0310 18:51:50.886936 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5f4b956bb9-4xs89" event={"ID":"06c46a69-f46b-40ad-8ef8-0077f969d1f3","Type":"ContainerStarted","Data":"aa9bc07e3c54984f43cf6db4410d4299df52f3f7fc213ab2bd16334d8b52141d"} Mar 10 18:51:50 crc kubenswrapper[4861]: I0310 18:51:50.994141 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5fcmd\" (UniqueName: \"kubernetes.io/projected/876b4458-98a1-4dc2-af8a-3390a56cad59-kube-api-access-5fcmd\") pod \"redhat-operators-q2hgk\" (UID: \"876b4458-98a1-4dc2-af8a-3390a56cad59\") " pod="openshift-marketplace/redhat-operators-q2hgk" Mar 10 18:51:50 crc kubenswrapper[4861]: I0310 18:51:50.994184 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/876b4458-98a1-4dc2-af8a-3390a56cad59-utilities\") pod \"redhat-operators-q2hgk\" (UID: \"876b4458-98a1-4dc2-af8a-3390a56cad59\") " pod="openshift-marketplace/redhat-operators-q2hgk" Mar 10 18:51:50 crc kubenswrapper[4861]: I0310 18:51:50.994209 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/876b4458-98a1-4dc2-af8a-3390a56cad59-catalog-content\") pod \"redhat-operators-q2hgk\" (UID: \"876b4458-98a1-4dc2-af8a-3390a56cad59\") " pod="openshift-marketplace/redhat-operators-q2hgk" Mar 10 18:51:50 crc kubenswrapper[4861]: I0310 18:51:50.995092 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/876b4458-98a1-4dc2-af8a-3390a56cad59-catalog-content\") pod \"redhat-operators-q2hgk\" (UID: \"876b4458-98a1-4dc2-af8a-3390a56cad59\") " pod="openshift-marketplace/redhat-operators-q2hgk" Mar 10 18:51:50 crc 
kubenswrapper[4861]: I0310 18:51:50.995530 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/876b4458-98a1-4dc2-af8a-3390a56cad59-utilities\") pod \"redhat-operators-q2hgk\" (UID: \"876b4458-98a1-4dc2-af8a-3390a56cad59\") " pod="openshift-marketplace/redhat-operators-q2hgk" Mar 10 18:51:51 crc kubenswrapper[4861]: I0310 18:51:51.019121 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5fcmd\" (UniqueName: \"kubernetes.io/projected/876b4458-98a1-4dc2-af8a-3390a56cad59-kube-api-access-5fcmd\") pod \"redhat-operators-q2hgk\" (UID: \"876b4458-98a1-4dc2-af8a-3390a56cad59\") " pod="openshift-marketplace/redhat-operators-q2hgk" Mar 10 18:51:51 crc kubenswrapper[4861]: I0310 18:51:51.080941 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-zg9g7" Mar 10 18:51:51 crc kubenswrapper[4861]: I0310 18:51:51.081016 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-zg9g7" Mar 10 18:51:51 crc kubenswrapper[4861]: I0310 18:51:51.084157 4861 patch_prober.go:28] interesting pod/console-f9d7485db-zg9g7 container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.22:8443/health\": dial tcp 10.217.0.22:8443: connect: connection refused" start-of-body= Mar 10 18:51:51 crc kubenswrapper[4861]: I0310 18:51:51.084210 4861 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-zg9g7" podUID="3db8cb04-007c-48f9-a986-7a503ca1c077" containerName="console" probeResult="failure" output="Get \"https://10.217.0.22:8443/health\": dial tcp 10.217.0.22:8443: connect: connection refused" Mar 10 18:51:51 crc kubenswrapper[4861]: I0310 18:51:51.199083 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-q2hgk" Mar 10 18:51:51 crc kubenswrapper[4861]: I0310 18:51:51.278338 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-57xpw"] Mar 10 18:51:51 crc kubenswrapper[4861]: I0310 18:51:51.279332 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-57xpw" Mar 10 18:51:51 crc kubenswrapper[4861]: I0310 18:51:51.291619 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Mar 10 18:51:51 crc kubenswrapper[4861]: I0310 18:51:51.292910 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-57xpw"] Mar 10 18:51:51 crc kubenswrapper[4861]: I0310 18:51:51.306556 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v8z2w\" (UniqueName: \"kubernetes.io/projected/dad24f7d-329a-449e-bd62-372dfa9f838b-kube-api-access-v8z2w\") pod \"redhat-operators-57xpw\" (UID: \"dad24f7d-329a-449e-bd62-372dfa9f838b\") " pod="openshift-marketplace/redhat-operators-57xpw" Mar 10 18:51:51 crc kubenswrapper[4861]: I0310 18:51:51.306598 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dad24f7d-329a-449e-bd62-372dfa9f838b-utilities\") pod \"redhat-operators-57xpw\" (UID: \"dad24f7d-329a-449e-bd62-372dfa9f838b\") " pod="openshift-marketplace/redhat-operators-57xpw" Mar 10 18:51:51 crc kubenswrapper[4861]: I0310 18:51:51.306687 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dad24f7d-329a-449e-bd62-372dfa9f838b-catalog-content\") pod \"redhat-operators-57xpw\" (UID: \"dad24f7d-329a-449e-bd62-372dfa9f838b\") " 
pod="openshift-marketplace/redhat-operators-57xpw" Mar 10 18:51:51 crc kubenswrapper[4861]: I0310 18:51:51.407663 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dad24f7d-329a-449e-bd62-372dfa9f838b-catalog-content\") pod \"redhat-operators-57xpw\" (UID: \"dad24f7d-329a-449e-bd62-372dfa9f838b\") " pod="openshift-marketplace/redhat-operators-57xpw" Mar 10 18:51:51 crc kubenswrapper[4861]: I0310 18:51:51.408051 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v8z2w\" (UniqueName: \"kubernetes.io/projected/dad24f7d-329a-449e-bd62-372dfa9f838b-kube-api-access-v8z2w\") pod \"redhat-operators-57xpw\" (UID: \"dad24f7d-329a-449e-bd62-372dfa9f838b\") " pod="openshift-marketplace/redhat-operators-57xpw" Mar 10 18:51:51 crc kubenswrapper[4861]: I0310 18:51:51.408076 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dad24f7d-329a-449e-bd62-372dfa9f838b-utilities\") pod \"redhat-operators-57xpw\" (UID: \"dad24f7d-329a-449e-bd62-372dfa9f838b\") " pod="openshift-marketplace/redhat-operators-57xpw" Mar 10 18:51:51 crc kubenswrapper[4861]: I0310 18:51:51.408251 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dad24f7d-329a-449e-bd62-372dfa9f838b-catalog-content\") pod \"redhat-operators-57xpw\" (UID: \"dad24f7d-329a-449e-bd62-372dfa9f838b\") " pod="openshift-marketplace/redhat-operators-57xpw" Mar 10 18:51:51 crc kubenswrapper[4861]: I0310 18:51:51.409422 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dad24f7d-329a-449e-bd62-372dfa9f838b-utilities\") pod \"redhat-operators-57xpw\" (UID: \"dad24f7d-329a-449e-bd62-372dfa9f838b\") " pod="openshift-marketplace/redhat-operators-57xpw" Mar 10 18:51:51 crc 
kubenswrapper[4861]: I0310 18:51:51.431569 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v8z2w\" (UniqueName: \"kubernetes.io/projected/dad24f7d-329a-449e-bd62-372dfa9f838b-kube-api-access-v8z2w\") pod \"redhat-operators-57xpw\" (UID: \"dad24f7d-329a-449e-bd62-372dfa9f838b\") " pod="openshift-marketplace/redhat-operators-57xpw" Mar 10 18:51:51 crc kubenswrapper[4861]: I0310 18:51:51.491419 4861 patch_prober.go:28] interesting pod/downloads-7954f5f757-qsq94 container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.29:8080/\": dial tcp 10.217.0.29:8080: connect: connection refused" start-of-body= Mar 10 18:51:51 crc kubenswrapper[4861]: I0310 18:51:51.491463 4861 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-qsq94" podUID="044ff649-6d8e-4b0b-bfaa-8018e00e105d" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.29:8080/\": dial tcp 10.217.0.29:8080: connect: connection refused" Mar 10 18:51:51 crc kubenswrapper[4861]: I0310 18:51:51.491567 4861 patch_prober.go:28] interesting pod/downloads-7954f5f757-qsq94 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.29:8080/\": dial tcp 10.217.0.29:8080: connect: connection refused" start-of-body= Mar 10 18:51:51 crc kubenswrapper[4861]: I0310 18:51:51.491607 4861 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-qsq94" podUID="044ff649-6d8e-4b0b-bfaa-8018e00e105d" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.29:8080/\": dial tcp 10.217.0.29:8080: connect: connection refused" Mar 10 18:51:51 crc kubenswrapper[4861]: I0310 18:51:51.546687 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-q2hgk"] Mar 10 18:51:51 crc kubenswrapper[4861]: W0310 
18:51:51.571220 4861 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod876b4458_98a1_4dc2_af8a_3390a56cad59.slice/crio-69850f8c203930376fe471512330595032ea53e51de8e7d37c7eed5dc885f28e WatchSource:0}: Error finding container 69850f8c203930376fe471512330595032ea53e51de8e7d37c7eed5dc885f28e: Status 404 returned error can't find the container with id 69850f8c203930376fe471512330595032ea53e51de8e7d37c7eed5dc885f28e Mar 10 18:51:51 crc kubenswrapper[4861]: I0310 18:51:51.608351 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-57xpw" Mar 10 18:51:51 crc kubenswrapper[4861]: I0310 18:51:51.819874 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-rzhxp" Mar 10 18:51:51 crc kubenswrapper[4861]: I0310 18:51:51.823122 4861 patch_prober.go:28] interesting pod/router-default-5444994796-rzhxp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 10 18:51:51 crc kubenswrapper[4861]: [-]has-synced failed: reason withheld Mar 10 18:51:51 crc kubenswrapper[4861]: [+]process-running ok Mar 10 18:51:51 crc kubenswrapper[4861]: healthz check failed Mar 10 18:51:51 crc kubenswrapper[4861]: I0310 18:51:51.823170 4861 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-rzhxp" podUID="cf512bc0-beb2-4784-ac61-7f7a22ccc3e9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 10 18:51:51 crc kubenswrapper[4861]: I0310 18:51:51.870723 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-7qnc2" Mar 10 18:51:51 crc kubenswrapper[4861]: I0310 18:51:51.917011 4861 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"f14dc603-5347-42fb-b6c4-9e835ad09223","Type":"ContainerStarted","Data":"13771a6888ab5bbeab36e989f2d890a11009c9fab05252b47c4d03c3361c96b8"} Mar 10 18:51:51 crc kubenswrapper[4861]: I0310 18:51:51.933295 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-57895bf995-rsd89" event={"ID":"568106eb-62be-48ec-80eb-ee69e26c5a06","Type":"ContainerStarted","Data":"c027cb1cd56f872efdee5c913044d455f2b69d9849461e4dfe30073162bf3f25"} Mar 10 18:51:51 crc kubenswrapper[4861]: I0310 18:51:51.934006 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-57895bf995-rsd89" Mar 10 18:51:51 crc kubenswrapper[4861]: I0310 18:51:51.937585 4861 generic.go:334] "Generic (PLEG): container finished" podID="1e1c83ef-91ae-4931-8e31-32890189bb47" containerID="2d4a97493728b50e837560de824592b36d804dde9a445f084d9110824d2ea31e" exitCode=0 Mar 10 18:51:51 crc kubenswrapper[4861]: I0310 18:51:51.937638 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-646bt" event={"ID":"1e1c83ef-91ae-4931-8e31-32890189bb47","Type":"ContainerDied","Data":"2d4a97493728b50e837560de824592b36d804dde9a445f084d9110824d2ea31e"} Mar 10 18:51:51 crc kubenswrapper[4861]: I0310 18:51:51.942878 4861 generic.go:334] "Generic (PLEG): container finished" podID="3db43f6e-38a4-4f5c-bb4b-ddac9e664528" containerID="a80f2825ea4a3fe8ab77fca76382e3a3afbc365126134d117941e8369df90264" exitCode=0 Mar 10 18:51:51 crc kubenswrapper[4861]: I0310 18:51:51.942999 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rssg7" event={"ID":"3db43f6e-38a4-4f5c-bb4b-ddac9e664528","Type":"ContainerDied","Data":"a80f2825ea4a3fe8ab77fca76382e3a3afbc365126134d117941e8369df90264"} Mar 10 18:51:51 crc kubenswrapper[4861]: I0310 18:51:51.947044 
4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5f4b956bb9-4xs89" event={"ID":"06c46a69-f46b-40ad-8ef8-0077f969d1f3","Type":"ContainerStarted","Data":"23e1e23410660d0e69a74ca3b48f8f9669d03eea7fadd33816c38deb95726ec2"} Mar 10 18:51:51 crc kubenswrapper[4861]: I0310 18:51:51.948090 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-5f4b956bb9-4xs89" Mar 10 18:51:51 crc kubenswrapper[4861]: I0310 18:51:51.948339 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-57895bf995-rsd89" Mar 10 18:51:51 crc kubenswrapper[4861]: I0310 18:51:51.959056 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-57895bf995-rsd89" podStartSLOduration=4.959022464 podStartE2EDuration="4.959022464s" podCreationTimestamp="2026-03-10 18:51:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-10 18:51:51.955782372 +0000 UTC m=+255.719218342" watchObservedRunningTime="2026-03-10 18:51:51.959022464 +0000 UTC m=+255.722458424" Mar 10 18:51:51 crc kubenswrapper[4861]: I0310 18:51:51.961485 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-5f4b956bb9-4xs89" Mar 10 18:51:51 crc kubenswrapper[4861]: I0310 18:51:51.962559 4861 generic.go:334] "Generic (PLEG): container finished" podID="876b4458-98a1-4dc2-af8a-3390a56cad59" containerID="05e72fc328cfcbce1a172ab772d29a6a63d39eecef2b19be5b91eddfda5ce588" exitCode=0 Mar 10 18:51:51 crc kubenswrapper[4861]: I0310 18:51:51.962741 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-q2hgk" 
event={"ID":"876b4458-98a1-4dc2-af8a-3390a56cad59","Type":"ContainerDied","Data":"05e72fc328cfcbce1a172ab772d29a6a63d39eecef2b19be5b91eddfda5ce588"} Mar 10 18:51:51 crc kubenswrapper[4861]: I0310 18:51:51.962767 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-q2hgk" event={"ID":"876b4458-98a1-4dc2-af8a-3390a56cad59","Type":"ContainerStarted","Data":"69850f8c203930376fe471512330595032ea53e51de8e7d37c7eed5dc885f28e"} Mar 10 18:51:51 crc kubenswrapper[4861]: I0310 18:51:51.991792 4861 patch_prober.go:28] interesting pod/machine-config-daemon-qttbr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 10 18:51:51 crc kubenswrapper[4861]: I0310 18:51:51.991853 4861 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qttbr" podUID="771189c2-452d-4204-a0b7-abfe9ba62bd0" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 10 18:51:52 crc kubenswrapper[4861]: I0310 18:51:51.999461 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-5f4b956bb9-4xs89" podStartSLOduration=4.999438621 podStartE2EDuration="4.999438621s" podCreationTimestamp="2026-03-10 18:51:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-10 18:51:51.985420626 +0000 UTC m=+255.748856596" watchObservedRunningTime="2026-03-10 18:51:51.999438621 +0000 UTC m=+255.762874581" Mar 10 18:51:52 crc kubenswrapper[4861]: I0310 18:51:52.121579 4861 ???:1] "http: TLS handshake error from 192.168.126.11:59352: no serving certificate available for the 
kubelet" Mar 10 18:51:52 crc kubenswrapper[4861]: I0310 18:51:52.154526 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Mar 10 18:51:52 crc kubenswrapper[4861]: I0310 18:51:52.155242 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Mar 10 18:51:52 crc kubenswrapper[4861]: I0310 18:51:52.157582 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Mar 10 18:51:52 crc kubenswrapper[4861]: I0310 18:51:52.157794 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Mar 10 18:51:52 crc kubenswrapper[4861]: I0310 18:51:52.163017 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Mar 10 18:51:52 crc kubenswrapper[4861]: I0310 18:51:52.212351 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-57xpw"] Mar 10 18:51:52 crc kubenswrapper[4861]: I0310 18:51:52.342875 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/41593fdb-2327-42db-b334-4418927f0367-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"41593fdb-2327-42db-b334-4418927f0367\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Mar 10 18:51:52 crc kubenswrapper[4861]: I0310 18:51:52.342919 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/41593fdb-2327-42db-b334-4418927f0367-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"41593fdb-2327-42db-b334-4418927f0367\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Mar 10 18:51:52 crc kubenswrapper[4861]: I0310 18:51:52.393587 4861 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-s2l62" Mar 10 18:51:52 crc kubenswrapper[4861]: I0310 18:51:52.445749 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/41593fdb-2327-42db-b334-4418927f0367-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"41593fdb-2327-42db-b334-4418927f0367\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Mar 10 18:51:52 crc kubenswrapper[4861]: I0310 18:51:52.445835 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/41593fdb-2327-42db-b334-4418927f0367-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"41593fdb-2327-42db-b334-4418927f0367\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Mar 10 18:51:52 crc kubenswrapper[4861]: I0310 18:51:52.446009 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/41593fdb-2327-42db-b334-4418927f0367-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"41593fdb-2327-42db-b334-4418927f0367\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Mar 10 18:51:52 crc kubenswrapper[4861]: I0310 18:51:52.489297 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/41593fdb-2327-42db-b334-4418927f0367-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"41593fdb-2327-42db-b334-4418927f0367\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Mar 10 18:51:52 crc kubenswrapper[4861]: I0310 18:51:52.784791 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Mar 10 18:51:52 crc kubenswrapper[4861]: I0310 18:51:52.823040 4861 patch_prober.go:28] interesting pod/router-default-5444994796-rzhxp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 10 18:51:52 crc kubenswrapper[4861]: [-]has-synced failed: reason withheld Mar 10 18:51:52 crc kubenswrapper[4861]: [+]process-running ok Mar 10 18:51:52 crc kubenswrapper[4861]: healthz check failed Mar 10 18:51:52 crc kubenswrapper[4861]: I0310 18:51:52.823108 4861 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-rzhxp" podUID="cf512bc0-beb2-4784-ac61-7f7a22ccc3e9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 10 18:51:52 crc kubenswrapper[4861]: I0310 18:51:52.969688 4861 generic.go:334] "Generic (PLEG): container finished" podID="f14dc603-5347-42fb-b6c4-9e835ad09223" containerID="7ab755247d0e7096c66a6034bf8238927edd681b9d9970dc0b407d41d8d3800f" exitCode=0 Mar 10 18:51:52 crc kubenswrapper[4861]: I0310 18:51:52.969735 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"f14dc603-5347-42fb-b6c4-9e835ad09223","Type":"ContainerDied","Data":"7ab755247d0e7096c66a6034bf8238927edd681b9d9970dc0b407d41d8d3800f"} Mar 10 18:51:53 crc kubenswrapper[4861]: I0310 18:51:53.821485 4861 patch_prober.go:28] interesting pod/router-default-5444994796-rzhxp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 10 18:51:53 crc kubenswrapper[4861]: [-]has-synced failed: reason withheld Mar 10 18:51:53 crc kubenswrapper[4861]: [+]process-running ok Mar 10 18:51:53 crc kubenswrapper[4861]: healthz 
check failed Mar 10 18:51:53 crc kubenswrapper[4861]: I0310 18:51:53.821755 4861 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-rzhxp" podUID="cf512bc0-beb2-4784-ac61-7f7a22ccc3e9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 10 18:51:53 crc kubenswrapper[4861]: I0310 18:51:53.979334 4861 ???:1] "http: TLS handshake error from 192.168.126.11:59358: no serving certificate available for the kubelet" Mar 10 18:51:54 crc kubenswrapper[4861]: I0310 18:51:54.822522 4861 patch_prober.go:28] interesting pod/router-default-5444994796-rzhxp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 10 18:51:54 crc kubenswrapper[4861]: [-]has-synced failed: reason withheld Mar 10 18:51:54 crc kubenswrapper[4861]: [+]process-running ok Mar 10 18:51:54 crc kubenswrapper[4861]: healthz check failed Mar 10 18:51:54 crc kubenswrapper[4861]: I0310 18:51:54.822585 4861 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-rzhxp" podUID="cf512bc0-beb2-4784-ac61-7f7a22ccc3e9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 10 18:51:55 crc kubenswrapper[4861]: I0310 18:51:55.822374 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-rzhxp" Mar 10 18:51:55 crc kubenswrapper[4861]: I0310 18:51:55.825043 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-rzhxp" Mar 10 18:51:56 crc kubenswrapper[4861]: I0310 18:51:56.696528 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-kvtz7" Mar 10 18:51:57 crc kubenswrapper[4861]: I0310 18:51:57.266581 4861 ???:1] "http: TLS handshake error 
from 192.168.126.11:59372: no serving certificate available for the kubelet" Mar 10 18:52:00 crc kubenswrapper[4861]: I0310 18:52:00.136597 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29552812-4d5sp"] Mar 10 18:52:00 crc kubenswrapper[4861]: I0310 18:52:00.138326 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29552812-4d5sp" Mar 10 18:52:00 crc kubenswrapper[4861]: I0310 18:52:00.143573 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-gfbj2" Mar 10 18:52:00 crc kubenswrapper[4861]: I0310 18:52:00.160900 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29552812-4d5sp"] Mar 10 18:52:00 crc kubenswrapper[4861]: I0310 18:52:00.199940 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-82c9h\" (UniqueName: \"kubernetes.io/projected/97fa138b-1e58-48b3-90ba-2ed750d3f4f1-kube-api-access-82c9h\") pod \"auto-csr-approver-29552812-4d5sp\" (UID: \"97fa138b-1e58-48b3-90ba-2ed750d3f4f1\") " pod="openshift-infra/auto-csr-approver-29552812-4d5sp" Mar 10 18:52:00 crc kubenswrapper[4861]: I0310 18:52:00.301685 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-82c9h\" (UniqueName: \"kubernetes.io/projected/97fa138b-1e58-48b3-90ba-2ed750d3f4f1-kube-api-access-82c9h\") pod \"auto-csr-approver-29552812-4d5sp\" (UID: \"97fa138b-1e58-48b3-90ba-2ed750d3f4f1\") " pod="openshift-infra/auto-csr-approver-29552812-4d5sp" Mar 10 18:52:00 crc kubenswrapper[4861]: I0310 18:52:00.327626 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-82c9h\" (UniqueName: \"kubernetes.io/projected/97fa138b-1e58-48b3-90ba-2ed750d3f4f1-kube-api-access-82c9h\") pod \"auto-csr-approver-29552812-4d5sp\" (UID: 
\"97fa138b-1e58-48b3-90ba-2ed750d3f4f1\") " pod="openshift-infra/auto-csr-approver-29552812-4d5sp" Mar 10 18:52:00 crc kubenswrapper[4861]: I0310 18:52:00.495125 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29552812-4d5sp" Mar 10 18:52:01 crc kubenswrapper[4861]: I0310 18:52:01.096491 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-zg9g7" Mar 10 18:52:01 crc kubenswrapper[4861]: I0310 18:52:01.102297 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-zg9g7" Mar 10 18:52:01 crc kubenswrapper[4861]: W0310 18:52:01.233505 4861 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddad24f7d_329a_449e_bd62_372dfa9f838b.slice/crio-2ed5f404853ff54cd172ef92b43177c20a7992c4aad3519023e4072bd21dcdcb WatchSource:0}: Error finding container 2ed5f404853ff54cd172ef92b43177c20a7992c4aad3519023e4072bd21dcdcb: Status 404 returned error can't find the container with id 2ed5f404853ff54cd172ef92b43177c20a7992c4aad3519023e4072bd21dcdcb Mar 10 18:52:01 crc kubenswrapper[4861]: I0310 18:52:01.272071 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Mar 10 18:52:01 crc kubenswrapper[4861]: I0310 18:52:01.316402 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f14dc603-5347-42fb-b6c4-9e835ad09223-kubelet-dir\") pod \"f14dc603-5347-42fb-b6c4-9e835ad09223\" (UID: \"f14dc603-5347-42fb-b6c4-9e835ad09223\") " Mar 10 18:52:01 crc kubenswrapper[4861]: I0310 18:52:01.316556 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f14dc603-5347-42fb-b6c4-9e835ad09223-kube-api-access\") pod \"f14dc603-5347-42fb-b6c4-9e835ad09223\" (UID: \"f14dc603-5347-42fb-b6c4-9e835ad09223\") " Mar 10 18:52:01 crc kubenswrapper[4861]: I0310 18:52:01.328934 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f14dc603-5347-42fb-b6c4-9e835ad09223-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "f14dc603-5347-42fb-b6c4-9e835ad09223" (UID: "f14dc603-5347-42fb-b6c4-9e835ad09223"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 10 18:52:01 crc kubenswrapper[4861]: I0310 18:52:01.329198 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f14dc603-5347-42fb-b6c4-9e835ad09223-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "f14dc603-5347-42fb-b6c4-9e835ad09223" (UID: "f14dc603-5347-42fb-b6c4-9e835ad09223"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 10 18:52:01 crc kubenswrapper[4861]: I0310 18:52:01.418525 4861 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f14dc603-5347-42fb-b6c4-9e835ad09223-kubelet-dir\") on node \"crc\" DevicePath \"\"" Mar 10 18:52:01 crc kubenswrapper[4861]: I0310 18:52:01.418584 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f14dc603-5347-42fb-b6c4-9e835ad09223-kube-api-access\") on node \"crc\" DevicePath \"\"" Mar 10 18:52:01 crc kubenswrapper[4861]: I0310 18:52:01.497322 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-qsq94" Mar 10 18:52:02 crc kubenswrapper[4861]: I0310 18:52:02.017214 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-57xpw" event={"ID":"dad24f7d-329a-449e-bd62-372dfa9f838b","Type":"ContainerStarted","Data":"2ed5f404853ff54cd172ef92b43177c20a7992c4aad3519023e4072bd21dcdcb"} Mar 10 18:52:02 crc kubenswrapper[4861]: I0310 18:52:02.019051 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Mar 10 18:52:02 crc kubenswrapper[4861]: I0310 18:52:02.021092 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"f14dc603-5347-42fb-b6c4-9e835ad09223","Type":"ContainerDied","Data":"13771a6888ab5bbeab36e989f2d890a11009c9fab05252b47c4d03c3361c96b8"} Mar 10 18:52:02 crc kubenswrapper[4861]: I0310 18:52:02.021142 4861 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="13771a6888ab5bbeab36e989f2d890a11009c9fab05252b47c4d03c3361c96b8" Mar 10 18:52:03 crc kubenswrapper[4861]: E0310 18:52:03.687371 4861 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/openshift4/ose-cli:latest" Mar 10 18:52:03 crc kubenswrapper[4861]: E0310 18:52:03.687861 4861 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 10 18:52:03 crc kubenswrapper[4861]: container &Container{Name:oc,Image:registry.redhat.io/openshift4/ose-cli:latest,Command:[/bin/bash -c oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve Mar 10 18:52:03 crc kubenswrapper[4861]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-5x9pz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod auto-csr-approver-29552810-wv88x_openshift-infra(19e950a0-6f71-44af-b995-f1ef1be6edbb): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled Mar 10 18:52:03 crc kubenswrapper[4861]: > logger="UnhandledError" Mar 10 18:52:03 crc kubenswrapper[4861]: E0310 18:52:03.689015 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-infra/auto-csr-approver-29552810-wv88x" podUID="19e950a0-6f71-44af-b995-f1ef1be6edbb" Mar 10 18:52:04 crc kubenswrapper[4861]: E0310 18:52:04.028825 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29552810-wv88x" podUID="19e950a0-6f71-44af-b995-f1ef1be6edbb" Mar 10 18:52:06 crc kubenswrapper[4861]: I0310 18:52:06.811500 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-57895bf995-rsd89"] Mar 10 18:52:06 crc 
kubenswrapper[4861]: I0310 18:52:06.812060 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-57895bf995-rsd89" podUID="568106eb-62be-48ec-80eb-ee69e26c5a06" containerName="controller-manager" containerID="cri-o://c027cb1cd56f872efdee5c913044d455f2b69d9849461e4dfe30073162bf3f25" gracePeriod=30 Mar 10 18:52:06 crc kubenswrapper[4861]: I0310 18:52:06.816469 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5f4b956bb9-4xs89"] Mar 10 18:52:06 crc kubenswrapper[4861]: I0310 18:52:06.816836 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-5f4b956bb9-4xs89" podUID="06c46a69-f46b-40ad-8ef8-0077f969d1f3" containerName="route-controller-manager" containerID="cri-o://23e1e23410660d0e69a74ca3b48f8f9669d03eea7fadd33816c38deb95726ec2" gracePeriod=30 Mar 10 18:52:08 crc kubenswrapper[4861]: I0310 18:52:08.053995 4861 generic.go:334] "Generic (PLEG): container finished" podID="06c46a69-f46b-40ad-8ef8-0077f969d1f3" containerID="23e1e23410660d0e69a74ca3b48f8f9669d03eea7fadd33816c38deb95726ec2" exitCode=0 Mar 10 18:52:08 crc kubenswrapper[4861]: I0310 18:52:08.054136 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5f4b956bb9-4xs89" event={"ID":"06c46a69-f46b-40ad-8ef8-0077f969d1f3","Type":"ContainerDied","Data":"23e1e23410660d0e69a74ca3b48f8f9669d03eea7fadd33816c38deb95726ec2"} Mar 10 18:52:08 crc kubenswrapper[4861]: I0310 18:52:08.055784 4861 generic.go:334] "Generic (PLEG): container finished" podID="568106eb-62be-48ec-80eb-ee69e26c5a06" containerID="c027cb1cd56f872efdee5c913044d455f2b69d9849461e4dfe30073162bf3f25" exitCode=0 Mar 10 18:52:08 crc kubenswrapper[4861]: I0310 18:52:08.055817 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-controller-manager/controller-manager-57895bf995-rsd89" event={"ID":"568106eb-62be-48ec-80eb-ee69e26c5a06","Type":"ContainerDied","Data":"c027cb1cd56f872efdee5c913044d455f2b69d9849461e4dfe30073162bf3f25"} Mar 10 18:52:08 crc kubenswrapper[4861]: I0310 18:52:08.239937 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Mar 10 18:52:08 crc kubenswrapper[4861]: I0310 18:52:08.673988 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-j4q94" Mar 10 18:52:09 crc kubenswrapper[4861]: E0310 18:52:09.479026 4861 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Mar 10 18:52:09 crc kubenswrapper[4861]: E0310 18:52:09.479560 4861 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-j67kt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-65ndj_openshift-marketplace(832146f2-ed86-4794-a676-13d3df8679ad): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Mar 10 18:52:09 crc kubenswrapper[4861]: E0310 18:52:09.481233 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-65ndj" podUID="832146f2-ed86-4794-a676-13d3df8679ad" Mar 10 18:52:10 crc 
kubenswrapper[4861]: E0310 18:52:10.840284 4861 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Mar 10 18:52:10 crc kubenswrapper[4861]: E0310 18:52:10.840471 4861 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lghk9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
community-operators-pmtqc_openshift-marketplace(edc16f3e-454b-4167-9d26-c50bba23281e): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Mar 10 18:52:10 crc kubenswrapper[4861]: E0310 18:52:10.841633 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-pmtqc" podUID="edc16f3e-454b-4167-9d26-c50bba23281e" Mar 10 18:52:10 crc kubenswrapper[4861]: E0310 18:52:10.897661 4861 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Mar 10 18:52:10 crc kubenswrapper[4861]: E0310 18:52:10.898041 4861 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nhjx5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-sj857_openshift-marketplace(1a9210c4-9579-4cfc-bf99-b652c3af6915): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Mar 10 18:52:10 crc kubenswrapper[4861]: E0310 18:52:10.899175 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-sj857" podUID="1a9210c4-9579-4cfc-bf99-b652c3af6915" Mar 10 18:52:10 crc 
kubenswrapper[4861]: E0310 18:52:10.906136 4861 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Mar 10 18:52:10 crc kubenswrapper[4861]: E0310 18:52:10.906323 4861 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fckjm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
certified-operators-hw926_openshift-marketplace(c76065df-dec9-4b14-bd49-8e2d134bf53f): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Mar 10 18:52:10 crc kubenswrapper[4861]: E0310 18:52:10.907557 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-hw926" podUID="c76065df-dec9-4b14-bd49-8e2d134bf53f" Mar 10 18:52:10 crc kubenswrapper[4861]: I0310 18:52:10.967639 4861 patch_prober.go:28] interesting pod/route-controller-manager-5f4b956bb9-4xs89 container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.49:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 10 18:52:10 crc kubenswrapper[4861]: I0310 18:52:10.967734 4861 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-5f4b956bb9-4xs89" podUID="06c46a69-f46b-40ad-8ef8-0077f969d1f3" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.49:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 10 18:52:10 crc kubenswrapper[4861]: I0310 18:52:10.969999 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29552812-4d5sp"] Mar 10 18:52:10 crc kubenswrapper[4861]: I0310 18:52:10.998774 4861 patch_prober.go:28] interesting pod/controller-manager-57895bf995-rsd89 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get 
\"https://10.217.0.50:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 10 18:52:10 crc kubenswrapper[4861]: I0310 18:52:10.998817 4861 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-57895bf995-rsd89" podUID="568106eb-62be-48ec-80eb-ee69e26c5a06" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.50:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 10 18:52:13 crc kubenswrapper[4861]: W0310 18:52:13.268279 4861 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod41593fdb_2327_42db_b334_4418927f0367.slice/crio-25363642c273f068985e79cca732b6a2ff955f416d404bd42bf08e53ae00f203 WatchSource:0}: Error finding container 25363642c273f068985e79cca732b6a2ff955f416d404bd42bf08e53ae00f203: Status 404 returned error can't find the container with id 25363642c273f068985e79cca732b6a2ff955f416d404bd42bf08e53ae00f203 Mar 10 18:52:13 crc kubenswrapper[4861]: E0310 18:52:13.268823 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-sj857" podUID="1a9210c4-9579-4cfc-bf99-b652c3af6915" Mar 10 18:52:13 crc kubenswrapper[4861]: E0310 18:52:13.268854 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-65ndj" podUID="832146f2-ed86-4794-a676-13d3df8679ad" Mar 10 18:52:13 crc kubenswrapper[4861]: E0310 18:52:13.268920 4861 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-pmtqc" podUID="edc16f3e-454b-4167-9d26-c50bba23281e" Mar 10 18:52:13 crc kubenswrapper[4861]: E0310 18:52:13.269024 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-hw926" podUID="c76065df-dec9-4b14-bd49-8e2d134bf53f" Mar 10 18:52:13 crc kubenswrapper[4861]: W0310 18:52:13.277091 4861 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod97fa138b_1e58_48b3_90ba_2ed750d3f4f1.slice/crio-bc8bd0331722f67dd0ce9823f7ab0f1cc913ad73695378483722763cc37000bd WatchSource:0}: Error finding container bc8bd0331722f67dd0ce9823f7ab0f1cc913ad73695378483722763cc37000bd: Status 404 returned error can't find the container with id bc8bd0331722f67dd0ce9823f7ab0f1cc913ad73695378483722763cc37000bd Mar 10 18:52:13 crc kubenswrapper[4861]: I0310 18:52:13.370004 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-57895bf995-rsd89" Mar 10 18:52:13 crc kubenswrapper[4861]: I0310 18:52:13.377988 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5f4b956bb9-4xs89" Mar 10 18:52:13 crc kubenswrapper[4861]: I0310 18:52:13.407968 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-84994c9698-clqx6"] Mar 10 18:52:13 crc kubenswrapper[4861]: E0310 18:52:13.408210 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="568106eb-62be-48ec-80eb-ee69e26c5a06" containerName="controller-manager" Mar 10 18:52:13 crc kubenswrapper[4861]: I0310 18:52:13.408224 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="568106eb-62be-48ec-80eb-ee69e26c5a06" containerName="controller-manager" Mar 10 18:52:13 crc kubenswrapper[4861]: E0310 18:52:13.408238 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="06c46a69-f46b-40ad-8ef8-0077f969d1f3" containerName="route-controller-manager" Mar 10 18:52:13 crc kubenswrapper[4861]: I0310 18:52:13.408247 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="06c46a69-f46b-40ad-8ef8-0077f969d1f3" containerName="route-controller-manager" Mar 10 18:52:13 crc kubenswrapper[4861]: E0310 18:52:13.408257 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f14dc603-5347-42fb-b6c4-9e835ad09223" containerName="pruner" Mar 10 18:52:13 crc kubenswrapper[4861]: I0310 18:52:13.408265 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="f14dc603-5347-42fb-b6c4-9e835ad09223" containerName="pruner" Mar 10 18:52:13 crc kubenswrapper[4861]: I0310 18:52:13.408395 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="f14dc603-5347-42fb-b6c4-9e835ad09223" containerName="pruner" Mar 10 18:52:13 crc kubenswrapper[4861]: I0310 18:52:13.408410 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="568106eb-62be-48ec-80eb-ee69e26c5a06" containerName="controller-manager" Mar 10 18:52:13 crc kubenswrapper[4861]: I0310 18:52:13.408420 4861 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="06c46a69-f46b-40ad-8ef8-0077f969d1f3" containerName="route-controller-manager" Mar 10 18:52:13 crc kubenswrapper[4861]: I0310 18:52:13.408839 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-84994c9698-clqx6" Mar 10 18:52:13 crc kubenswrapper[4861]: I0310 18:52:13.426215 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-84994c9698-clqx6"] Mar 10 18:52:13 crc kubenswrapper[4861]: I0310 18:52:13.502680 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/06c46a69-f46b-40ad-8ef8-0077f969d1f3-client-ca\") pod \"06c46a69-f46b-40ad-8ef8-0077f969d1f3\" (UID: \"06c46a69-f46b-40ad-8ef8-0077f969d1f3\") " Mar 10 18:52:13 crc kubenswrapper[4861]: I0310 18:52:13.502828 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/568106eb-62be-48ec-80eb-ee69e26c5a06-proxy-ca-bundles\") pod \"568106eb-62be-48ec-80eb-ee69e26c5a06\" (UID: \"568106eb-62be-48ec-80eb-ee69e26c5a06\") " Mar 10 18:52:13 crc kubenswrapper[4861]: I0310 18:52:13.502964 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/568106eb-62be-48ec-80eb-ee69e26c5a06-config\") pod \"568106eb-62be-48ec-80eb-ee69e26c5a06\" (UID: \"568106eb-62be-48ec-80eb-ee69e26c5a06\") " Mar 10 18:52:13 crc kubenswrapper[4861]: I0310 18:52:13.503818 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/568106eb-62be-48ec-80eb-ee69e26c5a06-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "568106eb-62be-48ec-80eb-ee69e26c5a06" (UID: "568106eb-62be-48ec-80eb-ee69e26c5a06"). InnerVolumeSpecName "proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 10 18:52:13 crc kubenswrapper[4861]: I0310 18:52:13.503817 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/06c46a69-f46b-40ad-8ef8-0077f969d1f3-client-ca" (OuterVolumeSpecName: "client-ca") pod "06c46a69-f46b-40ad-8ef8-0077f969d1f3" (UID: "06c46a69-f46b-40ad-8ef8-0077f969d1f3"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 10 18:52:13 crc kubenswrapper[4861]: I0310 18:52:13.504111 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/568106eb-62be-48ec-80eb-ee69e26c5a06-serving-cert\") pod \"568106eb-62be-48ec-80eb-ee69e26c5a06\" (UID: \"568106eb-62be-48ec-80eb-ee69e26c5a06\") " Mar 10 18:52:13 crc kubenswrapper[4861]: I0310 18:52:13.504210 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk6f7\" (UniqueName: \"kubernetes.io/projected/06c46a69-f46b-40ad-8ef8-0077f969d1f3-kube-api-access-tk6f7\") pod \"06c46a69-f46b-40ad-8ef8-0077f969d1f3\" (UID: \"06c46a69-f46b-40ad-8ef8-0077f969d1f3\") " Mar 10 18:52:13 crc kubenswrapper[4861]: I0310 18:52:13.504285 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/568106eb-62be-48ec-80eb-ee69e26c5a06-config" (OuterVolumeSpecName: "config") pod "568106eb-62be-48ec-80eb-ee69e26c5a06" (UID: "568106eb-62be-48ec-80eb-ee69e26c5a06"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 10 18:52:13 crc kubenswrapper[4861]: I0310 18:52:13.504380 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mxnzl\" (UniqueName: \"kubernetes.io/projected/568106eb-62be-48ec-80eb-ee69e26c5a06-kube-api-access-mxnzl\") pod \"568106eb-62be-48ec-80eb-ee69e26c5a06\" (UID: \"568106eb-62be-48ec-80eb-ee69e26c5a06\") " Mar 10 18:52:13 crc kubenswrapper[4861]: I0310 18:52:13.505046 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/568106eb-62be-48ec-80eb-ee69e26c5a06-client-ca\") pod \"568106eb-62be-48ec-80eb-ee69e26c5a06\" (UID: \"568106eb-62be-48ec-80eb-ee69e26c5a06\") " Mar 10 18:52:13 crc kubenswrapper[4861]: I0310 18:52:13.505095 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/06c46a69-f46b-40ad-8ef8-0077f969d1f3-serving-cert\") pod \"06c46a69-f46b-40ad-8ef8-0077f969d1f3\" (UID: \"06c46a69-f46b-40ad-8ef8-0077f969d1f3\") " Mar 10 18:52:13 crc kubenswrapper[4861]: I0310 18:52:13.505145 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/06c46a69-f46b-40ad-8ef8-0077f969d1f3-config\") pod \"06c46a69-f46b-40ad-8ef8-0077f969d1f3\" (UID: \"06c46a69-f46b-40ad-8ef8-0077f969d1f3\") " Mar 10 18:52:13 crc kubenswrapper[4861]: I0310 18:52:13.505490 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5s2x9\" (UniqueName: \"kubernetes.io/projected/74a39c36-1d01-4d26-a604-4c4739bf2e11-kube-api-access-5s2x9\") pod \"controller-manager-84994c9698-clqx6\" (UID: \"74a39c36-1d01-4d26-a604-4c4739bf2e11\") " pod="openshift-controller-manager/controller-manager-84994c9698-clqx6" Mar 10 18:52:13 crc kubenswrapper[4861]: I0310 18:52:13.505547 4861 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/568106eb-62be-48ec-80eb-ee69e26c5a06-client-ca" (OuterVolumeSpecName: "client-ca") pod "568106eb-62be-48ec-80eb-ee69e26c5a06" (UID: "568106eb-62be-48ec-80eb-ee69e26c5a06"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 10 18:52:13 crc kubenswrapper[4861]: I0310 18:52:13.505582 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/74a39c36-1d01-4d26-a604-4c4739bf2e11-serving-cert\") pod \"controller-manager-84994c9698-clqx6\" (UID: \"74a39c36-1d01-4d26-a604-4c4739bf2e11\") " pod="openshift-controller-manager/controller-manager-84994c9698-clqx6" Mar 10 18:52:13 crc kubenswrapper[4861]: I0310 18:52:13.505787 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/74a39c36-1d01-4d26-a604-4c4739bf2e11-proxy-ca-bundles\") pod \"controller-manager-84994c9698-clqx6\" (UID: \"74a39c36-1d01-4d26-a604-4c4739bf2e11\") " pod="openshift-controller-manager/controller-manager-84994c9698-clqx6" Mar 10 18:52:13 crc kubenswrapper[4861]: I0310 18:52:13.505843 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/74a39c36-1d01-4d26-a604-4c4739bf2e11-client-ca\") pod \"controller-manager-84994c9698-clqx6\" (UID: \"74a39c36-1d01-4d26-a604-4c4739bf2e11\") " pod="openshift-controller-manager/controller-manager-84994c9698-clqx6" Mar 10 18:52:13 crc kubenswrapper[4861]: I0310 18:52:13.505974 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/74a39c36-1d01-4d26-a604-4c4739bf2e11-config\") pod \"controller-manager-84994c9698-clqx6\" (UID: 
\"74a39c36-1d01-4d26-a604-4c4739bf2e11\") " pod="openshift-controller-manager/controller-manager-84994c9698-clqx6" Mar 10 18:52:13 crc kubenswrapper[4861]: I0310 18:52:13.506050 4861 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/568106eb-62be-48ec-80eb-ee69e26c5a06-client-ca\") on node \"crc\" DevicePath \"\"" Mar 10 18:52:13 crc kubenswrapper[4861]: I0310 18:52:13.506063 4861 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/06c46a69-f46b-40ad-8ef8-0077f969d1f3-client-ca\") on node \"crc\" DevicePath \"\"" Mar 10 18:52:13 crc kubenswrapper[4861]: I0310 18:52:13.506073 4861 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/568106eb-62be-48ec-80eb-ee69e26c5a06-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Mar 10 18:52:13 crc kubenswrapper[4861]: I0310 18:52:13.506084 4861 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/568106eb-62be-48ec-80eb-ee69e26c5a06-config\") on node \"crc\" DevicePath \"\"" Mar 10 18:52:13 crc kubenswrapper[4861]: I0310 18:52:13.506607 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/06c46a69-f46b-40ad-8ef8-0077f969d1f3-config" (OuterVolumeSpecName: "config") pod "06c46a69-f46b-40ad-8ef8-0077f969d1f3" (UID: "06c46a69-f46b-40ad-8ef8-0077f969d1f3"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 10 18:52:13 crc kubenswrapper[4861]: I0310 18:52:13.510260 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/568106eb-62be-48ec-80eb-ee69e26c5a06-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "568106eb-62be-48ec-80eb-ee69e26c5a06" (UID: "568106eb-62be-48ec-80eb-ee69e26c5a06"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 10 18:52:13 crc kubenswrapper[4861]: I0310 18:52:13.510851 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/06c46a69-f46b-40ad-8ef8-0077f969d1f3-kube-api-access-tk6f7" (OuterVolumeSpecName: "kube-api-access-tk6f7") pod "06c46a69-f46b-40ad-8ef8-0077f969d1f3" (UID: "06c46a69-f46b-40ad-8ef8-0077f969d1f3"). InnerVolumeSpecName "kube-api-access-tk6f7". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 10 18:52:13 crc kubenswrapper[4861]: I0310 18:52:13.511213 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/568106eb-62be-48ec-80eb-ee69e26c5a06-kube-api-access-mxnzl" (OuterVolumeSpecName: "kube-api-access-mxnzl") pod "568106eb-62be-48ec-80eb-ee69e26c5a06" (UID: "568106eb-62be-48ec-80eb-ee69e26c5a06"). InnerVolumeSpecName "kube-api-access-mxnzl". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 10 18:52:13 crc kubenswrapper[4861]: I0310 18:52:13.511822 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/06c46a69-f46b-40ad-8ef8-0077f969d1f3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "06c46a69-f46b-40ad-8ef8-0077f969d1f3" (UID: "06c46a69-f46b-40ad-8ef8-0077f969d1f3"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 10 18:52:13 crc kubenswrapper[4861]: I0310 18:52:13.607731 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/74a39c36-1d01-4d26-a604-4c4739bf2e11-proxy-ca-bundles\") pod \"controller-manager-84994c9698-clqx6\" (UID: \"74a39c36-1d01-4d26-a604-4c4739bf2e11\") " pod="openshift-controller-manager/controller-manager-84994c9698-clqx6" Mar 10 18:52:13 crc kubenswrapper[4861]: I0310 18:52:13.608230 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/74a39c36-1d01-4d26-a604-4c4739bf2e11-client-ca\") pod \"controller-manager-84994c9698-clqx6\" (UID: \"74a39c36-1d01-4d26-a604-4c4739bf2e11\") " pod="openshift-controller-manager/controller-manager-84994c9698-clqx6" Mar 10 18:52:13 crc kubenswrapper[4861]: I0310 18:52:13.608425 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/74a39c36-1d01-4d26-a604-4c4739bf2e11-config\") pod \"controller-manager-84994c9698-clqx6\" (UID: \"74a39c36-1d01-4d26-a604-4c4739bf2e11\") " pod="openshift-controller-manager/controller-manager-84994c9698-clqx6" Mar 10 18:52:13 crc kubenswrapper[4861]: I0310 18:52:13.608802 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5s2x9\" (UniqueName: \"kubernetes.io/projected/74a39c36-1d01-4d26-a604-4c4739bf2e11-kube-api-access-5s2x9\") pod \"controller-manager-84994c9698-clqx6\" (UID: \"74a39c36-1d01-4d26-a604-4c4739bf2e11\") " pod="openshift-controller-manager/controller-manager-84994c9698-clqx6" Mar 10 18:52:13 crc kubenswrapper[4861]: I0310 18:52:13.608991 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/74a39c36-1d01-4d26-a604-4c4739bf2e11-serving-cert\") pod 
\"controller-manager-84994c9698-clqx6\" (UID: \"74a39c36-1d01-4d26-a604-4c4739bf2e11\") " pod="openshift-controller-manager/controller-manager-84994c9698-clqx6" Mar 10 18:52:13 crc kubenswrapper[4861]: I0310 18:52:13.609281 4861 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/568106eb-62be-48ec-80eb-ee69e26c5a06-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 10 18:52:13 crc kubenswrapper[4861]: I0310 18:52:13.609434 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk6f7\" (UniqueName: \"kubernetes.io/projected/06c46a69-f46b-40ad-8ef8-0077f969d1f3-kube-api-access-tk6f7\") on node \"crc\" DevicePath \"\"" Mar 10 18:52:13 crc kubenswrapper[4861]: I0310 18:52:13.609568 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mxnzl\" (UniqueName: \"kubernetes.io/projected/568106eb-62be-48ec-80eb-ee69e26c5a06-kube-api-access-mxnzl\") on node \"crc\" DevicePath \"\"" Mar 10 18:52:13 crc kubenswrapper[4861]: I0310 18:52:13.609692 4861 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/06c46a69-f46b-40ad-8ef8-0077f969d1f3-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 10 18:52:13 crc kubenswrapper[4861]: I0310 18:52:13.609892 4861 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/06c46a69-f46b-40ad-8ef8-0077f969d1f3-config\") on node \"crc\" DevicePath \"\"" Mar 10 18:52:13 crc kubenswrapper[4861]: I0310 18:52:13.609072 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/74a39c36-1d01-4d26-a604-4c4739bf2e11-proxy-ca-bundles\") pod \"controller-manager-84994c9698-clqx6\" (UID: \"74a39c36-1d01-4d26-a604-4c4739bf2e11\") " pod="openshift-controller-manager/controller-manager-84994c9698-clqx6" Mar 10 18:52:13 crc kubenswrapper[4861]: I0310 18:52:13.610317 4861 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/74a39c36-1d01-4d26-a604-4c4739bf2e11-config\") pod \"controller-manager-84994c9698-clqx6\" (UID: \"74a39c36-1d01-4d26-a604-4c4739bf2e11\") " pod="openshift-controller-manager/controller-manager-84994c9698-clqx6" Mar 10 18:52:13 crc kubenswrapper[4861]: I0310 18:52:13.611111 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/74a39c36-1d01-4d26-a604-4c4739bf2e11-client-ca\") pod \"controller-manager-84994c9698-clqx6\" (UID: \"74a39c36-1d01-4d26-a604-4c4739bf2e11\") " pod="openshift-controller-manager/controller-manager-84994c9698-clqx6" Mar 10 18:52:13 crc kubenswrapper[4861]: I0310 18:52:13.616620 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/74a39c36-1d01-4d26-a604-4c4739bf2e11-serving-cert\") pod \"controller-manager-84994c9698-clqx6\" (UID: \"74a39c36-1d01-4d26-a604-4c4739bf2e11\") " pod="openshift-controller-manager/controller-manager-84994c9698-clqx6" Mar 10 18:52:13 crc kubenswrapper[4861]: I0310 18:52:13.649263 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5s2x9\" (UniqueName: \"kubernetes.io/projected/74a39c36-1d01-4d26-a604-4c4739bf2e11-kube-api-access-5s2x9\") pod \"controller-manager-84994c9698-clqx6\" (UID: \"74a39c36-1d01-4d26-a604-4c4739bf2e11\") " pod="openshift-controller-manager/controller-manager-84994c9698-clqx6" Mar 10 18:52:13 crc kubenswrapper[4861]: I0310 18:52:13.730310 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-84994c9698-clqx6" Mar 10 18:52:14 crc kubenswrapper[4861]: I0310 18:52:14.087775 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29552812-4d5sp" event={"ID":"97fa138b-1e58-48b3-90ba-2ed750d3f4f1","Type":"ContainerStarted","Data":"bc8bd0331722f67dd0ce9823f7ab0f1cc913ad73695378483722763cc37000bd"} Mar 10 18:52:14 crc kubenswrapper[4861]: I0310 18:52:14.089940 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5f4b956bb9-4xs89" event={"ID":"06c46a69-f46b-40ad-8ef8-0077f969d1f3","Type":"ContainerDied","Data":"aa9bc07e3c54984f43cf6db4410d4299df52f3f7fc213ab2bd16334d8b52141d"} Mar 10 18:52:14 crc kubenswrapper[4861]: I0310 18:52:14.089982 4861 scope.go:117] "RemoveContainer" containerID="23e1e23410660d0e69a74ca3b48f8f9669d03eea7fadd33816c38deb95726ec2" Mar 10 18:52:14 crc kubenswrapper[4861]: I0310 18:52:14.090008 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5f4b956bb9-4xs89" Mar 10 18:52:14 crc kubenswrapper[4861]: I0310 18:52:14.091299 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"41593fdb-2327-42db-b334-4418927f0367","Type":"ContainerStarted","Data":"25363642c273f068985e79cca732b6a2ff955f416d404bd42bf08e53ae00f203"} Mar 10 18:52:14 crc kubenswrapper[4861]: I0310 18:52:14.093757 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-57895bf995-rsd89" event={"ID":"568106eb-62be-48ec-80eb-ee69e26c5a06","Type":"ContainerDied","Data":"ed43dfd37c2874a874c635e71fe1c6d813c32260adfbfad2dc9f13c6e5ec0ea9"} Mar 10 18:52:14 crc kubenswrapper[4861]: I0310 18:52:14.093878 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-57895bf995-rsd89" Mar 10 18:52:14 crc kubenswrapper[4861]: I0310 18:52:14.122369 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5f4b956bb9-4xs89"] Mar 10 18:52:14 crc kubenswrapper[4861]: I0310 18:52:14.129413 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5f4b956bb9-4xs89"] Mar 10 18:52:14 crc kubenswrapper[4861]: I0310 18:52:14.139615 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-57895bf995-rsd89"] Mar 10 18:52:14 crc kubenswrapper[4861]: I0310 18:52:14.142213 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-57895bf995-rsd89"] Mar 10 18:52:16 crc kubenswrapper[4861]: I0310 18:52:14.964162 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="06c46a69-f46b-40ad-8ef8-0077f969d1f3" path="/var/lib/kubelet/pods/06c46a69-f46b-40ad-8ef8-0077f969d1f3/volumes" Mar 10 18:52:16 crc kubenswrapper[4861]: I0310 18:52:14.964629 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="568106eb-62be-48ec-80eb-ee69e26c5a06" path="/var/lib/kubelet/pods/568106eb-62be-48ec-80eb-ee69e26c5a06/volumes" Mar 10 18:52:16 crc kubenswrapper[4861]: I0310 18:52:15.636211 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7cb99fc474-b95xm"] Mar 10 18:52:16 crc kubenswrapper[4861]: I0310 18:52:15.637217 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7cb99fc474-b95xm" Mar 10 18:52:16 crc kubenswrapper[4861]: I0310 18:52:15.639241 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Mar 10 18:52:16 crc kubenswrapper[4861]: I0310 18:52:15.639793 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Mar 10 18:52:16 crc kubenswrapper[4861]: I0310 18:52:15.640188 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Mar 10 18:52:16 crc kubenswrapper[4861]: I0310 18:52:15.640194 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Mar 10 18:52:16 crc kubenswrapper[4861]: I0310 18:52:15.640379 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Mar 10 18:52:16 crc kubenswrapper[4861]: I0310 18:52:15.640565 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Mar 10 18:52:16 crc kubenswrapper[4861]: I0310 18:52:15.645852 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7cb99fc474-b95xm"] Mar 10 18:52:16 crc kubenswrapper[4861]: I0310 18:52:15.742587 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6ebd5ee5-c5b5-4b05-971f-a6511fe1f51c-client-ca\") pod \"route-controller-manager-7cb99fc474-b95xm\" (UID: \"6ebd5ee5-c5b5-4b05-971f-a6511fe1f51c\") " pod="openshift-route-controller-manager/route-controller-manager-7cb99fc474-b95xm" Mar 10 18:52:16 crc kubenswrapper[4861]: I0310 18:52:15.742619 4861 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6ebd5ee5-c5b5-4b05-971f-a6511fe1f51c-config\") pod \"route-controller-manager-7cb99fc474-b95xm\" (UID: \"6ebd5ee5-c5b5-4b05-971f-a6511fe1f51c\") " pod="openshift-route-controller-manager/route-controller-manager-7cb99fc474-b95xm" Mar 10 18:52:16 crc kubenswrapper[4861]: I0310 18:52:15.742649 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9bqqp\" (UniqueName: \"kubernetes.io/projected/6ebd5ee5-c5b5-4b05-971f-a6511fe1f51c-kube-api-access-9bqqp\") pod \"route-controller-manager-7cb99fc474-b95xm\" (UID: \"6ebd5ee5-c5b5-4b05-971f-a6511fe1f51c\") " pod="openshift-route-controller-manager/route-controller-manager-7cb99fc474-b95xm" Mar 10 18:52:16 crc kubenswrapper[4861]: I0310 18:52:15.742668 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6ebd5ee5-c5b5-4b05-971f-a6511fe1f51c-serving-cert\") pod \"route-controller-manager-7cb99fc474-b95xm\" (UID: \"6ebd5ee5-c5b5-4b05-971f-a6511fe1f51c\") " pod="openshift-route-controller-manager/route-controller-manager-7cb99fc474-b95xm" Mar 10 18:52:16 crc kubenswrapper[4861]: I0310 18:52:15.843645 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6ebd5ee5-c5b5-4b05-971f-a6511fe1f51c-client-ca\") pod \"route-controller-manager-7cb99fc474-b95xm\" (UID: \"6ebd5ee5-c5b5-4b05-971f-a6511fe1f51c\") " pod="openshift-route-controller-manager/route-controller-manager-7cb99fc474-b95xm" Mar 10 18:52:16 crc kubenswrapper[4861]: I0310 18:52:15.843706 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6ebd5ee5-c5b5-4b05-971f-a6511fe1f51c-config\") pod \"route-controller-manager-7cb99fc474-b95xm\" 
(UID: \"6ebd5ee5-c5b5-4b05-971f-a6511fe1f51c\") " pod="openshift-route-controller-manager/route-controller-manager-7cb99fc474-b95xm" Mar 10 18:52:16 crc kubenswrapper[4861]: I0310 18:52:15.843848 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9bqqp\" (UniqueName: \"kubernetes.io/projected/6ebd5ee5-c5b5-4b05-971f-a6511fe1f51c-kube-api-access-9bqqp\") pod \"route-controller-manager-7cb99fc474-b95xm\" (UID: \"6ebd5ee5-c5b5-4b05-971f-a6511fe1f51c\") " pod="openshift-route-controller-manager/route-controller-manager-7cb99fc474-b95xm" Mar 10 18:52:16 crc kubenswrapper[4861]: I0310 18:52:15.843887 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6ebd5ee5-c5b5-4b05-971f-a6511fe1f51c-serving-cert\") pod \"route-controller-manager-7cb99fc474-b95xm\" (UID: \"6ebd5ee5-c5b5-4b05-971f-a6511fe1f51c\") " pod="openshift-route-controller-manager/route-controller-manager-7cb99fc474-b95xm" Mar 10 18:52:16 crc kubenswrapper[4861]: I0310 18:52:15.845051 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6ebd5ee5-c5b5-4b05-971f-a6511fe1f51c-config\") pod \"route-controller-manager-7cb99fc474-b95xm\" (UID: \"6ebd5ee5-c5b5-4b05-971f-a6511fe1f51c\") " pod="openshift-route-controller-manager/route-controller-manager-7cb99fc474-b95xm" Mar 10 18:52:16 crc kubenswrapper[4861]: I0310 18:52:15.845146 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6ebd5ee5-c5b5-4b05-971f-a6511fe1f51c-client-ca\") pod \"route-controller-manager-7cb99fc474-b95xm\" (UID: \"6ebd5ee5-c5b5-4b05-971f-a6511fe1f51c\") " pod="openshift-route-controller-manager/route-controller-manager-7cb99fc474-b95xm" Mar 10 18:52:16 crc kubenswrapper[4861]: I0310 18:52:15.861759 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"serving-cert\" (UniqueName: \"kubernetes.io/secret/6ebd5ee5-c5b5-4b05-971f-a6511fe1f51c-serving-cert\") pod \"route-controller-manager-7cb99fc474-b95xm\" (UID: \"6ebd5ee5-c5b5-4b05-971f-a6511fe1f51c\") " pod="openshift-route-controller-manager/route-controller-manager-7cb99fc474-b95xm" Mar 10 18:52:16 crc kubenswrapper[4861]: I0310 18:52:15.862569 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9bqqp\" (UniqueName: \"kubernetes.io/projected/6ebd5ee5-c5b5-4b05-971f-a6511fe1f51c-kube-api-access-9bqqp\") pod \"route-controller-manager-7cb99fc474-b95xm\" (UID: \"6ebd5ee5-c5b5-4b05-971f-a6511fe1f51c\") " pod="openshift-route-controller-manager/route-controller-manager-7cb99fc474-b95xm" Mar 10 18:52:16 crc kubenswrapper[4861]: I0310 18:52:15.955389 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7cb99fc474-b95xm" Mar 10 18:52:17 crc kubenswrapper[4861]: I0310 18:52:17.779773 4861 ???:1] "http: TLS handshake error from 192.168.126.11:51608: no serving certificate available for the kubelet" Mar 10 18:52:21 crc kubenswrapper[4861]: E0310 18:52:21.576810 4861 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Mar 10 18:52:21 crc kubenswrapper[4861]: E0310 18:52:21.577038 4861 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5fcmd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-q2hgk_openshift-marketplace(876b4458-98a1-4dc2-af8a-3390a56cad59): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Mar 10 18:52:21 crc kubenswrapper[4861]: E0310 18:52:21.578347 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-q2hgk" podUID="876b4458-98a1-4dc2-af8a-3390a56cad59" Mar 10 18:52:21 crc 
kubenswrapper[4861]: I0310 18:52:21.666904 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-9x9tv" Mar 10 18:52:21 crc kubenswrapper[4861]: I0310 18:52:21.991804 4861 patch_prober.go:28] interesting pod/machine-config-daemon-qttbr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 10 18:52:21 crc kubenswrapper[4861]: I0310 18:52:21.992202 4861 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qttbr" podUID="771189c2-452d-4204-a0b7-abfe9ba62bd0" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 10 18:52:23 crc kubenswrapper[4861]: I0310 18:52:23.752579 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Mar 10 18:52:23 crc kubenswrapper[4861]: I0310 18:52:23.753945 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Mar 10 18:52:23 crc kubenswrapper[4861]: I0310 18:52:23.785911 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Mar 10 18:52:23 crc kubenswrapper[4861]: I0310 18:52:23.870988 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/96ae9808-899d-4113-8fed-20e33cbe49ec-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"96ae9808-899d-4113-8fed-20e33cbe49ec\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Mar 10 18:52:23 crc kubenswrapper[4861]: I0310 18:52:23.871104 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/96ae9808-899d-4113-8fed-20e33cbe49ec-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"96ae9808-899d-4113-8fed-20e33cbe49ec\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Mar 10 18:52:23 crc kubenswrapper[4861]: I0310 18:52:23.973258 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/96ae9808-899d-4113-8fed-20e33cbe49ec-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"96ae9808-899d-4113-8fed-20e33cbe49ec\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Mar 10 18:52:23 crc kubenswrapper[4861]: I0310 18:52:23.973418 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/96ae9808-899d-4113-8fed-20e33cbe49ec-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"96ae9808-899d-4113-8fed-20e33cbe49ec\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Mar 10 18:52:23 crc kubenswrapper[4861]: I0310 18:52:23.973642 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" 
(UniqueName: \"kubernetes.io/host-path/96ae9808-899d-4113-8fed-20e33cbe49ec-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"96ae9808-899d-4113-8fed-20e33cbe49ec\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Mar 10 18:52:24 crc kubenswrapper[4861]: I0310 18:52:24.007761 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/96ae9808-899d-4113-8fed-20e33cbe49ec-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"96ae9808-899d-4113-8fed-20e33cbe49ec\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Mar 10 18:52:24 crc kubenswrapper[4861]: I0310 18:52:24.093782 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Mar 10 18:52:24 crc kubenswrapper[4861]: E0310 18:52:24.817478 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-q2hgk" podUID="876b4458-98a1-4dc2-af8a-3390a56cad59" Mar 10 18:52:24 crc kubenswrapper[4861]: I0310 18:52:24.927577 4861 scope.go:117] "RemoveContainer" containerID="c027cb1cd56f872efdee5c913044d455f2b69d9849461e4dfe30073162bf3f25" Mar 10 18:52:26 crc kubenswrapper[4861]: I0310 18:52:26.068760 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-84994c9698-clqx6"] Mar 10 18:52:26 crc kubenswrapper[4861]: W0310 18:52:26.101778 4861 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod74a39c36_1d01_4d26_a604_4c4739bf2e11.slice/crio-bc7d1c027878dbf151467bc22254e792c38354b02470f801cc8fa1343da524ca WatchSource:0}: Error finding container bc7d1c027878dbf151467bc22254e792c38354b02470f801cc8fa1343da524ca: Status 404 returned error 
can't find the container with id bc7d1c027878dbf151467bc22254e792c38354b02470f801cc8fa1343da524ca Mar 10 18:52:26 crc kubenswrapper[4861]: I0310 18:52:26.185460 4861 generic.go:334] "Generic (PLEG): container finished" podID="dad24f7d-329a-449e-bd62-372dfa9f838b" containerID="79464397500f19c98b660079eff87202ff7bdd2a038c15d907b44b3bddcb9b49" exitCode=0 Mar 10 18:52:26 crc kubenswrapper[4861]: I0310 18:52:26.185965 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-57xpw" event={"ID":"dad24f7d-329a-449e-bd62-372dfa9f838b","Type":"ContainerDied","Data":"79464397500f19c98b660079eff87202ff7bdd2a038c15d907b44b3bddcb9b49"} Mar 10 18:52:26 crc kubenswrapper[4861]: I0310 18:52:26.212948 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-84994c9698-clqx6" event={"ID":"74a39c36-1d01-4d26-a604-4c4739bf2e11","Type":"ContainerStarted","Data":"bc7d1c027878dbf151467bc22254e792c38354b02470f801cc8fa1343da524ca"} Mar 10 18:52:26 crc kubenswrapper[4861]: I0310 18:52:26.274825 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7cb99fc474-b95xm"] Mar 10 18:52:26 crc kubenswrapper[4861]: I0310 18:52:26.294724 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Mar 10 18:52:26 crc kubenswrapper[4861]: I0310 18:52:26.870438 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-84994c9698-clqx6"] Mar 10 18:52:26 crc kubenswrapper[4861]: I0310 18:52:26.980189 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7cb99fc474-b95xm"] Mar 10 18:52:27 crc kubenswrapper[4861]: I0310 18:52:27.169581 4861 csr.go:261] certificate signing request csr-kxtlw is approved, waiting to be issued Mar 10 18:52:27 crc kubenswrapper[4861]: I0310 18:52:27.175441 4861 
csr.go:257] certificate signing request csr-kxtlw is issued Mar 10 18:52:27 crc kubenswrapper[4861]: I0310 18:52:27.223460 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-84994c9698-clqx6" event={"ID":"74a39c36-1d01-4d26-a604-4c4739bf2e11","Type":"ContainerStarted","Data":"f25466a324ef5b88e6a5e4a96af724019aa7f136938aa3bf11430dfbcf5c3108"} Mar 10 18:52:27 crc kubenswrapper[4861]: I0310 18:52:27.225368 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-84994c9698-clqx6" Mar 10 18:52:27 crc kubenswrapper[4861]: I0310 18:52:27.231318 4861 generic.go:334] "Generic (PLEG): container finished" podID="1e1c83ef-91ae-4931-8e31-32890189bb47" containerID="31bab73fd7f08e1e1b433ea5699ceaef28f723caa8436e845e3193fb5750f65e" exitCode=0 Mar 10 18:52:27 crc kubenswrapper[4861]: I0310 18:52:27.231413 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-646bt" event={"ID":"1e1c83ef-91ae-4931-8e31-32890189bb47","Type":"ContainerDied","Data":"31bab73fd7f08e1e1b433ea5699ceaef28f723caa8436e845e3193fb5750f65e"} Mar 10 18:52:27 crc kubenswrapper[4861]: I0310 18:52:27.231558 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-84994c9698-clqx6" Mar 10 18:52:27 crc kubenswrapper[4861]: I0310 18:52:27.238121 4861 generic.go:334] "Generic (PLEG): container finished" podID="3db43f6e-38a4-4f5c-bb4b-ddac9e664528" containerID="21ac9af81e4fcbfe3ad2d08d16a0191b26f7711c1ef5e01fd0a732ee7c43415d" exitCode=0 Mar 10 18:52:27 crc kubenswrapper[4861]: I0310 18:52:27.238199 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rssg7" event={"ID":"3db43f6e-38a4-4f5c-bb4b-ddac9e664528","Type":"ContainerDied","Data":"21ac9af81e4fcbfe3ad2d08d16a0191b26f7711c1ef5e01fd0a732ee7c43415d"} Mar 10 18:52:27 crc 
kubenswrapper[4861]: I0310 18:52:27.244017 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"96ae9808-899d-4113-8fed-20e33cbe49ec","Type":"ContainerStarted","Data":"b4160f38e6404e575f42c0c8247446105090660f2d65ac6ae3b49125032c5af0"} Mar 10 18:52:27 crc kubenswrapper[4861]: I0310 18:52:27.244073 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"96ae9808-899d-4113-8fed-20e33cbe49ec","Type":"ContainerStarted","Data":"da90d7fd2b3d86af7e83ceb0133da8c600a8126870b84c0a4104a14b45206d9f"} Mar 10 18:52:27 crc kubenswrapper[4861]: I0310 18:52:27.245055 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-84994c9698-clqx6" podStartSLOduration=21.245037552 podStartE2EDuration="21.245037552s" podCreationTimestamp="2026-03-10 18:52:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-10 18:52:27.24465862 +0000 UTC m=+291.008094590" watchObservedRunningTime="2026-03-10 18:52:27.245037552 +0000 UTC m=+291.008473512" Mar 10 18:52:27 crc kubenswrapper[4861]: I0310 18:52:27.256523 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-57xpw" event={"ID":"dad24f7d-329a-449e-bd62-372dfa9f838b","Type":"ContainerStarted","Data":"01c6ca795eaa1a0eda9d3799e5b1cea85e29974f7b7d343bcef23175adb5502f"} Mar 10 18:52:27 crc kubenswrapper[4861]: I0310 18:52:27.261561 4861 generic.go:334] "Generic (PLEG): container finished" podID="edc16f3e-454b-4167-9d26-c50bba23281e" containerID="4385ded0c7aaf97a76f7c00bbf95f00a0760df7cfe521c4a035b891175fc9a04" exitCode=0 Mar 10 18:52:27 crc kubenswrapper[4861]: I0310 18:52:27.261687 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pmtqc" 
event={"ID":"edc16f3e-454b-4167-9d26-c50bba23281e","Type":"ContainerDied","Data":"4385ded0c7aaf97a76f7c00bbf95f00a0760df7cfe521c4a035b891175fc9a04"} Mar 10 18:52:27 crc kubenswrapper[4861]: I0310 18:52:27.268945 4861 generic.go:334] "Generic (PLEG): container finished" podID="97fa138b-1e58-48b3-90ba-2ed750d3f4f1" containerID="9cf9e0fb9edc51e0bdf3b5c5aa984d6fd2c69729cd604dbbf10bb1f52a107c9d" exitCode=0 Mar 10 18:52:27 crc kubenswrapper[4861]: I0310 18:52:27.269047 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29552812-4d5sp" event={"ID":"97fa138b-1e58-48b3-90ba-2ed750d3f4f1","Type":"ContainerDied","Data":"9cf9e0fb9edc51e0bdf3b5c5aa984d6fd2c69729cd604dbbf10bb1f52a107c9d"} Mar 10 18:52:27 crc kubenswrapper[4861]: I0310 18:52:27.270505 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7cb99fc474-b95xm" event={"ID":"6ebd5ee5-c5b5-4b05-971f-a6511fe1f51c","Type":"ContainerStarted","Data":"5d26cda45b487bff039146b3268d4821153047467653f55853084422192804e1"} Mar 10 18:52:27 crc kubenswrapper[4861]: I0310 18:52:27.270535 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7cb99fc474-b95xm" event={"ID":"6ebd5ee5-c5b5-4b05-971f-a6511fe1f51c","Type":"ContainerStarted","Data":"9d06fc5419a5eee0a208745121e1d043a904661d43af21c6a64c0bdcdd8a8b5f"} Mar 10 18:52:27 crc kubenswrapper[4861]: I0310 18:52:27.271179 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-7cb99fc474-b95xm" Mar 10 18:52:27 crc kubenswrapper[4861]: I0310 18:52:27.274999 4861 generic.go:334] "Generic (PLEG): container finished" podID="41593fdb-2327-42db-b334-4418927f0367" containerID="55499bf54f5176526dd1dbfe19fe01728fa3f50519296e7bba93ac879d1f3956" exitCode=0 Mar 10 18:52:27 crc kubenswrapper[4861]: I0310 18:52:27.275217 4861 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"41593fdb-2327-42db-b334-4418927f0367","Type":"ContainerDied","Data":"55499bf54f5176526dd1dbfe19fe01728fa3f50519296e7bba93ac879d1f3956"} Mar 10 18:52:27 crc kubenswrapper[4861]: I0310 18:52:27.277672 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-7cb99fc474-b95xm" Mar 10 18:52:27 crc kubenswrapper[4861]: I0310 18:52:27.278913 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-9-crc" podStartSLOduration=4.278892325 podStartE2EDuration="4.278892325s" podCreationTimestamp="2026-03-10 18:52:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-10 18:52:27.277976007 +0000 UTC m=+291.041411977" watchObservedRunningTime="2026-03-10 18:52:27.278892325 +0000 UTC m=+291.042328285" Mar 10 18:52:27 crc kubenswrapper[4861]: I0310 18:52:27.283921 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29552810-wv88x" event={"ID":"19e950a0-6f71-44af-b995-f1ef1be6edbb","Type":"ContainerStarted","Data":"064d1edf204dd6d1ed8985be957be1f27e42f06d7820fae56f6ca7bd9b80c3e2"} Mar 10 18:52:27 crc kubenswrapper[4861]: I0310 18:52:27.337615 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29552810-wv88x" podStartSLOduration=104.705437154 podStartE2EDuration="2m27.337595776s" podCreationTimestamp="2026-03-10 18:50:00 +0000 UTC" firstStartedPulling="2026-03-10 18:51:43.438136707 +0000 UTC m=+247.201572667" lastFinishedPulling="2026-03-10 18:52:26.070295329 +0000 UTC m=+289.833731289" observedRunningTime="2026-03-10 18:52:27.334920958 +0000 UTC m=+291.098356918" watchObservedRunningTime="2026-03-10 18:52:27.337595776 +0000 UTC m=+291.101031736" Mar 10 18:52:27 
crc kubenswrapper[4861]: I0310 18:52:27.349267 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-7cb99fc474-b95xm" podStartSLOduration=21.349253698 podStartE2EDuration="21.349253698s" podCreationTimestamp="2026-03-10 18:52:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-10 18:52:27.347161486 +0000 UTC m=+291.110597446" watchObservedRunningTime="2026-03-10 18:52:27.349253698 +0000 UTC m=+291.112689658" Mar 10 18:52:28 crc kubenswrapper[4861]: I0310 18:52:28.177245 4861 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2027-02-24 05:54:36 +0000 UTC, rotation deadline is 2027-01-18 00:58:27.409599519 +0000 UTC Mar 10 18:52:28 crc kubenswrapper[4861]: I0310 18:52:28.177570 4861 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 7518h5m59.232032005s for next certificate rotation Mar 10 18:52:28 crc kubenswrapper[4861]: I0310 18:52:28.292800 4861 generic.go:334] "Generic (PLEG): container finished" podID="96ae9808-899d-4113-8fed-20e33cbe49ec" containerID="b4160f38e6404e575f42c0c8247446105090660f2d65ac6ae3b49125032c5af0" exitCode=0 Mar 10 18:52:28 crc kubenswrapper[4861]: I0310 18:52:28.292912 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"96ae9808-899d-4113-8fed-20e33cbe49ec","Type":"ContainerDied","Data":"b4160f38e6404e575f42c0c8247446105090660f2d65ac6ae3b49125032c5af0"} Mar 10 18:52:28 crc kubenswrapper[4861]: I0310 18:52:28.296050 4861 generic.go:334] "Generic (PLEG): container finished" podID="dad24f7d-329a-449e-bd62-372dfa9f838b" containerID="01c6ca795eaa1a0eda9d3799e5b1cea85e29974f7b7d343bcef23175adb5502f" exitCode=0 Mar 10 18:52:28 crc kubenswrapper[4861]: I0310 18:52:28.296123 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-57xpw" event={"ID":"dad24f7d-329a-449e-bd62-372dfa9f838b","Type":"ContainerDied","Data":"01c6ca795eaa1a0eda9d3799e5b1cea85e29974f7b7d343bcef23175adb5502f"} Mar 10 18:52:28 crc kubenswrapper[4861]: I0310 18:52:28.298074 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pmtqc" event={"ID":"edc16f3e-454b-4167-9d26-c50bba23281e","Type":"ContainerStarted","Data":"8228718be0002a5ecc4ba8dbaec34283f30812431d34664a126be79d821b5589"} Mar 10 18:52:28 crc kubenswrapper[4861]: I0310 18:52:28.298996 4861 generic.go:334] "Generic (PLEG): container finished" podID="19e950a0-6f71-44af-b995-f1ef1be6edbb" containerID="064d1edf204dd6d1ed8985be957be1f27e42f06d7820fae56f6ca7bd9b80c3e2" exitCode=0 Mar 10 18:52:28 crc kubenswrapper[4861]: I0310 18:52:28.300399 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29552810-wv88x" event={"ID":"19e950a0-6f71-44af-b995-f1ef1be6edbb","Type":"ContainerDied","Data":"064d1edf204dd6d1ed8985be957be1f27e42f06d7820fae56f6ca7bd9b80c3e2"} Mar 10 18:52:28 crc kubenswrapper[4861]: I0310 18:52:28.302875 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-646bt" event={"ID":"1e1c83ef-91ae-4931-8e31-32890189bb47","Type":"ContainerStarted","Data":"4e12b40346a7176f84ae4be91f0b485eaab5f60de20461b9e51f58cd79f06d05"} Mar 10 18:52:28 crc kubenswrapper[4861]: I0310 18:52:28.304171 4861 generic.go:334] "Generic (PLEG): container finished" podID="832146f2-ed86-4794-a676-13d3df8679ad" containerID="887e9286d5288336ace6a87743bb54f4b258b2d0a92ad37c2df4469fe6886f23" exitCode=0 Mar 10 18:52:28 crc kubenswrapper[4861]: I0310 18:52:28.304222 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-65ndj" 
event={"ID":"832146f2-ed86-4794-a676-13d3df8679ad","Type":"ContainerDied","Data":"887e9286d5288336ace6a87743bb54f4b258b2d0a92ad37c2df4469fe6886f23"} Mar 10 18:52:28 crc kubenswrapper[4861]: I0310 18:52:28.312775 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rssg7" event={"ID":"3db43f6e-38a4-4f5c-bb4b-ddac9e664528","Type":"ContainerStarted","Data":"c52c3de48252a9f9444d7de9d4972d31c457e491df77d8561ee93fec0b51bb97"} Mar 10 18:52:28 crc kubenswrapper[4861]: I0310 18:52:28.312788 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-84994c9698-clqx6" podUID="74a39c36-1d01-4d26-a604-4c4739bf2e11" containerName="controller-manager" containerID="cri-o://f25466a324ef5b88e6a5e4a96af724019aa7f136938aa3bf11430dfbcf5c3108" gracePeriod=30 Mar 10 18:52:28 crc kubenswrapper[4861]: I0310 18:52:28.313118 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-7cb99fc474-b95xm" podUID="6ebd5ee5-c5b5-4b05-971f-a6511fe1f51c" containerName="route-controller-manager" containerID="cri-o://5d26cda45b487bff039146b3268d4821153047467653f55853084422192804e1" gracePeriod=30 Mar 10 18:52:28 crc kubenswrapper[4861]: I0310 18:52:28.367514 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-646bt" podStartSLOduration=9.41682564 podStartE2EDuration="39.367497942s" podCreationTimestamp="2026-03-10 18:51:49 +0000 UTC" firstStartedPulling="2026-03-10 18:51:57.742455008 +0000 UTC m=+261.505890968" lastFinishedPulling="2026-03-10 18:52:27.69312731 +0000 UTC m=+291.456563270" observedRunningTime="2026-03-10 18:52:28.365738951 +0000 UTC m=+292.129174931" watchObservedRunningTime="2026-03-10 18:52:28.367497942 +0000 UTC m=+292.130933892" Mar 10 18:52:28 crc kubenswrapper[4861]: I0310 18:52:28.435577 4861 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-pmtqc" podStartSLOduration=2.38270597 podStartE2EDuration="41.435556078s" podCreationTimestamp="2026-03-10 18:51:47 +0000 UTC" firstStartedPulling="2026-03-10 18:51:48.745067404 +0000 UTC m=+252.508503364" lastFinishedPulling="2026-03-10 18:52:27.797917512 +0000 UTC m=+291.561353472" observedRunningTime="2026-03-10 18:52:28.434350942 +0000 UTC m=+292.197786902" watchObservedRunningTime="2026-03-10 18:52:28.435556078 +0000 UTC m=+292.198992038" Mar 10 18:52:28 crc kubenswrapper[4861]: I0310 18:52:28.466838 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-rssg7" podStartSLOduration=8.5484622 podStartE2EDuration="38.466816374s" podCreationTimestamp="2026-03-10 18:51:50 +0000 UTC" firstStartedPulling="2026-03-10 18:51:57.742465328 +0000 UTC m=+261.505901288" lastFinishedPulling="2026-03-10 18:52:27.660819502 +0000 UTC m=+291.424255462" observedRunningTime="2026-03-10 18:52:28.46566737 +0000 UTC m=+292.229103340" watchObservedRunningTime="2026-03-10 18:52:28.466816374 +0000 UTC m=+292.230252334" Mar 10 18:52:28 crc kubenswrapper[4861]: I0310 18:52:28.530700 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Mar 10 18:52:28 crc kubenswrapper[4861]: I0310 18:52:28.531362 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Mar 10 18:52:28 crc kubenswrapper[4861]: I0310 18:52:28.548347 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Mar 10 18:52:28 crc kubenswrapper[4861]: I0310 18:52:28.662795 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/bcc3a2e0-9115-453c-8af2-6001abd4012f-kubelet-dir\") pod \"installer-9-crc\" (UID: \"bcc3a2e0-9115-453c-8af2-6001abd4012f\") " pod="openshift-kube-apiserver/installer-9-crc" Mar 10 18:52:28 crc kubenswrapper[4861]: I0310 18:52:28.663099 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/bcc3a2e0-9115-453c-8af2-6001abd4012f-var-lock\") pod \"installer-9-crc\" (UID: \"bcc3a2e0-9115-453c-8af2-6001abd4012f\") " pod="openshift-kube-apiserver/installer-9-crc" Mar 10 18:52:28 crc kubenswrapper[4861]: I0310 18:52:28.663551 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/bcc3a2e0-9115-453c-8af2-6001abd4012f-kube-api-access\") pod \"installer-9-crc\" (UID: \"bcc3a2e0-9115-453c-8af2-6001abd4012f\") " pod="openshift-kube-apiserver/installer-9-crc" Mar 10 18:52:28 crc kubenswrapper[4861]: I0310 18:52:28.674029 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29552812-4d5sp" Mar 10 18:52:28 crc kubenswrapper[4861]: I0310 18:52:28.767070 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-82c9h\" (UniqueName: \"kubernetes.io/projected/97fa138b-1e58-48b3-90ba-2ed750d3f4f1-kube-api-access-82c9h\") pod \"97fa138b-1e58-48b3-90ba-2ed750d3f4f1\" (UID: \"97fa138b-1e58-48b3-90ba-2ed750d3f4f1\") " Mar 10 18:52:28 crc kubenswrapper[4861]: I0310 18:52:28.767531 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/bcc3a2e0-9115-453c-8af2-6001abd4012f-kubelet-dir\") pod \"installer-9-crc\" (UID: \"bcc3a2e0-9115-453c-8af2-6001abd4012f\") " pod="openshift-kube-apiserver/installer-9-crc" Mar 10 18:52:28 crc kubenswrapper[4861]: I0310 18:52:28.767588 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/bcc3a2e0-9115-453c-8af2-6001abd4012f-var-lock\") pod \"installer-9-crc\" (UID: \"bcc3a2e0-9115-453c-8af2-6001abd4012f\") " pod="openshift-kube-apiserver/installer-9-crc" Mar 10 18:52:28 crc kubenswrapper[4861]: I0310 18:52:28.767681 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/bcc3a2e0-9115-453c-8af2-6001abd4012f-kube-api-access\") pod \"installer-9-crc\" (UID: \"bcc3a2e0-9115-453c-8af2-6001abd4012f\") " pod="openshift-kube-apiserver/installer-9-crc" Mar 10 18:52:28 crc kubenswrapper[4861]: I0310 18:52:28.767952 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/bcc3a2e0-9115-453c-8af2-6001abd4012f-kubelet-dir\") pod \"installer-9-crc\" (UID: \"bcc3a2e0-9115-453c-8af2-6001abd4012f\") " pod="openshift-kube-apiserver/installer-9-crc" Mar 10 18:52:28 crc kubenswrapper[4861]: I0310 18:52:28.767996 4861 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/bcc3a2e0-9115-453c-8af2-6001abd4012f-var-lock\") pod \"installer-9-crc\" (UID: \"bcc3a2e0-9115-453c-8af2-6001abd4012f\") " pod="openshift-kube-apiserver/installer-9-crc" Mar 10 18:52:28 crc kubenswrapper[4861]: I0310 18:52:28.778073 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/97fa138b-1e58-48b3-90ba-2ed750d3f4f1-kube-api-access-82c9h" (OuterVolumeSpecName: "kube-api-access-82c9h") pod "97fa138b-1e58-48b3-90ba-2ed750d3f4f1" (UID: "97fa138b-1e58-48b3-90ba-2ed750d3f4f1"). InnerVolumeSpecName "kube-api-access-82c9h". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 10 18:52:28 crc kubenswrapper[4861]: I0310 18:52:28.779264 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Mar 10 18:52:28 crc kubenswrapper[4861]: I0310 18:52:28.787353 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/bcc3a2e0-9115-453c-8af2-6001abd4012f-kube-api-access\") pod \"installer-9-crc\" (UID: \"bcc3a2e0-9115-453c-8af2-6001abd4012f\") " pod="openshift-kube-apiserver/installer-9-crc" Mar 10 18:52:28 crc kubenswrapper[4861]: I0310 18:52:28.836075 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-84994c9698-clqx6" Mar 10 18:52:28 crc kubenswrapper[4861]: I0310 18:52:28.850255 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7cb99fc474-b95xm" Mar 10 18:52:28 crc kubenswrapper[4861]: I0310 18:52:28.862950 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Mar 10 18:52:28 crc kubenswrapper[4861]: I0310 18:52:28.868124 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/41593fdb-2327-42db-b334-4418927f0367-kubelet-dir\") pod \"41593fdb-2327-42db-b334-4418927f0367\" (UID: \"41593fdb-2327-42db-b334-4418927f0367\") " Mar 10 18:52:28 crc kubenswrapper[4861]: I0310 18:52:28.868169 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/41593fdb-2327-42db-b334-4418927f0367-kube-api-access\") pod \"41593fdb-2327-42db-b334-4418927f0367\" (UID: \"41593fdb-2327-42db-b334-4418927f0367\") " Mar 10 18:52:28 crc kubenswrapper[4861]: I0310 18:52:28.868231 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/41593fdb-2327-42db-b334-4418927f0367-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "41593fdb-2327-42db-b334-4418927f0367" (UID: "41593fdb-2327-42db-b334-4418927f0367"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 10 18:52:28 crc kubenswrapper[4861]: I0310 18:52:28.868457 4861 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/41593fdb-2327-42db-b334-4418927f0367-kubelet-dir\") on node \"crc\" DevicePath \"\"" Mar 10 18:52:28 crc kubenswrapper[4861]: I0310 18:52:28.868474 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-82c9h\" (UniqueName: \"kubernetes.io/projected/97fa138b-1e58-48b3-90ba-2ed750d3f4f1-kube-api-access-82c9h\") on node \"crc\" DevicePath \"\"" Mar 10 18:52:28 crc kubenswrapper[4861]: I0310 18:52:28.880096 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/41593fdb-2327-42db-b334-4418927f0367-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "41593fdb-2327-42db-b334-4418927f0367" (UID: "41593fdb-2327-42db-b334-4418927f0367"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 10 18:52:28 crc kubenswrapper[4861]: I0310 18:52:28.971272 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6ebd5ee5-c5b5-4b05-971f-a6511fe1f51c-client-ca\") pod \"6ebd5ee5-c5b5-4b05-971f-a6511fe1f51c\" (UID: \"6ebd5ee5-c5b5-4b05-971f-a6511fe1f51c\") " Mar 10 18:52:28 crc kubenswrapper[4861]: I0310 18:52:28.971344 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9bqqp\" (UniqueName: \"kubernetes.io/projected/6ebd5ee5-c5b5-4b05-971f-a6511fe1f51c-kube-api-access-9bqqp\") pod \"6ebd5ee5-c5b5-4b05-971f-a6511fe1f51c\" (UID: \"6ebd5ee5-c5b5-4b05-971f-a6511fe1f51c\") " Mar 10 18:52:28 crc kubenswrapper[4861]: I0310 18:52:28.971367 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: 
\"kubernetes.io/configmap/74a39c36-1d01-4d26-a604-4c4739bf2e11-proxy-ca-bundles\") pod \"74a39c36-1d01-4d26-a604-4c4739bf2e11\" (UID: \"74a39c36-1d01-4d26-a604-4c4739bf2e11\") " Mar 10 18:52:28 crc kubenswrapper[4861]: I0310 18:52:28.971401 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6ebd5ee5-c5b5-4b05-971f-a6511fe1f51c-serving-cert\") pod \"6ebd5ee5-c5b5-4b05-971f-a6511fe1f51c\" (UID: \"6ebd5ee5-c5b5-4b05-971f-a6511fe1f51c\") " Mar 10 18:52:28 crc kubenswrapper[4861]: I0310 18:52:28.971460 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5s2x9\" (UniqueName: \"kubernetes.io/projected/74a39c36-1d01-4d26-a604-4c4739bf2e11-kube-api-access-5s2x9\") pod \"74a39c36-1d01-4d26-a604-4c4739bf2e11\" (UID: \"74a39c36-1d01-4d26-a604-4c4739bf2e11\") " Mar 10 18:52:28 crc kubenswrapper[4861]: I0310 18:52:28.972037 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/74a39c36-1d01-4d26-a604-4c4739bf2e11-serving-cert\") pod \"74a39c36-1d01-4d26-a604-4c4739bf2e11\" (UID: \"74a39c36-1d01-4d26-a604-4c4739bf2e11\") " Mar 10 18:52:28 crc kubenswrapper[4861]: I0310 18:52:28.972059 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/74a39c36-1d01-4d26-a604-4c4739bf2e11-client-ca\") pod \"74a39c36-1d01-4d26-a604-4c4739bf2e11\" (UID: \"74a39c36-1d01-4d26-a604-4c4739bf2e11\") " Mar 10 18:52:28 crc kubenswrapper[4861]: I0310 18:52:28.972096 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6ebd5ee5-c5b5-4b05-971f-a6511fe1f51c-config\") pod \"6ebd5ee5-c5b5-4b05-971f-a6511fe1f51c\" (UID: \"6ebd5ee5-c5b5-4b05-971f-a6511fe1f51c\") " Mar 10 18:52:28 crc kubenswrapper[4861]: I0310 18:52:28.972117 4861 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/74a39c36-1d01-4d26-a604-4c4739bf2e11-config\") pod \"74a39c36-1d01-4d26-a604-4c4739bf2e11\" (UID: \"74a39c36-1d01-4d26-a604-4c4739bf2e11\") " Mar 10 18:52:28 crc kubenswrapper[4861]: I0310 18:52:28.972302 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/74a39c36-1d01-4d26-a604-4c4739bf2e11-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "74a39c36-1d01-4d26-a604-4c4739bf2e11" (UID: "74a39c36-1d01-4d26-a604-4c4739bf2e11"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 10 18:52:28 crc kubenswrapper[4861]: I0310 18:52:28.972360 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/41593fdb-2327-42db-b334-4418927f0367-kube-api-access\") on node \"crc\" DevicePath \"\"" Mar 10 18:52:28 crc kubenswrapper[4861]: I0310 18:52:28.972776 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/74a39c36-1d01-4d26-a604-4c4739bf2e11-client-ca" (OuterVolumeSpecName: "client-ca") pod "74a39c36-1d01-4d26-a604-4c4739bf2e11" (UID: "74a39c36-1d01-4d26-a604-4c4739bf2e11"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 10 18:52:28 crc kubenswrapper[4861]: I0310 18:52:28.972828 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/74a39c36-1d01-4d26-a604-4c4739bf2e11-config" (OuterVolumeSpecName: "config") pod "74a39c36-1d01-4d26-a604-4c4739bf2e11" (UID: "74a39c36-1d01-4d26-a604-4c4739bf2e11"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 10 18:52:28 crc kubenswrapper[4861]: I0310 18:52:28.973074 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ebd5ee5-c5b5-4b05-971f-a6511fe1f51c-client-ca" (OuterVolumeSpecName: "client-ca") pod "6ebd5ee5-c5b5-4b05-971f-a6511fe1f51c" (UID: "6ebd5ee5-c5b5-4b05-971f-a6511fe1f51c"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 10 18:52:28 crc kubenswrapper[4861]: I0310 18:52:28.973192 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ebd5ee5-c5b5-4b05-971f-a6511fe1f51c-config" (OuterVolumeSpecName: "config") pod "6ebd5ee5-c5b5-4b05-971f-a6511fe1f51c" (UID: "6ebd5ee5-c5b5-4b05-971f-a6511fe1f51c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 10 18:52:28 crc kubenswrapper[4861]: I0310 18:52:28.980874 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ebd5ee5-c5b5-4b05-971f-a6511fe1f51c-kube-api-access-9bqqp" (OuterVolumeSpecName: "kube-api-access-9bqqp") pod "6ebd5ee5-c5b5-4b05-971f-a6511fe1f51c" (UID: "6ebd5ee5-c5b5-4b05-971f-a6511fe1f51c"). InnerVolumeSpecName "kube-api-access-9bqqp". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 10 18:52:28 crc kubenswrapper[4861]: I0310 18:52:28.980983 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ebd5ee5-c5b5-4b05-971f-a6511fe1f51c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6ebd5ee5-c5b5-4b05-971f-a6511fe1f51c" (UID: "6ebd5ee5-c5b5-4b05-971f-a6511fe1f51c"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 10 18:52:28 crc kubenswrapper[4861]: I0310 18:52:28.981078 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/74a39c36-1d01-4d26-a604-4c4739bf2e11-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "74a39c36-1d01-4d26-a604-4c4739bf2e11" (UID: "74a39c36-1d01-4d26-a604-4c4739bf2e11"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 10 18:52:28 crc kubenswrapper[4861]: I0310 18:52:28.981142 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/74a39c36-1d01-4d26-a604-4c4739bf2e11-kube-api-access-5s2x9" (OuterVolumeSpecName: "kube-api-access-5s2x9") pod "74a39c36-1d01-4d26-a604-4c4739bf2e11" (UID: "74a39c36-1d01-4d26-a604-4c4739bf2e11"). InnerVolumeSpecName "kube-api-access-5s2x9". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 10 18:52:29 crc kubenswrapper[4861]: I0310 18:52:29.073185 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5s2x9\" (UniqueName: \"kubernetes.io/projected/74a39c36-1d01-4d26-a604-4c4739bf2e11-kube-api-access-5s2x9\") on node \"crc\" DevicePath \"\"" Mar 10 18:52:29 crc kubenswrapper[4861]: I0310 18:52:29.073476 4861 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/74a39c36-1d01-4d26-a604-4c4739bf2e11-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 10 18:52:29 crc kubenswrapper[4861]: I0310 18:52:29.073486 4861 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/74a39c36-1d01-4d26-a604-4c4739bf2e11-client-ca\") on node \"crc\" DevicePath \"\"" Mar 10 18:52:29 crc kubenswrapper[4861]: I0310 18:52:29.073496 4861 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6ebd5ee5-c5b5-4b05-971f-a6511fe1f51c-config\") on node \"crc\" DevicePath \"\"" 
Mar 10 18:52:29 crc kubenswrapper[4861]: I0310 18:52:29.073504 4861 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/74a39c36-1d01-4d26-a604-4c4739bf2e11-config\") on node \"crc\" DevicePath \"\"" Mar 10 18:52:29 crc kubenswrapper[4861]: I0310 18:52:29.073511 4861 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6ebd5ee5-c5b5-4b05-971f-a6511fe1f51c-client-ca\") on node \"crc\" DevicePath \"\"" Mar 10 18:52:29 crc kubenswrapper[4861]: I0310 18:52:29.073520 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9bqqp\" (UniqueName: \"kubernetes.io/projected/6ebd5ee5-c5b5-4b05-971f-a6511fe1f51c-kube-api-access-9bqqp\") on node \"crc\" DevicePath \"\"" Mar 10 18:52:29 crc kubenswrapper[4861]: I0310 18:52:29.073529 4861 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/74a39c36-1d01-4d26-a604-4c4739bf2e11-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Mar 10 18:52:29 crc kubenswrapper[4861]: I0310 18:52:29.073537 4861 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6ebd5ee5-c5b5-4b05-971f-a6511fe1f51c-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 10 18:52:29 crc kubenswrapper[4861]: I0310 18:52:29.320409 4861 generic.go:334] "Generic (PLEG): container finished" podID="6ebd5ee5-c5b5-4b05-971f-a6511fe1f51c" containerID="5d26cda45b487bff039146b3268d4821153047467653f55853084422192804e1" exitCode=0 Mar 10 18:52:29 crc kubenswrapper[4861]: I0310 18:52:29.320452 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7cb99fc474-b95xm" Mar 10 18:52:29 crc kubenswrapper[4861]: I0310 18:52:29.320467 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7cb99fc474-b95xm" event={"ID":"6ebd5ee5-c5b5-4b05-971f-a6511fe1f51c","Type":"ContainerDied","Data":"5d26cda45b487bff039146b3268d4821153047467653f55853084422192804e1"} Mar 10 18:52:29 crc kubenswrapper[4861]: I0310 18:52:29.320498 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7cb99fc474-b95xm" event={"ID":"6ebd5ee5-c5b5-4b05-971f-a6511fe1f51c","Type":"ContainerDied","Data":"9d06fc5419a5eee0a208745121e1d043a904661d43af21c6a64c0bdcdd8a8b5f"} Mar 10 18:52:29 crc kubenswrapper[4861]: I0310 18:52:29.320513 4861 scope.go:117] "RemoveContainer" containerID="5d26cda45b487bff039146b3268d4821153047467653f55853084422192804e1" Mar 10 18:52:29 crc kubenswrapper[4861]: I0310 18:52:29.322366 4861 generic.go:334] "Generic (PLEG): container finished" podID="74a39c36-1d01-4d26-a604-4c4739bf2e11" containerID="f25466a324ef5b88e6a5e4a96af724019aa7f136938aa3bf11430dfbcf5c3108" exitCode=0 Mar 10 18:52:29 crc kubenswrapper[4861]: I0310 18:52:29.322442 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-84994c9698-clqx6" Mar 10 18:52:29 crc kubenswrapper[4861]: I0310 18:52:29.323337 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-84994c9698-clqx6" event={"ID":"74a39c36-1d01-4d26-a604-4c4739bf2e11","Type":"ContainerDied","Data":"f25466a324ef5b88e6a5e4a96af724019aa7f136938aa3bf11430dfbcf5c3108"} Mar 10 18:52:29 crc kubenswrapper[4861]: I0310 18:52:29.323353 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-84994c9698-clqx6" event={"ID":"74a39c36-1d01-4d26-a604-4c4739bf2e11","Type":"ContainerDied","Data":"bc7d1c027878dbf151467bc22254e792c38354b02470f801cc8fa1343da524ca"} Mar 10 18:52:29 crc kubenswrapper[4861]: I0310 18:52:29.331202 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-65ndj" event={"ID":"832146f2-ed86-4794-a676-13d3df8679ad","Type":"ContainerStarted","Data":"adfcea847d1d3b1d24924e3475b5896c38deb30ee4a5d291c5c286203fe560e7"} Mar 10 18:52:29 crc kubenswrapper[4861]: I0310 18:52:29.335881 4861 scope.go:117] "RemoveContainer" containerID="5d26cda45b487bff039146b3268d4821153047467653f55853084422192804e1" Mar 10 18:52:29 crc kubenswrapper[4861]: E0310 18:52:29.337420 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5d26cda45b487bff039146b3268d4821153047467653f55853084422192804e1\": container with ID starting with 5d26cda45b487bff039146b3268d4821153047467653f55853084422192804e1 not found: ID does not exist" containerID="5d26cda45b487bff039146b3268d4821153047467653f55853084422192804e1" Mar 10 18:52:29 crc kubenswrapper[4861]: I0310 18:52:29.337454 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5d26cda45b487bff039146b3268d4821153047467653f55853084422192804e1"} err="failed to get container 
status \"5d26cda45b487bff039146b3268d4821153047467653f55853084422192804e1\": rpc error: code = NotFound desc = could not find container \"5d26cda45b487bff039146b3268d4821153047467653f55853084422192804e1\": container with ID starting with 5d26cda45b487bff039146b3268d4821153047467653f55853084422192804e1 not found: ID does not exist" Mar 10 18:52:29 crc kubenswrapper[4861]: I0310 18:52:29.337476 4861 scope.go:117] "RemoveContainer" containerID="f25466a324ef5b88e6a5e4a96af724019aa7f136938aa3bf11430dfbcf5c3108" Mar 10 18:52:29 crc kubenswrapper[4861]: I0310 18:52:29.338134 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sj857" event={"ID":"1a9210c4-9579-4cfc-bf99-b652c3af6915","Type":"ContainerStarted","Data":"399b3830d2782fff664b310af9d4f487603b6c4262aba19f4cccaf2f5c6af8ca"} Mar 10 18:52:29 crc kubenswrapper[4861]: I0310 18:52:29.339551 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7cb99fc474-b95xm"] Mar 10 18:52:29 crc kubenswrapper[4861]: I0310 18:52:29.341394 4861 generic.go:334] "Generic (PLEG): container finished" podID="c76065df-dec9-4b14-bd49-8e2d134bf53f" containerID="af3936d6228941727c3c59b023c3567fd3f97fc0f054efce14349d157c9836a6" exitCode=0 Mar 10 18:52:29 crc kubenswrapper[4861]: I0310 18:52:29.341449 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hw926" event={"ID":"c76065df-dec9-4b14-bd49-8e2d134bf53f","Type":"ContainerDied","Data":"af3936d6228941727c3c59b023c3567fd3f97fc0f054efce14349d157c9836a6"} Mar 10 18:52:29 crc kubenswrapper[4861]: I0310 18:52:29.354227 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7cb99fc474-b95xm"] Mar 10 18:52:29 crc kubenswrapper[4861]: I0310 18:52:29.355434 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-57xpw" 
event={"ID":"dad24f7d-329a-449e-bd62-372dfa9f838b","Type":"ContainerStarted","Data":"41a45217708f75f1b5f82e54dfdaea77a70c5484d019205b42438a21c98ee764"} Mar 10 18:52:29 crc kubenswrapper[4861]: I0310 18:52:29.356211 4861 scope.go:117] "RemoveContainer" containerID="f25466a324ef5b88e6a5e4a96af724019aa7f136938aa3bf11430dfbcf5c3108" Mar 10 18:52:29 crc kubenswrapper[4861]: E0310 18:52:29.356573 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f25466a324ef5b88e6a5e4a96af724019aa7f136938aa3bf11430dfbcf5c3108\": container with ID starting with f25466a324ef5b88e6a5e4a96af724019aa7f136938aa3bf11430dfbcf5c3108 not found: ID does not exist" containerID="f25466a324ef5b88e6a5e4a96af724019aa7f136938aa3bf11430dfbcf5c3108" Mar 10 18:52:29 crc kubenswrapper[4861]: I0310 18:52:29.356605 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f25466a324ef5b88e6a5e4a96af724019aa7f136938aa3bf11430dfbcf5c3108"} err="failed to get container status \"f25466a324ef5b88e6a5e4a96af724019aa7f136938aa3bf11430dfbcf5c3108\": rpc error: code = NotFound desc = could not find container \"f25466a324ef5b88e6a5e4a96af724019aa7f136938aa3bf11430dfbcf5c3108\": container with ID starting with f25466a324ef5b88e6a5e4a96af724019aa7f136938aa3bf11430dfbcf5c3108 not found: ID does not exist" Mar 10 18:52:29 crc kubenswrapper[4861]: I0310 18:52:29.357148 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Mar 10 18:52:29 crc kubenswrapper[4861]: I0310 18:52:29.357564 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"41593fdb-2327-42db-b334-4418927f0367","Type":"ContainerDied","Data":"25363642c273f068985e79cca732b6a2ff955f416d404bd42bf08e53ae00f203"} Mar 10 18:52:29 crc kubenswrapper[4861]: I0310 18:52:29.357610 4861 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="25363642c273f068985e79cca732b6a2ff955f416d404bd42bf08e53ae00f203" Mar 10 18:52:29 crc kubenswrapper[4861]: I0310 18:52:29.370606 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-65ndj" podStartSLOduration=2.41068632 podStartE2EDuration="41.370589902s" podCreationTimestamp="2026-03-10 18:51:48 +0000 UTC" firstStartedPulling="2026-03-10 18:51:49.794882046 +0000 UTC m=+253.558318006" lastFinishedPulling="2026-03-10 18:52:28.754785628 +0000 UTC m=+292.518221588" observedRunningTime="2026-03-10 18:52:29.368181412 +0000 UTC m=+293.131617362" watchObservedRunningTime="2026-03-10 18:52:29.370589902 +0000 UTC m=+293.134025862" Mar 10 18:52:29 crc kubenswrapper[4861]: I0310 18:52:29.371215 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29552812-4d5sp" Mar 10 18:52:29 crc kubenswrapper[4861]: I0310 18:52:29.371265 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29552812-4d5sp" event={"ID":"97fa138b-1e58-48b3-90ba-2ed750d3f4f1","Type":"ContainerDied","Data":"bc8bd0331722f67dd0ce9823f7ab0f1cc913ad73695378483722763cc37000bd"} Mar 10 18:52:29 crc kubenswrapper[4861]: I0310 18:52:29.371303 4861 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bc8bd0331722f67dd0ce9823f7ab0f1cc913ad73695378483722763cc37000bd" Mar 10 18:52:29 crc kubenswrapper[4861]: I0310 18:52:29.389997 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Mar 10 18:52:29 crc kubenswrapper[4861]: W0310 18:52:29.425525 4861 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-podbcc3a2e0_9115_453c_8af2_6001abd4012f.slice/crio-da3acba60fd052214de8d1d7c6e6b962a5a581dd5c7404afeae027ac4f320333 WatchSource:0}: Error finding container da3acba60fd052214de8d1d7c6e6b962a5a581dd5c7404afeae027ac4f320333: Status 404 returned error can't find the container with id da3acba60fd052214de8d1d7c6e6b962a5a581dd5c7404afeae027ac4f320333 Mar 10 18:52:29 crc kubenswrapper[4861]: I0310 18:52:29.437913 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-84994c9698-clqx6"] Mar 10 18:52:29 crc kubenswrapper[4861]: I0310 18:52:29.441282 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-84994c9698-clqx6"] Mar 10 18:52:29 crc kubenswrapper[4861]: I0310 18:52:29.457798 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-57xpw" podStartSLOduration=35.838303296 podStartE2EDuration="38.457776698s" podCreationTimestamp="2026-03-10 18:51:51 +0000 UTC" 
firstStartedPulling="2026-03-10 18:52:26.241614171 +0000 UTC m=+290.005050131" lastFinishedPulling="2026-03-10 18:52:28.861087573 +0000 UTC m=+292.624523533" observedRunningTime="2026-03-10 18:52:29.45540698 +0000 UTC m=+293.218842940" watchObservedRunningTime="2026-03-10 18:52:29.457776698 +0000 UTC m=+293.221212658" Mar 10 18:52:29 crc kubenswrapper[4861]: I0310 18:52:29.620561 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29552810-wv88x" Mar 10 18:52:29 crc kubenswrapper[4861]: I0310 18:52:29.680065 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5x9pz\" (UniqueName: \"kubernetes.io/projected/19e950a0-6f71-44af-b995-f1ef1be6edbb-kube-api-access-5x9pz\") pod \"19e950a0-6f71-44af-b995-f1ef1be6edbb\" (UID: \"19e950a0-6f71-44af-b995-f1ef1be6edbb\") " Mar 10 18:52:29 crc kubenswrapper[4861]: I0310 18:52:29.687594 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/19e950a0-6f71-44af-b995-f1ef1be6edbb-kube-api-access-5x9pz" (OuterVolumeSpecName: "kube-api-access-5x9pz") pod "19e950a0-6f71-44af-b995-f1ef1be6edbb" (UID: "19e950a0-6f71-44af-b995-f1ef1be6edbb"). InnerVolumeSpecName "kube-api-access-5x9pz". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 10 18:52:29 crc kubenswrapper[4861]: I0310 18:52:29.766338 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Mar 10 18:52:29 crc kubenswrapper[4861]: I0310 18:52:29.781738 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5x9pz\" (UniqueName: \"kubernetes.io/projected/19e950a0-6f71-44af-b995-f1ef1be6edbb-kube-api-access-5x9pz\") on node \"crc\" DevicePath \"\"" Mar 10 18:52:29 crc kubenswrapper[4861]: I0310 18:52:29.882339 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/96ae9808-899d-4113-8fed-20e33cbe49ec-kubelet-dir\") pod \"96ae9808-899d-4113-8fed-20e33cbe49ec\" (UID: \"96ae9808-899d-4113-8fed-20e33cbe49ec\") " Mar 10 18:52:29 crc kubenswrapper[4861]: I0310 18:52:29.882408 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/96ae9808-899d-4113-8fed-20e33cbe49ec-kube-api-access\") pod \"96ae9808-899d-4113-8fed-20e33cbe49ec\" (UID: \"96ae9808-899d-4113-8fed-20e33cbe49ec\") " Mar 10 18:52:29 crc kubenswrapper[4861]: I0310 18:52:29.882448 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/96ae9808-899d-4113-8fed-20e33cbe49ec-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "96ae9808-899d-4113-8fed-20e33cbe49ec" (UID: "96ae9808-899d-4113-8fed-20e33cbe49ec"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 10 18:52:29 crc kubenswrapper[4861]: I0310 18:52:29.882642 4861 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/96ae9808-899d-4113-8fed-20e33cbe49ec-kubelet-dir\") on node \"crc\" DevicePath \"\"" Mar 10 18:52:29 crc kubenswrapper[4861]: I0310 18:52:29.888905 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96ae9808-899d-4113-8fed-20e33cbe49ec-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "96ae9808-899d-4113-8fed-20e33cbe49ec" (UID: "96ae9808-899d-4113-8fed-20e33cbe49ec"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 10 18:52:29 crc kubenswrapper[4861]: I0310 18:52:29.983613 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/96ae9808-899d-4113-8fed-20e33cbe49ec-kube-api-access\") on node \"crc\" DevicePath \"\"" Mar 10 18:52:30 crc kubenswrapper[4861]: I0310 18:52:30.026101 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-646bt" Mar 10 18:52:30 crc kubenswrapper[4861]: I0310 18:52:30.026235 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-646bt" Mar 10 18:52:30 crc kubenswrapper[4861]: I0310 18:52:30.378727 4861 generic.go:334] "Generic (PLEG): container finished" podID="1a9210c4-9579-4cfc-bf99-b652c3af6915" containerID="399b3830d2782fff664b310af9d4f487603b6c4262aba19f4cccaf2f5c6af8ca" exitCode=0 Mar 10 18:52:30 crc kubenswrapper[4861]: I0310 18:52:30.378745 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sj857" event={"ID":"1a9210c4-9579-4cfc-bf99-b652c3af6915","Type":"ContainerDied","Data":"399b3830d2782fff664b310af9d4f487603b6c4262aba19f4cccaf2f5c6af8ca"} Mar 10 
18:52:30 crc kubenswrapper[4861]: I0310 18:52:30.381481 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hw926" event={"ID":"c76065df-dec9-4b14-bd49-8e2d134bf53f","Type":"ContainerStarted","Data":"127551d9904d53fc28ea6d014a574f56d6e5c500474fdd4ac5795d712fe8e5f4"} Mar 10 18:52:30 crc kubenswrapper[4861]: I0310 18:52:30.382749 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"96ae9808-899d-4113-8fed-20e33cbe49ec","Type":"ContainerDied","Data":"da90d7fd2b3d86af7e83ceb0133da8c600a8126870b84c0a4104a14b45206d9f"} Mar 10 18:52:30 crc kubenswrapper[4861]: I0310 18:52:30.382774 4861 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="da90d7fd2b3d86af7e83ceb0133da8c600a8126870b84c0a4104a14b45206d9f" Mar 10 18:52:30 crc kubenswrapper[4861]: I0310 18:52:30.382822 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Mar 10 18:52:30 crc kubenswrapper[4861]: I0310 18:52:30.385336 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"bcc3a2e0-9115-453c-8af2-6001abd4012f","Type":"ContainerStarted","Data":"570ba78bbfb3c52b41763672697fc6adb6a476c30304659bb8719b05bcaf5522"} Mar 10 18:52:30 crc kubenswrapper[4861]: I0310 18:52:30.385360 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"bcc3a2e0-9115-453c-8af2-6001abd4012f","Type":"ContainerStarted","Data":"da3acba60fd052214de8d1d7c6e6b962a5a581dd5c7404afeae027ac4f320333"} Mar 10 18:52:30 crc kubenswrapper[4861]: I0310 18:52:30.386700 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29552810-wv88x" 
event={"ID":"19e950a0-6f71-44af-b995-f1ef1be6edbb","Type":"ContainerDied","Data":"ec5addcc268a61d9898ac2cce9e91769a58f85b2df0ec1c03a37b1c135ef45b8"} Mar 10 18:52:30 crc kubenswrapper[4861]: I0310 18:52:30.386768 4861 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ec5addcc268a61d9898ac2cce9e91769a58f85b2df0ec1c03a37b1c135ef45b8" Mar 10 18:52:30 crc kubenswrapper[4861]: I0310 18:52:30.386773 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29552810-wv88x" Mar 10 18:52:30 crc kubenswrapper[4861]: I0310 18:52:30.414352 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-rssg7" Mar 10 18:52:30 crc kubenswrapper[4861]: I0310 18:52:30.414739 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-rssg7" Mar 10 18:52:30 crc kubenswrapper[4861]: I0310 18:52:30.418316 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-9-crc" podStartSLOduration=2.41830406 podStartE2EDuration="2.41830406s" podCreationTimestamp="2026-03-10 18:52:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-10 18:52:30.417636981 +0000 UTC m=+294.181072941" watchObservedRunningTime="2026-03-10 18:52:30.41830406 +0000 UTC m=+294.181740020" Mar 10 18:52:30 crc kubenswrapper[4861]: I0310 18:52:30.441478 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-hw926" podStartSLOduration=3.349339008 podStartE2EDuration="43.44146098s" podCreationTimestamp="2026-03-10 18:51:47 +0000 UTC" firstStartedPulling="2026-03-10 18:51:49.777291005 +0000 UTC m=+253.540726965" lastFinishedPulling="2026-03-10 18:52:29.869412977 +0000 UTC m=+293.632848937" 
observedRunningTime="2026-03-10 18:52:30.438849164 +0000 UTC m=+294.202285124" watchObservedRunningTime="2026-03-10 18:52:30.44146098 +0000 UTC m=+294.204896930" Mar 10 18:52:30 crc kubenswrapper[4861]: I0310 18:52:30.459893 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-rssg7" Mar 10 18:52:30 crc kubenswrapper[4861]: I0310 18:52:30.964829 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ebd5ee5-c5b5-4b05-971f-a6511fe1f51c" path="/var/lib/kubelet/pods/6ebd5ee5-c5b5-4b05-971f-a6511fe1f51c/volumes" Mar 10 18:52:30 crc kubenswrapper[4861]: I0310 18:52:30.965885 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="74a39c36-1d01-4d26-a604-4c4739bf2e11" path="/var/lib/kubelet/pods/74a39c36-1d01-4d26-a604-4c4739bf2e11/volumes" Mar 10 18:52:31 crc kubenswrapper[4861]: I0310 18:52:31.180771 4861 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-646bt" podUID="1e1c83ef-91ae-4931-8e31-32890189bb47" containerName="registry-server" probeResult="failure" output=< Mar 10 18:52:31 crc kubenswrapper[4861]: timeout: failed to connect service ":50051" within 1s Mar 10 18:52:31 crc kubenswrapper[4861]: > Mar 10 18:52:31 crc kubenswrapper[4861]: I0310 18:52:31.394183 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sj857" event={"ID":"1a9210c4-9579-4cfc-bf99-b652c3af6915","Type":"ContainerStarted","Data":"f0dd32b9f3ed5fc49372b1263d3a5fca69a7a585c211ca524ff49e4084930511"} Mar 10 18:52:31 crc kubenswrapper[4861]: I0310 18:52:31.608889 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-57xpw" Mar 10 18:52:31 crc kubenswrapper[4861]: I0310 18:52:31.609140 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-57xpw" Mar 10 18:52:31 crc 
kubenswrapper[4861]: I0310 18:52:31.644258 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-sj857" podStartSLOduration=2.408473662 podStartE2EDuration="43.644241955s" podCreationTimestamp="2026-03-10 18:51:48 +0000 UTC" firstStartedPulling="2026-03-10 18:51:49.777920995 +0000 UTC m=+253.541356955" lastFinishedPulling="2026-03-10 18:52:31.013689288 +0000 UTC m=+294.777125248" observedRunningTime="2026-03-10 18:52:31.413580122 +0000 UTC m=+295.177016092" watchObservedRunningTime="2026-03-10 18:52:31.644241955 +0000 UTC m=+295.407677915" Mar 10 18:52:31 crc kubenswrapper[4861]: I0310 18:52:31.645988 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-786f4b86b8-cxghd"] Mar 10 18:52:31 crc kubenswrapper[4861]: E0310 18:52:31.646240 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="74a39c36-1d01-4d26-a604-4c4739bf2e11" containerName="controller-manager" Mar 10 18:52:31 crc kubenswrapper[4861]: I0310 18:52:31.646257 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="74a39c36-1d01-4d26-a604-4c4739bf2e11" containerName="controller-manager" Mar 10 18:52:31 crc kubenswrapper[4861]: E0310 18:52:31.646267 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="96ae9808-899d-4113-8fed-20e33cbe49ec" containerName="pruner" Mar 10 18:52:31 crc kubenswrapper[4861]: I0310 18:52:31.646274 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="96ae9808-899d-4113-8fed-20e33cbe49ec" containerName="pruner" Mar 10 18:52:31 crc kubenswrapper[4861]: E0310 18:52:31.646282 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6ebd5ee5-c5b5-4b05-971f-a6511fe1f51c" containerName="route-controller-manager" Mar 10 18:52:31 crc kubenswrapper[4861]: I0310 18:52:31.646288 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="6ebd5ee5-c5b5-4b05-971f-a6511fe1f51c" containerName="route-controller-manager" Mar 
10 18:52:31 crc kubenswrapper[4861]: E0310 18:52:31.646299 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="97fa138b-1e58-48b3-90ba-2ed750d3f4f1" containerName="oc" Mar 10 18:52:31 crc kubenswrapper[4861]: I0310 18:52:31.646307 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="97fa138b-1e58-48b3-90ba-2ed750d3f4f1" containerName="oc" Mar 10 18:52:31 crc kubenswrapper[4861]: E0310 18:52:31.646316 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="19e950a0-6f71-44af-b995-f1ef1be6edbb" containerName="oc" Mar 10 18:52:31 crc kubenswrapper[4861]: I0310 18:52:31.646322 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="19e950a0-6f71-44af-b995-f1ef1be6edbb" containerName="oc" Mar 10 18:52:31 crc kubenswrapper[4861]: E0310 18:52:31.646331 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="41593fdb-2327-42db-b334-4418927f0367" containerName="pruner" Mar 10 18:52:31 crc kubenswrapper[4861]: I0310 18:52:31.646338 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="41593fdb-2327-42db-b334-4418927f0367" containerName="pruner" Mar 10 18:52:31 crc kubenswrapper[4861]: I0310 18:52:31.646448 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="97fa138b-1e58-48b3-90ba-2ed750d3f4f1" containerName="oc" Mar 10 18:52:31 crc kubenswrapper[4861]: I0310 18:52:31.646463 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="96ae9808-899d-4113-8fed-20e33cbe49ec" containerName="pruner" Mar 10 18:52:31 crc kubenswrapper[4861]: I0310 18:52:31.646469 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="74a39c36-1d01-4d26-a604-4c4739bf2e11" containerName="controller-manager" Mar 10 18:52:31 crc kubenswrapper[4861]: I0310 18:52:31.646476 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="19e950a0-6f71-44af-b995-f1ef1be6edbb" containerName="oc" Mar 10 18:52:31 crc kubenswrapper[4861]: I0310 18:52:31.646485 4861 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="6ebd5ee5-c5b5-4b05-971f-a6511fe1f51c" containerName="route-controller-manager" Mar 10 18:52:31 crc kubenswrapper[4861]: I0310 18:52:31.646494 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="41593fdb-2327-42db-b334-4418927f0367" containerName="pruner" Mar 10 18:52:31 crc kubenswrapper[4861]: I0310 18:52:31.646932 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-786f4b86b8-cxghd" Mar 10 18:52:31 crc kubenswrapper[4861]: I0310 18:52:31.649340 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7fc878f999-fl45g"] Mar 10 18:52:31 crc kubenswrapper[4861]: I0310 18:52:31.650033 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7fc878f999-fl45g" Mar 10 18:52:31 crc kubenswrapper[4861]: I0310 18:52:31.651143 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Mar 10 18:52:31 crc kubenswrapper[4861]: I0310 18:52:31.661644 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Mar 10 18:52:31 crc kubenswrapper[4861]: I0310 18:52:31.662025 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Mar 10 18:52:31 crc kubenswrapper[4861]: I0310 18:52:31.662063 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Mar 10 18:52:31 crc kubenswrapper[4861]: I0310 18:52:31.662081 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Mar 10 18:52:31 crc kubenswrapper[4861]: I0310 18:52:31.662261 4861 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-controller-manager"/"config" Mar 10 18:52:31 crc kubenswrapper[4861]: I0310 18:52:31.662558 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Mar 10 18:52:31 crc kubenswrapper[4861]: I0310 18:52:31.662952 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Mar 10 18:52:31 crc kubenswrapper[4861]: I0310 18:52:31.663161 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Mar 10 18:52:31 crc kubenswrapper[4861]: I0310 18:52:31.663285 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Mar 10 18:52:31 crc kubenswrapper[4861]: I0310 18:52:31.663591 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Mar 10 18:52:31 crc kubenswrapper[4861]: I0310 18:52:31.663676 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Mar 10 18:52:31 crc kubenswrapper[4861]: I0310 18:52:31.664863 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7fc878f999-fl45g"] Mar 10 18:52:31 crc kubenswrapper[4861]: I0310 18:52:31.667360 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-786f4b86b8-cxghd"] Mar 10 18:52:31 crc kubenswrapper[4861]: I0310 18:52:31.672294 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Mar 10 18:52:31 crc kubenswrapper[4861]: I0310 18:52:31.805415 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/0c655cb9-a476-4e87-998b-d7d8e1cd8061-client-ca\") pod \"route-controller-manager-7fc878f999-fl45g\" (UID: \"0c655cb9-a476-4e87-998b-d7d8e1cd8061\") " pod="openshift-route-controller-manager/route-controller-manager-7fc878f999-fl45g" Mar 10 18:52:31 crc kubenswrapper[4861]: I0310 18:52:31.805584 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d9381aeb-824f-4653-bb6a-09dd0c24c994-proxy-ca-bundles\") pod \"controller-manager-786f4b86b8-cxghd\" (UID: \"d9381aeb-824f-4653-bb6a-09dd0c24c994\") " pod="openshift-controller-manager/controller-manager-786f4b86b8-cxghd" Mar 10 18:52:31 crc kubenswrapper[4861]: I0310 18:52:31.805799 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0c655cb9-a476-4e87-998b-d7d8e1cd8061-serving-cert\") pod \"route-controller-manager-7fc878f999-fl45g\" (UID: \"0c655cb9-a476-4e87-998b-d7d8e1cd8061\") " pod="openshift-route-controller-manager/route-controller-manager-7fc878f999-fl45g" Mar 10 18:52:31 crc kubenswrapper[4861]: I0310 18:52:31.805847 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ctq4d\" (UniqueName: \"kubernetes.io/projected/0c655cb9-a476-4e87-998b-d7d8e1cd8061-kube-api-access-ctq4d\") pod \"route-controller-manager-7fc878f999-fl45g\" (UID: \"0c655cb9-a476-4e87-998b-d7d8e1cd8061\") " pod="openshift-route-controller-manager/route-controller-manager-7fc878f999-fl45g" Mar 10 18:52:31 crc kubenswrapper[4861]: I0310 18:52:31.805905 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cgnrd\" (UniqueName: \"kubernetes.io/projected/d9381aeb-824f-4653-bb6a-09dd0c24c994-kube-api-access-cgnrd\") pod \"controller-manager-786f4b86b8-cxghd\" (UID: 
\"d9381aeb-824f-4653-bb6a-09dd0c24c994\") " pod="openshift-controller-manager/controller-manager-786f4b86b8-cxghd" Mar 10 18:52:31 crc kubenswrapper[4861]: I0310 18:52:31.805962 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d9381aeb-824f-4653-bb6a-09dd0c24c994-client-ca\") pod \"controller-manager-786f4b86b8-cxghd\" (UID: \"d9381aeb-824f-4653-bb6a-09dd0c24c994\") " pod="openshift-controller-manager/controller-manager-786f4b86b8-cxghd" Mar 10 18:52:31 crc kubenswrapper[4861]: I0310 18:52:31.805995 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d9381aeb-824f-4653-bb6a-09dd0c24c994-config\") pod \"controller-manager-786f4b86b8-cxghd\" (UID: \"d9381aeb-824f-4653-bb6a-09dd0c24c994\") " pod="openshift-controller-manager/controller-manager-786f4b86b8-cxghd" Mar 10 18:52:31 crc kubenswrapper[4861]: I0310 18:52:31.806019 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0c655cb9-a476-4e87-998b-d7d8e1cd8061-config\") pod \"route-controller-manager-7fc878f999-fl45g\" (UID: \"0c655cb9-a476-4e87-998b-d7d8e1cd8061\") " pod="openshift-route-controller-manager/route-controller-manager-7fc878f999-fl45g" Mar 10 18:52:31 crc kubenswrapper[4861]: I0310 18:52:31.806106 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d9381aeb-824f-4653-bb6a-09dd0c24c994-serving-cert\") pod \"controller-manager-786f4b86b8-cxghd\" (UID: \"d9381aeb-824f-4653-bb6a-09dd0c24c994\") " pod="openshift-controller-manager/controller-manager-786f4b86b8-cxghd" Mar 10 18:52:31 crc kubenswrapper[4861]: I0310 18:52:31.907837 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"config\" (UniqueName: \"kubernetes.io/configmap/0c655cb9-a476-4e87-998b-d7d8e1cd8061-config\") pod \"route-controller-manager-7fc878f999-fl45g\" (UID: \"0c655cb9-a476-4e87-998b-d7d8e1cd8061\") " pod="openshift-route-controller-manager/route-controller-manager-7fc878f999-fl45g" Mar 10 18:52:31 crc kubenswrapper[4861]: I0310 18:52:31.907966 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d9381aeb-824f-4653-bb6a-09dd0c24c994-serving-cert\") pod \"controller-manager-786f4b86b8-cxghd\" (UID: \"d9381aeb-824f-4653-bb6a-09dd0c24c994\") " pod="openshift-controller-manager/controller-manager-786f4b86b8-cxghd" Mar 10 18:52:31 crc kubenswrapper[4861]: I0310 18:52:31.907999 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0c655cb9-a476-4e87-998b-d7d8e1cd8061-client-ca\") pod \"route-controller-manager-7fc878f999-fl45g\" (UID: \"0c655cb9-a476-4e87-998b-d7d8e1cd8061\") " pod="openshift-route-controller-manager/route-controller-manager-7fc878f999-fl45g" Mar 10 18:52:31 crc kubenswrapper[4861]: I0310 18:52:31.908055 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d9381aeb-824f-4653-bb6a-09dd0c24c994-proxy-ca-bundles\") pod \"controller-manager-786f4b86b8-cxghd\" (UID: \"d9381aeb-824f-4653-bb6a-09dd0c24c994\") " pod="openshift-controller-manager/controller-manager-786f4b86b8-cxghd" Mar 10 18:52:31 crc kubenswrapper[4861]: I0310 18:52:31.908088 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0c655cb9-a476-4e87-998b-d7d8e1cd8061-serving-cert\") pod \"route-controller-manager-7fc878f999-fl45g\" (UID: \"0c655cb9-a476-4e87-998b-d7d8e1cd8061\") " pod="openshift-route-controller-manager/route-controller-manager-7fc878f999-fl45g" Mar 10 18:52:31 
crc kubenswrapper[4861]: I0310 18:52:31.908200 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ctq4d\" (UniqueName: \"kubernetes.io/projected/0c655cb9-a476-4e87-998b-d7d8e1cd8061-kube-api-access-ctq4d\") pod \"route-controller-manager-7fc878f999-fl45g\" (UID: \"0c655cb9-a476-4e87-998b-d7d8e1cd8061\") " pod="openshift-route-controller-manager/route-controller-manager-7fc878f999-fl45g"
Mar 10 18:52:31 crc kubenswrapper[4861]: I0310 18:52:31.908225 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cgnrd\" (UniqueName: \"kubernetes.io/projected/d9381aeb-824f-4653-bb6a-09dd0c24c994-kube-api-access-cgnrd\") pod \"controller-manager-786f4b86b8-cxghd\" (UID: \"d9381aeb-824f-4653-bb6a-09dd0c24c994\") " pod="openshift-controller-manager/controller-manager-786f4b86b8-cxghd"
Mar 10 18:52:31 crc kubenswrapper[4861]: I0310 18:52:31.908245 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d9381aeb-824f-4653-bb6a-09dd0c24c994-client-ca\") pod \"controller-manager-786f4b86b8-cxghd\" (UID: \"d9381aeb-824f-4653-bb6a-09dd0c24c994\") " pod="openshift-controller-manager/controller-manager-786f4b86b8-cxghd"
Mar 10 18:52:31 crc kubenswrapper[4861]: I0310 18:52:31.908361 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d9381aeb-824f-4653-bb6a-09dd0c24c994-config\") pod \"controller-manager-786f4b86b8-cxghd\" (UID: \"d9381aeb-824f-4653-bb6a-09dd0c24c994\") " pod="openshift-controller-manager/controller-manager-786f4b86b8-cxghd"
Mar 10 18:52:31 crc kubenswrapper[4861]: I0310 18:52:31.909561 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d9381aeb-824f-4653-bb6a-09dd0c24c994-proxy-ca-bundles\") pod \"controller-manager-786f4b86b8-cxghd\" (UID: \"d9381aeb-824f-4653-bb6a-09dd0c24c994\") " pod="openshift-controller-manager/controller-manager-786f4b86b8-cxghd"
Mar 10 18:52:31 crc kubenswrapper[4861]: I0310 18:52:31.910430 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d9381aeb-824f-4653-bb6a-09dd0c24c994-client-ca\") pod \"controller-manager-786f4b86b8-cxghd\" (UID: \"d9381aeb-824f-4653-bb6a-09dd0c24c994\") " pod="openshift-controller-manager/controller-manager-786f4b86b8-cxghd"
Mar 10 18:52:31 crc kubenswrapper[4861]: I0310 18:52:31.910833 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0c655cb9-a476-4e87-998b-d7d8e1cd8061-config\") pod \"route-controller-manager-7fc878f999-fl45g\" (UID: \"0c655cb9-a476-4e87-998b-d7d8e1cd8061\") " pod="openshift-route-controller-manager/route-controller-manager-7fc878f999-fl45g"
Mar 10 18:52:31 crc kubenswrapper[4861]: I0310 18:52:31.910831 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0c655cb9-a476-4e87-998b-d7d8e1cd8061-client-ca\") pod \"route-controller-manager-7fc878f999-fl45g\" (UID: \"0c655cb9-a476-4e87-998b-d7d8e1cd8061\") " pod="openshift-route-controller-manager/route-controller-manager-7fc878f999-fl45g"
Mar 10 18:52:31 crc kubenswrapper[4861]: I0310 18:52:31.911091 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d9381aeb-824f-4653-bb6a-09dd0c24c994-config\") pod \"controller-manager-786f4b86b8-cxghd\" (UID: \"d9381aeb-824f-4653-bb6a-09dd0c24c994\") " pod="openshift-controller-manager/controller-manager-786f4b86b8-cxghd"
Mar 10 18:52:31 crc kubenswrapper[4861]: I0310 18:52:31.914643 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0c655cb9-a476-4e87-998b-d7d8e1cd8061-serving-cert\") pod \"route-controller-manager-7fc878f999-fl45g\" (UID: \"0c655cb9-a476-4e87-998b-d7d8e1cd8061\") " pod="openshift-route-controller-manager/route-controller-manager-7fc878f999-fl45g"
Mar 10 18:52:31 crc kubenswrapper[4861]: I0310 18:52:31.921257 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d9381aeb-824f-4653-bb6a-09dd0c24c994-serving-cert\") pod \"controller-manager-786f4b86b8-cxghd\" (UID: \"d9381aeb-824f-4653-bb6a-09dd0c24c994\") " pod="openshift-controller-manager/controller-manager-786f4b86b8-cxghd"
Mar 10 18:52:31 crc kubenswrapper[4861]: I0310 18:52:31.927893 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cgnrd\" (UniqueName: \"kubernetes.io/projected/d9381aeb-824f-4653-bb6a-09dd0c24c994-kube-api-access-cgnrd\") pod \"controller-manager-786f4b86b8-cxghd\" (UID: \"d9381aeb-824f-4653-bb6a-09dd0c24c994\") " pod="openshift-controller-manager/controller-manager-786f4b86b8-cxghd"
Mar 10 18:52:31 crc kubenswrapper[4861]: I0310 18:52:31.932833 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ctq4d\" (UniqueName: \"kubernetes.io/projected/0c655cb9-a476-4e87-998b-d7d8e1cd8061-kube-api-access-ctq4d\") pod \"route-controller-manager-7fc878f999-fl45g\" (UID: \"0c655cb9-a476-4e87-998b-d7d8e1cd8061\") " pod="openshift-route-controller-manager/route-controller-manager-7fc878f999-fl45g"
Mar 10 18:52:31 crc kubenswrapper[4861]: I0310 18:52:31.965967 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-786f4b86b8-cxghd"
Mar 10 18:52:31 crc kubenswrapper[4861]: I0310 18:52:31.974866 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7fc878f999-fl45g"
Mar 10 18:52:32 crc kubenswrapper[4861]: I0310 18:52:32.241249 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-786f4b86b8-cxghd"]
Mar 10 18:52:32 crc kubenswrapper[4861]: I0310 18:52:32.268328 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7fc878f999-fl45g"]
Mar 10 18:52:32 crc kubenswrapper[4861]: W0310 18:52:32.280491 4861 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0c655cb9_a476_4e87_998b_d7d8e1cd8061.slice/crio-6ad1303b3b4fb122f9c83f917baac4078a80dda70563706f25d7e534e2803cc9 WatchSource:0}: Error finding container 6ad1303b3b4fb122f9c83f917baac4078a80dda70563706f25d7e534e2803cc9: Status 404 returned error can't find the container with id 6ad1303b3b4fb122f9c83f917baac4078a80dda70563706f25d7e534e2803cc9
Mar 10 18:52:32 crc kubenswrapper[4861]: I0310 18:52:32.416974 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7fc878f999-fl45g" event={"ID":"0c655cb9-a476-4e87-998b-d7d8e1cd8061","Type":"ContainerStarted","Data":"6ad1303b3b4fb122f9c83f917baac4078a80dda70563706f25d7e534e2803cc9"}
Mar 10 18:52:32 crc kubenswrapper[4861]: I0310 18:52:32.421004 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-786f4b86b8-cxghd" event={"ID":"d9381aeb-824f-4653-bb6a-09dd0c24c994","Type":"ContainerStarted","Data":"b4495d6686328a10f86bbea8640f45c6bd2dab3cebf4c82d2f39aa751c4d9d7c"}
Mar 10 18:52:32 crc kubenswrapper[4861]: I0310 18:52:32.477602 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-rssg7"
Mar 10 18:52:32 crc kubenswrapper[4861]: I0310 18:52:32.642545 4861 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-57xpw" podUID="dad24f7d-329a-449e-bd62-372dfa9f838b" containerName="registry-server" probeResult="failure" output=<
Mar 10 18:52:32 crc kubenswrapper[4861]: timeout: failed to connect service ":50051" within 1s
Mar 10 18:52:32 crc kubenswrapper[4861]: >
Mar 10 18:52:33 crc kubenswrapper[4861]: I0310 18:52:33.427546 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7fc878f999-fl45g" event={"ID":"0c655cb9-a476-4e87-998b-d7d8e1cd8061","Type":"ContainerStarted","Data":"f949f78dd884d304c660471cb6836e702038a744c5b26c84022b2db162999f76"}
Mar 10 18:52:33 crc kubenswrapper[4861]: I0310 18:52:33.427926 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-7fc878f999-fl45g"
Mar 10 18:52:33 crc kubenswrapper[4861]: I0310 18:52:33.427941 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-rssg7"]
Mar 10 18:52:33 crc kubenswrapper[4861]: I0310 18:52:33.429184 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-786f4b86b8-cxghd" event={"ID":"d9381aeb-824f-4653-bb6a-09dd0c24c994","Type":"ContainerStarted","Data":"92b259b59e3b028a0e4806dc81520a293854214bb4d3d62741d1ecf969c40cbf"}
Mar 10 18:52:33 crc kubenswrapper[4861]: I0310 18:52:33.433181 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-7fc878f999-fl45g"
Mar 10 18:52:33 crc kubenswrapper[4861]: I0310 18:52:33.470233 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-7fc878f999-fl45g" podStartSLOduration=7.470210182 podStartE2EDuration="7.470210182s" podCreationTimestamp="2026-03-10 18:52:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-10 18:52:33.464818994 +0000 UTC m=+297.228254974" watchObservedRunningTime="2026-03-10 18:52:33.470210182 +0000 UTC m=+297.233646142"
Mar 10 18:52:33 crc kubenswrapper[4861]: I0310 18:52:33.491926 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-786f4b86b8-cxghd" podStartSLOduration=7.491912388 podStartE2EDuration="7.491912388s" podCreationTimestamp="2026-03-10 18:52:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-10 18:52:33.488506598 +0000 UTC m=+297.251942568" watchObservedRunningTime="2026-03-10 18:52:33.491912388 +0000 UTC m=+297.255348348"
Mar 10 18:52:34 crc kubenswrapper[4861]: I0310 18:52:34.438209 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-rssg7" podUID="3db43f6e-38a4-4f5c-bb4b-ddac9e664528" containerName="registry-server" containerID="cri-o://c52c3de48252a9f9444d7de9d4972d31c457e491df77d8561ee93fec0b51bb97" gracePeriod=2
Mar 10 18:52:34 crc kubenswrapper[4861]: I0310 18:52:34.438513 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-786f4b86b8-cxghd"
Mar 10 18:52:34 crc kubenswrapper[4861]: I0310 18:52:34.448473 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-786f4b86b8-cxghd"
Mar 10 18:52:35 crc kubenswrapper[4861]: I0310 18:52:35.490953 4861 generic.go:334] "Generic (PLEG): container finished" podID="3db43f6e-38a4-4f5c-bb4b-ddac9e664528" containerID="c52c3de48252a9f9444d7de9d4972d31c457e491df77d8561ee93fec0b51bb97" exitCode=0
Mar 10 18:52:35 crc kubenswrapper[4861]: I0310 18:52:35.491030 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rssg7" event={"ID":"3db43f6e-38a4-4f5c-bb4b-ddac9e664528","Type":"ContainerDied","Data":"c52c3de48252a9f9444d7de9d4972d31c457e491df77d8561ee93fec0b51bb97"}
Mar 10 18:52:35 crc kubenswrapper[4861]: I0310 18:52:35.682294 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rssg7"
Mar 10 18:52:35 crc kubenswrapper[4861]: I0310 18:52:35.778625 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m7qn4\" (UniqueName: \"kubernetes.io/projected/3db43f6e-38a4-4f5c-bb4b-ddac9e664528-kube-api-access-m7qn4\") pod \"3db43f6e-38a4-4f5c-bb4b-ddac9e664528\" (UID: \"3db43f6e-38a4-4f5c-bb4b-ddac9e664528\") "
Mar 10 18:52:35 crc kubenswrapper[4861]: I0310 18:52:35.778773 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3db43f6e-38a4-4f5c-bb4b-ddac9e664528-utilities\") pod \"3db43f6e-38a4-4f5c-bb4b-ddac9e664528\" (UID: \"3db43f6e-38a4-4f5c-bb4b-ddac9e664528\") "
Mar 10 18:52:35 crc kubenswrapper[4861]: I0310 18:52:35.778869 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3db43f6e-38a4-4f5c-bb4b-ddac9e664528-catalog-content\") pod \"3db43f6e-38a4-4f5c-bb4b-ddac9e664528\" (UID: \"3db43f6e-38a4-4f5c-bb4b-ddac9e664528\") "
Mar 10 18:52:35 crc kubenswrapper[4861]: I0310 18:52:35.781177 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3db43f6e-38a4-4f5c-bb4b-ddac9e664528-utilities" (OuterVolumeSpecName: "utilities") pod "3db43f6e-38a4-4f5c-bb4b-ddac9e664528" (UID: "3db43f6e-38a4-4f5c-bb4b-ddac9e664528"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 10 18:52:35 crc kubenswrapper[4861]: I0310 18:52:35.789153 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3db43f6e-38a4-4f5c-bb4b-ddac9e664528-kube-api-access-m7qn4" (OuterVolumeSpecName: "kube-api-access-m7qn4") pod "3db43f6e-38a4-4f5c-bb4b-ddac9e664528" (UID: "3db43f6e-38a4-4f5c-bb4b-ddac9e664528"). InnerVolumeSpecName "kube-api-access-m7qn4". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 10 18:52:35 crc kubenswrapper[4861]: I0310 18:52:35.806108 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3db43f6e-38a4-4f5c-bb4b-ddac9e664528-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3db43f6e-38a4-4f5c-bb4b-ddac9e664528" (UID: "3db43f6e-38a4-4f5c-bb4b-ddac9e664528"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 10 18:52:35 crc kubenswrapper[4861]: I0310 18:52:35.880738 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m7qn4\" (UniqueName: \"kubernetes.io/projected/3db43f6e-38a4-4f5c-bb4b-ddac9e664528-kube-api-access-m7qn4\") on node \"crc\" DevicePath \"\""
Mar 10 18:52:35 crc kubenswrapper[4861]: I0310 18:52:35.880767 4861 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3db43f6e-38a4-4f5c-bb4b-ddac9e664528-utilities\") on node \"crc\" DevicePath \"\""
Mar 10 18:52:35 crc kubenswrapper[4861]: I0310 18:52:35.880777 4861 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3db43f6e-38a4-4f5c-bb4b-ddac9e664528-catalog-content\") on node \"crc\" DevicePath \"\""
Mar 10 18:52:36 crc kubenswrapper[4861]: I0310 18:52:36.505101 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rssg7" event={"ID":"3db43f6e-38a4-4f5c-bb4b-ddac9e664528","Type":"ContainerDied","Data":"4be108834b542d01902ea058416fa8fb32956ce45b039a90fe06c2e4d4f414bb"}
Mar 10 18:52:36 crc kubenswrapper[4861]: I0310 18:52:36.505169 4861 scope.go:117] "RemoveContainer" containerID="c52c3de48252a9f9444d7de9d4972d31c457e491df77d8561ee93fec0b51bb97"
Mar 10 18:52:36 crc kubenswrapper[4861]: I0310 18:52:36.505199 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rssg7"
Mar 10 18:52:36 crc kubenswrapper[4861]: I0310 18:52:36.559102 4861 scope.go:117] "RemoveContainer" containerID="21ac9af81e4fcbfe3ad2d08d16a0191b26f7711c1ef5e01fd0a732ee7c43415d"
Mar 10 18:52:36 crc kubenswrapper[4861]: I0310 18:52:36.566651 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-rssg7"]
Mar 10 18:52:36 crc kubenswrapper[4861]: I0310 18:52:36.575335 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-rssg7"]
Mar 10 18:52:36 crc kubenswrapper[4861]: I0310 18:52:36.591004 4861 scope.go:117] "RemoveContainer" containerID="a80f2825ea4a3fe8ab77fca76382e3a3afbc365126134d117941e8369df90264"
Mar 10 18:52:36 crc kubenswrapper[4861]: I0310 18:52:36.986807 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3db43f6e-38a4-4f5c-bb4b-ddac9e664528" path="/var/lib/kubelet/pods/3db43f6e-38a4-4f5c-bb4b-ddac9e664528/volumes"
Mar 10 18:52:38 crc kubenswrapper[4861]: I0310 18:52:38.038959 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-pmtqc"
Mar 10 18:52:38 crc kubenswrapper[4861]: I0310 18:52:38.039386 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-pmtqc"
Mar 10 18:52:38 crc kubenswrapper[4861]: I0310 18:52:38.095609 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-pmtqc"
Mar 10 18:52:38 crc kubenswrapper[4861]: I0310 18:52:38.228325 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-hw926"
Mar 10 18:52:38 crc kubenswrapper[4861]: I0310 18:52:38.231299 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-hw926"
Mar 10 18:52:38 crc kubenswrapper[4861]: I0310 18:52:38.279994 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-hw926"
Mar 10 18:52:38 crc kubenswrapper[4861]: I0310 18:52:38.439807 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-sj857"
Mar 10 18:52:38 crc kubenswrapper[4861]: I0310 18:52:38.439870 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-sj857"
Mar 10 18:52:38 crc kubenswrapper[4861]: I0310 18:52:38.492579 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-sj857"
Mar 10 18:52:38 crc kubenswrapper[4861]: I0310 18:52:38.568653 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-hw926"
Mar 10 18:52:38 crc kubenswrapper[4861]: I0310 18:52:38.580470 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-pmtqc"
Mar 10 18:52:38 crc kubenswrapper[4861]: I0310 18:52:38.585355 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-sj857"
Mar 10 18:52:38 crc kubenswrapper[4861]: I0310 18:52:38.647617 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-65ndj"
Mar 10 18:52:38 crc kubenswrapper[4861]: I0310 18:52:38.647793 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-65ndj"
Mar 10 18:52:38 crc kubenswrapper[4861]: I0310 18:52:38.703056 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-65ndj"
Mar 10 18:52:39 crc kubenswrapper[4861]: I0310 18:52:39.619997 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-65ndj"
Mar 10 18:52:40 crc kubenswrapper[4861]: I0310 18:52:40.077211 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-646bt"
Mar 10 18:52:40 crc kubenswrapper[4861]: I0310 18:52:40.150433 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-646bt"
Mar 10 18:52:40 crc kubenswrapper[4861]: I0310 18:52:40.428239 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-sj857"]
Mar 10 18:52:40 crc kubenswrapper[4861]: I0310 18:52:40.526427 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-sj857" podUID="1a9210c4-9579-4cfc-bf99-b652c3af6915" containerName="registry-server" containerID="cri-o://f0dd32b9f3ed5fc49372b1263d3a5fca69a7a585c211ca524ff49e4084930511" gracePeriod=2
Mar 10 18:52:40 crc kubenswrapper[4861]: I0310 18:52:40.526688 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-q2hgk" event={"ID":"876b4458-98a1-4dc2-af8a-3390a56cad59","Type":"ContainerStarted","Data":"03c2838958e0d91e1f377ee1e41caac9c02353f685a8053036d69f893f2615b2"}
Mar 10 18:52:41 crc kubenswrapper[4861]: I0310 18:52:41.029176 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-65ndj"]
Mar 10 18:52:41 crc kubenswrapper[4861]: I0310 18:52:41.534338 4861 generic.go:334] "Generic (PLEG): container finished" podID="1a9210c4-9579-4cfc-bf99-b652c3af6915" containerID="f0dd32b9f3ed5fc49372b1263d3a5fca69a7a585c211ca524ff49e4084930511" exitCode=0
Mar 10 18:52:41 crc kubenswrapper[4861]: I0310 18:52:41.534463 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sj857" event={"ID":"1a9210c4-9579-4cfc-bf99-b652c3af6915","Type":"ContainerDied","Data":"f0dd32b9f3ed5fc49372b1263d3a5fca69a7a585c211ca524ff49e4084930511"}
Mar 10 18:52:41 crc kubenswrapper[4861]: I0310 18:52:41.662793 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-57xpw"
Mar 10 18:52:41 crc kubenswrapper[4861]: I0310 18:52:41.710120 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-57xpw"
Mar 10 18:52:42 crc kubenswrapper[4861]: I0310 18:52:42.093622 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-sj857"
Mar 10 18:52:42 crc kubenswrapper[4861]: I0310 18:52:42.180342 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nhjx5\" (UniqueName: \"kubernetes.io/projected/1a9210c4-9579-4cfc-bf99-b652c3af6915-kube-api-access-nhjx5\") pod \"1a9210c4-9579-4cfc-bf99-b652c3af6915\" (UID: \"1a9210c4-9579-4cfc-bf99-b652c3af6915\") "
Mar 10 18:52:42 crc kubenswrapper[4861]: I0310 18:52:42.180443 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1a9210c4-9579-4cfc-bf99-b652c3af6915-catalog-content\") pod \"1a9210c4-9579-4cfc-bf99-b652c3af6915\" (UID: \"1a9210c4-9579-4cfc-bf99-b652c3af6915\") "
Mar 10 18:52:42 crc kubenswrapper[4861]: I0310 18:52:42.180484 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1a9210c4-9579-4cfc-bf99-b652c3af6915-utilities\") pod \"1a9210c4-9579-4cfc-bf99-b652c3af6915\" (UID: \"1a9210c4-9579-4cfc-bf99-b652c3af6915\") "
Mar 10 18:52:42 crc kubenswrapper[4861]: I0310 18:52:42.181901 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1a9210c4-9579-4cfc-bf99-b652c3af6915-utilities" (OuterVolumeSpecName: "utilities") pod "1a9210c4-9579-4cfc-bf99-b652c3af6915" (UID: "1a9210c4-9579-4cfc-bf99-b652c3af6915"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 10 18:52:42 crc kubenswrapper[4861]: I0310 18:52:42.188756 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1a9210c4-9579-4cfc-bf99-b652c3af6915-kube-api-access-nhjx5" (OuterVolumeSpecName: "kube-api-access-nhjx5") pod "1a9210c4-9579-4cfc-bf99-b652c3af6915" (UID: "1a9210c4-9579-4cfc-bf99-b652c3af6915"). InnerVolumeSpecName "kube-api-access-nhjx5". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 10 18:52:42 crc kubenswrapper[4861]: I0310 18:52:42.225430 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1a9210c4-9579-4cfc-bf99-b652c3af6915-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1a9210c4-9579-4cfc-bf99-b652c3af6915" (UID: "1a9210c4-9579-4cfc-bf99-b652c3af6915"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 10 18:52:42 crc kubenswrapper[4861]: I0310 18:52:42.282135 4861 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1a9210c4-9579-4cfc-bf99-b652c3af6915-utilities\") on node \"crc\" DevicePath \"\""
Mar 10 18:52:42 crc kubenswrapper[4861]: I0310 18:52:42.282160 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nhjx5\" (UniqueName: \"kubernetes.io/projected/1a9210c4-9579-4cfc-bf99-b652c3af6915-kube-api-access-nhjx5\") on node \"crc\" DevicePath \"\""
Mar 10 18:52:42 crc kubenswrapper[4861]: I0310 18:52:42.282183 4861 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1a9210c4-9579-4cfc-bf99-b652c3af6915-catalog-content\") on node \"crc\" DevicePath \"\""
Mar 10 18:52:42 crc kubenswrapper[4861]: I0310 18:52:42.540764 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sj857" event={"ID":"1a9210c4-9579-4cfc-bf99-b652c3af6915","Type":"ContainerDied","Data":"d4be2944a4c4857e89ac0b3151108ec0a02cd92b6967800ddcfd60509cacf795"}
Mar 10 18:52:42 crc kubenswrapper[4861]: I0310 18:52:42.540809 4861 scope.go:117] "RemoveContainer" containerID="f0dd32b9f3ed5fc49372b1263d3a5fca69a7a585c211ca524ff49e4084930511"
Mar 10 18:52:42 crc kubenswrapper[4861]: I0310 18:52:42.540908 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-sj857"
Mar 10 18:52:42 crc kubenswrapper[4861]: I0310 18:52:42.546239 4861 generic.go:334] "Generic (PLEG): container finished" podID="876b4458-98a1-4dc2-af8a-3390a56cad59" containerID="03c2838958e0d91e1f377ee1e41caac9c02353f685a8053036d69f893f2615b2" exitCode=0
Mar 10 18:52:42 crc kubenswrapper[4861]: I0310 18:52:42.546293 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-q2hgk" event={"ID":"876b4458-98a1-4dc2-af8a-3390a56cad59","Type":"ContainerDied","Data":"03c2838958e0d91e1f377ee1e41caac9c02353f685a8053036d69f893f2615b2"}
Mar 10 18:52:42 crc kubenswrapper[4861]: I0310 18:52:42.546806 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-65ndj" podUID="832146f2-ed86-4794-a676-13d3df8679ad" containerName="registry-server" containerID="cri-o://adfcea847d1d3b1d24924e3475b5896c38deb30ee4a5d291c5c286203fe560e7" gracePeriod=2
Mar 10 18:52:42 crc kubenswrapper[4861]: I0310 18:52:42.570916 4861 scope.go:117] "RemoveContainer" containerID="399b3830d2782fff664b310af9d4f487603b6c4262aba19f4cccaf2f5c6af8ca"
Mar 10 18:52:42 crc kubenswrapper[4861]: I0310 18:52:42.581819 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-sj857"]
Mar 10 18:52:42 crc kubenswrapper[4861]: I0310 18:52:42.585031 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-sj857"]
Mar 10 18:52:42 crc kubenswrapper[4861]: I0310 18:52:42.597644 4861 scope.go:117] "RemoveContainer" containerID="0ec1cfa07a2ce0e20a3d2660639fa17daa0414890d653d1c9d2646db439f1440"
Mar 10 18:52:42 crc kubenswrapper[4861]: I0310 18:52:42.830634 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-57xpw"]
Mar 10 18:52:42 crc kubenswrapper[4861]: I0310 18:52:42.965114 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1a9210c4-9579-4cfc-bf99-b652c3af6915" path="/var/lib/kubelet/pods/1a9210c4-9579-4cfc-bf99-b652c3af6915/volumes"
Mar 10 18:52:43 crc kubenswrapper[4861]: I0310 18:52:43.095486 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-65ndj"
Mar 10 18:52:43 crc kubenswrapper[4861]: I0310 18:52:43.192532 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j67kt\" (UniqueName: \"kubernetes.io/projected/832146f2-ed86-4794-a676-13d3df8679ad-kube-api-access-j67kt\") pod \"832146f2-ed86-4794-a676-13d3df8679ad\" (UID: \"832146f2-ed86-4794-a676-13d3df8679ad\") "
Mar 10 18:52:43 crc kubenswrapper[4861]: I0310 18:52:43.192618 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/832146f2-ed86-4794-a676-13d3df8679ad-catalog-content\") pod \"832146f2-ed86-4794-a676-13d3df8679ad\" (UID: \"832146f2-ed86-4794-a676-13d3df8679ad\") "
Mar 10 18:52:43 crc kubenswrapper[4861]: I0310 18:52:43.192668 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/832146f2-ed86-4794-a676-13d3df8679ad-utilities\") pod \"832146f2-ed86-4794-a676-13d3df8679ad\" (UID: \"832146f2-ed86-4794-a676-13d3df8679ad\") "
Mar 10 18:52:43 crc kubenswrapper[4861]: I0310 18:52:43.193618 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/832146f2-ed86-4794-a676-13d3df8679ad-utilities" (OuterVolumeSpecName: "utilities") pod "832146f2-ed86-4794-a676-13d3df8679ad" (UID: "832146f2-ed86-4794-a676-13d3df8679ad"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 10 18:52:43 crc kubenswrapper[4861]: I0310 18:52:43.199618 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/832146f2-ed86-4794-a676-13d3df8679ad-kube-api-access-j67kt" (OuterVolumeSpecName: "kube-api-access-j67kt") pod "832146f2-ed86-4794-a676-13d3df8679ad" (UID: "832146f2-ed86-4794-a676-13d3df8679ad"). InnerVolumeSpecName "kube-api-access-j67kt". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 10 18:52:43 crc kubenswrapper[4861]: I0310 18:52:43.258538 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/832146f2-ed86-4794-a676-13d3df8679ad-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "832146f2-ed86-4794-a676-13d3df8679ad" (UID: "832146f2-ed86-4794-a676-13d3df8679ad"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 10 18:52:43 crc kubenswrapper[4861]: I0310 18:52:43.294352 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j67kt\" (UniqueName: \"kubernetes.io/projected/832146f2-ed86-4794-a676-13d3df8679ad-kube-api-access-j67kt\") on node \"crc\" DevicePath \"\""
Mar 10 18:52:43 crc kubenswrapper[4861]: I0310 18:52:43.294606 4861 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/832146f2-ed86-4794-a676-13d3df8679ad-catalog-content\") on node \"crc\" DevicePath \"\""
Mar 10 18:52:43 crc kubenswrapper[4861]: I0310 18:52:43.294614 4861 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/832146f2-ed86-4794-a676-13d3df8679ad-utilities\") on node \"crc\" DevicePath \"\""
Mar 10 18:52:43 crc kubenswrapper[4861]: I0310 18:52:43.553574 4861 generic.go:334] "Generic (PLEG): container finished" podID="832146f2-ed86-4794-a676-13d3df8679ad" containerID="adfcea847d1d3b1d24924e3475b5896c38deb30ee4a5d291c5c286203fe560e7" exitCode=0
Mar 10 18:52:43 crc kubenswrapper[4861]: I0310 18:52:43.553646 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-65ndj" event={"ID":"832146f2-ed86-4794-a676-13d3df8679ad","Type":"ContainerDied","Data":"adfcea847d1d3b1d24924e3475b5896c38deb30ee4a5d291c5c286203fe560e7"}
Mar 10 18:52:43 crc kubenswrapper[4861]: I0310 18:52:43.553678 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-65ndj" event={"ID":"832146f2-ed86-4794-a676-13d3df8679ad","Type":"ContainerDied","Data":"c6283e834fa11becf9da9709e31afeb7540709bc3aa376835ef17bafa6d6c762"}
Mar 10 18:52:43 crc kubenswrapper[4861]: I0310 18:52:43.553698 4861 scope.go:117] "RemoveContainer" containerID="adfcea847d1d3b1d24924e3475b5896c38deb30ee4a5d291c5c286203fe560e7"
Mar 10 18:52:43 crc kubenswrapper[4861]: I0310 18:52:43.553816 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-65ndj"
Mar 10 18:52:43 crc kubenswrapper[4861]: I0310 18:52:43.569156 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-q2hgk" event={"ID":"876b4458-98a1-4dc2-af8a-3390a56cad59","Type":"ContainerStarted","Data":"627702c408d75e5dc7cd60f67e6f39ea4d670ac03cdc4b5c51af4c09bcc5555b"}
Mar 10 18:52:43 crc kubenswrapper[4861]: I0310 18:52:43.569435 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-57xpw" podUID="dad24f7d-329a-449e-bd62-372dfa9f838b" containerName="registry-server" containerID="cri-o://41a45217708f75f1b5f82e54dfdaea77a70c5484d019205b42438a21c98ee764" gracePeriod=2
Mar 10 18:52:43 crc kubenswrapper[4861]: I0310 18:52:43.587371 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-65ndj"]
Mar 10 18:52:43 crc kubenswrapper[4861]: I0310 18:52:43.587411 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-65ndj"]
Mar 10 18:52:43 crc kubenswrapper[4861]: I0310 18:52:43.593577 4861 scope.go:117] "RemoveContainer" containerID="887e9286d5288336ace6a87743bb54f4b258b2d0a92ad37c2df4469fe6886f23"
Mar 10 18:52:43 crc kubenswrapper[4861]: I0310 18:52:43.608467 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-q2hgk" podStartSLOduration=8.374672114 podStartE2EDuration="53.608458221s" podCreationTimestamp="2026-03-10 18:51:50 +0000 UTC" firstStartedPulling="2026-03-10 18:51:57.742835899 +0000 UTC m=+261.506271859" lastFinishedPulling="2026-03-10 18:52:42.976622006 +0000 UTC m=+306.740057966" observedRunningTime="2026-03-10 18:52:43.608273946 +0000 UTC m=+307.371709926" watchObservedRunningTime="2026-03-10 18:52:43.608458221 +0000 UTC m=+307.371894181"
Mar 10 18:52:43 crc kubenswrapper[4861]: I0310 18:52:43.614611 4861 scope.go:117] "RemoveContainer" containerID="736db02b83e159e02fca423b162d418b63d2312930ebcbb4562c8fda133b64e8"
Mar 10 18:52:43 crc kubenswrapper[4861]: I0310 18:52:43.637484 4861 scope.go:117] "RemoveContainer" containerID="adfcea847d1d3b1d24924e3475b5896c38deb30ee4a5d291c5c286203fe560e7"
Mar 10 18:52:43 crc kubenswrapper[4861]: E0310 18:52:43.640785 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"adfcea847d1d3b1d24924e3475b5896c38deb30ee4a5d291c5c286203fe560e7\": container with ID starting with adfcea847d1d3b1d24924e3475b5896c38deb30ee4a5d291c5c286203fe560e7 not found: ID does not exist" containerID="adfcea847d1d3b1d24924e3475b5896c38deb30ee4a5d291c5c286203fe560e7"
Mar 10 18:52:43 crc kubenswrapper[4861]: I0310 18:52:43.640816 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"adfcea847d1d3b1d24924e3475b5896c38deb30ee4a5d291c5c286203fe560e7"} err="failed to get container status \"adfcea847d1d3b1d24924e3475b5896c38deb30ee4a5d291c5c286203fe560e7\": rpc error: code = NotFound desc = could not find container \"adfcea847d1d3b1d24924e3475b5896c38deb30ee4a5d291c5c286203fe560e7\": container with ID starting with adfcea847d1d3b1d24924e3475b5896c38deb30ee4a5d291c5c286203fe560e7 not found: ID does not exist"
Mar 10 18:52:43 crc kubenswrapper[4861]: I0310 18:52:43.640838 4861 scope.go:117] "RemoveContainer" containerID="887e9286d5288336ace6a87743bb54f4b258b2d0a92ad37c2df4469fe6886f23"
Mar 10 18:52:43 crc kubenswrapper[4861]: E0310 18:52:43.641124 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"887e9286d5288336ace6a87743bb54f4b258b2d0a92ad37c2df4469fe6886f23\": container with ID starting with 887e9286d5288336ace6a87743bb54f4b258b2d0a92ad37c2df4469fe6886f23 not found: ID does not exist" containerID="887e9286d5288336ace6a87743bb54f4b258b2d0a92ad37c2df4469fe6886f23"
Mar 10 18:52:43 crc kubenswrapper[4861]: I0310 18:52:43.641159 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"887e9286d5288336ace6a87743bb54f4b258b2d0a92ad37c2df4469fe6886f23"} err="failed to get container status \"887e9286d5288336ace6a87743bb54f4b258b2d0a92ad37c2df4469fe6886f23\": rpc error: code = NotFound desc = could not find container \"887e9286d5288336ace6a87743bb54f4b258b2d0a92ad37c2df4469fe6886f23\": container with ID starting with 887e9286d5288336ace6a87743bb54f4b258b2d0a92ad37c2df4469fe6886f23 not found: ID does not exist"
Mar 10 18:52:43 crc kubenswrapper[4861]: I0310 18:52:43.641184 4861 scope.go:117] "RemoveContainer" containerID="736db02b83e159e02fca423b162d418b63d2312930ebcbb4562c8fda133b64e8"
Mar 10 18:52:43 crc kubenswrapper[4861]: E0310 18:52:43.641596 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"736db02b83e159e02fca423b162d418b63d2312930ebcbb4562c8fda133b64e8\": container with ID starting with 736db02b83e159e02fca423b162d418b63d2312930ebcbb4562c8fda133b64e8 not found: ID does not exist" containerID="736db02b83e159e02fca423b162d418b63d2312930ebcbb4562c8fda133b64e8"
Mar 10 18:52:43 crc kubenswrapper[4861]: I0310 18:52:43.641620 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"736db02b83e159e02fca423b162d418b63d2312930ebcbb4562c8fda133b64e8"} err="failed to get container status \"736db02b83e159e02fca423b162d418b63d2312930ebcbb4562c8fda133b64e8\": rpc error: code = NotFound desc = could not find container \"736db02b83e159e02fca423b162d418b63d2312930ebcbb4562c8fda133b64e8\": container with ID starting with 736db02b83e159e02fca423b162d418b63d2312930ebcbb4562c8fda133b64e8 not found: ID does not exist"
Mar 10 18:52:43 crc kubenswrapper[4861]: I0310 18:52:43.983634 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-57xpw"
Mar 10 18:52:44 crc kubenswrapper[4861]: I0310 18:52:44.111038 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dad24f7d-329a-449e-bd62-372dfa9f838b-utilities\") pod \"dad24f7d-329a-449e-bd62-372dfa9f838b\" (UID: \"dad24f7d-329a-449e-bd62-372dfa9f838b\") "
Mar 10 18:52:44 crc kubenswrapper[4861]: I0310 18:52:44.111113 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dad24f7d-329a-449e-bd62-372dfa9f838b-catalog-content\") pod \"dad24f7d-329a-449e-bd62-372dfa9f838b\" (UID: \"dad24f7d-329a-449e-bd62-372dfa9f838b\") "
Mar 10 18:52:44 crc kubenswrapper[4861]: I0310 18:52:44.111208 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v8z2w\" (UniqueName: \"kubernetes.io/projected/dad24f7d-329a-449e-bd62-372dfa9f838b-kube-api-access-v8z2w\") pod \"dad24f7d-329a-449e-bd62-372dfa9f838b\" (UID: \"dad24f7d-329a-449e-bd62-372dfa9f838b\") "
Mar 10 18:52:44 crc kubenswrapper[4861]: I0310 18:52:44.111845 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dad24f7d-329a-449e-bd62-372dfa9f838b-utilities" (OuterVolumeSpecName: "utilities") pod "dad24f7d-329a-449e-bd62-372dfa9f838b" (UID: "dad24f7d-329a-449e-bd62-372dfa9f838b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 10 18:52:44 crc kubenswrapper[4861]: I0310 18:52:44.116815 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dad24f7d-329a-449e-bd62-372dfa9f838b-kube-api-access-v8z2w" (OuterVolumeSpecName: "kube-api-access-v8z2w") pod "dad24f7d-329a-449e-bd62-372dfa9f838b" (UID: "dad24f7d-329a-449e-bd62-372dfa9f838b"). InnerVolumeSpecName "kube-api-access-v8z2w".
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 10 18:52:44 crc kubenswrapper[4861]: I0310 18:52:44.212220 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v8z2w\" (UniqueName: \"kubernetes.io/projected/dad24f7d-329a-449e-bd62-372dfa9f838b-kube-api-access-v8z2w\") on node \"crc\" DevicePath \"\"" Mar 10 18:52:44 crc kubenswrapper[4861]: I0310 18:52:44.212245 4861 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dad24f7d-329a-449e-bd62-372dfa9f838b-utilities\") on node \"crc\" DevicePath \"\"" Mar 10 18:52:44 crc kubenswrapper[4861]: I0310 18:52:44.270240 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dad24f7d-329a-449e-bd62-372dfa9f838b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "dad24f7d-329a-449e-bd62-372dfa9f838b" (UID: "dad24f7d-329a-449e-bd62-372dfa9f838b"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 10 18:52:44 crc kubenswrapper[4861]: I0310 18:52:44.317405 4861 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dad24f7d-329a-449e-bd62-372dfa9f838b-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 10 18:52:44 crc kubenswrapper[4861]: I0310 18:52:44.577843 4861 generic.go:334] "Generic (PLEG): container finished" podID="dad24f7d-329a-449e-bd62-372dfa9f838b" containerID="41a45217708f75f1b5f82e54dfdaea77a70c5484d019205b42438a21c98ee764" exitCode=0 Mar 10 18:52:44 crc kubenswrapper[4861]: I0310 18:52:44.577909 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-57xpw" Mar 10 18:52:44 crc kubenswrapper[4861]: I0310 18:52:44.577890 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-57xpw" event={"ID":"dad24f7d-329a-449e-bd62-372dfa9f838b","Type":"ContainerDied","Data":"41a45217708f75f1b5f82e54dfdaea77a70c5484d019205b42438a21c98ee764"} Mar 10 18:52:44 crc kubenswrapper[4861]: I0310 18:52:44.578025 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-57xpw" event={"ID":"dad24f7d-329a-449e-bd62-372dfa9f838b","Type":"ContainerDied","Data":"2ed5f404853ff54cd172ef92b43177c20a7992c4aad3519023e4072bd21dcdcb"} Mar 10 18:52:44 crc kubenswrapper[4861]: I0310 18:52:44.578043 4861 scope.go:117] "RemoveContainer" containerID="41a45217708f75f1b5f82e54dfdaea77a70c5484d019205b42438a21c98ee764" Mar 10 18:52:44 crc kubenswrapper[4861]: I0310 18:52:44.600296 4861 scope.go:117] "RemoveContainer" containerID="01c6ca795eaa1a0eda9d3799e5b1cea85e29974f7b7d343bcef23175adb5502f" Mar 10 18:52:44 crc kubenswrapper[4861]: I0310 18:52:44.604560 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-57xpw"] Mar 10 18:52:44 crc kubenswrapper[4861]: I0310 18:52:44.607766 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-57xpw"] Mar 10 18:52:44 crc kubenswrapper[4861]: I0310 18:52:44.623933 4861 scope.go:117] "RemoveContainer" containerID="79464397500f19c98b660079eff87202ff7bdd2a038c15d907b44b3bddcb9b49" Mar 10 18:52:44 crc kubenswrapper[4861]: I0310 18:52:44.654741 4861 scope.go:117] "RemoveContainer" containerID="41a45217708f75f1b5f82e54dfdaea77a70c5484d019205b42438a21c98ee764" Mar 10 18:52:44 crc kubenswrapper[4861]: E0310 18:52:44.657312 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"41a45217708f75f1b5f82e54dfdaea77a70c5484d019205b42438a21c98ee764\": container with ID starting with 41a45217708f75f1b5f82e54dfdaea77a70c5484d019205b42438a21c98ee764 not found: ID does not exist" containerID="41a45217708f75f1b5f82e54dfdaea77a70c5484d019205b42438a21c98ee764" Mar 10 18:52:44 crc kubenswrapper[4861]: I0310 18:52:44.657375 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"41a45217708f75f1b5f82e54dfdaea77a70c5484d019205b42438a21c98ee764"} err="failed to get container status \"41a45217708f75f1b5f82e54dfdaea77a70c5484d019205b42438a21c98ee764\": rpc error: code = NotFound desc = could not find container \"41a45217708f75f1b5f82e54dfdaea77a70c5484d019205b42438a21c98ee764\": container with ID starting with 41a45217708f75f1b5f82e54dfdaea77a70c5484d019205b42438a21c98ee764 not found: ID does not exist" Mar 10 18:52:44 crc kubenswrapper[4861]: I0310 18:52:44.657416 4861 scope.go:117] "RemoveContainer" containerID="01c6ca795eaa1a0eda9d3799e5b1cea85e29974f7b7d343bcef23175adb5502f" Mar 10 18:52:44 crc kubenswrapper[4861]: E0310 18:52:44.657942 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"01c6ca795eaa1a0eda9d3799e5b1cea85e29974f7b7d343bcef23175adb5502f\": container with ID starting with 01c6ca795eaa1a0eda9d3799e5b1cea85e29974f7b7d343bcef23175adb5502f not found: ID does not exist" containerID="01c6ca795eaa1a0eda9d3799e5b1cea85e29974f7b7d343bcef23175adb5502f" Mar 10 18:52:44 crc kubenswrapper[4861]: I0310 18:52:44.657990 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"01c6ca795eaa1a0eda9d3799e5b1cea85e29974f7b7d343bcef23175adb5502f"} err="failed to get container status \"01c6ca795eaa1a0eda9d3799e5b1cea85e29974f7b7d343bcef23175adb5502f\": rpc error: code = NotFound desc = could not find container \"01c6ca795eaa1a0eda9d3799e5b1cea85e29974f7b7d343bcef23175adb5502f\": container with ID 
starting with 01c6ca795eaa1a0eda9d3799e5b1cea85e29974f7b7d343bcef23175adb5502f not found: ID does not exist" Mar 10 18:52:44 crc kubenswrapper[4861]: I0310 18:52:44.658018 4861 scope.go:117] "RemoveContainer" containerID="79464397500f19c98b660079eff87202ff7bdd2a038c15d907b44b3bddcb9b49" Mar 10 18:52:44 crc kubenswrapper[4861]: E0310 18:52:44.658307 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"79464397500f19c98b660079eff87202ff7bdd2a038c15d907b44b3bddcb9b49\": container with ID starting with 79464397500f19c98b660079eff87202ff7bdd2a038c15d907b44b3bddcb9b49 not found: ID does not exist" containerID="79464397500f19c98b660079eff87202ff7bdd2a038c15d907b44b3bddcb9b49" Mar 10 18:52:44 crc kubenswrapper[4861]: I0310 18:52:44.658341 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"79464397500f19c98b660079eff87202ff7bdd2a038c15d907b44b3bddcb9b49"} err="failed to get container status \"79464397500f19c98b660079eff87202ff7bdd2a038c15d907b44b3bddcb9b49\": rpc error: code = NotFound desc = could not find container \"79464397500f19c98b660079eff87202ff7bdd2a038c15d907b44b3bddcb9b49\": container with ID starting with 79464397500f19c98b660079eff87202ff7bdd2a038c15d907b44b3bddcb9b49 not found: ID does not exist" Mar 10 18:52:44 crc kubenswrapper[4861]: I0310 18:52:44.966007 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="832146f2-ed86-4794-a676-13d3df8679ad" path="/var/lib/kubelet/pods/832146f2-ed86-4794-a676-13d3df8679ad/volumes" Mar 10 18:52:44 crc kubenswrapper[4861]: I0310 18:52:44.966754 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dad24f7d-329a-449e-bd62-372dfa9f838b" path="/var/lib/kubelet/pods/dad24f7d-329a-449e-bd62-372dfa9f838b/volumes" Mar 10 18:52:51 crc kubenswrapper[4861]: I0310 18:52:51.199741 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-marketplace/redhat-operators-q2hgk" Mar 10 18:52:51 crc kubenswrapper[4861]: I0310 18:52:51.200022 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-q2hgk" Mar 10 18:52:51 crc kubenswrapper[4861]: I0310 18:52:51.264158 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-q2hgk" Mar 10 18:52:51 crc kubenswrapper[4861]: I0310 18:52:51.662277 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-q2hgk" Mar 10 18:52:51 crc kubenswrapper[4861]: I0310 18:52:51.992539 4861 patch_prober.go:28] interesting pod/machine-config-daemon-qttbr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 10 18:52:51 crc kubenswrapper[4861]: I0310 18:52:51.992601 4861 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qttbr" podUID="771189c2-452d-4204-a0b7-abfe9ba62bd0" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 10 18:52:51 crc kubenswrapper[4861]: I0310 18:52:51.992645 4861 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-qttbr" Mar 10 18:52:51 crc kubenswrapper[4861]: I0310 18:52:51.993182 4861 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"11c0ae40f0d210a82350ba0ada7a3c9f35595826a4e6f3d5619230238d00111b"} pod="openshift-machine-config-operator/machine-config-daemon-qttbr" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" 
Mar 10 18:52:51 crc kubenswrapper[4861]: I0310 18:52:51.993237 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-qttbr" podUID="771189c2-452d-4204-a0b7-abfe9ba62bd0" containerName="machine-config-daemon" containerID="cri-o://11c0ae40f0d210a82350ba0ada7a3c9f35595826a4e6f3d5619230238d00111b" gracePeriod=600 Mar 10 18:52:52 crc kubenswrapper[4861]: I0310 18:52:52.615884 4861 generic.go:334] "Generic (PLEG): container finished" podID="771189c2-452d-4204-a0b7-abfe9ba62bd0" containerID="11c0ae40f0d210a82350ba0ada7a3c9f35595826a4e6f3d5619230238d00111b" exitCode=0 Mar 10 18:52:52 crc kubenswrapper[4861]: I0310 18:52:52.615987 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qttbr" event={"ID":"771189c2-452d-4204-a0b7-abfe9ba62bd0","Type":"ContainerDied","Data":"11c0ae40f0d210a82350ba0ada7a3c9f35595826a4e6f3d5619230238d00111b"} Mar 10 18:52:53 crc kubenswrapper[4861]: I0310 18:52:53.624969 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qttbr" event={"ID":"771189c2-452d-4204-a0b7-abfe9ba62bd0","Type":"ContainerStarted","Data":"0b39b5d2742de1636dbcd1b488c0dbde0e3638a67d061cd6ddececb58fdd6e42"} Mar 10 18:53:00 crc kubenswrapper[4861]: I0310 18:53:00.095195 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-864wb"] Mar 10 18:53:05 crc kubenswrapper[4861]: I0310 18:53:05.925193 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 10 18:53:05 crc kubenswrapper[4861]: I0310 18:53:05.927171 4861 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Mar 10 18:53:05 crc kubenswrapper[4861]: I0310 18:53:05.937929 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 10 18:53:06 crc kubenswrapper[4861]: I0310 18:53:06.026936 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 10 18:53:06 crc kubenswrapper[4861]: I0310 18:53:06.027054 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 10 18:53:06 crc kubenswrapper[4861]: I0310 18:53:06.027187 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 10 18:53:06 crc kubenswrapper[4861]: I0310 18:53:06.029984 4861 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-network-diagnostics"/"kube-root-ca.crt" Mar 10 18:53:06 crc kubenswrapper[4861]: I0310 18:53:06.029997 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Mar 10 18:53:06 crc kubenswrapper[4861]: I0310 18:53:06.039810 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Mar 10 18:53:06 crc kubenswrapper[4861]: I0310 18:53:06.046549 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 10 18:53:06 crc kubenswrapper[4861]: I0310 18:53:06.055018 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 10 18:53:06 crc kubenswrapper[4861]: I0310 18:53:06.055302 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 10 18:53:06 crc kubenswrapper[4861]: I0310 18:53:06.183450 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 10 18:53:06 crc kubenswrapper[4861]: I0310 18:53:06.191220 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 10 18:53:06 crc kubenswrapper[4861]: I0310 18:53:06.196460 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 10 18:53:06 crc kubenswrapper[4861]: I0310 18:53:06.712911 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"ba013aa6faf56f1c3ea11b3319f298d5294490c8d7a038afe85f6b11fd3d3692"} Mar 10 18:53:06 crc kubenswrapper[4861]: W0310 18:53:06.760645 4861 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9d751cbb_f2e2_430d_9754_c882a5e924a5.slice/crio-bc95d13a67f1305250473cad9cd8fe2ec0b7ea5d6a10166cdff5dd2821548bf1 WatchSource:0}: Error finding container bc95d13a67f1305250473cad9cd8fe2ec0b7ea5d6a10166cdff5dd2821548bf1: Status 404 returned error can't find the container with id bc95d13a67f1305250473cad9cd8fe2ec0b7ea5d6a10166cdff5dd2821548bf1 Mar 10 18:53:06 crc kubenswrapper[4861]: W0310 18:53:06.761084 4861 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3b6479f0_333b_4a96_9adf_2099afdc2447.slice/crio-e2d650d32532015eb0c89c6b0bafca57785fbb304901370b97914c2238fe367e WatchSource:0}: Error finding container e2d650d32532015eb0c89c6b0bafca57785fbb304901370b97914c2238fe367e: Status 404 returned error can't find the container with id e2d650d32532015eb0c89c6b0bafca57785fbb304901370b97914c2238fe367e Mar 10 18:53:06 crc kubenswrapper[4861]: I0310 
18:53:06.904948 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-786f4b86b8-cxghd"] Mar 10 18:53:06 crc kubenswrapper[4861]: I0310 18:53:06.905364 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-786f4b86b8-cxghd" podUID="d9381aeb-824f-4653-bb6a-09dd0c24c994" containerName="controller-manager" containerID="cri-o://92b259b59e3b028a0e4806dc81520a293854214bb4d3d62741d1ecf969c40cbf" gracePeriod=30 Mar 10 18:53:07 crc kubenswrapper[4861]: I0310 18:53:07.009215 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7fc878f999-fl45g"] Mar 10 18:53:07 crc kubenswrapper[4861]: I0310 18:53:07.009411 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-7fc878f999-fl45g" podUID="0c655cb9-a476-4e87-998b-d7d8e1cd8061" containerName="route-controller-manager" containerID="cri-o://f949f78dd884d304c660471cb6836e702038a744c5b26c84022b2db162999f76" gracePeriod=30 Mar 10 18:53:07 crc kubenswrapper[4861]: I0310 18:53:07.835300 4861 generic.go:334] "Generic (PLEG): container finished" podID="d9381aeb-824f-4653-bb6a-09dd0c24c994" containerID="92b259b59e3b028a0e4806dc81520a293854214bb4d3d62741d1ecf969c40cbf" exitCode=0 Mar 10 18:53:07 crc kubenswrapper[4861]: I0310 18:53:07.835359 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-786f4b86b8-cxghd" event={"ID":"d9381aeb-824f-4653-bb6a-09dd0c24c994","Type":"ContainerDied","Data":"92b259b59e3b028a0e4806dc81520a293854214bb4d3d62741d1ecf969c40cbf"} Mar 10 18:53:07 crc kubenswrapper[4861]: I0310 18:53:07.838132 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" 
event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"8fa8b8e865915c27f613d36238ca211ad0bb69f305c50c8ed45bc6a44a8ea5a0"} Mar 10 18:53:07 crc kubenswrapper[4861]: I0310 18:53:07.838157 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"e2d650d32532015eb0c89c6b0bafca57785fbb304901370b97914c2238fe367e"} Mar 10 18:53:07 crc kubenswrapper[4861]: I0310 18:53:07.838304 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 10 18:53:07 crc kubenswrapper[4861]: I0310 18:53:07.840134 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"72a313fe6f012a54535261e2dfd3cf4ca7ca312ecbeecee31c6a4df5872997a8"} Mar 10 18:53:07 crc kubenswrapper[4861]: I0310 18:53:07.841879 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"87b44286913e82c634f52661dc16eaa748eac30aea82053b473a1471a5588be8"} Mar 10 18:53:07 crc kubenswrapper[4861]: I0310 18:53:07.841911 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"bc95d13a67f1305250473cad9cd8fe2ec0b7ea5d6a10166cdff5dd2821548bf1"} Mar 10 18:53:07 crc kubenswrapper[4861]: I0310 18:53:07.844044 4861 generic.go:334] "Generic (PLEG): container finished" podID="0c655cb9-a476-4e87-998b-d7d8e1cd8061" containerID="f949f78dd884d304c660471cb6836e702038a744c5b26c84022b2db162999f76" exitCode=0 Mar 10 18:53:07 crc 
kubenswrapper[4861]: I0310 18:53:07.844094 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7fc878f999-fl45g" event={"ID":"0c655cb9-a476-4e87-998b-d7d8e1cd8061","Type":"ContainerDied","Data":"f949f78dd884d304c660471cb6836e702038a744c5b26c84022b2db162999f76"} Mar 10 18:53:07 crc kubenswrapper[4861]: I0310 18:53:07.900787 4861 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Mar 10 18:53:07 crc kubenswrapper[4861]: E0310 18:53:07.901218 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3db43f6e-38a4-4f5c-bb4b-ddac9e664528" containerName="extract-content" Mar 10 18:53:07 crc kubenswrapper[4861]: I0310 18:53:07.901230 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="3db43f6e-38a4-4f5c-bb4b-ddac9e664528" containerName="extract-content" Mar 10 18:53:07 crc kubenswrapper[4861]: E0310 18:53:07.901243 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3db43f6e-38a4-4f5c-bb4b-ddac9e664528" containerName="extract-utilities" Mar 10 18:53:07 crc kubenswrapper[4861]: I0310 18:53:07.901249 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="3db43f6e-38a4-4f5c-bb4b-ddac9e664528" containerName="extract-utilities" Mar 10 18:53:07 crc kubenswrapper[4861]: E0310 18:53:07.901258 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="832146f2-ed86-4794-a676-13d3df8679ad" containerName="extract-content" Mar 10 18:53:07 crc kubenswrapper[4861]: I0310 18:53:07.901265 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="832146f2-ed86-4794-a676-13d3df8679ad" containerName="extract-content" Mar 10 18:53:07 crc kubenswrapper[4861]: E0310 18:53:07.901274 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dad24f7d-329a-449e-bd62-372dfa9f838b" containerName="extract-content" Mar 10 18:53:07 crc kubenswrapper[4861]: I0310 18:53:07.901281 4861 state_mem.go:107] 
"Deleted CPUSet assignment" podUID="dad24f7d-329a-449e-bd62-372dfa9f838b" containerName="extract-content" Mar 10 18:53:07 crc kubenswrapper[4861]: E0310 18:53:07.901289 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dad24f7d-329a-449e-bd62-372dfa9f838b" containerName="extract-utilities" Mar 10 18:53:07 crc kubenswrapper[4861]: I0310 18:53:07.901296 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="dad24f7d-329a-449e-bd62-372dfa9f838b" containerName="extract-utilities" Mar 10 18:53:07 crc kubenswrapper[4861]: E0310 18:53:07.902015 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dad24f7d-329a-449e-bd62-372dfa9f838b" containerName="registry-server" Mar 10 18:53:07 crc kubenswrapper[4861]: I0310 18:53:07.902031 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="dad24f7d-329a-449e-bd62-372dfa9f838b" containerName="registry-server" Mar 10 18:53:07 crc kubenswrapper[4861]: E0310 18:53:07.902041 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="832146f2-ed86-4794-a676-13d3df8679ad" containerName="extract-utilities" Mar 10 18:53:07 crc kubenswrapper[4861]: I0310 18:53:07.902046 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="832146f2-ed86-4794-a676-13d3df8679ad" containerName="extract-utilities" Mar 10 18:53:07 crc kubenswrapper[4861]: E0310 18:53:07.902054 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1a9210c4-9579-4cfc-bf99-b652c3af6915" containerName="extract-utilities" Mar 10 18:53:07 crc kubenswrapper[4861]: I0310 18:53:07.902060 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="1a9210c4-9579-4cfc-bf99-b652c3af6915" containerName="extract-utilities" Mar 10 18:53:07 crc kubenswrapper[4861]: E0310 18:53:07.902069 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1a9210c4-9579-4cfc-bf99-b652c3af6915" containerName="registry-server" Mar 10 18:53:07 crc kubenswrapper[4861]: I0310 18:53:07.902076 4861 state_mem.go:107] 
"Deleted CPUSet assignment" podUID="1a9210c4-9579-4cfc-bf99-b652c3af6915" containerName="registry-server" Mar 10 18:53:07 crc kubenswrapper[4861]: E0310 18:53:07.902086 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1a9210c4-9579-4cfc-bf99-b652c3af6915" containerName="extract-content" Mar 10 18:53:07 crc kubenswrapper[4861]: I0310 18:53:07.902093 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="1a9210c4-9579-4cfc-bf99-b652c3af6915" containerName="extract-content" Mar 10 18:53:07 crc kubenswrapper[4861]: E0310 18:53:07.902103 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="832146f2-ed86-4794-a676-13d3df8679ad" containerName="registry-server" Mar 10 18:53:07 crc kubenswrapper[4861]: I0310 18:53:07.902108 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="832146f2-ed86-4794-a676-13d3df8679ad" containerName="registry-server" Mar 10 18:53:07 crc kubenswrapper[4861]: E0310 18:53:07.902117 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3db43f6e-38a4-4f5c-bb4b-ddac9e664528" containerName="registry-server" Mar 10 18:53:07 crc kubenswrapper[4861]: I0310 18:53:07.902122 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="3db43f6e-38a4-4f5c-bb4b-ddac9e664528" containerName="registry-server" Mar 10 18:53:07 crc kubenswrapper[4861]: I0310 18:53:07.902217 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="832146f2-ed86-4794-a676-13d3df8679ad" containerName="registry-server" Mar 10 18:53:07 crc kubenswrapper[4861]: I0310 18:53:07.902226 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="dad24f7d-329a-449e-bd62-372dfa9f838b" containerName="registry-server" Mar 10 18:53:07 crc kubenswrapper[4861]: I0310 18:53:07.902233 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="3db43f6e-38a4-4f5c-bb4b-ddac9e664528" containerName="registry-server" Mar 10 18:53:07 crc kubenswrapper[4861]: I0310 18:53:07.902243 4861 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="1a9210c4-9579-4cfc-bf99-b652c3af6915" containerName="registry-server" Mar 10 18:53:07 crc kubenswrapper[4861]: I0310 18:53:07.902622 4861 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Mar 10 18:53:07 crc kubenswrapper[4861]: I0310 18:53:07.902916 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 10 18:53:07 crc kubenswrapper[4861]: I0310 18:53:07.902930 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://44dde00a3ae562bbb5504d299475795cc38b22c2b6decba2ff15067bce7436df" gracePeriod=15 Mar 10 18:53:07 crc kubenswrapper[4861]: I0310 18:53:07.902869 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" containerID="cri-o://52e87225b434b0800764a5c2306d8079c44bff105d02de78ab085b434f56031f" gracePeriod=15 Mar 10 18:53:07 crc kubenswrapper[4861]: I0310 18:53:07.903057 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" containerID="cri-o://b2dcc6ee4908d27fc63eb33149c6db9570b8524aab71a27fbb1e5c2fe4e97c52" gracePeriod=15 Mar 10 18:53:07 crc kubenswrapper[4861]: I0310 18:53:07.903063 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://9a8a6f58ea1d180f50a7ffde2b16f470901281a847bd85cc0bc8a62bbf9f8e70" gracePeriod=15 
Mar 10 18:53:07 crc kubenswrapper[4861]: I0310 18:53:07.903096 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" containerID="cri-o://484df0ad2e71b2faec0ed53537512b115e6d4ab7cd3212cd29309538bd013c51" gracePeriod=15 Mar 10 18:53:07 crc kubenswrapper[4861]: I0310 18:53:07.904011 4861 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Mar 10 18:53:07 crc kubenswrapper[4861]: E0310 18:53:07.904160 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Mar 10 18:53:07 crc kubenswrapper[4861]: I0310 18:53:07.904172 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Mar 10 18:53:07 crc kubenswrapper[4861]: E0310 18:53:07.904185 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Mar 10 18:53:07 crc kubenswrapper[4861]: I0310 18:53:07.904193 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Mar 10 18:53:07 crc kubenswrapper[4861]: E0310 18:53:07.904201 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Mar 10 18:53:07 crc kubenswrapper[4861]: I0310 18:53:07.904210 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Mar 10 18:53:07 crc kubenswrapper[4861]: E0310 18:53:07.904218 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Mar 10 18:53:07 
crc kubenswrapper[4861]: I0310 18:53:07.904226 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Mar 10 18:53:07 crc kubenswrapper[4861]: E0310 18:53:07.904237 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Mar 10 18:53:07 crc kubenswrapper[4861]: I0310 18:53:07.904245 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Mar 10 18:53:07 crc kubenswrapper[4861]: E0310 18:53:07.904259 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Mar 10 18:53:07 crc kubenswrapper[4861]: I0310 18:53:07.904266 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Mar 10 18:53:07 crc kubenswrapper[4861]: E0310 18:53:07.904279 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Mar 10 18:53:07 crc kubenswrapper[4861]: I0310 18:53:07.904287 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Mar 10 18:53:07 crc kubenswrapper[4861]: E0310 18:53:07.904299 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Mar 10 18:53:07 crc kubenswrapper[4861]: I0310 18:53:07.904307 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Mar 10 18:53:07 crc kubenswrapper[4861]: I0310 18:53:07.904415 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" 
containerName="kube-apiserver-cert-syncer" Mar 10 18:53:07 crc kubenswrapper[4861]: I0310 18:53:07.904427 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Mar 10 18:53:07 crc kubenswrapper[4861]: I0310 18:53:07.904439 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Mar 10 18:53:07 crc kubenswrapper[4861]: I0310 18:53:07.904448 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Mar 10 18:53:07 crc kubenswrapper[4861]: I0310 18:53:07.904460 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Mar 10 18:53:07 crc kubenswrapper[4861]: I0310 18:53:07.904470 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Mar 10 18:53:07 crc kubenswrapper[4861]: I0310 18:53:07.904481 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Mar 10 18:53:07 crc kubenswrapper[4861]: I0310 18:53:07.904491 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Mar 10 18:53:07 crc kubenswrapper[4861]: E0310 18:53:07.904598 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Mar 10 18:53:07 crc kubenswrapper[4861]: I0310 18:53:07.904607 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Mar 10 18:53:07 crc kubenswrapper[4861]: 
E0310 18:53:07.904620 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Mar 10 18:53:07 crc kubenswrapper[4861]: I0310 18:53:07.904628 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Mar 10 18:53:07 crc kubenswrapper[4861]: I0310 18:53:07.904770 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Mar 10 18:53:07 crc kubenswrapper[4861]: I0310 18:53:07.941761 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Mar 10 18:53:08 crc kubenswrapper[4861]: I0310 18:53:08.005282 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 10 18:53:08 crc kubenswrapper[4861]: I0310 18:53:08.005333 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 10 18:53:08 crc kubenswrapper[4861]: I0310 18:53:08.005354 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " 
pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 10 18:53:08 crc kubenswrapper[4861]: I0310 18:53:08.005369 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 10 18:53:08 crc kubenswrapper[4861]: I0310 18:53:08.005393 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 10 18:53:08 crc kubenswrapper[4861]: I0310 18:53:08.005424 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 10 18:53:08 crc kubenswrapper[4861]: I0310 18:53:08.005453 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 10 18:53:08 crc kubenswrapper[4861]: I0310 18:53:08.005482 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: 
\"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 10 18:53:08 crc kubenswrapper[4861]: I0310 18:53:08.106344 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 10 18:53:08 crc kubenswrapper[4861]: I0310 18:53:08.106381 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 10 18:53:08 crc kubenswrapper[4861]: I0310 18:53:08.106402 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 10 18:53:08 crc kubenswrapper[4861]: I0310 18:53:08.106425 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 10 18:53:08 crc kubenswrapper[4861]: I0310 18:53:08.106450 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " 
pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 10 18:53:08 crc kubenswrapper[4861]: I0310 18:53:08.106460 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 10 18:53:08 crc kubenswrapper[4861]: I0310 18:53:08.106485 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 10 18:53:08 crc kubenswrapper[4861]: I0310 18:53:08.106519 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 10 18:53:08 crc kubenswrapper[4861]: I0310 18:53:08.106549 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 10 18:53:08 crc kubenswrapper[4861]: I0310 18:53:08.106579 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 10 18:53:08 crc kubenswrapper[4861]: I0310 18:53:08.106599 
4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 10 18:53:08 crc kubenswrapper[4861]: I0310 18:53:08.106617 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 10 18:53:08 crc kubenswrapper[4861]: I0310 18:53:08.106649 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 10 18:53:08 crc kubenswrapper[4861]: I0310 18:53:08.106749 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 10 18:53:08 crc kubenswrapper[4861]: I0310 18:53:08.106771 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 10 18:53:08 crc kubenswrapper[4861]: I0310 18:53:08.106550 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 10 18:53:08 crc kubenswrapper[4861]: I0310 18:53:08.143952 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-786f4b86b8-cxghd" Mar 10 18:53:08 crc kubenswrapper[4861]: I0310 18:53:08.144908 4861 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.51:6443: connect: connection refused" Mar 10 18:53:08 crc kubenswrapper[4861]: I0310 18:53:08.145389 4861 status_manager.go:851] "Failed to get status for pod" podUID="d9381aeb-824f-4653-bb6a-09dd0c24c994" pod="openshift-controller-manager/controller-manager-786f4b86b8-cxghd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-786f4b86b8-cxghd\": dial tcp 38.102.83.51:6443: connect: connection refused" Mar 10 18:53:08 crc kubenswrapper[4861]: I0310 18:53:08.145660 4861 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.51:6443: connect: connection refused" Mar 10 18:53:08 crc kubenswrapper[4861]: I0310 18:53:08.237615 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 10 18:53:08 crc kubenswrapper[4861]: W0310 18:53:08.259289 4861 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf85e55b1a89d02b0cb034b1ea31ed45a.slice/crio-1c602e9a79477f754aa174fa8ba18bdeab050222e4fd3efbf1ed1d06a6059027 WatchSource:0}: Error finding container 1c602e9a79477f754aa174fa8ba18bdeab050222e4fd3efbf1ed1d06a6059027: Status 404 returned error can't find the container with id 1c602e9a79477f754aa174fa8ba18bdeab050222e4fd3efbf1ed1d06a6059027 Mar 10 18:53:08 crc kubenswrapper[4861]: E0310 18:53:08.263356 4861 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.51:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.189b8f9e472d9cef openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-10 18:53:08.262198511 +0000 UTC m=+332.025634481,LastTimestamp:2026-03-10 18:53:08.262198511 +0000 UTC m=+332.025634481,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 10 18:53:08 crc kubenswrapper[4861]: I0310 18:53:08.311645 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" 
(UniqueName: \"kubernetes.io/secret/d9381aeb-824f-4653-bb6a-09dd0c24c994-serving-cert\") pod \"d9381aeb-824f-4653-bb6a-09dd0c24c994\" (UID: \"d9381aeb-824f-4653-bb6a-09dd0c24c994\") " Mar 10 18:53:08 crc kubenswrapper[4861]: I0310 18:53:08.312032 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d9381aeb-824f-4653-bb6a-09dd0c24c994-proxy-ca-bundles\") pod \"d9381aeb-824f-4653-bb6a-09dd0c24c994\" (UID: \"d9381aeb-824f-4653-bb6a-09dd0c24c994\") " Mar 10 18:53:08 crc kubenswrapper[4861]: I0310 18:53:08.312083 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d9381aeb-824f-4653-bb6a-09dd0c24c994-config\") pod \"d9381aeb-824f-4653-bb6a-09dd0c24c994\" (UID: \"d9381aeb-824f-4653-bb6a-09dd0c24c994\") " Mar 10 18:53:08 crc kubenswrapper[4861]: I0310 18:53:08.312127 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d9381aeb-824f-4653-bb6a-09dd0c24c994-client-ca\") pod \"d9381aeb-824f-4653-bb6a-09dd0c24c994\" (UID: \"d9381aeb-824f-4653-bb6a-09dd0c24c994\") " Mar 10 18:53:08 crc kubenswrapper[4861]: I0310 18:53:08.312188 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cgnrd\" (UniqueName: \"kubernetes.io/projected/d9381aeb-824f-4653-bb6a-09dd0c24c994-kube-api-access-cgnrd\") pod \"d9381aeb-824f-4653-bb6a-09dd0c24c994\" (UID: \"d9381aeb-824f-4653-bb6a-09dd0c24c994\") " Mar 10 18:53:08 crc kubenswrapper[4861]: I0310 18:53:08.312686 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d9381aeb-824f-4653-bb6a-09dd0c24c994-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "d9381aeb-824f-4653-bb6a-09dd0c24c994" (UID: "d9381aeb-824f-4653-bb6a-09dd0c24c994"). InnerVolumeSpecName "proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 10 18:53:08 crc kubenswrapper[4861]: I0310 18:53:08.312722 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d9381aeb-824f-4653-bb6a-09dd0c24c994-client-ca" (OuterVolumeSpecName: "client-ca") pod "d9381aeb-824f-4653-bb6a-09dd0c24c994" (UID: "d9381aeb-824f-4653-bb6a-09dd0c24c994"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 10 18:53:08 crc kubenswrapper[4861]: I0310 18:53:08.313008 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d9381aeb-824f-4653-bb6a-09dd0c24c994-config" (OuterVolumeSpecName: "config") pod "d9381aeb-824f-4653-bb6a-09dd0c24c994" (UID: "d9381aeb-824f-4653-bb6a-09dd0c24c994"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 10 18:53:08 crc kubenswrapper[4861]: I0310 18:53:08.315397 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d9381aeb-824f-4653-bb6a-09dd0c24c994-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "d9381aeb-824f-4653-bb6a-09dd0c24c994" (UID: "d9381aeb-824f-4653-bb6a-09dd0c24c994"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 10 18:53:08 crc kubenswrapper[4861]: I0310 18:53:08.316860 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d9381aeb-824f-4653-bb6a-09dd0c24c994-kube-api-access-cgnrd" (OuterVolumeSpecName: "kube-api-access-cgnrd") pod "d9381aeb-824f-4653-bb6a-09dd0c24c994" (UID: "d9381aeb-824f-4653-bb6a-09dd0c24c994"). InnerVolumeSpecName "kube-api-access-cgnrd". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 10 18:53:08 crc kubenswrapper[4861]: I0310 18:53:08.341989 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7fc878f999-fl45g" Mar 10 18:53:08 crc kubenswrapper[4861]: I0310 18:53:08.342358 4861 status_manager.go:851] "Failed to get status for pod" podUID="d9381aeb-824f-4653-bb6a-09dd0c24c994" pod="openshift-controller-manager/controller-manager-786f4b86b8-cxghd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-786f4b86b8-cxghd\": dial tcp 38.102.83.51:6443: connect: connection refused" Mar 10 18:53:08 crc kubenswrapper[4861]: I0310 18:53:08.342506 4861 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.51:6443: connect: connection refused" Mar 10 18:53:08 crc kubenswrapper[4861]: I0310 18:53:08.342673 4861 status_manager.go:851] "Failed to get status for pod" podUID="0c655cb9-a476-4e87-998b-d7d8e1cd8061" pod="openshift-route-controller-manager/route-controller-manager-7fc878f999-fl45g" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-7fc878f999-fl45g\": dial tcp 38.102.83.51:6443: connect: connection refused" Mar 10 18:53:08 crc kubenswrapper[4861]: I0310 18:53:08.342842 4861 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.51:6443: connect: connection refused" Mar 10 18:53:08 crc kubenswrapper[4861]: I0310 18:53:08.416240 4861 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: 
\"kubernetes.io/configmap/d9381aeb-824f-4653-bb6a-09dd0c24c994-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Mar 10 18:53:08 crc kubenswrapper[4861]: I0310 18:53:08.416299 4861 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d9381aeb-824f-4653-bb6a-09dd0c24c994-config\") on node \"crc\" DevicePath \"\"" Mar 10 18:53:08 crc kubenswrapper[4861]: I0310 18:53:08.416320 4861 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d9381aeb-824f-4653-bb6a-09dd0c24c994-client-ca\") on node \"crc\" DevicePath \"\"" Mar 10 18:53:08 crc kubenswrapper[4861]: I0310 18:53:08.416340 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cgnrd\" (UniqueName: \"kubernetes.io/projected/d9381aeb-824f-4653-bb6a-09dd0c24c994-kube-api-access-cgnrd\") on node \"crc\" DevicePath \"\"" Mar 10 18:53:08 crc kubenswrapper[4861]: I0310 18:53:08.416361 4861 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d9381aeb-824f-4653-bb6a-09dd0c24c994-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 10 18:53:08 crc kubenswrapper[4861]: I0310 18:53:08.517183 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0c655cb9-a476-4e87-998b-d7d8e1cd8061-client-ca\") pod \"0c655cb9-a476-4e87-998b-d7d8e1cd8061\" (UID: \"0c655cb9-a476-4e87-998b-d7d8e1cd8061\") " Mar 10 18:53:08 crc kubenswrapper[4861]: I0310 18:53:08.517280 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0c655cb9-a476-4e87-998b-d7d8e1cd8061-serving-cert\") pod \"0c655cb9-a476-4e87-998b-d7d8e1cd8061\" (UID: \"0c655cb9-a476-4e87-998b-d7d8e1cd8061\") " Mar 10 18:53:08 crc kubenswrapper[4861]: I0310 18:53:08.517395 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"kube-api-access-ctq4d\" (UniqueName: \"kubernetes.io/projected/0c655cb9-a476-4e87-998b-d7d8e1cd8061-kube-api-access-ctq4d\") pod \"0c655cb9-a476-4e87-998b-d7d8e1cd8061\" (UID: \"0c655cb9-a476-4e87-998b-d7d8e1cd8061\") " Mar 10 18:53:08 crc kubenswrapper[4861]: I0310 18:53:08.517449 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0c655cb9-a476-4e87-998b-d7d8e1cd8061-config\") pod \"0c655cb9-a476-4e87-998b-d7d8e1cd8061\" (UID: \"0c655cb9-a476-4e87-998b-d7d8e1cd8061\") " Mar 10 18:53:08 crc kubenswrapper[4861]: I0310 18:53:08.519590 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0c655cb9-a476-4e87-998b-d7d8e1cd8061-config" (OuterVolumeSpecName: "config") pod "0c655cb9-a476-4e87-998b-d7d8e1cd8061" (UID: "0c655cb9-a476-4e87-998b-d7d8e1cd8061"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 10 18:53:08 crc kubenswrapper[4861]: I0310 18:53:08.520481 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0c655cb9-a476-4e87-998b-d7d8e1cd8061-client-ca" (OuterVolumeSpecName: "client-ca") pod "0c655cb9-a476-4e87-998b-d7d8e1cd8061" (UID: "0c655cb9-a476-4e87-998b-d7d8e1cd8061"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 10 18:53:08 crc kubenswrapper[4861]: I0310 18:53:08.524552 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0c655cb9-a476-4e87-998b-d7d8e1cd8061-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0c655cb9-a476-4e87-998b-d7d8e1cd8061" (UID: "0c655cb9-a476-4e87-998b-d7d8e1cd8061"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 10 18:53:08 crc kubenswrapper[4861]: I0310 18:53:08.525644 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0c655cb9-a476-4e87-998b-d7d8e1cd8061-kube-api-access-ctq4d" (OuterVolumeSpecName: "kube-api-access-ctq4d") pod "0c655cb9-a476-4e87-998b-d7d8e1cd8061" (UID: "0c655cb9-a476-4e87-998b-d7d8e1cd8061"). InnerVolumeSpecName "kube-api-access-ctq4d". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 10 18:53:08 crc kubenswrapper[4861]: I0310 18:53:08.620362 4861 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0c655cb9-a476-4e87-998b-d7d8e1cd8061-client-ca\") on node \"crc\" DevicePath \"\"" Mar 10 18:53:08 crc kubenswrapper[4861]: I0310 18:53:08.620409 4861 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0c655cb9-a476-4e87-998b-d7d8e1cd8061-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 10 18:53:08 crc kubenswrapper[4861]: I0310 18:53:08.620430 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ctq4d\" (UniqueName: \"kubernetes.io/projected/0c655cb9-a476-4e87-998b-d7d8e1cd8061-kube-api-access-ctq4d\") on node \"crc\" DevicePath \"\"" Mar 10 18:53:08 crc kubenswrapper[4861]: I0310 18:53:08.620452 4861 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0c655cb9-a476-4e87-998b-d7d8e1cd8061-config\") on node \"crc\" DevicePath \"\"" Mar 10 18:53:08 crc kubenswrapper[4861]: I0310 18:53:08.854642 4861 generic.go:334] "Generic (PLEG): container finished" podID="bcc3a2e0-9115-453c-8af2-6001abd4012f" containerID="570ba78bbfb3c52b41763672697fc6adb6a476c30304659bb8719b05bcaf5522" exitCode=0 Mar 10 18:53:08 crc kubenswrapper[4861]: I0310 18:53:08.854772 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" 
event={"ID":"bcc3a2e0-9115-453c-8af2-6001abd4012f","Type":"ContainerDied","Data":"570ba78bbfb3c52b41763672697fc6adb6a476c30304659bb8719b05bcaf5522"} Mar 10 18:53:08 crc kubenswrapper[4861]: I0310 18:53:08.855653 4861 status_manager.go:851] "Failed to get status for pod" podUID="0c655cb9-a476-4e87-998b-d7d8e1cd8061" pod="openshift-route-controller-manager/route-controller-manager-7fc878f999-fl45g" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-7fc878f999-fl45g\": dial tcp 38.102.83.51:6443: connect: connection refused" Mar 10 18:53:08 crc kubenswrapper[4861]: I0310 18:53:08.856062 4861 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.51:6443: connect: connection refused" Mar 10 18:53:08 crc kubenswrapper[4861]: I0310 18:53:08.856395 4861 status_manager.go:851] "Failed to get status for pod" podUID="bcc3a2e0-9115-453c-8af2-6001abd4012f" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.51:6443: connect: connection refused" Mar 10 18:53:08 crc kubenswrapper[4861]: I0310 18:53:08.857003 4861 status_manager.go:851] "Failed to get status for pod" podUID="d9381aeb-824f-4653-bb6a-09dd0c24c994" pod="openshift-controller-manager/controller-manager-786f4b86b8-cxghd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-786f4b86b8-cxghd\": dial tcp 38.102.83.51:6443: connect: connection refused" Mar 10 18:53:08 crc kubenswrapper[4861]: I0310 18:53:08.857621 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7fc878f999-fl45g" Mar 10 18:53:08 crc kubenswrapper[4861]: I0310 18:53:08.857731 4861 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.51:6443: connect: connection refused" Mar 10 18:53:08 crc kubenswrapper[4861]: I0310 18:53:08.857633 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7fc878f999-fl45g" event={"ID":"0c655cb9-a476-4e87-998b-d7d8e1cd8061","Type":"ContainerDied","Data":"6ad1303b3b4fb122f9c83f917baac4078a80dda70563706f25d7e534e2803cc9"} Mar 10 18:53:08 crc kubenswrapper[4861]: I0310 18:53:08.857854 4861 scope.go:117] "RemoveContainer" containerID="f949f78dd884d304c660471cb6836e702038a744c5b26c84022b2db162999f76" Mar 10 18:53:08 crc kubenswrapper[4861]: I0310 18:53:08.858785 4861 status_manager.go:851] "Failed to get status for pod" podUID="bcc3a2e0-9115-453c-8af2-6001abd4012f" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.51:6443: connect: connection refused" Mar 10 18:53:08 crc kubenswrapper[4861]: I0310 18:53:08.859305 4861 status_manager.go:851] "Failed to get status for pod" podUID="d9381aeb-824f-4653-bb6a-09dd0c24c994" pod="openshift-controller-manager/controller-manager-786f4b86b8-cxghd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-786f4b86b8-cxghd\": dial tcp 38.102.83.51:6443: connect: connection refused" Mar 10 18:53:08 crc kubenswrapper[4861]: I0310 18:53:08.859636 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"fcf03bd37092f8d6b82dc0a9555d3f0727a5c34136ec8cc4a4dfe81f4e42c8cf"} Mar 10 18:53:08 crc kubenswrapper[4861]: I0310 18:53:08.859690 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"1c602e9a79477f754aa174fa8ba18bdeab050222e4fd3efbf1ed1d06a6059027"} Mar 10 18:53:08 crc kubenswrapper[4861]: I0310 18:53:08.859705 4861 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.51:6443: connect: connection refused" Mar 10 18:53:08 crc kubenswrapper[4861]: I0310 18:53:08.860296 4861 status_manager.go:851] "Failed to get status for pod" podUID="0c655cb9-a476-4e87-998b-d7d8e1cd8061" pod="openshift-route-controller-manager/route-controller-manager-7fc878f999-fl45g" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-7fc878f999-fl45g\": dial tcp 38.102.83.51:6443: connect: connection refused" Mar 10 18:53:08 crc kubenswrapper[4861]: I0310 18:53:08.860803 4861 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.51:6443: connect: connection refused" Mar 10 18:53:08 crc kubenswrapper[4861]: I0310 18:53:08.862569 4861 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" 
pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.51:6443: connect: connection refused" Mar 10 18:53:08 crc kubenswrapper[4861]: I0310 18:53:08.863069 4861 status_manager.go:851] "Failed to get status for pod" podUID="bcc3a2e0-9115-453c-8af2-6001abd4012f" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.51:6443: connect: connection refused" Mar 10 18:53:08 crc kubenswrapper[4861]: I0310 18:53:08.863439 4861 status_manager.go:851] "Failed to get status for pod" podUID="d9381aeb-824f-4653-bb6a-09dd0c24c994" pod="openshift-controller-manager/controller-manager-786f4b86b8-cxghd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-786f4b86b8-cxghd\": dial tcp 38.102.83.51:6443: connect: connection refused" Mar 10 18:53:08 crc kubenswrapper[4861]: I0310 18:53:08.864115 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-786f4b86b8-cxghd" event={"ID":"d9381aeb-824f-4653-bb6a-09dd0c24c994","Type":"ContainerDied","Data":"b4495d6686328a10f86bbea8640f45c6bd2dab3cebf4c82d2f39aa751c4d9d7c"} Mar 10 18:53:08 crc kubenswrapper[4861]: I0310 18:53:08.864225 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-786f4b86b8-cxghd" Mar 10 18:53:08 crc kubenswrapper[4861]: I0310 18:53:08.869087 4861 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.51:6443: connect: connection refused" Mar 10 18:53:08 crc kubenswrapper[4861]: I0310 18:53:08.869798 4861 status_manager.go:851] "Failed to get status for pod" podUID="0c655cb9-a476-4e87-998b-d7d8e1cd8061" pod="openshift-route-controller-manager/route-controller-manager-7fc878f999-fl45g" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-7fc878f999-fl45g\": dial tcp 38.102.83.51:6443: connect: connection refused" Mar 10 18:53:08 crc kubenswrapper[4861]: I0310 18:53:08.870375 4861 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.51:6443: connect: connection refused" Mar 10 18:53:08 crc kubenswrapper[4861]: I0310 18:53:08.871101 4861 status_manager.go:851] "Failed to get status for pod" podUID="bcc3a2e0-9115-453c-8af2-6001abd4012f" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.51:6443: connect: connection refused" Mar 10 18:53:08 crc kubenswrapper[4861]: I0310 18:53:08.871622 4861 status_manager.go:851] "Failed to get status for pod" podUID="d9381aeb-824f-4653-bb6a-09dd0c24c994" pod="openshift-controller-manager/controller-manager-786f4b86b8-cxghd" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-786f4b86b8-cxghd\": dial tcp 38.102.83.51:6443: connect: connection refused" Mar 10 18:53:08 crc kubenswrapper[4861]: I0310 18:53:08.872144 4861 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.51:6443: connect: connection refused" Mar 10 18:53:08 crc kubenswrapper[4861]: I0310 18:53:08.872659 4861 status_manager.go:851] "Failed to get status for pod" podUID="0c655cb9-a476-4e87-998b-d7d8e1cd8061" pod="openshift-route-controller-manager/route-controller-manager-7fc878f999-fl45g" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-7fc878f999-fl45g\": dial tcp 38.102.83.51:6443: connect: connection refused" Mar 10 18:53:08 crc kubenswrapper[4861]: I0310 18:53:08.873446 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/3.log" Mar 10 18:53:08 crc kubenswrapper[4861]: I0310 18:53:08.875625 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Mar 10 18:53:08 crc kubenswrapper[4861]: I0310 18:53:08.876813 4861 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="b2dcc6ee4908d27fc63eb33149c6db9570b8524aab71a27fbb1e5c2fe4e97c52" exitCode=0 Mar 10 18:53:08 crc kubenswrapper[4861]: I0310 18:53:08.876849 4861 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="44dde00a3ae562bbb5504d299475795cc38b22c2b6decba2ff15067bce7436df" 
exitCode=0 Mar 10 18:53:08 crc kubenswrapper[4861]: I0310 18:53:08.876864 4861 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="9a8a6f58ea1d180f50a7ffde2b16f470901281a847bd85cc0bc8a62bbf9f8e70" exitCode=0 Mar 10 18:53:08 crc kubenswrapper[4861]: I0310 18:53:08.876882 4861 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="484df0ad2e71b2faec0ed53537512b115e6d4ab7cd3212cd29309538bd013c51" exitCode=2 Mar 10 18:53:08 crc kubenswrapper[4861]: I0310 18:53:08.881494 4861 scope.go:117] "RemoveContainer" containerID="92b259b59e3b028a0e4806dc81520a293854214bb4d3d62741d1ecf969c40cbf" Mar 10 18:53:08 crc kubenswrapper[4861]: I0310 18:53:08.894073 4861 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.51:6443: connect: connection refused" Mar 10 18:53:08 crc kubenswrapper[4861]: I0310 18:53:08.894556 4861 status_manager.go:851] "Failed to get status for pod" podUID="bcc3a2e0-9115-453c-8af2-6001abd4012f" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.51:6443: connect: connection refused" Mar 10 18:53:08 crc kubenswrapper[4861]: I0310 18:53:08.894924 4861 status_manager.go:851] "Failed to get status for pod" podUID="d9381aeb-824f-4653-bb6a-09dd0c24c994" pod="openshift-controller-manager/controller-manager-786f4b86b8-cxghd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-786f4b86b8-cxghd\": dial tcp 38.102.83.51:6443: connect: connection refused" Mar 10 18:53:08 crc kubenswrapper[4861]: I0310 18:53:08.895366 4861 
status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.51:6443: connect: connection refused" Mar 10 18:53:08 crc kubenswrapper[4861]: I0310 18:53:08.896868 4861 status_manager.go:851] "Failed to get status for pod" podUID="0c655cb9-a476-4e87-998b-d7d8e1cd8061" pod="openshift-route-controller-manager/route-controller-manager-7fc878f999-fl45g" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-7fc878f999-fl45g\": dial tcp 38.102.83.51:6443: connect: connection refused" Mar 10 18:53:08 crc kubenswrapper[4861]: I0310 18:53:08.897338 4861 status_manager.go:851] "Failed to get status for pod" podUID="bcc3a2e0-9115-453c-8af2-6001abd4012f" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.51:6443: connect: connection refused" Mar 10 18:53:08 crc kubenswrapper[4861]: I0310 18:53:08.897679 4861 status_manager.go:851] "Failed to get status for pod" podUID="d9381aeb-824f-4653-bb6a-09dd0c24c994" pod="openshift-controller-manager/controller-manager-786f4b86b8-cxghd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-786f4b86b8-cxghd\": dial tcp 38.102.83.51:6443: connect: connection refused" Mar 10 18:53:08 crc kubenswrapper[4861]: I0310 18:53:08.898093 4861 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.51:6443: connect: connection refused" Mar 10 18:53:08 crc 
kubenswrapper[4861]: I0310 18:53:08.898430 4861 status_manager.go:851] "Failed to get status for pod" podUID="0c655cb9-a476-4e87-998b-d7d8e1cd8061" pod="openshift-route-controller-manager/route-controller-manager-7fc878f999-fl45g" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-7fc878f999-fl45g\": dial tcp 38.102.83.51:6443: connect: connection refused" Mar 10 18:53:08 crc kubenswrapper[4861]: I0310 18:53:08.898806 4861 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.51:6443: connect: connection refused" Mar 10 18:53:08 crc kubenswrapper[4861]: I0310 18:53:08.903102 4861 scope.go:117] "RemoveContainer" containerID="1665ca49c2c451e187b70bfc13ce0034d2c07b92943b18e77ae09cd6e5505557" Mar 10 18:53:09 crc kubenswrapper[4861]: E0310 18:53:09.430746 4861 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.51:6443: connect: connection refused" Mar 10 18:53:09 crc kubenswrapper[4861]: E0310 18:53:09.431547 4861 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.51:6443: connect: connection refused" Mar 10 18:53:09 crc kubenswrapper[4861]: E0310 18:53:09.432075 4861 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.51:6443: connect: connection refused" Mar 10 18:53:09 crc kubenswrapper[4861]: E0310 18:53:09.432507 4861 controller.go:195] 
"Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.51:6443: connect: connection refused" Mar 10 18:53:09 crc kubenswrapper[4861]: E0310 18:53:09.432986 4861 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.51:6443: connect: connection refused" Mar 10 18:53:09 crc kubenswrapper[4861]: I0310 18:53:09.433048 4861 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Mar 10 18:53:09 crc kubenswrapper[4861]: E0310 18:53:09.433737 4861 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.51:6443: connect: connection refused" interval="200ms" Mar 10 18:53:09 crc kubenswrapper[4861]: E0310 18:53:09.635108 4861 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.51:6443: connect: connection refused" interval="400ms" Mar 10 18:53:09 crc kubenswrapper[4861]: I0310 18:53:09.890944 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Mar 10 18:53:10 crc kubenswrapper[4861]: E0310 18:53:10.038539 4861 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.51:6443: connect: connection refused" interval="800ms" Mar 10 18:53:10 crc kubenswrapper[4861]: I0310 18:53:10.681526 4861 
log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Mar 10 18:53:10 crc kubenswrapper[4861]: I0310 18:53:10.683136 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 10 18:53:10 crc kubenswrapper[4861]: I0310 18:53:10.683277 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Mar 10 18:53:10 crc kubenswrapper[4861]: I0310 18:53:10.683852 4861 status_manager.go:851] "Failed to get status for pod" podUID="d9381aeb-824f-4653-bb6a-09dd0c24c994" pod="openshift-controller-manager/controller-manager-786f4b86b8-cxghd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-786f4b86b8-cxghd\": dial tcp 38.102.83.51:6443: connect: connection refused" Mar 10 18:53:10 crc kubenswrapper[4861]: I0310 18:53:10.684321 4861 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.51:6443: connect: connection refused" Mar 10 18:53:10 crc kubenswrapper[4861]: I0310 18:53:10.685115 4861 status_manager.go:851] "Failed to get status for pod" podUID="0c655cb9-a476-4e87-998b-d7d8e1cd8061" pod="openshift-route-controller-manager/route-controller-manager-7fc878f999-fl45g" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-7fc878f999-fl45g\": dial tcp 38.102.83.51:6443: connect: connection refused" Mar 10 18:53:10 crc kubenswrapper[4861]: I0310 18:53:10.685564 4861 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" 
pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.51:6443: connect: connection refused" Mar 10 18:53:10 crc kubenswrapper[4861]: I0310 18:53:10.686039 4861 status_manager.go:851] "Failed to get status for pod" podUID="bcc3a2e0-9115-453c-8af2-6001abd4012f" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.51:6443: connect: connection refused" Mar 10 18:53:10 crc kubenswrapper[4861]: I0310 18:53:10.686554 4861 status_manager.go:851] "Failed to get status for pod" podUID="d9381aeb-824f-4653-bb6a-09dd0c24c994" pod="openshift-controller-manager/controller-manager-786f4b86b8-cxghd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-786f4b86b8-cxghd\": dial tcp 38.102.83.51:6443: connect: connection refused" Mar 10 18:53:10 crc kubenswrapper[4861]: I0310 18:53:10.687037 4861 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.51:6443: connect: connection refused" Mar 10 18:53:10 crc kubenswrapper[4861]: I0310 18:53:10.687485 4861 status_manager.go:851] "Failed to get status for pod" podUID="0c655cb9-a476-4e87-998b-d7d8e1cd8061" pod="openshift-route-controller-manager/route-controller-manager-7fc878f999-fl45g" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-7fc878f999-fl45g\": dial tcp 38.102.83.51:6443: connect: connection refused" Mar 10 18:53:10 crc kubenswrapper[4861]: I0310 18:53:10.687973 4861 status_manager.go:851] 
"Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.51:6443: connect: connection refused" Mar 10 18:53:10 crc kubenswrapper[4861]: I0310 18:53:10.688325 4861 status_manager.go:851] "Failed to get status for pod" podUID="bcc3a2e0-9115-453c-8af2-6001abd4012f" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.51:6443: connect: connection refused" Mar 10 18:53:10 crc kubenswrapper[4861]: I0310 18:53:10.800613 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/bcc3a2e0-9115-453c-8af2-6001abd4012f-kubelet-dir\") pod \"bcc3a2e0-9115-453c-8af2-6001abd4012f\" (UID: \"bcc3a2e0-9115-453c-8af2-6001abd4012f\") " Mar 10 18:53:10 crc kubenswrapper[4861]: I0310 18:53:10.800764 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/bcc3a2e0-9115-453c-8af2-6001abd4012f-var-lock\") pod \"bcc3a2e0-9115-453c-8af2-6001abd4012f\" (UID: \"bcc3a2e0-9115-453c-8af2-6001abd4012f\") " Mar 10 18:53:10 crc kubenswrapper[4861]: I0310 18:53:10.800749 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bcc3a2e0-9115-453c-8af2-6001abd4012f-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "bcc3a2e0-9115-453c-8af2-6001abd4012f" (UID: "bcc3a2e0-9115-453c-8af2-6001abd4012f"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 10 18:53:10 crc kubenswrapper[4861]: I0310 18:53:10.800794 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Mar 10 18:53:10 crc kubenswrapper[4861]: I0310 18:53:10.800832 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bcc3a2e0-9115-453c-8af2-6001abd4012f-var-lock" (OuterVolumeSpecName: "var-lock") pod "bcc3a2e0-9115-453c-8af2-6001abd4012f" (UID: "bcc3a2e0-9115-453c-8af2-6001abd4012f"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 10 18:53:10 crc kubenswrapper[4861]: I0310 18:53:10.800887 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/bcc3a2e0-9115-453c-8af2-6001abd4012f-kube-api-access\") pod \"bcc3a2e0-9115-453c-8af2-6001abd4012f\" (UID: \"bcc3a2e0-9115-453c-8af2-6001abd4012f\") " Mar 10 18:53:10 crc kubenswrapper[4861]: I0310 18:53:10.800911 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "audit-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 10 18:53:10 crc kubenswrapper[4861]: I0310 18:53:10.800943 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Mar 10 18:53:10 crc kubenswrapper[4861]: I0310 18:53:10.800971 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Mar 10 18:53:10 crc kubenswrapper[4861]: I0310 18:53:10.801090 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 10 18:53:10 crc kubenswrapper[4861]: I0310 18:53:10.801150 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 10 18:53:10 crc kubenswrapper[4861]: I0310 18:53:10.801486 4861 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/bcc3a2e0-9115-453c-8af2-6001abd4012f-var-lock\") on node \"crc\" DevicePath \"\"" Mar 10 18:53:10 crc kubenswrapper[4861]: I0310 18:53:10.801514 4861 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") on node \"crc\" DevicePath \"\"" Mar 10 18:53:10 crc kubenswrapper[4861]: I0310 18:53:10.801533 4861 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") on node \"crc\" DevicePath \"\"" Mar 10 18:53:10 crc kubenswrapper[4861]: I0310 18:53:10.801551 4861 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") on node \"crc\" DevicePath \"\"" Mar 10 18:53:10 crc kubenswrapper[4861]: I0310 18:53:10.801569 4861 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/bcc3a2e0-9115-453c-8af2-6001abd4012f-kubelet-dir\") on node \"crc\" DevicePath \"\"" Mar 10 18:53:10 crc kubenswrapper[4861]: I0310 18:53:10.808697 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bcc3a2e0-9115-453c-8af2-6001abd4012f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "bcc3a2e0-9115-453c-8af2-6001abd4012f" (UID: "bcc3a2e0-9115-453c-8af2-6001abd4012f"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 10 18:53:10 crc kubenswrapper[4861]: E0310 18:53:10.840248 4861 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.51:6443: connect: connection refused" interval="1.6s" Mar 10 18:53:10 crc kubenswrapper[4861]: E0310 18:53:10.865922 4861 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.51:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.189b8f9e472d9cef openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-10 18:53:08.262198511 +0000 UTC m=+332.025634481,LastTimestamp:2026-03-10 18:53:08.262198511 +0000 UTC m=+332.025634481,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 10 18:53:10 crc kubenswrapper[4861]: I0310 18:53:10.902215 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Mar 10 18:53:10 crc kubenswrapper[4861]: I0310 18:53:10.902370 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/bcc3a2e0-9115-453c-8af2-6001abd4012f-kube-api-access\") on node \"crc\" DevicePath \"\"" Mar 10 18:53:10 crc kubenswrapper[4861]: I0310 18:53:10.903219 4861 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="52e87225b434b0800764a5c2306d8079c44bff105d02de78ab085b434f56031f" exitCode=0 Mar 10 18:53:10 crc kubenswrapper[4861]: I0310 18:53:10.903269 4861 scope.go:117] "RemoveContainer" containerID="b2dcc6ee4908d27fc63eb33149c6db9570b8524aab71a27fbb1e5c2fe4e97c52" Mar 10 18:53:10 crc kubenswrapper[4861]: I0310 18:53:10.903362 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 10 18:53:10 crc kubenswrapper[4861]: I0310 18:53:10.905460 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"bcc3a2e0-9115-453c-8af2-6001abd4012f","Type":"ContainerDied","Data":"da3acba60fd052214de8d1d7c6e6b962a5a581dd5c7404afeae027ac4f320333"} Mar 10 18:53:10 crc kubenswrapper[4861]: I0310 18:53:10.905488 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Mar 10 18:53:10 crc kubenswrapper[4861]: I0310 18:53:10.905490 4861 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="da3acba60fd052214de8d1d7c6e6b962a5a581dd5c7404afeae027ac4f320333" Mar 10 18:53:10 crc kubenswrapper[4861]: I0310 18:53:10.907489 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-diagnostics_network-check-source-55646444c4-trplf_9d751cbb-f2e2-430d-9754-c882a5e924a5/check-endpoints/0.log" Mar 10 18:53:10 crc kubenswrapper[4861]: I0310 18:53:10.907522 4861 generic.go:334] "Generic (PLEG): container finished" podID="9d751cbb-f2e2-430d-9754-c882a5e924a5" containerID="87b44286913e82c634f52661dc16eaa748eac30aea82053b473a1471a5588be8" exitCode=255 Mar 10 18:53:10 crc kubenswrapper[4861]: I0310 18:53:10.907541 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerDied","Data":"87b44286913e82c634f52661dc16eaa748eac30aea82053b473a1471a5588be8"} Mar 10 18:53:10 crc kubenswrapper[4861]: I0310 18:53:10.907851 4861 scope.go:117] "RemoveContainer" containerID="87b44286913e82c634f52661dc16eaa748eac30aea82053b473a1471a5588be8" Mar 10 18:53:10 crc kubenswrapper[4861]: I0310 18:53:10.908201 4861 status_manager.go:851] "Failed to get status for pod" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-diagnostics/pods/network-check-source-55646444c4-trplf\": dial tcp 38.102.83.51:6443: connect: connection refused" Mar 10 18:53:10 crc kubenswrapper[4861]: I0310 18:53:10.908618 4861 status_manager.go:851] "Failed to get status for pod" podUID="d9381aeb-824f-4653-bb6a-09dd0c24c994" pod="openshift-controller-manager/controller-manager-786f4b86b8-cxghd" 
err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-786f4b86b8-cxghd\": dial tcp 38.102.83.51:6443: connect: connection refused" Mar 10 18:53:10 crc kubenswrapper[4861]: I0310 18:53:10.908808 4861 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.51:6443: connect: connection refused" Mar 10 18:53:10 crc kubenswrapper[4861]: I0310 18:53:10.908956 4861 status_manager.go:851] "Failed to get status for pod" podUID="0c655cb9-a476-4e87-998b-d7d8e1cd8061" pod="openshift-route-controller-manager/route-controller-manager-7fc878f999-fl45g" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-7fc878f999-fl45g\": dial tcp 38.102.83.51:6443: connect: connection refused" Mar 10 18:53:10 crc kubenswrapper[4861]: I0310 18:53:10.909102 4861 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.51:6443: connect: connection refused" Mar 10 18:53:10 crc kubenswrapper[4861]: I0310 18:53:10.909236 4861 status_manager.go:851] "Failed to get status for pod" podUID="bcc3a2e0-9115-453c-8af2-6001abd4012f" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.51:6443: connect: connection refused" Mar 10 18:53:10 crc kubenswrapper[4861]: I0310 18:53:10.922569 4861 status_manager.go:851] "Failed to get status for pod" podUID="0c655cb9-a476-4e87-998b-d7d8e1cd8061" 
pod="openshift-route-controller-manager/route-controller-manager-7fc878f999-fl45g" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-7fc878f999-fl45g\": dial tcp 38.102.83.51:6443: connect: connection refused" Mar 10 18:53:10 crc kubenswrapper[4861]: I0310 18:53:10.923241 4861 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.51:6443: connect: connection refused" Mar 10 18:53:10 crc kubenswrapper[4861]: I0310 18:53:10.924103 4861 status_manager.go:851] "Failed to get status for pod" podUID="bcc3a2e0-9115-453c-8af2-6001abd4012f" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.51:6443: connect: connection refused" Mar 10 18:53:10 crc kubenswrapper[4861]: I0310 18:53:10.924511 4861 status_manager.go:851] "Failed to get status for pod" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-diagnostics/pods/network-check-source-55646444c4-trplf\": dial tcp 38.102.83.51:6443: connect: connection refused" Mar 10 18:53:10 crc kubenswrapper[4861]: I0310 18:53:10.925362 4861 status_manager.go:851] "Failed to get status for pod" podUID="d9381aeb-824f-4653-bb6a-09dd0c24c994" pod="openshift-controller-manager/controller-manager-786f4b86b8-cxghd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-786f4b86b8-cxghd\": dial tcp 38.102.83.51:6443: connect: connection refused" Mar 10 18:53:10 crc kubenswrapper[4861]: 
I0310 18:53:10.925629 4861 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.51:6443: connect: connection refused" Mar 10 18:53:10 crc kubenswrapper[4861]: I0310 18:53:10.926681 4861 status_manager.go:851] "Failed to get status for pod" podUID="d9381aeb-824f-4653-bb6a-09dd0c24c994" pod="openshift-controller-manager/controller-manager-786f4b86b8-cxghd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-786f4b86b8-cxghd\": dial tcp 38.102.83.51:6443: connect: connection refused" Mar 10 18:53:10 crc kubenswrapper[4861]: I0310 18:53:10.927005 4861 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.51:6443: connect: connection refused" Mar 10 18:53:10 crc kubenswrapper[4861]: I0310 18:53:10.927290 4861 status_manager.go:851] "Failed to get status for pod" podUID="0c655cb9-a476-4e87-998b-d7d8e1cd8061" pod="openshift-route-controller-manager/route-controller-manager-7fc878f999-fl45g" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-7fc878f999-fl45g\": dial tcp 38.102.83.51:6443: connect: connection refused" Mar 10 18:53:10 crc kubenswrapper[4861]: I0310 18:53:10.927546 4861 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 
38.102.83.51:6443: connect: connection refused" Mar 10 18:53:10 crc kubenswrapper[4861]: I0310 18:53:10.927839 4861 status_manager.go:851] "Failed to get status for pod" podUID="bcc3a2e0-9115-453c-8af2-6001abd4012f" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.51:6443: connect: connection refused" Mar 10 18:53:10 crc kubenswrapper[4861]: I0310 18:53:10.928122 4861 status_manager.go:851] "Failed to get status for pod" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-diagnostics/pods/network-check-source-55646444c4-trplf\": dial tcp 38.102.83.51:6443: connect: connection refused" Mar 10 18:53:10 crc kubenswrapper[4861]: I0310 18:53:10.936024 4861 scope.go:117] "RemoveContainer" containerID="44dde00a3ae562bbb5504d299475795cc38b22c2b6decba2ff15067bce7436df" Mar 10 18:53:10 crc kubenswrapper[4861]: I0310 18:53:10.965366 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4b27818a5e8e43d0dc095d08835c792" path="/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/volumes" Mar 10 18:53:10 crc kubenswrapper[4861]: I0310 18:53:10.983467 4861 scope.go:117] "RemoveContainer" containerID="9a8a6f58ea1d180f50a7ffde2b16f470901281a847bd85cc0bc8a62bbf9f8e70" Mar 10 18:53:11 crc kubenswrapper[4861]: I0310 18:53:11.005588 4861 scope.go:117] "RemoveContainer" containerID="484df0ad2e71b2faec0ed53537512b115e6d4ab7cd3212cd29309538bd013c51" Mar 10 18:53:11 crc kubenswrapper[4861]: I0310 18:53:11.028213 4861 scope.go:117] "RemoveContainer" containerID="52e87225b434b0800764a5c2306d8079c44bff105d02de78ab085b434f56031f" Mar 10 18:53:11 crc kubenswrapper[4861]: I0310 18:53:11.047150 4861 scope.go:117] "RemoveContainer" 
containerID="a14137bdfec242e37af20a572af2edea25fb1d8a1f9708f8d0193d1a7675b1bc" Mar 10 18:53:11 crc kubenswrapper[4861]: I0310 18:53:11.068394 4861 scope.go:117] "RemoveContainer" containerID="b2dcc6ee4908d27fc63eb33149c6db9570b8524aab71a27fbb1e5c2fe4e97c52" Mar 10 18:53:11 crc kubenswrapper[4861]: E0310 18:53:11.068921 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b2dcc6ee4908d27fc63eb33149c6db9570b8524aab71a27fbb1e5c2fe4e97c52\": container with ID starting with b2dcc6ee4908d27fc63eb33149c6db9570b8524aab71a27fbb1e5c2fe4e97c52 not found: ID does not exist" containerID="b2dcc6ee4908d27fc63eb33149c6db9570b8524aab71a27fbb1e5c2fe4e97c52" Mar 10 18:53:11 crc kubenswrapper[4861]: I0310 18:53:11.069002 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b2dcc6ee4908d27fc63eb33149c6db9570b8524aab71a27fbb1e5c2fe4e97c52"} err="failed to get container status \"b2dcc6ee4908d27fc63eb33149c6db9570b8524aab71a27fbb1e5c2fe4e97c52\": rpc error: code = NotFound desc = could not find container \"b2dcc6ee4908d27fc63eb33149c6db9570b8524aab71a27fbb1e5c2fe4e97c52\": container with ID starting with b2dcc6ee4908d27fc63eb33149c6db9570b8524aab71a27fbb1e5c2fe4e97c52 not found: ID does not exist" Mar 10 18:53:11 crc kubenswrapper[4861]: I0310 18:53:11.069067 4861 scope.go:117] "RemoveContainer" containerID="44dde00a3ae562bbb5504d299475795cc38b22c2b6decba2ff15067bce7436df" Mar 10 18:53:11 crc kubenswrapper[4861]: E0310 18:53:11.069567 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"44dde00a3ae562bbb5504d299475795cc38b22c2b6decba2ff15067bce7436df\": container with ID starting with 44dde00a3ae562bbb5504d299475795cc38b22c2b6decba2ff15067bce7436df not found: ID does not exist" containerID="44dde00a3ae562bbb5504d299475795cc38b22c2b6decba2ff15067bce7436df" Mar 10 18:53:11 crc 
kubenswrapper[4861]: I0310 18:53:11.069634 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"44dde00a3ae562bbb5504d299475795cc38b22c2b6decba2ff15067bce7436df"} err="failed to get container status \"44dde00a3ae562bbb5504d299475795cc38b22c2b6decba2ff15067bce7436df\": rpc error: code = NotFound desc = could not find container \"44dde00a3ae562bbb5504d299475795cc38b22c2b6decba2ff15067bce7436df\": container with ID starting with 44dde00a3ae562bbb5504d299475795cc38b22c2b6decba2ff15067bce7436df not found: ID does not exist" Mar 10 18:53:11 crc kubenswrapper[4861]: I0310 18:53:11.069670 4861 scope.go:117] "RemoveContainer" containerID="9a8a6f58ea1d180f50a7ffde2b16f470901281a847bd85cc0bc8a62bbf9f8e70" Mar 10 18:53:11 crc kubenswrapper[4861]: E0310 18:53:11.070148 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9a8a6f58ea1d180f50a7ffde2b16f470901281a847bd85cc0bc8a62bbf9f8e70\": container with ID starting with 9a8a6f58ea1d180f50a7ffde2b16f470901281a847bd85cc0bc8a62bbf9f8e70 not found: ID does not exist" containerID="9a8a6f58ea1d180f50a7ffde2b16f470901281a847bd85cc0bc8a62bbf9f8e70" Mar 10 18:53:11 crc kubenswrapper[4861]: I0310 18:53:11.070202 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9a8a6f58ea1d180f50a7ffde2b16f470901281a847bd85cc0bc8a62bbf9f8e70"} err="failed to get container status \"9a8a6f58ea1d180f50a7ffde2b16f470901281a847bd85cc0bc8a62bbf9f8e70\": rpc error: code = NotFound desc = could not find container \"9a8a6f58ea1d180f50a7ffde2b16f470901281a847bd85cc0bc8a62bbf9f8e70\": container with ID starting with 9a8a6f58ea1d180f50a7ffde2b16f470901281a847bd85cc0bc8a62bbf9f8e70 not found: ID does not exist" Mar 10 18:53:11 crc kubenswrapper[4861]: I0310 18:53:11.070228 4861 scope.go:117] "RemoveContainer" containerID="484df0ad2e71b2faec0ed53537512b115e6d4ab7cd3212cd29309538bd013c51" Mar 10 
18:53:11 crc kubenswrapper[4861]: E0310 18:53:11.070676 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"484df0ad2e71b2faec0ed53537512b115e6d4ab7cd3212cd29309538bd013c51\": container with ID starting with 484df0ad2e71b2faec0ed53537512b115e6d4ab7cd3212cd29309538bd013c51 not found: ID does not exist" containerID="484df0ad2e71b2faec0ed53537512b115e6d4ab7cd3212cd29309538bd013c51" Mar 10 18:53:11 crc kubenswrapper[4861]: I0310 18:53:11.070750 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"484df0ad2e71b2faec0ed53537512b115e6d4ab7cd3212cd29309538bd013c51"} err="failed to get container status \"484df0ad2e71b2faec0ed53537512b115e6d4ab7cd3212cd29309538bd013c51\": rpc error: code = NotFound desc = could not find container \"484df0ad2e71b2faec0ed53537512b115e6d4ab7cd3212cd29309538bd013c51\": container with ID starting with 484df0ad2e71b2faec0ed53537512b115e6d4ab7cd3212cd29309538bd013c51 not found: ID does not exist" Mar 10 18:53:11 crc kubenswrapper[4861]: I0310 18:53:11.070779 4861 scope.go:117] "RemoveContainer" containerID="52e87225b434b0800764a5c2306d8079c44bff105d02de78ab085b434f56031f" Mar 10 18:53:11 crc kubenswrapper[4861]: E0310 18:53:11.071205 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"52e87225b434b0800764a5c2306d8079c44bff105d02de78ab085b434f56031f\": container with ID starting with 52e87225b434b0800764a5c2306d8079c44bff105d02de78ab085b434f56031f not found: ID does not exist" containerID="52e87225b434b0800764a5c2306d8079c44bff105d02de78ab085b434f56031f" Mar 10 18:53:11 crc kubenswrapper[4861]: I0310 18:53:11.071258 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"52e87225b434b0800764a5c2306d8079c44bff105d02de78ab085b434f56031f"} err="failed to get container status 
\"52e87225b434b0800764a5c2306d8079c44bff105d02de78ab085b434f56031f\": rpc error: code = NotFound desc = could not find container \"52e87225b434b0800764a5c2306d8079c44bff105d02de78ab085b434f56031f\": container with ID starting with 52e87225b434b0800764a5c2306d8079c44bff105d02de78ab085b434f56031f not found: ID does not exist" Mar 10 18:53:11 crc kubenswrapper[4861]: I0310 18:53:11.071284 4861 scope.go:117] "RemoveContainer" containerID="a14137bdfec242e37af20a572af2edea25fb1d8a1f9708f8d0193d1a7675b1bc" Mar 10 18:53:11 crc kubenswrapper[4861]: E0310 18:53:11.071859 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a14137bdfec242e37af20a572af2edea25fb1d8a1f9708f8d0193d1a7675b1bc\": container with ID starting with a14137bdfec242e37af20a572af2edea25fb1d8a1f9708f8d0193d1a7675b1bc not found: ID does not exist" containerID="a14137bdfec242e37af20a572af2edea25fb1d8a1f9708f8d0193d1a7675b1bc" Mar 10 18:53:11 crc kubenswrapper[4861]: I0310 18:53:11.071910 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a14137bdfec242e37af20a572af2edea25fb1d8a1f9708f8d0193d1a7675b1bc"} err="failed to get container status \"a14137bdfec242e37af20a572af2edea25fb1d8a1f9708f8d0193d1a7675b1bc\": rpc error: code = NotFound desc = could not find container \"a14137bdfec242e37af20a572af2edea25fb1d8a1f9708f8d0193d1a7675b1bc\": container with ID starting with a14137bdfec242e37af20a572af2edea25fb1d8a1f9708f8d0193d1a7675b1bc not found: ID does not exist" Mar 10 18:53:11 crc kubenswrapper[4861]: I0310 18:53:11.918094 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-diagnostics_network-check-source-55646444c4-trplf_9d751cbb-f2e2-430d-9754-c882a5e924a5/check-endpoints/0.log" Mar 10 18:53:11 crc kubenswrapper[4861]: I0310 18:53:11.918429 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"8b83a39eb573bc765416d78858b3d0c4a4775d9d12284cd87da5d351a4680000"} Mar 10 18:53:11 crc kubenswrapper[4861]: I0310 18:53:11.919974 4861 status_manager.go:851] "Failed to get status for pod" podUID="bcc3a2e0-9115-453c-8af2-6001abd4012f" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.51:6443: connect: connection refused" Mar 10 18:53:11 crc kubenswrapper[4861]: I0310 18:53:11.920516 4861 status_manager.go:851] "Failed to get status for pod" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-diagnostics/pods/network-check-source-55646444c4-trplf\": dial tcp 38.102.83.51:6443: connect: connection refused" Mar 10 18:53:11 crc kubenswrapper[4861]: I0310 18:53:11.921188 4861 status_manager.go:851] "Failed to get status for pod" podUID="d9381aeb-824f-4653-bb6a-09dd0c24c994" pod="openshift-controller-manager/controller-manager-786f4b86b8-cxghd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-786f4b86b8-cxghd\": dial tcp 38.102.83.51:6443: connect: connection refused" Mar 10 18:53:11 crc kubenswrapper[4861]: I0310 18:53:11.922445 4861 status_manager.go:851] "Failed to get status for pod" podUID="0c655cb9-a476-4e87-998b-d7d8e1cd8061" pod="openshift-route-controller-manager/route-controller-manager-7fc878f999-fl45g" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-7fc878f999-fl45g\": dial tcp 38.102.83.51:6443: connect: connection refused" Mar 10 18:53:11 crc kubenswrapper[4861]: I0310 18:53:11.923109 4861 
status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.51:6443: connect: connection refused" Mar 10 18:53:12 crc kubenswrapper[4861]: E0310 18:53:12.441299 4861 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.51:6443: connect: connection refused" interval="3.2s" Mar 10 18:53:12 crc kubenswrapper[4861]: I0310 18:53:12.929367 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-diagnostics_network-check-source-55646444c4-trplf_9d751cbb-f2e2-430d-9754-c882a5e924a5/check-endpoints/1.log" Mar 10 18:53:12 crc kubenswrapper[4861]: I0310 18:53:12.931092 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-diagnostics_network-check-source-55646444c4-trplf_9d751cbb-f2e2-430d-9754-c882a5e924a5/check-endpoints/0.log" Mar 10 18:53:12 crc kubenswrapper[4861]: I0310 18:53:12.931160 4861 generic.go:334] "Generic (PLEG): container finished" podID="9d751cbb-f2e2-430d-9754-c882a5e924a5" containerID="8b83a39eb573bc765416d78858b3d0c4a4775d9d12284cd87da5d351a4680000" exitCode=255 Mar 10 18:53:12 crc kubenswrapper[4861]: I0310 18:53:12.931200 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerDied","Data":"8b83a39eb573bc765416d78858b3d0c4a4775d9d12284cd87da5d351a4680000"} Mar 10 18:53:12 crc kubenswrapper[4861]: I0310 18:53:12.931247 4861 scope.go:117] "RemoveContainer" containerID="87b44286913e82c634f52661dc16eaa748eac30aea82053b473a1471a5588be8" Mar 10 18:53:12 crc 
kubenswrapper[4861]: I0310 18:53:12.932225 4861 scope.go:117] "RemoveContainer" containerID="8b83a39eb573bc765416d78858b3d0c4a4775d9d12284cd87da5d351a4680000" Mar 10 18:53:12 crc kubenswrapper[4861]: I0310 18:53:12.932434 4861 status_manager.go:851] "Failed to get status for pod" podUID="bcc3a2e0-9115-453c-8af2-6001abd4012f" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.51:6443: connect: connection refused" Mar 10 18:53:12 crc kubenswrapper[4861]: E0310 18:53:12.932819 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=check-endpoints pod=network-check-source-55646444c4-trplf_openshift-network-diagnostics(9d751cbb-f2e2-430d-9754-c882a5e924a5)\"" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 10 18:53:12 crc kubenswrapper[4861]: I0310 18:53:12.933225 4861 status_manager.go:851] "Failed to get status for pod" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-diagnostics/pods/network-check-source-55646444c4-trplf\": dial tcp 38.102.83.51:6443: connect: connection refused" Mar 10 18:53:12 crc kubenswrapper[4861]: I0310 18:53:12.936538 4861 status_manager.go:851] "Failed to get status for pod" podUID="d9381aeb-824f-4653-bb6a-09dd0c24c994" pod="openshift-controller-manager/controller-manager-786f4b86b8-cxghd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-786f4b86b8-cxghd\": dial tcp 38.102.83.51:6443: connect: connection refused" Mar 10 18:53:12 crc kubenswrapper[4861]: I0310 18:53:12.937199 4861 status_manager.go:851] 
"Failed to get status for pod" podUID="0c655cb9-a476-4e87-998b-d7d8e1cd8061" pod="openshift-route-controller-manager/route-controller-manager-7fc878f999-fl45g" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-7fc878f999-fl45g\": dial tcp 38.102.83.51:6443: connect: connection refused" Mar 10 18:53:12 crc kubenswrapper[4861]: I0310 18:53:12.937705 4861 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.51:6443: connect: connection refused" Mar 10 18:53:13 crc kubenswrapper[4861]: I0310 18:53:13.938153 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-diagnostics_network-check-source-55646444c4-trplf_9d751cbb-f2e2-430d-9754-c882a5e924a5/check-endpoints/1.log" Mar 10 18:53:15 crc kubenswrapper[4861]: E0310 18:53:15.642599 4861 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.51:6443: connect: connection refused" interval="6.4s" Mar 10 18:53:16 crc kubenswrapper[4861]: I0310 18:53:16.960439 4861 status_manager.go:851] "Failed to get status for pod" podUID="d9381aeb-824f-4653-bb6a-09dd0c24c994" pod="openshift-controller-manager/controller-manager-786f4b86b8-cxghd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-786f4b86b8-cxghd\": dial tcp 38.102.83.51:6443: connect: connection refused" Mar 10 18:53:16 crc kubenswrapper[4861]: I0310 18:53:16.960890 4861 status_manager.go:851] "Failed to get status for pod" podUID="0c655cb9-a476-4e87-998b-d7d8e1cd8061" 
pod="openshift-route-controller-manager/route-controller-manager-7fc878f999-fl45g" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-7fc878f999-fl45g\": dial tcp 38.102.83.51:6443: connect: connection refused" Mar 10 18:53:16 crc kubenswrapper[4861]: I0310 18:53:16.961168 4861 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.51:6443: connect: connection refused" Mar 10 18:53:16 crc kubenswrapper[4861]: I0310 18:53:16.961368 4861 status_manager.go:851] "Failed to get status for pod" podUID="bcc3a2e0-9115-453c-8af2-6001abd4012f" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.51:6443: connect: connection refused" Mar 10 18:53:16 crc kubenswrapper[4861]: I0310 18:53:16.961564 4861 status_manager.go:851] "Failed to get status for pod" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-diagnostics/pods/network-check-source-55646444c4-trplf\": dial tcp 38.102.83.51:6443: connect: connection refused" Mar 10 18:53:20 crc kubenswrapper[4861]: E0310 18:53:20.867505 4861 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.51:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.189b8f9e472d9cef openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-10 18:53:08.262198511 +0000 UTC m=+332.025634481,LastTimestamp:2026-03-10 18:53:08.262198511 +0000 UTC m=+332.025634481,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 10 18:53:20 crc kubenswrapper[4861]: I0310 18:53:20.999349 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/cluster-policy-controller/0.log" Mar 10 18:53:21 crc kubenswrapper[4861]: I0310 18:53:21.000605 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Mar 10 18:53:21 crc kubenswrapper[4861]: I0310 18:53:21.000736 4861 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="73d88019bcd40296d2d693dfb1ce3bacd2e94ca10a114a00c75392df04099b33" exitCode=1 Mar 10 18:53:21 crc kubenswrapper[4861]: I0310 18:53:21.000786 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"73d88019bcd40296d2d693dfb1ce3bacd2e94ca10a114a00c75392df04099b33"} Mar 10 18:53:21 crc kubenswrapper[4861]: I0310 18:53:21.001474 4861 scope.go:117] "RemoveContainer" 
containerID="73d88019bcd40296d2d693dfb1ce3bacd2e94ca10a114a00c75392df04099b33" Mar 10 18:53:21 crc kubenswrapper[4861]: I0310 18:53:21.001616 4861 status_manager.go:851] "Failed to get status for pod" podUID="bcc3a2e0-9115-453c-8af2-6001abd4012f" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.51:6443: connect: connection refused" Mar 10 18:53:21 crc kubenswrapper[4861]: I0310 18:53:21.001813 4861 status_manager.go:851] "Failed to get status for pod" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-diagnostics/pods/network-check-source-55646444c4-trplf\": dial tcp 38.102.83.51:6443: connect: connection refused" Mar 10 18:53:21 crc kubenswrapper[4861]: I0310 18:53:21.002024 4861 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.51:6443: connect: connection refused" Mar 10 18:53:21 crc kubenswrapper[4861]: I0310 18:53:21.002188 4861 status_manager.go:851] "Failed to get status for pod" podUID="d9381aeb-824f-4653-bb6a-09dd0c24c994" pod="openshift-controller-manager/controller-manager-786f4b86b8-cxghd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-786f4b86b8-cxghd\": dial tcp 38.102.83.51:6443: connect: connection refused" Mar 10 18:53:21 crc kubenswrapper[4861]: I0310 18:53:21.004613 4861 status_manager.go:851] "Failed to get status for pod" podUID="0c655cb9-a476-4e87-998b-d7d8e1cd8061" 
pod="openshift-route-controller-manager/route-controller-manager-7fc878f999-fl45g" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-7fc878f999-fl45g\": dial tcp 38.102.83.51:6443: connect: connection refused" Mar 10 18:53:21 crc kubenswrapper[4861]: I0310 18:53:21.008159 4861 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.51:6443: connect: connection refused" Mar 10 18:53:21 crc kubenswrapper[4861]: I0310 18:53:21.957821 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 10 18:53:21 crc kubenswrapper[4861]: I0310 18:53:21.959508 4861 status_manager.go:851] "Failed to get status for pod" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-diagnostics/pods/network-check-source-55646444c4-trplf\": dial tcp 38.102.83.51:6443: connect: connection refused" Mar 10 18:53:21 crc kubenswrapper[4861]: I0310 18:53:21.960431 4861 status_manager.go:851] "Failed to get status for pod" podUID="d9381aeb-824f-4653-bb6a-09dd0c24c994" pod="openshift-controller-manager/controller-manager-786f4b86b8-cxghd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-786f4b86b8-cxghd\": dial tcp 38.102.83.51:6443: connect: connection refused" Mar 10 18:53:21 crc kubenswrapper[4861]: I0310 18:53:21.961003 4861 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" 
err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.51:6443: connect: connection refused" Mar 10 18:53:21 crc kubenswrapper[4861]: I0310 18:53:21.961905 4861 status_manager.go:851] "Failed to get status for pod" podUID="0c655cb9-a476-4e87-998b-d7d8e1cd8061" pod="openshift-route-controller-manager/route-controller-manager-7fc878f999-fl45g" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-7fc878f999-fl45g\": dial tcp 38.102.83.51:6443: connect: connection refused" Mar 10 18:53:21 crc kubenswrapper[4861]: I0310 18:53:21.962753 4861 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.51:6443: connect: connection refused" Mar 10 18:53:21 crc kubenswrapper[4861]: I0310 18:53:21.963504 4861 status_manager.go:851] "Failed to get status for pod" podUID="bcc3a2e0-9115-453c-8af2-6001abd4012f" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.51:6443: connect: connection refused" Mar 10 18:53:21 crc kubenswrapper[4861]: I0310 18:53:21.979756 4861 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3c65aa66-5db2-421b-ad46-0ff1e2b1cb22" Mar 10 18:53:21 crc kubenswrapper[4861]: I0310 18:53:21.979798 4861 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3c65aa66-5db2-421b-ad46-0ff1e2b1cb22" Mar 10 18:53:21 crc kubenswrapper[4861]: E0310 18:53:21.980447 4861 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.51:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 10 18:53:21 crc kubenswrapper[4861]: I0310 18:53:21.982019 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 10 18:53:22 crc kubenswrapper[4861]: W0310 18:53:22.013760 4861 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod71bb4a3aecc4ba5b26c4b7318770ce13.slice/crio-943eb9c36b5332a0e97a949054a2f31bd68e4f47839f14214b45a012df3b94b3 WatchSource:0}: Error finding container 943eb9c36b5332a0e97a949054a2f31bd68e4f47839f14214b45a012df3b94b3: Status 404 returned error can't find the container with id 943eb9c36b5332a0e97a949054a2f31bd68e4f47839f14214b45a012df3b94b3 Mar 10 18:53:22 crc kubenswrapper[4861]: I0310 18:53:22.016279 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/cluster-policy-controller/0.log" Mar 10 18:53:22 crc kubenswrapper[4861]: I0310 18:53:22.017667 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Mar 10 18:53:22 crc kubenswrapper[4861]: I0310 18:53:22.017828 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"2bcbbe1c95d3323b87ac1e6a8918210b69756be7adff8086273603ff7999a263"} Mar 10 18:53:22 crc kubenswrapper[4861]: I0310 18:53:22.019981 4861 status_manager.go:851] "Failed to get status for pod" podUID="bcc3a2e0-9115-453c-8af2-6001abd4012f" pod="openshift-kube-apiserver/installer-9-crc" 
err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.51:6443: connect: connection refused" Mar 10 18:53:22 crc kubenswrapper[4861]: I0310 18:53:22.020483 4861 status_manager.go:851] "Failed to get status for pod" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-diagnostics/pods/network-check-source-55646444c4-trplf\": dial tcp 38.102.83.51:6443: connect: connection refused" Mar 10 18:53:22 crc kubenswrapper[4861]: I0310 18:53:22.021800 4861 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.51:6443: connect: connection refused" Mar 10 18:53:22 crc kubenswrapper[4861]: I0310 18:53:22.022414 4861 status_manager.go:851] "Failed to get status for pod" podUID="d9381aeb-824f-4653-bb6a-09dd0c24c994" pod="openshift-controller-manager/controller-manager-786f4b86b8-cxghd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-786f4b86b8-cxghd\": dial tcp 38.102.83.51:6443: connect: connection refused" Mar 10 18:53:22 crc kubenswrapper[4861]: I0310 18:53:22.023153 4861 status_manager.go:851] "Failed to get status for pod" podUID="0c655cb9-a476-4e87-998b-d7d8e1cd8061" pod="openshift-route-controller-manager/route-controller-manager-7fc878f999-fl45g" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-7fc878f999-fl45g\": dial tcp 38.102.83.51:6443: connect: connection refused" Mar 10 18:53:22 crc kubenswrapper[4861]: I0310 18:53:22.023590 4861 
status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.51:6443: connect: connection refused" Mar 10 18:53:22 crc kubenswrapper[4861]: E0310 18:53:22.044153 4861 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.51:6443: connect: connection refused" interval="7s" Mar 10 18:53:23 crc kubenswrapper[4861]: I0310 18:53:23.027848 4861 generic.go:334] "Generic (PLEG): container finished" podID="71bb4a3aecc4ba5b26c4b7318770ce13" containerID="6360f78bf43963daad9fef5e81d9c3dc691785ce6b2d01773bb0840c58b0fdea" exitCode=0 Mar 10 18:53:23 crc kubenswrapper[4861]: I0310 18:53:23.028016 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerDied","Data":"6360f78bf43963daad9fef5e81d9c3dc691785ce6b2d01773bb0840c58b0fdea"} Mar 10 18:53:23 crc kubenswrapper[4861]: I0310 18:53:23.028603 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"943eb9c36b5332a0e97a949054a2f31bd68e4f47839f14214b45a012df3b94b3"} Mar 10 18:53:23 crc kubenswrapper[4861]: I0310 18:53:23.029145 4861 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3c65aa66-5db2-421b-ad46-0ff1e2b1cb22" Mar 10 18:53:23 crc kubenswrapper[4861]: I0310 18:53:23.029171 4861 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3c65aa66-5db2-421b-ad46-0ff1e2b1cb22" Mar 10 18:53:23 crc 
kubenswrapper[4861]: I0310 18:53:23.029679 4861 status_manager.go:851] "Failed to get status for pod" podUID="bcc3a2e0-9115-453c-8af2-6001abd4012f" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.51:6443: connect: connection refused" Mar 10 18:53:23 crc kubenswrapper[4861]: E0310 18:53:23.029761 4861 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.51:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 10 18:53:23 crc kubenswrapper[4861]: I0310 18:53:23.030346 4861 status_manager.go:851] "Failed to get status for pod" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-diagnostics/pods/network-check-source-55646444c4-trplf\": dial tcp 38.102.83.51:6443: connect: connection refused" Mar 10 18:53:23 crc kubenswrapper[4861]: I0310 18:53:23.030933 4861 status_manager.go:851] "Failed to get status for pod" podUID="d9381aeb-824f-4653-bb6a-09dd0c24c994" pod="openshift-controller-manager/controller-manager-786f4b86b8-cxghd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-786f4b86b8-cxghd\": dial tcp 38.102.83.51:6443: connect: connection refused" Mar 10 18:53:23 crc kubenswrapper[4861]: I0310 18:53:23.031397 4861 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.51:6443: connect: connection refused" 
Mar 10 18:53:23 crc kubenswrapper[4861]: I0310 18:53:23.031937 4861 status_manager.go:851] "Failed to get status for pod" podUID="0c655cb9-a476-4e87-998b-d7d8e1cd8061" pod="openshift-route-controller-manager/route-controller-manager-7fc878f999-fl45g" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-7fc878f999-fl45g\": dial tcp 38.102.83.51:6443: connect: connection refused" Mar 10 18:53:23 crc kubenswrapper[4861]: I0310 18:53:23.032360 4861 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.51:6443: connect: connection refused" Mar 10 18:53:24 crc kubenswrapper[4861]: I0310 18:53:24.039642 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"010164dbded607023e55dd81e23041d4f6a4e8d82e86ff9cadc154cae1689f11"} Mar 10 18:53:24 crc kubenswrapper[4861]: I0310 18:53:24.039682 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"df0dfc7ea2355463c3730349347cea2c9c2616ac576cf8bd8fd7c2034363b500"} Mar 10 18:53:24 crc kubenswrapper[4861]: I0310 18:53:24.039691 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"bb36da182eb54d8fb1f3b240b1ddf0b40705be88b695f6ae3147ce815b8a0fdb"} Mar 10 18:53:24 crc kubenswrapper[4861]: I0310 18:53:24.776853 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 10 18:53:24 crc kubenswrapper[4861]: I0310 18:53:24.777168 4861 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Mar 10 18:53:24 crc kubenswrapper[4861]: I0310 18:53:24.777275 4861 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Mar 10 18:53:24 crc kubenswrapper[4861]: I0310 18:53:24.777414 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 10 18:53:25 crc kubenswrapper[4861]: I0310 18:53:25.054377 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"13f8f67da29eedf06d4a819834d23e51f72f917598d7fbc76d1d4a576ce161c1"} Mar 10 18:53:25 crc kubenswrapper[4861]: I0310 18:53:25.054450 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"cd53ba2da3a4676f3b014d6454a9864d143c4fe0867d5407638a370489f7de1f"} Mar 10 18:53:25 crc kubenswrapper[4861]: I0310 18:53:25.054879 4861 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3c65aa66-5db2-421b-ad46-0ff1e2b1cb22" Mar 10 18:53:25 crc kubenswrapper[4861]: I0310 18:53:25.054932 4861 mirror_client.go:130] "Deleting a mirror pod" 
pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3c65aa66-5db2-421b-ad46-0ff1e2b1cb22" Mar 10 18:53:25 crc kubenswrapper[4861]: I0310 18:53:25.133288 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-864wb" podUID="ccb02870-5d18-43c9-950d-042c52c092c3" containerName="oauth-openshift" containerID="cri-o://7227591a08630257cb77f81628472ee8015dfa6dd5d415976874aa9d3b2364bd" gracePeriod=15 Mar 10 18:53:25 crc kubenswrapper[4861]: I0310 18:53:25.631225 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-864wb" Mar 10 18:53:25 crc kubenswrapper[4861]: I0310 18:53:25.732738 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/ccb02870-5d18-43c9-950d-042c52c092c3-v4-0-config-user-idp-0-file-data\") pod \"ccb02870-5d18-43c9-950d-042c52c092c3\" (UID: \"ccb02870-5d18-43c9-950d-042c52c092c3\") " Mar 10 18:53:25 crc kubenswrapper[4861]: I0310 18:53:25.733132 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/ccb02870-5d18-43c9-950d-042c52c092c3-audit-dir\") pod \"ccb02870-5d18-43c9-950d-042c52c092c3\" (UID: \"ccb02870-5d18-43c9-950d-042c52c092c3\") " Mar 10 18:53:25 crc kubenswrapper[4861]: I0310 18:53:25.733215 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/ccb02870-5d18-43c9-950d-042c52c092c3-v4-0-config-system-session\") pod \"ccb02870-5d18-43c9-950d-042c52c092c3\" (UID: \"ccb02870-5d18-43c9-950d-042c52c092c3\") " Mar 10 18:53:25 crc kubenswrapper[4861]: I0310 18:53:25.733259 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: 
\"kubernetes.io/secret/ccb02870-5d18-43c9-950d-042c52c092c3-v4-0-config-user-template-login\") pod \"ccb02870-5d18-43c9-950d-042c52c092c3\" (UID: \"ccb02870-5d18-43c9-950d-042c52c092c3\") " Mar 10 18:53:25 crc kubenswrapper[4861]: I0310 18:53:25.733299 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/ccb02870-5d18-43c9-950d-042c52c092c3-v4-0-config-system-cliconfig\") pod \"ccb02870-5d18-43c9-950d-042c52c092c3\" (UID: \"ccb02870-5d18-43c9-950d-042c52c092c3\") " Mar 10 18:53:25 crc kubenswrapper[4861]: I0310 18:53:25.733196 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ccb02870-5d18-43c9-950d-042c52c092c3-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "ccb02870-5d18-43c9-950d-042c52c092c3" (UID: "ccb02870-5d18-43c9-950d-042c52c092c3"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 10 18:53:25 crc kubenswrapper[4861]: I0310 18:53:25.733341 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/ccb02870-5d18-43c9-950d-042c52c092c3-v4-0-config-system-ocp-branding-template\") pod \"ccb02870-5d18-43c9-950d-042c52c092c3\" (UID: \"ccb02870-5d18-43c9-950d-042c52c092c3\") " Mar 10 18:53:25 crc kubenswrapper[4861]: I0310 18:53:25.733398 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ccb02870-5d18-43c9-950d-042c52c092c3-v4-0-config-system-trusted-ca-bundle\") pod \"ccb02870-5d18-43c9-950d-042c52c092c3\" (UID: \"ccb02870-5d18-43c9-950d-042c52c092c3\") " Mar 10 18:53:25 crc kubenswrapper[4861]: I0310 18:53:25.733435 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" 
(UniqueName: \"kubernetes.io/secret/ccb02870-5d18-43c9-950d-042c52c092c3-v4-0-config-user-template-provider-selection\") pod \"ccb02870-5d18-43c9-950d-042c52c092c3\" (UID: \"ccb02870-5d18-43c9-950d-042c52c092c3\") " Mar 10 18:53:25 crc kubenswrapper[4861]: I0310 18:53:25.733474 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/ccb02870-5d18-43c9-950d-042c52c092c3-v4-0-config-system-router-certs\") pod \"ccb02870-5d18-43c9-950d-042c52c092c3\" (UID: \"ccb02870-5d18-43c9-950d-042c52c092c3\") " Mar 10 18:53:25 crc kubenswrapper[4861]: I0310 18:53:25.733510 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/ccb02870-5d18-43c9-950d-042c52c092c3-v4-0-config-user-template-error\") pod \"ccb02870-5d18-43c9-950d-042c52c092c3\" (UID: \"ccb02870-5d18-43c9-950d-042c52c092c3\") " Mar 10 18:53:25 crc kubenswrapper[4861]: I0310 18:53:25.733543 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/ccb02870-5d18-43c9-950d-042c52c092c3-v4-0-config-system-serving-cert\") pod \"ccb02870-5d18-43c9-950d-042c52c092c3\" (UID: \"ccb02870-5d18-43c9-950d-042c52c092c3\") " Mar 10 18:53:25 crc kubenswrapper[4861]: I0310 18:53:25.733578 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/ccb02870-5d18-43c9-950d-042c52c092c3-audit-policies\") pod \"ccb02870-5d18-43c9-950d-042c52c092c3\" (UID: \"ccb02870-5d18-43c9-950d-042c52c092c3\") " Mar 10 18:53:25 crc kubenswrapper[4861]: I0310 18:53:25.733618 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k2zz2\" (UniqueName: \"kubernetes.io/projected/ccb02870-5d18-43c9-950d-042c52c092c3-kube-api-access-k2zz2\") 
pod \"ccb02870-5d18-43c9-950d-042c52c092c3\" (UID: \"ccb02870-5d18-43c9-950d-042c52c092c3\") " Mar 10 18:53:25 crc kubenswrapper[4861]: I0310 18:53:25.733658 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/ccb02870-5d18-43c9-950d-042c52c092c3-v4-0-config-system-service-ca\") pod \"ccb02870-5d18-43c9-950d-042c52c092c3\" (UID: \"ccb02870-5d18-43c9-950d-042c52c092c3\") " Mar 10 18:53:25 crc kubenswrapper[4861]: I0310 18:53:25.733934 4861 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/ccb02870-5d18-43c9-950d-042c52c092c3-audit-dir\") on node \"crc\" DevicePath \"\"" Mar 10 18:53:25 crc kubenswrapper[4861]: I0310 18:53:25.734063 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ccb02870-5d18-43c9-950d-042c52c092c3-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "ccb02870-5d18-43c9-950d-042c52c092c3" (UID: "ccb02870-5d18-43c9-950d-042c52c092c3"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 10 18:53:25 crc kubenswrapper[4861]: I0310 18:53:25.734358 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ccb02870-5d18-43c9-950d-042c52c092c3-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "ccb02870-5d18-43c9-950d-042c52c092c3" (UID: "ccb02870-5d18-43c9-950d-042c52c092c3"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 10 18:53:25 crc kubenswrapper[4861]: I0310 18:53:25.734986 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ccb02870-5d18-43c9-950d-042c52c092c3-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "ccb02870-5d18-43c9-950d-042c52c092c3" (UID: "ccb02870-5d18-43c9-950d-042c52c092c3"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 10 18:53:25 crc kubenswrapper[4861]: I0310 18:53:25.735444 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ccb02870-5d18-43c9-950d-042c52c092c3-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "ccb02870-5d18-43c9-950d-042c52c092c3" (UID: "ccb02870-5d18-43c9-950d-042c52c092c3"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 10 18:53:25 crc kubenswrapper[4861]: I0310 18:53:25.741998 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ccb02870-5d18-43c9-950d-042c52c092c3-kube-api-access-k2zz2" (OuterVolumeSpecName: "kube-api-access-k2zz2") pod "ccb02870-5d18-43c9-950d-042c52c092c3" (UID: "ccb02870-5d18-43c9-950d-042c52c092c3"). InnerVolumeSpecName "kube-api-access-k2zz2". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 10 18:53:25 crc kubenswrapper[4861]: I0310 18:53:25.751280 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ccb02870-5d18-43c9-950d-042c52c092c3-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "ccb02870-5d18-43c9-950d-042c52c092c3" (UID: "ccb02870-5d18-43c9-950d-042c52c092c3"). InnerVolumeSpecName "v4-0-config-system-router-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 10 18:53:25 crc kubenswrapper[4861]: I0310 18:53:25.754148 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ccb02870-5d18-43c9-950d-042c52c092c3-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "ccb02870-5d18-43c9-950d-042c52c092c3" (UID: "ccb02870-5d18-43c9-950d-042c52c092c3"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 10 18:53:25 crc kubenswrapper[4861]: I0310 18:53:25.754667 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ccb02870-5d18-43c9-950d-042c52c092c3-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "ccb02870-5d18-43c9-950d-042c52c092c3" (UID: "ccb02870-5d18-43c9-950d-042c52c092c3"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 10 18:53:25 crc kubenswrapper[4861]: I0310 18:53:25.759268 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ccb02870-5d18-43c9-950d-042c52c092c3-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "ccb02870-5d18-43c9-950d-042c52c092c3" (UID: "ccb02870-5d18-43c9-950d-042c52c092c3"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 10 18:53:25 crc kubenswrapper[4861]: I0310 18:53:25.772999 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ccb02870-5d18-43c9-950d-042c52c092c3-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "ccb02870-5d18-43c9-950d-042c52c092c3" (UID: "ccb02870-5d18-43c9-950d-042c52c092c3"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 10 18:53:25 crc kubenswrapper[4861]: I0310 18:53:25.773567 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ccb02870-5d18-43c9-950d-042c52c092c3-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "ccb02870-5d18-43c9-950d-042c52c092c3" (UID: "ccb02870-5d18-43c9-950d-042c52c092c3"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 10 18:53:25 crc kubenswrapper[4861]: I0310 18:53:25.787822 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ccb02870-5d18-43c9-950d-042c52c092c3-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "ccb02870-5d18-43c9-950d-042c52c092c3" (UID: "ccb02870-5d18-43c9-950d-042c52c092c3"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 10 18:53:25 crc kubenswrapper[4861]: I0310 18:53:25.797203 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ccb02870-5d18-43c9-950d-042c52c092c3-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "ccb02870-5d18-43c9-950d-042c52c092c3" (UID: "ccb02870-5d18-43c9-950d-042c52c092c3"). InnerVolumeSpecName "v4-0-config-system-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 10 18:53:25 crc kubenswrapper[4861]: I0310 18:53:25.835508 4861 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/ccb02870-5d18-43c9-950d-042c52c092c3-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Mar 10 18:53:25 crc kubenswrapper[4861]: I0310 18:53:25.835543 4861 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/ccb02870-5d18-43c9-950d-042c52c092c3-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Mar 10 18:53:25 crc kubenswrapper[4861]: I0310 18:53:25.835556 4861 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ccb02870-5d18-43c9-950d-042c52c092c3-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 10 18:53:25 crc kubenswrapper[4861]: I0310 18:53:25.835566 4861 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/ccb02870-5d18-43c9-950d-042c52c092c3-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Mar 10 18:53:25 crc kubenswrapper[4861]: I0310 18:53:25.835578 4861 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/ccb02870-5d18-43c9-950d-042c52c092c3-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Mar 10 18:53:25 crc kubenswrapper[4861]: I0310 18:53:25.835587 4861 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/ccb02870-5d18-43c9-950d-042c52c092c3-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Mar 10 18:53:25 crc kubenswrapper[4861]: I0310 18:53:25.835596 4861 reconciler_common.go:293] 
"Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/ccb02870-5d18-43c9-950d-042c52c092c3-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 10 18:53:25 crc kubenswrapper[4861]: I0310 18:53:25.835607 4861 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/ccb02870-5d18-43c9-950d-042c52c092c3-audit-policies\") on node \"crc\" DevicePath \"\"" Mar 10 18:53:25 crc kubenswrapper[4861]: I0310 18:53:25.835616 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k2zz2\" (UniqueName: \"kubernetes.io/projected/ccb02870-5d18-43c9-950d-042c52c092c3-kube-api-access-k2zz2\") on node \"crc\" DevicePath \"\"" Mar 10 18:53:25 crc kubenswrapper[4861]: I0310 18:53:25.835625 4861 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/ccb02870-5d18-43c9-950d-042c52c092c3-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Mar 10 18:53:25 crc kubenswrapper[4861]: I0310 18:53:25.835634 4861 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/ccb02870-5d18-43c9-950d-042c52c092c3-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Mar 10 18:53:25 crc kubenswrapper[4861]: I0310 18:53:25.835642 4861 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/ccb02870-5d18-43c9-950d-042c52c092c3-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Mar 10 18:53:25 crc kubenswrapper[4861]: I0310 18:53:25.835653 4861 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/ccb02870-5d18-43c9-950d-042c52c092c3-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Mar 10 18:53:26 crc kubenswrapper[4861]: I0310 
18:53:26.061355 4861 generic.go:334] "Generic (PLEG): container finished" podID="ccb02870-5d18-43c9-950d-042c52c092c3" containerID="7227591a08630257cb77f81628472ee8015dfa6dd5d415976874aa9d3b2364bd" exitCode=0
Mar 10 18:53:26 crc kubenswrapper[4861]: I0310 18:53:26.061420 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-864wb" event={"ID":"ccb02870-5d18-43c9-950d-042c52c092c3","Type":"ContainerDied","Data":"7227591a08630257cb77f81628472ee8015dfa6dd5d415976874aa9d3b2364bd"}
Mar 10 18:53:26 crc kubenswrapper[4861]: I0310 18:53:26.061462 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-864wb" event={"ID":"ccb02870-5d18-43c9-950d-042c52c092c3","Type":"ContainerDied","Data":"facbc50c32118d37900f8b4f740a799c217914f3552f90d4754de39215845681"}
Mar 10 18:53:26 crc kubenswrapper[4861]: I0310 18:53:26.061482 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-864wb"
Mar 10 18:53:26 crc kubenswrapper[4861]: I0310 18:53:26.061491 4861 scope.go:117] "RemoveContainer" containerID="7227591a08630257cb77f81628472ee8015dfa6dd5d415976874aa9d3b2364bd"
Mar 10 18:53:26 crc kubenswrapper[4861]: I0310 18:53:26.081063 4861 scope.go:117] "RemoveContainer" containerID="7227591a08630257cb77f81628472ee8015dfa6dd5d415976874aa9d3b2364bd"
Mar 10 18:53:26 crc kubenswrapper[4861]: E0310 18:53:26.081423 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7227591a08630257cb77f81628472ee8015dfa6dd5d415976874aa9d3b2364bd\": container with ID starting with 7227591a08630257cb77f81628472ee8015dfa6dd5d415976874aa9d3b2364bd not found: ID does not exist" containerID="7227591a08630257cb77f81628472ee8015dfa6dd5d415976874aa9d3b2364bd"
Mar 10 18:53:26 crc kubenswrapper[4861]: I0310 18:53:26.081506 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7227591a08630257cb77f81628472ee8015dfa6dd5d415976874aa9d3b2364bd"} err="failed to get container status \"7227591a08630257cb77f81628472ee8015dfa6dd5d415976874aa9d3b2364bd\": rpc error: code = NotFound desc = could not find container \"7227591a08630257cb77f81628472ee8015dfa6dd5d415976874aa9d3b2364bd\": container with ID starting with 7227591a08630257cb77f81628472ee8015dfa6dd5d415976874aa9d3b2364bd not found: ID does not exist"
Mar 10 18:53:26 crc kubenswrapper[4861]: I0310 18:53:26.982327 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc"
Mar 10 18:53:26 crc kubenswrapper[4861]: I0310 18:53:26.982788 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc"
Mar 10 18:53:26 crc kubenswrapper[4861]: I0310 18:53:26.990339 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc"
Mar 10 18:53:27 crc kubenswrapper[4861]: I0310 18:53:27.957992 4861 scope.go:117] "RemoveContainer" containerID="8b83a39eb573bc765416d78858b3d0c4a4775d9d12284cd87da5d351a4680000"
Mar 10 18:53:29 crc kubenswrapper[4861]: I0310 18:53:29.080736 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-diagnostics_network-check-source-55646444c4-trplf_9d751cbb-f2e2-430d-9754-c882a5e924a5/check-endpoints/2.log"
Mar 10 18:53:29 crc kubenswrapper[4861]: I0310 18:53:29.081327 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-diagnostics_network-check-source-55646444c4-trplf_9d751cbb-f2e2-430d-9754-c882a5e924a5/check-endpoints/1.log"
Mar 10 18:53:29 crc kubenswrapper[4861]: I0310 18:53:29.081352 4861 generic.go:334] "Generic (PLEG): container finished" podID="9d751cbb-f2e2-430d-9754-c882a5e924a5" containerID="58f6245530cc81281a9bbfe29faf7252a4681e72eb2f0ec7dfa798b9e2c11acb" exitCode=255
Mar 10 18:53:29 crc kubenswrapper[4861]: I0310 18:53:29.081378 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerDied","Data":"58f6245530cc81281a9bbfe29faf7252a4681e72eb2f0ec7dfa798b9e2c11acb"}
Mar 10 18:53:29 crc kubenswrapper[4861]: I0310 18:53:29.081407 4861 scope.go:117] "RemoveContainer" containerID="8b83a39eb573bc765416d78858b3d0c4a4775d9d12284cd87da5d351a4680000"
Mar 10 18:53:29 crc kubenswrapper[4861]: I0310 18:53:29.081687 4861 scope.go:117] "RemoveContainer" containerID="58f6245530cc81281a9bbfe29faf7252a4681e72eb2f0ec7dfa798b9e2c11acb"
Mar 10 18:53:29 crc kubenswrapper[4861]: E0310 18:53:29.081846 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"check-endpoints\" with CrashLoopBackOff: \"back-off 20s restarting failed container=check-endpoints pod=network-check-source-55646444c4-trplf_openshift-network-diagnostics(9d751cbb-f2e2-430d-9754-c882a5e924a5)\"" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Mar 10 18:53:30 crc kubenswrapper[4861]: I0310 18:53:30.066206 4861 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-crc"
Mar 10 18:53:30 crc kubenswrapper[4861]: I0310 18:53:30.095327 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-diagnostics_network-check-source-55646444c4-trplf_9d751cbb-f2e2-430d-9754-c882a5e924a5/check-endpoints/2.log"
Mar 10 18:53:30 crc kubenswrapper[4861]: I0310 18:53:30.095752 4861 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3c65aa66-5db2-421b-ad46-0ff1e2b1cb22"
Mar 10 18:53:30 crc kubenswrapper[4861]: I0310 18:53:30.095772 4861 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3c65aa66-5db2-421b-ad46-0ff1e2b1cb22"
Mar 10 18:53:30 crc kubenswrapper[4861]: I0310 18:53:30.096400 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc"
Mar 10 18:53:30 crc kubenswrapper[4861]: I0310 18:53:30.101163 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Mar 10 18:53:30 crc kubenswrapper[4861]: I0310 18:53:30.224962 4861 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="b351827f-5d77-4c88-b452-39b2d8e358df"
Mar 10 18:53:30 crc kubenswrapper[4861]: E0310 18:53:30.576664 4861 reflector.go:158] "Unhandled Error" err="object-\"openshift-authentication\"/\"v4-0-config-system-ocp-branding-template\": Failed to watch *v1.Secret: unknown (get secrets)" logger="UnhandledError"
Mar 10 18:53:30 crc kubenswrapper[4861]: E0310 18:53:30.871079 4861 reflector.go:158] "Unhandled Error" err="object-\"openshift-authentication\"/\"v4-0-config-user-template-error\": Failed to watch *v1.Secret: unknown (get secrets)" logger="UnhandledError"
Mar 10 18:53:31 crc kubenswrapper[4861]: I0310 18:53:31.103154 4861 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3c65aa66-5db2-421b-ad46-0ff1e2b1cb22"
Mar 10 18:53:31 crc kubenswrapper[4861]: I0310 18:53:31.103198 4861 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3c65aa66-5db2-421b-ad46-0ff1e2b1cb22"
Mar 10 18:53:31 crc kubenswrapper[4861]: I0310 18:53:31.121548 4861 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="b351827f-5d77-4c88-b452-39b2d8e358df"
Mar 10 18:53:32 crc kubenswrapper[4861]: I0310 18:53:32.110783 4861 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3c65aa66-5db2-421b-ad46-0ff1e2b1cb22"
Mar 10 18:53:32 crc kubenswrapper[4861]: I0310 18:53:32.111206 4861 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3c65aa66-5db2-421b-ad46-0ff1e2b1cb22"
Mar 10 18:53:32 crc kubenswrapper[4861]: I0310 18:53:32.115888 4861 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="b351827f-5d77-4c88-b452-39b2d8e358df"
Mar 10 18:53:34 crc kubenswrapper[4861]: I0310 18:53:34.776702 4861 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body=
Mar 10 18:53:34 crc kubenswrapper[4861]: I0310 18:53:34.776841 4861 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused"
Mar 10 18:53:36 crc kubenswrapper[4861]: I0310 18:53:36.204909 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c"
Mar 10 18:53:39 crc kubenswrapper[4861]: I0310 18:53:39.263173 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt"
Mar 10 18:53:39 crc kubenswrapper[4861]: I0310 18:53:39.440243 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt"
Mar 10 18:53:39 crc kubenswrapper[4861]: I0310 18:53:39.958538 4861 scope.go:117] "RemoveContainer" containerID="58f6245530cc81281a9bbfe29faf7252a4681e72eb2f0ec7dfa798b9e2c11acb"
Mar 10 18:53:39 crc kubenswrapper[4861]: E0310 18:53:39.958930 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"check-endpoints\" with CrashLoopBackOff: \"back-off 20s restarting failed container=check-endpoints pod=network-check-source-55646444c4-trplf_openshift-network-diagnostics(9d751cbb-f2e2-430d-9754-c882a5e924a5)\"" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Mar 10 18:53:40 crc kubenswrapper[4861]: I0310 18:53:40.179231 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config"
Mar 10 18:53:40 crc kubenswrapper[4861]: I0310 18:53:40.414001 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx"
Mar 10 18:53:40 crc kubenswrapper[4861]: I0310 18:53:40.759734 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk"
Mar 10 18:53:40 crc kubenswrapper[4861]: I0310 18:53:40.932784 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn"
Mar 10 18:53:41 crc kubenswrapper[4861]: I0310 18:53:41.051492 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert"
Mar 10 18:53:41 crc kubenswrapper[4861]: I0310 18:53:41.065064 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics"
Mar 10 18:53:41 crc kubenswrapper[4861]: I0310 18:53:41.072470 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p"
Mar 10 18:53:41 crc kubenswrapper[4861]: I0310 18:53:41.118031 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt"
Mar 10 18:53:41 crc kubenswrapper[4861]: I0310 18:53:41.178192 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca"
Mar 10 18:53:41 crc kubenswrapper[4861]: I0310 18:53:41.292426 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert"
Mar 10 18:53:41 crc kubenswrapper[4861]: I0310 18:53:41.530861 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt"
Mar 10 18:53:41 crc kubenswrapper[4861]: I0310 18:53:41.576165 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl"
Mar 10 18:53:41 crc kubenswrapper[4861]: I0310 18:53:41.786020 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources"
Mar 10 18:53:41 crc kubenswrapper[4861]: I0310 18:53:41.820328 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt"
Mar 10 18:53:41 crc kubenswrapper[4861]: I0310 18:53:41.856018 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq"
Mar 10 18:53:41 crc kubenswrapper[4861]: I0310 18:53:41.897510 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt"
Mar 10 18:53:42 crc kubenswrapper[4861]: I0310 18:53:42.258683 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx"
Mar 10 18:53:42 crc kubenswrapper[4861]: I0310 18:53:42.305919 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images"
Mar 10 18:53:42 crc kubenswrapper[4861]: I0310 18:53:42.317911 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd"
Mar 10 18:53:42 crc kubenswrapper[4861]: I0310 18:53:42.411350 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls"
Mar 10 18:53:42 crc kubenswrapper[4861]: I0310 18:53:42.519024 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token"
Mar 10 18:53:42 crc kubenswrapper[4861]: I0310 18:53:42.568095 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert"
Mar 10 18:53:42 crc kubenswrapper[4861]: I0310 18:53:42.617596 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt"
Mar 10 18:53:42 crc kubenswrapper[4861]: I0310 18:53:42.637591 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt"
Mar 10 18:53:42 crc kubenswrapper[4861]: I0310 18:53:42.644028 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert"
Mar 10 18:53:42 crc kubenswrapper[4861]: I0310 18:53:42.911303 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca"
Mar 10 18:53:42 crc kubenswrapper[4861]: I0310 18:53:42.947851 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert"
Mar 10 18:53:43 crc kubenswrapper[4861]: I0310 18:53:43.010731 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw"
Mar 10 18:53:43 crc kubenswrapper[4861]: I0310 18:53:43.126250 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config"
Mar 10 18:53:43 crc kubenswrapper[4861]: I0310 18:53:43.152226 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls"
Mar 10 18:53:43 crc kubenswrapper[4861]: I0310 18:53:43.238508 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt"
Mar 10 18:53:43 crc kubenswrapper[4861]: I0310 18:53:43.359211 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g"
Mar 10 18:53:43 crc kubenswrapper[4861]: I0310 18:53:43.490430 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7"
Mar 10 18:53:43 crc kubenswrapper[4861]: I0310 18:53:43.534917 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default"
Mar 10 18:53:43 crc kubenswrapper[4861]: I0310 18:53:43.552705 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl"
Mar 10 18:53:43 crc kubenswrapper[4861]: I0310 18:53:43.621483 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert"
Mar 10 18:53:43 crc kubenswrapper[4861]: I0310 18:53:43.647593 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert"
Mar 10 18:53:43 crc kubenswrapper[4861]: I0310 18:53:43.821670 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert"
Mar 10 18:53:43 crc kubenswrapper[4861]: I0310 18:53:43.834504 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt"
Mar 10 18:53:43 crc kubenswrapper[4861]: I0310 18:53:43.855575 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt"
Mar 10 18:53:43 crc kubenswrapper[4861]: I0310 18:53:43.897964 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca"
Mar 10 18:53:43 crc kubenswrapper[4861]: I0310 18:53:43.904347 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config"
Mar 10 18:53:43 crc kubenswrapper[4861]: I0310 18:53:43.962559 4861 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66
Mar 10 18:53:43 crc kubenswrapper[4861]: I0310 18:53:43.967046 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podStartSLOduration=36.967026741 podStartE2EDuration="36.967026741s" podCreationTimestamp="2026-03-10 18:53:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-10 18:53:30.149937902 +0000 UTC m=+353.913373863" watchObservedRunningTime="2026-03-10 18:53:43.967026741 +0000 UTC m=+367.730462701"
Mar 10 18:53:43 crc kubenswrapper[4861]: I0310 18:53:43.967512 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc","openshift-controller-manager/controller-manager-786f4b86b8-cxghd","openshift-authentication/oauth-openshift-558db77b4-864wb","openshift-route-controller-manager/route-controller-manager-7fc878f999-fl45g"]
Mar 10 18:53:43 crc kubenswrapper[4861]: I0310 18:53:43.967573 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"]
Mar 10 18:53:43 crc kubenswrapper[4861]: I0310 18:53:43.972625 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Mar 10 18:53:43 crc kubenswrapper[4861]: I0310 18:53:43.996290 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=13.996262355 podStartE2EDuration="13.996262355s" podCreationTimestamp="2026-03-10 18:53:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-10 18:53:43.990011731 +0000 UTC m=+367.753447731" watchObservedRunningTime="2026-03-10 18:53:43.996262355 +0000 UTC m=+367.759698335"
Mar 10 18:53:44 crc kubenswrapper[4861]: I0310 18:53:44.074361 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt"
Mar 10 18:53:44 crc kubenswrapper[4861]: I0310 18:53:44.116633 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd"
Mar 10 18:53:44 crc kubenswrapper[4861]: I0310 18:53:44.120067 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt"
Mar 10 18:53:44 crc kubenswrapper[4861]: I0310 18:53:44.241388 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg"
Mar 10 18:53:44 crc kubenswrapper[4861]: I0310 18:53:44.284526 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1"
Mar 10 18:53:44 crc kubenswrapper[4861]: I0310 18:53:44.480249 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86"
Mar 10 18:53:44 crc kubenswrapper[4861]: I0310 18:53:44.515441 4861 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k"
Mar 10 18:53:44 crc kubenswrapper[4861]: I0310 18:53:44.549662 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c"
Mar 10 18:53:44 crc kubenswrapper[4861]: I0310 18:53:44.565609 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt"
Mar 10 18:53:44 crc kubenswrapper[4861]: I0310 18:53:44.579368 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates"
Mar 10 18:53:44 crc kubenswrapper[4861]: I0310 18:53:44.581615 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r"
Mar 10 18:53:44 crc kubenswrapper[4861]: I0310 18:53:44.590389 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd"
Mar 10 18:53:44 crc kubenswrapper[4861]: I0310 18:53:44.592218 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls"
Mar 10 18:53:44 crc kubenswrapper[4861]: I0310 18:53:44.635522 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key"
Mar 10 18:53:44 crc kubenswrapper[4861]: I0310 18:53:44.690816 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config"
Mar 10 18:53:44 crc kubenswrapper[4861]: I0310 18:53:44.705092 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt"
Mar 10 18:53:44 crc kubenswrapper[4861]: I0310 18:53:44.783321 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Mar 10 18:53:44 crc kubenswrapper[4861]: I0310 18:53:44.791418 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Mar 10 18:53:44 crc kubenswrapper[4861]: I0310 18:53:44.806595 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt"
Mar 10 18:53:44 crc kubenswrapper[4861]: I0310 18:53:44.889752 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt"
Mar 10 18:53:44 crc kubenswrapper[4861]: I0310 18:53:44.911347 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt"
Mar 10 18:53:44 crc kubenswrapper[4861]: I0310 18:53:44.930913 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt"
Mar 10 18:53:44 crc kubenswrapper[4861]: I0310 18:53:44.953103 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw"
Mar 10 18:53:44 crc kubenswrapper[4861]: I0310 18:53:44.966395 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0c655cb9-a476-4e87-998b-d7d8e1cd8061" path="/var/lib/kubelet/pods/0c655cb9-a476-4e87-998b-d7d8e1cd8061/volumes"
Mar 10 18:53:44 crc kubenswrapper[4861]: I0310 18:53:44.967790 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ccb02870-5d18-43c9-950d-042c52c092c3" path="/var/lib/kubelet/pods/ccb02870-5d18-43c9-950d-042c52c092c3/volumes"
Mar 10 18:53:44 crc kubenswrapper[4861]: I0310 18:53:44.969148 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d9381aeb-824f-4653-bb6a-09dd0c24c994" path="/var/lib/kubelet/pods/d9381aeb-824f-4653-bb6a-09dd0c24c994/volumes"
Mar 10 18:53:45 crc kubenswrapper[4861]: I0310 18:53:45.052801 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle"
Mar 10 18:53:45 crc kubenswrapper[4861]: I0310 18:53:45.208068 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default"
Mar 10 18:53:45 crc kubenswrapper[4861]: I0310 18:53:45.474394 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert"
Mar 10 18:53:45 crc kubenswrapper[4861]: I0310 18:53:45.498015 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx"
Mar 10 18:53:45 crc kubenswrapper[4861]: I0310 18:53:45.500516 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert"
Mar 10 18:53:45 crc kubenswrapper[4861]: I0310 18:53:45.527276 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config"
Mar 10 18:53:45 crc kubenswrapper[4861]: I0310 18:53:45.535132 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert"
Mar 10 18:53:45 crc kubenswrapper[4861]: I0310 18:53:45.593894 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca"
Mar 10 18:53:45 crc kubenswrapper[4861]: I0310 18:53:45.882416 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt"
Mar 10 18:53:45 crc kubenswrapper[4861]: I0310 18:53:45.885470 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config"
Mar 10 18:53:45 crc kubenswrapper[4861]: I0310 18:53:45.980922 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg"
Mar 10 18:53:46 crc kubenswrapper[4861]: I0310 18:53:46.029220 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist"
Mar 10 18:53:46 crc kubenswrapper[4861]: I0310 18:53:46.029686 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt"
Mar 10 18:53:46 crc kubenswrapper[4861]: I0310 18:53:46.044140 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr"
Mar 10 18:53:46 crc kubenswrapper[4861]: I0310 18:53:46.138904 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt"
Mar 10 18:53:46 crc kubenswrapper[4861]: I0310 18:53:46.159828 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt"
Mar 10 18:53:46 crc kubenswrapper[4861]: I0310 18:53:46.172046 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt"
Mar 10 18:53:46 crc kubenswrapper[4861]: I0310 18:53:46.221282 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl"
Mar 10 18:53:46 crc kubenswrapper[4861]: I0310 18:53:46.272918 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv"
Mar 10 18:53:46 crc kubenswrapper[4861]: I0310 18:53:46.343785 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert"
Mar 10 18:53:46 crc kubenswrapper[4861]: I0310 18:53:46.477753 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert"
Mar 10 18:53:46 crc kubenswrapper[4861]: I0310 18:53:46.504318 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt"
Mar 10 18:53:46 crc kubenswrapper[4861]: I0310 18:53:46.505603 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt"
Mar 10 18:53:46 crc kubenswrapper[4861]: I0310 18:53:46.575375 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt"
Mar 10 18:53:46 crc kubenswrapper[4861]: I0310 18:53:46.576295 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt"
Mar 10 18:53:46 crc kubenswrapper[4861]: I0310 18:53:46.577053 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle"
Mar 10 18:53:46 crc kubenswrapper[4861]: I0310 18:53:46.619388 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images"
Mar 10 18:53:46 crc kubenswrapper[4861]: I0310 18:53:46.642798 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert"
Mar 10 18:53:46 crc kubenswrapper[4861]: I0310 18:53:46.677098 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt"
Mar 10 18:53:46 crc kubenswrapper[4861]: I0310 18:53:46.782797 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert"
Mar 10 18:53:46 crc kubenswrapper[4861]: I0310 18:53:46.805513 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt"
Mar 10 18:53:46 crc kubenswrapper[4861]: I0310 18:53:46.884985 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6"
Mar 10 18:53:46 crc kubenswrapper[4861]: I0310 18:53:46.949640 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt"
Mar 10 18:53:47 crc kubenswrapper[4861]: I0310 18:53:47.083505 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt"
Mar 10 18:53:47 crc kubenswrapper[4861]: I0310 18:53:47.119213 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c"
Mar 10 18:53:47 crc kubenswrapper[4861]: I0310 18:53:47.131627 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script"
Mar 10 18:53:47 crc kubenswrapper[4861]: I0310 18:53:47.140044 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt"
Mar 10 18:53:47 crc kubenswrapper[4861]: I0310 18:53:47.175577 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls"
Mar 10 18:53:47 crc kubenswrapper[4861]: I0310 18:53:47.442038 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj"
Mar 10 18:53:47 crc kubenswrapper[4861]: I0310 18:53:47.488539 4861 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160
Mar 10 18:53:47 crc kubenswrapper[4861]: I0310 18:53:47.539399 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr"
Mar 10 18:53:47 crc kubenswrapper[4861]: I0310 18:53:47.659191 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls"
Mar 10 18:53:47 crc kubenswrapper[4861]: I0310 18:53:47.728845 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert"
Mar 10 18:53:47 crc kubenswrapper[4861]: I0310 18:53:47.770631 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw"
Mar 10 18:53:47 crc kubenswrapper[4861]: I0310 18:53:47.792246 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle"
Mar 10 18:53:47 crc kubenswrapper[4861]: I0310 18:53:47.888037 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config"
Mar 10 18:53:47 crc kubenswrapper[4861]: I0310 18:53:47.951920 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt"
Mar 10 18:53:47 crc kubenswrapper[4861]: I0310 18:53:47.959262 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client"
Mar 10 18:53:47 crc kubenswrapper[4861]: I0310 18:53:47.961882 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config"
Mar 10 18:53:48 crc kubenswrapper[4861]: I0310 18:53:48.015506 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt"
Mar 10 18:53:48 crc kubenswrapper[4861]: I0310 18:53:48.042510 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w"
Mar 10 18:53:48 crc kubenswrapper[4861]: I0310 18:53:48.072295 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt"
Mar 10 18:53:48 crc kubenswrapper[4861]: I0310 18:53:48.107962 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf"
Mar 10 18:53:48 crc kubenswrapper[4861]: I0310 18:53:48.175689 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy"
Mar 10 18:53:48 crc kubenswrapper[4861]: I0310 18:53:48.287105 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4"
Mar 10 18:53:48 crc kubenswrapper[4861]: I0310 18:53:48.316918 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt"
Mar 10 18:53:48 crc kubenswrapper[4861]: I0310 18:53:48.338755 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx"
Mar 10 18:53:48 crc kubenswrapper[4861]: I0310 18:53:48.415294 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config"
Mar 10 18:53:48 crc kubenswrapper[4861]: I0310 18:53:48.481273 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert"
Mar 10 18:53:48 crc kubenswrapper[4861]: I0310 18:53:48.579839 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin"
Mar 10 18:53:48 crc kubenswrapper[4861]: I0310 18:53:48.616890 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff"
Mar 10 18:53:48 crc kubenswrapper[4861]: I0310 18:53:48.686826 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z"
Mar 10 18:53:48 crc kubenswrapper[4861]: I0310 18:53:48.769381 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5"
Mar 10 18:53:48 crc kubenswrapper[4861]: I0310 18:53:48.848543 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt"
Mar 10 18:53:48 crc kubenswrapper[4861]: I0310 18:53:48.939528 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4"
Mar 10 18:53:49 crc kubenswrapper[4861]: I0310 18:53:49.005091 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert"
Mar 10 18:53:49 crc kubenswrapper[4861]: I0310 18:53:49.035329 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt"
Mar 10 18:53:49 crc kubenswrapper[4861]: I0310 18:53:49.109597 4861 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160
Mar 10 18:53:49 crc kubenswrapper[4861]: I0310 18:53:49.183139 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca"
Mar 10 18:53:49 crc kubenswrapper[4861]: I0310 18:53:49.195681 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt"
Mar 10 18:53:49 crc kubenswrapper[4861]: I0310 18:53:49.268172 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt"
Mar 10 18:53:49 crc kubenswrapper[4861]: I0310 18:53:49.313305 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides"
Mar 10 18:53:49 crc kubenswrapper[4861]: I0310 18:53:49.490199 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config"
Mar 10 18:53:49 crc kubenswrapper[4861]: I0310 18:53:49.500512 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle"
Mar 10 18:53:49 crc kubenswrapper[4861]: I0310 18:53:49.555643 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client"
Mar 10 18:53:49 crc kubenswrapper[4861]: I0310 18:53:49.587271 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert"
Mar 10 18:53:49 crc kubenswrapper[4861]: I0310 18:53:49.596203 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt"
Mar 10 18:53:49 crc kubenswrapper[4861]: I0310 18:53:49.616616 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config"
Mar 10 18:53:49 crc kubenswrapper[4861]: I0310 18:53:49.682842 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt"
Mar 10 18:53:49 crc kubenswrapper[4861]: I0310 18:53:49.892243 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh"
Mar 10 18:53:49 crc kubenswrapper[4861]: I0310 18:53:49.912634 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt"
Mar 10 18:53:50 crc kubenswrapper[4861]: I0310 18:53:50.065664 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls"
Mar 10 18:53:50 crc kubenswrapper[4861]: I0310 18:53:50.090264 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt"
Mar 10 18:53:50 crc kubenswrapper[4861]: I0310 18:53:50.100813 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt"
Mar 10 18:53:50 crc kubenswrapper[4861]: I0310 18:53:50.130671 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default"
Mar 10 18:53:50 crc kubenswrapper[4861]: I0310 18:53:50.173741 4861 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Mar 10 18:53:50 crc kubenswrapper[4861]: I0310 18:53:50.189149 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Mar 10 18:53:50 crc kubenswrapper[4861]: I0310 18:53:50.229092 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Mar 10 18:53:50 crc kubenswrapper[4861]: I0310 18:53:50.261816 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Mar 10 18:53:50 crc kubenswrapper[4861]: I0310 18:53:50.524674 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Mar 10 18:53:50 crc kubenswrapper[4861]: I0310 18:53:50.655996 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Mar 10 18:53:50 crc kubenswrapper[4861]: I0310 18:53:50.695905 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Mar 10 18:53:50 crc kubenswrapper[4861]: I0310 18:53:50.711430 4861 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Mar 10 18:53:50 crc kubenswrapper[4861]: I0310 18:53:50.771577 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Mar 10 18:53:50 crc kubenswrapper[4861]: I0310 18:53:50.808663 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Mar 10 18:53:50 crc kubenswrapper[4861]: I0310 18:53:50.902466 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Mar 10 18:53:50 crc kubenswrapper[4861]: I0310 
18:53:50.977507 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Mar 10 18:53:51 crc kubenswrapper[4861]: I0310 18:53:51.093270 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Mar 10 18:53:51 crc kubenswrapper[4861]: I0310 18:53:51.248474 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Mar 10 18:53:51 crc kubenswrapper[4861]: I0310 18:53:51.377060 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Mar 10 18:53:51 crc kubenswrapper[4861]: I0310 18:53:51.377388 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Mar 10 18:53:51 crc kubenswrapper[4861]: I0310 18:53:51.395382 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Mar 10 18:53:51 crc kubenswrapper[4861]: I0310 18:53:51.485336 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Mar 10 18:53:51 crc kubenswrapper[4861]: I0310 18:53:51.491511 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Mar 10 18:53:51 crc kubenswrapper[4861]: I0310 18:53:51.555848 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Mar 10 18:53:51 crc kubenswrapper[4861]: I0310 18:53:51.656643 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Mar 10 18:53:51 crc kubenswrapper[4861]: I0310 18:53:51.836603 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" 
Mar 10 18:53:51 crc kubenswrapper[4861]: I0310 18:53:51.845220 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Mar 10 18:53:51 crc kubenswrapper[4861]: I0310 18:53:51.900536 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Mar 10 18:53:51 crc kubenswrapper[4861]: I0310 18:53:51.915485 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Mar 10 18:53:51 crc kubenswrapper[4861]: I0310 18:53:51.958960 4861 scope.go:117] "RemoveContainer" containerID="58f6245530cc81281a9bbfe29faf7252a4681e72eb2f0ec7dfa798b9e2c11acb" Mar 10 18:53:51 crc kubenswrapper[4861]: I0310 18:53:51.966191 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Mar 10 18:53:51 crc kubenswrapper[4861]: I0310 18:53:51.967744 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Mar 10 18:53:52 crc kubenswrapper[4861]: I0310 18:53:52.241176 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-diagnostics_network-check-source-55646444c4-trplf_9d751cbb-f2e2-430d-9754-c882a5e924a5/check-endpoints/2.log" Mar 10 18:53:52 crc kubenswrapper[4861]: I0310 18:53:52.241470 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"c403d991c0f4d1e23ec6e829f9031d853d144949afedf21b303f102eda678df0"} Mar 10 18:53:52 crc kubenswrapper[4861]: I0310 18:53:52.241692 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Mar 10 18:53:52 crc kubenswrapper[4861]: I0310 18:53:52.254669 4861 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Mar 10 18:53:52 crc kubenswrapper[4861]: I0310 18:53:52.280301 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Mar 10 18:53:52 crc kubenswrapper[4861]: I0310 18:53:52.306825 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Mar 10 18:53:52 crc kubenswrapper[4861]: I0310 18:53:52.314924 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Mar 10 18:53:52 crc kubenswrapper[4861]: I0310 18:53:52.394286 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Mar 10 18:53:52 crc kubenswrapper[4861]: I0310 18:53:52.422085 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Mar 10 18:53:52 crc kubenswrapper[4861]: I0310 18:53:52.459150 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Mar 10 18:53:52 crc kubenswrapper[4861]: I0310 18:53:52.484293 4861 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Mar 10 18:53:52 crc kubenswrapper[4861]: I0310 18:53:52.484555 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" containerID="cri-o://fcf03bd37092f8d6b82dc0a9555d3f0727a5c34136ec8cc4a4dfe81f4e42c8cf" gracePeriod=5 Mar 10 18:53:52 crc kubenswrapper[4861]: I0310 18:53:52.503308 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Mar 10 18:53:52 crc kubenswrapper[4861]: I0310 18:53:52.567323 4861 reflector.go:368] Caches populated 
for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Mar 10 18:53:52 crc kubenswrapper[4861]: I0310 18:53:52.633862 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Mar 10 18:53:52 crc kubenswrapper[4861]: I0310 18:53:52.673530 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Mar 10 18:53:52 crc kubenswrapper[4861]: I0310 18:53:52.713030 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Mar 10 18:53:52 crc kubenswrapper[4861]: I0310 18:53:52.783059 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Mar 10 18:53:52 crc kubenswrapper[4861]: I0310 18:53:52.876876 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Mar 10 18:53:52 crc kubenswrapper[4861]: I0310 18:53:52.961491 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Mar 10 18:53:52 crc kubenswrapper[4861]: I0310 18:53:52.967911 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Mar 10 18:53:53 crc kubenswrapper[4861]: I0310 18:53:53.107950 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Mar 10 18:53:53 crc kubenswrapper[4861]: I0310 18:53:53.183859 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Mar 10 18:53:53 crc kubenswrapper[4861]: I0310 18:53:53.254665 4861 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Mar 10 18:53:53 crc kubenswrapper[4861]: I0310 18:53:53.281236 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Mar 10 18:53:53 crc kubenswrapper[4861]: I0310 18:53:53.344491 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Mar 10 18:53:53 crc kubenswrapper[4861]: I0310 18:53:53.510604 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Mar 10 18:53:53 crc kubenswrapper[4861]: I0310 18:53:53.631324 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Mar 10 18:53:53 crc kubenswrapper[4861]: I0310 18:53:53.646219 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Mar 10 18:53:53 crc kubenswrapper[4861]: I0310 18:53:53.754857 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Mar 10 18:53:53 crc kubenswrapper[4861]: I0310 18:53:53.774297 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Mar 10 18:53:53 crc kubenswrapper[4861]: I0310 18:53:53.866124 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Mar 10 18:53:53 crc kubenswrapper[4861]: I0310 18:53:53.933482 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Mar 10 18:53:54 crc kubenswrapper[4861]: I0310 18:53:54.037930 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Mar 10 18:53:54 crc kubenswrapper[4861]: I0310 18:53:54.234344 4861 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Mar 10 18:53:54 crc kubenswrapper[4861]: I0310 18:53:54.259211 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-diagnostics_network-check-source-55646444c4-trplf_9d751cbb-f2e2-430d-9754-c882a5e924a5/check-endpoints/3.log" Mar 10 18:53:54 crc kubenswrapper[4861]: I0310 18:53:54.260673 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-diagnostics_network-check-source-55646444c4-trplf_9d751cbb-f2e2-430d-9754-c882a5e924a5/check-endpoints/2.log" Mar 10 18:53:54 crc kubenswrapper[4861]: I0310 18:53:54.260957 4861 generic.go:334] "Generic (PLEG): container finished" podID="9d751cbb-f2e2-430d-9754-c882a5e924a5" containerID="c403d991c0f4d1e23ec6e829f9031d853d144949afedf21b303f102eda678df0" exitCode=255 Mar 10 18:53:54 crc kubenswrapper[4861]: I0310 18:53:54.261007 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerDied","Data":"c403d991c0f4d1e23ec6e829f9031d853d144949afedf21b303f102eda678df0"} Mar 10 18:53:54 crc kubenswrapper[4861]: I0310 18:53:54.261347 4861 scope.go:117] "RemoveContainer" containerID="58f6245530cc81281a9bbfe29faf7252a4681e72eb2f0ec7dfa798b9e2c11acb" Mar 10 18:53:54 crc kubenswrapper[4861]: I0310 18:53:54.262235 4861 scope.go:117] "RemoveContainer" containerID="c403d991c0f4d1e23ec6e829f9031d853d144949afedf21b303f102eda678df0" Mar 10 18:53:54 crc kubenswrapper[4861]: E0310 18:53:54.262831 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=check-endpoints pod=network-check-source-55646444c4-trplf_openshift-network-diagnostics(9d751cbb-f2e2-430d-9754-c882a5e924a5)\"" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 10 18:53:54 crc kubenswrapper[4861]: I0310 18:53:54.305184 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Mar 10 18:53:54 crc kubenswrapper[4861]: I0310 18:53:54.450991 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Mar 10 18:53:54 crc kubenswrapper[4861]: I0310 18:53:54.462285 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Mar 10 18:53:54 crc kubenswrapper[4861]: I0310 18:53:54.528326 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Mar 10 18:53:54 crc kubenswrapper[4861]: I0310 18:53:54.561741 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Mar 10 18:53:54 crc kubenswrapper[4861]: I0310 18:53:54.871644 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Mar 10 18:53:54 crc kubenswrapper[4861]: I0310 18:53:54.951485 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-654b4b78d5-t2k5b"] Mar 10 18:53:54 crc kubenswrapper[4861]: E0310 18:53:54.952187 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0c655cb9-a476-4e87-998b-d7d8e1cd8061" containerName="route-controller-manager" Mar 10 18:53:54 crc kubenswrapper[4861]: I0310 18:53:54.952215 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="0c655cb9-a476-4e87-998b-d7d8e1cd8061" containerName="route-controller-manager" Mar 10 18:53:54 crc kubenswrapper[4861]: E0310 18:53:54.952234 4861 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="ccb02870-5d18-43c9-950d-042c52c092c3" containerName="oauth-openshift" Mar 10 18:53:54 crc kubenswrapper[4861]: I0310 18:53:54.952247 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="ccb02870-5d18-43c9-950d-042c52c092c3" containerName="oauth-openshift" Mar 10 18:53:54 crc kubenswrapper[4861]: E0310 18:53:54.952266 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Mar 10 18:53:54 crc kubenswrapper[4861]: I0310 18:53:54.952283 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Mar 10 18:53:54 crc kubenswrapper[4861]: E0310 18:53:54.952311 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d9381aeb-824f-4653-bb6a-09dd0c24c994" containerName="controller-manager" Mar 10 18:53:54 crc kubenswrapper[4861]: I0310 18:53:54.952323 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="d9381aeb-824f-4653-bb6a-09dd0c24c994" containerName="controller-manager" Mar 10 18:53:54 crc kubenswrapper[4861]: E0310 18:53:54.952349 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bcc3a2e0-9115-453c-8af2-6001abd4012f" containerName="installer" Mar 10 18:53:54 crc kubenswrapper[4861]: I0310 18:53:54.952360 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="bcc3a2e0-9115-453c-8af2-6001abd4012f" containerName="installer" Mar 10 18:53:54 crc kubenswrapper[4861]: I0310 18:53:54.952521 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Mar 10 18:53:54 crc kubenswrapper[4861]: I0310 18:53:54.952551 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="bcc3a2e0-9115-453c-8af2-6001abd4012f" containerName="installer" Mar 10 18:53:54 crc kubenswrapper[4861]: I0310 18:53:54.952574 4861 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="d9381aeb-824f-4653-bb6a-09dd0c24c994" containerName="controller-manager" Mar 10 18:53:54 crc kubenswrapper[4861]: I0310 18:53:54.952616 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="ccb02870-5d18-43c9-950d-042c52c092c3" containerName="oauth-openshift" Mar 10 18:53:54 crc kubenswrapper[4861]: I0310 18:53:54.952638 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="0c655cb9-a476-4e87-998b-d7d8e1cd8061" containerName="route-controller-manager" Mar 10 18:53:54 crc kubenswrapper[4861]: I0310 18:53:54.953293 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-654b4b78d5-t2k5b" Mar 10 18:53:54 crc kubenswrapper[4861]: I0310 18:53:54.957924 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Mar 10 18:53:54 crc kubenswrapper[4861]: I0310 18:53:54.957958 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Mar 10 18:53:54 crc kubenswrapper[4861]: I0310 18:53:54.958968 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Mar 10 18:53:54 crc kubenswrapper[4861]: I0310 18:53:54.959313 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Mar 10 18:53:54 crc kubenswrapper[4861]: I0310 18:53:54.961988 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Mar 10 18:53:54 crc kubenswrapper[4861]: I0310 18:53:54.966893 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Mar 10 18:53:54 crc kubenswrapper[4861]: I0310 18:53:54.973799 4861 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-route-controller-manager/route-controller-manager-7d4d7545ff-q776g"] Mar 10 18:53:54 crc kubenswrapper[4861]: I0310 18:53:54.975087 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-66ff447955-t28nr"] Mar 10 18:53:54 crc kubenswrapper[4861]: I0310 18:53:54.975359 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7d4d7545ff-q776g" Mar 10 18:53:54 crc kubenswrapper[4861]: I0310 18:53:54.977175 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-66ff447955-t28nr" Mar 10 18:53:54 crc kubenswrapper[4861]: I0310 18:53:54.987519 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Mar 10 18:53:54 crc kubenswrapper[4861]: I0310 18:53:54.987859 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Mar 10 18:53:54 crc kubenswrapper[4861]: I0310 18:53:54.991340 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Mar 10 18:53:54 crc kubenswrapper[4861]: I0310 18:53:54.991726 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Mar 10 18:53:54 crc kubenswrapper[4861]: I0310 18:53:54.991815 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Mar 10 18:53:54 crc kubenswrapper[4861]: I0310 18:53:54.991750 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Mar 10 18:53:54 crc kubenswrapper[4861]: I0310 18:53:54.991973 4861 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Mar 10 18:53:54 crc kubenswrapper[4861]: I0310 18:53:54.992055 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Mar 10 18:53:54 crc kubenswrapper[4861]: I0310 18:53:54.992240 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Mar 10 18:53:54 crc kubenswrapper[4861]: I0310 18:53:54.992383 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Mar 10 18:53:54 crc kubenswrapper[4861]: I0310 18:53:54.994139 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Mar 10 18:53:54 crc kubenswrapper[4861]: I0310 18:53:54.994983 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Mar 10 18:53:54 crc kubenswrapper[4861]: I0310 18:53:54.995157 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Mar 10 18:53:54 crc kubenswrapper[4861]: I0310 18:53:54.995196 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Mar 10 18:53:54 crc kubenswrapper[4861]: I0310 18:53:54.995339 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Mar 10 18:53:54 crc kubenswrapper[4861]: I0310 18:53:54.995175 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Mar 10 18:53:54 crc kubenswrapper[4861]: I0310 18:53:54.995599 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Mar 10 18:53:54 crc 
kubenswrapper[4861]: I0310 18:53:54.995919 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Mar 10 18:53:54 crc kubenswrapper[4861]: I0310 18:53:54.996407 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Mar 10 18:53:55 crc kubenswrapper[4861]: I0310 18:53:55.004304 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-654b4b78d5-t2k5b"] Mar 10 18:53:55 crc kubenswrapper[4861]: I0310 18:53:55.027229 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Mar 10 18:53:55 crc kubenswrapper[4861]: I0310 18:53:55.030654 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-66ff447955-t28nr"] Mar 10 18:53:55 crc kubenswrapper[4861]: I0310 18:53:55.032459 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Mar 10 18:53:55 crc kubenswrapper[4861]: I0310 18:53:55.038046 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7d4d7545ff-q776g"] Mar 10 18:53:55 crc kubenswrapper[4861]: I0310 18:53:55.049131 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Mar 10 18:53:55 crc kubenswrapper[4861]: I0310 18:53:55.144649 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/04794175-2367-4623-a34b-eca21758f961-serving-cert\") pod \"route-controller-manager-7d4d7545ff-q776g\" (UID: \"04794175-2367-4623-a34b-eca21758f961\") " pod="openshift-route-controller-manager/route-controller-manager-7d4d7545ff-q776g" Mar 10 18:53:55 crc 
kubenswrapper[4861]: I0310 18:53:55.144772 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cfh72\" (UniqueName: \"kubernetes.io/projected/ca661027-e6db-4955-a488-dcde879b4a12-kube-api-access-cfh72\") pod \"oauth-openshift-66ff447955-t28nr\" (UID: \"ca661027-e6db-4955-a488-dcde879b4a12\") " pod="openshift-authentication/oauth-openshift-66ff447955-t28nr" Mar 10 18:53:55 crc kubenswrapper[4861]: I0310 18:53:55.144824 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qrxw7\" (UniqueName: \"kubernetes.io/projected/04794175-2367-4623-a34b-eca21758f961-kube-api-access-qrxw7\") pod \"route-controller-manager-7d4d7545ff-q776g\" (UID: \"04794175-2367-4623-a34b-eca21758f961\") " pod="openshift-route-controller-manager/route-controller-manager-7d4d7545ff-q776g" Mar 10 18:53:55 crc kubenswrapper[4861]: I0310 18:53:55.144945 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/04794175-2367-4623-a34b-eca21758f961-config\") pod \"route-controller-manager-7d4d7545ff-q776g\" (UID: \"04794175-2367-4623-a34b-eca21758f961\") " pod="openshift-route-controller-manager/route-controller-manager-7d4d7545ff-q776g" Mar 10 18:53:55 crc kubenswrapper[4861]: I0310 18:53:55.145029 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8pwmz\" (UniqueName: \"kubernetes.io/projected/da5fe39f-f430-4aae-abe9-c6dbeca41dc9-kube-api-access-8pwmz\") pod \"controller-manager-654b4b78d5-t2k5b\" (UID: \"da5fe39f-f430-4aae-abe9-c6dbeca41dc9\") " pod="openshift-controller-manager/controller-manager-654b4b78d5-t2k5b" Mar 10 18:53:55 crc kubenswrapper[4861]: I0310 18:53:55.145060 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" 
(UniqueName: \"kubernetes.io/configmap/ca661027-e6db-4955-a488-dcde879b4a12-v4-0-config-system-cliconfig\") pod \"oauth-openshift-66ff447955-t28nr\" (UID: \"ca661027-e6db-4955-a488-dcde879b4a12\") " pod="openshift-authentication/oauth-openshift-66ff447955-t28nr" Mar 10 18:53:55 crc kubenswrapper[4861]: I0310 18:53:55.145166 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/da5fe39f-f430-4aae-abe9-c6dbeca41dc9-proxy-ca-bundles\") pod \"controller-manager-654b4b78d5-t2k5b\" (UID: \"da5fe39f-f430-4aae-abe9-c6dbeca41dc9\") " pod="openshift-controller-manager/controller-manager-654b4b78d5-t2k5b" Mar 10 18:53:55 crc kubenswrapper[4861]: I0310 18:53:55.145249 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/ca661027-e6db-4955-a488-dcde879b4a12-audit-policies\") pod \"oauth-openshift-66ff447955-t28nr\" (UID: \"ca661027-e6db-4955-a488-dcde879b4a12\") " pod="openshift-authentication/oauth-openshift-66ff447955-t28nr" Mar 10 18:53:55 crc kubenswrapper[4861]: I0310 18:53:55.145294 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/ca661027-e6db-4955-a488-dcde879b4a12-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-66ff447955-t28nr\" (UID: \"ca661027-e6db-4955-a488-dcde879b4a12\") " pod="openshift-authentication/oauth-openshift-66ff447955-t28nr" Mar 10 18:53:55 crc kubenswrapper[4861]: I0310 18:53:55.145327 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/da5fe39f-f430-4aae-abe9-c6dbeca41dc9-client-ca\") pod \"controller-manager-654b4b78d5-t2k5b\" (UID: \"da5fe39f-f430-4aae-abe9-c6dbeca41dc9\") " 
pod="openshift-controller-manager/controller-manager-654b4b78d5-t2k5b" Mar 10 18:53:55 crc kubenswrapper[4861]: I0310 18:53:55.145383 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/ca661027-e6db-4955-a488-dcde879b4a12-v4-0-config-system-router-certs\") pod \"oauth-openshift-66ff447955-t28nr\" (UID: \"ca661027-e6db-4955-a488-dcde879b4a12\") " pod="openshift-authentication/oauth-openshift-66ff447955-t28nr" Mar 10 18:53:55 crc kubenswrapper[4861]: I0310 18:53:55.145420 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/ca661027-e6db-4955-a488-dcde879b4a12-v4-0-config-user-template-error\") pod \"oauth-openshift-66ff447955-t28nr\" (UID: \"ca661027-e6db-4955-a488-dcde879b4a12\") " pod="openshift-authentication/oauth-openshift-66ff447955-t28nr" Mar 10 18:53:55 crc kubenswrapper[4861]: I0310 18:53:55.145449 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/ca661027-e6db-4955-a488-dcde879b4a12-audit-dir\") pod \"oauth-openshift-66ff447955-t28nr\" (UID: \"ca661027-e6db-4955-a488-dcde879b4a12\") " pod="openshift-authentication/oauth-openshift-66ff447955-t28nr" Mar 10 18:53:55 crc kubenswrapper[4861]: I0310 18:53:55.145487 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/ca661027-e6db-4955-a488-dcde879b4a12-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-66ff447955-t28nr\" (UID: \"ca661027-e6db-4955-a488-dcde879b4a12\") " pod="openshift-authentication/oauth-openshift-66ff447955-t28nr" Mar 10 18:53:55 crc kubenswrapper[4861]: I0310 18:53:55.145559 4861 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/04794175-2367-4623-a34b-eca21758f961-client-ca\") pod \"route-controller-manager-7d4d7545ff-q776g\" (UID: \"04794175-2367-4623-a34b-eca21758f961\") " pod="openshift-route-controller-manager/route-controller-manager-7d4d7545ff-q776g" Mar 10 18:53:55 crc kubenswrapper[4861]: I0310 18:53:55.145593 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/ca661027-e6db-4955-a488-dcde879b4a12-v4-0-config-user-template-login\") pod \"oauth-openshift-66ff447955-t28nr\" (UID: \"ca661027-e6db-4955-a488-dcde879b4a12\") " pod="openshift-authentication/oauth-openshift-66ff447955-t28nr" Mar 10 18:53:55 crc kubenswrapper[4861]: I0310 18:53:55.145663 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/ca661027-e6db-4955-a488-dcde879b4a12-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-66ff447955-t28nr\" (UID: \"ca661027-e6db-4955-a488-dcde879b4a12\") " pod="openshift-authentication/oauth-openshift-66ff447955-t28nr" Mar 10 18:53:55 crc kubenswrapper[4861]: I0310 18:53:55.145687 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/ca661027-e6db-4955-a488-dcde879b4a12-v4-0-config-system-session\") pod \"oauth-openshift-66ff447955-t28nr\" (UID: \"ca661027-e6db-4955-a488-dcde879b4a12\") " pod="openshift-authentication/oauth-openshift-66ff447955-t28nr" Mar 10 18:53:55 crc kubenswrapper[4861]: I0310 18:53:55.145731 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/ca661027-e6db-4955-a488-dcde879b4a12-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-66ff447955-t28nr\" (UID: \"ca661027-e6db-4955-a488-dcde879b4a12\") " pod="openshift-authentication/oauth-openshift-66ff447955-t28nr" Mar 10 18:53:55 crc kubenswrapper[4861]: I0310 18:53:55.145755 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/ca661027-e6db-4955-a488-dcde879b4a12-v4-0-config-system-serving-cert\") pod \"oauth-openshift-66ff447955-t28nr\" (UID: \"ca661027-e6db-4955-a488-dcde879b4a12\") " pod="openshift-authentication/oauth-openshift-66ff447955-t28nr" Mar 10 18:53:55 crc kubenswrapper[4861]: I0310 18:53:55.145793 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/da5fe39f-f430-4aae-abe9-c6dbeca41dc9-config\") pod \"controller-manager-654b4b78d5-t2k5b\" (UID: \"da5fe39f-f430-4aae-abe9-c6dbeca41dc9\") " pod="openshift-controller-manager/controller-manager-654b4b78d5-t2k5b" Mar 10 18:53:55 crc kubenswrapper[4861]: I0310 18:53:55.145851 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/da5fe39f-f430-4aae-abe9-c6dbeca41dc9-serving-cert\") pod \"controller-manager-654b4b78d5-t2k5b\" (UID: \"da5fe39f-f430-4aae-abe9-c6dbeca41dc9\") " pod="openshift-controller-manager/controller-manager-654b4b78d5-t2k5b" Mar 10 18:53:55 crc kubenswrapper[4861]: I0310 18:53:55.145879 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/ca661027-e6db-4955-a488-dcde879b4a12-v4-0-config-system-service-ca\") pod \"oauth-openshift-66ff447955-t28nr\" (UID: \"ca661027-e6db-4955-a488-dcde879b4a12\") " 
pod="openshift-authentication/oauth-openshift-66ff447955-t28nr" Mar 10 18:53:55 crc kubenswrapper[4861]: I0310 18:53:55.231260 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Mar 10 18:53:55 crc kubenswrapper[4861]: I0310 18:53:55.247607 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ca661027-e6db-4955-a488-dcde879b4a12-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-66ff447955-t28nr\" (UID: \"ca661027-e6db-4955-a488-dcde879b4a12\") " pod="openshift-authentication/oauth-openshift-66ff447955-t28nr" Mar 10 18:53:55 crc kubenswrapper[4861]: I0310 18:53:55.247932 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/ca661027-e6db-4955-a488-dcde879b4a12-v4-0-config-system-serving-cert\") pod \"oauth-openshift-66ff447955-t28nr\" (UID: \"ca661027-e6db-4955-a488-dcde879b4a12\") " pod="openshift-authentication/oauth-openshift-66ff447955-t28nr" Mar 10 18:53:55 crc kubenswrapper[4861]: I0310 18:53:55.248117 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/da5fe39f-f430-4aae-abe9-c6dbeca41dc9-config\") pod \"controller-manager-654b4b78d5-t2k5b\" (UID: \"da5fe39f-f430-4aae-abe9-c6dbeca41dc9\") " pod="openshift-controller-manager/controller-manager-654b4b78d5-t2k5b" Mar 10 18:53:55 crc kubenswrapper[4861]: I0310 18:53:55.248313 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/da5fe39f-f430-4aae-abe9-c6dbeca41dc9-serving-cert\") pod \"controller-manager-654b4b78d5-t2k5b\" (UID: \"da5fe39f-f430-4aae-abe9-c6dbeca41dc9\") " pod="openshift-controller-manager/controller-manager-654b4b78d5-t2k5b" Mar 10 18:53:55 crc 
kubenswrapper[4861]: I0310 18:53:55.248486 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/ca661027-e6db-4955-a488-dcde879b4a12-v4-0-config-system-service-ca\") pod \"oauth-openshift-66ff447955-t28nr\" (UID: \"ca661027-e6db-4955-a488-dcde879b4a12\") " pod="openshift-authentication/oauth-openshift-66ff447955-t28nr" Mar 10 18:53:55 crc kubenswrapper[4861]: I0310 18:53:55.248651 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/04794175-2367-4623-a34b-eca21758f961-serving-cert\") pod \"route-controller-manager-7d4d7545ff-q776g\" (UID: \"04794175-2367-4623-a34b-eca21758f961\") " pod="openshift-route-controller-manager/route-controller-manager-7d4d7545ff-q776g" Mar 10 18:53:55 crc kubenswrapper[4861]: I0310 18:53:55.248886 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cfh72\" (UniqueName: \"kubernetes.io/projected/ca661027-e6db-4955-a488-dcde879b4a12-kube-api-access-cfh72\") pod \"oauth-openshift-66ff447955-t28nr\" (UID: \"ca661027-e6db-4955-a488-dcde879b4a12\") " pod="openshift-authentication/oauth-openshift-66ff447955-t28nr" Mar 10 18:53:55 crc kubenswrapper[4861]: I0310 18:53:55.249065 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qrxw7\" (UniqueName: \"kubernetes.io/projected/04794175-2367-4623-a34b-eca21758f961-kube-api-access-qrxw7\") pod \"route-controller-manager-7d4d7545ff-q776g\" (UID: \"04794175-2367-4623-a34b-eca21758f961\") " pod="openshift-route-controller-manager/route-controller-manager-7d4d7545ff-q776g" Mar 10 18:53:55 crc kubenswrapper[4861]: I0310 18:53:55.249240 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/04794175-2367-4623-a34b-eca21758f961-config\") pod 
\"route-controller-manager-7d4d7545ff-q776g\" (UID: \"04794175-2367-4623-a34b-eca21758f961\") " pod="openshift-route-controller-manager/route-controller-manager-7d4d7545ff-q776g" Mar 10 18:53:55 crc kubenswrapper[4861]: I0310 18:53:55.249413 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8pwmz\" (UniqueName: \"kubernetes.io/projected/da5fe39f-f430-4aae-abe9-c6dbeca41dc9-kube-api-access-8pwmz\") pod \"controller-manager-654b4b78d5-t2k5b\" (UID: \"da5fe39f-f430-4aae-abe9-c6dbeca41dc9\") " pod="openshift-controller-manager/controller-manager-654b4b78d5-t2k5b" Mar 10 18:53:55 crc kubenswrapper[4861]: I0310 18:53:55.249583 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/ca661027-e6db-4955-a488-dcde879b4a12-v4-0-config-system-cliconfig\") pod \"oauth-openshift-66ff447955-t28nr\" (UID: \"ca661027-e6db-4955-a488-dcde879b4a12\") " pod="openshift-authentication/oauth-openshift-66ff447955-t28nr" Mar 10 18:53:55 crc kubenswrapper[4861]: I0310 18:53:55.249775 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/da5fe39f-f430-4aae-abe9-c6dbeca41dc9-proxy-ca-bundles\") pod \"controller-manager-654b4b78d5-t2k5b\" (UID: \"da5fe39f-f430-4aae-abe9-c6dbeca41dc9\") " pod="openshift-controller-manager/controller-manager-654b4b78d5-t2k5b" Mar 10 18:53:55 crc kubenswrapper[4861]: I0310 18:53:55.249955 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/ca661027-e6db-4955-a488-dcde879b4a12-audit-policies\") pod \"oauth-openshift-66ff447955-t28nr\" (UID: \"ca661027-e6db-4955-a488-dcde879b4a12\") " pod="openshift-authentication/oauth-openshift-66ff447955-t28nr" Mar 10 18:53:55 crc kubenswrapper[4861]: I0310 18:53:55.250178 4861 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/da5fe39f-f430-4aae-abe9-c6dbeca41dc9-client-ca\") pod \"controller-manager-654b4b78d5-t2k5b\" (UID: \"da5fe39f-f430-4aae-abe9-c6dbeca41dc9\") " pod="openshift-controller-manager/controller-manager-654b4b78d5-t2k5b" Mar 10 18:53:55 crc kubenswrapper[4861]: I0310 18:53:55.250346 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/ca661027-e6db-4955-a488-dcde879b4a12-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-66ff447955-t28nr\" (UID: \"ca661027-e6db-4955-a488-dcde879b4a12\") " pod="openshift-authentication/oauth-openshift-66ff447955-t28nr" Mar 10 18:53:55 crc kubenswrapper[4861]: I0310 18:53:55.250513 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/ca661027-e6db-4955-a488-dcde879b4a12-v4-0-config-system-router-certs\") pod \"oauth-openshift-66ff447955-t28nr\" (UID: \"ca661027-e6db-4955-a488-dcde879b4a12\") " pod="openshift-authentication/oauth-openshift-66ff447955-t28nr" Mar 10 18:53:55 crc kubenswrapper[4861]: I0310 18:53:55.250679 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/ca661027-e6db-4955-a488-dcde879b4a12-v4-0-config-user-template-error\") pod \"oauth-openshift-66ff447955-t28nr\" (UID: \"ca661027-e6db-4955-a488-dcde879b4a12\") " pod="openshift-authentication/oauth-openshift-66ff447955-t28nr" Mar 10 18:53:55 crc kubenswrapper[4861]: I0310 18:53:55.250877 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/ca661027-e6db-4955-a488-dcde879b4a12-audit-dir\") pod \"oauth-openshift-66ff447955-t28nr\" (UID: \"ca661027-e6db-4955-a488-dcde879b4a12\") " 
pod="openshift-authentication/oauth-openshift-66ff447955-t28nr" Mar 10 18:53:55 crc kubenswrapper[4861]: I0310 18:53:55.251056 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/ca661027-e6db-4955-a488-dcde879b4a12-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-66ff447955-t28nr\" (UID: \"ca661027-e6db-4955-a488-dcde879b4a12\") " pod="openshift-authentication/oauth-openshift-66ff447955-t28nr" Mar 10 18:53:55 crc kubenswrapper[4861]: I0310 18:53:55.251231 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/04794175-2367-4623-a34b-eca21758f961-client-ca\") pod \"route-controller-manager-7d4d7545ff-q776g\" (UID: \"04794175-2367-4623-a34b-eca21758f961\") " pod="openshift-route-controller-manager/route-controller-manager-7d4d7545ff-q776g" Mar 10 18:53:55 crc kubenswrapper[4861]: I0310 18:53:55.251380 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/ca661027-e6db-4955-a488-dcde879b4a12-v4-0-config-user-template-login\") pod \"oauth-openshift-66ff447955-t28nr\" (UID: \"ca661027-e6db-4955-a488-dcde879b4a12\") " pod="openshift-authentication/oauth-openshift-66ff447955-t28nr" Mar 10 18:53:55 crc kubenswrapper[4861]: I0310 18:53:55.251532 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/ca661027-e6db-4955-a488-dcde879b4a12-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-66ff447955-t28nr\" (UID: \"ca661027-e6db-4955-a488-dcde879b4a12\") " pod="openshift-authentication/oauth-openshift-66ff447955-t28nr" Mar 10 18:53:55 crc kubenswrapper[4861]: I0310 18:53:55.251693 4861 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/ca661027-e6db-4955-a488-dcde879b4a12-v4-0-config-system-session\") pod \"oauth-openshift-66ff447955-t28nr\" (UID: \"ca661027-e6db-4955-a488-dcde879b4a12\") " pod="openshift-authentication/oauth-openshift-66ff447955-t28nr" Mar 10 18:53:55 crc kubenswrapper[4861]: I0310 18:53:55.250900 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/ca661027-e6db-4955-a488-dcde879b4a12-v4-0-config-system-cliconfig\") pod \"oauth-openshift-66ff447955-t28nr\" (UID: \"ca661027-e6db-4955-a488-dcde879b4a12\") " pod="openshift-authentication/oauth-openshift-66ff447955-t28nr" Mar 10 18:53:55 crc kubenswrapper[4861]: I0310 18:53:55.251327 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/ca661027-e6db-4955-a488-dcde879b4a12-audit-dir\") pod \"oauth-openshift-66ff447955-t28nr\" (UID: \"ca661027-e6db-4955-a488-dcde879b4a12\") " pod="openshift-authentication/oauth-openshift-66ff447955-t28nr" Mar 10 18:53:55 crc kubenswrapper[4861]: I0310 18:53:55.249834 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/ca661027-e6db-4955-a488-dcde879b4a12-v4-0-config-system-service-ca\") pod \"oauth-openshift-66ff447955-t28nr\" (UID: \"ca661027-e6db-4955-a488-dcde879b4a12\") " pod="openshift-authentication/oauth-openshift-66ff447955-t28nr" Mar 10 18:53:55 crc kubenswrapper[4861]: I0310 18:53:55.251703 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/da5fe39f-f430-4aae-abe9-c6dbeca41dc9-client-ca\") pod \"controller-manager-654b4b78d5-t2k5b\" (UID: \"da5fe39f-f430-4aae-abe9-c6dbeca41dc9\") " pod="openshift-controller-manager/controller-manager-654b4b78d5-t2k5b" Mar 10 18:53:55 crc 
kubenswrapper[4861]: I0310 18:53:55.249514 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ca661027-e6db-4955-a488-dcde879b4a12-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-66ff447955-t28nr\" (UID: \"ca661027-e6db-4955-a488-dcde879b4a12\") " pod="openshift-authentication/oauth-openshift-66ff447955-t28nr" Mar 10 18:53:55 crc kubenswrapper[4861]: I0310 18:53:55.251285 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/da5fe39f-f430-4aae-abe9-c6dbeca41dc9-proxy-ca-bundles\") pod \"controller-manager-654b4b78d5-t2k5b\" (UID: \"da5fe39f-f430-4aae-abe9-c6dbeca41dc9\") " pod="openshift-controller-manager/controller-manager-654b4b78d5-t2k5b" Mar 10 18:53:55 crc kubenswrapper[4861]: I0310 18:53:55.251982 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/da5fe39f-f430-4aae-abe9-c6dbeca41dc9-config\") pod \"controller-manager-654b4b78d5-t2k5b\" (UID: \"da5fe39f-f430-4aae-abe9-c6dbeca41dc9\") " pod="openshift-controller-manager/controller-manager-654b4b78d5-t2k5b" Mar 10 18:53:55 crc kubenswrapper[4861]: I0310 18:53:55.251291 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/ca661027-e6db-4955-a488-dcde879b4a12-audit-policies\") pod \"oauth-openshift-66ff447955-t28nr\" (UID: \"ca661027-e6db-4955-a488-dcde879b4a12\") " pod="openshift-authentication/oauth-openshift-66ff447955-t28nr" Mar 10 18:53:55 crc kubenswrapper[4861]: I0310 18:53:55.251983 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/04794175-2367-4623-a34b-eca21758f961-config\") pod \"route-controller-manager-7d4d7545ff-q776g\" (UID: \"04794175-2367-4623-a34b-eca21758f961\") " 
pod="openshift-route-controller-manager/route-controller-manager-7d4d7545ff-q776g" Mar 10 18:53:55 crc kubenswrapper[4861]: I0310 18:53:55.252194 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/04794175-2367-4623-a34b-eca21758f961-client-ca\") pod \"route-controller-manager-7d4d7545ff-q776g\" (UID: \"04794175-2367-4623-a34b-eca21758f961\") " pod="openshift-route-controller-manager/route-controller-manager-7d4d7545ff-q776g" Mar 10 18:53:55 crc kubenswrapper[4861]: I0310 18:53:55.257974 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/ca661027-e6db-4955-a488-dcde879b4a12-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-66ff447955-t28nr\" (UID: \"ca661027-e6db-4955-a488-dcde879b4a12\") " pod="openshift-authentication/oauth-openshift-66ff447955-t28nr" Mar 10 18:53:55 crc kubenswrapper[4861]: I0310 18:53:55.261352 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/da5fe39f-f430-4aae-abe9-c6dbeca41dc9-serving-cert\") pod \"controller-manager-654b4b78d5-t2k5b\" (UID: \"da5fe39f-f430-4aae-abe9-c6dbeca41dc9\") " pod="openshift-controller-manager/controller-manager-654b4b78d5-t2k5b" Mar 10 18:53:55 crc kubenswrapper[4861]: I0310 18:53:55.270346 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/04794175-2367-4623-a34b-eca21758f961-serving-cert\") pod \"route-controller-manager-7d4d7545ff-q776g\" (UID: \"04794175-2367-4623-a34b-eca21758f961\") " pod="openshift-route-controller-manager/route-controller-manager-7d4d7545ff-q776g" Mar 10 18:53:55 crc kubenswrapper[4861]: I0310 18:53:55.275024 4861 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-network-diagnostics_network-check-source-55646444c4-trplf_9d751cbb-f2e2-430d-9754-c882a5e924a5/check-endpoints/3.log" Mar 10 18:53:55 crc kubenswrapper[4861]: I0310 18:53:55.280539 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/ca661027-e6db-4955-a488-dcde879b4a12-v4-0-config-system-serving-cert\") pod \"oauth-openshift-66ff447955-t28nr\" (UID: \"ca661027-e6db-4955-a488-dcde879b4a12\") " pod="openshift-authentication/oauth-openshift-66ff447955-t28nr" Mar 10 18:53:55 crc kubenswrapper[4861]: I0310 18:53:55.281676 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/ca661027-e6db-4955-a488-dcde879b4a12-v4-0-config-system-router-certs\") pod \"oauth-openshift-66ff447955-t28nr\" (UID: \"ca661027-e6db-4955-a488-dcde879b4a12\") " pod="openshift-authentication/oauth-openshift-66ff447955-t28nr" Mar 10 18:53:55 crc kubenswrapper[4861]: I0310 18:53:55.285374 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/ca661027-e6db-4955-a488-dcde879b4a12-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-66ff447955-t28nr\" (UID: \"ca661027-e6db-4955-a488-dcde879b4a12\") " pod="openshift-authentication/oauth-openshift-66ff447955-t28nr" Mar 10 18:53:55 crc kubenswrapper[4861]: I0310 18:53:55.292798 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8pwmz\" (UniqueName: \"kubernetes.io/projected/da5fe39f-f430-4aae-abe9-c6dbeca41dc9-kube-api-access-8pwmz\") pod \"controller-manager-654b4b78d5-t2k5b\" (UID: \"da5fe39f-f430-4aae-abe9-c6dbeca41dc9\") " pod="openshift-controller-manager/controller-manager-654b4b78d5-t2k5b" Mar 10 18:53:55 crc kubenswrapper[4861]: I0310 18:53:55.293216 4861 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/ca661027-e6db-4955-a488-dcde879b4a12-v4-0-config-user-template-error\") pod \"oauth-openshift-66ff447955-t28nr\" (UID: \"ca661027-e6db-4955-a488-dcde879b4a12\") " pod="openshift-authentication/oauth-openshift-66ff447955-t28nr" Mar 10 18:53:55 crc kubenswrapper[4861]: I0310 18:53:55.296340 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/ca661027-e6db-4955-a488-dcde879b4a12-v4-0-config-system-session\") pod \"oauth-openshift-66ff447955-t28nr\" (UID: \"ca661027-e6db-4955-a488-dcde879b4a12\") " pod="openshift-authentication/oauth-openshift-66ff447955-t28nr" Mar 10 18:53:55 crc kubenswrapper[4861]: I0310 18:53:55.296541 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/ca661027-e6db-4955-a488-dcde879b4a12-v4-0-config-user-template-login\") pod \"oauth-openshift-66ff447955-t28nr\" (UID: \"ca661027-e6db-4955-a488-dcde879b4a12\") " pod="openshift-authentication/oauth-openshift-66ff447955-t28nr" Mar 10 18:53:55 crc kubenswrapper[4861]: I0310 18:53:55.297354 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qrxw7\" (UniqueName: \"kubernetes.io/projected/04794175-2367-4623-a34b-eca21758f961-kube-api-access-qrxw7\") pod \"route-controller-manager-7d4d7545ff-q776g\" (UID: \"04794175-2367-4623-a34b-eca21758f961\") " pod="openshift-route-controller-manager/route-controller-manager-7d4d7545ff-q776g" Mar 10 18:53:55 crc kubenswrapper[4861]: I0310 18:53:55.299378 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/ca661027-e6db-4955-a488-dcde879b4a12-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-66ff447955-t28nr\" (UID: 
\"ca661027-e6db-4955-a488-dcde879b4a12\") " pod="openshift-authentication/oauth-openshift-66ff447955-t28nr" Mar 10 18:53:55 crc kubenswrapper[4861]: I0310 18:53:55.309453 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cfh72\" (UniqueName: \"kubernetes.io/projected/ca661027-e6db-4955-a488-dcde879b4a12-kube-api-access-cfh72\") pod \"oauth-openshift-66ff447955-t28nr\" (UID: \"ca661027-e6db-4955-a488-dcde879b4a12\") " pod="openshift-authentication/oauth-openshift-66ff447955-t28nr" Mar 10 18:53:55 crc kubenswrapper[4861]: I0310 18:53:55.317936 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7d4d7545ff-q776g" Mar 10 18:53:55 crc kubenswrapper[4861]: I0310 18:53:55.335168 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-66ff447955-t28nr" Mar 10 18:53:55 crc kubenswrapper[4861]: I0310 18:53:55.359541 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Mar 10 18:53:55 crc kubenswrapper[4861]: I0310 18:53:55.505070 4861 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Mar 10 18:53:55 crc kubenswrapper[4861]: I0310 18:53:55.566039 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-66ff447955-t28nr"] Mar 10 18:53:55 crc kubenswrapper[4861]: I0310 18:53:55.587393 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-654b4b78d5-t2k5b" Mar 10 18:53:55 crc kubenswrapper[4861]: I0310 18:53:55.608352 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7d4d7545ff-q776g"] Mar 10 18:53:55 crc kubenswrapper[4861]: I0310 18:53:55.622082 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Mar 10 18:53:55 crc kubenswrapper[4861]: W0310 18:53:55.641924 4861 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod04794175_2367_4623_a34b_eca21758f961.slice/crio-bf64aff7d3af54c03686f29df1b8a5388fb85bbe4f5e4cc1ee4dd9a5ca3825ca WatchSource:0}: Error finding container bf64aff7d3af54c03686f29df1b8a5388fb85bbe4f5e4cc1ee4dd9a5ca3825ca: Status 404 returned error can't find the container with id bf64aff7d3af54c03686f29df1b8a5388fb85bbe4f5e4cc1ee4dd9a5ca3825ca Mar 10 18:53:55 crc kubenswrapper[4861]: I0310 18:53:55.762534 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Mar 10 18:53:55 crc kubenswrapper[4861]: I0310 18:53:55.833562 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-654b4b78d5-t2k5b"] Mar 10 18:53:55 crc kubenswrapper[4861]: W0310 18:53:55.862635 4861 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podda5fe39f_f430_4aae_abe9_c6dbeca41dc9.slice/crio-9bf9ac03cf2550e50aae8910a3e603c98df8d0f32bf2740c2244b2509b1eb216 WatchSource:0}: Error finding container 9bf9ac03cf2550e50aae8910a3e603c98df8d0f32bf2740c2244b2509b1eb216: Status 404 returned error can't find the container with id 9bf9ac03cf2550e50aae8910a3e603c98df8d0f32bf2740c2244b2509b1eb216 Mar 10 18:53:55 crc kubenswrapper[4861]: I0310 18:53:55.984460 4861 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Mar 10 18:53:56 crc kubenswrapper[4861]: I0310 18:53:56.281314 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7d4d7545ff-q776g" event={"ID":"04794175-2367-4623-a34b-eca21758f961","Type":"ContainerStarted","Data":"d23fd3f34ba4f479e8fc5948f841cefd96174a560643da925fb390253cd2ea62"} Mar 10 18:53:56 crc kubenswrapper[4861]: I0310 18:53:56.281356 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7d4d7545ff-q776g" event={"ID":"04794175-2367-4623-a34b-eca21758f961","Type":"ContainerStarted","Data":"bf64aff7d3af54c03686f29df1b8a5388fb85bbe4f5e4cc1ee4dd9a5ca3825ca"} Mar 10 18:53:56 crc kubenswrapper[4861]: I0310 18:53:56.281544 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-7d4d7545ff-q776g" Mar 10 18:53:56 crc kubenswrapper[4861]: I0310 18:53:56.282653 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-66ff447955-t28nr" event={"ID":"ca661027-e6db-4955-a488-dcde879b4a12","Type":"ContainerStarted","Data":"ba808af81771255f7816b4b4e52b48931398fb0e3bf9f807fbc477fb950e0e52"} Mar 10 18:53:56 crc kubenswrapper[4861]: I0310 18:53:56.282690 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-66ff447955-t28nr" event={"ID":"ca661027-e6db-4955-a488-dcde879b4a12","Type":"ContainerStarted","Data":"4b3bd2ac5260a6f6a6f78afb8651469438643d806b4849fe392325c3bf0a467b"} Mar 10 18:53:56 crc kubenswrapper[4861]: I0310 18:53:56.282822 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-66ff447955-t28nr" Mar 10 18:53:56 crc kubenswrapper[4861]: I0310 18:53:56.284816 
4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-654b4b78d5-t2k5b" event={"ID":"da5fe39f-f430-4aae-abe9-c6dbeca41dc9","Type":"ContainerStarted","Data":"fd0dd144aa8e142cb0b990d917731aed5192237000fcb1298e7e03fc4af5fcb9"} Mar 10 18:53:56 crc kubenswrapper[4861]: I0310 18:53:56.284844 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-654b4b78d5-t2k5b" event={"ID":"da5fe39f-f430-4aae-abe9-c6dbeca41dc9","Type":"ContainerStarted","Data":"9bf9ac03cf2550e50aae8910a3e603c98df8d0f32bf2740c2244b2509b1eb216"} Mar 10 18:53:56 crc kubenswrapper[4861]: I0310 18:53:56.285020 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-654b4b78d5-t2k5b" Mar 10 18:53:56 crc kubenswrapper[4861]: I0310 18:53:56.303671 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-7d4d7545ff-q776g" Mar 10 18:53:56 crc kubenswrapper[4861]: I0310 18:53:56.357997 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-7d4d7545ff-q776g" podStartSLOduration=49.357978188 podStartE2EDuration="49.357978188s" podCreationTimestamp="2026-03-10 18:53:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-10 18:53:56.318917499 +0000 UTC m=+380.082353479" watchObservedRunningTime="2026-03-10 18:53:56.357978188 +0000 UTC m=+380.121414148" Mar 10 18:53:56 crc kubenswrapper[4861]: I0310 18:53:56.372106 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-654b4b78d5-t2k5b" Mar 10 18:53:56 crc kubenswrapper[4861]: I0310 18:53:56.434394 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-authentication/oauth-openshift-66ff447955-t28nr" podStartSLOduration=56.434377778 podStartE2EDuration="56.434377778s" podCreationTimestamp="2026-03-10 18:53:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-10 18:53:56.433130523 +0000 UTC m=+380.196566493" watchObservedRunningTime="2026-03-10 18:53:56.434377778 +0000 UTC m=+380.197813738" Mar 10 18:53:56 crc kubenswrapper[4861]: I0310 18:53:56.454062 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-654b4b78d5-t2k5b" podStartSLOduration=50.454047686 podStartE2EDuration="50.454047686s" podCreationTimestamp="2026-03-10 18:53:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-10 18:53:56.450022504 +0000 UTC m=+380.213458464" watchObservedRunningTime="2026-03-10 18:53:56.454047686 +0000 UTC m=+380.217483646" Mar 10 18:53:56 crc kubenswrapper[4861]: I0310 18:53:56.461251 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Mar 10 18:53:56 crc kubenswrapper[4861]: I0310 18:53:56.556972 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-66ff447955-t28nr" Mar 10 18:53:57 crc kubenswrapper[4861]: I0310 18:53:57.281384 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Mar 10 18:53:58 crc kubenswrapper[4861]: I0310 18:53:58.086092 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Mar 10 18:53:58 crc kubenswrapper[4861]: I0310 18:53:58.086193 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 10 18:53:58 crc kubenswrapper[4861]: I0310 18:53:58.195192 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Mar 10 18:53:58 crc kubenswrapper[4861]: I0310 18:53:58.195526 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Mar 10 18:53:58 crc kubenswrapper[4861]: I0310 18:53:58.195679 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Mar 10 18:53:58 crc kubenswrapper[4861]: I0310 18:53:58.195803 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Mar 10 18:53:58 crc kubenswrapper[4861]: I0310 18:53:58.195919 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Mar 10 18:53:58 crc kubenswrapper[4861]: I0310 18:53:58.195313 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log" (OuterVolumeSpecName: 
"var-log") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 10 18:53:58 crc kubenswrapper[4861]: I0310 18:53:58.195824 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests" (OuterVolumeSpecName: "manifests") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 10 18:53:58 crc kubenswrapper[4861]: I0310 18:53:58.196052 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 10 18:53:58 crc kubenswrapper[4861]: I0310 18:53:58.196134 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock" (OuterVolumeSpecName: "var-lock") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 10 18:53:58 crc kubenswrapper[4861]: I0310 18:53:58.209953 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "pod-resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 10 18:53:58 crc kubenswrapper[4861]: I0310 18:53:58.296890 4861 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") on node \"crc\" DevicePath \"\"" Mar 10 18:53:58 crc kubenswrapper[4861]: I0310 18:53:58.296933 4861 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") on node \"crc\" DevicePath \"\"" Mar 10 18:53:58 crc kubenswrapper[4861]: I0310 18:53:58.296951 4861 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") on node \"crc\" DevicePath \"\"" Mar 10 18:53:58 crc kubenswrapper[4861]: I0310 18:53:58.296970 4861 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") on node \"crc\" DevicePath \"\"" Mar 10 18:53:58 crc kubenswrapper[4861]: I0310 18:53:58.296985 4861 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") on node \"crc\" DevicePath \"\"" Mar 10 18:53:58 crc kubenswrapper[4861]: I0310 18:53:58.297447 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Mar 10 18:53:58 crc kubenswrapper[4861]: I0310 18:53:58.297499 4861 generic.go:334] "Generic (PLEG): container finished" podID="f85e55b1a89d02b0cb034b1ea31ed45a" containerID="fcf03bd37092f8d6b82dc0a9555d3f0727a5c34136ec8cc4a4dfe81f4e42c8cf" exitCode=137 Mar 10 18:53:58 crc kubenswrapper[4861]: I0310 18:53:58.297893 4861 scope.go:117] "RemoveContainer" 
containerID="fcf03bd37092f8d6b82dc0a9555d3f0727a5c34136ec8cc4a4dfe81f4e42c8cf" Mar 10 18:53:58 crc kubenswrapper[4861]: I0310 18:53:58.297978 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 10 18:53:58 crc kubenswrapper[4861]: I0310 18:53:58.320412 4861 scope.go:117] "RemoveContainer" containerID="fcf03bd37092f8d6b82dc0a9555d3f0727a5c34136ec8cc4a4dfe81f4e42c8cf" Mar 10 18:53:58 crc kubenswrapper[4861]: E0310 18:53:58.320878 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fcf03bd37092f8d6b82dc0a9555d3f0727a5c34136ec8cc4a4dfe81f4e42c8cf\": container with ID starting with fcf03bd37092f8d6b82dc0a9555d3f0727a5c34136ec8cc4a4dfe81f4e42c8cf not found: ID does not exist" containerID="fcf03bd37092f8d6b82dc0a9555d3f0727a5c34136ec8cc4a4dfe81f4e42c8cf" Mar 10 18:53:58 crc kubenswrapper[4861]: I0310 18:53:58.320924 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fcf03bd37092f8d6b82dc0a9555d3f0727a5c34136ec8cc4a4dfe81f4e42c8cf"} err="failed to get container status \"fcf03bd37092f8d6b82dc0a9555d3f0727a5c34136ec8cc4a4dfe81f4e42c8cf\": rpc error: code = NotFound desc = could not find container \"fcf03bd37092f8d6b82dc0a9555d3f0727a5c34136ec8cc4a4dfe81f4e42c8cf\": container with ID starting with fcf03bd37092f8d6b82dc0a9555d3f0727a5c34136ec8cc4a4dfe81f4e42c8cf not found: ID does not exist" Mar 10 18:53:58 crc kubenswrapper[4861]: I0310 18:53:58.970377 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" path="/var/lib/kubelet/pods/f85e55b1a89d02b0cb034b1ea31ed45a/volumes" Mar 10 18:53:58 crc kubenswrapper[4861]: I0310 18:53:58.971153 4861 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="" Mar 10 18:53:58 crc kubenswrapper[4861]: 
I0310 18:53:58.985836 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Mar 10 18:53:58 crc kubenswrapper[4861]: I0310 18:53:58.986188 4861 kubelet.go:2649] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="d5a7dd36-bb0f-4cf9-8a16-ff74f3e27988" Mar 10 18:53:58 crc kubenswrapper[4861]: I0310 18:53:58.992986 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Mar 10 18:53:58 crc kubenswrapper[4861]: I0310 18:53:58.993051 4861 kubelet.go:2673] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="d5a7dd36-bb0f-4cf9-8a16-ff74f3e27988" Mar 10 18:54:00 crc kubenswrapper[4861]: I0310 18:54:00.163110 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29552814-kg8sv"] Mar 10 18:54:00 crc kubenswrapper[4861]: I0310 18:54:00.164603 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29552814-kg8sv" Mar 10 18:54:00 crc kubenswrapper[4861]: I0310 18:54:00.166121 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-gfbj2" Mar 10 18:54:00 crc kubenswrapper[4861]: I0310 18:54:00.167059 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 10 18:54:00 crc kubenswrapper[4861]: I0310 18:54:00.167215 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 10 18:54:00 crc kubenswrapper[4861]: I0310 18:54:00.180639 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29552814-kg8sv"] Mar 10 18:54:00 crc kubenswrapper[4861]: I0310 18:54:00.227145 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c5gsh\" (UniqueName: \"kubernetes.io/projected/ad870600-f1d1-4cf9-8c5f-e8389dbb5a81-kube-api-access-c5gsh\") pod \"auto-csr-approver-29552814-kg8sv\" (UID: \"ad870600-f1d1-4cf9-8c5f-e8389dbb5a81\") " pod="openshift-infra/auto-csr-approver-29552814-kg8sv" Mar 10 18:54:00 crc kubenswrapper[4861]: I0310 18:54:00.329215 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c5gsh\" (UniqueName: \"kubernetes.io/projected/ad870600-f1d1-4cf9-8c5f-e8389dbb5a81-kube-api-access-c5gsh\") pod \"auto-csr-approver-29552814-kg8sv\" (UID: \"ad870600-f1d1-4cf9-8c5f-e8389dbb5a81\") " pod="openshift-infra/auto-csr-approver-29552814-kg8sv" Mar 10 18:54:00 crc kubenswrapper[4861]: I0310 18:54:00.348640 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c5gsh\" (UniqueName: \"kubernetes.io/projected/ad870600-f1d1-4cf9-8c5f-e8389dbb5a81-kube-api-access-c5gsh\") pod \"auto-csr-approver-29552814-kg8sv\" (UID: \"ad870600-f1d1-4cf9-8c5f-e8389dbb5a81\") " 
pod="openshift-infra/auto-csr-approver-29552814-kg8sv" Mar 10 18:54:00 crc kubenswrapper[4861]: I0310 18:54:00.494571 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29552814-kg8sv" Mar 10 18:54:00 crc kubenswrapper[4861]: I0310 18:54:00.931933 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29552814-kg8sv"] Mar 10 18:54:01 crc kubenswrapper[4861]: I0310 18:54:01.317126 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29552814-kg8sv" event={"ID":"ad870600-f1d1-4cf9-8c5f-e8389dbb5a81","Type":"ContainerStarted","Data":"dacfc51035b37a03c9b0df2ba374740ea35d28df2040850211c58313d009fd1b"} Mar 10 18:54:03 crc kubenswrapper[4861]: I0310 18:54:03.335575 4861 generic.go:334] "Generic (PLEG): container finished" podID="ad870600-f1d1-4cf9-8c5f-e8389dbb5a81" containerID="4b7158fcfc5c45c4f40f351f272d87ae2c4be5097fcbd6a40488f48ed302d371" exitCode=0 Mar 10 18:54:03 crc kubenswrapper[4861]: I0310 18:54:03.335701 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29552814-kg8sv" event={"ID":"ad870600-f1d1-4cf9-8c5f-e8389dbb5a81","Type":"ContainerDied","Data":"4b7158fcfc5c45c4f40f351f272d87ae2c4be5097fcbd6a40488f48ed302d371"} Mar 10 18:54:04 crc kubenswrapper[4861]: I0310 18:54:04.759274 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29552814-kg8sv" Mar 10 18:54:04 crc kubenswrapper[4861]: I0310 18:54:04.796408 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c5gsh\" (UniqueName: \"kubernetes.io/projected/ad870600-f1d1-4cf9-8c5f-e8389dbb5a81-kube-api-access-c5gsh\") pod \"ad870600-f1d1-4cf9-8c5f-e8389dbb5a81\" (UID: \"ad870600-f1d1-4cf9-8c5f-e8389dbb5a81\") " Mar 10 18:54:04 crc kubenswrapper[4861]: I0310 18:54:04.813884 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ad870600-f1d1-4cf9-8c5f-e8389dbb5a81-kube-api-access-c5gsh" (OuterVolumeSpecName: "kube-api-access-c5gsh") pod "ad870600-f1d1-4cf9-8c5f-e8389dbb5a81" (UID: "ad870600-f1d1-4cf9-8c5f-e8389dbb5a81"). InnerVolumeSpecName "kube-api-access-c5gsh". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 10 18:54:04 crc kubenswrapper[4861]: I0310 18:54:04.898331 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c5gsh\" (UniqueName: \"kubernetes.io/projected/ad870600-f1d1-4cf9-8c5f-e8389dbb5a81-kube-api-access-c5gsh\") on node \"crc\" DevicePath \"\"" Mar 10 18:54:05 crc kubenswrapper[4861]: I0310 18:54:05.352227 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29552814-kg8sv" event={"ID":"ad870600-f1d1-4cf9-8c5f-e8389dbb5a81","Type":"ContainerDied","Data":"dacfc51035b37a03c9b0df2ba374740ea35d28df2040850211c58313d009fd1b"} Mar 10 18:54:05 crc kubenswrapper[4861]: I0310 18:54:05.352312 4861 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dacfc51035b37a03c9b0df2ba374740ea35d28df2040850211c58313d009fd1b" Mar 10 18:54:05 crc kubenswrapper[4861]: I0310 18:54:05.352435 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29552814-kg8sv" Mar 10 18:54:06 crc kubenswrapper[4861]: I0310 18:54:06.858630 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-654b4b78d5-t2k5b"] Mar 10 18:54:06 crc kubenswrapper[4861]: I0310 18:54:06.859043 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-654b4b78d5-t2k5b" podUID="da5fe39f-f430-4aae-abe9-c6dbeca41dc9" containerName="controller-manager" containerID="cri-o://fd0dd144aa8e142cb0b990d917731aed5192237000fcb1298e7e03fc4af5fcb9" gracePeriod=30 Mar 10 18:54:06 crc kubenswrapper[4861]: I0310 18:54:06.945325 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7d4d7545ff-q776g"] Mar 10 18:54:06 crc kubenswrapper[4861]: I0310 18:54:06.945627 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-7d4d7545ff-q776g" podUID="04794175-2367-4623-a34b-eca21758f961" containerName="route-controller-manager" containerID="cri-o://d23fd3f34ba4f479e8fc5948f841cefd96174a560643da925fb390253cd2ea62" gracePeriod=30 Mar 10 18:54:07 crc kubenswrapper[4861]: I0310 18:54:07.367405 4861 generic.go:334] "Generic (PLEG): container finished" podID="da5fe39f-f430-4aae-abe9-c6dbeca41dc9" containerID="fd0dd144aa8e142cb0b990d917731aed5192237000fcb1298e7e03fc4af5fcb9" exitCode=0 Mar 10 18:54:07 crc kubenswrapper[4861]: I0310 18:54:07.367543 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-654b4b78d5-t2k5b" event={"ID":"da5fe39f-f430-4aae-abe9-c6dbeca41dc9","Type":"ContainerDied","Data":"fd0dd144aa8e142cb0b990d917731aed5192237000fcb1298e7e03fc4af5fcb9"} Mar 10 18:54:07 crc kubenswrapper[4861]: I0310 18:54:07.370970 4861 generic.go:334] "Generic (PLEG): container finished" 
podID="04794175-2367-4623-a34b-eca21758f961" containerID="d23fd3f34ba4f479e8fc5948f841cefd96174a560643da925fb390253cd2ea62" exitCode=0 Mar 10 18:54:07 crc kubenswrapper[4861]: I0310 18:54:07.371014 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7d4d7545ff-q776g" event={"ID":"04794175-2367-4623-a34b-eca21758f961","Type":"ContainerDied","Data":"d23fd3f34ba4f479e8fc5948f841cefd96174a560643da925fb390253cd2ea62"} Mar 10 18:54:07 crc kubenswrapper[4861]: I0310 18:54:07.457041 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7d4d7545ff-q776g" Mar 10 18:54:07 crc kubenswrapper[4861]: I0310 18:54:07.464556 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-654b4b78d5-t2k5b" Mar 10 18:54:07 crc kubenswrapper[4861]: I0310 18:54:07.537634 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8pwmz\" (UniqueName: \"kubernetes.io/projected/da5fe39f-f430-4aae-abe9-c6dbeca41dc9-kube-api-access-8pwmz\") pod \"da5fe39f-f430-4aae-abe9-c6dbeca41dc9\" (UID: \"da5fe39f-f430-4aae-abe9-c6dbeca41dc9\") " Mar 10 18:54:07 crc kubenswrapper[4861]: I0310 18:54:07.537695 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/da5fe39f-f430-4aae-abe9-c6dbeca41dc9-proxy-ca-bundles\") pod \"da5fe39f-f430-4aae-abe9-c6dbeca41dc9\" (UID: \"da5fe39f-f430-4aae-abe9-c6dbeca41dc9\") " Mar 10 18:54:07 crc kubenswrapper[4861]: I0310 18:54:07.537744 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/04794175-2367-4623-a34b-eca21758f961-serving-cert\") pod \"04794175-2367-4623-a34b-eca21758f961\" (UID: \"04794175-2367-4623-a34b-eca21758f961\") 
" Mar 10 18:54:07 crc kubenswrapper[4861]: I0310 18:54:07.537794 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/da5fe39f-f430-4aae-abe9-c6dbeca41dc9-client-ca\") pod \"da5fe39f-f430-4aae-abe9-c6dbeca41dc9\" (UID: \"da5fe39f-f430-4aae-abe9-c6dbeca41dc9\") " Mar 10 18:54:07 crc kubenswrapper[4861]: I0310 18:54:07.537816 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/da5fe39f-f430-4aae-abe9-c6dbeca41dc9-serving-cert\") pod \"da5fe39f-f430-4aae-abe9-c6dbeca41dc9\" (UID: \"da5fe39f-f430-4aae-abe9-c6dbeca41dc9\") " Mar 10 18:54:07 crc kubenswrapper[4861]: I0310 18:54:07.537844 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/04794175-2367-4623-a34b-eca21758f961-config\") pod \"04794175-2367-4623-a34b-eca21758f961\" (UID: \"04794175-2367-4623-a34b-eca21758f961\") " Mar 10 18:54:07 crc kubenswrapper[4861]: I0310 18:54:07.537876 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/da5fe39f-f430-4aae-abe9-c6dbeca41dc9-config\") pod \"da5fe39f-f430-4aae-abe9-c6dbeca41dc9\" (UID: \"da5fe39f-f430-4aae-abe9-c6dbeca41dc9\") " Mar 10 18:54:07 crc kubenswrapper[4861]: I0310 18:54:07.537908 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qrxw7\" (UniqueName: \"kubernetes.io/projected/04794175-2367-4623-a34b-eca21758f961-kube-api-access-qrxw7\") pod \"04794175-2367-4623-a34b-eca21758f961\" (UID: \"04794175-2367-4623-a34b-eca21758f961\") " Mar 10 18:54:07 crc kubenswrapper[4861]: I0310 18:54:07.537937 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/04794175-2367-4623-a34b-eca21758f961-client-ca\") pod 
\"04794175-2367-4623-a34b-eca21758f961\" (UID: \"04794175-2367-4623-a34b-eca21758f961\") " Mar 10 18:54:07 crc kubenswrapper[4861]: I0310 18:54:07.538991 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/04794175-2367-4623-a34b-eca21758f961-client-ca" (OuterVolumeSpecName: "client-ca") pod "04794175-2367-4623-a34b-eca21758f961" (UID: "04794175-2367-4623-a34b-eca21758f961"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 10 18:54:07 crc kubenswrapper[4861]: I0310 18:54:07.539055 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/da5fe39f-f430-4aae-abe9-c6dbeca41dc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "da5fe39f-f430-4aae-abe9-c6dbeca41dc9" (UID: "da5fe39f-f430-4aae-abe9-c6dbeca41dc9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 10 18:54:07 crc kubenswrapper[4861]: I0310 18:54:07.539168 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/04794175-2367-4623-a34b-eca21758f961-config" (OuterVolumeSpecName: "config") pod "04794175-2367-4623-a34b-eca21758f961" (UID: "04794175-2367-4623-a34b-eca21758f961"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 10 18:54:07 crc kubenswrapper[4861]: I0310 18:54:07.539294 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/da5fe39f-f430-4aae-abe9-c6dbeca41dc9-config" (OuterVolumeSpecName: "config") pod "da5fe39f-f430-4aae-abe9-c6dbeca41dc9" (UID: "da5fe39f-f430-4aae-abe9-c6dbeca41dc9"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 10 18:54:07 crc kubenswrapper[4861]: I0310 18:54:07.540271 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/da5fe39f-f430-4aae-abe9-c6dbeca41dc9-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "da5fe39f-f430-4aae-abe9-c6dbeca41dc9" (UID: "da5fe39f-f430-4aae-abe9-c6dbeca41dc9"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 10 18:54:07 crc kubenswrapper[4861]: I0310 18:54:07.543544 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/04794175-2367-4623-a34b-eca21758f961-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "04794175-2367-4623-a34b-eca21758f961" (UID: "04794175-2367-4623-a34b-eca21758f961"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 10 18:54:07 crc kubenswrapper[4861]: I0310 18:54:07.544406 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/da5fe39f-f430-4aae-abe9-c6dbeca41dc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "da5fe39f-f430-4aae-abe9-c6dbeca41dc9" (UID: "da5fe39f-f430-4aae-abe9-c6dbeca41dc9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 10 18:54:07 crc kubenswrapper[4861]: I0310 18:54:07.544655 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/04794175-2367-4623-a34b-eca21758f961-kube-api-access-qrxw7" (OuterVolumeSpecName: "kube-api-access-qrxw7") pod "04794175-2367-4623-a34b-eca21758f961" (UID: "04794175-2367-4623-a34b-eca21758f961"). InnerVolumeSpecName "kube-api-access-qrxw7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 10 18:54:07 crc kubenswrapper[4861]: I0310 18:54:07.544978 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/da5fe39f-f430-4aae-abe9-c6dbeca41dc9-kube-api-access-8pwmz" (OuterVolumeSpecName: "kube-api-access-8pwmz") pod "da5fe39f-f430-4aae-abe9-c6dbeca41dc9" (UID: "da5fe39f-f430-4aae-abe9-c6dbeca41dc9"). InnerVolumeSpecName "kube-api-access-8pwmz". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 10 18:54:07 crc kubenswrapper[4861]: I0310 18:54:07.639739 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qrxw7\" (UniqueName: \"kubernetes.io/projected/04794175-2367-4623-a34b-eca21758f961-kube-api-access-qrxw7\") on node \"crc\" DevicePath \"\"" Mar 10 18:54:07 crc kubenswrapper[4861]: I0310 18:54:07.639842 4861 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/04794175-2367-4623-a34b-eca21758f961-client-ca\") on node \"crc\" DevicePath \"\"" Mar 10 18:54:07 crc kubenswrapper[4861]: I0310 18:54:07.639874 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8pwmz\" (UniqueName: \"kubernetes.io/projected/da5fe39f-f430-4aae-abe9-c6dbeca41dc9-kube-api-access-8pwmz\") on node \"crc\" DevicePath \"\"" Mar 10 18:54:07 crc kubenswrapper[4861]: I0310 18:54:07.639897 4861 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/da5fe39f-f430-4aae-abe9-c6dbeca41dc9-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Mar 10 18:54:07 crc kubenswrapper[4861]: I0310 18:54:07.639915 4861 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/04794175-2367-4623-a34b-eca21758f961-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 10 18:54:07 crc kubenswrapper[4861]: I0310 18:54:07.639933 4861 reconciler_common.go:293] "Volume detached for volume 
\"client-ca\" (UniqueName: \"kubernetes.io/configmap/da5fe39f-f430-4aae-abe9-c6dbeca41dc9-client-ca\") on node \"crc\" DevicePath \"\"" Mar 10 18:54:07 crc kubenswrapper[4861]: I0310 18:54:07.639951 4861 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/da5fe39f-f430-4aae-abe9-c6dbeca41dc9-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 10 18:54:07 crc kubenswrapper[4861]: I0310 18:54:07.639969 4861 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/04794175-2367-4623-a34b-eca21758f961-config\") on node \"crc\" DevicePath \"\"" Mar 10 18:54:07 crc kubenswrapper[4861]: I0310 18:54:07.639987 4861 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/da5fe39f-f430-4aae-abe9-c6dbeca41dc9-config\") on node \"crc\" DevicePath \"\"" Mar 10 18:54:08 crc kubenswrapper[4861]: I0310 18:54:08.378918 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-654b4b78d5-t2k5b" Mar 10 18:54:08 crc kubenswrapper[4861]: I0310 18:54:08.380769 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-654b4b78d5-t2k5b" event={"ID":"da5fe39f-f430-4aae-abe9-c6dbeca41dc9","Type":"ContainerDied","Data":"9bf9ac03cf2550e50aae8910a3e603c98df8d0f32bf2740c2244b2509b1eb216"} Mar 10 18:54:08 crc kubenswrapper[4861]: I0310 18:54:08.380891 4861 scope.go:117] "RemoveContainer" containerID="fd0dd144aa8e142cb0b990d917731aed5192237000fcb1298e7e03fc4af5fcb9" Mar 10 18:54:08 crc kubenswrapper[4861]: I0310 18:54:08.387960 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7d4d7545ff-q776g" event={"ID":"04794175-2367-4623-a34b-eca21758f961","Type":"ContainerDied","Data":"bf64aff7d3af54c03686f29df1b8a5388fb85bbe4f5e4cc1ee4dd9a5ca3825ca"} Mar 10 18:54:08 crc kubenswrapper[4861]: I0310 18:54:08.388543 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7d4d7545ff-q776g" Mar 10 18:54:08 crc kubenswrapper[4861]: I0310 18:54:08.418053 4861 scope.go:117] "RemoveContainer" containerID="d23fd3f34ba4f479e8fc5948f841cefd96174a560643da925fb390253cd2ea62" Mar 10 18:54:08 crc kubenswrapper[4861]: I0310 18:54:08.426760 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-654b4b78d5-t2k5b"] Mar 10 18:54:08 crc kubenswrapper[4861]: I0310 18:54:08.436402 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-654b4b78d5-t2k5b"] Mar 10 18:54:08 crc kubenswrapper[4861]: I0310 18:54:08.438990 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7d4d7545ff-q776g"] Mar 10 18:54:08 crc kubenswrapper[4861]: I0310 18:54:08.442384 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7d4d7545ff-q776g"] Mar 10 18:54:08 crc kubenswrapper[4861]: I0310 18:54:08.861275 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-66d744d49f-dkwxt"] Mar 10 18:54:08 crc kubenswrapper[4861]: E0310 18:54:08.861595 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="da5fe39f-f430-4aae-abe9-c6dbeca41dc9" containerName="controller-manager" Mar 10 18:54:08 crc kubenswrapper[4861]: I0310 18:54:08.861615 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="da5fe39f-f430-4aae-abe9-c6dbeca41dc9" containerName="controller-manager" Mar 10 18:54:08 crc kubenswrapper[4861]: E0310 18:54:08.861637 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ad870600-f1d1-4cf9-8c5f-e8389dbb5a81" containerName="oc" Mar 10 18:54:08 crc kubenswrapper[4861]: I0310 18:54:08.861649 4861 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="ad870600-f1d1-4cf9-8c5f-e8389dbb5a81" containerName="oc" Mar 10 18:54:08 crc kubenswrapper[4861]: E0310 18:54:08.861665 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="04794175-2367-4623-a34b-eca21758f961" containerName="route-controller-manager" Mar 10 18:54:08 crc kubenswrapper[4861]: I0310 18:54:08.861679 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="04794175-2367-4623-a34b-eca21758f961" containerName="route-controller-manager" Mar 10 18:54:08 crc kubenswrapper[4861]: I0310 18:54:08.861896 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="04794175-2367-4623-a34b-eca21758f961" containerName="route-controller-manager" Mar 10 18:54:08 crc kubenswrapper[4861]: I0310 18:54:08.861927 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="da5fe39f-f430-4aae-abe9-c6dbeca41dc9" containerName="controller-manager" Mar 10 18:54:08 crc kubenswrapper[4861]: I0310 18:54:08.861946 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="ad870600-f1d1-4cf9-8c5f-e8389dbb5a81" containerName="oc" Mar 10 18:54:08 crc kubenswrapper[4861]: I0310 18:54:08.862495 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-66d744d49f-dkwxt" Mar 10 18:54:08 crc kubenswrapper[4861]: I0310 18:54:08.871326 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Mar 10 18:54:08 crc kubenswrapper[4861]: I0310 18:54:08.872461 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-74cc6c9d5c-lpbss"] Mar 10 18:54:08 crc kubenswrapper[4861]: I0310 18:54:08.873869 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-74cc6c9d5c-lpbss" Mar 10 18:54:08 crc kubenswrapper[4861]: I0310 18:54:08.884852 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Mar 10 18:54:08 crc kubenswrapper[4861]: I0310 18:54:08.886839 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Mar 10 18:54:08 crc kubenswrapper[4861]: I0310 18:54:08.887188 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Mar 10 18:54:08 crc kubenswrapper[4861]: I0310 18:54:08.887412 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Mar 10 18:54:08 crc kubenswrapper[4861]: I0310 18:54:08.887770 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Mar 10 18:54:08 crc kubenswrapper[4861]: I0310 18:54:08.892424 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Mar 10 18:54:08 crc kubenswrapper[4861]: I0310 18:54:08.893237 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Mar 10 18:54:08 crc kubenswrapper[4861]: I0310 18:54:08.893300 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Mar 10 18:54:08 crc kubenswrapper[4861]: I0310 18:54:08.893240 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Mar 10 18:54:08 crc kubenswrapper[4861]: I0310 18:54:08.893408 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Mar 10 18:54:08 crc 
kubenswrapper[4861]: I0310 18:54:08.893493 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Mar 10 18:54:08 crc kubenswrapper[4861]: I0310 18:54:08.895779 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-66d744d49f-dkwxt"] Mar 10 18:54:08 crc kubenswrapper[4861]: I0310 18:54:08.902451 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-74cc6c9d5c-lpbss"] Mar 10 18:54:08 crc kubenswrapper[4861]: I0310 18:54:08.910250 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Mar 10 18:54:08 crc kubenswrapper[4861]: I0310 18:54:08.956227 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2f583a07-39e7-4f6f-8fa1-5a4bd9f5bba8-proxy-ca-bundles\") pod \"controller-manager-66d744d49f-dkwxt\" (UID: \"2f583a07-39e7-4f6f-8fa1-5a4bd9f5bba8\") " pod="openshift-controller-manager/controller-manager-66d744d49f-dkwxt" Mar 10 18:54:08 crc kubenswrapper[4861]: I0310 18:54:08.956468 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-blvjl\" (UniqueName: \"kubernetes.io/projected/2f583a07-39e7-4f6f-8fa1-5a4bd9f5bba8-kube-api-access-blvjl\") pod \"controller-manager-66d744d49f-dkwxt\" (UID: \"2f583a07-39e7-4f6f-8fa1-5a4bd9f5bba8\") " pod="openshift-controller-manager/controller-manager-66d744d49f-dkwxt" Mar 10 18:54:08 crc kubenswrapper[4861]: I0310 18:54:08.956590 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2f583a07-39e7-4f6f-8fa1-5a4bd9f5bba8-config\") pod \"controller-manager-66d744d49f-dkwxt\" (UID: 
\"2f583a07-39e7-4f6f-8fa1-5a4bd9f5bba8\") " pod="openshift-controller-manager/controller-manager-66d744d49f-dkwxt" Mar 10 18:54:08 crc kubenswrapper[4861]: I0310 18:54:08.956668 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2f583a07-39e7-4f6f-8fa1-5a4bd9f5bba8-serving-cert\") pod \"controller-manager-66d744d49f-dkwxt\" (UID: \"2f583a07-39e7-4f6f-8fa1-5a4bd9f5bba8\") " pod="openshift-controller-manager/controller-manager-66d744d49f-dkwxt" Mar 10 18:54:08 crc kubenswrapper[4861]: I0310 18:54:08.956755 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5xpk5\" (UniqueName: \"kubernetes.io/projected/9dc9e4d6-867f-4315-b1d2-0b186eaae912-kube-api-access-5xpk5\") pod \"route-controller-manager-74cc6c9d5c-lpbss\" (UID: \"9dc9e4d6-867f-4315-b1d2-0b186eaae912\") " pod="openshift-route-controller-manager/route-controller-manager-74cc6c9d5c-lpbss" Mar 10 18:54:08 crc kubenswrapper[4861]: I0310 18:54:08.956843 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2f583a07-39e7-4f6f-8fa1-5a4bd9f5bba8-client-ca\") pod \"controller-manager-66d744d49f-dkwxt\" (UID: \"2f583a07-39e7-4f6f-8fa1-5a4bd9f5bba8\") " pod="openshift-controller-manager/controller-manager-66d744d49f-dkwxt" Mar 10 18:54:08 crc kubenswrapper[4861]: I0310 18:54:08.956948 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9dc9e4d6-867f-4315-b1d2-0b186eaae912-client-ca\") pod \"route-controller-manager-74cc6c9d5c-lpbss\" (UID: \"9dc9e4d6-867f-4315-b1d2-0b186eaae912\") " pod="openshift-route-controller-manager/route-controller-manager-74cc6c9d5c-lpbss" Mar 10 18:54:08 crc kubenswrapper[4861]: I0310 18:54:08.957026 4861 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9dc9e4d6-867f-4315-b1d2-0b186eaae912-config\") pod \"route-controller-manager-74cc6c9d5c-lpbss\" (UID: \"9dc9e4d6-867f-4315-b1d2-0b186eaae912\") " pod="openshift-route-controller-manager/route-controller-manager-74cc6c9d5c-lpbss" Mar 10 18:54:08 crc kubenswrapper[4861]: I0310 18:54:08.957097 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9dc9e4d6-867f-4315-b1d2-0b186eaae912-serving-cert\") pod \"route-controller-manager-74cc6c9d5c-lpbss\" (UID: \"9dc9e4d6-867f-4315-b1d2-0b186eaae912\") " pod="openshift-route-controller-manager/route-controller-manager-74cc6c9d5c-lpbss" Mar 10 18:54:08 crc kubenswrapper[4861]: I0310 18:54:08.957505 4861 scope.go:117] "RemoveContainer" containerID="c403d991c0f4d1e23ec6e829f9031d853d144949afedf21b303f102eda678df0" Mar 10 18:54:08 crc kubenswrapper[4861]: E0310 18:54:08.957748 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=check-endpoints pod=network-check-source-55646444c4-trplf_openshift-network-diagnostics(9d751cbb-f2e2-430d-9754-c882a5e924a5)\"" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 10 18:54:08 crc kubenswrapper[4861]: I0310 18:54:08.967887 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="04794175-2367-4623-a34b-eca21758f961" path="/var/lib/kubelet/pods/04794175-2367-4623-a34b-eca21758f961/volumes" Mar 10 18:54:08 crc kubenswrapper[4861]: I0310 18:54:08.968605 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="da5fe39f-f430-4aae-abe9-c6dbeca41dc9" path="/var/lib/kubelet/pods/da5fe39f-f430-4aae-abe9-c6dbeca41dc9/volumes" Mar 10 18:54:09 crc 
kubenswrapper[4861]: I0310 18:54:09.058403 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2f583a07-39e7-4f6f-8fa1-5a4bd9f5bba8-config\") pod \"controller-manager-66d744d49f-dkwxt\" (UID: \"2f583a07-39e7-4f6f-8fa1-5a4bd9f5bba8\") " pod="openshift-controller-manager/controller-manager-66d744d49f-dkwxt" Mar 10 18:54:09 crc kubenswrapper[4861]: I0310 18:54:09.058637 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5xpk5\" (UniqueName: \"kubernetes.io/projected/9dc9e4d6-867f-4315-b1d2-0b186eaae912-kube-api-access-5xpk5\") pod \"route-controller-manager-74cc6c9d5c-lpbss\" (UID: \"9dc9e4d6-867f-4315-b1d2-0b186eaae912\") " pod="openshift-route-controller-manager/route-controller-manager-74cc6c9d5c-lpbss" Mar 10 18:54:09 crc kubenswrapper[4861]: I0310 18:54:09.058803 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2f583a07-39e7-4f6f-8fa1-5a4bd9f5bba8-serving-cert\") pod \"controller-manager-66d744d49f-dkwxt\" (UID: \"2f583a07-39e7-4f6f-8fa1-5a4bd9f5bba8\") " pod="openshift-controller-manager/controller-manager-66d744d49f-dkwxt" Mar 10 18:54:09 crc kubenswrapper[4861]: I0310 18:54:09.058937 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2f583a07-39e7-4f6f-8fa1-5a4bd9f5bba8-client-ca\") pod \"controller-manager-66d744d49f-dkwxt\" (UID: \"2f583a07-39e7-4f6f-8fa1-5a4bd9f5bba8\") " pod="openshift-controller-manager/controller-manager-66d744d49f-dkwxt" Mar 10 18:54:09 crc kubenswrapper[4861]: I0310 18:54:09.059152 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9dc9e4d6-867f-4315-b1d2-0b186eaae912-client-ca\") pod \"route-controller-manager-74cc6c9d5c-lpbss\" (UID: 
\"9dc9e4d6-867f-4315-b1d2-0b186eaae912\") " pod="openshift-route-controller-manager/route-controller-manager-74cc6c9d5c-lpbss" Mar 10 18:54:09 crc kubenswrapper[4861]: I0310 18:54:09.059253 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9dc9e4d6-867f-4315-b1d2-0b186eaae912-serving-cert\") pod \"route-controller-manager-74cc6c9d5c-lpbss\" (UID: \"9dc9e4d6-867f-4315-b1d2-0b186eaae912\") " pod="openshift-route-controller-manager/route-controller-manager-74cc6c9d5c-lpbss" Mar 10 18:54:09 crc kubenswrapper[4861]: I0310 18:54:09.059363 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9dc9e4d6-867f-4315-b1d2-0b186eaae912-config\") pod \"route-controller-manager-74cc6c9d5c-lpbss\" (UID: \"9dc9e4d6-867f-4315-b1d2-0b186eaae912\") " pod="openshift-route-controller-manager/route-controller-manager-74cc6c9d5c-lpbss" Mar 10 18:54:09 crc kubenswrapper[4861]: I0310 18:54:09.060411 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2f583a07-39e7-4f6f-8fa1-5a4bd9f5bba8-proxy-ca-bundles\") pod \"controller-manager-66d744d49f-dkwxt\" (UID: \"2f583a07-39e7-4f6f-8fa1-5a4bd9f5bba8\") " pod="openshift-controller-manager/controller-manager-66d744d49f-dkwxt" Mar 10 18:54:09 crc kubenswrapper[4861]: I0310 18:54:09.060557 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-blvjl\" (UniqueName: \"kubernetes.io/projected/2f583a07-39e7-4f6f-8fa1-5a4bd9f5bba8-kube-api-access-blvjl\") pod \"controller-manager-66d744d49f-dkwxt\" (UID: \"2f583a07-39e7-4f6f-8fa1-5a4bd9f5bba8\") " pod="openshift-controller-manager/controller-manager-66d744d49f-dkwxt" Mar 10 18:54:09 crc kubenswrapper[4861]: I0310 18:54:09.060450 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" 
(UniqueName: \"kubernetes.io/configmap/2f583a07-39e7-4f6f-8fa1-5a4bd9f5bba8-client-ca\") pod \"controller-manager-66d744d49f-dkwxt\" (UID: \"2f583a07-39e7-4f6f-8fa1-5a4bd9f5bba8\") " pod="openshift-controller-manager/controller-manager-66d744d49f-dkwxt" Mar 10 18:54:09 crc kubenswrapper[4861]: I0310 18:54:09.060840 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9dc9e4d6-867f-4315-b1d2-0b186eaae912-config\") pod \"route-controller-manager-74cc6c9d5c-lpbss\" (UID: \"9dc9e4d6-867f-4315-b1d2-0b186eaae912\") " pod="openshift-route-controller-manager/route-controller-manager-74cc6c9d5c-lpbss" Mar 10 18:54:09 crc kubenswrapper[4861]: I0310 18:54:09.060104 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2f583a07-39e7-4f6f-8fa1-5a4bd9f5bba8-config\") pod \"controller-manager-66d744d49f-dkwxt\" (UID: \"2f583a07-39e7-4f6f-8fa1-5a4bd9f5bba8\") " pod="openshift-controller-manager/controller-manager-66d744d49f-dkwxt" Mar 10 18:54:09 crc kubenswrapper[4861]: I0310 18:54:09.061643 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9dc9e4d6-867f-4315-b1d2-0b186eaae912-client-ca\") pod \"route-controller-manager-74cc6c9d5c-lpbss\" (UID: \"9dc9e4d6-867f-4315-b1d2-0b186eaae912\") " pod="openshift-route-controller-manager/route-controller-manager-74cc6c9d5c-lpbss" Mar 10 18:54:09 crc kubenswrapper[4861]: I0310 18:54:09.063896 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2f583a07-39e7-4f6f-8fa1-5a4bd9f5bba8-proxy-ca-bundles\") pod \"controller-manager-66d744d49f-dkwxt\" (UID: \"2f583a07-39e7-4f6f-8fa1-5a4bd9f5bba8\") " pod="openshift-controller-manager/controller-manager-66d744d49f-dkwxt" Mar 10 18:54:09 crc kubenswrapper[4861]: I0310 18:54:09.064462 4861 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2f583a07-39e7-4f6f-8fa1-5a4bd9f5bba8-serving-cert\") pod \"controller-manager-66d744d49f-dkwxt\" (UID: \"2f583a07-39e7-4f6f-8fa1-5a4bd9f5bba8\") " pod="openshift-controller-manager/controller-manager-66d744d49f-dkwxt" Mar 10 18:54:09 crc kubenswrapper[4861]: I0310 18:54:09.077665 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5xpk5\" (UniqueName: \"kubernetes.io/projected/9dc9e4d6-867f-4315-b1d2-0b186eaae912-kube-api-access-5xpk5\") pod \"route-controller-manager-74cc6c9d5c-lpbss\" (UID: \"9dc9e4d6-867f-4315-b1d2-0b186eaae912\") " pod="openshift-route-controller-manager/route-controller-manager-74cc6c9d5c-lpbss" Mar 10 18:54:09 crc kubenswrapper[4861]: I0310 18:54:09.081190 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-blvjl\" (UniqueName: \"kubernetes.io/projected/2f583a07-39e7-4f6f-8fa1-5a4bd9f5bba8-kube-api-access-blvjl\") pod \"controller-manager-66d744d49f-dkwxt\" (UID: \"2f583a07-39e7-4f6f-8fa1-5a4bd9f5bba8\") " pod="openshift-controller-manager/controller-manager-66d744d49f-dkwxt" Mar 10 18:54:09 crc kubenswrapper[4861]: I0310 18:54:09.091230 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9dc9e4d6-867f-4315-b1d2-0b186eaae912-serving-cert\") pod \"route-controller-manager-74cc6c9d5c-lpbss\" (UID: \"9dc9e4d6-867f-4315-b1d2-0b186eaae912\") " pod="openshift-route-controller-manager/route-controller-manager-74cc6c9d5c-lpbss" Mar 10 18:54:09 crc kubenswrapper[4861]: I0310 18:54:09.210345 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-66d744d49f-dkwxt" Mar 10 18:54:09 crc kubenswrapper[4861]: I0310 18:54:09.227036 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-74cc6c9d5c-lpbss" Mar 10 18:54:09 crc kubenswrapper[4861]: I0310 18:54:09.633551 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-74cc6c9d5c-lpbss"] Mar 10 18:54:09 crc kubenswrapper[4861]: W0310 18:54:09.640365 4861 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9dc9e4d6_867f_4315_b1d2_0b186eaae912.slice/crio-67253fc7deb708354f3a5974ec459dacb7faae233e90ea1591e41f47affa06ad WatchSource:0}: Error finding container 67253fc7deb708354f3a5974ec459dacb7faae233e90ea1591e41f47affa06ad: Status 404 returned error can't find the container with id 67253fc7deb708354f3a5974ec459dacb7faae233e90ea1591e41f47affa06ad Mar 10 18:54:09 crc kubenswrapper[4861]: I0310 18:54:09.697928 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-66d744d49f-dkwxt"] Mar 10 18:54:09 crc kubenswrapper[4861]: W0310 18:54:09.708386 4861 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2f583a07_39e7_4f6f_8fa1_5a4bd9f5bba8.slice/crio-90ddfee52151fb7ded6578af360f8c1959f11eff4fdc1c9ab4fe12adef37c9c6 WatchSource:0}: Error finding container 90ddfee52151fb7ded6578af360f8c1959f11eff4fdc1c9ab4fe12adef37c9c6: Status 404 returned error can't find the container with id 90ddfee52151fb7ded6578af360f8c1959f11eff4fdc1c9ab4fe12adef37c9c6 Mar 10 18:54:10 crc kubenswrapper[4861]: I0310 18:54:10.401423 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-74cc6c9d5c-lpbss" event={"ID":"9dc9e4d6-867f-4315-b1d2-0b186eaae912","Type":"ContainerStarted","Data":"af9616471ad4f551a1f42369b5096769c4083e5ce622f952ead2551f118a34ac"} Mar 10 18:54:10 crc kubenswrapper[4861]: I0310 18:54:10.401660 4861 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-74cc6c9d5c-lpbss" event={"ID":"9dc9e4d6-867f-4315-b1d2-0b186eaae912","Type":"ContainerStarted","Data":"67253fc7deb708354f3a5974ec459dacb7faae233e90ea1591e41f47affa06ad"} Mar 10 18:54:10 crc kubenswrapper[4861]: I0310 18:54:10.401936 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-74cc6c9d5c-lpbss" Mar 10 18:54:10 crc kubenswrapper[4861]: I0310 18:54:10.402673 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-66d744d49f-dkwxt" event={"ID":"2f583a07-39e7-4f6f-8fa1-5a4bd9f5bba8","Type":"ContainerStarted","Data":"fe2f2ea7d7c6a5bb3d16eb7dbafb0826c43cbf05cc3a33b81d8f0d6ebd764ccd"} Mar 10 18:54:10 crc kubenswrapper[4861]: I0310 18:54:10.402694 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-66d744d49f-dkwxt" event={"ID":"2f583a07-39e7-4f6f-8fa1-5a4bd9f5bba8","Type":"ContainerStarted","Data":"90ddfee52151fb7ded6578af360f8c1959f11eff4fdc1c9ab4fe12adef37c9c6"} Mar 10 18:54:10 crc kubenswrapper[4861]: I0310 18:54:10.403136 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-66d744d49f-dkwxt" Mar 10 18:54:10 crc kubenswrapper[4861]: I0310 18:54:10.414623 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-74cc6c9d5c-lpbss" podStartSLOduration=4.414613744 podStartE2EDuration="4.414613744s" podCreationTimestamp="2026-03-10 18:54:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-10 18:54:10.412844915 +0000 UTC m=+394.176280885" watchObservedRunningTime="2026-03-10 18:54:10.414613744 +0000 UTC m=+394.178049704" Mar 10 
18:54:10 crc kubenswrapper[4861]: I0310 18:54:10.429436 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-66d744d49f-dkwxt" podStartSLOduration=4.429417388 podStartE2EDuration="4.429417388s" podCreationTimestamp="2026-03-10 18:54:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-10 18:54:10.427693329 +0000 UTC m=+394.191129309" watchObservedRunningTime="2026-03-10 18:54:10.429417388 +0000 UTC m=+394.192853358" Mar 10 18:54:10 crc kubenswrapper[4861]: I0310 18:54:10.906026 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-66d744d49f-dkwxt" Mar 10 18:54:10 crc kubenswrapper[4861]: I0310 18:54:10.907466 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-74cc6c9d5c-lpbss" Mar 10 18:54:11 crc kubenswrapper[4861]: I0310 18:54:11.361272 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Mar 10 18:54:18 crc kubenswrapper[4861]: I0310 18:54:18.451025 4861 generic.go:334] "Generic (PLEG): container finished" podID="8f1b1590-e261-4e1f-9427-039f5a9b3db7" containerID="b54b2d8fbba5bcb4a72f7ece97d10d5a9038a3a0283e5679a5f7a4ab8c015b55" exitCode=0 Mar 10 18:54:18 crc kubenswrapper[4861]: I0310 18:54:18.451093 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-7qnc2" event={"ID":"8f1b1590-e261-4e1f-9427-039f5a9b3db7","Type":"ContainerDied","Data":"b54b2d8fbba5bcb4a72f7ece97d10d5a9038a3a0283e5679a5f7a4ab8c015b55"} Mar 10 18:54:18 crc kubenswrapper[4861]: I0310 18:54:18.452392 4861 scope.go:117] "RemoveContainer" containerID="b54b2d8fbba5bcb4a72f7ece97d10d5a9038a3a0283e5679a5f7a4ab8c015b55" Mar 10 18:54:19 crc 
kubenswrapper[4861]: I0310 18:54:19.460367 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-7qnc2" event={"ID":"8f1b1590-e261-4e1f-9427-039f5a9b3db7","Type":"ContainerStarted","Data":"a9e4b40108ac235c7db1060fb2692561d1a8668a2d97b229aca23bef36731521"} Mar 10 18:54:19 crc kubenswrapper[4861]: I0310 18:54:19.460965 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-7qnc2" Mar 10 18:54:19 crc kubenswrapper[4861]: I0310 18:54:19.462161 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-7qnc2" Mar 10 18:54:21 crc kubenswrapper[4861]: I0310 18:54:21.958584 4861 scope.go:117] "RemoveContainer" containerID="c403d991c0f4d1e23ec6e829f9031d853d144949afedf21b303f102eda678df0" Mar 10 18:54:21 crc kubenswrapper[4861]: E0310 18:54:21.959324 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=check-endpoints pod=network-check-source-55646444c4-trplf_openshift-network-diagnostics(9d751cbb-f2e2-430d-9754-c882a5e924a5)\"" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 10 18:54:26 crc kubenswrapper[4861]: I0310 18:54:26.851837 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-66d744d49f-dkwxt"] Mar 10 18:54:26 crc kubenswrapper[4861]: I0310 18:54:26.852483 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-66d744d49f-dkwxt" podUID="2f583a07-39e7-4f6f-8fa1-5a4bd9f5bba8" containerName="controller-manager" containerID="cri-o://fe2f2ea7d7c6a5bb3d16eb7dbafb0826c43cbf05cc3a33b81d8f0d6ebd764ccd" gracePeriod=30 Mar 10 18:54:26 crc 
kubenswrapper[4861]: I0310 18:54:26.871167 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-74cc6c9d5c-lpbss"] Mar 10 18:54:26 crc kubenswrapper[4861]: I0310 18:54:26.871812 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-74cc6c9d5c-lpbss" podUID="9dc9e4d6-867f-4315-b1d2-0b186eaae912" containerName="route-controller-manager" containerID="cri-o://af9616471ad4f551a1f42369b5096769c4083e5ce622f952ead2551f118a34ac" gracePeriod=30 Mar 10 18:54:27 crc kubenswrapper[4861]: I0310 18:54:27.509878 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-74cc6c9d5c-lpbss" Mar 10 18:54:27 crc kubenswrapper[4861]: I0310 18:54:27.514071 4861 generic.go:334] "Generic (PLEG): container finished" podID="9dc9e4d6-867f-4315-b1d2-0b186eaae912" containerID="af9616471ad4f551a1f42369b5096769c4083e5ce622f952ead2551f118a34ac" exitCode=0 Mar 10 18:54:27 crc kubenswrapper[4861]: I0310 18:54:27.514169 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-74cc6c9d5c-lpbss" event={"ID":"9dc9e4d6-867f-4315-b1d2-0b186eaae912","Type":"ContainerDied","Data":"af9616471ad4f551a1f42369b5096769c4083e5ce622f952ead2551f118a34ac"} Mar 10 18:54:27 crc kubenswrapper[4861]: I0310 18:54:27.514208 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-74cc6c9d5c-lpbss" event={"ID":"9dc9e4d6-867f-4315-b1d2-0b186eaae912","Type":"ContainerDied","Data":"67253fc7deb708354f3a5974ec459dacb7faae233e90ea1591e41f47affa06ad"} Mar 10 18:54:27 crc kubenswrapper[4861]: I0310 18:54:27.514235 4861 scope.go:117] "RemoveContainer" containerID="af9616471ad4f551a1f42369b5096769c4083e5ce622f952ead2551f118a34ac" Mar 10 18:54:27 crc kubenswrapper[4861]: 
I0310 18:54:27.514388 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-74cc6c9d5c-lpbss" Mar 10 18:54:27 crc kubenswrapper[4861]: I0310 18:54:27.517695 4861 generic.go:334] "Generic (PLEG): container finished" podID="2f583a07-39e7-4f6f-8fa1-5a4bd9f5bba8" containerID="fe2f2ea7d7c6a5bb3d16eb7dbafb0826c43cbf05cc3a33b81d8f0d6ebd764ccd" exitCode=0 Mar 10 18:54:27 crc kubenswrapper[4861]: I0310 18:54:27.517844 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-66d744d49f-dkwxt" event={"ID":"2f583a07-39e7-4f6f-8fa1-5a4bd9f5bba8","Type":"ContainerDied","Data":"fe2f2ea7d7c6a5bb3d16eb7dbafb0826c43cbf05cc3a33b81d8f0d6ebd764ccd"} Mar 10 18:54:27 crc kubenswrapper[4861]: I0310 18:54:27.569651 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-66d744d49f-dkwxt" Mar 10 18:54:27 crc kubenswrapper[4861]: I0310 18:54:27.580271 4861 scope.go:117] "RemoveContainer" containerID="af9616471ad4f551a1f42369b5096769c4083e5ce622f952ead2551f118a34ac" Mar 10 18:54:27 crc kubenswrapper[4861]: E0310 18:54:27.580655 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"af9616471ad4f551a1f42369b5096769c4083e5ce622f952ead2551f118a34ac\": container with ID starting with af9616471ad4f551a1f42369b5096769c4083e5ce622f952ead2551f118a34ac not found: ID does not exist" containerID="af9616471ad4f551a1f42369b5096769c4083e5ce622f952ead2551f118a34ac" Mar 10 18:54:27 crc kubenswrapper[4861]: I0310 18:54:27.580818 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"af9616471ad4f551a1f42369b5096769c4083e5ce622f952ead2551f118a34ac"} err="failed to get container status \"af9616471ad4f551a1f42369b5096769c4083e5ce622f952ead2551f118a34ac\": rpc error: code = 
NotFound desc = could not find container \"af9616471ad4f551a1f42369b5096769c4083e5ce622f952ead2551f118a34ac\": container with ID starting with af9616471ad4f551a1f42369b5096769c4083e5ce622f952ead2551f118a34ac not found: ID does not exist" Mar 10 18:54:27 crc kubenswrapper[4861]: I0310 18:54:27.615615 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9dc9e4d6-867f-4315-b1d2-0b186eaae912-client-ca\") pod \"9dc9e4d6-867f-4315-b1d2-0b186eaae912\" (UID: \"9dc9e4d6-867f-4315-b1d2-0b186eaae912\") " Mar 10 18:54:27 crc kubenswrapper[4861]: I0310 18:54:27.615688 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5xpk5\" (UniqueName: \"kubernetes.io/projected/9dc9e4d6-867f-4315-b1d2-0b186eaae912-kube-api-access-5xpk5\") pod \"9dc9e4d6-867f-4315-b1d2-0b186eaae912\" (UID: \"9dc9e4d6-867f-4315-b1d2-0b186eaae912\") " Mar 10 18:54:27 crc kubenswrapper[4861]: I0310 18:54:27.615728 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9dc9e4d6-867f-4315-b1d2-0b186eaae912-config\") pod \"9dc9e4d6-867f-4315-b1d2-0b186eaae912\" (UID: \"9dc9e4d6-867f-4315-b1d2-0b186eaae912\") " Mar 10 18:54:27 crc kubenswrapper[4861]: I0310 18:54:27.615755 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9dc9e4d6-867f-4315-b1d2-0b186eaae912-serving-cert\") pod \"9dc9e4d6-867f-4315-b1d2-0b186eaae912\" (UID: \"9dc9e4d6-867f-4315-b1d2-0b186eaae912\") " Mar 10 18:54:27 crc kubenswrapper[4861]: I0310 18:54:27.616415 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9dc9e4d6-867f-4315-b1d2-0b186eaae912-client-ca" (OuterVolumeSpecName: "client-ca") pod "9dc9e4d6-867f-4315-b1d2-0b186eaae912" (UID: "9dc9e4d6-867f-4315-b1d2-0b186eaae912"). 
InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 10 18:54:27 crc kubenswrapper[4861]: I0310 18:54:27.616587 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9dc9e4d6-867f-4315-b1d2-0b186eaae912-config" (OuterVolumeSpecName: "config") pod "9dc9e4d6-867f-4315-b1d2-0b186eaae912" (UID: "9dc9e4d6-867f-4315-b1d2-0b186eaae912"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 10 18:54:27 crc kubenswrapper[4861]: I0310 18:54:27.621634 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9dc9e4d6-867f-4315-b1d2-0b186eaae912-kube-api-access-5xpk5" (OuterVolumeSpecName: "kube-api-access-5xpk5") pod "9dc9e4d6-867f-4315-b1d2-0b186eaae912" (UID: "9dc9e4d6-867f-4315-b1d2-0b186eaae912"). InnerVolumeSpecName "kube-api-access-5xpk5". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 10 18:54:27 crc kubenswrapper[4861]: I0310 18:54:27.622689 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9dc9e4d6-867f-4315-b1d2-0b186eaae912-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9dc9e4d6-867f-4315-b1d2-0b186eaae912" (UID: "9dc9e4d6-867f-4315-b1d2-0b186eaae912"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 10 18:54:27 crc kubenswrapper[4861]: I0310 18:54:27.717193 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2f583a07-39e7-4f6f-8fa1-5a4bd9f5bba8-config\") pod \"2f583a07-39e7-4f6f-8fa1-5a4bd9f5bba8\" (UID: \"2f583a07-39e7-4f6f-8fa1-5a4bd9f5bba8\") " Mar 10 18:54:27 crc kubenswrapper[4861]: I0310 18:54:27.717385 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2f583a07-39e7-4f6f-8fa1-5a4bd9f5bba8-client-ca\") pod \"2f583a07-39e7-4f6f-8fa1-5a4bd9f5bba8\" (UID: \"2f583a07-39e7-4f6f-8fa1-5a4bd9f5bba8\") " Mar 10 18:54:27 crc kubenswrapper[4861]: I0310 18:54:27.717538 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2f583a07-39e7-4f6f-8fa1-5a4bd9f5bba8-serving-cert\") pod \"2f583a07-39e7-4f6f-8fa1-5a4bd9f5bba8\" (UID: \"2f583a07-39e7-4f6f-8fa1-5a4bd9f5bba8\") " Mar 10 18:54:27 crc kubenswrapper[4861]: I0310 18:54:27.717640 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2f583a07-39e7-4f6f-8fa1-5a4bd9f5bba8-proxy-ca-bundles\") pod \"2f583a07-39e7-4f6f-8fa1-5a4bd9f5bba8\" (UID: \"2f583a07-39e7-4f6f-8fa1-5a4bd9f5bba8\") " Mar 10 18:54:27 crc kubenswrapper[4861]: I0310 18:54:27.717682 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-blvjl\" (UniqueName: \"kubernetes.io/projected/2f583a07-39e7-4f6f-8fa1-5a4bd9f5bba8-kube-api-access-blvjl\") pod \"2f583a07-39e7-4f6f-8fa1-5a4bd9f5bba8\" (UID: \"2f583a07-39e7-4f6f-8fa1-5a4bd9f5bba8\") " Mar 10 18:54:27 crc kubenswrapper[4861]: I0310 18:54:27.718127 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5xpk5\" (UniqueName: 
\"kubernetes.io/projected/9dc9e4d6-867f-4315-b1d2-0b186eaae912-kube-api-access-5xpk5\") on node \"crc\" DevicePath \"\"" Mar 10 18:54:27 crc kubenswrapper[4861]: I0310 18:54:27.718173 4861 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9dc9e4d6-867f-4315-b1d2-0b186eaae912-config\") on node \"crc\" DevicePath \"\"" Mar 10 18:54:27 crc kubenswrapper[4861]: I0310 18:54:27.718194 4861 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9dc9e4d6-867f-4315-b1d2-0b186eaae912-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 10 18:54:27 crc kubenswrapper[4861]: I0310 18:54:27.718214 4861 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9dc9e4d6-867f-4315-b1d2-0b186eaae912-client-ca\") on node \"crc\" DevicePath \"\"" Mar 10 18:54:27 crc kubenswrapper[4861]: I0310 18:54:27.718406 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2f583a07-39e7-4f6f-8fa1-5a4bd9f5bba8-client-ca" (OuterVolumeSpecName: "client-ca") pod "2f583a07-39e7-4f6f-8fa1-5a4bd9f5bba8" (UID: "2f583a07-39e7-4f6f-8fa1-5a4bd9f5bba8"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 10 18:54:27 crc kubenswrapper[4861]: I0310 18:54:27.718540 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2f583a07-39e7-4f6f-8fa1-5a4bd9f5bba8-config" (OuterVolumeSpecName: "config") pod "2f583a07-39e7-4f6f-8fa1-5a4bd9f5bba8" (UID: "2f583a07-39e7-4f6f-8fa1-5a4bd9f5bba8"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 10 18:54:27 crc kubenswrapper[4861]: I0310 18:54:27.718604 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2f583a07-39e7-4f6f-8fa1-5a4bd9f5bba8-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "2f583a07-39e7-4f6f-8fa1-5a4bd9f5bba8" (UID: "2f583a07-39e7-4f6f-8fa1-5a4bd9f5bba8"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 10 18:54:27 crc kubenswrapper[4861]: I0310 18:54:27.720837 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2f583a07-39e7-4f6f-8fa1-5a4bd9f5bba8-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "2f583a07-39e7-4f6f-8fa1-5a4bd9f5bba8" (UID: "2f583a07-39e7-4f6f-8fa1-5a4bd9f5bba8"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 10 18:54:27 crc kubenswrapper[4861]: I0310 18:54:27.722036 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2f583a07-39e7-4f6f-8fa1-5a4bd9f5bba8-kube-api-access-blvjl" (OuterVolumeSpecName: "kube-api-access-blvjl") pod "2f583a07-39e7-4f6f-8fa1-5a4bd9f5bba8" (UID: "2f583a07-39e7-4f6f-8fa1-5a4bd9f5bba8"). InnerVolumeSpecName "kube-api-access-blvjl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 10 18:54:27 crc kubenswrapper[4861]: I0310 18:54:27.819281 4861 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2f583a07-39e7-4f6f-8fa1-5a4bd9f5bba8-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Mar 10 18:54:27 crc kubenswrapper[4861]: I0310 18:54:27.819684 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-blvjl\" (UniqueName: \"kubernetes.io/projected/2f583a07-39e7-4f6f-8fa1-5a4bd9f5bba8-kube-api-access-blvjl\") on node \"crc\" DevicePath \"\"" Mar 10 18:54:27 crc kubenswrapper[4861]: I0310 18:54:27.819732 4861 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2f583a07-39e7-4f6f-8fa1-5a4bd9f5bba8-config\") on node \"crc\" DevicePath \"\"" Mar 10 18:54:27 crc kubenswrapper[4861]: I0310 18:54:27.819750 4861 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2f583a07-39e7-4f6f-8fa1-5a4bd9f5bba8-client-ca\") on node \"crc\" DevicePath \"\"" Mar 10 18:54:27 crc kubenswrapper[4861]: I0310 18:54:27.819771 4861 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2f583a07-39e7-4f6f-8fa1-5a4bd9f5bba8-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 10 18:54:27 crc kubenswrapper[4861]: I0310 18:54:27.863629 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-74cc6c9d5c-lpbss"] Mar 10 18:54:27 crc kubenswrapper[4861]: I0310 18:54:27.871378 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-74cc6c9d5c-lpbss"] Mar 10 18:54:28 crc kubenswrapper[4861]: I0310 18:54:28.527156 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-66d744d49f-dkwxt" 
event={"ID":"2f583a07-39e7-4f6f-8fa1-5a4bd9f5bba8","Type":"ContainerDied","Data":"90ddfee52151fb7ded6578af360f8c1959f11eff4fdc1c9ab4fe12adef37c9c6"} Mar 10 18:54:28 crc kubenswrapper[4861]: I0310 18:54:28.527232 4861 scope.go:117] "RemoveContainer" containerID="fe2f2ea7d7c6a5bb3d16eb7dbafb0826c43cbf05cc3a33b81d8f0d6ebd764ccd" Mar 10 18:54:28 crc kubenswrapper[4861]: I0310 18:54:28.527260 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-66d744d49f-dkwxt" Mar 10 18:54:28 crc kubenswrapper[4861]: I0310 18:54:28.574399 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-66d744d49f-dkwxt"] Mar 10 18:54:28 crc kubenswrapper[4861]: I0310 18:54:28.581419 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-66d744d49f-dkwxt"] Mar 10 18:54:28 crc kubenswrapper[4861]: I0310 18:54:28.873591 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-66b89bc4-lzgsc"] Mar 10 18:54:28 crc kubenswrapper[4861]: E0310 18:54:28.873946 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2f583a07-39e7-4f6f-8fa1-5a4bd9f5bba8" containerName="controller-manager" Mar 10 18:54:28 crc kubenswrapper[4861]: I0310 18:54:28.873969 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="2f583a07-39e7-4f6f-8fa1-5a4bd9f5bba8" containerName="controller-manager" Mar 10 18:54:28 crc kubenswrapper[4861]: E0310 18:54:28.874007 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9dc9e4d6-867f-4315-b1d2-0b186eaae912" containerName="route-controller-manager" Mar 10 18:54:28 crc kubenswrapper[4861]: I0310 18:54:28.874018 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="9dc9e4d6-867f-4315-b1d2-0b186eaae912" containerName="route-controller-manager" Mar 10 18:54:28 crc kubenswrapper[4861]: I0310 18:54:28.874168 4861 
memory_manager.go:354] "RemoveStaleState removing state" podUID="9dc9e4d6-867f-4315-b1d2-0b186eaae912" containerName="route-controller-manager" Mar 10 18:54:28 crc kubenswrapper[4861]: I0310 18:54:28.874200 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="2f583a07-39e7-4f6f-8fa1-5a4bd9f5bba8" containerName="controller-manager" Mar 10 18:54:28 crc kubenswrapper[4861]: I0310 18:54:28.874797 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-66b89bc4-lzgsc" Mar 10 18:54:28 crc kubenswrapper[4861]: I0310 18:54:28.878634 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Mar 10 18:54:28 crc kubenswrapper[4861]: I0310 18:54:28.879151 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Mar 10 18:54:28 crc kubenswrapper[4861]: I0310 18:54:28.879309 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Mar 10 18:54:28 crc kubenswrapper[4861]: I0310 18:54:28.879548 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Mar 10 18:54:28 crc kubenswrapper[4861]: I0310 18:54:28.879550 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Mar 10 18:54:28 crc kubenswrapper[4861]: I0310 18:54:28.881949 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Mar 10 18:54:28 crc kubenswrapper[4861]: I0310 18:54:28.891940 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Mar 10 18:54:28 crc kubenswrapper[4861]: I0310 18:54:28.892856 4861 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-route-controller-manager/route-controller-manager-5c945c94b8-8tf66"] Mar 10 18:54:28 crc kubenswrapper[4861]: I0310 18:54:28.893965 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c945c94b8-8tf66" Mar 10 18:54:28 crc kubenswrapper[4861]: I0310 18:54:28.898178 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Mar 10 18:54:28 crc kubenswrapper[4861]: I0310 18:54:28.899141 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Mar 10 18:54:28 crc kubenswrapper[4861]: I0310 18:54:28.899185 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Mar 10 18:54:28 crc kubenswrapper[4861]: I0310 18:54:28.899242 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Mar 10 18:54:28 crc kubenswrapper[4861]: I0310 18:54:28.899660 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Mar 10 18:54:28 crc kubenswrapper[4861]: I0310 18:54:28.901461 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Mar 10 18:54:28 crc kubenswrapper[4861]: I0310 18:54:28.904290 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5c945c94b8-8tf66"] Mar 10 18:54:28 crc kubenswrapper[4861]: I0310 18:54:28.917169 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-66b89bc4-lzgsc"] Mar 10 18:54:28 crc kubenswrapper[4861]: I0310 18:54:28.967406 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="2f583a07-39e7-4f6f-8fa1-5a4bd9f5bba8" path="/var/lib/kubelet/pods/2f583a07-39e7-4f6f-8fa1-5a4bd9f5bba8/volumes" Mar 10 18:54:28 crc kubenswrapper[4861]: I0310 18:54:28.968567 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9dc9e4d6-867f-4315-b1d2-0b186eaae912" path="/var/lib/kubelet/pods/9dc9e4d6-867f-4315-b1d2-0b186eaae912/volumes" Mar 10 18:54:29 crc kubenswrapper[4861]: I0310 18:54:29.039741 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eda6e9e1-fc3d-483f-93f8-2fbbc60113e6-config\") pod \"controller-manager-66b89bc4-lzgsc\" (UID: \"eda6e9e1-fc3d-483f-93f8-2fbbc60113e6\") " pod="openshift-controller-manager/controller-manager-66b89bc4-lzgsc" Mar 10 18:54:29 crc kubenswrapper[4861]: I0310 18:54:29.039811 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-92bff\" (UniqueName: \"kubernetes.io/projected/abefbbfb-3299-4e64-9075-16e4d6e3461c-kube-api-access-92bff\") pod \"route-controller-manager-5c945c94b8-8tf66\" (UID: \"abefbbfb-3299-4e64-9075-16e4d6e3461c\") " pod="openshift-route-controller-manager/route-controller-manager-5c945c94b8-8tf66" Mar 10 18:54:29 crc kubenswrapper[4861]: I0310 18:54:29.039902 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/abefbbfb-3299-4e64-9075-16e4d6e3461c-client-ca\") pod \"route-controller-manager-5c945c94b8-8tf66\" (UID: \"abefbbfb-3299-4e64-9075-16e4d6e3461c\") " pod="openshift-route-controller-manager/route-controller-manager-5c945c94b8-8tf66" Mar 10 18:54:29 crc kubenswrapper[4861]: I0310 18:54:29.039943 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/abefbbfb-3299-4e64-9075-16e4d6e3461c-config\") pod 
\"route-controller-manager-5c945c94b8-8tf66\" (UID: \"abefbbfb-3299-4e64-9075-16e4d6e3461c\") " pod="openshift-route-controller-manager/route-controller-manager-5c945c94b8-8tf66" Mar 10 18:54:29 crc kubenswrapper[4861]: I0310 18:54:29.039977 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/eda6e9e1-fc3d-483f-93f8-2fbbc60113e6-client-ca\") pod \"controller-manager-66b89bc4-lzgsc\" (UID: \"eda6e9e1-fc3d-483f-93f8-2fbbc60113e6\") " pod="openshift-controller-manager/controller-manager-66b89bc4-lzgsc" Mar 10 18:54:29 crc kubenswrapper[4861]: I0310 18:54:29.040026 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/eda6e9e1-fc3d-483f-93f8-2fbbc60113e6-proxy-ca-bundles\") pod \"controller-manager-66b89bc4-lzgsc\" (UID: \"eda6e9e1-fc3d-483f-93f8-2fbbc60113e6\") " pod="openshift-controller-manager/controller-manager-66b89bc4-lzgsc" Mar 10 18:54:29 crc kubenswrapper[4861]: I0310 18:54:29.040154 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/eda6e9e1-fc3d-483f-93f8-2fbbc60113e6-serving-cert\") pod \"controller-manager-66b89bc4-lzgsc\" (UID: \"eda6e9e1-fc3d-483f-93f8-2fbbc60113e6\") " pod="openshift-controller-manager/controller-manager-66b89bc4-lzgsc" Mar 10 18:54:29 crc kubenswrapper[4861]: I0310 18:54:29.040216 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/abefbbfb-3299-4e64-9075-16e4d6e3461c-serving-cert\") pod \"route-controller-manager-5c945c94b8-8tf66\" (UID: \"abefbbfb-3299-4e64-9075-16e4d6e3461c\") " pod="openshift-route-controller-manager/route-controller-manager-5c945c94b8-8tf66" Mar 10 18:54:29 crc kubenswrapper[4861]: I0310 18:54:29.040336 4861 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xz5vl\" (UniqueName: \"kubernetes.io/projected/eda6e9e1-fc3d-483f-93f8-2fbbc60113e6-kube-api-access-xz5vl\") pod \"controller-manager-66b89bc4-lzgsc\" (UID: \"eda6e9e1-fc3d-483f-93f8-2fbbc60113e6\") " pod="openshift-controller-manager/controller-manager-66b89bc4-lzgsc" Mar 10 18:54:29 crc kubenswrapper[4861]: I0310 18:54:29.143383 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/abefbbfb-3299-4e64-9075-16e4d6e3461c-client-ca\") pod \"route-controller-manager-5c945c94b8-8tf66\" (UID: \"abefbbfb-3299-4e64-9075-16e4d6e3461c\") " pod="openshift-route-controller-manager/route-controller-manager-5c945c94b8-8tf66" Mar 10 18:54:29 crc kubenswrapper[4861]: I0310 18:54:29.143471 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/abefbbfb-3299-4e64-9075-16e4d6e3461c-config\") pod \"route-controller-manager-5c945c94b8-8tf66\" (UID: \"abefbbfb-3299-4e64-9075-16e4d6e3461c\") " pod="openshift-route-controller-manager/route-controller-manager-5c945c94b8-8tf66" Mar 10 18:54:29 crc kubenswrapper[4861]: I0310 18:54:29.143517 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/eda6e9e1-fc3d-483f-93f8-2fbbc60113e6-client-ca\") pod \"controller-manager-66b89bc4-lzgsc\" (UID: \"eda6e9e1-fc3d-483f-93f8-2fbbc60113e6\") " pod="openshift-controller-manager/controller-manager-66b89bc4-lzgsc" Mar 10 18:54:29 crc kubenswrapper[4861]: I0310 18:54:29.143582 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/eda6e9e1-fc3d-483f-93f8-2fbbc60113e6-proxy-ca-bundles\") pod \"controller-manager-66b89bc4-lzgsc\" (UID: \"eda6e9e1-fc3d-483f-93f8-2fbbc60113e6\") " 
pod="openshift-controller-manager/controller-manager-66b89bc4-lzgsc" Mar 10 18:54:29 crc kubenswrapper[4861]: I0310 18:54:29.143619 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/eda6e9e1-fc3d-483f-93f8-2fbbc60113e6-serving-cert\") pod \"controller-manager-66b89bc4-lzgsc\" (UID: \"eda6e9e1-fc3d-483f-93f8-2fbbc60113e6\") " pod="openshift-controller-manager/controller-manager-66b89bc4-lzgsc" Mar 10 18:54:29 crc kubenswrapper[4861]: I0310 18:54:29.143651 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/abefbbfb-3299-4e64-9075-16e4d6e3461c-serving-cert\") pod \"route-controller-manager-5c945c94b8-8tf66\" (UID: \"abefbbfb-3299-4e64-9075-16e4d6e3461c\") " pod="openshift-route-controller-manager/route-controller-manager-5c945c94b8-8tf66" Mar 10 18:54:29 crc kubenswrapper[4861]: I0310 18:54:29.143761 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xz5vl\" (UniqueName: \"kubernetes.io/projected/eda6e9e1-fc3d-483f-93f8-2fbbc60113e6-kube-api-access-xz5vl\") pod \"controller-manager-66b89bc4-lzgsc\" (UID: \"eda6e9e1-fc3d-483f-93f8-2fbbc60113e6\") " pod="openshift-controller-manager/controller-manager-66b89bc4-lzgsc" Mar 10 18:54:29 crc kubenswrapper[4861]: I0310 18:54:29.143866 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eda6e9e1-fc3d-483f-93f8-2fbbc60113e6-config\") pod \"controller-manager-66b89bc4-lzgsc\" (UID: \"eda6e9e1-fc3d-483f-93f8-2fbbc60113e6\") " pod="openshift-controller-manager/controller-manager-66b89bc4-lzgsc" Mar 10 18:54:29 crc kubenswrapper[4861]: I0310 18:54:29.143900 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-92bff\" (UniqueName: 
\"kubernetes.io/projected/abefbbfb-3299-4e64-9075-16e4d6e3461c-kube-api-access-92bff\") pod \"route-controller-manager-5c945c94b8-8tf66\" (UID: \"abefbbfb-3299-4e64-9075-16e4d6e3461c\") " pod="openshift-route-controller-manager/route-controller-manager-5c945c94b8-8tf66" Mar 10 18:54:29 crc kubenswrapper[4861]: I0310 18:54:29.145222 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/abefbbfb-3299-4e64-9075-16e4d6e3461c-client-ca\") pod \"route-controller-manager-5c945c94b8-8tf66\" (UID: \"abefbbfb-3299-4e64-9075-16e4d6e3461c\") " pod="openshift-route-controller-manager/route-controller-manager-5c945c94b8-8tf66" Mar 10 18:54:29 crc kubenswrapper[4861]: I0310 18:54:29.147347 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/eda6e9e1-fc3d-483f-93f8-2fbbc60113e6-proxy-ca-bundles\") pod \"controller-manager-66b89bc4-lzgsc\" (UID: \"eda6e9e1-fc3d-483f-93f8-2fbbc60113e6\") " pod="openshift-controller-manager/controller-manager-66b89bc4-lzgsc" Mar 10 18:54:29 crc kubenswrapper[4861]: I0310 18:54:29.147745 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/abefbbfb-3299-4e64-9075-16e4d6e3461c-config\") pod \"route-controller-manager-5c945c94b8-8tf66\" (UID: \"abefbbfb-3299-4e64-9075-16e4d6e3461c\") " pod="openshift-route-controller-manager/route-controller-manager-5c945c94b8-8tf66" Mar 10 18:54:29 crc kubenswrapper[4861]: I0310 18:54:29.148745 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eda6e9e1-fc3d-483f-93f8-2fbbc60113e6-config\") pod \"controller-manager-66b89bc4-lzgsc\" (UID: \"eda6e9e1-fc3d-483f-93f8-2fbbc60113e6\") " pod="openshift-controller-manager/controller-manager-66b89bc4-lzgsc" Mar 10 18:54:29 crc kubenswrapper[4861]: I0310 18:54:29.149419 4861 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/eda6e9e1-fc3d-483f-93f8-2fbbc60113e6-client-ca\") pod \"controller-manager-66b89bc4-lzgsc\" (UID: \"eda6e9e1-fc3d-483f-93f8-2fbbc60113e6\") " pod="openshift-controller-manager/controller-manager-66b89bc4-lzgsc" Mar 10 18:54:29 crc kubenswrapper[4861]: I0310 18:54:29.152048 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/eda6e9e1-fc3d-483f-93f8-2fbbc60113e6-serving-cert\") pod \"controller-manager-66b89bc4-lzgsc\" (UID: \"eda6e9e1-fc3d-483f-93f8-2fbbc60113e6\") " pod="openshift-controller-manager/controller-manager-66b89bc4-lzgsc" Mar 10 18:54:29 crc kubenswrapper[4861]: I0310 18:54:29.157498 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/abefbbfb-3299-4e64-9075-16e4d6e3461c-serving-cert\") pod \"route-controller-manager-5c945c94b8-8tf66\" (UID: \"abefbbfb-3299-4e64-9075-16e4d6e3461c\") " pod="openshift-route-controller-manager/route-controller-manager-5c945c94b8-8tf66" Mar 10 18:54:29 crc kubenswrapper[4861]: I0310 18:54:29.172420 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xz5vl\" (UniqueName: \"kubernetes.io/projected/eda6e9e1-fc3d-483f-93f8-2fbbc60113e6-kube-api-access-xz5vl\") pod \"controller-manager-66b89bc4-lzgsc\" (UID: \"eda6e9e1-fc3d-483f-93f8-2fbbc60113e6\") " pod="openshift-controller-manager/controller-manager-66b89bc4-lzgsc" Mar 10 18:54:29 crc kubenswrapper[4861]: I0310 18:54:29.173492 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-92bff\" (UniqueName: \"kubernetes.io/projected/abefbbfb-3299-4e64-9075-16e4d6e3461c-kube-api-access-92bff\") pod \"route-controller-manager-5c945c94b8-8tf66\" (UID: \"abefbbfb-3299-4e64-9075-16e4d6e3461c\") " 
pod="openshift-route-controller-manager/route-controller-manager-5c945c94b8-8tf66" Mar 10 18:54:29 crc kubenswrapper[4861]: I0310 18:54:29.214664 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-66b89bc4-lzgsc" Mar 10 18:54:29 crc kubenswrapper[4861]: I0310 18:54:29.229750 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c945c94b8-8tf66" Mar 10 18:54:29 crc kubenswrapper[4861]: I0310 18:54:29.535921 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-66b89bc4-lzgsc"] Mar 10 18:54:29 crc kubenswrapper[4861]: W0310 18:54:29.540424 4861 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podeda6e9e1_fc3d_483f_93f8_2fbbc60113e6.slice/crio-f157a4cd4b8aaafa9707b8b121ca1d95a05ea0abf63040f541c24b4492db2f60 WatchSource:0}: Error finding container f157a4cd4b8aaafa9707b8b121ca1d95a05ea0abf63040f541c24b4492db2f60: Status 404 returned error can't find the container with id f157a4cd4b8aaafa9707b8b121ca1d95a05ea0abf63040f541c24b4492db2f60 Mar 10 18:54:29 crc kubenswrapper[4861]: I0310 18:54:29.705044 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5c945c94b8-8tf66"] Mar 10 18:54:29 crc kubenswrapper[4861]: W0310 18:54:29.717010 4861 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podabefbbfb_3299_4e64_9075_16e4d6e3461c.slice/crio-63d27368ffa12a3f6cd3879f3b1dc606b2a151a2fcb1b7cdfeb1ad6518cc5168 WatchSource:0}: Error finding container 63d27368ffa12a3f6cd3879f3b1dc606b2a151a2fcb1b7cdfeb1ad6518cc5168: Status 404 returned error can't find the container with id 63d27368ffa12a3f6cd3879f3b1dc606b2a151a2fcb1b7cdfeb1ad6518cc5168 Mar 10 18:54:30 
crc kubenswrapper[4861]: I0310 18:54:30.546557 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5c945c94b8-8tf66" event={"ID":"abefbbfb-3299-4e64-9075-16e4d6e3461c","Type":"ContainerStarted","Data":"70d1c3e98300589644fbf1a5095cb17bd89c553f6120a45e61ad2cfc531f1fee"} Mar 10 18:54:30 crc kubenswrapper[4861]: I0310 18:54:30.546999 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-5c945c94b8-8tf66" Mar 10 18:54:30 crc kubenswrapper[4861]: I0310 18:54:30.547015 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5c945c94b8-8tf66" event={"ID":"abefbbfb-3299-4e64-9075-16e4d6e3461c","Type":"ContainerStarted","Data":"63d27368ffa12a3f6cd3879f3b1dc606b2a151a2fcb1b7cdfeb1ad6518cc5168"} Mar 10 18:54:30 crc kubenswrapper[4861]: I0310 18:54:30.548297 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-66b89bc4-lzgsc" event={"ID":"eda6e9e1-fc3d-483f-93f8-2fbbc60113e6","Type":"ContainerStarted","Data":"e187dcd1f11bb9d229e6cdd2a82d4ca69988c6465cd65ac9ec0a06ec1c4218c8"} Mar 10 18:54:30 crc kubenswrapper[4861]: I0310 18:54:30.548335 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-66b89bc4-lzgsc" event={"ID":"eda6e9e1-fc3d-483f-93f8-2fbbc60113e6","Type":"ContainerStarted","Data":"f157a4cd4b8aaafa9707b8b121ca1d95a05ea0abf63040f541c24b4492db2f60"} Mar 10 18:54:30 crc kubenswrapper[4861]: I0310 18:54:30.548736 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-66b89bc4-lzgsc" Mar 10 18:54:30 crc kubenswrapper[4861]: I0310 18:54:30.556162 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-66b89bc4-lzgsc" Mar 10 
18:54:30 crc kubenswrapper[4861]: I0310 18:54:30.558361 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-5c945c94b8-8tf66" Mar 10 18:54:30 crc kubenswrapper[4861]: I0310 18:54:30.583051 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-5c945c94b8-8tf66" podStartSLOduration=4.58301416 podStartE2EDuration="4.58301416s" podCreationTimestamp="2026-03-10 18:54:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-10 18:54:30.577064558 +0000 UTC m=+414.340500548" watchObservedRunningTime="2026-03-10 18:54:30.58301416 +0000 UTC m=+414.346450130" Mar 10 18:54:30 crc kubenswrapper[4861]: I0310 18:54:30.660876 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-66b89bc4-lzgsc" podStartSLOduration=4.6608540210000005 podStartE2EDuration="4.660854021s" podCreationTimestamp="2026-03-10 18:54:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-10 18:54:30.655236193 +0000 UTC m=+414.418672193" watchObservedRunningTime="2026-03-10 18:54:30.660854021 +0000 UTC m=+414.424289991" Mar 10 18:54:35 crc kubenswrapper[4861]: I0310 18:54:35.958118 4861 scope.go:117] "RemoveContainer" containerID="c403d991c0f4d1e23ec6e829f9031d853d144949afedf21b303f102eda678df0" Mar 10 18:54:36 crc kubenswrapper[4861]: I0310 18:54:36.592432 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-diagnostics_network-check-source-55646444c4-trplf_9d751cbb-f2e2-430d-9754-c882a5e924a5/check-endpoints/3.log" Mar 10 18:54:36 crc kubenswrapper[4861]: I0310 18:54:36.592557 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"8e7e48dc2b67f5f9df29de73a1f2a865b060d63e88122a5843d78899724c5af3"} Mar 10 18:55:09 crc kubenswrapper[4861]: I0310 18:55:09.753882 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-vrbxv"] Mar 10 18:55:09 crc kubenswrapper[4861]: I0310 18:55:09.755431 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-vrbxv" Mar 10 18:55:09 crc kubenswrapper[4861]: I0310 18:55:09.771286 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-vrbxv"] Mar 10 18:55:09 crc kubenswrapper[4861]: I0310 18:55:09.854152 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/ff901024-935e-4fc3-98fb-0032deb4976c-registry-tls\") pod \"image-registry-66df7c8f76-vrbxv\" (UID: \"ff901024-935e-4fc3-98fb-0032deb4976c\") " pod="openshift-image-registry/image-registry-66df7c8f76-vrbxv" Mar 10 18:55:09 crc kubenswrapper[4861]: I0310 18:55:09.854225 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/ff901024-935e-4fc3-98fb-0032deb4976c-installation-pull-secrets\") pod \"image-registry-66df7c8f76-vrbxv\" (UID: \"ff901024-935e-4fc3-98fb-0032deb4976c\") " pod="openshift-image-registry/image-registry-66df7c8f76-vrbxv" Mar 10 18:55:09 crc kubenswrapper[4861]: I0310 18:55:09.854285 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod 
\"image-registry-66df7c8f76-vrbxv\" (UID: \"ff901024-935e-4fc3-98fb-0032deb4976c\") " pod="openshift-image-registry/image-registry-66df7c8f76-vrbxv" Mar 10 18:55:09 crc kubenswrapper[4861]: I0310 18:55:09.854388 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/ff901024-935e-4fc3-98fb-0032deb4976c-bound-sa-token\") pod \"image-registry-66df7c8f76-vrbxv\" (UID: \"ff901024-935e-4fc3-98fb-0032deb4976c\") " pod="openshift-image-registry/image-registry-66df7c8f76-vrbxv" Mar 10 18:55:09 crc kubenswrapper[4861]: I0310 18:55:09.854506 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/ff901024-935e-4fc3-98fb-0032deb4976c-ca-trust-extracted\") pod \"image-registry-66df7c8f76-vrbxv\" (UID: \"ff901024-935e-4fc3-98fb-0032deb4976c\") " pod="openshift-image-registry/image-registry-66df7c8f76-vrbxv" Mar 10 18:55:09 crc kubenswrapper[4861]: I0310 18:55:09.854594 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ff901024-935e-4fc3-98fb-0032deb4976c-trusted-ca\") pod \"image-registry-66df7c8f76-vrbxv\" (UID: \"ff901024-935e-4fc3-98fb-0032deb4976c\") " pod="openshift-image-registry/image-registry-66df7c8f76-vrbxv" Mar 10 18:55:09 crc kubenswrapper[4861]: I0310 18:55:09.854648 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/ff901024-935e-4fc3-98fb-0032deb4976c-registry-certificates\") pod \"image-registry-66df7c8f76-vrbxv\" (UID: \"ff901024-935e-4fc3-98fb-0032deb4976c\") " pod="openshift-image-registry/image-registry-66df7c8f76-vrbxv" Mar 10 18:55:09 crc kubenswrapper[4861]: I0310 18:55:09.854675 4861 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-87hxp\" (UniqueName: \"kubernetes.io/projected/ff901024-935e-4fc3-98fb-0032deb4976c-kube-api-access-87hxp\") pod \"image-registry-66df7c8f76-vrbxv\" (UID: \"ff901024-935e-4fc3-98fb-0032deb4976c\") " pod="openshift-image-registry/image-registry-66df7c8f76-vrbxv" Mar 10 18:55:09 crc kubenswrapper[4861]: I0310 18:55:09.874025 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-vrbxv\" (UID: \"ff901024-935e-4fc3-98fb-0032deb4976c\") " pod="openshift-image-registry/image-registry-66df7c8f76-vrbxv" Mar 10 18:55:09 crc kubenswrapper[4861]: I0310 18:55:09.955849 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/ff901024-935e-4fc3-98fb-0032deb4976c-registry-certificates\") pod \"image-registry-66df7c8f76-vrbxv\" (UID: \"ff901024-935e-4fc3-98fb-0032deb4976c\") " pod="openshift-image-registry/image-registry-66df7c8f76-vrbxv" Mar 10 18:55:09 crc kubenswrapper[4861]: I0310 18:55:09.955890 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-87hxp\" (UniqueName: \"kubernetes.io/projected/ff901024-935e-4fc3-98fb-0032deb4976c-kube-api-access-87hxp\") pod \"image-registry-66df7c8f76-vrbxv\" (UID: \"ff901024-935e-4fc3-98fb-0032deb4976c\") " pod="openshift-image-registry/image-registry-66df7c8f76-vrbxv" Mar 10 18:55:09 crc kubenswrapper[4861]: I0310 18:55:09.955908 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/ff901024-935e-4fc3-98fb-0032deb4976c-registry-tls\") pod \"image-registry-66df7c8f76-vrbxv\" (UID: \"ff901024-935e-4fc3-98fb-0032deb4976c\") " 
pod="openshift-image-registry/image-registry-66df7c8f76-vrbxv" Mar 10 18:55:09 crc kubenswrapper[4861]: I0310 18:55:09.955928 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/ff901024-935e-4fc3-98fb-0032deb4976c-installation-pull-secrets\") pod \"image-registry-66df7c8f76-vrbxv\" (UID: \"ff901024-935e-4fc3-98fb-0032deb4976c\") " pod="openshift-image-registry/image-registry-66df7c8f76-vrbxv" Mar 10 18:55:09 crc kubenswrapper[4861]: I0310 18:55:09.955963 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/ff901024-935e-4fc3-98fb-0032deb4976c-bound-sa-token\") pod \"image-registry-66df7c8f76-vrbxv\" (UID: \"ff901024-935e-4fc3-98fb-0032deb4976c\") " pod="openshift-image-registry/image-registry-66df7c8f76-vrbxv" Mar 10 18:55:09 crc kubenswrapper[4861]: I0310 18:55:09.956007 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/ff901024-935e-4fc3-98fb-0032deb4976c-ca-trust-extracted\") pod \"image-registry-66df7c8f76-vrbxv\" (UID: \"ff901024-935e-4fc3-98fb-0032deb4976c\") " pod="openshift-image-registry/image-registry-66df7c8f76-vrbxv" Mar 10 18:55:09 crc kubenswrapper[4861]: I0310 18:55:09.956033 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ff901024-935e-4fc3-98fb-0032deb4976c-trusted-ca\") pod \"image-registry-66df7c8f76-vrbxv\" (UID: \"ff901024-935e-4fc3-98fb-0032deb4976c\") " pod="openshift-image-registry/image-registry-66df7c8f76-vrbxv" Mar 10 18:55:09 crc kubenswrapper[4861]: I0310 18:55:09.957013 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ff901024-935e-4fc3-98fb-0032deb4976c-trusted-ca\") pod 
\"image-registry-66df7c8f76-vrbxv\" (UID: \"ff901024-935e-4fc3-98fb-0032deb4976c\") " pod="openshift-image-registry/image-registry-66df7c8f76-vrbxv" Mar 10 18:55:09 crc kubenswrapper[4861]: I0310 18:55:09.958355 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/ff901024-935e-4fc3-98fb-0032deb4976c-registry-certificates\") pod \"image-registry-66df7c8f76-vrbxv\" (UID: \"ff901024-935e-4fc3-98fb-0032deb4976c\") " pod="openshift-image-registry/image-registry-66df7c8f76-vrbxv" Mar 10 18:55:09 crc kubenswrapper[4861]: I0310 18:55:09.958486 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/ff901024-935e-4fc3-98fb-0032deb4976c-ca-trust-extracted\") pod \"image-registry-66df7c8f76-vrbxv\" (UID: \"ff901024-935e-4fc3-98fb-0032deb4976c\") " pod="openshift-image-registry/image-registry-66df7c8f76-vrbxv" Mar 10 18:55:09 crc kubenswrapper[4861]: I0310 18:55:09.965104 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/ff901024-935e-4fc3-98fb-0032deb4976c-installation-pull-secrets\") pod \"image-registry-66df7c8f76-vrbxv\" (UID: \"ff901024-935e-4fc3-98fb-0032deb4976c\") " pod="openshift-image-registry/image-registry-66df7c8f76-vrbxv" Mar 10 18:55:09 crc kubenswrapper[4861]: I0310 18:55:09.965676 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/ff901024-935e-4fc3-98fb-0032deb4976c-registry-tls\") pod \"image-registry-66df7c8f76-vrbxv\" (UID: \"ff901024-935e-4fc3-98fb-0032deb4976c\") " pod="openshift-image-registry/image-registry-66df7c8f76-vrbxv" Mar 10 18:55:09 crc kubenswrapper[4861]: I0310 18:55:09.975067 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-87hxp\" (UniqueName: 
\"kubernetes.io/projected/ff901024-935e-4fc3-98fb-0032deb4976c-kube-api-access-87hxp\") pod \"image-registry-66df7c8f76-vrbxv\" (UID: \"ff901024-935e-4fc3-98fb-0032deb4976c\") " pod="openshift-image-registry/image-registry-66df7c8f76-vrbxv" Mar 10 18:55:09 crc kubenswrapper[4861]: I0310 18:55:09.979599 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/ff901024-935e-4fc3-98fb-0032deb4976c-bound-sa-token\") pod \"image-registry-66df7c8f76-vrbxv\" (UID: \"ff901024-935e-4fc3-98fb-0032deb4976c\") " pod="openshift-image-registry/image-registry-66df7c8f76-vrbxv" Mar 10 18:55:10 crc kubenswrapper[4861]: I0310 18:55:10.072746 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-vrbxv" Mar 10 18:55:10 crc kubenswrapper[4861]: I0310 18:55:10.548487 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-vrbxv"] Mar 10 18:55:10 crc kubenswrapper[4861]: W0310 18:55:10.553851 4861 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podff901024_935e_4fc3_98fb_0032deb4976c.slice/crio-1ef3ece6a9071bebe8b8f81495c6b2231dbe436f9180426dd3a1f066a2809b14 WatchSource:0}: Error finding container 1ef3ece6a9071bebe8b8f81495c6b2231dbe436f9180426dd3a1f066a2809b14: Status 404 returned error can't find the container with id 1ef3ece6a9071bebe8b8f81495c6b2231dbe436f9180426dd3a1f066a2809b14 Mar 10 18:55:10 crc kubenswrapper[4861]: I0310 18:55:10.856566 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-vrbxv" event={"ID":"ff901024-935e-4fc3-98fb-0032deb4976c","Type":"ContainerStarted","Data":"063fac30ccbb55043eefce6017439957e8620bb51f0a4899174baef024cd7ac1"} Mar 10 18:55:10 crc kubenswrapper[4861]: I0310 18:55:10.856608 4861 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-vrbxv" event={"ID":"ff901024-935e-4fc3-98fb-0032deb4976c","Type":"ContainerStarted","Data":"1ef3ece6a9071bebe8b8f81495c6b2231dbe436f9180426dd3a1f066a2809b14"} Mar 10 18:55:10 crc kubenswrapper[4861]: I0310 18:55:10.857254 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-vrbxv" Mar 10 18:55:21 crc kubenswrapper[4861]: I0310 18:55:21.991894 4861 patch_prober.go:28] interesting pod/machine-config-daemon-qttbr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 10 18:55:21 crc kubenswrapper[4861]: I0310 18:55:21.992440 4861 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qttbr" podUID="771189c2-452d-4204-a0b7-abfe9ba62bd0" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 10 18:55:26 crc kubenswrapper[4861]: I0310 18:55:26.823775 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-vrbxv" podStartSLOduration=17.823754744 podStartE2EDuration="17.823754744s" podCreationTimestamp="2026-03-10 18:55:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-10 18:55:10.899938838 +0000 UTC m=+454.663374808" watchObservedRunningTime="2026-03-10 18:55:26.823754744 +0000 UTC m=+470.587190704" Mar 10 18:55:26 crc kubenswrapper[4861]: I0310 18:55:26.826507 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-66b89bc4-lzgsc"] Mar 10 18:55:26 crc kubenswrapper[4861]: 
I0310 18:55:26.826756 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-66b89bc4-lzgsc" podUID="eda6e9e1-fc3d-483f-93f8-2fbbc60113e6" containerName="controller-manager" containerID="cri-o://e187dcd1f11bb9d229e6cdd2a82d4ca69988c6465cd65ac9ec0a06ec1c4218c8" gracePeriod=30 Mar 10 18:55:26 crc kubenswrapper[4861]: I0310 18:55:26.980743 4861 generic.go:334] "Generic (PLEG): container finished" podID="eda6e9e1-fc3d-483f-93f8-2fbbc60113e6" containerID="e187dcd1f11bb9d229e6cdd2a82d4ca69988c6465cd65ac9ec0a06ec1c4218c8" exitCode=0 Mar 10 18:55:26 crc kubenswrapper[4861]: I0310 18:55:26.980868 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-66b89bc4-lzgsc" event={"ID":"eda6e9e1-fc3d-483f-93f8-2fbbc60113e6","Type":"ContainerDied","Data":"e187dcd1f11bb9d229e6cdd2a82d4ca69988c6465cd65ac9ec0a06ec1c4218c8"} Mar 10 18:55:27 crc kubenswrapper[4861]: I0310 18:55:27.253223 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-66b89bc4-lzgsc" Mar 10 18:55:27 crc kubenswrapper[4861]: I0310 18:55:27.338631 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/eda6e9e1-fc3d-483f-93f8-2fbbc60113e6-serving-cert\") pod \"eda6e9e1-fc3d-483f-93f8-2fbbc60113e6\" (UID: \"eda6e9e1-fc3d-483f-93f8-2fbbc60113e6\") " Mar 10 18:55:27 crc kubenswrapper[4861]: I0310 18:55:27.338676 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/eda6e9e1-fc3d-483f-93f8-2fbbc60113e6-client-ca\") pod \"eda6e9e1-fc3d-483f-93f8-2fbbc60113e6\" (UID: \"eda6e9e1-fc3d-483f-93f8-2fbbc60113e6\") " Mar 10 18:55:27 crc kubenswrapper[4861]: I0310 18:55:27.338800 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/eda6e9e1-fc3d-483f-93f8-2fbbc60113e6-proxy-ca-bundles\") pod \"eda6e9e1-fc3d-483f-93f8-2fbbc60113e6\" (UID: \"eda6e9e1-fc3d-483f-93f8-2fbbc60113e6\") " Mar 10 18:55:27 crc kubenswrapper[4861]: I0310 18:55:27.338844 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xz5vl\" (UniqueName: \"kubernetes.io/projected/eda6e9e1-fc3d-483f-93f8-2fbbc60113e6-kube-api-access-xz5vl\") pod \"eda6e9e1-fc3d-483f-93f8-2fbbc60113e6\" (UID: \"eda6e9e1-fc3d-483f-93f8-2fbbc60113e6\") " Mar 10 18:55:27 crc kubenswrapper[4861]: I0310 18:55:27.338862 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eda6e9e1-fc3d-483f-93f8-2fbbc60113e6-config\") pod \"eda6e9e1-fc3d-483f-93f8-2fbbc60113e6\" (UID: \"eda6e9e1-fc3d-483f-93f8-2fbbc60113e6\") " Mar 10 18:55:27 crc kubenswrapper[4861]: I0310 18:55:27.339422 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/eda6e9e1-fc3d-483f-93f8-2fbbc60113e6-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "eda6e9e1-fc3d-483f-93f8-2fbbc60113e6" (UID: "eda6e9e1-fc3d-483f-93f8-2fbbc60113e6"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 10 18:55:27 crc kubenswrapper[4861]: I0310 18:55:27.339456 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eda6e9e1-fc3d-483f-93f8-2fbbc60113e6-config" (OuterVolumeSpecName: "config") pod "eda6e9e1-fc3d-483f-93f8-2fbbc60113e6" (UID: "eda6e9e1-fc3d-483f-93f8-2fbbc60113e6"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 10 18:55:27 crc kubenswrapper[4861]: I0310 18:55:27.340181 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eda6e9e1-fc3d-483f-93f8-2fbbc60113e6-client-ca" (OuterVolumeSpecName: "client-ca") pod "eda6e9e1-fc3d-483f-93f8-2fbbc60113e6" (UID: "eda6e9e1-fc3d-483f-93f8-2fbbc60113e6"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 10 18:55:27 crc kubenswrapper[4861]: I0310 18:55:27.343983 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eda6e9e1-fc3d-483f-93f8-2fbbc60113e6-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "eda6e9e1-fc3d-483f-93f8-2fbbc60113e6" (UID: "eda6e9e1-fc3d-483f-93f8-2fbbc60113e6"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 10 18:55:27 crc kubenswrapper[4861]: I0310 18:55:27.344085 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eda6e9e1-fc3d-483f-93f8-2fbbc60113e6-kube-api-access-xz5vl" (OuterVolumeSpecName: "kube-api-access-xz5vl") pod "eda6e9e1-fc3d-483f-93f8-2fbbc60113e6" (UID: "eda6e9e1-fc3d-483f-93f8-2fbbc60113e6"). InnerVolumeSpecName "kube-api-access-xz5vl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 10 18:55:27 crc kubenswrapper[4861]: I0310 18:55:27.440321 4861 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/eda6e9e1-fc3d-483f-93f8-2fbbc60113e6-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Mar 10 18:55:27 crc kubenswrapper[4861]: I0310 18:55:27.440375 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xz5vl\" (UniqueName: \"kubernetes.io/projected/eda6e9e1-fc3d-483f-93f8-2fbbc60113e6-kube-api-access-xz5vl\") on node \"crc\" DevicePath \"\"" Mar 10 18:55:27 crc kubenswrapper[4861]: I0310 18:55:27.440404 4861 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eda6e9e1-fc3d-483f-93f8-2fbbc60113e6-config\") on node \"crc\" DevicePath \"\"" Mar 10 18:55:27 crc kubenswrapper[4861]: I0310 18:55:27.440424 4861 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/eda6e9e1-fc3d-483f-93f8-2fbbc60113e6-client-ca\") on node \"crc\" DevicePath \"\"" Mar 10 18:55:27 crc kubenswrapper[4861]: I0310 18:55:27.440441 4861 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/eda6e9e1-fc3d-483f-93f8-2fbbc60113e6-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 10 18:55:27 crc kubenswrapper[4861]: I0310 18:55:27.932458 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-66d744d49f-v5phr"] Mar 10 18:55:27 crc kubenswrapper[4861]: E0310 18:55:27.932693 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eda6e9e1-fc3d-483f-93f8-2fbbc60113e6" containerName="controller-manager" Mar 10 18:55:27 crc kubenswrapper[4861]: I0310 18:55:27.932725 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="eda6e9e1-fc3d-483f-93f8-2fbbc60113e6" containerName="controller-manager" Mar 10 18:55:27 crc 
kubenswrapper[4861]: I0310 18:55:27.932843 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="eda6e9e1-fc3d-483f-93f8-2fbbc60113e6" containerName="controller-manager" Mar 10 18:55:27 crc kubenswrapper[4861]: I0310 18:55:27.937367 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-66d744d49f-v5phr" Mar 10 18:55:27 crc kubenswrapper[4861]: I0310 18:55:27.958197 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-66d744d49f-v5phr"] Mar 10 18:55:27 crc kubenswrapper[4861]: I0310 18:55:27.989963 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-66b89bc4-lzgsc" event={"ID":"eda6e9e1-fc3d-483f-93f8-2fbbc60113e6","Type":"ContainerDied","Data":"f157a4cd4b8aaafa9707b8b121ca1d95a05ea0abf63040f541c24b4492db2f60"} Mar 10 18:55:27 crc kubenswrapper[4861]: I0310 18:55:27.990040 4861 scope.go:117] "RemoveContainer" containerID="e187dcd1f11bb9d229e6cdd2a82d4ca69988c6465cd65ac9ec0a06ec1c4218c8" Mar 10 18:55:27 crc kubenswrapper[4861]: I0310 18:55:27.990083 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-66b89bc4-lzgsc" Mar 10 18:55:28 crc kubenswrapper[4861]: I0310 18:55:28.031302 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-66b89bc4-lzgsc"] Mar 10 18:55:28 crc kubenswrapper[4861]: I0310 18:55:28.035529 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-66b89bc4-lzgsc"] Mar 10 18:55:28 crc kubenswrapper[4861]: I0310 18:55:28.049063 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gzctw\" (UniqueName: \"kubernetes.io/projected/da0f1a90-6dd2-457c-b98b-40fa9e601b7b-kube-api-access-gzctw\") pod \"controller-manager-66d744d49f-v5phr\" (UID: \"da0f1a90-6dd2-457c-b98b-40fa9e601b7b\") " pod="openshift-controller-manager/controller-manager-66d744d49f-v5phr" Mar 10 18:55:28 crc kubenswrapper[4861]: I0310 18:55:28.049190 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/da0f1a90-6dd2-457c-b98b-40fa9e601b7b-proxy-ca-bundles\") pod \"controller-manager-66d744d49f-v5phr\" (UID: \"da0f1a90-6dd2-457c-b98b-40fa9e601b7b\") " pod="openshift-controller-manager/controller-manager-66d744d49f-v5phr" Mar 10 18:55:28 crc kubenswrapper[4861]: I0310 18:55:28.049270 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/da0f1a90-6dd2-457c-b98b-40fa9e601b7b-config\") pod \"controller-manager-66d744d49f-v5phr\" (UID: \"da0f1a90-6dd2-457c-b98b-40fa9e601b7b\") " pod="openshift-controller-manager/controller-manager-66d744d49f-v5phr" Mar 10 18:55:28 crc kubenswrapper[4861]: I0310 18:55:28.049500 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/da0f1a90-6dd2-457c-b98b-40fa9e601b7b-client-ca\") pod \"controller-manager-66d744d49f-v5phr\" (UID: \"da0f1a90-6dd2-457c-b98b-40fa9e601b7b\") " pod="openshift-controller-manager/controller-manager-66d744d49f-v5phr" Mar 10 18:55:28 crc kubenswrapper[4861]: I0310 18:55:28.049595 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/da0f1a90-6dd2-457c-b98b-40fa9e601b7b-serving-cert\") pod \"controller-manager-66d744d49f-v5phr\" (UID: \"da0f1a90-6dd2-457c-b98b-40fa9e601b7b\") " pod="openshift-controller-manager/controller-manager-66d744d49f-v5phr" Mar 10 18:55:28 crc kubenswrapper[4861]: I0310 18:55:28.150825 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gzctw\" (UniqueName: \"kubernetes.io/projected/da0f1a90-6dd2-457c-b98b-40fa9e601b7b-kube-api-access-gzctw\") pod \"controller-manager-66d744d49f-v5phr\" (UID: \"da0f1a90-6dd2-457c-b98b-40fa9e601b7b\") " pod="openshift-controller-manager/controller-manager-66d744d49f-v5phr" Mar 10 18:55:28 crc kubenswrapper[4861]: I0310 18:55:28.150897 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/da0f1a90-6dd2-457c-b98b-40fa9e601b7b-proxy-ca-bundles\") pod \"controller-manager-66d744d49f-v5phr\" (UID: \"da0f1a90-6dd2-457c-b98b-40fa9e601b7b\") " pod="openshift-controller-manager/controller-manager-66d744d49f-v5phr" Mar 10 18:55:28 crc kubenswrapper[4861]: I0310 18:55:28.150980 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/da0f1a90-6dd2-457c-b98b-40fa9e601b7b-config\") pod \"controller-manager-66d744d49f-v5phr\" (UID: \"da0f1a90-6dd2-457c-b98b-40fa9e601b7b\") " pod="openshift-controller-manager/controller-manager-66d744d49f-v5phr" Mar 10 18:55:28 crc kubenswrapper[4861]: I0310 
18:55:28.151056 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/da0f1a90-6dd2-457c-b98b-40fa9e601b7b-client-ca\") pod \"controller-manager-66d744d49f-v5phr\" (UID: \"da0f1a90-6dd2-457c-b98b-40fa9e601b7b\") " pod="openshift-controller-manager/controller-manager-66d744d49f-v5phr" Mar 10 18:55:28 crc kubenswrapper[4861]: I0310 18:55:28.151097 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/da0f1a90-6dd2-457c-b98b-40fa9e601b7b-serving-cert\") pod \"controller-manager-66d744d49f-v5phr\" (UID: \"da0f1a90-6dd2-457c-b98b-40fa9e601b7b\") " pod="openshift-controller-manager/controller-manager-66d744d49f-v5phr" Mar 10 18:55:28 crc kubenswrapper[4861]: I0310 18:55:28.152474 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/da0f1a90-6dd2-457c-b98b-40fa9e601b7b-config\") pod \"controller-manager-66d744d49f-v5phr\" (UID: \"da0f1a90-6dd2-457c-b98b-40fa9e601b7b\") " pod="openshift-controller-manager/controller-manager-66d744d49f-v5phr" Mar 10 18:55:28 crc kubenswrapper[4861]: I0310 18:55:28.152945 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/da0f1a90-6dd2-457c-b98b-40fa9e601b7b-client-ca\") pod \"controller-manager-66d744d49f-v5phr\" (UID: \"da0f1a90-6dd2-457c-b98b-40fa9e601b7b\") " pod="openshift-controller-manager/controller-manager-66d744d49f-v5phr" Mar 10 18:55:28 crc kubenswrapper[4861]: I0310 18:55:28.153223 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/da0f1a90-6dd2-457c-b98b-40fa9e601b7b-proxy-ca-bundles\") pod \"controller-manager-66d744d49f-v5phr\" (UID: \"da0f1a90-6dd2-457c-b98b-40fa9e601b7b\") " pod="openshift-controller-manager/controller-manager-66d744d49f-v5phr" 
Mar 10 18:55:28 crc kubenswrapper[4861]: I0310 18:55:28.160332 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/da0f1a90-6dd2-457c-b98b-40fa9e601b7b-serving-cert\") pod \"controller-manager-66d744d49f-v5phr\" (UID: \"da0f1a90-6dd2-457c-b98b-40fa9e601b7b\") " pod="openshift-controller-manager/controller-manager-66d744d49f-v5phr" Mar 10 18:55:28 crc kubenswrapper[4861]: I0310 18:55:28.176037 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gzctw\" (UniqueName: \"kubernetes.io/projected/da0f1a90-6dd2-457c-b98b-40fa9e601b7b-kube-api-access-gzctw\") pod \"controller-manager-66d744d49f-v5phr\" (UID: \"da0f1a90-6dd2-457c-b98b-40fa9e601b7b\") " pod="openshift-controller-manager/controller-manager-66d744d49f-v5phr" Mar 10 18:55:28 crc kubenswrapper[4861]: I0310 18:55:28.276577 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-66d744d49f-v5phr" Mar 10 18:55:28 crc kubenswrapper[4861]: I0310 18:55:28.557807 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-66d744d49f-v5phr"] Mar 10 18:55:28 crc kubenswrapper[4861]: W0310 18:55:28.568437 4861 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podda0f1a90_6dd2_457c_b98b_40fa9e601b7b.slice/crio-ce4626acd0a4a13c779109c0a62124a3a9100a527f4204b99fea2fa01383a1e8 WatchSource:0}: Error finding container ce4626acd0a4a13c779109c0a62124a3a9100a527f4204b99fea2fa01383a1e8: Status 404 returned error can't find the container with id ce4626acd0a4a13c779109c0a62124a3a9100a527f4204b99fea2fa01383a1e8 Mar 10 18:55:28 crc kubenswrapper[4861]: I0310 18:55:28.969963 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eda6e9e1-fc3d-483f-93f8-2fbbc60113e6" 
path="/var/lib/kubelet/pods/eda6e9e1-fc3d-483f-93f8-2fbbc60113e6/volumes" Mar 10 18:55:28 crc kubenswrapper[4861]: I0310 18:55:28.997902 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-66d744d49f-v5phr" event={"ID":"da0f1a90-6dd2-457c-b98b-40fa9e601b7b","Type":"ContainerStarted","Data":"f35209d0c7bf65390de0989bfa35baa8695d53f30b153ef0635150665996d40b"} Mar 10 18:55:28 crc kubenswrapper[4861]: I0310 18:55:28.997972 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-66d744d49f-v5phr" event={"ID":"da0f1a90-6dd2-457c-b98b-40fa9e601b7b","Type":"ContainerStarted","Data":"ce4626acd0a4a13c779109c0a62124a3a9100a527f4204b99fea2fa01383a1e8"} Mar 10 18:55:28 crc kubenswrapper[4861]: I0310 18:55:28.998477 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-66d744d49f-v5phr" Mar 10 18:55:29 crc kubenswrapper[4861]: I0310 18:55:29.005799 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-66d744d49f-v5phr" Mar 10 18:55:29 crc kubenswrapper[4861]: I0310 18:55:29.032719 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-66d744d49f-v5phr" podStartSLOduration=3.032686167 podStartE2EDuration="3.032686167s" podCreationTimestamp="2026-03-10 18:55:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-10 18:55:29.027758147 +0000 UTC m=+472.791194197" watchObservedRunningTime="2026-03-10 18:55:29.032686167 +0000 UTC m=+472.796122137" Mar 10 18:55:30 crc kubenswrapper[4861]: I0310 18:55:30.086144 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-vrbxv" Mar 10 18:55:30 crc 
kubenswrapper[4861]: I0310 18:55:30.163475 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-j4q94"] Mar 10 18:55:46 crc kubenswrapper[4861]: I0310 18:55:46.840046 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5c945c94b8-8tf66"] Mar 10 18:55:46 crc kubenswrapper[4861]: I0310 18:55:46.840782 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-5c945c94b8-8tf66" podUID="abefbbfb-3299-4e64-9075-16e4d6e3461c" containerName="route-controller-manager" containerID="cri-o://70d1c3e98300589644fbf1a5095cb17bd89c553f6120a45e61ad2cfc531f1fee" gracePeriod=30 Mar 10 18:55:47 crc kubenswrapper[4861]: I0310 18:55:47.163061 4861 generic.go:334] "Generic (PLEG): container finished" podID="abefbbfb-3299-4e64-9075-16e4d6e3461c" containerID="70d1c3e98300589644fbf1a5095cb17bd89c553f6120a45e61ad2cfc531f1fee" exitCode=0 Mar 10 18:55:47 crc kubenswrapper[4861]: I0310 18:55:47.163228 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5c945c94b8-8tf66" event={"ID":"abefbbfb-3299-4e64-9075-16e4d6e3461c","Type":"ContainerDied","Data":"70d1c3e98300589644fbf1a5095cb17bd89c553f6120a45e61ad2cfc531f1fee"} Mar 10 18:55:47 crc kubenswrapper[4861]: I0310 18:55:47.454387 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c945c94b8-8tf66" Mar 10 18:55:47 crc kubenswrapper[4861]: I0310 18:55:47.472174 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-92bff\" (UniqueName: \"kubernetes.io/projected/abefbbfb-3299-4e64-9075-16e4d6e3461c-kube-api-access-92bff\") pod \"abefbbfb-3299-4e64-9075-16e4d6e3461c\" (UID: \"abefbbfb-3299-4e64-9075-16e4d6e3461c\") " Mar 10 18:55:47 crc kubenswrapper[4861]: I0310 18:55:47.472258 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/abefbbfb-3299-4e64-9075-16e4d6e3461c-serving-cert\") pod \"abefbbfb-3299-4e64-9075-16e4d6e3461c\" (UID: \"abefbbfb-3299-4e64-9075-16e4d6e3461c\") " Mar 10 18:55:47 crc kubenswrapper[4861]: I0310 18:55:47.472318 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/abefbbfb-3299-4e64-9075-16e4d6e3461c-client-ca\") pod \"abefbbfb-3299-4e64-9075-16e4d6e3461c\" (UID: \"abefbbfb-3299-4e64-9075-16e4d6e3461c\") " Mar 10 18:55:47 crc kubenswrapper[4861]: I0310 18:55:47.473779 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/abefbbfb-3299-4e64-9075-16e4d6e3461c-client-ca" (OuterVolumeSpecName: "client-ca") pod "abefbbfb-3299-4e64-9075-16e4d6e3461c" (UID: "abefbbfb-3299-4e64-9075-16e4d6e3461c"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 10 18:55:47 crc kubenswrapper[4861]: I0310 18:55:47.536551 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/abefbbfb-3299-4e64-9075-16e4d6e3461c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "abefbbfb-3299-4e64-9075-16e4d6e3461c" (UID: "abefbbfb-3299-4e64-9075-16e4d6e3461c"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 10 18:55:47 crc kubenswrapper[4861]: I0310 18:55:47.536653 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/abefbbfb-3299-4e64-9075-16e4d6e3461c-kube-api-access-92bff" (OuterVolumeSpecName: "kube-api-access-92bff") pod "abefbbfb-3299-4e64-9075-16e4d6e3461c" (UID: "abefbbfb-3299-4e64-9075-16e4d6e3461c"). InnerVolumeSpecName "kube-api-access-92bff". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 10 18:55:47 crc kubenswrapper[4861]: I0310 18:55:47.573158 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/abefbbfb-3299-4e64-9075-16e4d6e3461c-config\") pod \"abefbbfb-3299-4e64-9075-16e4d6e3461c\" (UID: \"abefbbfb-3299-4e64-9075-16e4d6e3461c\") " Mar 10 18:55:47 crc kubenswrapper[4861]: I0310 18:55:47.573433 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-92bff\" (UniqueName: \"kubernetes.io/projected/abefbbfb-3299-4e64-9075-16e4d6e3461c-kube-api-access-92bff\") on node \"crc\" DevicePath \"\"" Mar 10 18:55:47 crc kubenswrapper[4861]: I0310 18:55:47.573445 4861 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/abefbbfb-3299-4e64-9075-16e4d6e3461c-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 10 18:55:47 crc kubenswrapper[4861]: I0310 18:55:47.573455 4861 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/abefbbfb-3299-4e64-9075-16e4d6e3461c-client-ca\") on node \"crc\" DevicePath \"\"" Mar 10 18:55:47 crc kubenswrapper[4861]: I0310 18:55:47.573761 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/abefbbfb-3299-4e64-9075-16e4d6e3461c-config" (OuterVolumeSpecName: "config") pod "abefbbfb-3299-4e64-9075-16e4d6e3461c" (UID: "abefbbfb-3299-4e64-9075-16e4d6e3461c"). 
InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 10 18:55:47 crc kubenswrapper[4861]: I0310 18:55:47.675034 4861 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/abefbbfb-3299-4e64-9075-16e4d6e3461c-config\") on node \"crc\" DevicePath \"\"" Mar 10 18:55:47 crc kubenswrapper[4861]: I0310 18:55:47.947345 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-74cc6c9d5c-6nzp4"] Mar 10 18:55:47 crc kubenswrapper[4861]: E0310 18:55:47.947763 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="abefbbfb-3299-4e64-9075-16e4d6e3461c" containerName="route-controller-manager" Mar 10 18:55:47 crc kubenswrapper[4861]: I0310 18:55:47.947774 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="abefbbfb-3299-4e64-9075-16e4d6e3461c" containerName="route-controller-manager" Mar 10 18:55:47 crc kubenswrapper[4861]: I0310 18:55:47.947860 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="abefbbfb-3299-4e64-9075-16e4d6e3461c" containerName="route-controller-manager" Mar 10 18:55:47 crc kubenswrapper[4861]: I0310 18:55:47.948176 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-74cc6c9d5c-6nzp4" Mar 10 18:55:47 crc kubenswrapper[4861]: I0310 18:55:47.964045 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-74cc6c9d5c-6nzp4"] Mar 10 18:55:47 crc kubenswrapper[4861]: I0310 18:55:47.978788 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a444ec77-f7f2-4369-a871-f1ce66a9f98b-client-ca\") pod \"route-controller-manager-74cc6c9d5c-6nzp4\" (UID: \"a444ec77-f7f2-4369-a871-f1ce66a9f98b\") " pod="openshift-route-controller-manager/route-controller-manager-74cc6c9d5c-6nzp4" Mar 10 18:55:47 crc kubenswrapper[4861]: I0310 18:55:47.979464 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a444ec77-f7f2-4369-a871-f1ce66a9f98b-config\") pod \"route-controller-manager-74cc6c9d5c-6nzp4\" (UID: \"a444ec77-f7f2-4369-a871-f1ce66a9f98b\") " pod="openshift-route-controller-manager/route-controller-manager-74cc6c9d5c-6nzp4" Mar 10 18:55:47 crc kubenswrapper[4861]: I0310 18:55:47.981416 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sk7lg\" (UniqueName: \"kubernetes.io/projected/a444ec77-f7f2-4369-a871-f1ce66a9f98b-kube-api-access-sk7lg\") pod \"route-controller-manager-74cc6c9d5c-6nzp4\" (UID: \"a444ec77-f7f2-4369-a871-f1ce66a9f98b\") " pod="openshift-route-controller-manager/route-controller-manager-74cc6c9d5c-6nzp4" Mar 10 18:55:47 crc kubenswrapper[4861]: I0310 18:55:47.981476 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a444ec77-f7f2-4369-a871-f1ce66a9f98b-serving-cert\") pod \"route-controller-manager-74cc6c9d5c-6nzp4\" (UID: 
\"a444ec77-f7f2-4369-a871-f1ce66a9f98b\") " pod="openshift-route-controller-manager/route-controller-manager-74cc6c9d5c-6nzp4" Mar 10 18:55:48 crc kubenswrapper[4861]: I0310 18:55:48.082265 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a444ec77-f7f2-4369-a871-f1ce66a9f98b-config\") pod \"route-controller-manager-74cc6c9d5c-6nzp4\" (UID: \"a444ec77-f7f2-4369-a871-f1ce66a9f98b\") " pod="openshift-route-controller-manager/route-controller-manager-74cc6c9d5c-6nzp4" Mar 10 18:55:48 crc kubenswrapper[4861]: I0310 18:55:48.082336 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sk7lg\" (UniqueName: \"kubernetes.io/projected/a444ec77-f7f2-4369-a871-f1ce66a9f98b-kube-api-access-sk7lg\") pod \"route-controller-manager-74cc6c9d5c-6nzp4\" (UID: \"a444ec77-f7f2-4369-a871-f1ce66a9f98b\") " pod="openshift-route-controller-manager/route-controller-manager-74cc6c9d5c-6nzp4" Mar 10 18:55:48 crc kubenswrapper[4861]: I0310 18:55:48.082378 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a444ec77-f7f2-4369-a871-f1ce66a9f98b-serving-cert\") pod \"route-controller-manager-74cc6c9d5c-6nzp4\" (UID: \"a444ec77-f7f2-4369-a871-f1ce66a9f98b\") " pod="openshift-route-controller-manager/route-controller-manager-74cc6c9d5c-6nzp4" Mar 10 18:55:48 crc kubenswrapper[4861]: I0310 18:55:48.083457 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a444ec77-f7f2-4369-a871-f1ce66a9f98b-client-ca\") pod \"route-controller-manager-74cc6c9d5c-6nzp4\" (UID: \"a444ec77-f7f2-4369-a871-f1ce66a9f98b\") " pod="openshift-route-controller-manager/route-controller-manager-74cc6c9d5c-6nzp4" Mar 10 18:55:48 crc kubenswrapper[4861]: I0310 18:55:48.084255 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a444ec77-f7f2-4369-a871-f1ce66a9f98b-client-ca\") pod \"route-controller-manager-74cc6c9d5c-6nzp4\" (UID: \"a444ec77-f7f2-4369-a871-f1ce66a9f98b\") " pod="openshift-route-controller-manager/route-controller-manager-74cc6c9d5c-6nzp4" Mar 10 18:55:48 crc kubenswrapper[4861]: I0310 18:55:48.085126 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a444ec77-f7f2-4369-a871-f1ce66a9f98b-config\") pod \"route-controller-manager-74cc6c9d5c-6nzp4\" (UID: \"a444ec77-f7f2-4369-a871-f1ce66a9f98b\") " pod="openshift-route-controller-manager/route-controller-manager-74cc6c9d5c-6nzp4" Mar 10 18:55:48 crc kubenswrapper[4861]: I0310 18:55:48.087761 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a444ec77-f7f2-4369-a871-f1ce66a9f98b-serving-cert\") pod \"route-controller-manager-74cc6c9d5c-6nzp4\" (UID: \"a444ec77-f7f2-4369-a871-f1ce66a9f98b\") " pod="openshift-route-controller-manager/route-controller-manager-74cc6c9d5c-6nzp4" Mar 10 18:55:48 crc kubenswrapper[4861]: I0310 18:55:48.097212 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sk7lg\" (UniqueName: \"kubernetes.io/projected/a444ec77-f7f2-4369-a871-f1ce66a9f98b-kube-api-access-sk7lg\") pod \"route-controller-manager-74cc6c9d5c-6nzp4\" (UID: \"a444ec77-f7f2-4369-a871-f1ce66a9f98b\") " pod="openshift-route-controller-manager/route-controller-manager-74cc6c9d5c-6nzp4" Mar 10 18:55:48 crc kubenswrapper[4861]: I0310 18:55:48.169771 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5c945c94b8-8tf66" event={"ID":"abefbbfb-3299-4e64-9075-16e4d6e3461c","Type":"ContainerDied","Data":"63d27368ffa12a3f6cd3879f3b1dc606b2a151a2fcb1b7cdfeb1ad6518cc5168"} Mar 10 18:55:48 crc kubenswrapper[4861]: I0310 18:55:48.169843 4861 
scope.go:117] "RemoveContainer" containerID="70d1c3e98300589644fbf1a5095cb17bd89c553f6120a45e61ad2cfc531f1fee" Mar 10 18:55:48 crc kubenswrapper[4861]: I0310 18:55:48.170032 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c945c94b8-8tf66" Mar 10 18:55:48 crc kubenswrapper[4861]: I0310 18:55:48.212464 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5c945c94b8-8tf66"] Mar 10 18:55:48 crc kubenswrapper[4861]: I0310 18:55:48.216599 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5c945c94b8-8tf66"] Mar 10 18:55:48 crc kubenswrapper[4861]: I0310 18:55:48.275212 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-74cc6c9d5c-6nzp4" Mar 10 18:55:48 crc kubenswrapper[4861]: I0310 18:55:48.757220 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-74cc6c9d5c-6nzp4"] Mar 10 18:55:48 crc kubenswrapper[4861]: I0310 18:55:48.967213 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="abefbbfb-3299-4e64-9075-16e4d6e3461c" path="/var/lib/kubelet/pods/abefbbfb-3299-4e64-9075-16e4d6e3461c/volumes" Mar 10 18:55:49 crc kubenswrapper[4861]: I0310 18:55:49.177790 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-74cc6c9d5c-6nzp4" event={"ID":"a444ec77-f7f2-4369-a871-f1ce66a9f98b","Type":"ContainerStarted","Data":"12f9e2293d52cfa77797ffa95431d6db140a9e65e71bc0487e89919e718528f2"} Mar 10 18:55:49 crc kubenswrapper[4861]: I0310 18:55:49.177837 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-74cc6c9d5c-6nzp4" 
event={"ID":"a444ec77-f7f2-4369-a871-f1ce66a9f98b","Type":"ContainerStarted","Data":"68836e4aff3da3ed4951d3af3569d1db9c3b99d7c893e65ba3001dee863edb34"} Mar 10 18:55:49 crc kubenswrapper[4861]: I0310 18:55:49.178130 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-74cc6c9d5c-6nzp4" Mar 10 18:55:49 crc kubenswrapper[4861]: I0310 18:55:49.201228 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-74cc6c9d5c-6nzp4" podStartSLOduration=3.201201556 podStartE2EDuration="3.201201556s" podCreationTimestamp="2026-03-10 18:55:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-10 18:55:49.19923621 +0000 UTC m=+492.962672190" watchObservedRunningTime="2026-03-10 18:55:49.201201556 +0000 UTC m=+492.964637556" Mar 10 18:55:49 crc kubenswrapper[4861]: I0310 18:55:49.352595 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-74cc6c9d5c-6nzp4" Mar 10 18:55:50 crc kubenswrapper[4861]: I0310 18:55:50.717424 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-hw926"] Mar 10 18:55:50 crc kubenswrapper[4861]: I0310 18:55:50.734032 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-hw926" podUID="c76065df-dec9-4b14-bd49-8e2d134bf53f" containerName="registry-server" containerID="cri-o://127551d9904d53fc28ea6d014a574f56d6e5c500474fdd4ac5795d712fe8e5f4" gracePeriod=30 Mar 10 18:55:50 crc kubenswrapper[4861]: I0310 18:55:50.742906 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-pmtqc"] Mar 10 18:55:50 crc kubenswrapper[4861]: I0310 18:55:50.743232 4861 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-pmtqc" podUID="edc16f3e-454b-4167-9d26-c50bba23281e" containerName="registry-server" containerID="cri-o://8228718be0002a5ecc4ba8dbaec34283f30812431d34664a126be79d821b5589" gracePeriod=30 Mar 10 18:55:50 crc kubenswrapper[4861]: I0310 18:55:50.767905 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-7qnc2"] Mar 10 18:55:50 crc kubenswrapper[4861]: I0310 18:55:50.768067 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-7qnc2" podUID="8f1b1590-e261-4e1f-9427-039f5a9b3db7" containerName="marketplace-operator" containerID="cri-o://a9e4b40108ac235c7db1060fb2692561d1a8668a2d97b229aca23bef36731521" gracePeriod=30 Mar 10 18:55:50 crc kubenswrapper[4861]: I0310 18:55:50.782606 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-646bt"] Mar 10 18:55:50 crc kubenswrapper[4861]: I0310 18:55:50.782960 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-646bt" podUID="1e1c83ef-91ae-4931-8e31-32890189bb47" containerName="registry-server" containerID="cri-o://4e12b40346a7176f84ae4be91f0b485eaab5f60de20461b9e51f58cd79f06d05" gracePeriod=30 Mar 10 18:55:50 crc kubenswrapper[4861]: I0310 18:55:50.804784 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-q2hgk"] Mar 10 18:55:50 crc kubenswrapper[4861]: I0310 18:55:50.805078 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-q2hgk" podUID="876b4458-98a1-4dc2-af8a-3390a56cad59" containerName="registry-server" containerID="cri-o://627702c408d75e5dc7cd60f67e6f39ea4d670ac03cdc4b5c51af4c09bcc5555b" gracePeriod=30 Mar 10 18:55:50 crc kubenswrapper[4861]: 
I0310 18:55:50.832636 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-q9slc"] Mar 10 18:55:50 crc kubenswrapper[4861]: I0310 18:55:50.833410 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-q9slc" Mar 10 18:55:50 crc kubenswrapper[4861]: I0310 18:55:50.846662 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-q9slc"] Mar 10 18:55:51 crc kubenswrapper[4861]: I0310 18:55:51.021004 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/de5224d4-6cff-4974-a40c-9d5d44ee55fb-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-q9slc\" (UID: \"de5224d4-6cff-4974-a40c-9d5d44ee55fb\") " pod="openshift-marketplace/marketplace-operator-79b997595-q9slc" Mar 10 18:55:51 crc kubenswrapper[4861]: I0310 18:55:51.022600 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6ffjp\" (UniqueName: \"kubernetes.io/projected/de5224d4-6cff-4974-a40c-9d5d44ee55fb-kube-api-access-6ffjp\") pod \"marketplace-operator-79b997595-q9slc\" (UID: \"de5224d4-6cff-4974-a40c-9d5d44ee55fb\") " pod="openshift-marketplace/marketplace-operator-79b997595-q9slc" Mar 10 18:55:51 crc kubenswrapper[4861]: I0310 18:55:51.022684 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/de5224d4-6cff-4974-a40c-9d5d44ee55fb-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-q9slc\" (UID: \"de5224d4-6cff-4974-a40c-9d5d44ee55fb\") " pod="openshift-marketplace/marketplace-operator-79b997595-q9slc" Mar 10 18:55:51 crc kubenswrapper[4861]: I0310 18:55:51.124432 4861 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/de5224d4-6cff-4974-a40c-9d5d44ee55fb-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-q9slc\" (UID: \"de5224d4-6cff-4974-a40c-9d5d44ee55fb\") " pod="openshift-marketplace/marketplace-operator-79b997595-q9slc" Mar 10 18:55:51 crc kubenswrapper[4861]: I0310 18:55:51.125065 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6ffjp\" (UniqueName: \"kubernetes.io/projected/de5224d4-6cff-4974-a40c-9d5d44ee55fb-kube-api-access-6ffjp\") pod \"marketplace-operator-79b997595-q9slc\" (UID: \"de5224d4-6cff-4974-a40c-9d5d44ee55fb\") " pod="openshift-marketplace/marketplace-operator-79b997595-q9slc" Mar 10 18:55:51 crc kubenswrapper[4861]: I0310 18:55:51.125148 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/de5224d4-6cff-4974-a40c-9d5d44ee55fb-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-q9slc\" (UID: \"de5224d4-6cff-4974-a40c-9d5d44ee55fb\") " pod="openshift-marketplace/marketplace-operator-79b997595-q9slc" Mar 10 18:55:51 crc kubenswrapper[4861]: I0310 18:55:51.126595 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/de5224d4-6cff-4974-a40c-9d5d44ee55fb-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-q9slc\" (UID: \"de5224d4-6cff-4974-a40c-9d5d44ee55fb\") " pod="openshift-marketplace/marketplace-operator-79b997595-q9slc" Mar 10 18:55:51 crc kubenswrapper[4861]: I0310 18:55:51.134682 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/de5224d4-6cff-4974-a40c-9d5d44ee55fb-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-q9slc\" (UID: 
\"de5224d4-6cff-4974-a40c-9d5d44ee55fb\") " pod="openshift-marketplace/marketplace-operator-79b997595-q9slc" Mar 10 18:55:51 crc kubenswrapper[4861]: I0310 18:55:51.180331 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6ffjp\" (UniqueName: \"kubernetes.io/projected/de5224d4-6cff-4974-a40c-9d5d44ee55fb-kube-api-access-6ffjp\") pod \"marketplace-operator-79b997595-q9slc\" (UID: \"de5224d4-6cff-4974-a40c-9d5d44ee55fb\") " pod="openshift-marketplace/marketplace-operator-79b997595-q9slc" Mar 10 18:55:51 crc kubenswrapper[4861]: I0310 18:55:51.191056 4861 generic.go:334] "Generic (PLEG): container finished" podID="8f1b1590-e261-4e1f-9427-039f5a9b3db7" containerID="a9e4b40108ac235c7db1060fb2692561d1a8668a2d97b229aca23bef36731521" exitCode=0 Mar 10 18:55:51 crc kubenswrapper[4861]: I0310 18:55:51.191116 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-7qnc2" event={"ID":"8f1b1590-e261-4e1f-9427-039f5a9b3db7","Type":"ContainerDied","Data":"a9e4b40108ac235c7db1060fb2692561d1a8668a2d97b229aca23bef36731521"} Mar 10 18:55:51 crc kubenswrapper[4861]: I0310 18:55:51.191149 4861 scope.go:117] "RemoveContainer" containerID="b54b2d8fbba5bcb4a72f7ece97d10d5a9038a3a0283e5679a5f7a4ab8c015b55" Mar 10 18:55:51 crc kubenswrapper[4861]: I0310 18:55:51.193443 4861 generic.go:334] "Generic (PLEG): container finished" podID="1e1c83ef-91ae-4931-8e31-32890189bb47" containerID="4e12b40346a7176f84ae4be91f0b485eaab5f60de20461b9e51f58cd79f06d05" exitCode=0 Mar 10 18:55:51 crc kubenswrapper[4861]: I0310 18:55:51.193479 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-646bt" event={"ID":"1e1c83ef-91ae-4931-8e31-32890189bb47","Type":"ContainerDied","Data":"4e12b40346a7176f84ae4be91f0b485eaab5f60de20461b9e51f58cd79f06d05"} Mar 10 18:55:51 crc kubenswrapper[4861]: I0310 18:55:51.195523 4861 generic.go:334] "Generic (PLEG): container finished" 
podID="c76065df-dec9-4b14-bd49-8e2d134bf53f" containerID="127551d9904d53fc28ea6d014a574f56d6e5c500474fdd4ac5795d712fe8e5f4" exitCode=0 Mar 10 18:55:51 crc kubenswrapper[4861]: I0310 18:55:51.195555 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hw926" event={"ID":"c76065df-dec9-4b14-bd49-8e2d134bf53f","Type":"ContainerDied","Data":"127551d9904d53fc28ea6d014a574f56d6e5c500474fdd4ac5795d712fe8e5f4"} Mar 10 18:55:51 crc kubenswrapper[4861]: I0310 18:55:51.198855 4861 generic.go:334] "Generic (PLEG): container finished" podID="876b4458-98a1-4dc2-af8a-3390a56cad59" containerID="627702c408d75e5dc7cd60f67e6f39ea4d670ac03cdc4b5c51af4c09bcc5555b" exitCode=0 Mar 10 18:55:51 crc kubenswrapper[4861]: I0310 18:55:51.198926 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-q2hgk" event={"ID":"876b4458-98a1-4dc2-af8a-3390a56cad59","Type":"ContainerDied","Data":"627702c408d75e5dc7cd60f67e6f39ea4d670ac03cdc4b5c51af4c09bcc5555b"} Mar 10 18:55:51 crc kubenswrapper[4861]: E0310 18:55:51.199752 4861 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 627702c408d75e5dc7cd60f67e6f39ea4d670ac03cdc4b5c51af4c09bcc5555b is running failed: container process not found" containerID="627702c408d75e5dc7cd60f67e6f39ea4d670ac03cdc4b5c51af4c09bcc5555b" cmd=["grpc_health_probe","-addr=:50051"] Mar 10 18:55:51 crc kubenswrapper[4861]: E0310 18:55:51.200057 4861 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 627702c408d75e5dc7cd60f67e6f39ea4d670ac03cdc4b5c51af4c09bcc5555b is running failed: container process not found" containerID="627702c408d75e5dc7cd60f67e6f39ea4d670ac03cdc4b5c51af4c09bcc5555b" cmd=["grpc_health_probe","-addr=:50051"] Mar 10 18:55:51 crc kubenswrapper[4861]: E0310 18:55:51.200500 4861 
log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 627702c408d75e5dc7cd60f67e6f39ea4d670ac03cdc4b5c51af4c09bcc5555b is running failed: container process not found" containerID="627702c408d75e5dc7cd60f67e6f39ea4d670ac03cdc4b5c51af4c09bcc5555b" cmd=["grpc_health_probe","-addr=:50051"] Mar 10 18:55:51 crc kubenswrapper[4861]: E0310 18:55:51.200530 4861 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 627702c408d75e5dc7cd60f67e6f39ea4d670ac03cdc4b5c51af4c09bcc5555b is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/redhat-operators-q2hgk" podUID="876b4458-98a1-4dc2-af8a-3390a56cad59" containerName="registry-server" Mar 10 18:55:51 crc kubenswrapper[4861]: I0310 18:55:51.202451 4861 generic.go:334] "Generic (PLEG): container finished" podID="edc16f3e-454b-4167-9d26-c50bba23281e" containerID="8228718be0002a5ecc4ba8dbaec34283f30812431d34664a126be79d821b5589" exitCode=0 Mar 10 18:55:51 crc kubenswrapper[4861]: I0310 18:55:51.202673 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pmtqc" event={"ID":"edc16f3e-454b-4167-9d26-c50bba23281e","Type":"ContainerDied","Data":"8228718be0002a5ecc4ba8dbaec34283f30812431d34664a126be79d821b5589"} Mar 10 18:55:51 crc kubenswrapper[4861]: I0310 18:55:51.288848 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-q9slc" Mar 10 18:55:51 crc kubenswrapper[4861]: I0310 18:55:51.318903 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-646bt" Mar 10 18:55:51 crc kubenswrapper[4861]: I0310 18:55:51.421093 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-7qnc2" Mar 10 18:55:51 crc kubenswrapper[4861]: I0310 18:55:51.429885 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1e1c83ef-91ae-4931-8e31-32890189bb47-utilities\") pod \"1e1c83ef-91ae-4931-8e31-32890189bb47\" (UID: \"1e1c83ef-91ae-4931-8e31-32890189bb47\") " Mar 10 18:55:51 crc kubenswrapper[4861]: I0310 18:55:51.430184 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-77wsn\" (UniqueName: \"kubernetes.io/projected/1e1c83ef-91ae-4931-8e31-32890189bb47-kube-api-access-77wsn\") pod \"1e1c83ef-91ae-4931-8e31-32890189bb47\" (UID: \"1e1c83ef-91ae-4931-8e31-32890189bb47\") " Mar 10 18:55:51 crc kubenswrapper[4861]: I0310 18:55:51.430292 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1e1c83ef-91ae-4931-8e31-32890189bb47-catalog-content\") pod \"1e1c83ef-91ae-4931-8e31-32890189bb47\" (UID: \"1e1c83ef-91ae-4931-8e31-32890189bb47\") " Mar 10 18:55:51 crc kubenswrapper[4861]: I0310 18:55:51.430852 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1e1c83ef-91ae-4931-8e31-32890189bb47-utilities" (OuterVolumeSpecName: "utilities") pod "1e1c83ef-91ae-4931-8e31-32890189bb47" (UID: "1e1c83ef-91ae-4931-8e31-32890189bb47"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 10 18:55:51 crc kubenswrapper[4861]: I0310 18:55:51.431142 4861 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1e1c83ef-91ae-4931-8e31-32890189bb47-utilities\") on node \"crc\" DevicePath \"\"" Mar 10 18:55:51 crc kubenswrapper[4861]: I0310 18:55:51.437962 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1e1c83ef-91ae-4931-8e31-32890189bb47-kube-api-access-77wsn" (OuterVolumeSpecName: "kube-api-access-77wsn") pod "1e1c83ef-91ae-4931-8e31-32890189bb47" (UID: "1e1c83ef-91ae-4931-8e31-32890189bb47"). InnerVolumeSpecName "kube-api-access-77wsn". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 10 18:55:51 crc kubenswrapper[4861]: I0310 18:55:51.477376 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1e1c83ef-91ae-4931-8e31-32890189bb47-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1e1c83ef-91ae-4931-8e31-32890189bb47" (UID: "1e1c83ef-91ae-4931-8e31-32890189bb47"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 10 18:55:51 crc kubenswrapper[4861]: I0310 18:55:51.532481 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f1b1590-e261-4e1f-9427-039f5a9b3db7-marketplace-trusted-ca\") pod \"8f1b1590-e261-4e1f-9427-039f5a9b3db7\" (UID: \"8f1b1590-e261-4e1f-9427-039f5a9b3db7\") " Mar 10 18:55:51 crc kubenswrapper[4861]: I0310 18:55:51.532640 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jzlgq\" (UniqueName: \"kubernetes.io/projected/8f1b1590-e261-4e1f-9427-039f5a9b3db7-kube-api-access-jzlgq\") pod \"8f1b1590-e261-4e1f-9427-039f5a9b3db7\" (UID: \"8f1b1590-e261-4e1f-9427-039f5a9b3db7\") " Mar 10 18:55:51 crc kubenswrapper[4861]: I0310 18:55:51.532730 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/8f1b1590-e261-4e1f-9427-039f5a9b3db7-marketplace-operator-metrics\") pod \"8f1b1590-e261-4e1f-9427-039f5a9b3db7\" (UID: \"8f1b1590-e261-4e1f-9427-039f5a9b3db7\") " Mar 10 18:55:51 crc kubenswrapper[4861]: I0310 18:55:51.533022 4861 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1e1c83ef-91ae-4931-8e31-32890189bb47-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 10 18:55:51 crc kubenswrapper[4861]: I0310 18:55:51.533038 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-77wsn\" (UniqueName: \"kubernetes.io/projected/1e1c83ef-91ae-4931-8e31-32890189bb47-kube-api-access-77wsn\") on node \"crc\" DevicePath \"\"" Mar 10 18:55:51 crc kubenswrapper[4861]: I0310 18:55:51.533919 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f1b1590-e261-4e1f-9427-039f5a9b3db7-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") 
pod "8f1b1590-e261-4e1f-9427-039f5a9b3db7" (UID: "8f1b1590-e261-4e1f-9427-039f5a9b3db7"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 10 18:55:51 crc kubenswrapper[4861]: I0310 18:55:51.535972 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f1b1590-e261-4e1f-9427-039f5a9b3db7-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "8f1b1590-e261-4e1f-9427-039f5a9b3db7" (UID: "8f1b1590-e261-4e1f-9427-039f5a9b3db7"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 10 18:55:51 crc kubenswrapper[4861]: I0310 18:55:51.537684 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f1b1590-e261-4e1f-9427-039f5a9b3db7-kube-api-access-jzlgq" (OuterVolumeSpecName: "kube-api-access-jzlgq") pod "8f1b1590-e261-4e1f-9427-039f5a9b3db7" (UID: "8f1b1590-e261-4e1f-9427-039f5a9b3db7"). InnerVolumeSpecName "kube-api-access-jzlgq". 
PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 10 18:55:51 crc kubenswrapper[4861]: I0310 18:55:51.634616 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jzlgq\" (UniqueName: \"kubernetes.io/projected/8f1b1590-e261-4e1f-9427-039f5a9b3db7-kube-api-access-jzlgq\") on node \"crc\" DevicePath \"\""
Mar 10 18:55:51 crc kubenswrapper[4861]: I0310 18:55:51.634657 4861 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/8f1b1590-e261-4e1f-9427-039f5a9b3db7-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\""
Mar 10 18:55:51 crc kubenswrapper[4861]: I0310 18:55:51.634671 4861 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f1b1590-e261-4e1f-9427-039f5a9b3db7-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\""
Mar 10 18:55:51 crc kubenswrapper[4861]: I0310 18:55:51.725656 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-q9slc"]
Mar 10 18:55:51 crc kubenswrapper[4861]: W0310 18:55:51.734273 4861 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podde5224d4_6cff_4974_a40c_9d5d44ee55fb.slice/crio-3a5b6194f12cee9cf6d8d25632853d5efac8b3f02d0befb1973c1e193901d4cb WatchSource:0}: Error finding container 3a5b6194f12cee9cf6d8d25632853d5efac8b3f02d0befb1973c1e193901d4cb: Status 404 returned error can't find the container with id 3a5b6194f12cee9cf6d8d25632853d5efac8b3f02d0befb1973c1e193901d4cb
Mar 10 18:55:51 crc kubenswrapper[4861]: I0310 18:55:51.938405 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-hw926"
Mar 10 18:55:51 crc kubenswrapper[4861]: I0310 18:55:51.942780 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-pmtqc"
Mar 10 18:55:51 crc kubenswrapper[4861]: I0310 18:55:51.991748 4861 patch_prober.go:28] interesting pod/machine-config-daemon-qttbr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Mar 10 18:55:51 crc kubenswrapper[4861]: I0310 18:55:51.992021 4861 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qttbr" podUID="771189c2-452d-4204-a0b7-abfe9ba62bd0" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Mar 10 18:55:52 crc kubenswrapper[4861]: I0310 18:55:52.007495 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-q2hgk"
Mar 10 18:55:52 crc kubenswrapper[4861]: I0310 18:55:52.038031 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/edc16f3e-454b-4167-9d26-c50bba23281e-catalog-content\") pod \"edc16f3e-454b-4167-9d26-c50bba23281e\" (UID: \"edc16f3e-454b-4167-9d26-c50bba23281e\") "
Mar 10 18:55:52 crc kubenswrapper[4861]: I0310 18:55:52.038118 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lghk9\" (UniqueName: \"kubernetes.io/projected/edc16f3e-454b-4167-9d26-c50bba23281e-kube-api-access-lghk9\") pod \"edc16f3e-454b-4167-9d26-c50bba23281e\" (UID: \"edc16f3e-454b-4167-9d26-c50bba23281e\") "
Mar 10 18:55:52 crc kubenswrapper[4861]: I0310 18:55:52.038159 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c76065df-dec9-4b14-bd49-8e2d134bf53f-catalog-content\") pod \"c76065df-dec9-4b14-bd49-8e2d134bf53f\" (UID: \"c76065df-dec9-4b14-bd49-8e2d134bf53f\") "
Mar 10 18:55:52 crc kubenswrapper[4861]: I0310 18:55:52.038180 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/edc16f3e-454b-4167-9d26-c50bba23281e-utilities\") pod \"edc16f3e-454b-4167-9d26-c50bba23281e\" (UID: \"edc16f3e-454b-4167-9d26-c50bba23281e\") "
Mar 10 18:55:52 crc kubenswrapper[4861]: I0310 18:55:52.038201 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fckjm\" (UniqueName: \"kubernetes.io/projected/c76065df-dec9-4b14-bd49-8e2d134bf53f-kube-api-access-fckjm\") pod \"c76065df-dec9-4b14-bd49-8e2d134bf53f\" (UID: \"c76065df-dec9-4b14-bd49-8e2d134bf53f\") "
Mar 10 18:55:52 crc kubenswrapper[4861]: I0310 18:55:52.038219 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c76065df-dec9-4b14-bd49-8e2d134bf53f-utilities\") pod \"c76065df-dec9-4b14-bd49-8e2d134bf53f\" (UID: \"c76065df-dec9-4b14-bd49-8e2d134bf53f\") "
Mar 10 18:55:52 crc kubenswrapper[4861]: I0310 18:55:52.039028 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c76065df-dec9-4b14-bd49-8e2d134bf53f-utilities" (OuterVolumeSpecName: "utilities") pod "c76065df-dec9-4b14-bd49-8e2d134bf53f" (UID: "c76065df-dec9-4b14-bd49-8e2d134bf53f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 10 18:55:52 crc kubenswrapper[4861]: I0310 18:55:52.039484 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/edc16f3e-454b-4167-9d26-c50bba23281e-utilities" (OuterVolumeSpecName: "utilities") pod "edc16f3e-454b-4167-9d26-c50bba23281e" (UID: "edc16f3e-454b-4167-9d26-c50bba23281e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 10 18:55:52 crc kubenswrapper[4861]: I0310 18:55:52.043194 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/edc16f3e-454b-4167-9d26-c50bba23281e-kube-api-access-lghk9" (OuterVolumeSpecName: "kube-api-access-lghk9") pod "edc16f3e-454b-4167-9d26-c50bba23281e" (UID: "edc16f3e-454b-4167-9d26-c50bba23281e"). InnerVolumeSpecName "kube-api-access-lghk9". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 10 18:55:52 crc kubenswrapper[4861]: I0310 18:55:52.043362 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c76065df-dec9-4b14-bd49-8e2d134bf53f-kube-api-access-fckjm" (OuterVolumeSpecName: "kube-api-access-fckjm") pod "c76065df-dec9-4b14-bd49-8e2d134bf53f" (UID: "c76065df-dec9-4b14-bd49-8e2d134bf53f"). InnerVolumeSpecName "kube-api-access-fckjm". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 10 18:55:52 crc kubenswrapper[4861]: I0310 18:55:52.096023 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c76065df-dec9-4b14-bd49-8e2d134bf53f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c76065df-dec9-4b14-bd49-8e2d134bf53f" (UID: "c76065df-dec9-4b14-bd49-8e2d134bf53f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 10 18:55:52 crc kubenswrapper[4861]: I0310 18:55:52.096848 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/edc16f3e-454b-4167-9d26-c50bba23281e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "edc16f3e-454b-4167-9d26-c50bba23281e" (UID: "edc16f3e-454b-4167-9d26-c50bba23281e"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 10 18:55:52 crc kubenswrapper[4861]: I0310 18:55:52.139673 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/876b4458-98a1-4dc2-af8a-3390a56cad59-catalog-content\") pod \"876b4458-98a1-4dc2-af8a-3390a56cad59\" (UID: \"876b4458-98a1-4dc2-af8a-3390a56cad59\") "
Mar 10 18:55:52 crc kubenswrapper[4861]: I0310 18:55:52.139769 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/876b4458-98a1-4dc2-af8a-3390a56cad59-utilities\") pod \"876b4458-98a1-4dc2-af8a-3390a56cad59\" (UID: \"876b4458-98a1-4dc2-af8a-3390a56cad59\") "
Mar 10 18:55:52 crc kubenswrapper[4861]: I0310 18:55:52.139835 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5fcmd\" (UniqueName: \"kubernetes.io/projected/876b4458-98a1-4dc2-af8a-3390a56cad59-kube-api-access-5fcmd\") pod \"876b4458-98a1-4dc2-af8a-3390a56cad59\" (UID: \"876b4458-98a1-4dc2-af8a-3390a56cad59\") "
Mar 10 18:55:52 crc kubenswrapper[4861]: I0310 18:55:52.140212 4861 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/edc16f3e-454b-4167-9d26-c50bba23281e-catalog-content\") on node \"crc\" DevicePath \"\""
Mar 10 18:55:52 crc kubenswrapper[4861]: I0310 18:55:52.140269 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lghk9\" (UniqueName: \"kubernetes.io/projected/edc16f3e-454b-4167-9d26-c50bba23281e-kube-api-access-lghk9\") on node \"crc\" DevicePath \"\""
Mar 10 18:55:52 crc kubenswrapper[4861]: I0310 18:55:52.140288 4861 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c76065df-dec9-4b14-bd49-8e2d134bf53f-catalog-content\") on node \"crc\" DevicePath \"\""
Mar 10 18:55:52 crc kubenswrapper[4861]: I0310 18:55:52.140301 4861 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/edc16f3e-454b-4167-9d26-c50bba23281e-utilities\") on node \"crc\" DevicePath \"\""
Mar 10 18:55:52 crc kubenswrapper[4861]: I0310 18:55:52.140314 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fckjm\" (UniqueName: \"kubernetes.io/projected/c76065df-dec9-4b14-bd49-8e2d134bf53f-kube-api-access-fckjm\") on node \"crc\" DevicePath \"\""
Mar 10 18:55:52 crc kubenswrapper[4861]: I0310 18:55:52.140326 4861 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c76065df-dec9-4b14-bd49-8e2d134bf53f-utilities\") on node \"crc\" DevicePath \"\""
Mar 10 18:55:52 crc kubenswrapper[4861]: I0310 18:55:52.141597 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/876b4458-98a1-4dc2-af8a-3390a56cad59-utilities" (OuterVolumeSpecName: "utilities") pod "876b4458-98a1-4dc2-af8a-3390a56cad59" (UID: "876b4458-98a1-4dc2-af8a-3390a56cad59"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 10 18:55:52 crc kubenswrapper[4861]: I0310 18:55:52.143505 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/876b4458-98a1-4dc2-af8a-3390a56cad59-kube-api-access-5fcmd" (OuterVolumeSpecName: "kube-api-access-5fcmd") pod "876b4458-98a1-4dc2-af8a-3390a56cad59" (UID: "876b4458-98a1-4dc2-af8a-3390a56cad59"). InnerVolumeSpecName "kube-api-access-5fcmd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 10 18:55:52 crc kubenswrapper[4861]: I0310 18:55:52.209819 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-q9slc" event={"ID":"de5224d4-6cff-4974-a40c-9d5d44ee55fb","Type":"ContainerStarted","Data":"46cab226cd62e3cfad97582af1bc1cfcbee1d5230836b7bd00060287c02ec2e6"}
Mar 10 18:55:52 crc kubenswrapper[4861]: I0310 18:55:52.209910 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-q9slc" event={"ID":"de5224d4-6cff-4974-a40c-9d5d44ee55fb","Type":"ContainerStarted","Data":"3a5b6194f12cee9cf6d8d25632853d5efac8b3f02d0befb1973c1e193901d4cb"}
Mar 10 18:55:52 crc kubenswrapper[4861]: I0310 18:55:52.209941 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-q9slc"
Mar 10 18:55:52 crc kubenswrapper[4861]: I0310 18:55:52.211795 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-7qnc2" event={"ID":"8f1b1590-e261-4e1f-9427-039f5a9b3db7","Type":"ContainerDied","Data":"d6f7cac456be6dc86f1e9f71fa76263c16ddab968655d20bc54637796fee9de1"}
Mar 10 18:55:52 crc kubenswrapper[4861]: I0310 18:55:52.211835 4861 scope.go:117] "RemoveContainer" containerID="a9e4b40108ac235c7db1060fb2692561d1a8668a2d97b229aca23bef36731521"
Mar 10 18:55:52 crc kubenswrapper[4861]: I0310 18:55:52.211888 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-7qnc2"
Mar 10 18:55:52 crc kubenswrapper[4861]: I0310 18:55:52.213118 4861 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-q9slc container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.76:8080/healthz\": dial tcp 10.217.0.76:8080: connect: connection refused" start-of-body=
Mar 10 18:55:52 crc kubenswrapper[4861]: I0310 18:55:52.213268 4861 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-q9slc" podUID="de5224d4-6cff-4974-a40c-9d5d44ee55fb" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.76:8080/healthz\": dial tcp 10.217.0.76:8080: connect: connection refused"
Mar 10 18:55:52 crc kubenswrapper[4861]: I0310 18:55:52.218847 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-646bt" event={"ID":"1e1c83ef-91ae-4931-8e31-32890189bb47","Type":"ContainerDied","Data":"05f41fc4b765d6669ab51c308552e667e2296cdda8f6f72f2cc4eda39206381c"}
Mar 10 18:55:52 crc kubenswrapper[4861]: I0310 18:55:52.218877 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-646bt"
Mar 10 18:55:52 crc kubenswrapper[4861]: I0310 18:55:52.224176 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hw926" event={"ID":"c76065df-dec9-4b14-bd49-8e2d134bf53f","Type":"ContainerDied","Data":"7d9395a664bc150c221961fb873e407c7b5b8d7dfd3535666c231d78ea285070"}
Mar 10 18:55:52 crc kubenswrapper[4861]: I0310 18:55:52.224276 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-hw926"
Mar 10 18:55:52 crc kubenswrapper[4861]: I0310 18:55:52.236539 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-q2hgk" event={"ID":"876b4458-98a1-4dc2-af8a-3390a56cad59","Type":"ContainerDied","Data":"69850f8c203930376fe471512330595032ea53e51de8e7d37c7eed5dc885f28e"}
Mar 10 18:55:52 crc kubenswrapper[4861]: I0310 18:55:52.237275 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-q2hgk"
Mar 10 18:55:52 crc kubenswrapper[4861]: I0310 18:55:52.238008 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-q9slc" podStartSLOduration=2.2379918930000002 podStartE2EDuration="2.237991893s" podCreationTimestamp="2026-03-10 18:55:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-10 18:55:52.234143202 +0000 UTC m=+495.997579172" watchObservedRunningTime="2026-03-10 18:55:52.237991893 +0000 UTC m=+496.001427843"
Mar 10 18:55:52 crc kubenswrapper[4861]: I0310 18:55:52.241235 4861 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/876b4458-98a1-4dc2-af8a-3390a56cad59-utilities\") on node \"crc\" DevicePath \"\""
Mar 10 18:55:52 crc kubenswrapper[4861]: I0310 18:55:52.241268 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5fcmd\" (UniqueName: \"kubernetes.io/projected/876b4458-98a1-4dc2-af8a-3390a56cad59-kube-api-access-5fcmd\") on node \"crc\" DevicePath \"\""
Mar 10 18:55:52 crc kubenswrapper[4861]: I0310 18:55:52.244428 4861 scope.go:117] "RemoveContainer" containerID="4e12b40346a7176f84ae4be91f0b485eaab5f60de20461b9e51f58cd79f06d05"
Mar 10 18:55:52 crc kubenswrapper[4861]: I0310 18:55:52.246029 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pmtqc" event={"ID":"edc16f3e-454b-4167-9d26-c50bba23281e","Type":"ContainerDied","Data":"52b0fdacaeceadab4c2d4e75b53c01b9be4de7265f92679870598947087ff619"}
Mar 10 18:55:52 crc kubenswrapper[4861]: I0310 18:55:52.246082 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-pmtqc"
Mar 10 18:55:52 crc kubenswrapper[4861]: I0310 18:55:52.263339 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/876b4458-98a1-4dc2-af8a-3390a56cad59-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "876b4458-98a1-4dc2-af8a-3390a56cad59" (UID: "876b4458-98a1-4dc2-af8a-3390a56cad59"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 10 18:55:52 crc kubenswrapper[4861]: I0310 18:55:52.266642 4861 scope.go:117] "RemoveContainer" containerID="31bab73fd7f08e1e1b433ea5699ceaef28f723caa8436e845e3193fb5750f65e"
Mar 10 18:55:52 crc kubenswrapper[4861]: I0310 18:55:52.278554 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-7qnc2"]
Mar 10 18:55:52 crc kubenswrapper[4861]: I0310 18:55:52.281404 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-7qnc2"]
Mar 10 18:55:52 crc kubenswrapper[4861]: I0310 18:55:52.296192 4861 scope.go:117] "RemoveContainer" containerID="2d4a97493728b50e837560de824592b36d804dde9a445f084d9110824d2ea31e"
Mar 10 18:55:52 crc kubenswrapper[4861]: I0310 18:55:52.308624 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-646bt"]
Mar 10 18:55:52 crc kubenswrapper[4861]: I0310 18:55:52.310158 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-646bt"]
Mar 10 18:55:52 crc kubenswrapper[4861]: I0310 18:55:52.314755 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-hw926"]
Mar 10 18:55:52 crc kubenswrapper[4861]: I0310 18:55:52.322807 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-hw926"]
Mar 10 18:55:52 crc kubenswrapper[4861]: I0310 18:55:52.326722 4861 scope.go:117] "RemoveContainer" containerID="127551d9904d53fc28ea6d014a574f56d6e5c500474fdd4ac5795d712fe8e5f4"
Mar 10 18:55:52 crc kubenswrapper[4861]: I0310 18:55:52.332727 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-pmtqc"]
Mar 10 18:55:52 crc kubenswrapper[4861]: I0310 18:55:52.344074 4861 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/876b4458-98a1-4dc2-af8a-3390a56cad59-catalog-content\") on node \"crc\" DevicePath \"\""
Mar 10 18:55:52 crc kubenswrapper[4861]: I0310 18:55:52.346459 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-pmtqc"]
Mar 10 18:55:52 crc kubenswrapper[4861]: I0310 18:55:52.353044 4861 scope.go:117] "RemoveContainer" containerID="af3936d6228941727c3c59b023c3567fd3f97fc0f054efce14349d157c9836a6"
Mar 10 18:55:52 crc kubenswrapper[4861]: I0310 18:55:52.406574 4861 scope.go:117] "RemoveContainer" containerID="34c81fcffd6f969071efa0454ddfce425949263457655ecc1aa8a344e6f5bebc"
Mar 10 18:55:52 crc kubenswrapper[4861]: I0310 18:55:52.422051 4861 scope.go:117] "RemoveContainer" containerID="627702c408d75e5dc7cd60f67e6f39ea4d670ac03cdc4b5c51af4c09bcc5555b"
Mar 10 18:55:52 crc kubenswrapper[4861]: I0310 18:55:52.435905 4861 scope.go:117] "RemoveContainer" containerID="03c2838958e0d91e1f377ee1e41caac9c02353f685a8053036d69f893f2615b2"
Mar 10 18:55:52 crc kubenswrapper[4861]: I0310 18:55:52.453125 4861 scope.go:117] "RemoveContainer" containerID="05e72fc328cfcbce1a172ab772d29a6a63d39eecef2b19be5b91eddfda5ce588"
Mar 10 18:55:52 crc kubenswrapper[4861]: I0310 18:55:52.471513 4861 scope.go:117] "RemoveContainer" containerID="8228718be0002a5ecc4ba8dbaec34283f30812431d34664a126be79d821b5589"
Mar 10 18:55:52 crc kubenswrapper[4861]: I0310 18:55:52.485785 4861 scope.go:117] "RemoveContainer" containerID="4385ded0c7aaf97a76f7c00bbf95f00a0760df7cfe521c4a035b891175fc9a04"
Mar 10 18:55:52 crc kubenswrapper[4861]: I0310 18:55:52.497900 4861 scope.go:117] "RemoveContainer" containerID="e61feebc54b69aa9c495d4f493861d28e49d38ce199aefe4ec38781afe5e016b"
Mar 10 18:55:52 crc kubenswrapper[4861]: I0310 18:55:52.572964 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-q2hgk"]
Mar 10 18:55:52 crc kubenswrapper[4861]: I0310 18:55:52.575876 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-q2hgk"]
Mar 10 18:55:52 crc kubenswrapper[4861]: I0310 18:55:52.941632 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-dtm67"]
Mar 10 18:55:52 crc kubenswrapper[4861]: E0310 18:55:52.941982 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8f1b1590-e261-4e1f-9427-039f5a9b3db7" containerName="marketplace-operator"
Mar 10 18:55:52 crc kubenswrapper[4861]: I0310 18:55:52.942003 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="8f1b1590-e261-4e1f-9427-039f5a9b3db7" containerName="marketplace-operator"
Mar 10 18:55:52 crc kubenswrapper[4861]: E0310 18:55:52.942025 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="edc16f3e-454b-4167-9d26-c50bba23281e" containerName="extract-utilities"
Mar 10 18:55:52 crc kubenswrapper[4861]: I0310 18:55:52.942038 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="edc16f3e-454b-4167-9d26-c50bba23281e" containerName="extract-utilities"
Mar 10 18:55:52 crc kubenswrapper[4861]: E0310 18:55:52.942054 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c76065df-dec9-4b14-bd49-8e2d134bf53f" containerName="extract-content"
Mar 10 18:55:52 crc kubenswrapper[4861]: I0310 18:55:52.942067 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="c76065df-dec9-4b14-bd49-8e2d134bf53f" containerName="extract-content"
Mar 10 18:55:52 crc kubenswrapper[4861]: E0310 18:55:52.942088 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="edc16f3e-454b-4167-9d26-c50bba23281e" containerName="extract-content"
Mar 10 18:55:52 crc kubenswrapper[4861]: I0310 18:55:52.942099 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="edc16f3e-454b-4167-9d26-c50bba23281e" containerName="extract-content"
Mar 10 18:55:52 crc kubenswrapper[4861]: E0310 18:55:52.942114 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8f1b1590-e261-4e1f-9427-039f5a9b3db7" containerName="marketplace-operator"
Mar 10 18:55:52 crc kubenswrapper[4861]: I0310 18:55:52.942126 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="8f1b1590-e261-4e1f-9427-039f5a9b3db7" containerName="marketplace-operator"
Mar 10 18:55:52 crc kubenswrapper[4861]: E0310 18:55:52.942148 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="876b4458-98a1-4dc2-af8a-3390a56cad59" containerName="extract-utilities"
Mar 10 18:55:52 crc kubenswrapper[4861]: I0310 18:55:52.942159 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="876b4458-98a1-4dc2-af8a-3390a56cad59" containerName="extract-utilities"
Mar 10 18:55:52 crc kubenswrapper[4861]: E0310 18:55:52.942180 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c76065df-dec9-4b14-bd49-8e2d134bf53f" containerName="registry-server"
Mar 10 18:55:52 crc kubenswrapper[4861]: I0310 18:55:52.942192 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="c76065df-dec9-4b14-bd49-8e2d134bf53f" containerName="registry-server"
Mar 10 18:55:52 crc kubenswrapper[4861]: E0310 18:55:52.942206 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="876b4458-98a1-4dc2-af8a-3390a56cad59" containerName="registry-server"
Mar 10 18:55:52 crc kubenswrapper[4861]: I0310 18:55:52.942219 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="876b4458-98a1-4dc2-af8a-3390a56cad59" containerName="registry-server"
Mar 10 18:55:52 crc kubenswrapper[4861]: E0310 18:55:52.942234 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1e1c83ef-91ae-4931-8e31-32890189bb47" containerName="extract-utilities"
Mar 10 18:55:52 crc kubenswrapper[4861]: I0310 18:55:52.942245 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="1e1c83ef-91ae-4931-8e31-32890189bb47" containerName="extract-utilities"
Mar 10 18:55:52 crc kubenswrapper[4861]: E0310 18:55:52.942260 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1e1c83ef-91ae-4931-8e31-32890189bb47" containerName="registry-server"
Mar 10 18:55:52 crc kubenswrapper[4861]: I0310 18:55:52.942272 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="1e1c83ef-91ae-4931-8e31-32890189bb47" containerName="registry-server"
Mar 10 18:55:52 crc kubenswrapper[4861]: E0310 18:55:52.942286 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c76065df-dec9-4b14-bd49-8e2d134bf53f" containerName="extract-utilities"
Mar 10 18:55:52 crc kubenswrapper[4861]: I0310 18:55:52.942297 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="c76065df-dec9-4b14-bd49-8e2d134bf53f" containerName="extract-utilities"
Mar 10 18:55:52 crc kubenswrapper[4861]: E0310 18:55:52.942314 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1e1c83ef-91ae-4931-8e31-32890189bb47" containerName="extract-content"
Mar 10 18:55:52 crc kubenswrapper[4861]: I0310 18:55:52.942326 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="1e1c83ef-91ae-4931-8e31-32890189bb47" containerName="extract-content"
Mar 10 18:55:52 crc kubenswrapper[4861]: E0310 18:55:52.942341 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="876b4458-98a1-4dc2-af8a-3390a56cad59" containerName="extract-content"
Mar 10 18:55:52 crc kubenswrapper[4861]: I0310 18:55:52.942352 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="876b4458-98a1-4dc2-af8a-3390a56cad59" containerName="extract-content"
Mar 10 18:55:52 crc kubenswrapper[4861]: E0310 18:55:52.942366 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="edc16f3e-454b-4167-9d26-c50bba23281e" containerName="registry-server"
Mar 10 18:55:52 crc kubenswrapper[4861]: I0310 18:55:52.942378 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="edc16f3e-454b-4167-9d26-c50bba23281e" containerName="registry-server"
Mar 10 18:55:52 crc kubenswrapper[4861]: I0310 18:55:52.942527 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="c76065df-dec9-4b14-bd49-8e2d134bf53f" containerName="registry-server"
Mar 10 18:55:52 crc kubenswrapper[4861]: I0310 18:55:52.942548 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="1e1c83ef-91ae-4931-8e31-32890189bb47" containerName="registry-server"
Mar 10 18:55:52 crc kubenswrapper[4861]: I0310 18:55:52.942575 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="876b4458-98a1-4dc2-af8a-3390a56cad59" containerName="registry-server"
Mar 10 18:55:52 crc kubenswrapper[4861]: I0310 18:55:52.942594 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="8f1b1590-e261-4e1f-9427-039f5a9b3db7" containerName="marketplace-operator"
Mar 10 18:55:52 crc kubenswrapper[4861]: I0310 18:55:52.942619 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="edc16f3e-454b-4167-9d26-c50bba23281e" containerName="registry-server"
Mar 10 18:55:52 crc kubenswrapper[4861]: I0310 18:55:52.942636 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="8f1b1590-e261-4e1f-9427-039f5a9b3db7" containerName="marketplace-operator"
Mar 10 18:55:52 crc kubenswrapper[4861]: I0310 18:55:52.943959 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-dtm67"
Mar 10 18:55:52 crc kubenswrapper[4861]: I0310 18:55:52.946115 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb"
Mar 10 18:55:52 crc kubenswrapper[4861]: I0310 18:55:52.975107 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1e1c83ef-91ae-4931-8e31-32890189bb47" path="/var/lib/kubelet/pods/1e1c83ef-91ae-4931-8e31-32890189bb47/volumes"
Mar 10 18:55:52 crc kubenswrapper[4861]: I0310 18:55:52.976284 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="876b4458-98a1-4dc2-af8a-3390a56cad59" path="/var/lib/kubelet/pods/876b4458-98a1-4dc2-af8a-3390a56cad59/volumes"
Mar 10 18:55:52 crc kubenswrapper[4861]: I0310 18:55:52.977537 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f1b1590-e261-4e1f-9427-039f5a9b3db7" path="/var/lib/kubelet/pods/8f1b1590-e261-4e1f-9427-039f5a9b3db7/volumes"
Mar 10 18:55:52 crc kubenswrapper[4861]: I0310 18:55:52.979315 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c76065df-dec9-4b14-bd49-8e2d134bf53f" path="/var/lib/kubelet/pods/c76065df-dec9-4b14-bd49-8e2d134bf53f/volumes"
Mar 10 18:55:52 crc kubenswrapper[4861]: I0310 18:55:52.980448 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="edc16f3e-454b-4167-9d26-c50bba23281e" path="/var/lib/kubelet/pods/edc16f3e-454b-4167-9d26-c50bba23281e/volumes"
Mar 10 18:55:52 crc kubenswrapper[4861]: I0310 18:55:52.982251 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-dtm67"]
Mar 10 18:55:53 crc kubenswrapper[4861]: I0310 18:55:53.067159 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9dc81a19-65da-4af1-aff0-a0d38434d370-catalog-content\") pod \"redhat-marketplace-dtm67\" (UID: \"9dc81a19-65da-4af1-aff0-a0d38434d370\") " pod="openshift-marketplace/redhat-marketplace-dtm67"
Mar 10 18:55:53 crc kubenswrapper[4861]: I0310 18:55:53.067227 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9dc81a19-65da-4af1-aff0-a0d38434d370-utilities\") pod \"redhat-marketplace-dtm67\" (UID: \"9dc81a19-65da-4af1-aff0-a0d38434d370\") " pod="openshift-marketplace/redhat-marketplace-dtm67"
Mar 10 18:55:53 crc kubenswrapper[4861]: I0310 18:55:53.067254 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-954nw\" (UniqueName: \"kubernetes.io/projected/9dc81a19-65da-4af1-aff0-a0d38434d370-kube-api-access-954nw\") pod \"redhat-marketplace-dtm67\" (UID: \"9dc81a19-65da-4af1-aff0-a0d38434d370\") " pod="openshift-marketplace/redhat-marketplace-dtm67"
Mar 10 18:55:53 crc kubenswrapper[4861]: I0310 18:55:53.144086 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-kmxlp"]
Mar 10 18:55:53 crc kubenswrapper[4861]: I0310 18:55:53.146025 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-kmxlp"
Mar 10 18:55:53 crc kubenswrapper[4861]: I0310 18:55:53.148816 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g"
Mar 10 18:55:53 crc kubenswrapper[4861]: I0310 18:55:53.151170 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-kmxlp"]
Mar 10 18:55:53 crc kubenswrapper[4861]: I0310 18:55:53.168416 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9dc81a19-65da-4af1-aff0-a0d38434d370-catalog-content\") pod \"redhat-marketplace-dtm67\" (UID: \"9dc81a19-65da-4af1-aff0-a0d38434d370\") " pod="openshift-marketplace/redhat-marketplace-dtm67"
Mar 10 18:55:53 crc kubenswrapper[4861]: I0310 18:55:53.168504 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9dc81a19-65da-4af1-aff0-a0d38434d370-utilities\") pod \"redhat-marketplace-dtm67\" (UID: \"9dc81a19-65da-4af1-aff0-a0d38434d370\") " pod="openshift-marketplace/redhat-marketplace-dtm67"
Mar 10 18:55:53 crc kubenswrapper[4861]: I0310 18:55:53.168544 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-954nw\" (UniqueName: \"kubernetes.io/projected/9dc81a19-65da-4af1-aff0-a0d38434d370-kube-api-access-954nw\") pod \"redhat-marketplace-dtm67\" (UID: \"9dc81a19-65da-4af1-aff0-a0d38434d370\") " pod="openshift-marketplace/redhat-marketplace-dtm67"
Mar 10 18:55:53 crc kubenswrapper[4861]: I0310 18:55:53.169050 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9dc81a19-65da-4af1-aff0-a0d38434d370-catalog-content\") pod \"redhat-marketplace-dtm67\" (UID: \"9dc81a19-65da-4af1-aff0-a0d38434d370\") " pod="openshift-marketplace/redhat-marketplace-dtm67"
Mar 10 18:55:53 crc kubenswrapper[4861]: I0310 18:55:53.169335 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9dc81a19-65da-4af1-aff0-a0d38434d370-utilities\") pod \"redhat-marketplace-dtm67\" (UID: \"9dc81a19-65da-4af1-aff0-a0d38434d370\") " pod="openshift-marketplace/redhat-marketplace-dtm67"
Mar 10 18:55:53 crc kubenswrapper[4861]: I0310 18:55:53.196362 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-954nw\" (UniqueName: \"kubernetes.io/projected/9dc81a19-65da-4af1-aff0-a0d38434d370-kube-api-access-954nw\") pod \"redhat-marketplace-dtm67\" (UID: \"9dc81a19-65da-4af1-aff0-a0d38434d370\") " pod="openshift-marketplace/redhat-marketplace-dtm67"
Mar 10 18:55:53 crc kubenswrapper[4861]: I0310 18:55:53.259969 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-q9slc"
Mar 10 18:55:53 crc kubenswrapper[4861]: I0310 18:55:53.269599 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2ggvz\" (UniqueName: \"kubernetes.io/projected/2354bfd0-6f1e-47c3-b79f-614d201635e8-kube-api-access-2ggvz\") pod \"certified-operators-kmxlp\" (UID: \"2354bfd0-6f1e-47c3-b79f-614d201635e8\") " pod="openshift-marketplace/certified-operators-kmxlp"
Mar 10 18:55:53 crc kubenswrapper[4861]: I0310 18:55:53.269676 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2354bfd0-6f1e-47c3-b79f-614d201635e8-catalog-content\") pod \"certified-operators-kmxlp\" (UID: \"2354bfd0-6f1e-47c3-b79f-614d201635e8\") " pod="openshift-marketplace/certified-operators-kmxlp"
Mar 10 18:55:53 crc kubenswrapper[4861]: I0310 18:55:53.269749 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2354bfd0-6f1e-47c3-b79f-614d201635e8-utilities\") pod \"certified-operators-kmxlp\" (UID: \"2354bfd0-6f1e-47c3-b79f-614d201635e8\") " pod="openshift-marketplace/certified-operators-kmxlp"
Mar 10 18:55:53 crc kubenswrapper[4861]: I0310 18:55:53.271077 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-dtm67"
Mar 10 18:55:53 crc kubenswrapper[4861]: I0310 18:55:53.371410 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2354bfd0-6f1e-47c3-b79f-614d201635e8-utilities\") pod \"certified-operators-kmxlp\" (UID: \"2354bfd0-6f1e-47c3-b79f-614d201635e8\") " pod="openshift-marketplace/certified-operators-kmxlp"
Mar 10 18:55:53 crc kubenswrapper[4861]: I0310 18:55:53.371475 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2ggvz\" (UniqueName: \"kubernetes.io/projected/2354bfd0-6f1e-47c3-b79f-614d201635e8-kube-api-access-2ggvz\") pod \"certified-operators-kmxlp\" (UID: \"2354bfd0-6f1e-47c3-b79f-614d201635e8\") " pod="openshift-marketplace/certified-operators-kmxlp"
Mar 10 18:55:53 crc kubenswrapper[4861]: I0310 18:55:53.371935 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2354bfd0-6f1e-47c3-b79f-614d201635e8-catalog-content\") pod \"certified-operators-kmxlp\" (UID: \"2354bfd0-6f1e-47c3-b79f-614d201635e8\") " pod="openshift-marketplace/certified-operators-kmxlp"
Mar 10 18:55:53 crc kubenswrapper[4861]: I0310 18:55:53.373088 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2354bfd0-6f1e-47c3-b79f-614d201635e8-utilities\") pod \"certified-operators-kmxlp\" (UID: \"2354bfd0-6f1e-47c3-b79f-614d201635e8\") " pod="openshift-marketplace/certified-operators-kmxlp"
Mar 10 18:55:53 crc kubenswrapper[4861]: I0310 18:55:53.373100 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2354bfd0-6f1e-47c3-b79f-614d201635e8-catalog-content\") pod \"certified-operators-kmxlp\" (UID: \"2354bfd0-6f1e-47c3-b79f-614d201635e8\") " pod="openshift-marketplace/certified-operators-kmxlp"
Mar 10 18:55:53 crc kubenswrapper[4861]: I0310 18:55:53.391614 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2ggvz\" (UniqueName: \"kubernetes.io/projected/2354bfd0-6f1e-47c3-b79f-614d201635e8-kube-api-access-2ggvz\") pod \"certified-operators-kmxlp\" (UID: \"2354bfd0-6f1e-47c3-b79f-614d201635e8\") " pod="openshift-marketplace/certified-operators-kmxlp"
Mar 10 18:55:53 crc kubenswrapper[4861]: I0310 18:55:53.467030 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-kmxlp"
Mar 10 18:55:55 crc kubenswrapper[4861]: I0310 18:55:53.717221 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-dtm67"]
Mar 10 18:55:55 crc kubenswrapper[4861]: W0310 18:55:53.718138 4861 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9dc81a19_65da_4af1_aff0_a0d38434d370.slice/crio-d7ad9c3d1e4c0e8ff943a790446dd13dc2f8d0c49871e6c6e6a92c7a1533ec19 WatchSource:0}: Error finding container d7ad9c3d1e4c0e8ff943a790446dd13dc2f8d0c49871e6c6e6a92c7a1533ec19: Status 404 returned error can't find the container with id d7ad9c3d1e4c0e8ff943a790446dd13dc2f8d0c49871e6c6e6a92c7a1533ec19
Mar 10 18:55:55 crc kubenswrapper[4861]: I0310 18:55:53.894988 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-kmxlp"]
Mar 10 18:55:55 crc kubenswrapper[4861]: W0310 18:55:53.932942 4861 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2354bfd0_6f1e_47c3_b79f_614d201635e8.slice/crio-51c29a2f595bc4a215ef29f1fdd0bb9b1ce7776fe454f404496b2a22c57202df WatchSource:0}: Error finding container 51c29a2f595bc4a215ef29f1fdd0bb9b1ce7776fe454f404496b2a22c57202df: Status 404 returned error can't find the container with id 51c29a2f595bc4a215ef29f1fdd0bb9b1ce7776fe454f404496b2a22c57202df
Mar 10 18:55:55 crc kubenswrapper[4861]: I0310 18:55:54.262030 4861 generic.go:334] "Generic (PLEG): container finished" podID="9dc81a19-65da-4af1-aff0-a0d38434d370" containerID="970e9afdb0bfbc8d57227695344e389e912dd238f89e57aa8ac7a93432d753d6" exitCode=0
Mar 10 18:55:55 crc kubenswrapper[4861]: I0310 18:55:54.262098 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dtm67"
event={"ID":"9dc81a19-65da-4af1-aff0-a0d38434d370","Type":"ContainerDied","Data":"970e9afdb0bfbc8d57227695344e389e912dd238f89e57aa8ac7a93432d753d6"} Mar 10 18:55:55 crc kubenswrapper[4861]: I0310 18:55:54.262124 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dtm67" event={"ID":"9dc81a19-65da-4af1-aff0-a0d38434d370","Type":"ContainerStarted","Data":"d7ad9c3d1e4c0e8ff943a790446dd13dc2f8d0c49871e6c6e6a92c7a1533ec19"} Mar 10 18:55:55 crc kubenswrapper[4861]: I0310 18:55:54.264342 4861 generic.go:334] "Generic (PLEG): container finished" podID="2354bfd0-6f1e-47c3-b79f-614d201635e8" containerID="f6dd9f5e2a85b9713913f8194e1af3117763cb9069e93f9bf4b5f3416a05a78e" exitCode=0 Mar 10 18:55:55 crc kubenswrapper[4861]: I0310 18:55:54.264416 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kmxlp" event={"ID":"2354bfd0-6f1e-47c3-b79f-614d201635e8","Type":"ContainerDied","Data":"f6dd9f5e2a85b9713913f8194e1af3117763cb9069e93f9bf4b5f3416a05a78e"} Mar 10 18:55:55 crc kubenswrapper[4861]: I0310 18:55:54.264470 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kmxlp" event={"ID":"2354bfd0-6f1e-47c3-b79f-614d201635e8","Type":"ContainerStarted","Data":"51c29a2f595bc4a215ef29f1fdd0bb9b1ce7776fe454f404496b2a22c57202df"} Mar 10 18:55:55 crc kubenswrapper[4861]: I0310 18:55:55.216463 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-j4q94" podUID="1c9f5f8c-64b8-4f10-999c-9cb2f24efef3" containerName="registry" containerID="cri-o://8960113ac5f9a4f46b83051767e3bdb6f82f18a0507b90a9e2292e59743123e5" gracePeriod=30 Mar 10 18:55:55 crc kubenswrapper[4861]: I0310 18:55:55.354474 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-d6bp2"] Mar 10 18:55:55 crc kubenswrapper[4861]: I0310 18:55:55.356081 4861 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-d6bp2" Mar 10 18:55:55 crc kubenswrapper[4861]: I0310 18:55:55.359051 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Mar 10 18:55:55 crc kubenswrapper[4861]: I0310 18:55:55.359676 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-d6bp2"] Mar 10 18:55:55 crc kubenswrapper[4861]: I0310 18:55:55.428044 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d044ee30-51eb-432a-8d77-dd87673bd190-utilities\") pod \"redhat-operators-d6bp2\" (UID: \"d044ee30-51eb-432a-8d77-dd87673bd190\") " pod="openshift-marketplace/redhat-operators-d6bp2" Mar 10 18:55:55 crc kubenswrapper[4861]: I0310 18:55:55.428117 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7kz4g\" (UniqueName: \"kubernetes.io/projected/d044ee30-51eb-432a-8d77-dd87673bd190-kube-api-access-7kz4g\") pod \"redhat-operators-d6bp2\" (UID: \"d044ee30-51eb-432a-8d77-dd87673bd190\") " pod="openshift-marketplace/redhat-operators-d6bp2" Mar 10 18:55:55 crc kubenswrapper[4861]: I0310 18:55:55.428332 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d044ee30-51eb-432a-8d77-dd87673bd190-catalog-content\") pod \"redhat-operators-d6bp2\" (UID: \"d044ee30-51eb-432a-8d77-dd87673bd190\") " pod="openshift-marketplace/redhat-operators-d6bp2" Mar 10 18:55:55 crc kubenswrapper[4861]: I0310 18:55:55.536620 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d044ee30-51eb-432a-8d77-dd87673bd190-utilities\") pod \"redhat-operators-d6bp2\" (UID: 
\"d044ee30-51eb-432a-8d77-dd87673bd190\") " pod="openshift-marketplace/redhat-operators-d6bp2" Mar 10 18:55:55 crc kubenswrapper[4861]: I0310 18:55:55.537128 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7kz4g\" (UniqueName: \"kubernetes.io/projected/d044ee30-51eb-432a-8d77-dd87673bd190-kube-api-access-7kz4g\") pod \"redhat-operators-d6bp2\" (UID: \"d044ee30-51eb-432a-8d77-dd87673bd190\") " pod="openshift-marketplace/redhat-operators-d6bp2" Mar 10 18:55:55 crc kubenswrapper[4861]: I0310 18:55:55.537276 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d044ee30-51eb-432a-8d77-dd87673bd190-catalog-content\") pod \"redhat-operators-d6bp2\" (UID: \"d044ee30-51eb-432a-8d77-dd87673bd190\") " pod="openshift-marketplace/redhat-operators-d6bp2" Mar 10 18:55:55 crc kubenswrapper[4861]: I0310 18:55:55.537540 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d044ee30-51eb-432a-8d77-dd87673bd190-utilities\") pod \"redhat-operators-d6bp2\" (UID: \"d044ee30-51eb-432a-8d77-dd87673bd190\") " pod="openshift-marketplace/redhat-operators-d6bp2" Mar 10 18:55:55 crc kubenswrapper[4861]: I0310 18:55:55.538550 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d044ee30-51eb-432a-8d77-dd87673bd190-catalog-content\") pod \"redhat-operators-d6bp2\" (UID: \"d044ee30-51eb-432a-8d77-dd87673bd190\") " pod="openshift-marketplace/redhat-operators-d6bp2" Mar 10 18:55:55 crc kubenswrapper[4861]: I0310 18:55:55.561789 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-qk6mz"] Mar 10 18:55:55 crc kubenswrapper[4861]: I0310 18:55:55.563573 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-qk6mz" Mar 10 18:55:55 crc kubenswrapper[4861]: I0310 18:55:55.569931 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-qk6mz"] Mar 10 18:55:55 crc kubenswrapper[4861]: I0310 18:55:55.570304 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Mar 10 18:55:55 crc kubenswrapper[4861]: I0310 18:55:55.580800 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7kz4g\" (UniqueName: \"kubernetes.io/projected/d044ee30-51eb-432a-8d77-dd87673bd190-kube-api-access-7kz4g\") pod \"redhat-operators-d6bp2\" (UID: \"d044ee30-51eb-432a-8d77-dd87673bd190\") " pod="openshift-marketplace/redhat-operators-d6bp2" Mar 10 18:55:55 crc kubenswrapper[4861]: I0310 18:55:55.638563 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cad2f2f8-05db-4def-ac2f-1bfbde174dfe-utilities\") pod \"community-operators-qk6mz\" (UID: \"cad2f2f8-05db-4def-ac2f-1bfbde174dfe\") " pod="openshift-marketplace/community-operators-qk6mz" Mar 10 18:55:55 crc kubenswrapper[4861]: I0310 18:55:55.739599 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cad2f2f8-05db-4def-ac2f-1bfbde174dfe-utilities\") pod \"community-operators-qk6mz\" (UID: \"cad2f2f8-05db-4def-ac2f-1bfbde174dfe\") " pod="openshift-marketplace/community-operators-qk6mz" Mar 10 18:55:55 crc kubenswrapper[4861]: I0310 18:55:55.739659 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wvb5z\" (UniqueName: \"kubernetes.io/projected/cad2f2f8-05db-4def-ac2f-1bfbde174dfe-kube-api-access-wvb5z\") pod \"community-operators-qk6mz\" (UID: \"cad2f2f8-05db-4def-ac2f-1bfbde174dfe\") 
" pod="openshift-marketplace/community-operators-qk6mz" Mar 10 18:55:55 crc kubenswrapper[4861]: I0310 18:55:55.739686 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cad2f2f8-05db-4def-ac2f-1bfbde174dfe-catalog-content\") pod \"community-operators-qk6mz\" (UID: \"cad2f2f8-05db-4def-ac2f-1bfbde174dfe\") " pod="openshift-marketplace/community-operators-qk6mz" Mar 10 18:55:55 crc kubenswrapper[4861]: I0310 18:55:55.740395 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cad2f2f8-05db-4def-ac2f-1bfbde174dfe-utilities\") pod \"community-operators-qk6mz\" (UID: \"cad2f2f8-05db-4def-ac2f-1bfbde174dfe\") " pod="openshift-marketplace/community-operators-qk6mz" Mar 10 18:55:55 crc kubenswrapper[4861]: I0310 18:55:55.782319 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-d6bp2" Mar 10 18:55:55 crc kubenswrapper[4861]: I0310 18:55:55.841458 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wvb5z\" (UniqueName: \"kubernetes.io/projected/cad2f2f8-05db-4def-ac2f-1bfbde174dfe-kube-api-access-wvb5z\") pod \"community-operators-qk6mz\" (UID: \"cad2f2f8-05db-4def-ac2f-1bfbde174dfe\") " pod="openshift-marketplace/community-operators-qk6mz" Mar 10 18:55:55 crc kubenswrapper[4861]: I0310 18:55:55.841515 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cad2f2f8-05db-4def-ac2f-1bfbde174dfe-catalog-content\") pod \"community-operators-qk6mz\" (UID: \"cad2f2f8-05db-4def-ac2f-1bfbde174dfe\") " pod="openshift-marketplace/community-operators-qk6mz" Mar 10 18:55:55 crc kubenswrapper[4861]: I0310 18:55:55.842206 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cad2f2f8-05db-4def-ac2f-1bfbde174dfe-catalog-content\") pod \"community-operators-qk6mz\" (UID: \"cad2f2f8-05db-4def-ac2f-1bfbde174dfe\") " pod="openshift-marketplace/community-operators-qk6mz" Mar 10 18:55:55 crc kubenswrapper[4861]: I0310 18:55:55.872286 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wvb5z\" (UniqueName: \"kubernetes.io/projected/cad2f2f8-05db-4def-ac2f-1bfbde174dfe-kube-api-access-wvb5z\") pod \"community-operators-qk6mz\" (UID: \"cad2f2f8-05db-4def-ac2f-1bfbde174dfe\") " pod="openshift-marketplace/community-operators-qk6mz" Mar 10 18:55:55 crc kubenswrapper[4861]: I0310 18:55:55.917066 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-qk6mz" Mar 10 18:55:56 crc kubenswrapper[4861]: I0310 18:55:56.280639 4861 generic.go:334] "Generic (PLEG): container finished" podID="1c9f5f8c-64b8-4f10-999c-9cb2f24efef3" containerID="8960113ac5f9a4f46b83051767e3bdb6f82f18a0507b90a9e2292e59743123e5" exitCode=0 Mar 10 18:55:56 crc kubenswrapper[4861]: I0310 18:55:56.280815 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-j4q94" event={"ID":"1c9f5f8c-64b8-4f10-999c-9cb2f24efef3","Type":"ContainerDied","Data":"8960113ac5f9a4f46b83051767e3bdb6f82f18a0507b90a9e2292e59743123e5"} Mar 10 18:55:56 crc kubenswrapper[4861]: I0310 18:55:56.283984 4861 generic.go:334] "Generic (PLEG): container finished" podID="2354bfd0-6f1e-47c3-b79f-614d201635e8" containerID="d5562b1f5200d5897527dd1bab352613ef6a9d34339f238148b8fa8a81446fef" exitCode=0 Mar 10 18:55:56 crc kubenswrapper[4861]: I0310 18:55:56.284035 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kmxlp" 
event={"ID":"2354bfd0-6f1e-47c3-b79f-614d201635e8","Type":"ContainerDied","Data":"d5562b1f5200d5897527dd1bab352613ef6a9d34339f238148b8fa8a81446fef"} Mar 10 18:55:56 crc kubenswrapper[4861]: I0310 18:55:56.328934 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-j4q94" Mar 10 18:55:56 crc kubenswrapper[4861]: I0310 18:55:56.391608 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-qk6mz"] Mar 10 18:55:56 crc kubenswrapper[4861]: I0310 18:55:56.455136 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/1c9f5f8c-64b8-4f10-999c-9cb2f24efef3-registry-certificates\") pod \"1c9f5f8c-64b8-4f10-999c-9cb2f24efef3\" (UID: \"1c9f5f8c-64b8-4f10-999c-9cb2f24efef3\") " Mar 10 18:55:56 crc kubenswrapper[4861]: I0310 18:55:56.455183 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/1c9f5f8c-64b8-4f10-999c-9cb2f24efef3-bound-sa-token\") pod \"1c9f5f8c-64b8-4f10-999c-9cb2f24efef3\" (UID: \"1c9f5f8c-64b8-4f10-999c-9cb2f24efef3\") " Mar 10 18:55:56 crc kubenswrapper[4861]: I0310 18:55:56.455201 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/1c9f5f8c-64b8-4f10-999c-9cb2f24efef3-trusted-ca\") pod \"1c9f5f8c-64b8-4f10-999c-9cb2f24efef3\" (UID: \"1c9f5f8c-64b8-4f10-999c-9cb2f24efef3\") " Mar 10 18:55:56 crc kubenswrapper[4861]: I0310 18:55:56.455296 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"1c9f5f8c-64b8-4f10-999c-9cb2f24efef3\" (UID: \"1c9f5f8c-64b8-4f10-999c-9cb2f24efef3\") " Mar 10 18:55:56 crc 
kubenswrapper[4861]: I0310 18:55:56.455344 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/1c9f5f8c-64b8-4f10-999c-9cb2f24efef3-ca-trust-extracted\") pod \"1c9f5f8c-64b8-4f10-999c-9cb2f24efef3\" (UID: \"1c9f5f8c-64b8-4f10-999c-9cb2f24efef3\") " Mar 10 18:55:56 crc kubenswrapper[4861]: I0310 18:55:56.455367 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/1c9f5f8c-64b8-4f10-999c-9cb2f24efef3-registry-tls\") pod \"1c9f5f8c-64b8-4f10-999c-9cb2f24efef3\" (UID: \"1c9f5f8c-64b8-4f10-999c-9cb2f24efef3\") " Mar 10 18:55:56 crc kubenswrapper[4861]: I0310 18:55:56.455400 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/1c9f5f8c-64b8-4f10-999c-9cb2f24efef3-installation-pull-secrets\") pod \"1c9f5f8c-64b8-4f10-999c-9cb2f24efef3\" (UID: \"1c9f5f8c-64b8-4f10-999c-9cb2f24efef3\") " Mar 10 18:55:56 crc kubenswrapper[4861]: I0310 18:55:56.455427 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5p7g9\" (UniqueName: \"kubernetes.io/projected/1c9f5f8c-64b8-4f10-999c-9cb2f24efef3-kube-api-access-5p7g9\") pod \"1c9f5f8c-64b8-4f10-999c-9cb2f24efef3\" (UID: \"1c9f5f8c-64b8-4f10-999c-9cb2f24efef3\") " Mar 10 18:55:56 crc kubenswrapper[4861]: I0310 18:55:56.456094 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1c9f5f8c-64b8-4f10-999c-9cb2f24efef3-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "1c9f5f8c-64b8-4f10-999c-9cb2f24efef3" (UID: "1c9f5f8c-64b8-4f10-999c-9cb2f24efef3"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 10 18:55:56 crc kubenswrapper[4861]: I0310 18:55:56.456340 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1c9f5f8c-64b8-4f10-999c-9cb2f24efef3-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "1c9f5f8c-64b8-4f10-999c-9cb2f24efef3" (UID: "1c9f5f8c-64b8-4f10-999c-9cb2f24efef3"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 10 18:55:56 crc kubenswrapper[4861]: I0310 18:55:56.462178 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1c9f5f8c-64b8-4f10-999c-9cb2f24efef3-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "1c9f5f8c-64b8-4f10-999c-9cb2f24efef3" (UID: "1c9f5f8c-64b8-4f10-999c-9cb2f24efef3"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 10 18:55:56 crc kubenswrapper[4861]: I0310 18:55:56.464484 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1c9f5f8c-64b8-4f10-999c-9cb2f24efef3-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "1c9f5f8c-64b8-4f10-999c-9cb2f24efef3" (UID: "1c9f5f8c-64b8-4f10-999c-9cb2f24efef3"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 10 18:55:56 crc kubenswrapper[4861]: I0310 18:55:56.465492 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1c9f5f8c-64b8-4f10-999c-9cb2f24efef3-kube-api-access-5p7g9" (OuterVolumeSpecName: "kube-api-access-5p7g9") pod "1c9f5f8c-64b8-4f10-999c-9cb2f24efef3" (UID: "1c9f5f8c-64b8-4f10-999c-9cb2f24efef3"). InnerVolumeSpecName "kube-api-access-5p7g9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 10 18:55:56 crc kubenswrapper[4861]: I0310 18:55:56.470254 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1c9f5f8c-64b8-4f10-999c-9cb2f24efef3-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "1c9f5f8c-64b8-4f10-999c-9cb2f24efef3" (UID: "1c9f5f8c-64b8-4f10-999c-9cb2f24efef3"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 10 18:55:56 crc kubenswrapper[4861]: I0310 18:55:56.482379 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1c9f5f8c-64b8-4f10-999c-9cb2f24efef3-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "1c9f5f8c-64b8-4f10-999c-9cb2f24efef3" (UID: "1c9f5f8c-64b8-4f10-999c-9cb2f24efef3"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 10 18:55:56 crc kubenswrapper[4861]: I0310 18:55:56.482597 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "1c9f5f8c-64b8-4f10-999c-9cb2f24efef3" (UID: "1c9f5f8c-64b8-4f10-999c-9cb2f24efef3"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Mar 10 18:55:56 crc kubenswrapper[4861]: I0310 18:55:56.520157 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-d6bp2"] Mar 10 18:55:56 crc kubenswrapper[4861]: W0310 18:55:56.527490 4861 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd044ee30_51eb_432a_8d77_dd87673bd190.slice/crio-95df7d91b8f2c169bbfa51fc9d8f7681970030d45e04583ea6dcc029141e6ce8 WatchSource:0}: Error finding container 95df7d91b8f2c169bbfa51fc9d8f7681970030d45e04583ea6dcc029141e6ce8: Status 404 returned error can't find the container with id 95df7d91b8f2c169bbfa51fc9d8f7681970030d45e04583ea6dcc029141e6ce8 Mar 10 18:55:56 crc kubenswrapper[4861]: I0310 18:55:56.556965 4861 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/1c9f5f8c-64b8-4f10-999c-9cb2f24efef3-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Mar 10 18:55:56 crc kubenswrapper[4861]: I0310 18:55:56.556996 4861 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/1c9f5f8c-64b8-4f10-999c-9cb2f24efef3-registry-tls\") on node \"crc\" DevicePath \"\"" Mar 10 18:55:56 crc kubenswrapper[4861]: I0310 18:55:56.557006 4861 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/1c9f5f8c-64b8-4f10-999c-9cb2f24efef3-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Mar 10 18:55:56 crc kubenswrapper[4861]: I0310 18:55:56.557015 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5p7g9\" (UniqueName: \"kubernetes.io/projected/1c9f5f8c-64b8-4f10-999c-9cb2f24efef3-kube-api-access-5p7g9\") on node \"crc\" DevicePath \"\"" Mar 10 18:55:56 crc kubenswrapper[4861]: I0310 18:55:56.557025 4861 reconciler_common.go:293] "Volume detached for volume 
\"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/1c9f5f8c-64b8-4f10-999c-9cb2f24efef3-registry-certificates\") on node \"crc\" DevicePath \"\"" Mar 10 18:55:56 crc kubenswrapper[4861]: I0310 18:55:56.557035 4861 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/1c9f5f8c-64b8-4f10-999c-9cb2f24efef3-bound-sa-token\") on node \"crc\" DevicePath \"\"" Mar 10 18:55:56 crc kubenswrapper[4861]: I0310 18:55:56.557043 4861 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/1c9f5f8c-64b8-4f10-999c-9cb2f24efef3-trusted-ca\") on node \"crc\" DevicePath \"\"" Mar 10 18:55:57 crc kubenswrapper[4861]: I0310 18:55:57.291652 4861 generic.go:334] "Generic (PLEG): container finished" podID="9dc81a19-65da-4af1-aff0-a0d38434d370" containerID="975ad84902c3f63194dcb607d6520bf09c2410578fbef6a77ebeb78a17ea05c0" exitCode=0 Mar 10 18:55:57 crc kubenswrapper[4861]: I0310 18:55:57.291763 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dtm67" event={"ID":"9dc81a19-65da-4af1-aff0-a0d38434d370","Type":"ContainerDied","Data":"975ad84902c3f63194dcb607d6520bf09c2410578fbef6a77ebeb78a17ea05c0"} Mar 10 18:55:57 crc kubenswrapper[4861]: I0310 18:55:57.296694 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kmxlp" event={"ID":"2354bfd0-6f1e-47c3-b79f-614d201635e8","Type":"ContainerStarted","Data":"a07f8d2132054c1212b863d8115d7ad54083131a95d2ded0f19b54e7f40fc521"} Mar 10 18:55:57 crc kubenswrapper[4861]: I0310 18:55:57.299382 4861 generic.go:334] "Generic (PLEG): container finished" podID="d044ee30-51eb-432a-8d77-dd87673bd190" containerID="d97b2be5a378a3e0c1ca1eb5f1cfe4e5b25c48432f194c3fea426cfdbcae2edb" exitCode=0 Mar 10 18:55:57 crc kubenswrapper[4861]: I0310 18:55:57.299484 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-d6bp2" event={"ID":"d044ee30-51eb-432a-8d77-dd87673bd190","Type":"ContainerDied","Data":"d97b2be5a378a3e0c1ca1eb5f1cfe4e5b25c48432f194c3fea426cfdbcae2edb"} Mar 10 18:55:57 crc kubenswrapper[4861]: I0310 18:55:57.299526 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-d6bp2" event={"ID":"d044ee30-51eb-432a-8d77-dd87673bd190","Type":"ContainerStarted","Data":"95df7d91b8f2c169bbfa51fc9d8f7681970030d45e04583ea6dcc029141e6ce8"} Mar 10 18:55:57 crc kubenswrapper[4861]: I0310 18:55:57.301255 4861 generic.go:334] "Generic (PLEG): container finished" podID="cad2f2f8-05db-4def-ac2f-1bfbde174dfe" containerID="e12e4571d2786ebcf63171ef7cbd174374ccd6ccdf26bae5ea125dbae44cdafd" exitCode=0 Mar 10 18:55:57 crc kubenswrapper[4861]: I0310 18:55:57.301414 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qk6mz" event={"ID":"cad2f2f8-05db-4def-ac2f-1bfbde174dfe","Type":"ContainerDied","Data":"e12e4571d2786ebcf63171ef7cbd174374ccd6ccdf26bae5ea125dbae44cdafd"} Mar 10 18:55:57 crc kubenswrapper[4861]: I0310 18:55:57.301451 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qk6mz" event={"ID":"cad2f2f8-05db-4def-ac2f-1bfbde174dfe","Type":"ContainerStarted","Data":"ce9fbbd0ed467d97958ae4306dfe30ff54ed71750e5734f52591e29446399fc8"} Mar 10 18:55:57 crc kubenswrapper[4861]: I0310 18:55:57.303551 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-j4q94" event={"ID":"1c9f5f8c-64b8-4f10-999c-9cb2f24efef3","Type":"ContainerDied","Data":"3dadf17fe8031e3fba8c016174fa055eb09887957f26ac211cd9427d9f49920a"} Mar 10 18:55:57 crc kubenswrapper[4861]: I0310 18:55:57.303625 4861 scope.go:117] "RemoveContainer" containerID="8960113ac5f9a4f46b83051767e3bdb6f82f18a0507b90a9e2292e59743123e5" Mar 10 18:55:57 crc kubenswrapper[4861]: I0310 18:55:57.303632 4861 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-j4q94" Mar 10 18:55:57 crc kubenswrapper[4861]: I0310 18:55:57.383705 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-kmxlp" podStartSLOduration=1.865031728 podStartE2EDuration="4.383689281s" podCreationTimestamp="2026-03-10 18:55:53 +0000 UTC" firstStartedPulling="2026-03-10 18:55:54.266584338 +0000 UTC m=+498.030020328" lastFinishedPulling="2026-03-10 18:55:56.785241921 +0000 UTC m=+500.548677881" observedRunningTime="2026-03-10 18:55:57.381251601 +0000 UTC m=+501.144687601" watchObservedRunningTime="2026-03-10 18:55:57.383689281 +0000 UTC m=+501.147125231" Mar 10 18:55:57 crc kubenswrapper[4861]: I0310 18:55:57.395774 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-j4q94"] Mar 10 18:55:57 crc kubenswrapper[4861]: I0310 18:55:57.401633 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-j4q94"] Mar 10 18:55:58 crc kubenswrapper[4861]: I0310 18:55:58.324618 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dtm67" event={"ID":"9dc81a19-65da-4af1-aff0-a0d38434d370","Type":"ContainerStarted","Data":"eb6d97b422fdd35d0cee3eeebe8df84e9ae24fa68f45945bc4bca66d5229d34a"} Mar 10 18:55:58 crc kubenswrapper[4861]: I0310 18:55:58.349068 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-dtm67" podStartSLOduration=2.641612362 podStartE2EDuration="6.349054159s" podCreationTimestamp="2026-03-10 18:55:52 +0000 UTC" firstStartedPulling="2026-03-10 18:55:54.263698894 +0000 UTC m=+498.027134854" lastFinishedPulling="2026-03-10 18:55:57.971140661 +0000 UTC m=+501.734576651" observedRunningTime="2026-03-10 18:55:58.34793613 +0000 UTC 
m=+502.111372100" watchObservedRunningTime="2026-03-10 18:55:58.349054159 +0000 UTC m=+502.112490119" Mar 10 18:55:58 crc kubenswrapper[4861]: I0310 18:55:58.965756 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1c9f5f8c-64b8-4f10-999c-9cb2f24efef3" path="/var/lib/kubelet/pods/1c9f5f8c-64b8-4f10-999c-9cb2f24efef3/volumes" Mar 10 18:55:59 crc kubenswrapper[4861]: I0310 18:55:59.334051 4861 generic.go:334] "Generic (PLEG): container finished" podID="d044ee30-51eb-432a-8d77-dd87673bd190" containerID="1389b8cbba33ad8746f9e1f780b0efff75d017e52bd62d630b34d8611edf6900" exitCode=0 Mar 10 18:55:59 crc kubenswrapper[4861]: I0310 18:55:59.334146 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-d6bp2" event={"ID":"d044ee30-51eb-432a-8d77-dd87673bd190","Type":"ContainerDied","Data":"1389b8cbba33ad8746f9e1f780b0efff75d017e52bd62d630b34d8611edf6900"} Mar 10 18:55:59 crc kubenswrapper[4861]: I0310 18:55:59.339508 4861 generic.go:334] "Generic (PLEG): container finished" podID="cad2f2f8-05db-4def-ac2f-1bfbde174dfe" containerID="a10038910d1b71707c0863f5044203ae24c1344a9417c040267752f93e90ad99" exitCode=0 Mar 10 18:55:59 crc kubenswrapper[4861]: I0310 18:55:59.339620 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qk6mz" event={"ID":"cad2f2f8-05db-4def-ac2f-1bfbde174dfe","Type":"ContainerDied","Data":"a10038910d1b71707c0863f5044203ae24c1344a9417c040267752f93e90ad99"} Mar 10 18:56:00 crc kubenswrapper[4861]: I0310 18:56:00.142824 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29552816-vffxm"] Mar 10 18:56:00 crc kubenswrapper[4861]: E0310 18:56:00.143234 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1c9f5f8c-64b8-4f10-999c-9cb2f24efef3" containerName="registry" Mar 10 18:56:00 crc kubenswrapper[4861]: I0310 18:56:00.143350 4861 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="1c9f5f8c-64b8-4f10-999c-9cb2f24efef3" containerName="registry" Mar 10 18:56:00 crc kubenswrapper[4861]: I0310 18:56:00.143541 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="1c9f5f8c-64b8-4f10-999c-9cb2f24efef3" containerName="registry" Mar 10 18:56:00 crc kubenswrapper[4861]: I0310 18:56:00.144024 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29552816-vffxm" Mar 10 18:56:00 crc kubenswrapper[4861]: I0310 18:56:00.146882 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 10 18:56:00 crc kubenswrapper[4861]: I0310 18:56:00.147142 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 10 18:56:00 crc kubenswrapper[4861]: I0310 18:56:00.148064 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-gfbj2" Mar 10 18:56:00 crc kubenswrapper[4861]: I0310 18:56:00.159670 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29552816-vffxm"] Mar 10 18:56:00 crc kubenswrapper[4861]: I0310 18:56:00.308479 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lngw6\" (UniqueName: \"kubernetes.io/projected/72c38f60-196f-4842-a5d8-cfc90ab46a88-kube-api-access-lngw6\") pod \"auto-csr-approver-29552816-vffxm\" (UID: \"72c38f60-196f-4842-a5d8-cfc90ab46a88\") " pod="openshift-infra/auto-csr-approver-29552816-vffxm" Mar 10 18:56:00 crc kubenswrapper[4861]: I0310 18:56:00.358629 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-d6bp2" event={"ID":"d044ee30-51eb-432a-8d77-dd87673bd190","Type":"ContainerStarted","Data":"23df8f3842c700ca27163ecfb7a047d43867f714ca4ecfd13a1a5f84d3fd567b"} Mar 10 18:56:00 crc kubenswrapper[4861]: I0310 18:56:00.361418 4861 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qk6mz" event={"ID":"cad2f2f8-05db-4def-ac2f-1bfbde174dfe","Type":"ContainerStarted","Data":"58ea3f8175f3e086c6fa45d7c17b754155601afaccf733c6b8529fc715b950cf"} Mar 10 18:56:00 crc kubenswrapper[4861]: I0310 18:56:00.387344 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-d6bp2" podStartSLOduration=2.882659301 podStartE2EDuration="5.387330501s" podCreationTimestamp="2026-03-10 18:55:55 +0000 UTC" firstStartedPulling="2026-03-10 18:55:57.301016696 +0000 UTC m=+501.064452696" lastFinishedPulling="2026-03-10 18:55:59.805687936 +0000 UTC m=+503.569123896" observedRunningTime="2026-03-10 18:56:00.385347105 +0000 UTC m=+504.148783065" watchObservedRunningTime="2026-03-10 18:56:00.387330501 +0000 UTC m=+504.150766461" Mar 10 18:56:00 crc kubenswrapper[4861]: I0310 18:56:00.406073 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-qk6mz" podStartSLOduration=2.741533426 podStartE2EDuration="5.406048262s" podCreationTimestamp="2026-03-10 18:55:55 +0000 UTC" firstStartedPulling="2026-03-10 18:55:57.302677579 +0000 UTC m=+501.066113539" lastFinishedPulling="2026-03-10 18:55:59.967192375 +0000 UTC m=+503.730628375" observedRunningTime="2026-03-10 18:56:00.399332658 +0000 UTC m=+504.162768628" watchObservedRunningTime="2026-03-10 18:56:00.406048262 +0000 UTC m=+504.169484252" Mar 10 18:56:00 crc kubenswrapper[4861]: I0310 18:56:00.410084 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lngw6\" (UniqueName: \"kubernetes.io/projected/72c38f60-196f-4842-a5d8-cfc90ab46a88-kube-api-access-lngw6\") pod \"auto-csr-approver-29552816-vffxm\" (UID: \"72c38f60-196f-4842-a5d8-cfc90ab46a88\") " pod="openshift-infra/auto-csr-approver-29552816-vffxm" Mar 10 18:56:00 crc kubenswrapper[4861]: I0310 18:56:00.427967 4861 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lngw6\" (UniqueName: \"kubernetes.io/projected/72c38f60-196f-4842-a5d8-cfc90ab46a88-kube-api-access-lngw6\") pod \"auto-csr-approver-29552816-vffxm\" (UID: \"72c38f60-196f-4842-a5d8-cfc90ab46a88\") " pod="openshift-infra/auto-csr-approver-29552816-vffxm" Mar 10 18:56:00 crc kubenswrapper[4861]: I0310 18:56:00.459060 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29552816-vffxm" Mar 10 18:56:00 crc kubenswrapper[4861]: I0310 18:56:00.867459 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29552816-vffxm"] Mar 10 18:56:00 crc kubenswrapper[4861]: W0310 18:56:00.871912 4861 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod72c38f60_196f_4842_a5d8_cfc90ab46a88.slice/crio-a94258008ef497361c8cf2380da90b3fb0e54af24368f9672281a0e819bc2d7e WatchSource:0}: Error finding container a94258008ef497361c8cf2380da90b3fb0e54af24368f9672281a0e819bc2d7e: Status 404 returned error can't find the container with id a94258008ef497361c8cf2380da90b3fb0e54af24368f9672281a0e819bc2d7e Mar 10 18:56:01 crc kubenswrapper[4861]: I0310 18:56:01.369361 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29552816-vffxm" event={"ID":"72c38f60-196f-4842-a5d8-cfc90ab46a88","Type":"ContainerStarted","Data":"a94258008ef497361c8cf2380da90b3fb0e54af24368f9672281a0e819bc2d7e"} Mar 10 18:56:03 crc kubenswrapper[4861]: I0310 18:56:03.271585 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-dtm67" Mar 10 18:56:03 crc kubenswrapper[4861]: I0310 18:56:03.273678 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-dtm67" Mar 10 18:56:03 crc kubenswrapper[4861]: I0310 
18:56:03.338702 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-dtm67" Mar 10 18:56:03 crc kubenswrapper[4861]: I0310 18:56:03.385287 4861 generic.go:334] "Generic (PLEG): container finished" podID="72c38f60-196f-4842-a5d8-cfc90ab46a88" containerID="bd61826309c968e8088ea8a2436aa9d6215623b896ed4f3c10188818d38a4528" exitCode=0 Mar 10 18:56:03 crc kubenswrapper[4861]: I0310 18:56:03.385401 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29552816-vffxm" event={"ID":"72c38f60-196f-4842-a5d8-cfc90ab46a88","Type":"ContainerDied","Data":"bd61826309c968e8088ea8a2436aa9d6215623b896ed4f3c10188818d38a4528"} Mar 10 18:56:03 crc kubenswrapper[4861]: I0310 18:56:03.451682 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-dtm67" Mar 10 18:56:03 crc kubenswrapper[4861]: I0310 18:56:03.467870 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-kmxlp" Mar 10 18:56:03 crc kubenswrapper[4861]: I0310 18:56:03.467912 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-kmxlp" Mar 10 18:56:03 crc kubenswrapper[4861]: I0310 18:56:03.562064 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-kmxlp" Mar 10 18:56:04 crc kubenswrapper[4861]: I0310 18:56:04.469698 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-kmxlp" Mar 10 18:56:04 crc kubenswrapper[4861]: I0310 18:56:04.835555 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29552816-vffxm" Mar 10 18:56:04 crc kubenswrapper[4861]: I0310 18:56:04.978525 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lngw6\" (UniqueName: \"kubernetes.io/projected/72c38f60-196f-4842-a5d8-cfc90ab46a88-kube-api-access-lngw6\") pod \"72c38f60-196f-4842-a5d8-cfc90ab46a88\" (UID: \"72c38f60-196f-4842-a5d8-cfc90ab46a88\") " Mar 10 18:56:04 crc kubenswrapper[4861]: I0310 18:56:04.986397 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/72c38f60-196f-4842-a5d8-cfc90ab46a88-kube-api-access-lngw6" (OuterVolumeSpecName: "kube-api-access-lngw6") pod "72c38f60-196f-4842-a5d8-cfc90ab46a88" (UID: "72c38f60-196f-4842-a5d8-cfc90ab46a88"). InnerVolumeSpecName "kube-api-access-lngw6". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 10 18:56:05 crc kubenswrapper[4861]: I0310 18:56:05.080017 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lngw6\" (UniqueName: \"kubernetes.io/projected/72c38f60-196f-4842-a5d8-cfc90ab46a88-kube-api-access-lngw6\") on node \"crc\" DevicePath \"\"" Mar 10 18:56:05 crc kubenswrapper[4861]: I0310 18:56:05.400129 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29552816-vffxm" event={"ID":"72c38f60-196f-4842-a5d8-cfc90ab46a88","Type":"ContainerDied","Data":"a94258008ef497361c8cf2380da90b3fb0e54af24368f9672281a0e819bc2d7e"} Mar 10 18:56:05 crc kubenswrapper[4861]: I0310 18:56:05.400185 4861 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a94258008ef497361c8cf2380da90b3fb0e54af24368f9672281a0e819bc2d7e" Mar 10 18:56:05 crc kubenswrapper[4861]: I0310 18:56:05.400511 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29552816-vffxm" Mar 10 18:56:05 crc kubenswrapper[4861]: I0310 18:56:05.783410 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-d6bp2" Mar 10 18:56:05 crc kubenswrapper[4861]: I0310 18:56:05.783472 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-d6bp2" Mar 10 18:56:05 crc kubenswrapper[4861]: I0310 18:56:05.918250 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-qk6mz" Mar 10 18:56:05 crc kubenswrapper[4861]: I0310 18:56:05.918300 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-qk6mz" Mar 10 18:56:05 crc kubenswrapper[4861]: I0310 18:56:05.925007 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29552810-wv88x"] Mar 10 18:56:05 crc kubenswrapper[4861]: I0310 18:56:05.928592 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29552810-wv88x"] Mar 10 18:56:05 crc kubenswrapper[4861]: I0310 18:56:05.974857 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-qk6mz" Mar 10 18:56:06 crc kubenswrapper[4861]: I0310 18:56:06.476515 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-qk6mz" Mar 10 18:56:06 crc kubenswrapper[4861]: I0310 18:56:06.855155 4861 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-d6bp2" podUID="d044ee30-51eb-432a-8d77-dd87673bd190" containerName="registry-server" probeResult="failure" output=< Mar 10 18:56:06 crc kubenswrapper[4861]: timeout: failed to connect service ":50051" within 1s Mar 10 18:56:06 crc kubenswrapper[4861]: > Mar 10 18:56:06 crc 
kubenswrapper[4861]: I0310 18:56:06.967427 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="19e950a0-6f71-44af-b995-f1ef1be6edbb" path="/var/lib/kubelet/pods/19e950a0-6f71-44af-b995-f1ef1be6edbb/volumes" Mar 10 18:56:15 crc kubenswrapper[4861]: I0310 18:56:15.852981 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-d6bp2" Mar 10 18:56:15 crc kubenswrapper[4861]: I0310 18:56:15.924835 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-d6bp2" Mar 10 18:56:21 crc kubenswrapper[4861]: I0310 18:56:21.992045 4861 patch_prober.go:28] interesting pod/machine-config-daemon-qttbr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 10 18:56:21 crc kubenswrapper[4861]: I0310 18:56:21.992508 4861 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qttbr" podUID="771189c2-452d-4204-a0b7-abfe9ba62bd0" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 10 18:56:21 crc kubenswrapper[4861]: I0310 18:56:21.992540 4861 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-qttbr" Mar 10 18:56:21 crc kubenswrapper[4861]: I0310 18:56:21.993054 4861 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"0b39b5d2742de1636dbcd1b488c0dbde0e3638a67d061cd6ddececb58fdd6e42"} pod="openshift-machine-config-operator/machine-config-daemon-qttbr" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 
10 18:56:21 crc kubenswrapper[4861]: I0310 18:56:21.993104 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-qttbr" podUID="771189c2-452d-4204-a0b7-abfe9ba62bd0" containerName="machine-config-daemon" containerID="cri-o://0b39b5d2742de1636dbcd1b488c0dbde0e3638a67d061cd6ddececb58fdd6e42" gracePeriod=600 Mar 10 18:56:22 crc kubenswrapper[4861]: I0310 18:56:22.498870 4861 generic.go:334] "Generic (PLEG): container finished" podID="771189c2-452d-4204-a0b7-abfe9ba62bd0" containerID="0b39b5d2742de1636dbcd1b488c0dbde0e3638a67d061cd6ddececb58fdd6e42" exitCode=0 Mar 10 18:56:22 crc kubenswrapper[4861]: I0310 18:56:22.498943 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qttbr" event={"ID":"771189c2-452d-4204-a0b7-abfe9ba62bd0","Type":"ContainerDied","Data":"0b39b5d2742de1636dbcd1b488c0dbde0e3638a67d061cd6ddececb58fdd6e42"} Mar 10 18:56:22 crc kubenswrapper[4861]: I0310 18:56:22.499432 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qttbr" event={"ID":"771189c2-452d-4204-a0b7-abfe9ba62bd0","Type":"ContainerStarted","Data":"e68bdef632db05ab7bbd34b45c2c1a40878e3b61d13aac06e84c64a884230517"} Mar 10 18:56:22 crc kubenswrapper[4861]: I0310 18:56:22.499482 4861 scope.go:117] "RemoveContainer" containerID="11c0ae40f0d210a82350ba0ada7a3c9f35595826a4e6f3d5619230238d00111b" Mar 10 18:58:00 crc kubenswrapper[4861]: I0310 18:58:00.153291 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29552818-2xtjk"] Mar 10 18:58:00 crc kubenswrapper[4861]: E0310 18:58:00.154534 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="72c38f60-196f-4842-a5d8-cfc90ab46a88" containerName="oc" Mar 10 18:58:00 crc kubenswrapper[4861]: I0310 18:58:00.154556 4861 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="72c38f60-196f-4842-a5d8-cfc90ab46a88" containerName="oc" Mar 10 18:58:00 crc kubenswrapper[4861]: I0310 18:58:00.154805 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="72c38f60-196f-4842-a5d8-cfc90ab46a88" containerName="oc" Mar 10 18:58:00 crc kubenswrapper[4861]: I0310 18:58:00.155445 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29552818-2xtjk" Mar 10 18:58:00 crc kubenswrapper[4861]: I0310 18:58:00.160098 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 10 18:58:00 crc kubenswrapper[4861]: I0310 18:58:00.160236 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 10 18:58:00 crc kubenswrapper[4861]: I0310 18:58:00.160345 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-gfbj2" Mar 10 18:58:00 crc kubenswrapper[4861]: I0310 18:58:00.165764 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29552818-2xtjk"] Mar 10 18:58:00 crc kubenswrapper[4861]: I0310 18:58:00.253904 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tvpj8\" (UniqueName: \"kubernetes.io/projected/1b460b8b-19f4-4c19-a908-24cf3ceda286-kube-api-access-tvpj8\") pod \"auto-csr-approver-29552818-2xtjk\" (UID: \"1b460b8b-19f4-4c19-a908-24cf3ceda286\") " pod="openshift-infra/auto-csr-approver-29552818-2xtjk" Mar 10 18:58:00 crc kubenswrapper[4861]: I0310 18:58:00.355512 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tvpj8\" (UniqueName: \"kubernetes.io/projected/1b460b8b-19f4-4c19-a908-24cf3ceda286-kube-api-access-tvpj8\") pod \"auto-csr-approver-29552818-2xtjk\" (UID: \"1b460b8b-19f4-4c19-a908-24cf3ceda286\") " 
pod="openshift-infra/auto-csr-approver-29552818-2xtjk" Mar 10 18:58:00 crc kubenswrapper[4861]: I0310 18:58:00.389276 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tvpj8\" (UniqueName: \"kubernetes.io/projected/1b460b8b-19f4-4c19-a908-24cf3ceda286-kube-api-access-tvpj8\") pod \"auto-csr-approver-29552818-2xtjk\" (UID: \"1b460b8b-19f4-4c19-a908-24cf3ceda286\") " pod="openshift-infra/auto-csr-approver-29552818-2xtjk" Mar 10 18:58:00 crc kubenswrapper[4861]: I0310 18:58:00.479869 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29552818-2xtjk" Mar 10 18:58:00 crc kubenswrapper[4861]: I0310 18:58:00.777787 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29552818-2xtjk"] Mar 10 18:58:00 crc kubenswrapper[4861]: I0310 18:58:00.787212 4861 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 10 18:58:01 crc kubenswrapper[4861]: I0310 18:58:01.183776 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29552818-2xtjk" event={"ID":"1b460b8b-19f4-4c19-a908-24cf3ceda286","Type":"ContainerStarted","Data":"8e5166e685051771f6bb5dd463e697aaff30033d4c1e4f6f376af79ca629770f"} Mar 10 18:58:02 crc kubenswrapper[4861]: I0310 18:58:02.192209 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29552818-2xtjk" event={"ID":"1b460b8b-19f4-4c19-a908-24cf3ceda286","Type":"ContainerStarted","Data":"638526d5befcc48a5a0588cdbda3db4c549bdf9d39e06f68c5abd13aff080118"} Mar 10 18:58:02 crc kubenswrapper[4861]: I0310 18:58:02.215441 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29552818-2xtjk" podStartSLOduration=1.207911127 podStartE2EDuration="2.215418718s" podCreationTimestamp="2026-03-10 18:58:00 +0000 UTC" firstStartedPulling="2026-03-10 
18:58:00.786790041 +0000 UTC m=+624.550226041" lastFinishedPulling="2026-03-10 18:58:01.794297622 +0000 UTC m=+625.557733632" observedRunningTime="2026-03-10 18:58:02.210756515 +0000 UTC m=+625.974192505" watchObservedRunningTime="2026-03-10 18:58:02.215418718 +0000 UTC m=+625.978854688" Mar 10 18:58:03 crc kubenswrapper[4861]: I0310 18:58:03.203560 4861 generic.go:334] "Generic (PLEG): container finished" podID="1b460b8b-19f4-4c19-a908-24cf3ceda286" containerID="638526d5befcc48a5a0588cdbda3db4c549bdf9d39e06f68c5abd13aff080118" exitCode=0 Mar 10 18:58:03 crc kubenswrapper[4861]: I0310 18:58:03.203674 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29552818-2xtjk" event={"ID":"1b460b8b-19f4-4c19-a908-24cf3ceda286","Type":"ContainerDied","Data":"638526d5befcc48a5a0588cdbda3db4c549bdf9d39e06f68c5abd13aff080118"} Mar 10 18:58:04 crc kubenswrapper[4861]: I0310 18:58:04.547651 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29552818-2xtjk" Mar 10 18:58:04 crc kubenswrapper[4861]: I0310 18:58:04.720743 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tvpj8\" (UniqueName: \"kubernetes.io/projected/1b460b8b-19f4-4c19-a908-24cf3ceda286-kube-api-access-tvpj8\") pod \"1b460b8b-19f4-4c19-a908-24cf3ceda286\" (UID: \"1b460b8b-19f4-4c19-a908-24cf3ceda286\") " Mar 10 18:58:04 crc kubenswrapper[4861]: I0310 18:58:04.730364 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1b460b8b-19f4-4c19-a908-24cf3ceda286-kube-api-access-tvpj8" (OuterVolumeSpecName: "kube-api-access-tvpj8") pod "1b460b8b-19f4-4c19-a908-24cf3ceda286" (UID: "1b460b8b-19f4-4c19-a908-24cf3ceda286"). InnerVolumeSpecName "kube-api-access-tvpj8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 10 18:58:04 crc kubenswrapper[4861]: I0310 18:58:04.822989 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tvpj8\" (UniqueName: \"kubernetes.io/projected/1b460b8b-19f4-4c19-a908-24cf3ceda286-kube-api-access-tvpj8\") on node \"crc\" DevicePath \"\"" Mar 10 18:58:05 crc kubenswrapper[4861]: I0310 18:58:05.224567 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29552818-2xtjk" event={"ID":"1b460b8b-19f4-4c19-a908-24cf3ceda286","Type":"ContainerDied","Data":"8e5166e685051771f6bb5dd463e697aaff30033d4c1e4f6f376af79ca629770f"} Mar 10 18:58:05 crc kubenswrapper[4861]: I0310 18:58:05.224640 4861 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8e5166e685051771f6bb5dd463e697aaff30033d4c1e4f6f376af79ca629770f" Mar 10 18:58:05 crc kubenswrapper[4861]: I0310 18:58:05.224672 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29552818-2xtjk" Mar 10 18:58:05 crc kubenswrapper[4861]: I0310 18:58:05.289363 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29552812-4d5sp"] Mar 10 18:58:05 crc kubenswrapper[4861]: I0310 18:58:05.296651 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29552812-4d5sp"] Mar 10 18:58:06 crc kubenswrapper[4861]: I0310 18:58:06.969654 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="97fa138b-1e58-48b3-90ba-2ed750d3f4f1" path="/var/lib/kubelet/pods/97fa138b-1e58-48b3-90ba-2ed750d3f4f1/volumes" Mar 10 18:58:37 crc kubenswrapper[4861]: I0310 18:58:37.408304 4861 scope.go:117] "RemoveContainer" containerID="064d1edf204dd6d1ed8985be957be1f27e42f06d7820fae56f6ca7bd9b80c3e2" Mar 10 18:58:37 crc kubenswrapper[4861]: I0310 18:58:37.474594 4861 scope.go:117] "RemoveContainer" 
containerID="9cf9e0fb9edc51e0bdf3b5c5aa984d6fd2c69729cd604dbbf10bb1f52a107c9d" Mar 10 18:58:51 crc kubenswrapper[4861]: I0310 18:58:51.991780 4861 patch_prober.go:28] interesting pod/machine-config-daemon-qttbr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 10 18:58:51 crc kubenswrapper[4861]: I0310 18:58:51.992532 4861 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qttbr" podUID="771189c2-452d-4204-a0b7-abfe9ba62bd0" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 10 18:59:21 crc kubenswrapper[4861]: I0310 18:59:21.991740 4861 patch_prober.go:28] interesting pod/machine-config-daemon-qttbr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 10 18:59:21 crc kubenswrapper[4861]: I0310 18:59:21.992423 4861 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qttbr" podUID="771189c2-452d-4204-a0b7-abfe9ba62bd0" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 10 18:59:51 crc kubenswrapper[4861]: I0310 18:59:51.992323 4861 patch_prober.go:28] interesting pod/machine-config-daemon-qttbr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 10 18:59:51 crc kubenswrapper[4861]: I0310 18:59:51.993102 4861 
prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qttbr" podUID="771189c2-452d-4204-a0b7-abfe9ba62bd0" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 10 18:59:51 crc kubenswrapper[4861]: I0310 18:59:51.993175 4861 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-qttbr" Mar 10 18:59:51 crc kubenswrapper[4861]: I0310 18:59:51.994129 4861 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"e68bdef632db05ab7bbd34b45c2c1a40878e3b61d13aac06e84c64a884230517"} pod="openshift-machine-config-operator/machine-config-daemon-qttbr" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 10 18:59:51 crc kubenswrapper[4861]: I0310 18:59:51.994234 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-qttbr" podUID="771189c2-452d-4204-a0b7-abfe9ba62bd0" containerName="machine-config-daemon" containerID="cri-o://e68bdef632db05ab7bbd34b45c2c1a40878e3b61d13aac06e84c64a884230517" gracePeriod=600 Mar 10 18:59:53 crc kubenswrapper[4861]: I0310 18:59:53.004544 4861 generic.go:334] "Generic (PLEG): container finished" podID="771189c2-452d-4204-a0b7-abfe9ba62bd0" containerID="e68bdef632db05ab7bbd34b45c2c1a40878e3b61d13aac06e84c64a884230517" exitCode=0 Mar 10 18:59:53 crc kubenswrapper[4861]: I0310 18:59:53.004645 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qttbr" event={"ID":"771189c2-452d-4204-a0b7-abfe9ba62bd0","Type":"ContainerDied","Data":"e68bdef632db05ab7bbd34b45c2c1a40878e3b61d13aac06e84c64a884230517"} Mar 10 18:59:53 crc kubenswrapper[4861]: I0310 
18:59:53.005022 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qttbr" event={"ID":"771189c2-452d-4204-a0b7-abfe9ba62bd0","Type":"ContainerStarted","Data":"524ff77cbb2ece0094ac12e0d38120bb8410468f37ff11e7edde1dc4c7082951"} Mar 10 18:59:53 crc kubenswrapper[4861]: I0310 18:59:53.005061 4861 scope.go:117] "RemoveContainer" containerID="0b39b5d2742de1636dbcd1b488c0dbde0e3638a67d061cd6ddececb58fdd6e42" Mar 10 19:00:00 crc kubenswrapper[4861]: I0310 19:00:00.144671 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29552820-xgtf5"] Mar 10 19:00:00 crc kubenswrapper[4861]: E0310 19:00:00.145536 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1b460b8b-19f4-4c19-a908-24cf3ceda286" containerName="oc" Mar 10 19:00:00 crc kubenswrapper[4861]: I0310 19:00:00.145554 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="1b460b8b-19f4-4c19-a908-24cf3ceda286" containerName="oc" Mar 10 19:00:00 crc kubenswrapper[4861]: I0310 19:00:00.145690 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="1b460b8b-19f4-4c19-a908-24cf3ceda286" containerName="oc" Mar 10 19:00:00 crc kubenswrapper[4861]: I0310 19:00:00.146142 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29552820-xgtf5" Mar 10 19:00:00 crc kubenswrapper[4861]: I0310 19:00:00.148385 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 10 19:00:00 crc kubenswrapper[4861]: I0310 19:00:00.149193 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 10 19:00:00 crc kubenswrapper[4861]: I0310 19:00:00.149416 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-gfbj2" Mar 10 19:00:00 crc kubenswrapper[4861]: I0310 19:00:00.158562 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29552820-ng94c"] Mar 10 19:00:00 crc kubenswrapper[4861]: I0310 19:00:00.160008 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29552820-ng94c" Mar 10 19:00:00 crc kubenswrapper[4861]: I0310 19:00:00.161584 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29552820-xgtf5"] Mar 10 19:00:00 crc kubenswrapper[4861]: I0310 19:00:00.167901 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Mar 10 19:00:00 crc kubenswrapper[4861]: I0310 19:00:00.167912 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Mar 10 19:00:00 crc kubenswrapper[4861]: I0310 19:00:00.176570 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29552820-ng94c"] Mar 10 19:00:00 crc kubenswrapper[4861]: I0310 19:00:00.279578 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/bd7e5d0b-6c31-4cb3-9162-337466c63f12-config-volume\") pod \"collect-profiles-29552820-ng94c\" (UID: \"bd7e5d0b-6c31-4cb3-9162-337466c63f12\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29552820-ng94c" Mar 10 19:00:00 crc kubenswrapper[4861]: I0310 19:00:00.279666 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j2x9f\" (UniqueName: \"kubernetes.io/projected/bd7e5d0b-6c31-4cb3-9162-337466c63f12-kube-api-access-j2x9f\") pod \"collect-profiles-29552820-ng94c\" (UID: \"bd7e5d0b-6c31-4cb3-9162-337466c63f12\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29552820-ng94c" Mar 10 19:00:00 crc kubenswrapper[4861]: I0310 19:00:00.280039 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-49qmv\" (UniqueName: \"kubernetes.io/projected/abcaa5c6-e4b8-474e-81c9-784dd89f3568-kube-api-access-49qmv\") pod \"auto-csr-approver-29552820-xgtf5\" (UID: \"abcaa5c6-e4b8-474e-81c9-784dd89f3568\") " pod="openshift-infra/auto-csr-approver-29552820-xgtf5" Mar 10 19:00:00 crc kubenswrapper[4861]: I0310 19:00:00.280157 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/bd7e5d0b-6c31-4cb3-9162-337466c63f12-secret-volume\") pod \"collect-profiles-29552820-ng94c\" (UID: \"bd7e5d0b-6c31-4cb3-9162-337466c63f12\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29552820-ng94c" Mar 10 19:00:00 crc kubenswrapper[4861]: I0310 19:00:00.381248 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j2x9f\" (UniqueName: \"kubernetes.io/projected/bd7e5d0b-6c31-4cb3-9162-337466c63f12-kube-api-access-j2x9f\") pod \"collect-profiles-29552820-ng94c\" (UID: \"bd7e5d0b-6c31-4cb3-9162-337466c63f12\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29552820-ng94c" Mar 10 19:00:00 crc kubenswrapper[4861]: I0310 19:00:00.381392 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-49qmv\" (UniqueName: \"kubernetes.io/projected/abcaa5c6-e4b8-474e-81c9-784dd89f3568-kube-api-access-49qmv\") pod \"auto-csr-approver-29552820-xgtf5\" (UID: \"abcaa5c6-e4b8-474e-81c9-784dd89f3568\") " pod="openshift-infra/auto-csr-approver-29552820-xgtf5" Mar 10 19:00:00 crc kubenswrapper[4861]: I0310 19:00:00.381545 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/bd7e5d0b-6c31-4cb3-9162-337466c63f12-secret-volume\") pod \"collect-profiles-29552820-ng94c\" (UID: \"bd7e5d0b-6c31-4cb3-9162-337466c63f12\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29552820-ng94c" Mar 10 19:00:00 crc kubenswrapper[4861]: I0310 19:00:00.383101 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bd7e5d0b-6c31-4cb3-9162-337466c63f12-config-volume\") pod \"collect-profiles-29552820-ng94c\" (UID: \"bd7e5d0b-6c31-4cb3-9162-337466c63f12\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29552820-ng94c" Mar 10 19:00:00 crc kubenswrapper[4861]: I0310 19:00:00.384615 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bd7e5d0b-6c31-4cb3-9162-337466c63f12-config-volume\") pod \"collect-profiles-29552820-ng94c\" (UID: \"bd7e5d0b-6c31-4cb3-9162-337466c63f12\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29552820-ng94c" Mar 10 19:00:00 crc kubenswrapper[4861]: I0310 19:00:00.397601 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/bd7e5d0b-6c31-4cb3-9162-337466c63f12-secret-volume\") pod 
\"collect-profiles-29552820-ng94c\" (UID: \"bd7e5d0b-6c31-4cb3-9162-337466c63f12\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29552820-ng94c" Mar 10 19:00:00 crc kubenswrapper[4861]: I0310 19:00:00.405209 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-49qmv\" (UniqueName: \"kubernetes.io/projected/abcaa5c6-e4b8-474e-81c9-784dd89f3568-kube-api-access-49qmv\") pod \"auto-csr-approver-29552820-xgtf5\" (UID: \"abcaa5c6-e4b8-474e-81c9-784dd89f3568\") " pod="openshift-infra/auto-csr-approver-29552820-xgtf5" Mar 10 19:00:00 crc kubenswrapper[4861]: I0310 19:00:00.411896 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j2x9f\" (UniqueName: \"kubernetes.io/projected/bd7e5d0b-6c31-4cb3-9162-337466c63f12-kube-api-access-j2x9f\") pod \"collect-profiles-29552820-ng94c\" (UID: \"bd7e5d0b-6c31-4cb3-9162-337466c63f12\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29552820-ng94c" Mar 10 19:00:00 crc kubenswrapper[4861]: I0310 19:00:00.485343 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29552820-xgtf5" Mar 10 19:00:00 crc kubenswrapper[4861]: I0310 19:00:00.496942 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29552820-ng94c" Mar 10 19:00:00 crc kubenswrapper[4861]: I0310 19:00:00.768928 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29552820-xgtf5"] Mar 10 19:00:00 crc kubenswrapper[4861]: I0310 19:00:00.811929 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29552820-ng94c"] Mar 10 19:00:00 crc kubenswrapper[4861]: W0310 19:00:00.816778 4861 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbd7e5d0b_6c31_4cb3_9162_337466c63f12.slice/crio-3661a8b096119960aeec83ad6f1b7c06c73a1ec131bf2adb8c35adf2bde7201c WatchSource:0}: Error finding container 3661a8b096119960aeec83ad6f1b7c06c73a1ec131bf2adb8c35adf2bde7201c: Status 404 returned error can't find the container with id 3661a8b096119960aeec83ad6f1b7c06c73a1ec131bf2adb8c35adf2bde7201c Mar 10 19:00:01 crc kubenswrapper[4861]: I0310 19:00:01.091913 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29552820-ng94c" event={"ID":"bd7e5d0b-6c31-4cb3-9162-337466c63f12","Type":"ContainerStarted","Data":"c0d336064e9831a1391c24b511f44491c5ac01c912730ce7ba147a185fbd3cd0"} Mar 10 19:00:01 crc kubenswrapper[4861]: I0310 19:00:01.092534 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29552820-ng94c" event={"ID":"bd7e5d0b-6c31-4cb3-9162-337466c63f12","Type":"ContainerStarted","Data":"3661a8b096119960aeec83ad6f1b7c06c73a1ec131bf2adb8c35adf2bde7201c"} Mar 10 19:00:01 crc kubenswrapper[4861]: I0310 19:00:01.093899 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29552820-xgtf5" 
event={"ID":"abcaa5c6-e4b8-474e-81c9-784dd89f3568","Type":"ContainerStarted","Data":"51366c51c5208a780ab31ada913d1b804b6f45e7dc6013c443584973560ff320"} Mar 10 19:00:01 crc kubenswrapper[4861]: I0310 19:00:01.121054 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29552820-ng94c" podStartSLOduration=1.121019154 podStartE2EDuration="1.121019154s" podCreationTimestamp="2026-03-10 19:00:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-10 19:00:01.112757482 +0000 UTC m=+744.876193462" watchObservedRunningTime="2026-03-10 19:00:01.121019154 +0000 UTC m=+744.884455154" Mar 10 19:00:02 crc kubenswrapper[4861]: I0310 19:00:02.102130 4861 generic.go:334] "Generic (PLEG): container finished" podID="bd7e5d0b-6c31-4cb3-9162-337466c63f12" containerID="c0d336064e9831a1391c24b511f44491c5ac01c912730ce7ba147a185fbd3cd0" exitCode=0 Mar 10 19:00:02 crc kubenswrapper[4861]: I0310 19:00:02.102210 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29552820-ng94c" event={"ID":"bd7e5d0b-6c31-4cb3-9162-337466c63f12","Type":"ContainerDied","Data":"c0d336064e9831a1391c24b511f44491c5ac01c912730ce7ba147a185fbd3cd0"} Mar 10 19:00:03 crc kubenswrapper[4861]: I0310 19:00:03.505900 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29552820-ng94c" Mar 10 19:00:03 crc kubenswrapper[4861]: I0310 19:00:03.648469 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j2x9f\" (UniqueName: \"kubernetes.io/projected/bd7e5d0b-6c31-4cb3-9162-337466c63f12-kube-api-access-j2x9f\") pod \"bd7e5d0b-6c31-4cb3-9162-337466c63f12\" (UID: \"bd7e5d0b-6c31-4cb3-9162-337466c63f12\") " Mar 10 19:00:03 crc kubenswrapper[4861]: I0310 19:00:03.648539 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/bd7e5d0b-6c31-4cb3-9162-337466c63f12-secret-volume\") pod \"bd7e5d0b-6c31-4cb3-9162-337466c63f12\" (UID: \"bd7e5d0b-6c31-4cb3-9162-337466c63f12\") " Mar 10 19:00:03 crc kubenswrapper[4861]: I0310 19:00:03.648631 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bd7e5d0b-6c31-4cb3-9162-337466c63f12-config-volume\") pod \"bd7e5d0b-6c31-4cb3-9162-337466c63f12\" (UID: \"bd7e5d0b-6c31-4cb3-9162-337466c63f12\") " Mar 10 19:00:03 crc kubenswrapper[4861]: I0310 19:00:03.649558 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bd7e5d0b-6c31-4cb3-9162-337466c63f12-config-volume" (OuterVolumeSpecName: "config-volume") pod "bd7e5d0b-6c31-4cb3-9162-337466c63f12" (UID: "bd7e5d0b-6c31-4cb3-9162-337466c63f12"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 10 19:00:03 crc kubenswrapper[4861]: I0310 19:00:03.653508 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bd7e5d0b-6c31-4cb3-9162-337466c63f12-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "bd7e5d0b-6c31-4cb3-9162-337466c63f12" (UID: "bd7e5d0b-6c31-4cb3-9162-337466c63f12"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 10 19:00:03 crc kubenswrapper[4861]: I0310 19:00:03.655832 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd7e5d0b-6c31-4cb3-9162-337466c63f12-kube-api-access-j2x9f" (OuterVolumeSpecName: "kube-api-access-j2x9f") pod "bd7e5d0b-6c31-4cb3-9162-337466c63f12" (UID: "bd7e5d0b-6c31-4cb3-9162-337466c63f12"). InnerVolumeSpecName "kube-api-access-j2x9f". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 10 19:00:03 crc kubenswrapper[4861]: I0310 19:00:03.750128 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j2x9f\" (UniqueName: \"kubernetes.io/projected/bd7e5d0b-6c31-4cb3-9162-337466c63f12-kube-api-access-j2x9f\") on node \"crc\" DevicePath \"\"" Mar 10 19:00:03 crc kubenswrapper[4861]: I0310 19:00:03.750178 4861 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/bd7e5d0b-6c31-4cb3-9162-337466c63f12-secret-volume\") on node \"crc\" DevicePath \"\"" Mar 10 19:00:03 crc kubenswrapper[4861]: I0310 19:00:03.750196 4861 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bd7e5d0b-6c31-4cb3-9162-337466c63f12-config-volume\") on node \"crc\" DevicePath \"\"" Mar 10 19:00:04 crc kubenswrapper[4861]: I0310 19:00:04.119047 4861 generic.go:334] "Generic (PLEG): container finished" podID="abcaa5c6-e4b8-474e-81c9-784dd89f3568" containerID="018765a2efa90214dea88dd37bfb14fd89c01e75371017b4a7e19c2e1ba7124e" exitCode=0 Mar 10 19:00:04 crc kubenswrapper[4861]: I0310 19:00:04.119103 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29552820-xgtf5" event={"ID":"abcaa5c6-e4b8-474e-81c9-784dd89f3568","Type":"ContainerDied","Data":"018765a2efa90214dea88dd37bfb14fd89c01e75371017b4a7e19c2e1ba7124e"} Mar 10 19:00:04 crc kubenswrapper[4861]: I0310 19:00:04.122386 4861 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29552820-ng94c" event={"ID":"bd7e5d0b-6c31-4cb3-9162-337466c63f12","Type":"ContainerDied","Data":"3661a8b096119960aeec83ad6f1b7c06c73a1ec131bf2adb8c35adf2bde7201c"} Mar 10 19:00:04 crc kubenswrapper[4861]: I0310 19:00:04.122554 4861 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3661a8b096119960aeec83ad6f1b7c06c73a1ec131bf2adb8c35adf2bde7201c" Mar 10 19:00:04 crc kubenswrapper[4861]: I0310 19:00:04.122465 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29552820-ng94c" Mar 10 19:00:05 crc kubenswrapper[4861]: I0310 19:00:05.420811 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29552820-xgtf5" Mar 10 19:00:05 crc kubenswrapper[4861]: I0310 19:00:05.580454 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-49qmv\" (UniqueName: \"kubernetes.io/projected/abcaa5c6-e4b8-474e-81c9-784dd89f3568-kube-api-access-49qmv\") pod \"abcaa5c6-e4b8-474e-81c9-784dd89f3568\" (UID: \"abcaa5c6-e4b8-474e-81c9-784dd89f3568\") " Mar 10 19:00:05 crc kubenswrapper[4861]: I0310 19:00:05.588994 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/abcaa5c6-e4b8-474e-81c9-784dd89f3568-kube-api-access-49qmv" (OuterVolumeSpecName: "kube-api-access-49qmv") pod "abcaa5c6-e4b8-474e-81c9-784dd89f3568" (UID: "abcaa5c6-e4b8-474e-81c9-784dd89f3568"). InnerVolumeSpecName "kube-api-access-49qmv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 10 19:00:05 crc kubenswrapper[4861]: I0310 19:00:05.682228 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-49qmv\" (UniqueName: \"kubernetes.io/projected/abcaa5c6-e4b8-474e-81c9-784dd89f3568-kube-api-access-49qmv\") on node \"crc\" DevicePath \"\"" Mar 10 19:00:06 crc kubenswrapper[4861]: I0310 19:00:06.138219 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29552820-xgtf5" event={"ID":"abcaa5c6-e4b8-474e-81c9-784dd89f3568","Type":"ContainerDied","Data":"51366c51c5208a780ab31ada913d1b804b6f45e7dc6013c443584973560ff320"} Mar 10 19:00:06 crc kubenswrapper[4861]: I0310 19:00:06.138487 4861 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="51366c51c5208a780ab31ada913d1b804b6f45e7dc6013c443584973560ff320" Mar 10 19:00:06 crc kubenswrapper[4861]: I0310 19:00:06.138311 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29552820-xgtf5" Mar 10 19:00:06 crc kubenswrapper[4861]: I0310 19:00:06.497475 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29552814-kg8sv"] Mar 10 19:00:06 crc kubenswrapper[4861]: I0310 19:00:06.527251 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29552814-kg8sv"] Mar 10 19:00:06 crc kubenswrapper[4861]: I0310 19:00:06.967192 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ad870600-f1d1-4cf9-8c5f-e8389dbb5a81" path="/var/lib/kubelet/pods/ad870600-f1d1-4cf9-8c5f-e8389dbb5a81/volumes" Mar 10 19:00:37 crc kubenswrapper[4861]: I0310 19:00:37.551588 4861 scope.go:117] "RemoveContainer" containerID="4b7158fcfc5c45c4f40f351f272d87ae2c4be5097fcbd6a40488f48ed302d371" Mar 10 19:01:55 crc kubenswrapper[4861]: I0310 19:01:55.152787 4861 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" 
name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Mar 10 19:02:00 crc kubenswrapper[4861]: I0310 19:02:00.131170 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29552822-4blvc"] Mar 10 19:02:00 crc kubenswrapper[4861]: E0310 19:02:00.131838 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bd7e5d0b-6c31-4cb3-9162-337466c63f12" containerName="collect-profiles" Mar 10 19:02:00 crc kubenswrapper[4861]: I0310 19:02:00.131852 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd7e5d0b-6c31-4cb3-9162-337466c63f12" containerName="collect-profiles" Mar 10 19:02:00 crc kubenswrapper[4861]: E0310 19:02:00.131871 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="abcaa5c6-e4b8-474e-81c9-784dd89f3568" containerName="oc" Mar 10 19:02:00 crc kubenswrapper[4861]: I0310 19:02:00.131879 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="abcaa5c6-e4b8-474e-81c9-784dd89f3568" containerName="oc" Mar 10 19:02:00 crc kubenswrapper[4861]: I0310 19:02:00.131998 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="abcaa5c6-e4b8-474e-81c9-784dd89f3568" containerName="oc" Mar 10 19:02:00 crc kubenswrapper[4861]: I0310 19:02:00.132010 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="bd7e5d0b-6c31-4cb3-9162-337466c63f12" containerName="collect-profiles" Mar 10 19:02:00 crc kubenswrapper[4861]: I0310 19:02:00.132415 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29552822-4blvc" Mar 10 19:02:00 crc kubenswrapper[4861]: I0310 19:02:00.135757 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 10 19:02:00 crc kubenswrapper[4861]: I0310 19:02:00.135789 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 10 19:02:00 crc kubenswrapper[4861]: I0310 19:02:00.141792 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-gfbj2" Mar 10 19:02:00 crc kubenswrapper[4861]: I0310 19:02:00.146517 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29552822-4blvc"] Mar 10 19:02:00 crc kubenswrapper[4861]: I0310 19:02:00.256289 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wblgn\" (UniqueName: \"kubernetes.io/projected/e63a5345-9a31-4af3-844d-d7766ae8413d-kube-api-access-wblgn\") pod \"auto-csr-approver-29552822-4blvc\" (UID: \"e63a5345-9a31-4af3-844d-d7766ae8413d\") " pod="openshift-infra/auto-csr-approver-29552822-4blvc" Mar 10 19:02:00 crc kubenswrapper[4861]: I0310 19:02:00.358147 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wblgn\" (UniqueName: \"kubernetes.io/projected/e63a5345-9a31-4af3-844d-d7766ae8413d-kube-api-access-wblgn\") pod \"auto-csr-approver-29552822-4blvc\" (UID: \"e63a5345-9a31-4af3-844d-d7766ae8413d\") " pod="openshift-infra/auto-csr-approver-29552822-4blvc" Mar 10 19:02:00 crc kubenswrapper[4861]: I0310 19:02:00.379799 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wblgn\" (UniqueName: \"kubernetes.io/projected/e63a5345-9a31-4af3-844d-d7766ae8413d-kube-api-access-wblgn\") pod \"auto-csr-approver-29552822-4blvc\" (UID: \"e63a5345-9a31-4af3-844d-d7766ae8413d\") " 
pod="openshift-infra/auto-csr-approver-29552822-4blvc" Mar 10 19:02:00 crc kubenswrapper[4861]: I0310 19:02:00.488041 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29552822-4blvc" Mar 10 19:02:00 crc kubenswrapper[4861]: I0310 19:02:00.736523 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29552822-4blvc"] Mar 10 19:02:00 crc kubenswrapper[4861]: I0310 19:02:00.968210 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29552822-4blvc" event={"ID":"e63a5345-9a31-4af3-844d-d7766ae8413d","Type":"ContainerStarted","Data":"c4321e23c975db71933335a33e9365637a8314d0e5d9ad565b4bb44f18375109"} Mar 10 19:02:02 crc kubenswrapper[4861]: I0310 19:02:02.979832 4861 generic.go:334] "Generic (PLEG): container finished" podID="e63a5345-9a31-4af3-844d-d7766ae8413d" containerID="1f6ab33cf920175a47ebe2665cdf82ea8479039d71240ea679f36a3b34114d9d" exitCode=0 Mar 10 19:02:02 crc kubenswrapper[4861]: I0310 19:02:02.979935 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29552822-4blvc" event={"ID":"e63a5345-9a31-4af3-844d-d7766ae8413d","Type":"ContainerDied","Data":"1f6ab33cf920175a47ebe2665cdf82ea8479039d71240ea679f36a3b34114d9d"} Mar 10 19:02:04 crc kubenswrapper[4861]: I0310 19:02:04.337673 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29552822-4blvc" Mar 10 19:02:04 crc kubenswrapper[4861]: I0310 19:02:04.526676 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wblgn\" (UniqueName: \"kubernetes.io/projected/e63a5345-9a31-4af3-844d-d7766ae8413d-kube-api-access-wblgn\") pod \"e63a5345-9a31-4af3-844d-d7766ae8413d\" (UID: \"e63a5345-9a31-4af3-844d-d7766ae8413d\") " Mar 10 19:02:04 crc kubenswrapper[4861]: I0310 19:02:04.535774 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e63a5345-9a31-4af3-844d-d7766ae8413d-kube-api-access-wblgn" (OuterVolumeSpecName: "kube-api-access-wblgn") pod "e63a5345-9a31-4af3-844d-d7766ae8413d" (UID: "e63a5345-9a31-4af3-844d-d7766ae8413d"). InnerVolumeSpecName "kube-api-access-wblgn". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 10 19:02:04 crc kubenswrapper[4861]: I0310 19:02:04.628809 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wblgn\" (UniqueName: \"kubernetes.io/projected/e63a5345-9a31-4af3-844d-d7766ae8413d-kube-api-access-wblgn\") on node \"crc\" DevicePath \"\"" Mar 10 19:02:04 crc kubenswrapper[4861]: I0310 19:02:04.995010 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29552822-4blvc" event={"ID":"e63a5345-9a31-4af3-844d-d7766ae8413d","Type":"ContainerDied","Data":"c4321e23c975db71933335a33e9365637a8314d0e5d9ad565b4bb44f18375109"} Mar 10 19:02:04 crc kubenswrapper[4861]: I0310 19:02:04.995384 4861 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c4321e23c975db71933335a33e9365637a8314d0e5d9ad565b4bb44f18375109" Mar 10 19:02:04 crc kubenswrapper[4861]: I0310 19:02:04.995078 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29552822-4blvc" Mar 10 19:02:05 crc kubenswrapper[4861]: I0310 19:02:05.423138 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29552816-vffxm"] Mar 10 19:02:05 crc kubenswrapper[4861]: I0310 19:02:05.429516 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29552816-vffxm"] Mar 10 19:02:06 crc kubenswrapper[4861]: I0310 19:02:06.971132 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="72c38f60-196f-4842-a5d8-cfc90ab46a88" path="/var/lib/kubelet/pods/72c38f60-196f-4842-a5d8-cfc90ab46a88/volumes" Mar 10 19:02:21 crc kubenswrapper[4861]: I0310 19:02:21.991759 4861 patch_prober.go:28] interesting pod/machine-config-daemon-qttbr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 10 19:02:21 crc kubenswrapper[4861]: I0310 19:02:21.992443 4861 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qttbr" podUID="771189c2-452d-4204-a0b7-abfe9ba62bd0" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 10 19:02:24 crc kubenswrapper[4861]: I0310 19:02:24.224141 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-s2l62"] Mar 10 19:02:24 crc kubenswrapper[4861]: I0310 19:02:24.225283 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-s2l62" podUID="be820cd7-b3a7-4183-a408-67151247b6ee" containerName="ovn-controller" containerID="cri-o://be4f9c8096f4981a65522e8ee451980e580153c1f5c65c736655fb94593dbd97" gracePeriod=30 Mar 10 19:02:24 crc 
kubenswrapper[4861]: I0310 19:02:24.225333 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-s2l62" podUID="be820cd7-b3a7-4183-a408-67151247b6ee" containerName="nbdb" containerID="cri-o://edf854dc22368e4b8f76e0111b30784f22832fc69661b4db8d2c9c33aa553773" gracePeriod=30 Mar 10 19:02:24 crc kubenswrapper[4861]: I0310 19:02:24.225432 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-s2l62" podUID="be820cd7-b3a7-4183-a408-67151247b6ee" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://e7bd3993ae4ddabc6c06a91127afc341760a07401ce3a409612824c0045bb6f2" gracePeriod=30 Mar 10 19:02:24 crc kubenswrapper[4861]: I0310 19:02:24.225398 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-s2l62" podUID="be820cd7-b3a7-4183-a408-67151247b6ee" containerName="sbdb" containerID="cri-o://2cf2a1cfc3438718bb50a53445443bc0251f2cc83f903fca1cccd1048c8f1c20" gracePeriod=30 Mar 10 19:02:24 crc kubenswrapper[4861]: I0310 19:02:24.225577 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-s2l62" podUID="be820cd7-b3a7-4183-a408-67151247b6ee" containerName="northd" containerID="cri-o://e44e4837f8dba12dfb18ef2200a19f696222668eb9b10819d1c3b442a28f5e32" gracePeriod=30 Mar 10 19:02:24 crc kubenswrapper[4861]: I0310 19:02:24.225660 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-s2l62" podUID="be820cd7-b3a7-4183-a408-67151247b6ee" containerName="kube-rbac-proxy-node" containerID="cri-o://22c9526135e4d6c3ef5cdf25b06f60556a876ace6c81593534be08cfd6a54cdf" gracePeriod=30 Mar 10 19:02:24 crc kubenswrapper[4861]: I0310 19:02:24.225775 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-s2l62" 
podUID="be820cd7-b3a7-4183-a408-67151247b6ee" containerName="ovn-acl-logging" containerID="cri-o://0ae3cd6b9ef5ede85a70dd7bcf4eb260cc357bfceeb571904e68788eaba0709c" gracePeriod=30 Mar 10 19:02:24 crc kubenswrapper[4861]: I0310 19:02:24.275834 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-s2l62" podUID="be820cd7-b3a7-4183-a408-67151247b6ee" containerName="ovnkube-controller" containerID="cri-o://7688ca8e6f3f8d545481d6e95c6fc69b9e19d6eea68571a52955cdeae5bbc680" gracePeriod=30 Mar 10 19:02:24 crc kubenswrapper[4861]: I0310 19:02:24.449408 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-s2l62_be820cd7-b3a7-4183-a408-67151247b6ee/ovnkube-controller/3.log" Mar 10 19:02:24 crc kubenswrapper[4861]: I0310 19:02:24.452868 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-s2l62_be820cd7-b3a7-4183-a408-67151247b6ee/ovn-acl-logging/0.log" Mar 10 19:02:24 crc kubenswrapper[4861]: I0310 19:02:24.453819 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-s2l62_be820cd7-b3a7-4183-a408-67151247b6ee/ovn-controller/0.log" Mar 10 19:02:24 crc kubenswrapper[4861]: I0310 19:02:24.454321 4861 generic.go:334] "Generic (PLEG): container finished" podID="be820cd7-b3a7-4183-a408-67151247b6ee" containerID="7688ca8e6f3f8d545481d6e95c6fc69b9e19d6eea68571a52955cdeae5bbc680" exitCode=0 Mar 10 19:02:24 crc kubenswrapper[4861]: I0310 19:02:24.454400 4861 generic.go:334] "Generic (PLEG): container finished" podID="be820cd7-b3a7-4183-a408-67151247b6ee" containerID="e7bd3993ae4ddabc6c06a91127afc341760a07401ce3a409612824c0045bb6f2" exitCode=0 Mar 10 19:02:24 crc kubenswrapper[4861]: I0310 19:02:24.454411 4861 generic.go:334] "Generic (PLEG): container finished" podID="be820cd7-b3a7-4183-a408-67151247b6ee" 
containerID="22c9526135e4d6c3ef5cdf25b06f60556a876ace6c81593534be08cfd6a54cdf" exitCode=0 Mar 10 19:02:24 crc kubenswrapper[4861]: I0310 19:02:24.454418 4861 generic.go:334] "Generic (PLEG): container finished" podID="be820cd7-b3a7-4183-a408-67151247b6ee" containerID="0ae3cd6b9ef5ede85a70dd7bcf4eb260cc357bfceeb571904e68788eaba0709c" exitCode=143 Mar 10 19:02:24 crc kubenswrapper[4861]: I0310 19:02:24.454427 4861 generic.go:334] "Generic (PLEG): container finished" podID="be820cd7-b3a7-4183-a408-67151247b6ee" containerID="be4f9c8096f4981a65522e8ee451980e580153c1f5c65c736655fb94593dbd97" exitCode=143 Mar 10 19:02:24 crc kubenswrapper[4861]: I0310 19:02:24.454419 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-s2l62" event={"ID":"be820cd7-b3a7-4183-a408-67151247b6ee","Type":"ContainerDied","Data":"7688ca8e6f3f8d545481d6e95c6fc69b9e19d6eea68571a52955cdeae5bbc680"} Mar 10 19:02:24 crc kubenswrapper[4861]: I0310 19:02:24.454474 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-s2l62" event={"ID":"be820cd7-b3a7-4183-a408-67151247b6ee","Type":"ContainerDied","Data":"e7bd3993ae4ddabc6c06a91127afc341760a07401ce3a409612824c0045bb6f2"} Mar 10 19:02:24 crc kubenswrapper[4861]: I0310 19:02:24.454490 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-s2l62" event={"ID":"be820cd7-b3a7-4183-a408-67151247b6ee","Type":"ContainerDied","Data":"22c9526135e4d6c3ef5cdf25b06f60556a876ace6c81593534be08cfd6a54cdf"} Mar 10 19:02:24 crc kubenswrapper[4861]: I0310 19:02:24.454579 4861 scope.go:117] "RemoveContainer" containerID="4e8e062dac4eaf569540cc811b67cbe9e8c4a53204ea173373b5cc2e315b87c9" Mar 10 19:02:24 crc kubenswrapper[4861]: I0310 19:02:24.454503 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-s2l62" 
event={"ID":"be820cd7-b3a7-4183-a408-67151247b6ee","Type":"ContainerDied","Data":"0ae3cd6b9ef5ede85a70dd7bcf4eb260cc357bfceeb571904e68788eaba0709c"} Mar 10 19:02:24 crc kubenswrapper[4861]: I0310 19:02:24.455954 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-s2l62" event={"ID":"be820cd7-b3a7-4183-a408-67151247b6ee","Type":"ContainerDied","Data":"be4f9c8096f4981a65522e8ee451980e580153c1f5c65c736655fb94593dbd97"} Mar 10 19:02:24 crc kubenswrapper[4861]: I0310 19:02:24.459557 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-6lblg_d1c251f4-6539-4aa1-8979-47e74495aca3/kube-multus/2.log" Mar 10 19:02:24 crc kubenswrapper[4861]: I0310 19:02:24.460146 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-6lblg_d1c251f4-6539-4aa1-8979-47e74495aca3/kube-multus/1.log" Mar 10 19:02:24 crc kubenswrapper[4861]: I0310 19:02:24.460186 4861 generic.go:334] "Generic (PLEG): container finished" podID="d1c251f4-6539-4aa1-8979-47e74495aca3" containerID="3c2c0f3e0f1bba66a24a5ab2d6d29c88d87421fe37c15f8e88503e9cd1a5d065" exitCode=2 Mar 10 19:02:24 crc kubenswrapper[4861]: I0310 19:02:24.460218 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-6lblg" event={"ID":"d1c251f4-6539-4aa1-8979-47e74495aca3","Type":"ContainerDied","Data":"3c2c0f3e0f1bba66a24a5ab2d6d29c88d87421fe37c15f8e88503e9cd1a5d065"} Mar 10 19:02:24 crc kubenswrapper[4861]: I0310 19:02:24.461563 4861 scope.go:117] "RemoveContainer" containerID="3c2c0f3e0f1bba66a24a5ab2d6d29c88d87421fe37c15f8e88503e9cd1a5d065" Mar 10 19:02:24 crc kubenswrapper[4861]: I0310 19:02:24.539536 4861 scope.go:117] "RemoveContainer" containerID="691c0ec6cd22d6bd4488798ae0a67476744abea7bf6247f7176cad5d37eef07c" Mar 10 19:02:24 crc kubenswrapper[4861]: I0310 19:02:24.594087 4861 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-s2l62_be820cd7-b3a7-4183-a408-67151247b6ee/ovn-acl-logging/0.log" Mar 10 19:02:24 crc kubenswrapper[4861]: I0310 19:02:24.600183 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-s2l62_be820cd7-b3a7-4183-a408-67151247b6ee/ovn-controller/0.log" Mar 10 19:02:24 crc kubenswrapper[4861]: I0310 19:02:24.600784 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-s2l62" Mar 10 19:02:24 crc kubenswrapper[4861]: I0310 19:02:24.664138 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-kv2r6"] Mar 10 19:02:24 crc kubenswrapper[4861]: E0310 19:02:24.664324 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="be820cd7-b3a7-4183-a408-67151247b6ee" containerName="ovnkube-controller" Mar 10 19:02:24 crc kubenswrapper[4861]: I0310 19:02:24.664336 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="be820cd7-b3a7-4183-a408-67151247b6ee" containerName="ovnkube-controller" Mar 10 19:02:24 crc kubenswrapper[4861]: E0310 19:02:24.664343 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="be820cd7-b3a7-4183-a408-67151247b6ee" containerName="ovnkube-controller" Mar 10 19:02:24 crc kubenswrapper[4861]: I0310 19:02:24.664349 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="be820cd7-b3a7-4183-a408-67151247b6ee" containerName="ovnkube-controller" Mar 10 19:02:24 crc kubenswrapper[4861]: E0310 19:02:24.664357 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="be820cd7-b3a7-4183-a408-67151247b6ee" containerName="ovn-acl-logging" Mar 10 19:02:24 crc kubenswrapper[4861]: I0310 19:02:24.664363 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="be820cd7-b3a7-4183-a408-67151247b6ee" containerName="ovn-acl-logging" Mar 10 19:02:24 crc kubenswrapper[4861]: E0310 19:02:24.664370 4861 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e63a5345-9a31-4af3-844d-d7766ae8413d" containerName="oc" Mar 10 19:02:24 crc kubenswrapper[4861]: I0310 19:02:24.664375 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="e63a5345-9a31-4af3-844d-d7766ae8413d" containerName="oc" Mar 10 19:02:24 crc kubenswrapper[4861]: E0310 19:02:24.664383 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="be820cd7-b3a7-4183-a408-67151247b6ee" containerName="sbdb" Mar 10 19:02:24 crc kubenswrapper[4861]: I0310 19:02:24.664388 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="be820cd7-b3a7-4183-a408-67151247b6ee" containerName="sbdb" Mar 10 19:02:24 crc kubenswrapper[4861]: E0310 19:02:24.664398 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="be820cd7-b3a7-4183-a408-67151247b6ee" containerName="kube-rbac-proxy-node" Mar 10 19:02:24 crc kubenswrapper[4861]: I0310 19:02:24.664405 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="be820cd7-b3a7-4183-a408-67151247b6ee" containerName="kube-rbac-proxy-node" Mar 10 19:02:24 crc kubenswrapper[4861]: E0310 19:02:24.664412 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="be820cd7-b3a7-4183-a408-67151247b6ee" containerName="northd" Mar 10 19:02:24 crc kubenswrapper[4861]: I0310 19:02:24.664417 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="be820cd7-b3a7-4183-a408-67151247b6ee" containerName="northd" Mar 10 19:02:24 crc kubenswrapper[4861]: E0310 19:02:24.664425 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="be820cd7-b3a7-4183-a408-67151247b6ee" containerName="ovn-controller" Mar 10 19:02:24 crc kubenswrapper[4861]: I0310 19:02:24.664430 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="be820cd7-b3a7-4183-a408-67151247b6ee" containerName="ovn-controller" Mar 10 19:02:24 crc kubenswrapper[4861]: E0310 19:02:24.664438 4861 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="be820cd7-b3a7-4183-a408-67151247b6ee" containerName="ovnkube-controller" Mar 10 19:02:24 crc kubenswrapper[4861]: I0310 19:02:24.664444 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="be820cd7-b3a7-4183-a408-67151247b6ee" containerName="ovnkube-controller" Mar 10 19:02:24 crc kubenswrapper[4861]: E0310 19:02:24.664453 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="be820cd7-b3a7-4183-a408-67151247b6ee" containerName="nbdb" Mar 10 19:02:24 crc kubenswrapper[4861]: I0310 19:02:24.664459 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="be820cd7-b3a7-4183-a408-67151247b6ee" containerName="nbdb" Mar 10 19:02:24 crc kubenswrapper[4861]: E0310 19:02:24.664467 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="be820cd7-b3a7-4183-a408-67151247b6ee" containerName="kubecfg-setup" Mar 10 19:02:24 crc kubenswrapper[4861]: I0310 19:02:24.664474 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="be820cd7-b3a7-4183-a408-67151247b6ee" containerName="kubecfg-setup" Mar 10 19:02:24 crc kubenswrapper[4861]: E0310 19:02:24.664481 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="be820cd7-b3a7-4183-a408-67151247b6ee" containerName="kube-rbac-proxy-ovn-metrics" Mar 10 19:02:24 crc kubenswrapper[4861]: I0310 19:02:24.664487 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="be820cd7-b3a7-4183-a408-67151247b6ee" containerName="kube-rbac-proxy-ovn-metrics" Mar 10 19:02:24 crc kubenswrapper[4861]: E0310 19:02:24.664494 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="be820cd7-b3a7-4183-a408-67151247b6ee" containerName="ovnkube-controller" Mar 10 19:02:24 crc kubenswrapper[4861]: I0310 19:02:24.664500 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="be820cd7-b3a7-4183-a408-67151247b6ee" containerName="ovnkube-controller" Mar 10 19:02:24 crc kubenswrapper[4861]: E0310 19:02:24.664509 4861 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="be820cd7-b3a7-4183-a408-67151247b6ee" containerName="ovnkube-controller" Mar 10 19:02:24 crc kubenswrapper[4861]: I0310 19:02:24.664515 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="be820cd7-b3a7-4183-a408-67151247b6ee" containerName="ovnkube-controller" Mar 10 19:02:24 crc kubenswrapper[4861]: I0310 19:02:24.664601 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="be820cd7-b3a7-4183-a408-67151247b6ee" containerName="ovn-controller" Mar 10 19:02:24 crc kubenswrapper[4861]: I0310 19:02:24.664610 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="be820cd7-b3a7-4183-a408-67151247b6ee" containerName="northd" Mar 10 19:02:24 crc kubenswrapper[4861]: I0310 19:02:24.664619 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="be820cd7-b3a7-4183-a408-67151247b6ee" containerName="ovnkube-controller" Mar 10 19:02:24 crc kubenswrapper[4861]: I0310 19:02:24.664624 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="be820cd7-b3a7-4183-a408-67151247b6ee" containerName="ovnkube-controller" Mar 10 19:02:24 crc kubenswrapper[4861]: I0310 19:02:24.664632 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="be820cd7-b3a7-4183-a408-67151247b6ee" containerName="sbdb" Mar 10 19:02:24 crc kubenswrapper[4861]: I0310 19:02:24.664637 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="be820cd7-b3a7-4183-a408-67151247b6ee" containerName="ovnkube-controller" Mar 10 19:02:24 crc kubenswrapper[4861]: I0310 19:02:24.664644 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="be820cd7-b3a7-4183-a408-67151247b6ee" containerName="kube-rbac-proxy-node" Mar 10 19:02:24 crc kubenswrapper[4861]: I0310 19:02:24.664652 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="be820cd7-b3a7-4183-a408-67151247b6ee" containerName="ovn-acl-logging" Mar 10 19:02:24 crc kubenswrapper[4861]: I0310 19:02:24.664659 4861 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="be820cd7-b3a7-4183-a408-67151247b6ee" containerName="kube-rbac-proxy-ovn-metrics" Mar 10 19:02:24 crc kubenswrapper[4861]: I0310 19:02:24.664665 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="be820cd7-b3a7-4183-a408-67151247b6ee" containerName="ovnkube-controller" Mar 10 19:02:24 crc kubenswrapper[4861]: I0310 19:02:24.664672 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="e63a5345-9a31-4af3-844d-d7766ae8413d" containerName="oc" Mar 10 19:02:24 crc kubenswrapper[4861]: I0310 19:02:24.664679 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="be820cd7-b3a7-4183-a408-67151247b6ee" containerName="nbdb" Mar 10 19:02:24 crc kubenswrapper[4861]: I0310 19:02:24.664846 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="be820cd7-b3a7-4183-a408-67151247b6ee" containerName="ovnkube-controller" Mar 10 19:02:24 crc kubenswrapper[4861]: I0310 19:02:24.666580 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-kv2r6" Mar 10 19:02:24 crc kubenswrapper[4861]: I0310 19:02:24.756500 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/be820cd7-b3a7-4183-a408-67151247b6ee-ovnkube-script-lib\") pod \"be820cd7-b3a7-4183-a408-67151247b6ee\" (UID: \"be820cd7-b3a7-4183-a408-67151247b6ee\") " Mar 10 19:02:24 crc kubenswrapper[4861]: I0310 19:02:24.756587 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/be820cd7-b3a7-4183-a408-67151247b6ee-var-lib-openvswitch\") pod \"be820cd7-b3a7-4183-a408-67151247b6ee\" (UID: \"be820cd7-b3a7-4183-a408-67151247b6ee\") " Mar 10 19:02:24 crc kubenswrapper[4861]: I0310 19:02:24.756626 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/be820cd7-b3a7-4183-a408-67151247b6ee-etc-openvswitch\") pod \"be820cd7-b3a7-4183-a408-67151247b6ee\" (UID: \"be820cd7-b3a7-4183-a408-67151247b6ee\") " Mar 10 19:02:24 crc kubenswrapper[4861]: I0310 19:02:24.756647 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/be820cd7-b3a7-4183-a408-67151247b6ee-run-openvswitch\") pod \"be820cd7-b3a7-4183-a408-67151247b6ee\" (UID: \"be820cd7-b3a7-4183-a408-67151247b6ee\") " Mar 10 19:02:24 crc kubenswrapper[4861]: I0310 19:02:24.756678 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/be820cd7-b3a7-4183-a408-67151247b6ee-run-systemd\") pod \"be820cd7-b3a7-4183-a408-67151247b6ee\" (UID: \"be820cd7-b3a7-4183-a408-67151247b6ee\") " Mar 10 19:02:24 crc kubenswrapper[4861]: I0310 19:02:24.756701 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/be820cd7-b3a7-4183-a408-67151247b6ee-run-ovn\") pod \"be820cd7-b3a7-4183-a408-67151247b6ee\" (UID: \"be820cd7-b3a7-4183-a408-67151247b6ee\") " Mar 10 19:02:24 crc kubenswrapper[4861]: I0310 19:02:24.756730 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/be820cd7-b3a7-4183-a408-67151247b6ee-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "be820cd7-b3a7-4183-a408-67151247b6ee" (UID: "be820cd7-b3a7-4183-a408-67151247b6ee"). InnerVolumeSpecName "var-lib-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 10 19:02:24 crc kubenswrapper[4861]: I0310 19:02:24.756741 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/be820cd7-b3a7-4183-a408-67151247b6ee-ovn-node-metrics-cert\") pod \"be820cd7-b3a7-4183-a408-67151247b6ee\" (UID: \"be820cd7-b3a7-4183-a408-67151247b6ee\") " Mar 10 19:02:24 crc kubenswrapper[4861]: I0310 19:02:24.756799 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/be820cd7-b3a7-4183-a408-67151247b6ee-node-log\") pod \"be820cd7-b3a7-4183-a408-67151247b6ee\" (UID: \"be820cd7-b3a7-4183-a408-67151247b6ee\") " Mar 10 19:02:24 crc kubenswrapper[4861]: I0310 19:02:24.756827 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/be820cd7-b3a7-4183-a408-67151247b6ee-host-run-netns\") pod \"be820cd7-b3a7-4183-a408-67151247b6ee\" (UID: \"be820cd7-b3a7-4183-a408-67151247b6ee\") " Mar 10 19:02:24 crc kubenswrapper[4861]: I0310 19:02:24.756895 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/be820cd7-b3a7-4183-a408-67151247b6ee-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod 
"be820cd7-b3a7-4183-a408-67151247b6ee" (UID: "be820cd7-b3a7-4183-a408-67151247b6ee"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 10 19:02:24 crc kubenswrapper[4861]: I0310 19:02:24.756959 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/be820cd7-b3a7-4183-a408-67151247b6ee-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "be820cd7-b3a7-4183-a408-67151247b6ee" (UID: "be820cd7-b3a7-4183-a408-67151247b6ee"). InnerVolumeSpecName "etc-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 10 19:02:24 crc kubenswrapper[4861]: I0310 19:02:24.756958 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/be820cd7-b3a7-4183-a408-67151247b6ee-node-log" (OuterVolumeSpecName: "node-log") pod "be820cd7-b3a7-4183-a408-67151247b6ee" (UID: "be820cd7-b3a7-4183-a408-67151247b6ee"). InnerVolumeSpecName "node-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 10 19:02:24 crc kubenswrapper[4861]: I0310 19:02:24.757097 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/be820cd7-b3a7-4183-a408-67151247b6ee-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "be820cd7-b3a7-4183-a408-67151247b6ee" (UID: "be820cd7-b3a7-4183-a408-67151247b6ee"). InnerVolumeSpecName "host-run-netns". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 10 19:02:24 crc kubenswrapper[4861]: I0310 19:02:24.757086 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/be820cd7-b3a7-4183-a408-67151247b6ee-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "be820cd7-b3a7-4183-a408-67151247b6ee" (UID: "be820cd7-b3a7-4183-a408-67151247b6ee"). InnerVolumeSpecName "run-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 10 19:02:24 crc kubenswrapper[4861]: I0310 19:02:24.757430 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/be820cd7-b3a7-4183-a408-67151247b6ee-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "be820cd7-b3a7-4183-a408-67151247b6ee" (UID: "be820cd7-b3a7-4183-a408-67151247b6ee"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 10 19:02:24 crc kubenswrapper[4861]: I0310 19:02:24.757962 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/be820cd7-b3a7-4183-a408-67151247b6ee-host-cni-bin\") pod \"be820cd7-b3a7-4183-a408-67151247b6ee\" (UID: \"be820cd7-b3a7-4183-a408-67151247b6ee\") " Mar 10 19:02:24 crc kubenswrapper[4861]: I0310 19:02:24.758002 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/be820cd7-b3a7-4183-a408-67151247b6ee-host-kubelet\") pod \"be820cd7-b3a7-4183-a408-67151247b6ee\" (UID: \"be820cd7-b3a7-4183-a408-67151247b6ee\") " Mar 10 19:02:24 crc kubenswrapper[4861]: I0310 19:02:24.758023 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/be820cd7-b3a7-4183-a408-67151247b6ee-host-cni-netd\") pod \"be820cd7-b3a7-4183-a408-67151247b6ee\" (UID: \"be820cd7-b3a7-4183-a408-67151247b6ee\") " Mar 10 19:02:24 crc kubenswrapper[4861]: I0310 19:02:24.758040 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/be820cd7-b3a7-4183-a408-67151247b6ee-systemd-units\") pod \"be820cd7-b3a7-4183-a408-67151247b6ee\" (UID: \"be820cd7-b3a7-4183-a408-67151247b6ee\") " Mar 10 19:02:24 crc kubenswrapper[4861]: I0310 19:02:24.758025 4861 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/be820cd7-b3a7-4183-a408-67151247b6ee-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "be820cd7-b3a7-4183-a408-67151247b6ee" (UID: "be820cd7-b3a7-4183-a408-67151247b6ee"). InnerVolumeSpecName "host-cni-bin". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 10 19:02:24 crc kubenswrapper[4861]: I0310 19:02:24.758074 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/be820cd7-b3a7-4183-a408-67151247b6ee-env-overrides\") pod \"be820cd7-b3a7-4183-a408-67151247b6ee\" (UID: \"be820cd7-b3a7-4183-a408-67151247b6ee\") " Mar 10 19:02:24 crc kubenswrapper[4861]: I0310 19:02:24.758097 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/be820cd7-b3a7-4183-a408-67151247b6ee-ovnkube-config\") pod \"be820cd7-b3a7-4183-a408-67151247b6ee\" (UID: \"be820cd7-b3a7-4183-a408-67151247b6ee\") " Mar 10 19:02:24 crc kubenswrapper[4861]: I0310 19:02:24.758111 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/be820cd7-b3a7-4183-a408-67151247b6ee-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "be820cd7-b3a7-4183-a408-67151247b6ee" (UID: "be820cd7-b3a7-4183-a408-67151247b6ee"). InnerVolumeSpecName "host-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 10 19:02:24 crc kubenswrapper[4861]: I0310 19:02:24.758134 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fwtwj\" (UniqueName: \"kubernetes.io/projected/be820cd7-b3a7-4183-a408-67151247b6ee-kube-api-access-fwtwj\") pod \"be820cd7-b3a7-4183-a408-67151247b6ee\" (UID: \"be820cd7-b3a7-4183-a408-67151247b6ee\") " Mar 10 19:02:24 crc kubenswrapper[4861]: I0310 19:02:24.758150 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/be820cd7-b3a7-4183-a408-67151247b6ee-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "be820cd7-b3a7-4183-a408-67151247b6ee" (UID: "be820cd7-b3a7-4183-a408-67151247b6ee"). InnerVolumeSpecName "systemd-units". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 10 19:02:24 crc kubenswrapper[4861]: I0310 19:02:24.758168 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/be820cd7-b3a7-4183-a408-67151247b6ee-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "be820cd7-b3a7-4183-a408-67151247b6ee" (UID: "be820cd7-b3a7-4183-a408-67151247b6ee"). InnerVolumeSpecName "host-kubelet". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 10 19:02:24 crc kubenswrapper[4861]: I0310 19:02:24.758169 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/be820cd7-b3a7-4183-a408-67151247b6ee-log-socket\") pod \"be820cd7-b3a7-4183-a408-67151247b6ee\" (UID: \"be820cd7-b3a7-4183-a408-67151247b6ee\") " Mar 10 19:02:24 crc kubenswrapper[4861]: I0310 19:02:24.758270 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/be820cd7-b3a7-4183-a408-67151247b6ee-host-slash\") pod \"be820cd7-b3a7-4183-a408-67151247b6ee\" (UID: \"be820cd7-b3a7-4183-a408-67151247b6ee\") " Mar 10 19:02:24 crc kubenswrapper[4861]: I0310 19:02:24.758316 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/be820cd7-b3a7-4183-a408-67151247b6ee-host-run-ovn-kubernetes\") pod \"be820cd7-b3a7-4183-a408-67151247b6ee\" (UID: \"be820cd7-b3a7-4183-a408-67151247b6ee\") " Mar 10 19:02:24 crc kubenswrapper[4861]: I0310 19:02:24.758354 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/be820cd7-b3a7-4183-a408-67151247b6ee-host-var-lib-cni-networks-ovn-kubernetes\") pod \"be820cd7-b3a7-4183-a408-67151247b6ee\" (UID: \"be820cd7-b3a7-4183-a408-67151247b6ee\") " Mar 10 19:02:24 crc kubenswrapper[4861]: I0310 19:02:24.758196 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/be820cd7-b3a7-4183-a408-67151247b6ee-log-socket" (OuterVolumeSpecName: "log-socket") pod "be820cd7-b3a7-4183-a408-67151247b6ee" (UID: "be820cd7-b3a7-4183-a408-67151247b6ee"). InnerVolumeSpecName "log-socket". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 10 19:02:24 crc kubenswrapper[4861]: I0310 19:02:24.758486 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/be820cd7-b3a7-4183-a408-67151247b6ee-host-slash" (OuterVolumeSpecName: "host-slash") pod "be820cd7-b3a7-4183-a408-67151247b6ee" (UID: "be820cd7-b3a7-4183-a408-67151247b6ee"). InnerVolumeSpecName "host-slash". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 10 19:02:24 crc kubenswrapper[4861]: I0310 19:02:24.758568 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/be820cd7-b3a7-4183-a408-67151247b6ee-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "be820cd7-b3a7-4183-a408-67151247b6ee" (UID: "be820cd7-b3a7-4183-a408-67151247b6ee"). InnerVolumeSpecName "host-run-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 10 19:02:24 crc kubenswrapper[4861]: I0310 19:02:24.758773 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/f07d9e32-3ef8-497e-ae56-ec2c436006c0-log-socket\") pod \"ovnkube-node-kv2r6\" (UID: \"f07d9e32-3ef8-497e-ae56-ec2c436006c0\") " pod="openshift-ovn-kubernetes/ovnkube-node-kv2r6" Mar 10 19:02:24 crc kubenswrapper[4861]: I0310 19:02:24.758823 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/be820cd7-b3a7-4183-a408-67151247b6ee-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "be820cd7-b3a7-4183-a408-67151247b6ee" (UID: "be820cd7-b3a7-4183-a408-67151247b6ee"). InnerVolumeSpecName "ovnkube-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 10 19:02:24 crc kubenswrapper[4861]: I0310 19:02:24.758840 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s7mc8\" (UniqueName: \"kubernetes.io/projected/f07d9e32-3ef8-497e-ae56-ec2c436006c0-kube-api-access-s7mc8\") pod \"ovnkube-node-kv2r6\" (UID: \"f07d9e32-3ef8-497e-ae56-ec2c436006c0\") " pod="openshift-ovn-kubernetes/ovnkube-node-kv2r6" Mar 10 19:02:24 crc kubenswrapper[4861]: I0310 19:02:24.758887 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/be820cd7-b3a7-4183-a408-67151247b6ee-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "be820cd7-b3a7-4183-a408-67151247b6ee" (UID: "be820cd7-b3a7-4183-a408-67151247b6ee"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 10 19:02:24 crc kubenswrapper[4861]: I0310 19:02:24.758894 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/f07d9e32-3ef8-497e-ae56-ec2c436006c0-host-kubelet\") pod \"ovnkube-node-kv2r6\" (UID: \"f07d9e32-3ef8-497e-ae56-ec2c436006c0\") " pod="openshift-ovn-kubernetes/ovnkube-node-kv2r6" Mar 10 19:02:24 crc kubenswrapper[4861]: I0310 19:02:24.759002 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/f07d9e32-3ef8-497e-ae56-ec2c436006c0-host-run-ovn-kubernetes\") pod \"ovnkube-node-kv2r6\" (UID: \"f07d9e32-3ef8-497e-ae56-ec2c436006c0\") " pod="openshift-ovn-kubernetes/ovnkube-node-kv2r6" Mar 10 19:02:24 crc kubenswrapper[4861]: I0310 19:02:24.759068 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" 
(UniqueName: \"kubernetes.io/host-path/f07d9e32-3ef8-497e-ae56-ec2c436006c0-host-cni-bin\") pod \"ovnkube-node-kv2r6\" (UID: \"f07d9e32-3ef8-497e-ae56-ec2c436006c0\") " pod="openshift-ovn-kubernetes/ovnkube-node-kv2r6" Mar 10 19:02:24 crc kubenswrapper[4861]: I0310 19:02:24.759117 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/f07d9e32-3ef8-497e-ae56-ec2c436006c0-etc-openvswitch\") pod \"ovnkube-node-kv2r6\" (UID: \"f07d9e32-3ef8-497e-ae56-ec2c436006c0\") " pod="openshift-ovn-kubernetes/ovnkube-node-kv2r6" Mar 10 19:02:24 crc kubenswrapper[4861]: I0310 19:02:24.759113 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/be820cd7-b3a7-4183-a408-67151247b6ee-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "be820cd7-b3a7-4183-a408-67151247b6ee" (UID: "be820cd7-b3a7-4183-a408-67151247b6ee"). InnerVolumeSpecName "env-overrides". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 10 19:02:24 crc kubenswrapper[4861]: I0310 19:02:24.759233 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/f07d9e32-3ef8-497e-ae56-ec2c436006c0-run-ovn\") pod \"ovnkube-node-kv2r6\" (UID: \"f07d9e32-3ef8-497e-ae56-ec2c436006c0\") " pod="openshift-ovn-kubernetes/ovnkube-node-kv2r6" Mar 10 19:02:24 crc kubenswrapper[4861]: I0310 19:02:24.759280 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/f07d9e32-3ef8-497e-ae56-ec2c436006c0-ovnkube-script-lib\") pod \"ovnkube-node-kv2r6\" (UID: \"f07d9e32-3ef8-497e-ae56-ec2c436006c0\") " pod="openshift-ovn-kubernetes/ovnkube-node-kv2r6" Mar 10 19:02:24 crc kubenswrapper[4861]: I0310 19:02:24.759340 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/f07d9e32-3ef8-497e-ae56-ec2c436006c0-host-slash\") pod \"ovnkube-node-kv2r6\" (UID: \"f07d9e32-3ef8-497e-ae56-ec2c436006c0\") " pod="openshift-ovn-kubernetes/ovnkube-node-kv2r6" Mar 10 19:02:24 crc kubenswrapper[4861]: I0310 19:02:24.759429 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/f07d9e32-3ef8-497e-ae56-ec2c436006c0-node-log\") pod \"ovnkube-node-kv2r6\" (UID: \"f07d9e32-3ef8-497e-ae56-ec2c436006c0\") " pod="openshift-ovn-kubernetes/ovnkube-node-kv2r6" Mar 10 19:02:24 crc kubenswrapper[4861]: I0310 19:02:24.759516 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/f07d9e32-3ef8-497e-ae56-ec2c436006c0-env-overrides\") pod \"ovnkube-node-kv2r6\" (UID: \"f07d9e32-3ef8-497e-ae56-ec2c436006c0\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-kv2r6" Mar 10 19:02:24 crc kubenswrapper[4861]: I0310 19:02:24.759554 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/f07d9e32-3ef8-497e-ae56-ec2c436006c0-ovn-node-metrics-cert\") pod \"ovnkube-node-kv2r6\" (UID: \"f07d9e32-3ef8-497e-ae56-ec2c436006c0\") " pod="openshift-ovn-kubernetes/ovnkube-node-kv2r6" Mar 10 19:02:24 crc kubenswrapper[4861]: I0310 19:02:24.759585 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/f07d9e32-3ef8-497e-ae56-ec2c436006c0-ovnkube-config\") pod \"ovnkube-node-kv2r6\" (UID: \"f07d9e32-3ef8-497e-ae56-ec2c436006c0\") " pod="openshift-ovn-kubernetes/ovnkube-node-kv2r6" Mar 10 19:02:24 crc kubenswrapper[4861]: I0310 19:02:24.759625 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/f07d9e32-3ef8-497e-ae56-ec2c436006c0-host-run-netns\") pod \"ovnkube-node-kv2r6\" (UID: \"f07d9e32-3ef8-497e-ae56-ec2c436006c0\") " pod="openshift-ovn-kubernetes/ovnkube-node-kv2r6" Mar 10 19:02:24 crc kubenswrapper[4861]: I0310 19:02:24.759664 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/f07d9e32-3ef8-497e-ae56-ec2c436006c0-run-systemd\") pod \"ovnkube-node-kv2r6\" (UID: \"f07d9e32-3ef8-497e-ae56-ec2c436006c0\") " pod="openshift-ovn-kubernetes/ovnkube-node-kv2r6" Mar 10 19:02:24 crc kubenswrapper[4861]: I0310 19:02:24.759697 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/f07d9e32-3ef8-497e-ae56-ec2c436006c0-host-var-lib-cni-networks-ovn-kubernetes\") pod 
\"ovnkube-node-kv2r6\" (UID: \"f07d9e32-3ef8-497e-ae56-ec2c436006c0\") " pod="openshift-ovn-kubernetes/ovnkube-node-kv2r6" Mar 10 19:02:24 crc kubenswrapper[4861]: I0310 19:02:24.759783 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/f07d9e32-3ef8-497e-ae56-ec2c436006c0-var-lib-openvswitch\") pod \"ovnkube-node-kv2r6\" (UID: \"f07d9e32-3ef8-497e-ae56-ec2c436006c0\") " pod="openshift-ovn-kubernetes/ovnkube-node-kv2r6" Mar 10 19:02:24 crc kubenswrapper[4861]: I0310 19:02:24.759812 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f07d9e32-3ef8-497e-ae56-ec2c436006c0-host-cni-netd\") pod \"ovnkube-node-kv2r6\" (UID: \"f07d9e32-3ef8-497e-ae56-ec2c436006c0\") " pod="openshift-ovn-kubernetes/ovnkube-node-kv2r6" Mar 10 19:02:24 crc kubenswrapper[4861]: I0310 19:02:24.759842 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/f07d9e32-3ef8-497e-ae56-ec2c436006c0-run-openvswitch\") pod \"ovnkube-node-kv2r6\" (UID: \"f07d9e32-3ef8-497e-ae56-ec2c436006c0\") " pod="openshift-ovn-kubernetes/ovnkube-node-kv2r6" Mar 10 19:02:24 crc kubenswrapper[4861]: I0310 19:02:24.759871 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/f07d9e32-3ef8-497e-ae56-ec2c436006c0-systemd-units\") pod \"ovnkube-node-kv2r6\" (UID: \"f07d9e32-3ef8-497e-ae56-ec2c436006c0\") " pod="openshift-ovn-kubernetes/ovnkube-node-kv2r6" Mar 10 19:02:24 crc kubenswrapper[4861]: I0310 19:02:24.759958 4861 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/be820cd7-b3a7-4183-a408-67151247b6ee-host-slash\") on node \"crc\" DevicePath 
\"\"" Mar 10 19:02:24 crc kubenswrapper[4861]: I0310 19:02:24.759988 4861 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/be820cd7-b3a7-4183-a408-67151247b6ee-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Mar 10 19:02:24 crc kubenswrapper[4861]: I0310 19:02:24.760014 4861 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/be820cd7-b3a7-4183-a408-67151247b6ee-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Mar 10 19:02:24 crc kubenswrapper[4861]: I0310 19:02:24.760041 4861 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/be820cd7-b3a7-4183-a408-67151247b6ee-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Mar 10 19:02:24 crc kubenswrapper[4861]: I0310 19:02:24.760067 4861 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/be820cd7-b3a7-4183-a408-67151247b6ee-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Mar 10 19:02:24 crc kubenswrapper[4861]: I0310 19:02:24.760088 4861 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/be820cd7-b3a7-4183-a408-67151247b6ee-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Mar 10 19:02:24 crc kubenswrapper[4861]: I0310 19:02:24.760111 4861 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/be820cd7-b3a7-4183-a408-67151247b6ee-run-openvswitch\") on node \"crc\" DevicePath \"\"" Mar 10 19:02:24 crc kubenswrapper[4861]: I0310 19:02:24.760133 4861 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/be820cd7-b3a7-4183-a408-67151247b6ee-run-ovn\") on node \"crc\" DevicePath \"\"" Mar 10 19:02:24 crc 
kubenswrapper[4861]: I0310 19:02:24.760155 4861 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/be820cd7-b3a7-4183-a408-67151247b6ee-node-log\") on node \"crc\" DevicePath \"\"" Mar 10 19:02:24 crc kubenswrapper[4861]: I0310 19:02:24.760178 4861 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/be820cd7-b3a7-4183-a408-67151247b6ee-host-run-netns\") on node \"crc\" DevicePath \"\"" Mar 10 19:02:24 crc kubenswrapper[4861]: I0310 19:02:24.760199 4861 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/be820cd7-b3a7-4183-a408-67151247b6ee-host-cni-bin\") on node \"crc\" DevicePath \"\"" Mar 10 19:02:24 crc kubenswrapper[4861]: I0310 19:02:24.760224 4861 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/be820cd7-b3a7-4183-a408-67151247b6ee-host-kubelet\") on node \"crc\" DevicePath \"\"" Mar 10 19:02:24 crc kubenswrapper[4861]: I0310 19:02:24.760245 4861 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/be820cd7-b3a7-4183-a408-67151247b6ee-host-cni-netd\") on node \"crc\" DevicePath \"\"" Mar 10 19:02:24 crc kubenswrapper[4861]: I0310 19:02:24.760268 4861 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/be820cd7-b3a7-4183-a408-67151247b6ee-systemd-units\") on node \"crc\" DevicePath \"\"" Mar 10 19:02:24 crc kubenswrapper[4861]: I0310 19:02:24.760290 4861 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/be820cd7-b3a7-4183-a408-67151247b6ee-env-overrides\") on node \"crc\" DevicePath \"\"" Mar 10 19:02:24 crc kubenswrapper[4861]: I0310 19:02:24.760313 4861 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: 
\"kubernetes.io/configmap/be820cd7-b3a7-4183-a408-67151247b6ee-ovnkube-config\") on node \"crc\" DevicePath \"\"" Mar 10 19:02:24 crc kubenswrapper[4861]: I0310 19:02:24.760336 4861 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/be820cd7-b3a7-4183-a408-67151247b6ee-log-socket\") on node \"crc\" DevicePath \"\"" Mar 10 19:02:24 crc kubenswrapper[4861]: I0310 19:02:24.763945 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/be820cd7-b3a7-4183-a408-67151247b6ee-kube-api-access-fwtwj" (OuterVolumeSpecName: "kube-api-access-fwtwj") pod "be820cd7-b3a7-4183-a408-67151247b6ee" (UID: "be820cd7-b3a7-4183-a408-67151247b6ee"). InnerVolumeSpecName "kube-api-access-fwtwj". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 10 19:02:24 crc kubenswrapper[4861]: I0310 19:02:24.764026 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/be820cd7-b3a7-4183-a408-67151247b6ee-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "be820cd7-b3a7-4183-a408-67151247b6ee" (UID: "be820cd7-b3a7-4183-a408-67151247b6ee"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 10 19:02:24 crc kubenswrapper[4861]: I0310 19:02:24.775951 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/be820cd7-b3a7-4183-a408-67151247b6ee-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "be820cd7-b3a7-4183-a408-67151247b6ee" (UID: "be820cd7-b3a7-4183-a408-67151247b6ee"). InnerVolumeSpecName "run-systemd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 10 19:02:24 crc kubenswrapper[4861]: I0310 19:02:24.861517 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/f07d9e32-3ef8-497e-ae56-ec2c436006c0-node-log\") pod \"ovnkube-node-kv2r6\" (UID: \"f07d9e32-3ef8-497e-ae56-ec2c436006c0\") " pod="openshift-ovn-kubernetes/ovnkube-node-kv2r6" Mar 10 19:02:24 crc kubenswrapper[4861]: I0310 19:02:24.861622 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/f07d9e32-3ef8-497e-ae56-ec2c436006c0-ovn-node-metrics-cert\") pod \"ovnkube-node-kv2r6\" (UID: \"f07d9e32-3ef8-497e-ae56-ec2c436006c0\") " pod="openshift-ovn-kubernetes/ovnkube-node-kv2r6" Mar 10 19:02:24 crc kubenswrapper[4861]: I0310 19:02:24.861661 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/f07d9e32-3ef8-497e-ae56-ec2c436006c0-env-overrides\") pod \"ovnkube-node-kv2r6\" (UID: \"f07d9e32-3ef8-497e-ae56-ec2c436006c0\") " pod="openshift-ovn-kubernetes/ovnkube-node-kv2r6" Mar 10 19:02:24 crc kubenswrapper[4861]: I0310 19:02:24.861696 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/f07d9e32-3ef8-497e-ae56-ec2c436006c0-ovnkube-config\") pod \"ovnkube-node-kv2r6\" (UID: \"f07d9e32-3ef8-497e-ae56-ec2c436006c0\") " pod="openshift-ovn-kubernetes/ovnkube-node-kv2r6" Mar 10 19:02:24 crc kubenswrapper[4861]: I0310 19:02:24.861780 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/f07d9e32-3ef8-497e-ae56-ec2c436006c0-host-run-netns\") pod \"ovnkube-node-kv2r6\" (UID: \"f07d9e32-3ef8-497e-ae56-ec2c436006c0\") " pod="openshift-ovn-kubernetes/ovnkube-node-kv2r6" Mar 10 19:02:24 crc 
kubenswrapper[4861]: I0310 19:02:24.861821 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/f07d9e32-3ef8-497e-ae56-ec2c436006c0-run-systemd\") pod \"ovnkube-node-kv2r6\" (UID: \"f07d9e32-3ef8-497e-ae56-ec2c436006c0\") " pod="openshift-ovn-kubernetes/ovnkube-node-kv2r6" Mar 10 19:02:24 crc kubenswrapper[4861]: I0310 19:02:24.861851 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/f07d9e32-3ef8-497e-ae56-ec2c436006c0-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-kv2r6\" (UID: \"f07d9e32-3ef8-497e-ae56-ec2c436006c0\") " pod="openshift-ovn-kubernetes/ovnkube-node-kv2r6" Mar 10 19:02:24 crc kubenswrapper[4861]: I0310 19:02:24.861893 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/f07d9e32-3ef8-497e-ae56-ec2c436006c0-var-lib-openvswitch\") pod \"ovnkube-node-kv2r6\" (UID: \"f07d9e32-3ef8-497e-ae56-ec2c436006c0\") " pod="openshift-ovn-kubernetes/ovnkube-node-kv2r6" Mar 10 19:02:24 crc kubenswrapper[4861]: I0310 19:02:24.861922 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f07d9e32-3ef8-497e-ae56-ec2c436006c0-host-cni-netd\") pod \"ovnkube-node-kv2r6\" (UID: \"f07d9e32-3ef8-497e-ae56-ec2c436006c0\") " pod="openshift-ovn-kubernetes/ovnkube-node-kv2r6" Mar 10 19:02:24 crc kubenswrapper[4861]: I0310 19:02:24.861950 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/f07d9e32-3ef8-497e-ae56-ec2c436006c0-run-openvswitch\") pod \"ovnkube-node-kv2r6\" (UID: \"f07d9e32-3ef8-497e-ae56-ec2c436006c0\") " pod="openshift-ovn-kubernetes/ovnkube-node-kv2r6" Mar 10 19:02:24 crc kubenswrapper[4861]: 
I0310 19:02:24.861981 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/f07d9e32-3ef8-497e-ae56-ec2c436006c0-systemd-units\") pod \"ovnkube-node-kv2r6\" (UID: \"f07d9e32-3ef8-497e-ae56-ec2c436006c0\") " pod="openshift-ovn-kubernetes/ovnkube-node-kv2r6" Mar 10 19:02:24 crc kubenswrapper[4861]: I0310 19:02:24.862012 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/f07d9e32-3ef8-497e-ae56-ec2c436006c0-log-socket\") pod \"ovnkube-node-kv2r6\" (UID: \"f07d9e32-3ef8-497e-ae56-ec2c436006c0\") " pod="openshift-ovn-kubernetes/ovnkube-node-kv2r6" Mar 10 19:02:24 crc kubenswrapper[4861]: I0310 19:02:24.862045 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s7mc8\" (UniqueName: \"kubernetes.io/projected/f07d9e32-3ef8-497e-ae56-ec2c436006c0-kube-api-access-s7mc8\") pod \"ovnkube-node-kv2r6\" (UID: \"f07d9e32-3ef8-497e-ae56-ec2c436006c0\") " pod="openshift-ovn-kubernetes/ovnkube-node-kv2r6" Mar 10 19:02:24 crc kubenswrapper[4861]: I0310 19:02:24.862083 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/f07d9e32-3ef8-497e-ae56-ec2c436006c0-host-kubelet\") pod \"ovnkube-node-kv2r6\" (UID: \"f07d9e32-3ef8-497e-ae56-ec2c436006c0\") " pod="openshift-ovn-kubernetes/ovnkube-node-kv2r6" Mar 10 19:02:24 crc kubenswrapper[4861]: I0310 19:02:24.862111 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/f07d9e32-3ef8-497e-ae56-ec2c436006c0-host-run-ovn-kubernetes\") pod \"ovnkube-node-kv2r6\" (UID: \"f07d9e32-3ef8-497e-ae56-ec2c436006c0\") " pod="openshift-ovn-kubernetes/ovnkube-node-kv2r6" Mar 10 19:02:24 crc kubenswrapper[4861]: I0310 19:02:24.862141 4861 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/f07d9e32-3ef8-497e-ae56-ec2c436006c0-host-cni-bin\") pod \"ovnkube-node-kv2r6\" (UID: \"f07d9e32-3ef8-497e-ae56-ec2c436006c0\") " pod="openshift-ovn-kubernetes/ovnkube-node-kv2r6" Mar 10 19:02:24 crc kubenswrapper[4861]: I0310 19:02:24.862208 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/f07d9e32-3ef8-497e-ae56-ec2c436006c0-etc-openvswitch\") pod \"ovnkube-node-kv2r6\" (UID: \"f07d9e32-3ef8-497e-ae56-ec2c436006c0\") " pod="openshift-ovn-kubernetes/ovnkube-node-kv2r6" Mar 10 19:02:24 crc kubenswrapper[4861]: I0310 19:02:24.862261 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/f07d9e32-3ef8-497e-ae56-ec2c436006c0-run-ovn\") pod \"ovnkube-node-kv2r6\" (UID: \"f07d9e32-3ef8-497e-ae56-ec2c436006c0\") " pod="openshift-ovn-kubernetes/ovnkube-node-kv2r6" Mar 10 19:02:24 crc kubenswrapper[4861]: I0310 19:02:24.862293 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/f07d9e32-3ef8-497e-ae56-ec2c436006c0-ovnkube-script-lib\") pod \"ovnkube-node-kv2r6\" (UID: \"f07d9e32-3ef8-497e-ae56-ec2c436006c0\") " pod="openshift-ovn-kubernetes/ovnkube-node-kv2r6" Mar 10 19:02:24 crc kubenswrapper[4861]: I0310 19:02:24.862329 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/f07d9e32-3ef8-497e-ae56-ec2c436006c0-host-slash\") pod \"ovnkube-node-kv2r6\" (UID: \"f07d9e32-3ef8-497e-ae56-ec2c436006c0\") " pod="openshift-ovn-kubernetes/ovnkube-node-kv2r6" Mar 10 19:02:24 crc kubenswrapper[4861]: I0310 19:02:24.862411 4861 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: 
\"kubernetes.io/host-path/be820cd7-b3a7-4183-a408-67151247b6ee-run-systemd\") on node \"crc\" DevicePath \"\"" Mar 10 19:02:24 crc kubenswrapper[4861]: I0310 19:02:24.862432 4861 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/be820cd7-b3a7-4183-a408-67151247b6ee-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Mar 10 19:02:24 crc kubenswrapper[4861]: I0310 19:02:24.862454 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fwtwj\" (UniqueName: \"kubernetes.io/projected/be820cd7-b3a7-4183-a408-67151247b6ee-kube-api-access-fwtwj\") on node \"crc\" DevicePath \"\"" Mar 10 19:02:24 crc kubenswrapper[4861]: I0310 19:02:24.862519 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/f07d9e32-3ef8-497e-ae56-ec2c436006c0-host-slash\") pod \"ovnkube-node-kv2r6\" (UID: \"f07d9e32-3ef8-497e-ae56-ec2c436006c0\") " pod="openshift-ovn-kubernetes/ovnkube-node-kv2r6" Mar 10 19:02:24 crc kubenswrapper[4861]: I0310 19:02:24.862582 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/f07d9e32-3ef8-497e-ae56-ec2c436006c0-node-log\") pod \"ovnkube-node-kv2r6\" (UID: \"f07d9e32-3ef8-497e-ae56-ec2c436006c0\") " pod="openshift-ovn-kubernetes/ovnkube-node-kv2r6" Mar 10 19:02:24 crc kubenswrapper[4861]: I0310 19:02:24.863325 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/f07d9e32-3ef8-497e-ae56-ec2c436006c0-systemd-units\") pod \"ovnkube-node-kv2r6\" (UID: \"f07d9e32-3ef8-497e-ae56-ec2c436006c0\") " pod="openshift-ovn-kubernetes/ovnkube-node-kv2r6" Mar 10 19:02:24 crc kubenswrapper[4861]: I0310 19:02:24.863386 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: 
\"kubernetes.io/host-path/f07d9e32-3ef8-497e-ae56-ec2c436006c0-run-systemd\") pod \"ovnkube-node-kv2r6\" (UID: \"f07d9e32-3ef8-497e-ae56-ec2c436006c0\") " pod="openshift-ovn-kubernetes/ovnkube-node-kv2r6" Mar 10 19:02:24 crc kubenswrapper[4861]: I0310 19:02:24.863412 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f07d9e32-3ef8-497e-ae56-ec2c436006c0-host-cni-netd\") pod \"ovnkube-node-kv2r6\" (UID: \"f07d9e32-3ef8-497e-ae56-ec2c436006c0\") " pod="openshift-ovn-kubernetes/ovnkube-node-kv2r6" Mar 10 19:02:24 crc kubenswrapper[4861]: I0310 19:02:24.863444 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/f07d9e32-3ef8-497e-ae56-ec2c436006c0-var-lib-openvswitch\") pod \"ovnkube-node-kv2r6\" (UID: \"f07d9e32-3ef8-497e-ae56-ec2c436006c0\") " pod="openshift-ovn-kubernetes/ovnkube-node-kv2r6" Mar 10 19:02:24 crc kubenswrapper[4861]: I0310 19:02:24.863468 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/f07d9e32-3ef8-497e-ae56-ec2c436006c0-run-openvswitch\") pod \"ovnkube-node-kv2r6\" (UID: \"f07d9e32-3ef8-497e-ae56-ec2c436006c0\") " pod="openshift-ovn-kubernetes/ovnkube-node-kv2r6" Mar 10 19:02:24 crc kubenswrapper[4861]: I0310 19:02:24.863520 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/f07d9e32-3ef8-497e-ae56-ec2c436006c0-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-kv2r6\" (UID: \"f07d9e32-3ef8-497e-ae56-ec2c436006c0\") " pod="openshift-ovn-kubernetes/ovnkube-node-kv2r6" Mar 10 19:02:24 crc kubenswrapper[4861]: I0310 19:02:24.863554 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: 
\"kubernetes.io/host-path/f07d9e32-3ef8-497e-ae56-ec2c436006c0-log-socket\") pod \"ovnkube-node-kv2r6\" (UID: \"f07d9e32-3ef8-497e-ae56-ec2c436006c0\") " pod="openshift-ovn-kubernetes/ovnkube-node-kv2r6" Mar 10 19:02:24 crc kubenswrapper[4861]: I0310 19:02:24.863570 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/f07d9e32-3ef8-497e-ae56-ec2c436006c0-host-cni-bin\") pod \"ovnkube-node-kv2r6\" (UID: \"f07d9e32-3ef8-497e-ae56-ec2c436006c0\") " pod="openshift-ovn-kubernetes/ovnkube-node-kv2r6" Mar 10 19:02:24 crc kubenswrapper[4861]: I0310 19:02:24.863483 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/f07d9e32-3ef8-497e-ae56-ec2c436006c0-host-run-netns\") pod \"ovnkube-node-kv2r6\" (UID: \"f07d9e32-3ef8-497e-ae56-ec2c436006c0\") " pod="openshift-ovn-kubernetes/ovnkube-node-kv2r6" Mar 10 19:02:24 crc kubenswrapper[4861]: I0310 19:02:24.863580 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/f07d9e32-3ef8-497e-ae56-ec2c436006c0-host-kubelet\") pod \"ovnkube-node-kv2r6\" (UID: \"f07d9e32-3ef8-497e-ae56-ec2c436006c0\") " pod="openshift-ovn-kubernetes/ovnkube-node-kv2r6" Mar 10 19:02:24 crc kubenswrapper[4861]: I0310 19:02:24.863565 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/f07d9e32-3ef8-497e-ae56-ec2c436006c0-etc-openvswitch\") pod \"ovnkube-node-kv2r6\" (UID: \"f07d9e32-3ef8-497e-ae56-ec2c436006c0\") " pod="openshift-ovn-kubernetes/ovnkube-node-kv2r6" Mar 10 19:02:24 crc kubenswrapper[4861]: I0310 19:02:24.863597 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/f07d9e32-3ef8-497e-ae56-ec2c436006c0-run-ovn\") pod \"ovnkube-node-kv2r6\" (UID: 
\"f07d9e32-3ef8-497e-ae56-ec2c436006c0\") " pod="openshift-ovn-kubernetes/ovnkube-node-kv2r6" Mar 10 19:02:24 crc kubenswrapper[4861]: I0310 19:02:24.863666 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/f07d9e32-3ef8-497e-ae56-ec2c436006c0-host-run-ovn-kubernetes\") pod \"ovnkube-node-kv2r6\" (UID: \"f07d9e32-3ef8-497e-ae56-ec2c436006c0\") " pod="openshift-ovn-kubernetes/ovnkube-node-kv2r6" Mar 10 19:02:24 crc kubenswrapper[4861]: I0310 19:02:24.864982 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/f07d9e32-3ef8-497e-ae56-ec2c436006c0-ovnkube-config\") pod \"ovnkube-node-kv2r6\" (UID: \"f07d9e32-3ef8-497e-ae56-ec2c436006c0\") " pod="openshift-ovn-kubernetes/ovnkube-node-kv2r6" Mar 10 19:02:24 crc kubenswrapper[4861]: I0310 19:02:24.865013 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/f07d9e32-3ef8-497e-ae56-ec2c436006c0-env-overrides\") pod \"ovnkube-node-kv2r6\" (UID: \"f07d9e32-3ef8-497e-ae56-ec2c436006c0\") " pod="openshift-ovn-kubernetes/ovnkube-node-kv2r6" Mar 10 19:02:24 crc kubenswrapper[4861]: I0310 19:02:24.865996 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/f07d9e32-3ef8-497e-ae56-ec2c436006c0-ovnkube-script-lib\") pod \"ovnkube-node-kv2r6\" (UID: \"f07d9e32-3ef8-497e-ae56-ec2c436006c0\") " pod="openshift-ovn-kubernetes/ovnkube-node-kv2r6" Mar 10 19:02:24 crc kubenswrapper[4861]: I0310 19:02:24.867812 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/f07d9e32-3ef8-497e-ae56-ec2c436006c0-ovn-node-metrics-cert\") pod \"ovnkube-node-kv2r6\" (UID: \"f07d9e32-3ef8-497e-ae56-ec2c436006c0\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-kv2r6" Mar 10 19:02:24 crc kubenswrapper[4861]: I0310 19:02:24.894230 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s7mc8\" (UniqueName: \"kubernetes.io/projected/f07d9e32-3ef8-497e-ae56-ec2c436006c0-kube-api-access-s7mc8\") pod \"ovnkube-node-kv2r6\" (UID: \"f07d9e32-3ef8-497e-ae56-ec2c436006c0\") " pod="openshift-ovn-kubernetes/ovnkube-node-kv2r6" Mar 10 19:02:24 crc kubenswrapper[4861]: I0310 19:02:24.994616 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-kv2r6" Mar 10 19:02:25 crc kubenswrapper[4861]: W0310 19:02:25.021654 4861 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf07d9e32_3ef8_497e_ae56_ec2c436006c0.slice/crio-b83af05a26d30a578d8835df84a7f7bd05ec322f2b7dec088583f072bcf50cb3 WatchSource:0}: Error finding container b83af05a26d30a578d8835df84a7f7bd05ec322f2b7dec088583f072bcf50cb3: Status 404 returned error can't find the container with id b83af05a26d30a578d8835df84a7f7bd05ec322f2b7dec088583f072bcf50cb3 Mar 10 19:02:25 crc kubenswrapper[4861]: I0310 19:02:25.469678 4861 generic.go:334] "Generic (PLEG): container finished" podID="f07d9e32-3ef8-497e-ae56-ec2c436006c0" containerID="b041e182f73b01ebc38c819fb3d01e8dc10b976961926432b540c72c8649d6e9" exitCode=0 Mar 10 19:02:25 crc kubenswrapper[4861]: I0310 19:02:25.469774 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kv2r6" event={"ID":"f07d9e32-3ef8-497e-ae56-ec2c436006c0","Type":"ContainerDied","Data":"b041e182f73b01ebc38c819fb3d01e8dc10b976961926432b540c72c8649d6e9"} Mar 10 19:02:25 crc kubenswrapper[4861]: I0310 19:02:25.470248 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kv2r6" 
event={"ID":"f07d9e32-3ef8-497e-ae56-ec2c436006c0","Type":"ContainerStarted","Data":"b83af05a26d30a578d8835df84a7f7bd05ec322f2b7dec088583f072bcf50cb3"} Mar 10 19:02:25 crc kubenswrapper[4861]: I0310 19:02:25.478594 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-s2l62_be820cd7-b3a7-4183-a408-67151247b6ee/ovn-acl-logging/0.log" Mar 10 19:02:25 crc kubenswrapper[4861]: I0310 19:02:25.479155 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-s2l62_be820cd7-b3a7-4183-a408-67151247b6ee/ovn-controller/0.log" Mar 10 19:02:25 crc kubenswrapper[4861]: I0310 19:02:25.479797 4861 generic.go:334] "Generic (PLEG): container finished" podID="be820cd7-b3a7-4183-a408-67151247b6ee" containerID="2cf2a1cfc3438718bb50a53445443bc0251f2cc83f903fca1cccd1048c8f1c20" exitCode=0 Mar 10 19:02:25 crc kubenswrapper[4861]: I0310 19:02:25.479834 4861 generic.go:334] "Generic (PLEG): container finished" podID="be820cd7-b3a7-4183-a408-67151247b6ee" containerID="edf854dc22368e4b8f76e0111b30784f22832fc69661b4db8d2c9c33aa553773" exitCode=0 Mar 10 19:02:25 crc kubenswrapper[4861]: I0310 19:02:25.479851 4861 generic.go:334] "Generic (PLEG): container finished" podID="be820cd7-b3a7-4183-a408-67151247b6ee" containerID="e44e4837f8dba12dfb18ef2200a19f696222668eb9b10819d1c3b442a28f5e32" exitCode=0 Mar 10 19:02:25 crc kubenswrapper[4861]: I0310 19:02:25.479869 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-s2l62" event={"ID":"be820cd7-b3a7-4183-a408-67151247b6ee","Type":"ContainerDied","Data":"2cf2a1cfc3438718bb50a53445443bc0251f2cc83f903fca1cccd1048c8f1c20"} Mar 10 19:02:25 crc kubenswrapper[4861]: I0310 19:02:25.479927 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-s2l62" Mar 10 19:02:25 crc kubenswrapper[4861]: I0310 19:02:25.479950 4861 scope.go:117] "RemoveContainer" containerID="7688ca8e6f3f8d545481d6e95c6fc69b9e19d6eea68571a52955cdeae5bbc680" Mar 10 19:02:25 crc kubenswrapper[4861]: I0310 19:02:25.479932 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-s2l62" event={"ID":"be820cd7-b3a7-4183-a408-67151247b6ee","Type":"ContainerDied","Data":"edf854dc22368e4b8f76e0111b30784f22832fc69661b4db8d2c9c33aa553773"} Mar 10 19:02:25 crc kubenswrapper[4861]: I0310 19:02:25.480127 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-s2l62" event={"ID":"be820cd7-b3a7-4183-a408-67151247b6ee","Type":"ContainerDied","Data":"e44e4837f8dba12dfb18ef2200a19f696222668eb9b10819d1c3b442a28f5e32"} Mar 10 19:02:25 crc kubenswrapper[4861]: I0310 19:02:25.480163 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-s2l62" event={"ID":"be820cd7-b3a7-4183-a408-67151247b6ee","Type":"ContainerDied","Data":"6be1f6974a0888bc87217cfaadcdcaee5620a7a03535fb885811b71fc73b8a58"} Mar 10 19:02:25 crc kubenswrapper[4861]: I0310 19:02:25.482465 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-6lblg_d1c251f4-6539-4aa1-8979-47e74495aca3/kube-multus/2.log" Mar 10 19:02:25 crc kubenswrapper[4861]: I0310 19:02:25.482550 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-6lblg" event={"ID":"d1c251f4-6539-4aa1-8979-47e74495aca3","Type":"ContainerStarted","Data":"8682e476175251f721bc6f3a7c834ed36f10609a0dd3e91c36f794c4a9cb89b9"} Mar 10 19:02:25 crc kubenswrapper[4861]: I0310 19:02:25.539904 4861 scope.go:117] "RemoveContainer" containerID="2cf2a1cfc3438718bb50a53445443bc0251f2cc83f903fca1cccd1048c8f1c20" Mar 10 19:02:25 crc kubenswrapper[4861]: I0310 19:02:25.571214 4861 kubelet.go:2437] "SyncLoop 
DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-s2l62"] Mar 10 19:02:25 crc kubenswrapper[4861]: I0310 19:02:25.586975 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-s2l62"] Mar 10 19:02:25 crc kubenswrapper[4861]: I0310 19:02:25.604019 4861 scope.go:117] "RemoveContainer" containerID="edf854dc22368e4b8f76e0111b30784f22832fc69661b4db8d2c9c33aa553773" Mar 10 19:02:25 crc kubenswrapper[4861]: I0310 19:02:25.634786 4861 scope.go:117] "RemoveContainer" containerID="e44e4837f8dba12dfb18ef2200a19f696222668eb9b10819d1c3b442a28f5e32" Mar 10 19:02:25 crc kubenswrapper[4861]: I0310 19:02:25.665217 4861 scope.go:117] "RemoveContainer" containerID="e7bd3993ae4ddabc6c06a91127afc341760a07401ce3a409612824c0045bb6f2" Mar 10 19:02:25 crc kubenswrapper[4861]: I0310 19:02:25.681561 4861 scope.go:117] "RemoveContainer" containerID="22c9526135e4d6c3ef5cdf25b06f60556a876ace6c81593534be08cfd6a54cdf" Mar 10 19:02:25 crc kubenswrapper[4861]: I0310 19:02:25.695850 4861 scope.go:117] "RemoveContainer" containerID="0ae3cd6b9ef5ede85a70dd7bcf4eb260cc357bfceeb571904e68788eaba0709c" Mar 10 19:02:25 crc kubenswrapper[4861]: I0310 19:02:25.720778 4861 scope.go:117] "RemoveContainer" containerID="be4f9c8096f4981a65522e8ee451980e580153c1f5c65c736655fb94593dbd97" Mar 10 19:02:25 crc kubenswrapper[4861]: I0310 19:02:25.737150 4861 scope.go:117] "RemoveContainer" containerID="09c620ba70d91a84cf6910e413d790eee8d6427dec9a39be3b706400fcaab656" Mar 10 19:02:25 crc kubenswrapper[4861]: I0310 19:02:25.768651 4861 scope.go:117] "RemoveContainer" containerID="7688ca8e6f3f8d545481d6e95c6fc69b9e19d6eea68571a52955cdeae5bbc680" Mar 10 19:02:25 crc kubenswrapper[4861]: E0310 19:02:25.769351 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7688ca8e6f3f8d545481d6e95c6fc69b9e19d6eea68571a52955cdeae5bbc680\": container with ID starting with 
7688ca8e6f3f8d545481d6e95c6fc69b9e19d6eea68571a52955cdeae5bbc680 not found: ID does not exist" containerID="7688ca8e6f3f8d545481d6e95c6fc69b9e19d6eea68571a52955cdeae5bbc680" Mar 10 19:02:25 crc kubenswrapper[4861]: I0310 19:02:25.769387 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7688ca8e6f3f8d545481d6e95c6fc69b9e19d6eea68571a52955cdeae5bbc680"} err="failed to get container status \"7688ca8e6f3f8d545481d6e95c6fc69b9e19d6eea68571a52955cdeae5bbc680\": rpc error: code = NotFound desc = could not find container \"7688ca8e6f3f8d545481d6e95c6fc69b9e19d6eea68571a52955cdeae5bbc680\": container with ID starting with 7688ca8e6f3f8d545481d6e95c6fc69b9e19d6eea68571a52955cdeae5bbc680 not found: ID does not exist" Mar 10 19:02:25 crc kubenswrapper[4861]: I0310 19:02:25.769411 4861 scope.go:117] "RemoveContainer" containerID="2cf2a1cfc3438718bb50a53445443bc0251f2cc83f903fca1cccd1048c8f1c20" Mar 10 19:02:25 crc kubenswrapper[4861]: E0310 19:02:25.769776 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2cf2a1cfc3438718bb50a53445443bc0251f2cc83f903fca1cccd1048c8f1c20\": container with ID starting with 2cf2a1cfc3438718bb50a53445443bc0251f2cc83f903fca1cccd1048c8f1c20 not found: ID does not exist" containerID="2cf2a1cfc3438718bb50a53445443bc0251f2cc83f903fca1cccd1048c8f1c20" Mar 10 19:02:25 crc kubenswrapper[4861]: I0310 19:02:25.769826 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2cf2a1cfc3438718bb50a53445443bc0251f2cc83f903fca1cccd1048c8f1c20"} err="failed to get container status \"2cf2a1cfc3438718bb50a53445443bc0251f2cc83f903fca1cccd1048c8f1c20\": rpc error: code = NotFound desc = could not find container \"2cf2a1cfc3438718bb50a53445443bc0251f2cc83f903fca1cccd1048c8f1c20\": container with ID starting with 2cf2a1cfc3438718bb50a53445443bc0251f2cc83f903fca1cccd1048c8f1c20 not found: ID does not 
exist" Mar 10 19:02:25 crc kubenswrapper[4861]: I0310 19:02:25.769867 4861 scope.go:117] "RemoveContainer" containerID="edf854dc22368e4b8f76e0111b30784f22832fc69661b4db8d2c9c33aa553773" Mar 10 19:02:25 crc kubenswrapper[4861]: E0310 19:02:25.770152 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"edf854dc22368e4b8f76e0111b30784f22832fc69661b4db8d2c9c33aa553773\": container with ID starting with edf854dc22368e4b8f76e0111b30784f22832fc69661b4db8d2c9c33aa553773 not found: ID does not exist" containerID="edf854dc22368e4b8f76e0111b30784f22832fc69661b4db8d2c9c33aa553773" Mar 10 19:02:25 crc kubenswrapper[4861]: I0310 19:02:25.770175 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"edf854dc22368e4b8f76e0111b30784f22832fc69661b4db8d2c9c33aa553773"} err="failed to get container status \"edf854dc22368e4b8f76e0111b30784f22832fc69661b4db8d2c9c33aa553773\": rpc error: code = NotFound desc = could not find container \"edf854dc22368e4b8f76e0111b30784f22832fc69661b4db8d2c9c33aa553773\": container with ID starting with edf854dc22368e4b8f76e0111b30784f22832fc69661b4db8d2c9c33aa553773 not found: ID does not exist" Mar 10 19:02:25 crc kubenswrapper[4861]: I0310 19:02:25.770190 4861 scope.go:117] "RemoveContainer" containerID="e44e4837f8dba12dfb18ef2200a19f696222668eb9b10819d1c3b442a28f5e32" Mar 10 19:02:25 crc kubenswrapper[4861]: E0310 19:02:25.770457 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e44e4837f8dba12dfb18ef2200a19f696222668eb9b10819d1c3b442a28f5e32\": container with ID starting with e44e4837f8dba12dfb18ef2200a19f696222668eb9b10819d1c3b442a28f5e32 not found: ID does not exist" containerID="e44e4837f8dba12dfb18ef2200a19f696222668eb9b10819d1c3b442a28f5e32" Mar 10 19:02:25 crc kubenswrapper[4861]: I0310 19:02:25.770500 4861 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e44e4837f8dba12dfb18ef2200a19f696222668eb9b10819d1c3b442a28f5e32"} err="failed to get container status \"e44e4837f8dba12dfb18ef2200a19f696222668eb9b10819d1c3b442a28f5e32\": rpc error: code = NotFound desc = could not find container \"e44e4837f8dba12dfb18ef2200a19f696222668eb9b10819d1c3b442a28f5e32\": container with ID starting with e44e4837f8dba12dfb18ef2200a19f696222668eb9b10819d1c3b442a28f5e32 not found: ID does not exist" Mar 10 19:02:25 crc kubenswrapper[4861]: I0310 19:02:25.770526 4861 scope.go:117] "RemoveContainer" containerID="e7bd3993ae4ddabc6c06a91127afc341760a07401ce3a409612824c0045bb6f2" Mar 10 19:02:25 crc kubenswrapper[4861]: E0310 19:02:25.771048 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e7bd3993ae4ddabc6c06a91127afc341760a07401ce3a409612824c0045bb6f2\": container with ID starting with e7bd3993ae4ddabc6c06a91127afc341760a07401ce3a409612824c0045bb6f2 not found: ID does not exist" containerID="e7bd3993ae4ddabc6c06a91127afc341760a07401ce3a409612824c0045bb6f2" Mar 10 19:02:25 crc kubenswrapper[4861]: I0310 19:02:25.771083 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e7bd3993ae4ddabc6c06a91127afc341760a07401ce3a409612824c0045bb6f2"} err="failed to get container status \"e7bd3993ae4ddabc6c06a91127afc341760a07401ce3a409612824c0045bb6f2\": rpc error: code = NotFound desc = could not find container \"e7bd3993ae4ddabc6c06a91127afc341760a07401ce3a409612824c0045bb6f2\": container with ID starting with e7bd3993ae4ddabc6c06a91127afc341760a07401ce3a409612824c0045bb6f2 not found: ID does not exist" Mar 10 19:02:25 crc kubenswrapper[4861]: I0310 19:02:25.771123 4861 scope.go:117] "RemoveContainer" containerID="22c9526135e4d6c3ef5cdf25b06f60556a876ace6c81593534be08cfd6a54cdf" Mar 10 19:02:25 crc kubenswrapper[4861]: E0310 19:02:25.771390 4861 log.go:32] 
"ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"22c9526135e4d6c3ef5cdf25b06f60556a876ace6c81593534be08cfd6a54cdf\": container with ID starting with 22c9526135e4d6c3ef5cdf25b06f60556a876ace6c81593534be08cfd6a54cdf not found: ID does not exist" containerID="22c9526135e4d6c3ef5cdf25b06f60556a876ace6c81593534be08cfd6a54cdf" Mar 10 19:02:25 crc kubenswrapper[4861]: I0310 19:02:25.771415 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"22c9526135e4d6c3ef5cdf25b06f60556a876ace6c81593534be08cfd6a54cdf"} err="failed to get container status \"22c9526135e4d6c3ef5cdf25b06f60556a876ace6c81593534be08cfd6a54cdf\": rpc error: code = NotFound desc = could not find container \"22c9526135e4d6c3ef5cdf25b06f60556a876ace6c81593534be08cfd6a54cdf\": container with ID starting with 22c9526135e4d6c3ef5cdf25b06f60556a876ace6c81593534be08cfd6a54cdf not found: ID does not exist" Mar 10 19:02:25 crc kubenswrapper[4861]: I0310 19:02:25.771430 4861 scope.go:117] "RemoveContainer" containerID="0ae3cd6b9ef5ede85a70dd7bcf4eb260cc357bfceeb571904e68788eaba0709c" Mar 10 19:02:25 crc kubenswrapper[4861]: E0310 19:02:25.771935 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0ae3cd6b9ef5ede85a70dd7bcf4eb260cc357bfceeb571904e68788eaba0709c\": container with ID starting with 0ae3cd6b9ef5ede85a70dd7bcf4eb260cc357bfceeb571904e68788eaba0709c not found: ID does not exist" containerID="0ae3cd6b9ef5ede85a70dd7bcf4eb260cc357bfceeb571904e68788eaba0709c" Mar 10 19:02:25 crc kubenswrapper[4861]: I0310 19:02:25.771959 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0ae3cd6b9ef5ede85a70dd7bcf4eb260cc357bfceeb571904e68788eaba0709c"} err="failed to get container status \"0ae3cd6b9ef5ede85a70dd7bcf4eb260cc357bfceeb571904e68788eaba0709c\": rpc error: code = NotFound desc = could 
not find container \"0ae3cd6b9ef5ede85a70dd7bcf4eb260cc357bfceeb571904e68788eaba0709c\": container with ID starting with 0ae3cd6b9ef5ede85a70dd7bcf4eb260cc357bfceeb571904e68788eaba0709c not found: ID does not exist" Mar 10 19:02:25 crc kubenswrapper[4861]: I0310 19:02:25.771974 4861 scope.go:117] "RemoveContainer" containerID="be4f9c8096f4981a65522e8ee451980e580153c1f5c65c736655fb94593dbd97" Mar 10 19:02:25 crc kubenswrapper[4861]: E0310 19:02:25.772288 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"be4f9c8096f4981a65522e8ee451980e580153c1f5c65c736655fb94593dbd97\": container with ID starting with be4f9c8096f4981a65522e8ee451980e580153c1f5c65c736655fb94593dbd97 not found: ID does not exist" containerID="be4f9c8096f4981a65522e8ee451980e580153c1f5c65c736655fb94593dbd97" Mar 10 19:02:25 crc kubenswrapper[4861]: I0310 19:02:25.772330 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"be4f9c8096f4981a65522e8ee451980e580153c1f5c65c736655fb94593dbd97"} err="failed to get container status \"be4f9c8096f4981a65522e8ee451980e580153c1f5c65c736655fb94593dbd97\": rpc error: code = NotFound desc = could not find container \"be4f9c8096f4981a65522e8ee451980e580153c1f5c65c736655fb94593dbd97\": container with ID starting with be4f9c8096f4981a65522e8ee451980e580153c1f5c65c736655fb94593dbd97 not found: ID does not exist" Mar 10 19:02:25 crc kubenswrapper[4861]: I0310 19:02:25.772358 4861 scope.go:117] "RemoveContainer" containerID="09c620ba70d91a84cf6910e413d790eee8d6427dec9a39be3b706400fcaab656" Mar 10 19:02:25 crc kubenswrapper[4861]: E0310 19:02:25.772672 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"09c620ba70d91a84cf6910e413d790eee8d6427dec9a39be3b706400fcaab656\": container with ID starting with 09c620ba70d91a84cf6910e413d790eee8d6427dec9a39be3b706400fcaab656 not found: 
ID does not exist" containerID="09c620ba70d91a84cf6910e413d790eee8d6427dec9a39be3b706400fcaab656" Mar 10 19:02:25 crc kubenswrapper[4861]: I0310 19:02:25.772695 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"09c620ba70d91a84cf6910e413d790eee8d6427dec9a39be3b706400fcaab656"} err="failed to get container status \"09c620ba70d91a84cf6910e413d790eee8d6427dec9a39be3b706400fcaab656\": rpc error: code = NotFound desc = could not find container \"09c620ba70d91a84cf6910e413d790eee8d6427dec9a39be3b706400fcaab656\": container with ID starting with 09c620ba70d91a84cf6910e413d790eee8d6427dec9a39be3b706400fcaab656 not found: ID does not exist" Mar 10 19:02:25 crc kubenswrapper[4861]: I0310 19:02:25.772758 4861 scope.go:117] "RemoveContainer" containerID="7688ca8e6f3f8d545481d6e95c6fc69b9e19d6eea68571a52955cdeae5bbc680" Mar 10 19:02:25 crc kubenswrapper[4861]: I0310 19:02:25.773140 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7688ca8e6f3f8d545481d6e95c6fc69b9e19d6eea68571a52955cdeae5bbc680"} err="failed to get container status \"7688ca8e6f3f8d545481d6e95c6fc69b9e19d6eea68571a52955cdeae5bbc680\": rpc error: code = NotFound desc = could not find container \"7688ca8e6f3f8d545481d6e95c6fc69b9e19d6eea68571a52955cdeae5bbc680\": container with ID starting with 7688ca8e6f3f8d545481d6e95c6fc69b9e19d6eea68571a52955cdeae5bbc680 not found: ID does not exist" Mar 10 19:02:25 crc kubenswrapper[4861]: I0310 19:02:25.773174 4861 scope.go:117] "RemoveContainer" containerID="2cf2a1cfc3438718bb50a53445443bc0251f2cc83f903fca1cccd1048c8f1c20" Mar 10 19:02:25 crc kubenswrapper[4861]: I0310 19:02:25.773481 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2cf2a1cfc3438718bb50a53445443bc0251f2cc83f903fca1cccd1048c8f1c20"} err="failed to get container status \"2cf2a1cfc3438718bb50a53445443bc0251f2cc83f903fca1cccd1048c8f1c20\": rpc error: code = 
NotFound desc = could not find container \"2cf2a1cfc3438718bb50a53445443bc0251f2cc83f903fca1cccd1048c8f1c20\": container with ID starting with 2cf2a1cfc3438718bb50a53445443bc0251f2cc83f903fca1cccd1048c8f1c20 not found: ID does not exist" Mar 10 19:02:25 crc kubenswrapper[4861]: I0310 19:02:25.773503 4861 scope.go:117] "RemoveContainer" containerID="edf854dc22368e4b8f76e0111b30784f22832fc69661b4db8d2c9c33aa553773" Mar 10 19:02:25 crc kubenswrapper[4861]: I0310 19:02:25.773931 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"edf854dc22368e4b8f76e0111b30784f22832fc69661b4db8d2c9c33aa553773"} err="failed to get container status \"edf854dc22368e4b8f76e0111b30784f22832fc69661b4db8d2c9c33aa553773\": rpc error: code = NotFound desc = could not find container \"edf854dc22368e4b8f76e0111b30784f22832fc69661b4db8d2c9c33aa553773\": container with ID starting with edf854dc22368e4b8f76e0111b30784f22832fc69661b4db8d2c9c33aa553773 not found: ID does not exist" Mar 10 19:02:25 crc kubenswrapper[4861]: I0310 19:02:25.773948 4861 scope.go:117] "RemoveContainer" containerID="e44e4837f8dba12dfb18ef2200a19f696222668eb9b10819d1c3b442a28f5e32" Mar 10 19:02:25 crc kubenswrapper[4861]: I0310 19:02:25.774231 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e44e4837f8dba12dfb18ef2200a19f696222668eb9b10819d1c3b442a28f5e32"} err="failed to get container status \"e44e4837f8dba12dfb18ef2200a19f696222668eb9b10819d1c3b442a28f5e32\": rpc error: code = NotFound desc = could not find container \"e44e4837f8dba12dfb18ef2200a19f696222668eb9b10819d1c3b442a28f5e32\": container with ID starting with e44e4837f8dba12dfb18ef2200a19f696222668eb9b10819d1c3b442a28f5e32 not found: ID does not exist" Mar 10 19:02:25 crc kubenswrapper[4861]: I0310 19:02:25.774248 4861 scope.go:117] "RemoveContainer" containerID="e7bd3993ae4ddabc6c06a91127afc341760a07401ce3a409612824c0045bb6f2" Mar 10 19:02:25 crc 
kubenswrapper[4861]: I0310 19:02:25.774468 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e7bd3993ae4ddabc6c06a91127afc341760a07401ce3a409612824c0045bb6f2"} err="failed to get container status \"e7bd3993ae4ddabc6c06a91127afc341760a07401ce3a409612824c0045bb6f2\": rpc error: code = NotFound desc = could not find container \"e7bd3993ae4ddabc6c06a91127afc341760a07401ce3a409612824c0045bb6f2\": container with ID starting with e7bd3993ae4ddabc6c06a91127afc341760a07401ce3a409612824c0045bb6f2 not found: ID does not exist" Mar 10 19:02:25 crc kubenswrapper[4861]: I0310 19:02:25.774486 4861 scope.go:117] "RemoveContainer" containerID="22c9526135e4d6c3ef5cdf25b06f60556a876ace6c81593534be08cfd6a54cdf" Mar 10 19:02:25 crc kubenswrapper[4861]: I0310 19:02:25.774667 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"22c9526135e4d6c3ef5cdf25b06f60556a876ace6c81593534be08cfd6a54cdf"} err="failed to get container status \"22c9526135e4d6c3ef5cdf25b06f60556a876ace6c81593534be08cfd6a54cdf\": rpc error: code = NotFound desc = could not find container \"22c9526135e4d6c3ef5cdf25b06f60556a876ace6c81593534be08cfd6a54cdf\": container with ID starting with 22c9526135e4d6c3ef5cdf25b06f60556a876ace6c81593534be08cfd6a54cdf not found: ID does not exist" Mar 10 19:02:25 crc kubenswrapper[4861]: I0310 19:02:25.774683 4861 scope.go:117] "RemoveContainer" containerID="0ae3cd6b9ef5ede85a70dd7bcf4eb260cc357bfceeb571904e68788eaba0709c" Mar 10 19:02:25 crc kubenswrapper[4861]: I0310 19:02:25.774997 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0ae3cd6b9ef5ede85a70dd7bcf4eb260cc357bfceeb571904e68788eaba0709c"} err="failed to get container status \"0ae3cd6b9ef5ede85a70dd7bcf4eb260cc357bfceeb571904e68788eaba0709c\": rpc error: code = NotFound desc = could not find container \"0ae3cd6b9ef5ede85a70dd7bcf4eb260cc357bfceeb571904e68788eaba0709c\": container 
with ID starting with 0ae3cd6b9ef5ede85a70dd7bcf4eb260cc357bfceeb571904e68788eaba0709c not found: ID does not exist" Mar 10 19:02:25 crc kubenswrapper[4861]: I0310 19:02:25.775020 4861 scope.go:117] "RemoveContainer" containerID="be4f9c8096f4981a65522e8ee451980e580153c1f5c65c736655fb94593dbd97" Mar 10 19:02:25 crc kubenswrapper[4861]: I0310 19:02:25.775177 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"be4f9c8096f4981a65522e8ee451980e580153c1f5c65c736655fb94593dbd97"} err="failed to get container status \"be4f9c8096f4981a65522e8ee451980e580153c1f5c65c736655fb94593dbd97\": rpc error: code = NotFound desc = could not find container \"be4f9c8096f4981a65522e8ee451980e580153c1f5c65c736655fb94593dbd97\": container with ID starting with be4f9c8096f4981a65522e8ee451980e580153c1f5c65c736655fb94593dbd97 not found: ID does not exist" Mar 10 19:02:25 crc kubenswrapper[4861]: I0310 19:02:25.775194 4861 scope.go:117] "RemoveContainer" containerID="09c620ba70d91a84cf6910e413d790eee8d6427dec9a39be3b706400fcaab656" Mar 10 19:02:25 crc kubenswrapper[4861]: I0310 19:02:25.775482 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"09c620ba70d91a84cf6910e413d790eee8d6427dec9a39be3b706400fcaab656"} err="failed to get container status \"09c620ba70d91a84cf6910e413d790eee8d6427dec9a39be3b706400fcaab656\": rpc error: code = NotFound desc = could not find container \"09c620ba70d91a84cf6910e413d790eee8d6427dec9a39be3b706400fcaab656\": container with ID starting with 09c620ba70d91a84cf6910e413d790eee8d6427dec9a39be3b706400fcaab656 not found: ID does not exist" Mar 10 19:02:25 crc kubenswrapper[4861]: I0310 19:02:25.775499 4861 scope.go:117] "RemoveContainer" containerID="7688ca8e6f3f8d545481d6e95c6fc69b9e19d6eea68571a52955cdeae5bbc680" Mar 10 19:02:25 crc kubenswrapper[4861]: I0310 19:02:25.775777 4861 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"7688ca8e6f3f8d545481d6e95c6fc69b9e19d6eea68571a52955cdeae5bbc680"} err="failed to get container status \"7688ca8e6f3f8d545481d6e95c6fc69b9e19d6eea68571a52955cdeae5bbc680\": rpc error: code = NotFound desc = could not find container \"7688ca8e6f3f8d545481d6e95c6fc69b9e19d6eea68571a52955cdeae5bbc680\": container with ID starting with 7688ca8e6f3f8d545481d6e95c6fc69b9e19d6eea68571a52955cdeae5bbc680 not found: ID does not exist" Mar 10 19:02:25 crc kubenswrapper[4861]: I0310 19:02:25.775793 4861 scope.go:117] "RemoveContainer" containerID="2cf2a1cfc3438718bb50a53445443bc0251f2cc83f903fca1cccd1048c8f1c20" Mar 10 19:02:25 crc kubenswrapper[4861]: I0310 19:02:25.776343 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2cf2a1cfc3438718bb50a53445443bc0251f2cc83f903fca1cccd1048c8f1c20"} err="failed to get container status \"2cf2a1cfc3438718bb50a53445443bc0251f2cc83f903fca1cccd1048c8f1c20\": rpc error: code = NotFound desc = could not find container \"2cf2a1cfc3438718bb50a53445443bc0251f2cc83f903fca1cccd1048c8f1c20\": container with ID starting with 2cf2a1cfc3438718bb50a53445443bc0251f2cc83f903fca1cccd1048c8f1c20 not found: ID does not exist" Mar 10 19:02:25 crc kubenswrapper[4861]: I0310 19:02:25.776360 4861 scope.go:117] "RemoveContainer" containerID="edf854dc22368e4b8f76e0111b30784f22832fc69661b4db8d2c9c33aa553773" Mar 10 19:02:25 crc kubenswrapper[4861]: I0310 19:02:25.776547 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"edf854dc22368e4b8f76e0111b30784f22832fc69661b4db8d2c9c33aa553773"} err="failed to get container status \"edf854dc22368e4b8f76e0111b30784f22832fc69661b4db8d2c9c33aa553773\": rpc error: code = NotFound desc = could not find container \"edf854dc22368e4b8f76e0111b30784f22832fc69661b4db8d2c9c33aa553773\": container with ID starting with edf854dc22368e4b8f76e0111b30784f22832fc69661b4db8d2c9c33aa553773 not found: ID does not 
exist" Mar 10 19:02:25 crc kubenswrapper[4861]: I0310 19:02:25.776562 4861 scope.go:117] "RemoveContainer" containerID="e44e4837f8dba12dfb18ef2200a19f696222668eb9b10819d1c3b442a28f5e32" Mar 10 19:02:25 crc kubenswrapper[4861]: I0310 19:02:25.776856 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e44e4837f8dba12dfb18ef2200a19f696222668eb9b10819d1c3b442a28f5e32"} err="failed to get container status \"e44e4837f8dba12dfb18ef2200a19f696222668eb9b10819d1c3b442a28f5e32\": rpc error: code = NotFound desc = could not find container \"e44e4837f8dba12dfb18ef2200a19f696222668eb9b10819d1c3b442a28f5e32\": container with ID starting with e44e4837f8dba12dfb18ef2200a19f696222668eb9b10819d1c3b442a28f5e32 not found: ID does not exist" Mar 10 19:02:25 crc kubenswrapper[4861]: I0310 19:02:25.776872 4861 scope.go:117] "RemoveContainer" containerID="e7bd3993ae4ddabc6c06a91127afc341760a07401ce3a409612824c0045bb6f2" Mar 10 19:02:25 crc kubenswrapper[4861]: I0310 19:02:25.777077 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e7bd3993ae4ddabc6c06a91127afc341760a07401ce3a409612824c0045bb6f2"} err="failed to get container status \"e7bd3993ae4ddabc6c06a91127afc341760a07401ce3a409612824c0045bb6f2\": rpc error: code = NotFound desc = could not find container \"e7bd3993ae4ddabc6c06a91127afc341760a07401ce3a409612824c0045bb6f2\": container with ID starting with e7bd3993ae4ddabc6c06a91127afc341760a07401ce3a409612824c0045bb6f2 not found: ID does not exist" Mar 10 19:02:25 crc kubenswrapper[4861]: I0310 19:02:25.777095 4861 scope.go:117] "RemoveContainer" containerID="22c9526135e4d6c3ef5cdf25b06f60556a876ace6c81593534be08cfd6a54cdf" Mar 10 19:02:25 crc kubenswrapper[4861]: I0310 19:02:25.777349 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"22c9526135e4d6c3ef5cdf25b06f60556a876ace6c81593534be08cfd6a54cdf"} err="failed to get container status 
\"22c9526135e4d6c3ef5cdf25b06f60556a876ace6c81593534be08cfd6a54cdf\": rpc error: code = NotFound desc = could not find container \"22c9526135e4d6c3ef5cdf25b06f60556a876ace6c81593534be08cfd6a54cdf\": container with ID starting with 22c9526135e4d6c3ef5cdf25b06f60556a876ace6c81593534be08cfd6a54cdf not found: ID does not exist" Mar 10 19:02:25 crc kubenswrapper[4861]: I0310 19:02:25.777367 4861 scope.go:117] "RemoveContainer" containerID="0ae3cd6b9ef5ede85a70dd7bcf4eb260cc357bfceeb571904e68788eaba0709c" Mar 10 19:02:25 crc kubenswrapper[4861]: I0310 19:02:25.777548 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0ae3cd6b9ef5ede85a70dd7bcf4eb260cc357bfceeb571904e68788eaba0709c"} err="failed to get container status \"0ae3cd6b9ef5ede85a70dd7bcf4eb260cc357bfceeb571904e68788eaba0709c\": rpc error: code = NotFound desc = could not find container \"0ae3cd6b9ef5ede85a70dd7bcf4eb260cc357bfceeb571904e68788eaba0709c\": container with ID starting with 0ae3cd6b9ef5ede85a70dd7bcf4eb260cc357bfceeb571904e68788eaba0709c not found: ID does not exist" Mar 10 19:02:25 crc kubenswrapper[4861]: I0310 19:02:25.777566 4861 scope.go:117] "RemoveContainer" containerID="be4f9c8096f4981a65522e8ee451980e580153c1f5c65c736655fb94593dbd97" Mar 10 19:02:25 crc kubenswrapper[4861]: I0310 19:02:25.777860 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"be4f9c8096f4981a65522e8ee451980e580153c1f5c65c736655fb94593dbd97"} err="failed to get container status \"be4f9c8096f4981a65522e8ee451980e580153c1f5c65c736655fb94593dbd97\": rpc error: code = NotFound desc = could not find container \"be4f9c8096f4981a65522e8ee451980e580153c1f5c65c736655fb94593dbd97\": container with ID starting with be4f9c8096f4981a65522e8ee451980e580153c1f5c65c736655fb94593dbd97 not found: ID does not exist" Mar 10 19:02:25 crc kubenswrapper[4861]: I0310 19:02:25.777876 4861 scope.go:117] "RemoveContainer" 
containerID="09c620ba70d91a84cf6910e413d790eee8d6427dec9a39be3b706400fcaab656" Mar 10 19:02:25 crc kubenswrapper[4861]: I0310 19:02:25.778043 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"09c620ba70d91a84cf6910e413d790eee8d6427dec9a39be3b706400fcaab656"} err="failed to get container status \"09c620ba70d91a84cf6910e413d790eee8d6427dec9a39be3b706400fcaab656\": rpc error: code = NotFound desc = could not find container \"09c620ba70d91a84cf6910e413d790eee8d6427dec9a39be3b706400fcaab656\": container with ID starting with 09c620ba70d91a84cf6910e413d790eee8d6427dec9a39be3b706400fcaab656 not found: ID does not exist" Mar 10 19:02:26 crc kubenswrapper[4861]: I0310 19:02:26.492070 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kv2r6" event={"ID":"f07d9e32-3ef8-497e-ae56-ec2c436006c0","Type":"ContainerStarted","Data":"faf536500fd80821be6a6fa8661ab25606800498413816935f8c610938bda4d9"} Mar 10 19:02:26 crc kubenswrapper[4861]: I0310 19:02:26.492698 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kv2r6" event={"ID":"f07d9e32-3ef8-497e-ae56-ec2c436006c0","Type":"ContainerStarted","Data":"3d2d1568f6701ae09816d6c2e5872e9975e9a0ae671bf4cac1211051a5b3debf"} Mar 10 19:02:26 crc kubenswrapper[4861]: I0310 19:02:26.492745 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kv2r6" event={"ID":"f07d9e32-3ef8-497e-ae56-ec2c436006c0","Type":"ContainerStarted","Data":"c0227f96d92a55866fbe6f7198930361c7f42822a89bb9129090e800539f1378"} Mar 10 19:02:26 crc kubenswrapper[4861]: I0310 19:02:26.492762 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kv2r6" event={"ID":"f07d9e32-3ef8-497e-ae56-ec2c436006c0","Type":"ContainerStarted","Data":"a57529c1db5b829394d0e4ec1f30eef8971dd1bf89ede8fcc20b1322d46690ef"} Mar 10 19:02:26 crc kubenswrapper[4861]: I0310 
19:02:26.492779 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kv2r6" event={"ID":"f07d9e32-3ef8-497e-ae56-ec2c436006c0","Type":"ContainerStarted","Data":"2431923627c927e581ce8070dc863924bb7f6b12e17e385a317ab51d814a8c42"} Mar 10 19:02:26 crc kubenswrapper[4861]: I0310 19:02:26.492796 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kv2r6" event={"ID":"f07d9e32-3ef8-497e-ae56-ec2c436006c0","Type":"ContainerStarted","Data":"e775a5d78fc72deecd72728e590d568f7d3d5043d15a709a3c9ec4d6554787c5"} Mar 10 19:02:26 crc kubenswrapper[4861]: I0310 19:02:26.970240 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="be820cd7-b3a7-4183-a408-67151247b6ee" path="/var/lib/kubelet/pods/be820cd7-b3a7-4183-a408-67151247b6ee/volumes" Mar 10 19:02:28 crc kubenswrapper[4861]: I0310 19:02:28.515919 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kv2r6" event={"ID":"f07d9e32-3ef8-497e-ae56-ec2c436006c0","Type":"ContainerStarted","Data":"a4546a62fe67fba06bdd9e678f28ab5f218cf4fb5dcfefb8f7edac7481799503"} Mar 10 19:02:30 crc kubenswrapper[4861]: I0310 19:02:30.230698 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["crc-storage/crc-storage-crc-jptgj"] Mar 10 19:02:30 crc kubenswrapper[4861]: I0310 19:02:30.231437 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="crc-storage/crc-storage-crc-jptgj" Mar 10 19:02:30 crc kubenswrapper[4861]: I0310 19:02:30.233629 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"crc-storage" Mar 10 19:02:30 crc kubenswrapper[4861]: I0310 19:02:30.233883 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"openshift-service-ca.crt" Mar 10 19:02:30 crc kubenswrapper[4861]: I0310 19:02:30.237333 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"kube-root-ca.crt" Mar 10 19:02:30 crc kubenswrapper[4861]: I0310 19:02:30.237535 4861 reflector.go:368] Caches populated for *v1.Secret from object-"crc-storage"/"crc-storage-dockercfg-2l2g6" Mar 10 19:02:30 crc kubenswrapper[4861]: I0310 19:02:30.349244 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/0a196210-c92a-4c67-9441-9ffa06be32be-crc-storage\") pod \"crc-storage-crc-jptgj\" (UID: \"0a196210-c92a-4c67-9441-9ffa06be32be\") " pod="crc-storage/crc-storage-crc-jptgj" Mar 10 19:02:30 crc kubenswrapper[4861]: I0310 19:02:30.349308 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p6zlv\" (UniqueName: \"kubernetes.io/projected/0a196210-c92a-4c67-9441-9ffa06be32be-kube-api-access-p6zlv\") pod \"crc-storage-crc-jptgj\" (UID: \"0a196210-c92a-4c67-9441-9ffa06be32be\") " pod="crc-storage/crc-storage-crc-jptgj" Mar 10 19:02:30 crc kubenswrapper[4861]: I0310 19:02:30.349417 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/0a196210-c92a-4c67-9441-9ffa06be32be-node-mnt\") pod \"crc-storage-crc-jptgj\" (UID: \"0a196210-c92a-4c67-9441-9ffa06be32be\") " pod="crc-storage/crc-storage-crc-jptgj" Mar 10 19:02:30 crc kubenswrapper[4861]: I0310 19:02:30.450890 4861 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/0a196210-c92a-4c67-9441-9ffa06be32be-crc-storage\") pod \"crc-storage-crc-jptgj\" (UID: \"0a196210-c92a-4c67-9441-9ffa06be32be\") " pod="crc-storage/crc-storage-crc-jptgj" Mar 10 19:02:30 crc kubenswrapper[4861]: I0310 19:02:30.450961 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p6zlv\" (UniqueName: \"kubernetes.io/projected/0a196210-c92a-4c67-9441-9ffa06be32be-kube-api-access-p6zlv\") pod \"crc-storage-crc-jptgj\" (UID: \"0a196210-c92a-4c67-9441-9ffa06be32be\") " pod="crc-storage/crc-storage-crc-jptgj" Mar 10 19:02:30 crc kubenswrapper[4861]: I0310 19:02:30.451089 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/0a196210-c92a-4c67-9441-9ffa06be32be-node-mnt\") pod \"crc-storage-crc-jptgj\" (UID: \"0a196210-c92a-4c67-9441-9ffa06be32be\") " pod="crc-storage/crc-storage-crc-jptgj" Mar 10 19:02:30 crc kubenswrapper[4861]: I0310 19:02:30.451482 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/0a196210-c92a-4c67-9441-9ffa06be32be-node-mnt\") pod \"crc-storage-crc-jptgj\" (UID: \"0a196210-c92a-4c67-9441-9ffa06be32be\") " pod="crc-storage/crc-storage-crc-jptgj" Mar 10 19:02:30 crc kubenswrapper[4861]: I0310 19:02:30.453129 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/0a196210-c92a-4c67-9441-9ffa06be32be-crc-storage\") pod \"crc-storage-crc-jptgj\" (UID: \"0a196210-c92a-4c67-9441-9ffa06be32be\") " pod="crc-storage/crc-storage-crc-jptgj" Mar 10 19:02:30 crc kubenswrapper[4861]: I0310 19:02:30.487068 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p6zlv\" (UniqueName: 
\"kubernetes.io/projected/0a196210-c92a-4c67-9441-9ffa06be32be-kube-api-access-p6zlv\") pod \"crc-storage-crc-jptgj\" (UID: \"0a196210-c92a-4c67-9441-9ffa06be32be\") " pod="crc-storage/crc-storage-crc-jptgj" Mar 10 19:02:30 crc kubenswrapper[4861]: I0310 19:02:30.547854 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-jptgj" Mar 10 19:02:30 crc kubenswrapper[4861]: E0310 19:02:30.594103 4861 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-jptgj_crc-storage_0a196210-c92a-4c67-9441-9ffa06be32be_0(c7d2b2cc5f97d5eb66a29828cfbdea07ddbe59d3f548ca144cf2f9f698a271fa): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Mar 10 19:02:30 crc kubenswrapper[4861]: E0310 19:02:30.594551 4861 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-jptgj_crc-storage_0a196210-c92a-4c67-9441-9ffa06be32be_0(c7d2b2cc5f97d5eb66a29828cfbdea07ddbe59d3f548ca144cf2f9f698a271fa): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="crc-storage/crc-storage-crc-jptgj" Mar 10 19:02:30 crc kubenswrapper[4861]: E0310 19:02:30.594585 4861 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-jptgj_crc-storage_0a196210-c92a-4c67-9441-9ffa06be32be_0(c7d2b2cc5f97d5eb66a29828cfbdea07ddbe59d3f548ca144cf2f9f698a271fa): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="crc-storage/crc-storage-crc-jptgj" Mar 10 19:02:30 crc kubenswrapper[4861]: E0310 19:02:30.594654 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"crc-storage-crc-jptgj_crc-storage(0a196210-c92a-4c67-9441-9ffa06be32be)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"crc-storage-crc-jptgj_crc-storage(0a196210-c92a-4c67-9441-9ffa06be32be)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-jptgj_crc-storage_0a196210-c92a-4c67-9441-9ffa06be32be_0(c7d2b2cc5f97d5eb66a29828cfbdea07ddbe59d3f548ca144cf2f9f698a271fa): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="crc-storage/crc-storage-crc-jptgj" podUID="0a196210-c92a-4c67-9441-9ffa06be32be" Mar 10 19:02:31 crc kubenswrapper[4861]: I0310 19:02:31.543090 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kv2r6" event={"ID":"f07d9e32-3ef8-497e-ae56-ec2c436006c0","Type":"ContainerStarted","Data":"5e530f632cb96230c1b56a15182a9e7804940f7d9af4b13c4cae2a632b7765fa"} Mar 10 19:02:31 crc kubenswrapper[4861]: I0310 19:02:31.543777 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-kv2r6" Mar 10 19:02:31 crc kubenswrapper[4861]: I0310 19:02:31.544058 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-kv2r6" Mar 10 19:02:31 crc kubenswrapper[4861]: I0310 19:02:31.544279 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-kv2r6" Mar 10 19:02:31 crc kubenswrapper[4861]: I0310 19:02:31.592970 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-kv2r6" Mar 10 19:02:31 crc kubenswrapper[4861]: I0310 19:02:31.611956 4861 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="openshift-ovn-kubernetes/ovnkube-node-kv2r6" podStartSLOduration=7.61191808 podStartE2EDuration="7.61191808s" podCreationTimestamp="2026-03-10 19:02:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-10 19:02:31.599363964 +0000 UTC m=+895.362799974" watchObservedRunningTime="2026-03-10 19:02:31.61191808 +0000 UTC m=+895.375354080" Mar 10 19:02:31 crc kubenswrapper[4861]: I0310 19:02:31.616336 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-kv2r6" Mar 10 19:02:31 crc kubenswrapper[4861]: I0310 19:02:31.889115 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-2ghqs"] Mar 10 19:02:31 crc kubenswrapper[4861]: I0310 19:02:31.892011 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-2ghqs" Mar 10 19:02:31 crc kubenswrapper[4861]: I0310 19:02:31.973144 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/636f19eb-54f2-4d6d-b96e-5d8f8a009f1e-catalog-content\") pod \"certified-operators-2ghqs\" (UID: \"636f19eb-54f2-4d6d-b96e-5d8f8a009f1e\") " pod="openshift-marketplace/certified-operators-2ghqs" Mar 10 19:02:31 crc kubenswrapper[4861]: I0310 19:02:31.973218 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/636f19eb-54f2-4d6d-b96e-5d8f8a009f1e-utilities\") pod \"certified-operators-2ghqs\" (UID: \"636f19eb-54f2-4d6d-b96e-5d8f8a009f1e\") " pod="openshift-marketplace/certified-operators-2ghqs" Mar 10 19:02:31 crc kubenswrapper[4861]: I0310 19:02:31.973280 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-h6p8b\" (UniqueName: \"kubernetes.io/projected/636f19eb-54f2-4d6d-b96e-5d8f8a009f1e-kube-api-access-h6p8b\") pod \"certified-operators-2ghqs\" (UID: \"636f19eb-54f2-4d6d-b96e-5d8f8a009f1e\") " pod="openshift-marketplace/certified-operators-2ghqs" Mar 10 19:02:32 crc kubenswrapper[4861]: I0310 19:02:32.074828 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h6p8b\" (UniqueName: \"kubernetes.io/projected/636f19eb-54f2-4d6d-b96e-5d8f8a009f1e-kube-api-access-h6p8b\") pod \"certified-operators-2ghqs\" (UID: \"636f19eb-54f2-4d6d-b96e-5d8f8a009f1e\") " pod="openshift-marketplace/certified-operators-2ghqs" Mar 10 19:02:32 crc kubenswrapper[4861]: I0310 19:02:32.074927 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/636f19eb-54f2-4d6d-b96e-5d8f8a009f1e-catalog-content\") pod \"certified-operators-2ghqs\" (UID: \"636f19eb-54f2-4d6d-b96e-5d8f8a009f1e\") " pod="openshift-marketplace/certified-operators-2ghqs" Mar 10 19:02:32 crc kubenswrapper[4861]: I0310 19:02:32.074969 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/636f19eb-54f2-4d6d-b96e-5d8f8a009f1e-utilities\") pod \"certified-operators-2ghqs\" (UID: \"636f19eb-54f2-4d6d-b96e-5d8f8a009f1e\") " pod="openshift-marketplace/certified-operators-2ghqs" Mar 10 19:02:32 crc kubenswrapper[4861]: I0310 19:02:32.075460 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/636f19eb-54f2-4d6d-b96e-5d8f8a009f1e-utilities\") pod \"certified-operators-2ghqs\" (UID: \"636f19eb-54f2-4d6d-b96e-5d8f8a009f1e\") " pod="openshift-marketplace/certified-operators-2ghqs" Mar 10 19:02:32 crc kubenswrapper[4861]: I0310 19:02:32.075779 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" 
(UniqueName: \"kubernetes.io/empty-dir/636f19eb-54f2-4d6d-b96e-5d8f8a009f1e-catalog-content\") pod \"certified-operators-2ghqs\" (UID: \"636f19eb-54f2-4d6d-b96e-5d8f8a009f1e\") " pod="openshift-marketplace/certified-operators-2ghqs" Mar 10 19:02:32 crc kubenswrapper[4861]: I0310 19:02:32.117155 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h6p8b\" (UniqueName: \"kubernetes.io/projected/636f19eb-54f2-4d6d-b96e-5d8f8a009f1e-kube-api-access-h6p8b\") pod \"certified-operators-2ghqs\" (UID: \"636f19eb-54f2-4d6d-b96e-5d8f8a009f1e\") " pod="openshift-marketplace/certified-operators-2ghqs" Mar 10 19:02:32 crc kubenswrapper[4861]: I0310 19:02:32.220753 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-2ghqs" Mar 10 19:02:32 crc kubenswrapper[4861]: E0310 19:02:32.249801 4861 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_certified-operators-2ghqs_openshift-marketplace_636f19eb-54f2-4d6d-b96e-5d8f8a009f1e_0(c7b5a64f99675efd04bdaa3113aee372675d945a228771ffff6c16037b310fbe): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Mar 10 19:02:32 crc kubenswrapper[4861]: E0310 19:02:32.249873 4861 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_certified-operators-2ghqs_openshift-marketplace_636f19eb-54f2-4d6d-b96e-5d8f8a009f1e_0(c7b5a64f99675efd04bdaa3113aee372675d945a228771ffff6c16037b310fbe): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/certified-operators-2ghqs" Mar 10 19:02:32 crc kubenswrapper[4861]: E0310 19:02:32.249898 4861 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_certified-operators-2ghqs_openshift-marketplace_636f19eb-54f2-4d6d-b96e-5d8f8a009f1e_0(c7b5a64f99675efd04bdaa3113aee372675d945a228771ffff6c16037b310fbe): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-2ghqs" Mar 10 19:02:32 crc kubenswrapper[4861]: E0310 19:02:32.249950 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"certified-operators-2ghqs_openshift-marketplace(636f19eb-54f2-4d6d-b96e-5d8f8a009f1e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"certified-operators-2ghqs_openshift-marketplace(636f19eb-54f2-4d6d-b96e-5d8f8a009f1e)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_certified-operators-2ghqs_openshift-marketplace_636f19eb-54f2-4d6d-b96e-5d8f8a009f1e_0(c7b5a64f99675efd04bdaa3113aee372675d945a228771ffff6c16037b310fbe): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-marketplace/certified-operators-2ghqs" podUID="636f19eb-54f2-4d6d-b96e-5d8f8a009f1e" Mar 10 19:02:32 crc kubenswrapper[4861]: I0310 19:02:32.923315 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["crc-storage/crc-storage-crc-jptgj"] Mar 10 19:02:32 crc kubenswrapper[4861]: I0310 19:02:32.923475 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-jptgj" Mar 10 19:02:32 crc kubenswrapper[4861]: I0310 19:02:32.924100 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="crc-storage/crc-storage-crc-jptgj" Mar 10 19:02:32 crc kubenswrapper[4861]: I0310 19:02:32.940104 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-2ghqs"] Mar 10 19:02:32 crc kubenswrapper[4861]: I0310 19:02:32.940229 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-2ghqs" Mar 10 19:02:32 crc kubenswrapper[4861]: I0310 19:02:32.940698 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-2ghqs" Mar 10 19:02:32 crc kubenswrapper[4861]: E0310 19:02:32.966515 4861 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-jptgj_crc-storage_0a196210-c92a-4c67-9441-9ffa06be32be_0(5215d853e6dd6bc0dc26ed8bd52e455320659af6ab50a49e291c47e9c2200b85): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Mar 10 19:02:32 crc kubenswrapper[4861]: E0310 19:02:32.966560 4861 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-jptgj_crc-storage_0a196210-c92a-4c67-9441-9ffa06be32be_0(5215d853e6dd6bc0dc26ed8bd52e455320659af6ab50a49e291c47e9c2200b85): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="crc-storage/crc-storage-crc-jptgj" Mar 10 19:02:32 crc kubenswrapper[4861]: E0310 19:02:32.966582 4861 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-jptgj_crc-storage_0a196210-c92a-4c67-9441-9ffa06be32be_0(5215d853e6dd6bc0dc26ed8bd52e455320659af6ab50a49e291c47e9c2200b85): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="crc-storage/crc-storage-crc-jptgj" Mar 10 19:02:32 crc kubenswrapper[4861]: E0310 19:02:32.966626 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"crc-storage-crc-jptgj_crc-storage(0a196210-c92a-4c67-9441-9ffa06be32be)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"crc-storage-crc-jptgj_crc-storage(0a196210-c92a-4c67-9441-9ffa06be32be)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-jptgj_crc-storage_0a196210-c92a-4c67-9441-9ffa06be32be_0(5215d853e6dd6bc0dc26ed8bd52e455320659af6ab50a49e291c47e9c2200b85): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="crc-storage/crc-storage-crc-jptgj" podUID="0a196210-c92a-4c67-9441-9ffa06be32be" Mar 10 19:02:32 crc kubenswrapper[4861]: E0310 19:02:32.980586 4861 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_certified-operators-2ghqs_openshift-marketplace_636f19eb-54f2-4d6d-b96e-5d8f8a009f1e_0(cdb85e570d5e4031940d980a413d5e89250161eac4d2330bc4bd6308f9de77ca): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Mar 10 19:02:32 crc kubenswrapper[4861]: E0310 19:02:32.980664 4861 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_certified-operators-2ghqs_openshift-marketplace_636f19eb-54f2-4d6d-b96e-5d8f8a009f1e_0(cdb85e570d5e4031940d980a413d5e89250161eac4d2330bc4bd6308f9de77ca): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/certified-operators-2ghqs" Mar 10 19:02:32 crc kubenswrapper[4861]: E0310 19:02:32.980699 4861 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_certified-operators-2ghqs_openshift-marketplace_636f19eb-54f2-4d6d-b96e-5d8f8a009f1e_0(cdb85e570d5e4031940d980a413d5e89250161eac4d2330bc4bd6308f9de77ca): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-2ghqs" Mar 10 19:02:32 crc kubenswrapper[4861]: E0310 19:02:32.980769 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"certified-operators-2ghqs_openshift-marketplace(636f19eb-54f2-4d6d-b96e-5d8f8a009f1e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"certified-operators-2ghqs_openshift-marketplace(636f19eb-54f2-4d6d-b96e-5d8f8a009f1e)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_certified-operators-2ghqs_openshift-marketplace_636f19eb-54f2-4d6d-b96e-5d8f8a009f1e_0(cdb85e570d5e4031940d980a413d5e89250161eac4d2330bc4bd6308f9de77ca): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-marketplace/certified-operators-2ghqs" podUID="636f19eb-54f2-4d6d-b96e-5d8f8a009f1e" Mar 10 19:02:37 crc kubenswrapper[4861]: I0310 19:02:37.636017 4861 scope.go:117] "RemoveContainer" containerID="bd61826309c968e8088ea8a2436aa9d6215623b896ed4f3c10188818d38a4528" Mar 10 19:02:44 crc kubenswrapper[4861]: I0310 19:02:44.957376 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-jptgj" Mar 10 19:02:44 crc kubenswrapper[4861]: I0310 19:02:44.957409 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-2ghqs" Mar 10 19:02:44 crc kubenswrapper[4861]: I0310 19:02:44.958695 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-jptgj" Mar 10 19:02:44 crc kubenswrapper[4861]: I0310 19:02:44.958904 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-2ghqs" Mar 10 19:02:45 crc kubenswrapper[4861]: I0310 19:02:45.344109 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-2ghqs"] Mar 10 19:02:45 crc kubenswrapper[4861]: W0310 19:02:45.346961 4861 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod636f19eb_54f2_4d6d_b96e_5d8f8a009f1e.slice/crio-57e7f7218e35054c2b8227c29354de16189eb1b9543d37f74fc7a99ec2857712 WatchSource:0}: Error finding container 57e7f7218e35054c2b8227c29354de16189eb1b9543d37f74fc7a99ec2857712: Status 404 returned error can't find the container with id 57e7f7218e35054c2b8227c29354de16189eb1b9543d37f74fc7a99ec2857712 Mar 10 19:02:45 crc kubenswrapper[4861]: I0310 19:02:45.484305 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["crc-storage/crc-storage-crc-jptgj"] Mar 10 19:02:45 crc kubenswrapper[4861]: I0310 19:02:45.643954 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-jptgj" event={"ID":"0a196210-c92a-4c67-9441-9ffa06be32be","Type":"ContainerStarted","Data":"c399a0272f867b858f8cc77ef9bac165a1baf85b347e962dcfaeb26f259fdc4c"} Mar 10 19:02:45 crc kubenswrapper[4861]: I0310 19:02:45.647155 4861 generic.go:334] "Generic (PLEG): container finished" podID="636f19eb-54f2-4d6d-b96e-5d8f8a009f1e" containerID="352130b80be8a4754da505cdf396bc030176cde364df8256d34e79b6c14ad49f" exitCode=0 Mar 10 19:02:45 crc kubenswrapper[4861]: I0310 19:02:45.647259 4861 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/certified-operators-2ghqs" event={"ID":"636f19eb-54f2-4d6d-b96e-5d8f8a009f1e","Type":"ContainerDied","Data":"352130b80be8a4754da505cdf396bc030176cde364df8256d34e79b6c14ad49f"} Mar 10 19:02:45 crc kubenswrapper[4861]: I0310 19:02:45.647406 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2ghqs" event={"ID":"636f19eb-54f2-4d6d-b96e-5d8f8a009f1e","Type":"ContainerStarted","Data":"57e7f7218e35054c2b8227c29354de16189eb1b9543d37f74fc7a99ec2857712"} Mar 10 19:02:47 crc kubenswrapper[4861]: I0310 19:02:47.664201 4861 generic.go:334] "Generic (PLEG): container finished" podID="0a196210-c92a-4c67-9441-9ffa06be32be" containerID="79b66d70f838551cd15f4213fcdae99951625b0358a9c05f9e04cfe1faf7dc08" exitCode=0 Mar 10 19:02:47 crc kubenswrapper[4861]: I0310 19:02:47.664323 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-jptgj" event={"ID":"0a196210-c92a-4c67-9441-9ffa06be32be","Type":"ContainerDied","Data":"79b66d70f838551cd15f4213fcdae99951625b0358a9c05f9e04cfe1faf7dc08"} Mar 10 19:02:47 crc kubenswrapper[4861]: I0310 19:02:47.667890 4861 generic.go:334] "Generic (PLEG): container finished" podID="636f19eb-54f2-4d6d-b96e-5d8f8a009f1e" containerID="ff215dbc791fab61ad359d8f90dff91199595b6fe81e2c0259c0abc3364c1bff" exitCode=0 Mar 10 19:02:47 crc kubenswrapper[4861]: I0310 19:02:47.667963 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2ghqs" event={"ID":"636f19eb-54f2-4d6d-b96e-5d8f8a009f1e","Type":"ContainerDied","Data":"ff215dbc791fab61ad359d8f90dff91199595b6fe81e2c0259c0abc3364c1bff"} Mar 10 19:02:48 crc kubenswrapper[4861]: I0310 19:02:48.678527 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2ghqs" event={"ID":"636f19eb-54f2-4d6d-b96e-5d8f8a009f1e","Type":"ContainerStarted","Data":"456f5e74dfb34856355cb9d5465d0dfb36cf0b180c365733a77bc5b01d659aba"} 
Mar 10 19:02:48 crc kubenswrapper[4861]: I0310 19:02:48.713590 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-2ghqs" podStartSLOduration=15.327742481 podStartE2EDuration="17.713558434s" podCreationTimestamp="2026-03-10 19:02:31 +0000 UTC" firstStartedPulling="2026-03-10 19:02:45.648958306 +0000 UTC m=+909.412394256" lastFinishedPulling="2026-03-10 19:02:48.034774249 +0000 UTC m=+911.798210209" observedRunningTime="2026-03-10 19:02:48.71001257 +0000 UTC m=+912.473448600" watchObservedRunningTime="2026-03-10 19:02:48.713558434 +0000 UTC m=+912.476994444" Mar 10 19:02:48 crc kubenswrapper[4861]: I0310 19:02:48.955513 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-jptgj" Mar 10 19:02:49 crc kubenswrapper[4861]: I0310 19:02:49.080790 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p6zlv\" (UniqueName: \"kubernetes.io/projected/0a196210-c92a-4c67-9441-9ffa06be32be-kube-api-access-p6zlv\") pod \"0a196210-c92a-4c67-9441-9ffa06be32be\" (UID: \"0a196210-c92a-4c67-9441-9ffa06be32be\") " Mar 10 19:02:49 crc kubenswrapper[4861]: I0310 19:02:49.081074 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/0a196210-c92a-4c67-9441-9ffa06be32be-node-mnt\") pod \"0a196210-c92a-4c67-9441-9ffa06be32be\" (UID: \"0a196210-c92a-4c67-9441-9ffa06be32be\") " Mar 10 19:02:49 crc kubenswrapper[4861]: I0310 19:02:49.081275 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/0a196210-c92a-4c67-9441-9ffa06be32be-crc-storage\") pod \"0a196210-c92a-4c67-9441-9ffa06be32be\" (UID: \"0a196210-c92a-4c67-9441-9ffa06be32be\") " Mar 10 19:02:49 crc kubenswrapper[4861]: I0310 19:02:49.081736 4861 operation_generator.go:803] UnmountVolume.TearDown 
succeeded for volume "kubernetes.io/host-path/0a196210-c92a-4c67-9441-9ffa06be32be-node-mnt" (OuterVolumeSpecName: "node-mnt") pod "0a196210-c92a-4c67-9441-9ffa06be32be" (UID: "0a196210-c92a-4c67-9441-9ffa06be32be"). InnerVolumeSpecName "node-mnt". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 10 19:02:49 crc kubenswrapper[4861]: I0310 19:02:49.094933 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0a196210-c92a-4c67-9441-9ffa06be32be-kube-api-access-p6zlv" (OuterVolumeSpecName: "kube-api-access-p6zlv") pod "0a196210-c92a-4c67-9441-9ffa06be32be" (UID: "0a196210-c92a-4c67-9441-9ffa06be32be"). InnerVolumeSpecName "kube-api-access-p6zlv". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 10 19:02:49 crc kubenswrapper[4861]: I0310 19:02:49.097318 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0a196210-c92a-4c67-9441-9ffa06be32be-crc-storage" (OuterVolumeSpecName: "crc-storage") pod "0a196210-c92a-4c67-9441-9ffa06be32be" (UID: "0a196210-c92a-4c67-9441-9ffa06be32be"). InnerVolumeSpecName "crc-storage". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 10 19:02:49 crc kubenswrapper[4861]: I0310 19:02:49.182242 4861 reconciler_common.go:293] "Volume detached for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/0a196210-c92a-4c67-9441-9ffa06be32be-node-mnt\") on node \"crc\" DevicePath \"\"" Mar 10 19:02:49 crc kubenswrapper[4861]: I0310 19:02:49.182273 4861 reconciler_common.go:293] "Volume detached for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/0a196210-c92a-4c67-9441-9ffa06be32be-crc-storage\") on node \"crc\" DevicePath \"\"" Mar 10 19:02:49 crc kubenswrapper[4861]: I0310 19:02:49.182283 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p6zlv\" (UniqueName: \"kubernetes.io/projected/0a196210-c92a-4c67-9441-9ffa06be32be-kube-api-access-p6zlv\") on node \"crc\" DevicePath \"\"" Mar 10 19:02:49 crc kubenswrapper[4861]: I0310 19:02:49.688228 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-jptgj" event={"ID":"0a196210-c92a-4c67-9441-9ffa06be32be","Type":"ContainerDied","Data":"c399a0272f867b858f8cc77ef9bac165a1baf85b347e962dcfaeb26f259fdc4c"} Mar 10 19:02:49 crc kubenswrapper[4861]: I0310 19:02:49.688263 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="crc-storage/crc-storage-crc-jptgj" Mar 10 19:02:49 crc kubenswrapper[4861]: I0310 19:02:49.688288 4861 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c399a0272f867b858f8cc77ef9bac165a1baf85b347e962dcfaeb26f259fdc4c" Mar 10 19:02:51 crc kubenswrapper[4861]: I0310 19:02:51.991893 4861 patch_prober.go:28] interesting pod/machine-config-daemon-qttbr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 10 19:02:51 crc kubenswrapper[4861]: I0310 19:02:51.992307 4861 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qttbr" podUID="771189c2-452d-4204-a0b7-abfe9ba62bd0" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 10 19:02:52 crc kubenswrapper[4861]: I0310 19:02:52.221896 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-2ghqs" Mar 10 19:02:52 crc kubenswrapper[4861]: I0310 19:02:52.222334 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-2ghqs" Mar 10 19:02:52 crc kubenswrapper[4861]: I0310 19:02:52.299428 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-2ghqs" Mar 10 19:02:53 crc kubenswrapper[4861]: I0310 19:02:53.786562 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-2ghqs" Mar 10 19:02:53 crc kubenswrapper[4861]: I0310 19:02:53.848126 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-2ghqs"] Mar 10 19:02:55 crc 
kubenswrapper[4861]: I0310 19:02:55.027567 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-kv2r6" Mar 10 19:02:55 crc kubenswrapper[4861]: I0310 19:02:55.725770 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-2ghqs" podUID="636f19eb-54f2-4d6d-b96e-5d8f8a009f1e" containerName="registry-server" containerID="cri-o://456f5e74dfb34856355cb9d5465d0dfb36cf0b180c365733a77bc5b01d659aba" gracePeriod=2 Mar 10 19:02:56 crc kubenswrapper[4861]: I0310 19:02:56.072335 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-2ghqs" Mar 10 19:02:56 crc kubenswrapper[4861]: I0310 19:02:56.187068 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/636f19eb-54f2-4d6d-b96e-5d8f8a009f1e-utilities\") pod \"636f19eb-54f2-4d6d-b96e-5d8f8a009f1e\" (UID: \"636f19eb-54f2-4d6d-b96e-5d8f8a009f1e\") " Mar 10 19:02:56 crc kubenswrapper[4861]: I0310 19:02:56.187207 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/636f19eb-54f2-4d6d-b96e-5d8f8a009f1e-catalog-content\") pod \"636f19eb-54f2-4d6d-b96e-5d8f8a009f1e\" (UID: \"636f19eb-54f2-4d6d-b96e-5d8f8a009f1e\") " Mar 10 19:02:56 crc kubenswrapper[4861]: I0310 19:02:56.187243 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h6p8b\" (UniqueName: \"kubernetes.io/projected/636f19eb-54f2-4d6d-b96e-5d8f8a009f1e-kube-api-access-h6p8b\") pod \"636f19eb-54f2-4d6d-b96e-5d8f8a009f1e\" (UID: \"636f19eb-54f2-4d6d-b96e-5d8f8a009f1e\") " Mar 10 19:02:56 crc kubenswrapper[4861]: I0310 19:02:56.187911 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/636f19eb-54f2-4d6d-b96e-5d8f8a009f1e-utilities" (OuterVolumeSpecName: "utilities") pod "636f19eb-54f2-4d6d-b96e-5d8f8a009f1e" (UID: "636f19eb-54f2-4d6d-b96e-5d8f8a009f1e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 10 19:02:56 crc kubenswrapper[4861]: I0310 19:02:56.206870 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/636f19eb-54f2-4d6d-b96e-5d8f8a009f1e-kube-api-access-h6p8b" (OuterVolumeSpecName: "kube-api-access-h6p8b") pod "636f19eb-54f2-4d6d-b96e-5d8f8a009f1e" (UID: "636f19eb-54f2-4d6d-b96e-5d8f8a009f1e"). InnerVolumeSpecName "kube-api-access-h6p8b". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 10 19:02:56 crc kubenswrapper[4861]: I0310 19:02:56.261809 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/636f19eb-54f2-4d6d-b96e-5d8f8a009f1e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "636f19eb-54f2-4d6d-b96e-5d8f8a009f1e" (UID: "636f19eb-54f2-4d6d-b96e-5d8f8a009f1e"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 10 19:02:56 crc kubenswrapper[4861]: I0310 19:02:56.288179 4861 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/636f19eb-54f2-4d6d-b96e-5d8f8a009f1e-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 10 19:02:56 crc kubenswrapper[4861]: I0310 19:02:56.288224 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h6p8b\" (UniqueName: \"kubernetes.io/projected/636f19eb-54f2-4d6d-b96e-5d8f8a009f1e-kube-api-access-h6p8b\") on node \"crc\" DevicePath \"\"" Mar 10 19:02:56 crc kubenswrapper[4861]: I0310 19:02:56.288235 4861 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/636f19eb-54f2-4d6d-b96e-5d8f8a009f1e-utilities\") on node \"crc\" DevicePath \"\"" Mar 10 19:02:56 crc kubenswrapper[4861]: I0310 19:02:56.733497 4861 generic.go:334] "Generic (PLEG): container finished" podID="636f19eb-54f2-4d6d-b96e-5d8f8a009f1e" containerID="456f5e74dfb34856355cb9d5465d0dfb36cf0b180c365733a77bc5b01d659aba" exitCode=0 Mar 10 19:02:56 crc kubenswrapper[4861]: I0310 19:02:56.733537 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2ghqs" event={"ID":"636f19eb-54f2-4d6d-b96e-5d8f8a009f1e","Type":"ContainerDied","Data":"456f5e74dfb34856355cb9d5465d0dfb36cf0b180c365733a77bc5b01d659aba"} Mar 10 19:02:56 crc kubenswrapper[4861]: I0310 19:02:56.733596 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-2ghqs" Mar 10 19:02:56 crc kubenswrapper[4861]: I0310 19:02:56.733620 4861 scope.go:117] "RemoveContainer" containerID="456f5e74dfb34856355cb9d5465d0dfb36cf0b180c365733a77bc5b01d659aba" Mar 10 19:02:56 crc kubenswrapper[4861]: I0310 19:02:56.733601 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2ghqs" event={"ID":"636f19eb-54f2-4d6d-b96e-5d8f8a009f1e","Type":"ContainerDied","Data":"57e7f7218e35054c2b8227c29354de16189eb1b9543d37f74fc7a99ec2857712"} Mar 10 19:02:56 crc kubenswrapper[4861]: I0310 19:02:56.749888 4861 scope.go:117] "RemoveContainer" containerID="ff215dbc791fab61ad359d8f90dff91199595b6fe81e2c0259c0abc3364c1bff" Mar 10 19:02:56 crc kubenswrapper[4861]: I0310 19:02:56.765256 4861 scope.go:117] "RemoveContainer" containerID="352130b80be8a4754da505cdf396bc030176cde364df8256d34e79b6c14ad49f" Mar 10 19:02:56 crc kubenswrapper[4861]: I0310 19:02:56.790999 4861 scope.go:117] "RemoveContainer" containerID="456f5e74dfb34856355cb9d5465d0dfb36cf0b180c365733a77bc5b01d659aba" Mar 10 19:02:56 crc kubenswrapper[4861]: I0310 19:02:56.791161 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-2ghqs"] Mar 10 19:02:56 crc kubenswrapper[4861]: I0310 19:02:56.791779 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-2ghqs"] Mar 10 19:02:56 crc kubenswrapper[4861]: E0310 19:02:56.791892 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"456f5e74dfb34856355cb9d5465d0dfb36cf0b180c365733a77bc5b01d659aba\": container with ID starting with 456f5e74dfb34856355cb9d5465d0dfb36cf0b180c365733a77bc5b01d659aba not found: ID does not exist" containerID="456f5e74dfb34856355cb9d5465d0dfb36cf0b180c365733a77bc5b01d659aba" Mar 10 19:02:56 crc kubenswrapper[4861]: I0310 19:02:56.791941 4861 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"456f5e74dfb34856355cb9d5465d0dfb36cf0b180c365733a77bc5b01d659aba"} err="failed to get container status \"456f5e74dfb34856355cb9d5465d0dfb36cf0b180c365733a77bc5b01d659aba\": rpc error: code = NotFound desc = could not find container \"456f5e74dfb34856355cb9d5465d0dfb36cf0b180c365733a77bc5b01d659aba\": container with ID starting with 456f5e74dfb34856355cb9d5465d0dfb36cf0b180c365733a77bc5b01d659aba not found: ID does not exist" Mar 10 19:02:56 crc kubenswrapper[4861]: I0310 19:02:56.791970 4861 scope.go:117] "RemoveContainer" containerID="ff215dbc791fab61ad359d8f90dff91199595b6fe81e2c0259c0abc3364c1bff" Mar 10 19:02:56 crc kubenswrapper[4861]: E0310 19:02:56.792296 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ff215dbc791fab61ad359d8f90dff91199595b6fe81e2c0259c0abc3364c1bff\": container with ID starting with ff215dbc791fab61ad359d8f90dff91199595b6fe81e2c0259c0abc3364c1bff not found: ID does not exist" containerID="ff215dbc791fab61ad359d8f90dff91199595b6fe81e2c0259c0abc3364c1bff" Mar 10 19:02:56 crc kubenswrapper[4861]: I0310 19:02:56.792329 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ff215dbc791fab61ad359d8f90dff91199595b6fe81e2c0259c0abc3364c1bff"} err="failed to get container status \"ff215dbc791fab61ad359d8f90dff91199595b6fe81e2c0259c0abc3364c1bff\": rpc error: code = NotFound desc = could not find container \"ff215dbc791fab61ad359d8f90dff91199595b6fe81e2c0259c0abc3364c1bff\": container with ID starting with ff215dbc791fab61ad359d8f90dff91199595b6fe81e2c0259c0abc3364c1bff not found: ID does not exist" Mar 10 19:02:56 crc kubenswrapper[4861]: I0310 19:02:56.792355 4861 scope.go:117] "RemoveContainer" containerID="352130b80be8a4754da505cdf396bc030176cde364df8256d34e79b6c14ad49f" Mar 10 19:02:56 crc kubenswrapper[4861]: E0310 
19:02:56.792589 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"352130b80be8a4754da505cdf396bc030176cde364df8256d34e79b6c14ad49f\": container with ID starting with 352130b80be8a4754da505cdf396bc030176cde364df8256d34e79b6c14ad49f not found: ID does not exist" containerID="352130b80be8a4754da505cdf396bc030176cde364df8256d34e79b6c14ad49f" Mar 10 19:02:56 crc kubenswrapper[4861]: I0310 19:02:56.792613 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"352130b80be8a4754da505cdf396bc030176cde364df8256d34e79b6c14ad49f"} err="failed to get container status \"352130b80be8a4754da505cdf396bc030176cde364df8256d34e79b6c14ad49f\": rpc error: code = NotFound desc = could not find container \"352130b80be8a4754da505cdf396bc030176cde364df8256d34e79b6c14ad49f\": container with ID starting with 352130b80be8a4754da505cdf396bc030176cde364df8256d34e79b6c14ad49f not found: ID does not exist" Mar 10 19:02:56 crc kubenswrapper[4861]: I0310 19:02:56.967820 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="636f19eb-54f2-4d6d-b96e-5d8f8a009f1e" path="/var/lib/kubelet/pods/636f19eb-54f2-4d6d-b96e-5d8f8a009f1e/volumes" Mar 10 19:02:57 crc kubenswrapper[4861]: I0310 19:02:57.693197 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82rmvb5"] Mar 10 19:02:57 crc kubenswrapper[4861]: E0310 19:02:57.693501 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="636f19eb-54f2-4d6d-b96e-5d8f8a009f1e" containerName="registry-server" Mar 10 19:02:57 crc kubenswrapper[4861]: I0310 19:02:57.693521 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="636f19eb-54f2-4d6d-b96e-5d8f8a009f1e" containerName="registry-server" Mar 10 19:02:57 crc kubenswrapper[4861]: E0310 19:02:57.693544 4861 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="0a196210-c92a-4c67-9441-9ffa06be32be" containerName="storage" Mar 10 19:02:57 crc kubenswrapper[4861]: I0310 19:02:57.693556 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="0a196210-c92a-4c67-9441-9ffa06be32be" containerName="storage" Mar 10 19:02:57 crc kubenswrapper[4861]: E0310 19:02:57.693571 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="636f19eb-54f2-4d6d-b96e-5d8f8a009f1e" containerName="extract-utilities" Mar 10 19:02:57 crc kubenswrapper[4861]: I0310 19:02:57.693584 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="636f19eb-54f2-4d6d-b96e-5d8f8a009f1e" containerName="extract-utilities" Mar 10 19:02:57 crc kubenswrapper[4861]: E0310 19:02:57.693609 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="636f19eb-54f2-4d6d-b96e-5d8f8a009f1e" containerName="extract-content" Mar 10 19:02:57 crc kubenswrapper[4861]: I0310 19:02:57.693621 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="636f19eb-54f2-4d6d-b96e-5d8f8a009f1e" containerName="extract-content" Mar 10 19:02:57 crc kubenswrapper[4861]: I0310 19:02:57.693827 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="0a196210-c92a-4c67-9441-9ffa06be32be" containerName="storage" Mar 10 19:02:57 crc kubenswrapper[4861]: I0310 19:02:57.693845 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="636f19eb-54f2-4d6d-b96e-5d8f8a009f1e" containerName="registry-server" Mar 10 19:02:57 crc kubenswrapper[4861]: I0310 19:02:57.694978 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82rmvb5" Mar 10 19:02:57 crc kubenswrapper[4861]: I0310 19:02:57.698545 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Mar 10 19:02:57 crc kubenswrapper[4861]: I0310 19:02:57.711543 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82rmvb5"] Mar 10 19:02:57 crc kubenswrapper[4861]: I0310 19:02:57.803755 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/49257ab5-4d09-4d8b-8f27-e42873f65b4d-util\") pod \"0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82rmvb5\" (UID: \"49257ab5-4d09-4d8b-8f27-e42873f65b4d\") " pod="openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82rmvb5" Mar 10 19:02:57 crc kubenswrapper[4861]: I0310 19:02:57.803827 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tvj8l\" (UniqueName: \"kubernetes.io/projected/49257ab5-4d09-4d8b-8f27-e42873f65b4d-kube-api-access-tvj8l\") pod \"0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82rmvb5\" (UID: \"49257ab5-4d09-4d8b-8f27-e42873f65b4d\") " pod="openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82rmvb5" Mar 10 19:02:57 crc kubenswrapper[4861]: I0310 19:02:57.803926 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/49257ab5-4d09-4d8b-8f27-e42873f65b4d-bundle\") pod \"0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82rmvb5\" (UID: \"49257ab5-4d09-4d8b-8f27-e42873f65b4d\") " pod="openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82rmvb5" Mar 10 19:02:57 crc kubenswrapper[4861]: 
I0310 19:02:57.905179 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/49257ab5-4d09-4d8b-8f27-e42873f65b4d-bundle\") pod \"0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82rmvb5\" (UID: \"49257ab5-4d09-4d8b-8f27-e42873f65b4d\") " pod="openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82rmvb5" Mar 10 19:02:57 crc kubenswrapper[4861]: I0310 19:02:57.905594 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/49257ab5-4d09-4d8b-8f27-e42873f65b4d-util\") pod \"0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82rmvb5\" (UID: \"49257ab5-4d09-4d8b-8f27-e42873f65b4d\") " pod="openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82rmvb5" Mar 10 19:02:57 crc kubenswrapper[4861]: I0310 19:02:57.905624 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tvj8l\" (UniqueName: \"kubernetes.io/projected/49257ab5-4d09-4d8b-8f27-e42873f65b4d-kube-api-access-tvj8l\") pod \"0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82rmvb5\" (UID: \"49257ab5-4d09-4d8b-8f27-e42873f65b4d\") " pod="openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82rmvb5" Mar 10 19:02:57 crc kubenswrapper[4861]: I0310 19:02:57.905978 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/49257ab5-4d09-4d8b-8f27-e42873f65b4d-bundle\") pod \"0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82rmvb5\" (UID: \"49257ab5-4d09-4d8b-8f27-e42873f65b4d\") " pod="openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82rmvb5" Mar 10 19:02:57 crc kubenswrapper[4861]: I0310 19:02:57.906304 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/49257ab5-4d09-4d8b-8f27-e42873f65b4d-util\") pod \"0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82rmvb5\" (UID: \"49257ab5-4d09-4d8b-8f27-e42873f65b4d\") " pod="openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82rmvb5" Mar 10 19:02:57 crc kubenswrapper[4861]: I0310 19:02:57.942000 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tvj8l\" (UniqueName: \"kubernetes.io/projected/49257ab5-4d09-4d8b-8f27-e42873f65b4d-kube-api-access-tvj8l\") pod \"0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82rmvb5\" (UID: \"49257ab5-4d09-4d8b-8f27-e42873f65b4d\") " pod="openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82rmvb5" Mar 10 19:02:58 crc kubenswrapper[4861]: I0310 19:02:58.060105 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82rmvb5" Mar 10 19:02:58 crc kubenswrapper[4861]: I0310 19:02:58.353135 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82rmvb5"] Mar 10 19:02:58 crc kubenswrapper[4861]: W0310 19:02:58.363826 4861 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod49257ab5_4d09_4d8b_8f27_e42873f65b4d.slice/crio-b6628cf42d97348b38c82875c7a4d0effebbc88d534c3f74f4bdd1abb573c577 WatchSource:0}: Error finding container b6628cf42d97348b38c82875c7a4d0effebbc88d534c3f74f4bdd1abb573c577: Status 404 returned error can't find the container with id b6628cf42d97348b38c82875c7a4d0effebbc88d534c3f74f4bdd1abb573c577 Mar 10 19:02:58 crc kubenswrapper[4861]: I0310 19:02:58.763203 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82rmvb5" 
event={"ID":"49257ab5-4d09-4d8b-8f27-e42873f65b4d","Type":"ContainerStarted","Data":"ff175de5ab2c040671500c06491df11261c35441cfce81029b7554389c4046be"} Mar 10 19:02:58 crc kubenswrapper[4861]: I0310 19:02:58.763617 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82rmvb5" event={"ID":"49257ab5-4d09-4d8b-8f27-e42873f65b4d","Type":"ContainerStarted","Data":"b6628cf42d97348b38c82875c7a4d0effebbc88d534c3f74f4bdd1abb573c577"} Mar 10 19:02:59 crc kubenswrapper[4861]: I0310 19:02:59.773639 4861 generic.go:334] "Generic (PLEG): container finished" podID="49257ab5-4d09-4d8b-8f27-e42873f65b4d" containerID="ff175de5ab2c040671500c06491df11261c35441cfce81029b7554389c4046be" exitCode=0 Mar 10 19:02:59 crc kubenswrapper[4861]: I0310 19:02:59.773771 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82rmvb5" event={"ID":"49257ab5-4d09-4d8b-8f27-e42873f65b4d","Type":"ContainerDied","Data":"ff175de5ab2c040671500c06491df11261c35441cfce81029b7554389c4046be"} Mar 10 19:03:00 crc kubenswrapper[4861]: I0310 19:03:00.643507 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-8v9ps"] Mar 10 19:03:00 crc kubenswrapper[4861]: I0310 19:03:00.645848 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-8v9ps" Mar 10 19:03:00 crc kubenswrapper[4861]: I0310 19:03:00.661276 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-8v9ps"] Mar 10 19:03:00 crc kubenswrapper[4861]: I0310 19:03:00.844431 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d3df69b3-b943-423c-8317-147c63c7e1c5-utilities\") pod \"redhat-operators-8v9ps\" (UID: \"d3df69b3-b943-423c-8317-147c63c7e1c5\") " pod="openshift-marketplace/redhat-operators-8v9ps" Mar 10 19:03:00 crc kubenswrapper[4861]: I0310 19:03:00.845475 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d3df69b3-b943-423c-8317-147c63c7e1c5-catalog-content\") pod \"redhat-operators-8v9ps\" (UID: \"d3df69b3-b943-423c-8317-147c63c7e1c5\") " pod="openshift-marketplace/redhat-operators-8v9ps" Mar 10 19:03:00 crc kubenswrapper[4861]: I0310 19:03:00.845659 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vv8x4\" (UniqueName: \"kubernetes.io/projected/d3df69b3-b943-423c-8317-147c63c7e1c5-kube-api-access-vv8x4\") pod \"redhat-operators-8v9ps\" (UID: \"d3df69b3-b943-423c-8317-147c63c7e1c5\") " pod="openshift-marketplace/redhat-operators-8v9ps" Mar 10 19:03:00 crc kubenswrapper[4861]: I0310 19:03:00.946592 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vv8x4\" (UniqueName: \"kubernetes.io/projected/d3df69b3-b943-423c-8317-147c63c7e1c5-kube-api-access-vv8x4\") pod \"redhat-operators-8v9ps\" (UID: \"d3df69b3-b943-423c-8317-147c63c7e1c5\") " pod="openshift-marketplace/redhat-operators-8v9ps" Mar 10 19:03:00 crc kubenswrapper[4861]: I0310 19:03:00.946959 4861 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d3df69b3-b943-423c-8317-147c63c7e1c5-utilities\") pod \"redhat-operators-8v9ps\" (UID: \"d3df69b3-b943-423c-8317-147c63c7e1c5\") " pod="openshift-marketplace/redhat-operators-8v9ps" Mar 10 19:03:00 crc kubenswrapper[4861]: I0310 19:03:00.947013 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d3df69b3-b943-423c-8317-147c63c7e1c5-catalog-content\") pod \"redhat-operators-8v9ps\" (UID: \"d3df69b3-b943-423c-8317-147c63c7e1c5\") " pod="openshift-marketplace/redhat-operators-8v9ps" Mar 10 19:03:00 crc kubenswrapper[4861]: I0310 19:03:00.947543 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d3df69b3-b943-423c-8317-147c63c7e1c5-utilities\") pod \"redhat-operators-8v9ps\" (UID: \"d3df69b3-b943-423c-8317-147c63c7e1c5\") " pod="openshift-marketplace/redhat-operators-8v9ps" Mar 10 19:03:00 crc kubenswrapper[4861]: I0310 19:03:00.947594 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d3df69b3-b943-423c-8317-147c63c7e1c5-catalog-content\") pod \"redhat-operators-8v9ps\" (UID: \"d3df69b3-b943-423c-8317-147c63c7e1c5\") " pod="openshift-marketplace/redhat-operators-8v9ps" Mar 10 19:03:00 crc kubenswrapper[4861]: I0310 19:03:00.966631 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vv8x4\" (UniqueName: \"kubernetes.io/projected/d3df69b3-b943-423c-8317-147c63c7e1c5-kube-api-access-vv8x4\") pod \"redhat-operators-8v9ps\" (UID: \"d3df69b3-b943-423c-8317-147c63c7e1c5\") " pod="openshift-marketplace/redhat-operators-8v9ps" Mar 10 19:03:00 crc kubenswrapper[4861]: I0310 19:03:00.979022 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-8v9ps" Mar 10 19:03:01 crc kubenswrapper[4861]: I0310 19:03:01.169318 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-8v9ps"] Mar 10 19:03:01 crc kubenswrapper[4861]: W0310 19:03:01.229284 4861 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd3df69b3_b943_423c_8317_147c63c7e1c5.slice/crio-e7abfecdf4ae6e8ed7bfdc181771d171ed334492425b8e14a75ac64fc31a9931 WatchSource:0}: Error finding container e7abfecdf4ae6e8ed7bfdc181771d171ed334492425b8e14a75ac64fc31a9931: Status 404 returned error can't find the container with id e7abfecdf4ae6e8ed7bfdc181771d171ed334492425b8e14a75ac64fc31a9931 Mar 10 19:03:01 crc kubenswrapper[4861]: I0310 19:03:01.786620 4861 generic.go:334] "Generic (PLEG): container finished" podID="d3df69b3-b943-423c-8317-147c63c7e1c5" containerID="f9f4a8f18524e235fa399b1f87ce48b76da0e982fb02fe17c19a93b18d9fd463" exitCode=0 Mar 10 19:03:01 crc kubenswrapper[4861]: I0310 19:03:01.786699 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8v9ps" event={"ID":"d3df69b3-b943-423c-8317-147c63c7e1c5","Type":"ContainerDied","Data":"f9f4a8f18524e235fa399b1f87ce48b76da0e982fb02fe17c19a93b18d9fd463"} Mar 10 19:03:01 crc kubenswrapper[4861]: I0310 19:03:01.786750 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8v9ps" event={"ID":"d3df69b3-b943-423c-8317-147c63c7e1c5","Type":"ContainerStarted","Data":"e7abfecdf4ae6e8ed7bfdc181771d171ed334492425b8e14a75ac64fc31a9931"} Mar 10 19:03:01 crc kubenswrapper[4861]: I0310 19:03:01.789642 4861 generic.go:334] "Generic (PLEG): container finished" podID="49257ab5-4d09-4d8b-8f27-e42873f65b4d" containerID="1a795760e9b4d107a0b1bf83ba5db9137438f45a8a4adce3511e48f00a866ef4" exitCode=0 Mar 10 19:03:01 crc kubenswrapper[4861]: I0310 19:03:01.789692 
4861 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 10 19:03:01 crc kubenswrapper[4861]: I0310 19:03:01.789701 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82rmvb5" event={"ID":"49257ab5-4d09-4d8b-8f27-e42873f65b4d","Type":"ContainerDied","Data":"1a795760e9b4d107a0b1bf83ba5db9137438f45a8a4adce3511e48f00a866ef4"} Mar 10 19:03:02 crc kubenswrapper[4861]: I0310 19:03:02.800484 4861 generic.go:334] "Generic (PLEG): container finished" podID="49257ab5-4d09-4d8b-8f27-e42873f65b4d" containerID="552b8c82da691b4329895293e1ab088e956994d9e985eb11d4e77a75b20ad2f2" exitCode=0 Mar 10 19:03:02 crc kubenswrapper[4861]: I0310 19:03:02.800570 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82rmvb5" event={"ID":"49257ab5-4d09-4d8b-8f27-e42873f65b4d","Type":"ContainerDied","Data":"552b8c82da691b4329895293e1ab088e956994d9e985eb11d4e77a75b20ad2f2"} Mar 10 19:03:02 crc kubenswrapper[4861]: I0310 19:03:02.805165 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8v9ps" event={"ID":"d3df69b3-b943-423c-8317-147c63c7e1c5","Type":"ContainerStarted","Data":"1c6ffc2d7d81beb18b08fa1977b64f00af4bf39ad74f2f2c70954b150988a4f6"} Mar 10 19:03:03 crc kubenswrapper[4861]: I0310 19:03:03.814335 4861 generic.go:334] "Generic (PLEG): container finished" podID="d3df69b3-b943-423c-8317-147c63c7e1c5" containerID="1c6ffc2d7d81beb18b08fa1977b64f00af4bf39ad74f2f2c70954b150988a4f6" exitCode=0 Mar 10 19:03:03 crc kubenswrapper[4861]: I0310 19:03:03.814444 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8v9ps" event={"ID":"d3df69b3-b943-423c-8317-147c63c7e1c5","Type":"ContainerDied","Data":"1c6ffc2d7d81beb18b08fa1977b64f00af4bf39ad74f2f2c70954b150988a4f6"} Mar 10 19:03:04 crc 
kubenswrapper[4861]: I0310 19:03:04.138953 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82rmvb5" Mar 10 19:03:04 crc kubenswrapper[4861]: I0310 19:03:04.293465 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/49257ab5-4d09-4d8b-8f27-e42873f65b4d-bundle\") pod \"49257ab5-4d09-4d8b-8f27-e42873f65b4d\" (UID: \"49257ab5-4d09-4d8b-8f27-e42873f65b4d\") " Mar 10 19:03:04 crc kubenswrapper[4861]: I0310 19:03:04.294099 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/49257ab5-4d09-4d8b-8f27-e42873f65b4d-util\") pod \"49257ab5-4d09-4d8b-8f27-e42873f65b4d\" (UID: \"49257ab5-4d09-4d8b-8f27-e42873f65b4d\") " Mar 10 19:03:04 crc kubenswrapper[4861]: I0310 19:03:04.294437 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tvj8l\" (UniqueName: \"kubernetes.io/projected/49257ab5-4d09-4d8b-8f27-e42873f65b4d-kube-api-access-tvj8l\") pod \"49257ab5-4d09-4d8b-8f27-e42873f65b4d\" (UID: \"49257ab5-4d09-4d8b-8f27-e42873f65b4d\") " Mar 10 19:03:04 crc kubenswrapper[4861]: I0310 19:03:04.294817 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/49257ab5-4d09-4d8b-8f27-e42873f65b4d-bundle" (OuterVolumeSpecName: "bundle") pod "49257ab5-4d09-4d8b-8f27-e42873f65b4d" (UID: "49257ab5-4d09-4d8b-8f27-e42873f65b4d"). InnerVolumeSpecName "bundle". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 10 19:03:04 crc kubenswrapper[4861]: I0310 19:03:04.295330 4861 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/49257ab5-4d09-4d8b-8f27-e42873f65b4d-bundle\") on node \"crc\" DevicePath \"\"" Mar 10 19:03:04 crc kubenswrapper[4861]: I0310 19:03:04.302542 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49257ab5-4d09-4d8b-8f27-e42873f65b4d-kube-api-access-tvj8l" (OuterVolumeSpecName: "kube-api-access-tvj8l") pod "49257ab5-4d09-4d8b-8f27-e42873f65b4d" (UID: "49257ab5-4d09-4d8b-8f27-e42873f65b4d"). InnerVolumeSpecName "kube-api-access-tvj8l". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 10 19:03:04 crc kubenswrapper[4861]: I0310 19:03:04.311963 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/49257ab5-4d09-4d8b-8f27-e42873f65b4d-util" (OuterVolumeSpecName: "util") pod "49257ab5-4d09-4d8b-8f27-e42873f65b4d" (UID: "49257ab5-4d09-4d8b-8f27-e42873f65b4d"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 10 19:03:04 crc kubenswrapper[4861]: I0310 19:03:04.396370 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tvj8l\" (UniqueName: \"kubernetes.io/projected/49257ab5-4d09-4d8b-8f27-e42873f65b4d-kube-api-access-tvj8l\") on node \"crc\" DevicePath \"\"" Mar 10 19:03:04 crc kubenswrapper[4861]: I0310 19:03:04.396420 4861 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/49257ab5-4d09-4d8b-8f27-e42873f65b4d-util\") on node \"crc\" DevicePath \"\"" Mar 10 19:03:04 crc kubenswrapper[4861]: I0310 19:03:04.825928 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8v9ps" event={"ID":"d3df69b3-b943-423c-8317-147c63c7e1c5","Type":"ContainerStarted","Data":"0262ea853e304ee042db16b158931b9ca3487a195aa799b26b7a8b72d0f98f0d"} Mar 10 19:03:04 crc kubenswrapper[4861]: I0310 19:03:04.830464 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82rmvb5" event={"ID":"49257ab5-4d09-4d8b-8f27-e42873f65b4d","Type":"ContainerDied","Data":"b6628cf42d97348b38c82875c7a4d0effebbc88d534c3f74f4bdd1abb573c577"} Mar 10 19:03:04 crc kubenswrapper[4861]: I0310 19:03:04.830520 4861 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b6628cf42d97348b38c82875c7a4d0effebbc88d534c3f74f4bdd1abb573c577" Mar 10 19:03:04 crc kubenswrapper[4861]: I0310 19:03:04.830574 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82rmvb5" Mar 10 19:03:04 crc kubenswrapper[4861]: I0310 19:03:04.850862 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-8v9ps" podStartSLOduration=2.312127457 podStartE2EDuration="4.85083325s" podCreationTimestamp="2026-03-10 19:03:00 +0000 UTC" firstStartedPulling="2026-03-10 19:03:01.789195748 +0000 UTC m=+925.552631718" lastFinishedPulling="2026-03-10 19:03:04.327901521 +0000 UTC m=+928.091337511" observedRunningTime="2026-03-10 19:03:04.846582736 +0000 UTC m=+928.610018746" watchObservedRunningTime="2026-03-10 19:03:04.85083325 +0000 UTC m=+928.614269240" Mar 10 19:03:08 crc kubenswrapper[4861]: I0310 19:03:08.274567 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-75c5dccd6c-mxv85"] Mar 10 19:03:08 crc kubenswrapper[4861]: E0310 19:03:08.274969 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="49257ab5-4d09-4d8b-8f27-e42873f65b4d" containerName="util" Mar 10 19:03:08 crc kubenswrapper[4861]: I0310 19:03:08.274980 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="49257ab5-4d09-4d8b-8f27-e42873f65b4d" containerName="util" Mar 10 19:03:08 crc kubenswrapper[4861]: E0310 19:03:08.274990 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="49257ab5-4d09-4d8b-8f27-e42873f65b4d" containerName="pull" Mar 10 19:03:08 crc kubenswrapper[4861]: I0310 19:03:08.274995 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="49257ab5-4d09-4d8b-8f27-e42873f65b4d" containerName="pull" Mar 10 19:03:08 crc kubenswrapper[4861]: E0310 19:03:08.275009 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="49257ab5-4d09-4d8b-8f27-e42873f65b4d" containerName="extract" Mar 10 19:03:08 crc kubenswrapper[4861]: I0310 19:03:08.275016 4861 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="49257ab5-4d09-4d8b-8f27-e42873f65b4d" containerName="extract" Mar 10 19:03:08 crc kubenswrapper[4861]: I0310 19:03:08.275097 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="49257ab5-4d09-4d8b-8f27-e42873f65b4d" containerName="extract" Mar 10 19:03:08 crc kubenswrapper[4861]: I0310 19:03:08.275413 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-75c5dccd6c-mxv85" Mar 10 19:03:08 crc kubenswrapper[4861]: I0310 19:03:08.278126 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-operator-dockercfg-x2542" Mar 10 19:03:08 crc kubenswrapper[4861]: I0310 19:03:08.278145 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt" Mar 10 19:03:08 crc kubenswrapper[4861]: I0310 19:03:08.278481 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt" Mar 10 19:03:08 crc kubenswrapper[4861]: I0310 19:03:08.304070 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-75c5dccd6c-mxv85"] Mar 10 19:03:08 crc kubenswrapper[4861]: I0310 19:03:08.433302 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-vcbcm"] Mar 10 19:03:08 crc kubenswrapper[4861]: I0310 19:03:08.434250 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vcbcm" Mar 10 19:03:08 crc kubenswrapper[4861]: I0310 19:03:08.439259 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-vcbcm"] Mar 10 19:03:08 crc kubenswrapper[4861]: I0310 19:03:08.450664 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p4ddd\" (UniqueName: \"kubernetes.io/projected/ecb4ad72-c43c-4a90-8808-92ca31191f2c-kube-api-access-p4ddd\") pod \"nmstate-operator-75c5dccd6c-mxv85\" (UID: \"ecb4ad72-c43c-4a90-8808-92ca31191f2c\") " pod="openshift-nmstate/nmstate-operator-75c5dccd6c-mxv85" Mar 10 19:03:08 crc kubenswrapper[4861]: I0310 19:03:08.552169 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s4rhq\" (UniqueName: \"kubernetes.io/projected/81e5c465-c161-4d8f-8f21-508b7e56cd95-kube-api-access-s4rhq\") pod \"redhat-marketplace-vcbcm\" (UID: \"81e5c465-c161-4d8f-8f21-508b7e56cd95\") " pod="openshift-marketplace/redhat-marketplace-vcbcm" Mar 10 19:03:08 crc kubenswrapper[4861]: I0310 19:03:08.552346 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/81e5c465-c161-4d8f-8f21-508b7e56cd95-utilities\") pod \"redhat-marketplace-vcbcm\" (UID: \"81e5c465-c161-4d8f-8f21-508b7e56cd95\") " pod="openshift-marketplace/redhat-marketplace-vcbcm" Mar 10 19:03:08 crc kubenswrapper[4861]: I0310 19:03:08.552450 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/81e5c465-c161-4d8f-8f21-508b7e56cd95-catalog-content\") pod \"redhat-marketplace-vcbcm\" (UID: \"81e5c465-c161-4d8f-8f21-508b7e56cd95\") " pod="openshift-marketplace/redhat-marketplace-vcbcm" Mar 10 19:03:08 crc kubenswrapper[4861]: I0310 19:03:08.552492 4861 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p4ddd\" (UniqueName: \"kubernetes.io/projected/ecb4ad72-c43c-4a90-8808-92ca31191f2c-kube-api-access-p4ddd\") pod \"nmstate-operator-75c5dccd6c-mxv85\" (UID: \"ecb4ad72-c43c-4a90-8808-92ca31191f2c\") " pod="openshift-nmstate/nmstate-operator-75c5dccd6c-mxv85" Mar 10 19:03:08 crc kubenswrapper[4861]: I0310 19:03:08.582895 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p4ddd\" (UniqueName: \"kubernetes.io/projected/ecb4ad72-c43c-4a90-8808-92ca31191f2c-kube-api-access-p4ddd\") pod \"nmstate-operator-75c5dccd6c-mxv85\" (UID: \"ecb4ad72-c43c-4a90-8808-92ca31191f2c\") " pod="openshift-nmstate/nmstate-operator-75c5dccd6c-mxv85" Mar 10 19:03:08 crc kubenswrapper[4861]: I0310 19:03:08.599848 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-75c5dccd6c-mxv85" Mar 10 19:03:08 crc kubenswrapper[4861]: I0310 19:03:08.672318 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s4rhq\" (UniqueName: \"kubernetes.io/projected/81e5c465-c161-4d8f-8f21-508b7e56cd95-kube-api-access-s4rhq\") pod \"redhat-marketplace-vcbcm\" (UID: \"81e5c465-c161-4d8f-8f21-508b7e56cd95\") " pod="openshift-marketplace/redhat-marketplace-vcbcm" Mar 10 19:03:08 crc kubenswrapper[4861]: I0310 19:03:08.672735 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/81e5c465-c161-4d8f-8f21-508b7e56cd95-utilities\") pod \"redhat-marketplace-vcbcm\" (UID: \"81e5c465-c161-4d8f-8f21-508b7e56cd95\") " pod="openshift-marketplace/redhat-marketplace-vcbcm" Mar 10 19:03:08 crc kubenswrapper[4861]: I0310 19:03:08.672815 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/81e5c465-c161-4d8f-8f21-508b7e56cd95-catalog-content\") pod \"redhat-marketplace-vcbcm\" (UID: \"81e5c465-c161-4d8f-8f21-508b7e56cd95\") " pod="openshift-marketplace/redhat-marketplace-vcbcm" Mar 10 19:03:08 crc kubenswrapper[4861]: I0310 19:03:08.673583 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/81e5c465-c161-4d8f-8f21-508b7e56cd95-catalog-content\") pod \"redhat-marketplace-vcbcm\" (UID: \"81e5c465-c161-4d8f-8f21-508b7e56cd95\") " pod="openshift-marketplace/redhat-marketplace-vcbcm" Mar 10 19:03:08 crc kubenswrapper[4861]: I0310 19:03:08.673590 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/81e5c465-c161-4d8f-8f21-508b7e56cd95-utilities\") pod \"redhat-marketplace-vcbcm\" (UID: \"81e5c465-c161-4d8f-8f21-508b7e56cd95\") " pod="openshift-marketplace/redhat-marketplace-vcbcm" Mar 10 19:03:08 crc kubenswrapper[4861]: I0310 19:03:08.695696 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s4rhq\" (UniqueName: \"kubernetes.io/projected/81e5c465-c161-4d8f-8f21-508b7e56cd95-kube-api-access-s4rhq\") pod \"redhat-marketplace-vcbcm\" (UID: \"81e5c465-c161-4d8f-8f21-508b7e56cd95\") " pod="openshift-marketplace/redhat-marketplace-vcbcm" Mar 10 19:03:08 crc kubenswrapper[4861]: I0310 19:03:08.745729 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vcbcm" Mar 10 19:03:08 crc kubenswrapper[4861]: I0310 19:03:08.828001 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-75c5dccd6c-mxv85"] Mar 10 19:03:08 crc kubenswrapper[4861]: W0310 19:03:08.838456 4861 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podecb4ad72_c43c_4a90_8808_92ca31191f2c.slice/crio-4cefb65bf903fe643ac7054b76dd5cfbc73dff9e4b0b49d77ec0ff3a98a1b606 WatchSource:0}: Error finding container 4cefb65bf903fe643ac7054b76dd5cfbc73dff9e4b0b49d77ec0ff3a98a1b606: Status 404 returned error can't find the container with id 4cefb65bf903fe643ac7054b76dd5cfbc73dff9e4b0b49d77ec0ff3a98a1b606 Mar 10 19:03:08 crc kubenswrapper[4861]: I0310 19:03:08.865805 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-75c5dccd6c-mxv85" event={"ID":"ecb4ad72-c43c-4a90-8808-92ca31191f2c","Type":"ContainerStarted","Data":"4cefb65bf903fe643ac7054b76dd5cfbc73dff9e4b0b49d77ec0ff3a98a1b606"} Mar 10 19:03:09 crc kubenswrapper[4861]: I0310 19:03:09.163826 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-vcbcm"] Mar 10 19:03:09 crc kubenswrapper[4861]: W0310 19:03:09.176116 4861 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod81e5c465_c161_4d8f_8f21_508b7e56cd95.slice/crio-4d344ae292cfd3451cbfaf5d04f951d917f5cb6bd4f129ba871fe1faaf655b84 WatchSource:0}: Error finding container 4d344ae292cfd3451cbfaf5d04f951d917f5cb6bd4f129ba871fe1faaf655b84: Status 404 returned error can't find the container with id 4d344ae292cfd3451cbfaf5d04f951d917f5cb6bd4f129ba871fe1faaf655b84 Mar 10 19:03:09 crc kubenswrapper[4861]: I0310 19:03:09.871418 4861 generic.go:334] "Generic (PLEG): container finished" podID="81e5c465-c161-4d8f-8f21-508b7e56cd95" 
containerID="c07acce8dc51583f0863e490021e48be8de6012a2af527352b7560bc8e117e9b" exitCode=0 Mar 10 19:03:09 crc kubenswrapper[4861]: I0310 19:03:09.871746 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vcbcm" event={"ID":"81e5c465-c161-4d8f-8f21-508b7e56cd95","Type":"ContainerDied","Data":"c07acce8dc51583f0863e490021e48be8de6012a2af527352b7560bc8e117e9b"} Mar 10 19:03:09 crc kubenswrapper[4861]: I0310 19:03:09.871772 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vcbcm" event={"ID":"81e5c465-c161-4d8f-8f21-508b7e56cd95","Type":"ContainerStarted","Data":"4d344ae292cfd3451cbfaf5d04f951d917f5cb6bd4f129ba871fe1faaf655b84"} Mar 10 19:03:10 crc kubenswrapper[4861]: I0310 19:03:10.979566 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-8v9ps" Mar 10 19:03:10 crc kubenswrapper[4861]: I0310 19:03:10.979693 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-8v9ps" Mar 10 19:03:11 crc kubenswrapper[4861]: I0310 19:03:11.888149 4861 generic.go:334] "Generic (PLEG): container finished" podID="81e5c465-c161-4d8f-8f21-508b7e56cd95" containerID="013dda81a397e03bc4f6f7458add65617c91d637c2d3833ebabcb6dda570b699" exitCode=0 Mar 10 19:03:11 crc kubenswrapper[4861]: I0310 19:03:11.888203 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vcbcm" event={"ID":"81e5c465-c161-4d8f-8f21-508b7e56cd95","Type":"ContainerDied","Data":"013dda81a397e03bc4f6f7458add65617c91d637c2d3833ebabcb6dda570b699"} Mar 10 19:03:11 crc kubenswrapper[4861]: I0310 19:03:11.893864 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-75c5dccd6c-mxv85" 
event={"ID":"ecb4ad72-c43c-4a90-8808-92ca31191f2c","Type":"ContainerStarted","Data":"1056f91fbc537e2164930dbb3116add3c6f78e544f624af6e67e54449cd97a01"} Mar 10 19:03:11 crc kubenswrapper[4861]: I0310 19:03:11.938750 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-75c5dccd6c-mxv85" podStartSLOduration=1.35867804 podStartE2EDuration="3.938701316s" podCreationTimestamp="2026-03-10 19:03:08 +0000 UTC" firstStartedPulling="2026-03-10 19:03:08.84256441 +0000 UTC m=+932.606000370" lastFinishedPulling="2026-03-10 19:03:11.422587686 +0000 UTC m=+935.186023646" observedRunningTime="2026-03-10 19:03:11.931675982 +0000 UTC m=+935.695111942" watchObservedRunningTime="2026-03-10 19:03:11.938701316 +0000 UTC m=+935.702137286" Mar 10 19:03:12 crc kubenswrapper[4861]: I0310 19:03:12.034008 4861 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-8v9ps" podUID="d3df69b3-b943-423c-8317-147c63c7e1c5" containerName="registry-server" probeResult="failure" output=< Mar 10 19:03:12 crc kubenswrapper[4861]: timeout: failed to connect service ":50051" within 1s Mar 10 19:03:12 crc kubenswrapper[4861]: > Mar 10 19:03:12 crc kubenswrapper[4861]: I0310 19:03:12.905848 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vcbcm" event={"ID":"81e5c465-c161-4d8f-8f21-508b7e56cd95","Type":"ContainerStarted","Data":"6561cf0314ff33615d4a5d5a2f86289ddc2cf8bbaa936aae5b6a7ad679bb1e38"} Mar 10 19:03:12 crc kubenswrapper[4861]: I0310 19:03:12.941297 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-vcbcm" podStartSLOduration=2.484095293 podStartE2EDuration="4.941271813s" podCreationTimestamp="2026-03-10 19:03:08 +0000 UTC" firstStartedPulling="2026-03-10 19:03:09.873639027 +0000 UTC m=+933.637074987" lastFinishedPulling="2026-03-10 19:03:12.330815517 +0000 UTC m=+936.094251507" 
observedRunningTime="2026-03-10 19:03:12.93363384 +0000 UTC m=+936.697069840" watchObservedRunningTime="2026-03-10 19:03:12.941271813 +0000 UTC m=+936.704707813" Mar 10 19:03:17 crc kubenswrapper[4861]: I0310 19:03:17.096087 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-69594cc75-grnnr"] Mar 10 19:03:17 crc kubenswrapper[4861]: I0310 19:03:17.097067 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-69594cc75-grnnr" Mar 10 19:03:17 crc kubenswrapper[4861]: I0310 19:03:17.099042 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-handler-dockercfg-7kshs" Mar 10 19:03:17 crc kubenswrapper[4861]: I0310 19:03:17.124492 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-69594cc75-grnnr"] Mar 10 19:03:17 crc kubenswrapper[4861]: I0310 19:03:17.131197 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-786f45cff4-l6z4v"] Mar 10 19:03:17 crc kubenswrapper[4861]: I0310 19:03:17.132260 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-786f45cff4-l6z4v" Mar 10 19:03:17 crc kubenswrapper[4861]: I0310 19:03:17.140615 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook" Mar 10 19:03:17 crc kubenswrapper[4861]: I0310 19:03:17.145959 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-786f45cff4-l6z4v"] Mar 10 19:03:17 crc kubenswrapper[4861]: I0310 19:03:17.152213 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-ptwwk"] Mar 10 19:03:17 crc kubenswrapper[4861]: I0310 19:03:17.153113 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-handler-ptwwk" Mar 10 19:03:17 crc kubenswrapper[4861]: I0310 19:03:17.217389 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-45xqr\" (UniqueName: \"kubernetes.io/projected/1a246ed4-9906-4d0b-85bd-fab444e3e5e8-kube-api-access-45xqr\") pod \"nmstate-metrics-69594cc75-grnnr\" (UID: \"1a246ed4-9906-4d0b-85bd-fab444e3e5e8\") " pod="openshift-nmstate/nmstate-metrics-69594cc75-grnnr" Mar 10 19:03:17 crc kubenswrapper[4861]: I0310 19:03:17.244953 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-5dcbbd79cf-q2p6w"] Mar 10 19:03:17 crc kubenswrapper[4861]: I0310 19:03:17.245919 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-5dcbbd79cf-q2p6w" Mar 10 19:03:17 crc kubenswrapper[4861]: I0310 19:03:17.247882 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"default-dockercfg-lqxjs" Mar 10 19:03:17 crc kubenswrapper[4861]: I0310 19:03:17.249131 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert" Mar 10 19:03:17 crc kubenswrapper[4861]: I0310 19:03:17.249277 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf" Mar 10 19:03:17 crc kubenswrapper[4861]: I0310 19:03:17.264819 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-5dcbbd79cf-q2p6w"] Mar 10 19:03:17 crc kubenswrapper[4861]: I0310 19:03:17.319248 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-djltc\" (UniqueName: \"kubernetes.io/projected/511c207e-fabc-45ab-8da9-287d3a7d3889-kube-api-access-djltc\") pod \"nmstate-webhook-786f45cff4-l6z4v\" (UID: \"511c207e-fabc-45ab-8da9-287d3a7d3889\") " 
pod="openshift-nmstate/nmstate-webhook-786f45cff4-l6z4v" Mar 10 19:03:17 crc kubenswrapper[4861]: I0310 19:03:17.319308 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-45xqr\" (UniqueName: \"kubernetes.io/projected/1a246ed4-9906-4d0b-85bd-fab444e3e5e8-kube-api-access-45xqr\") pod \"nmstate-metrics-69594cc75-grnnr\" (UID: \"1a246ed4-9906-4d0b-85bd-fab444e3e5e8\") " pod="openshift-nmstate/nmstate-metrics-69594cc75-grnnr" Mar 10 19:03:17 crc kubenswrapper[4861]: I0310 19:03:17.319339 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rnxft\" (UniqueName: \"kubernetes.io/projected/885ff0f1-fcc9-4add-8284-2af1e15a4c2f-kube-api-access-rnxft\") pod \"nmstate-handler-ptwwk\" (UID: \"885ff0f1-fcc9-4add-8284-2af1e15a4c2f\") " pod="openshift-nmstate/nmstate-handler-ptwwk" Mar 10 19:03:17 crc kubenswrapper[4861]: I0310 19:03:17.319511 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/511c207e-fabc-45ab-8da9-287d3a7d3889-tls-key-pair\") pod \"nmstate-webhook-786f45cff4-l6z4v\" (UID: \"511c207e-fabc-45ab-8da9-287d3a7d3889\") " pod="openshift-nmstate/nmstate-webhook-786f45cff4-l6z4v" Mar 10 19:03:17 crc kubenswrapper[4861]: I0310 19:03:17.319730 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/885ff0f1-fcc9-4add-8284-2af1e15a4c2f-nmstate-lock\") pod \"nmstate-handler-ptwwk\" (UID: \"885ff0f1-fcc9-4add-8284-2af1e15a4c2f\") " pod="openshift-nmstate/nmstate-handler-ptwwk" Mar 10 19:03:17 crc kubenswrapper[4861]: I0310 19:03:17.319763 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/885ff0f1-fcc9-4add-8284-2af1e15a4c2f-ovs-socket\") pod 
\"nmstate-handler-ptwwk\" (UID: \"885ff0f1-fcc9-4add-8284-2af1e15a4c2f\") " pod="openshift-nmstate/nmstate-handler-ptwwk" Mar 10 19:03:17 crc kubenswrapper[4861]: I0310 19:03:17.319880 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/885ff0f1-fcc9-4add-8284-2af1e15a4c2f-dbus-socket\") pod \"nmstate-handler-ptwwk\" (UID: \"885ff0f1-fcc9-4add-8284-2af1e15a4c2f\") " pod="openshift-nmstate/nmstate-handler-ptwwk" Mar 10 19:03:17 crc kubenswrapper[4861]: I0310 19:03:17.339283 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-45xqr\" (UniqueName: \"kubernetes.io/projected/1a246ed4-9906-4d0b-85bd-fab444e3e5e8-kube-api-access-45xqr\") pod \"nmstate-metrics-69594cc75-grnnr\" (UID: \"1a246ed4-9906-4d0b-85bd-fab444e3e5e8\") " pod="openshift-nmstate/nmstate-metrics-69594cc75-grnnr" Mar 10 19:03:17 crc kubenswrapper[4861]: I0310 19:03:17.417466 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-metrics-69594cc75-grnnr" Mar 10 19:03:17 crc kubenswrapper[4861]: I0310 19:03:17.420665 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/511c207e-fabc-45ab-8da9-287d3a7d3889-tls-key-pair\") pod \"nmstate-webhook-786f45cff4-l6z4v\" (UID: \"511c207e-fabc-45ab-8da9-287d3a7d3889\") " pod="openshift-nmstate/nmstate-webhook-786f45cff4-l6z4v" Mar 10 19:03:17 crc kubenswrapper[4861]: I0310 19:03:17.420787 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5b4nb\" (UniqueName: \"kubernetes.io/projected/31bb99e7-c4f3-4a27-99a8-e81527592813-kube-api-access-5b4nb\") pod \"nmstate-console-plugin-5dcbbd79cf-q2p6w\" (UID: \"31bb99e7-c4f3-4a27-99a8-e81527592813\") " pod="openshift-nmstate/nmstate-console-plugin-5dcbbd79cf-q2p6w" Mar 10 19:03:17 crc kubenswrapper[4861]: I0310 19:03:17.420819 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/885ff0f1-fcc9-4add-8284-2af1e15a4c2f-nmstate-lock\") pod \"nmstate-handler-ptwwk\" (UID: \"885ff0f1-fcc9-4add-8284-2af1e15a4c2f\") " pod="openshift-nmstate/nmstate-handler-ptwwk" Mar 10 19:03:17 crc kubenswrapper[4861]: I0310 19:03:17.420842 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/885ff0f1-fcc9-4add-8284-2af1e15a4c2f-ovs-socket\") pod \"nmstate-handler-ptwwk\" (UID: \"885ff0f1-fcc9-4add-8284-2af1e15a4c2f\") " pod="openshift-nmstate/nmstate-handler-ptwwk" Mar 10 19:03:17 crc kubenswrapper[4861]: I0310 19:03:17.420869 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/31bb99e7-c4f3-4a27-99a8-e81527592813-nginx-conf\") pod 
\"nmstate-console-plugin-5dcbbd79cf-q2p6w\" (UID: \"31bb99e7-c4f3-4a27-99a8-e81527592813\") " pod="openshift-nmstate/nmstate-console-plugin-5dcbbd79cf-q2p6w" Mar 10 19:03:17 crc kubenswrapper[4861]: I0310 19:03:17.420899 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/31bb99e7-c4f3-4a27-99a8-e81527592813-plugin-serving-cert\") pod \"nmstate-console-plugin-5dcbbd79cf-q2p6w\" (UID: \"31bb99e7-c4f3-4a27-99a8-e81527592813\") " pod="openshift-nmstate/nmstate-console-plugin-5dcbbd79cf-q2p6w" Mar 10 19:03:17 crc kubenswrapper[4861]: I0310 19:03:17.420924 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/885ff0f1-fcc9-4add-8284-2af1e15a4c2f-dbus-socket\") pod \"nmstate-handler-ptwwk\" (UID: \"885ff0f1-fcc9-4add-8284-2af1e15a4c2f\") " pod="openshift-nmstate/nmstate-handler-ptwwk" Mar 10 19:03:17 crc kubenswrapper[4861]: I0310 19:03:17.420930 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/885ff0f1-fcc9-4add-8284-2af1e15a4c2f-ovs-socket\") pod \"nmstate-handler-ptwwk\" (UID: \"885ff0f1-fcc9-4add-8284-2af1e15a4c2f\") " pod="openshift-nmstate/nmstate-handler-ptwwk" Mar 10 19:03:17 crc kubenswrapper[4861]: I0310 19:03:17.420958 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-djltc\" (UniqueName: \"kubernetes.io/projected/511c207e-fabc-45ab-8da9-287d3a7d3889-kube-api-access-djltc\") pod \"nmstate-webhook-786f45cff4-l6z4v\" (UID: \"511c207e-fabc-45ab-8da9-287d3a7d3889\") " pod="openshift-nmstate/nmstate-webhook-786f45cff4-l6z4v" Mar 10 19:03:17 crc kubenswrapper[4861]: I0310 19:03:17.421069 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rnxft\" (UniqueName: 
\"kubernetes.io/projected/885ff0f1-fcc9-4add-8284-2af1e15a4c2f-kube-api-access-rnxft\") pod \"nmstate-handler-ptwwk\" (UID: \"885ff0f1-fcc9-4add-8284-2af1e15a4c2f\") " pod="openshift-nmstate/nmstate-handler-ptwwk" Mar 10 19:03:17 crc kubenswrapper[4861]: I0310 19:03:17.421144 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/885ff0f1-fcc9-4add-8284-2af1e15a4c2f-dbus-socket\") pod \"nmstate-handler-ptwwk\" (UID: \"885ff0f1-fcc9-4add-8284-2af1e15a4c2f\") " pod="openshift-nmstate/nmstate-handler-ptwwk" Mar 10 19:03:17 crc kubenswrapper[4861]: I0310 19:03:17.421212 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/885ff0f1-fcc9-4add-8284-2af1e15a4c2f-nmstate-lock\") pod \"nmstate-handler-ptwwk\" (UID: \"885ff0f1-fcc9-4add-8284-2af1e15a4c2f\") " pod="openshift-nmstate/nmstate-handler-ptwwk" Mar 10 19:03:17 crc kubenswrapper[4861]: I0310 19:03:17.424466 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-5b5fbf8df9-k8pzt"] Mar 10 19:03:17 crc kubenswrapper[4861]: I0310 19:03:17.425307 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-5b5fbf8df9-k8pzt" Mar 10 19:03:17 crc kubenswrapper[4861]: I0310 19:03:17.430093 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/511c207e-fabc-45ab-8da9-287d3a7d3889-tls-key-pair\") pod \"nmstate-webhook-786f45cff4-l6z4v\" (UID: \"511c207e-fabc-45ab-8da9-287d3a7d3889\") " pod="openshift-nmstate/nmstate-webhook-786f45cff4-l6z4v" Mar 10 19:03:17 crc kubenswrapper[4861]: I0310 19:03:17.441158 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-5b5fbf8df9-k8pzt"] Mar 10 19:03:17 crc kubenswrapper[4861]: I0310 19:03:17.457558 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-djltc\" (UniqueName: \"kubernetes.io/projected/511c207e-fabc-45ab-8da9-287d3a7d3889-kube-api-access-djltc\") pod \"nmstate-webhook-786f45cff4-l6z4v\" (UID: \"511c207e-fabc-45ab-8da9-287d3a7d3889\") " pod="openshift-nmstate/nmstate-webhook-786f45cff4-l6z4v" Mar 10 19:03:17 crc kubenswrapper[4861]: I0310 19:03:17.461501 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rnxft\" (UniqueName: \"kubernetes.io/projected/885ff0f1-fcc9-4add-8284-2af1e15a4c2f-kube-api-access-rnxft\") pod \"nmstate-handler-ptwwk\" (UID: \"885ff0f1-fcc9-4add-8284-2af1e15a4c2f\") " pod="openshift-nmstate/nmstate-handler-ptwwk" Mar 10 19:03:17 crc kubenswrapper[4861]: I0310 19:03:17.472660 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-786f45cff4-l6z4v" Mar 10 19:03:17 crc kubenswrapper[4861]: I0310 19:03:17.482954 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-handler-ptwwk" Mar 10 19:03:17 crc kubenswrapper[4861]: W0310 19:03:17.517832 4861 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod885ff0f1_fcc9_4add_8284_2af1e15a4c2f.slice/crio-42cccd447ecc5d3c244dec9ae9c9593a077bba24904eab13adc9170a05150706 WatchSource:0}: Error finding container 42cccd447ecc5d3c244dec9ae9c9593a077bba24904eab13adc9170a05150706: Status 404 returned error can't find the container with id 42cccd447ecc5d3c244dec9ae9c9593a077bba24904eab13adc9170a05150706 Mar 10 19:03:17 crc kubenswrapper[4861]: I0310 19:03:17.521868 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/c06e99e6-3892-49c3-a290-f66a86a3993b-console-serving-cert\") pod \"console-5b5fbf8df9-k8pzt\" (UID: \"c06e99e6-3892-49c3-a290-f66a86a3993b\") " pod="openshift-console/console-5b5fbf8df9-k8pzt" Mar 10 19:03:17 crc kubenswrapper[4861]: I0310 19:03:17.521924 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/c06e99e6-3892-49c3-a290-f66a86a3993b-service-ca\") pod \"console-5b5fbf8df9-k8pzt\" (UID: \"c06e99e6-3892-49c3-a290-f66a86a3993b\") " pod="openshift-console/console-5b5fbf8df9-k8pzt" Mar 10 19:03:17 crc kubenswrapper[4861]: I0310 19:03:17.521963 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/c06e99e6-3892-49c3-a290-f66a86a3993b-console-config\") pod \"console-5b5fbf8df9-k8pzt\" (UID: \"c06e99e6-3892-49c3-a290-f66a86a3993b\") " pod="openshift-console/console-5b5fbf8df9-k8pzt" Mar 10 19:03:17 crc kubenswrapper[4861]: I0310 19:03:17.521988 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/c06e99e6-3892-49c3-a290-f66a86a3993b-oauth-serving-cert\") pod \"console-5b5fbf8df9-k8pzt\" (UID: \"c06e99e6-3892-49c3-a290-f66a86a3993b\") " pod="openshift-console/console-5b5fbf8df9-k8pzt" Mar 10 19:03:17 crc kubenswrapper[4861]: I0310 19:03:17.522062 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c06e99e6-3892-49c3-a290-f66a86a3993b-trusted-ca-bundle\") pod \"console-5b5fbf8df9-k8pzt\" (UID: \"c06e99e6-3892-49c3-a290-f66a86a3993b\") " pod="openshift-console/console-5b5fbf8df9-k8pzt" Mar 10 19:03:17 crc kubenswrapper[4861]: I0310 19:03:17.522091 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-578rl\" (UniqueName: \"kubernetes.io/projected/c06e99e6-3892-49c3-a290-f66a86a3993b-kube-api-access-578rl\") pod \"console-5b5fbf8df9-k8pzt\" (UID: \"c06e99e6-3892-49c3-a290-f66a86a3993b\") " pod="openshift-console/console-5b5fbf8df9-k8pzt" Mar 10 19:03:17 crc kubenswrapper[4861]: I0310 19:03:17.522128 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5b4nb\" (UniqueName: \"kubernetes.io/projected/31bb99e7-c4f3-4a27-99a8-e81527592813-kube-api-access-5b4nb\") pod \"nmstate-console-plugin-5dcbbd79cf-q2p6w\" (UID: \"31bb99e7-c4f3-4a27-99a8-e81527592813\") " pod="openshift-nmstate/nmstate-console-plugin-5dcbbd79cf-q2p6w" Mar 10 19:03:17 crc kubenswrapper[4861]: I0310 19:03:17.522162 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/c06e99e6-3892-49c3-a290-f66a86a3993b-console-oauth-config\") pod \"console-5b5fbf8df9-k8pzt\" (UID: \"c06e99e6-3892-49c3-a290-f66a86a3993b\") " pod="openshift-console/console-5b5fbf8df9-k8pzt" Mar 10 19:03:17 crc kubenswrapper[4861]: I0310 
19:03:17.522188 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/31bb99e7-c4f3-4a27-99a8-e81527592813-nginx-conf\") pod \"nmstate-console-plugin-5dcbbd79cf-q2p6w\" (UID: \"31bb99e7-c4f3-4a27-99a8-e81527592813\") " pod="openshift-nmstate/nmstate-console-plugin-5dcbbd79cf-q2p6w" Mar 10 19:03:17 crc kubenswrapper[4861]: I0310 19:03:17.522217 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/31bb99e7-c4f3-4a27-99a8-e81527592813-plugin-serving-cert\") pod \"nmstate-console-plugin-5dcbbd79cf-q2p6w\" (UID: \"31bb99e7-c4f3-4a27-99a8-e81527592813\") " pod="openshift-nmstate/nmstate-console-plugin-5dcbbd79cf-q2p6w" Mar 10 19:03:17 crc kubenswrapper[4861]: I0310 19:03:17.523742 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/31bb99e7-c4f3-4a27-99a8-e81527592813-nginx-conf\") pod \"nmstate-console-plugin-5dcbbd79cf-q2p6w\" (UID: \"31bb99e7-c4f3-4a27-99a8-e81527592813\") " pod="openshift-nmstate/nmstate-console-plugin-5dcbbd79cf-q2p6w" Mar 10 19:03:17 crc kubenswrapper[4861]: I0310 19:03:17.535400 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/31bb99e7-c4f3-4a27-99a8-e81527592813-plugin-serving-cert\") pod \"nmstate-console-plugin-5dcbbd79cf-q2p6w\" (UID: \"31bb99e7-c4f3-4a27-99a8-e81527592813\") " pod="openshift-nmstate/nmstate-console-plugin-5dcbbd79cf-q2p6w" Mar 10 19:03:17 crc kubenswrapper[4861]: I0310 19:03:17.544070 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5b4nb\" (UniqueName: \"kubernetes.io/projected/31bb99e7-c4f3-4a27-99a8-e81527592813-kube-api-access-5b4nb\") pod \"nmstate-console-plugin-5dcbbd79cf-q2p6w\" (UID: \"31bb99e7-c4f3-4a27-99a8-e81527592813\") " 
pod="openshift-nmstate/nmstate-console-plugin-5dcbbd79cf-q2p6w" Mar 10 19:03:17 crc kubenswrapper[4861]: I0310 19:03:17.560218 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-5dcbbd79cf-q2p6w" Mar 10 19:03:17 crc kubenswrapper[4861]: I0310 19:03:17.623178 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/c06e99e6-3892-49c3-a290-f66a86a3993b-console-oauth-config\") pod \"console-5b5fbf8df9-k8pzt\" (UID: \"c06e99e6-3892-49c3-a290-f66a86a3993b\") " pod="openshift-console/console-5b5fbf8df9-k8pzt" Mar 10 19:03:17 crc kubenswrapper[4861]: I0310 19:03:17.623320 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/c06e99e6-3892-49c3-a290-f66a86a3993b-console-serving-cert\") pod \"console-5b5fbf8df9-k8pzt\" (UID: \"c06e99e6-3892-49c3-a290-f66a86a3993b\") " pod="openshift-console/console-5b5fbf8df9-k8pzt" Mar 10 19:03:17 crc kubenswrapper[4861]: I0310 19:03:17.623359 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/c06e99e6-3892-49c3-a290-f66a86a3993b-service-ca\") pod \"console-5b5fbf8df9-k8pzt\" (UID: \"c06e99e6-3892-49c3-a290-f66a86a3993b\") " pod="openshift-console/console-5b5fbf8df9-k8pzt" Mar 10 19:03:17 crc kubenswrapper[4861]: I0310 19:03:17.623408 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/c06e99e6-3892-49c3-a290-f66a86a3993b-console-config\") pod \"console-5b5fbf8df9-k8pzt\" (UID: \"c06e99e6-3892-49c3-a290-f66a86a3993b\") " pod="openshift-console/console-5b5fbf8df9-k8pzt" Mar 10 19:03:17 crc kubenswrapper[4861]: I0310 19:03:17.623476 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" 
(UniqueName: \"kubernetes.io/configmap/c06e99e6-3892-49c3-a290-f66a86a3993b-oauth-serving-cert\") pod \"console-5b5fbf8df9-k8pzt\" (UID: \"c06e99e6-3892-49c3-a290-f66a86a3993b\") " pod="openshift-console/console-5b5fbf8df9-k8pzt" Mar 10 19:03:17 crc kubenswrapper[4861]: I0310 19:03:17.623521 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c06e99e6-3892-49c3-a290-f66a86a3993b-trusted-ca-bundle\") pod \"console-5b5fbf8df9-k8pzt\" (UID: \"c06e99e6-3892-49c3-a290-f66a86a3993b\") " pod="openshift-console/console-5b5fbf8df9-k8pzt" Mar 10 19:03:17 crc kubenswrapper[4861]: I0310 19:03:17.623568 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-578rl\" (UniqueName: \"kubernetes.io/projected/c06e99e6-3892-49c3-a290-f66a86a3993b-kube-api-access-578rl\") pod \"console-5b5fbf8df9-k8pzt\" (UID: \"c06e99e6-3892-49c3-a290-f66a86a3993b\") " pod="openshift-console/console-5b5fbf8df9-k8pzt" Mar 10 19:03:17 crc kubenswrapper[4861]: I0310 19:03:17.624299 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/c06e99e6-3892-49c3-a290-f66a86a3993b-console-config\") pod \"console-5b5fbf8df9-k8pzt\" (UID: \"c06e99e6-3892-49c3-a290-f66a86a3993b\") " pod="openshift-console/console-5b5fbf8df9-k8pzt" Mar 10 19:03:17 crc kubenswrapper[4861]: I0310 19:03:17.624573 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c06e99e6-3892-49c3-a290-f66a86a3993b-trusted-ca-bundle\") pod \"console-5b5fbf8df9-k8pzt\" (UID: \"c06e99e6-3892-49c3-a290-f66a86a3993b\") " pod="openshift-console/console-5b5fbf8df9-k8pzt" Mar 10 19:03:17 crc kubenswrapper[4861]: I0310 19:03:17.624958 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: 
\"kubernetes.io/configmap/c06e99e6-3892-49c3-a290-f66a86a3993b-oauth-serving-cert\") pod \"console-5b5fbf8df9-k8pzt\" (UID: \"c06e99e6-3892-49c3-a290-f66a86a3993b\") " pod="openshift-console/console-5b5fbf8df9-k8pzt" Mar 10 19:03:17 crc kubenswrapper[4861]: I0310 19:03:17.625546 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/c06e99e6-3892-49c3-a290-f66a86a3993b-service-ca\") pod \"console-5b5fbf8df9-k8pzt\" (UID: \"c06e99e6-3892-49c3-a290-f66a86a3993b\") " pod="openshift-console/console-5b5fbf8df9-k8pzt" Mar 10 19:03:17 crc kubenswrapper[4861]: I0310 19:03:17.627499 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/c06e99e6-3892-49c3-a290-f66a86a3993b-console-oauth-config\") pod \"console-5b5fbf8df9-k8pzt\" (UID: \"c06e99e6-3892-49c3-a290-f66a86a3993b\") " pod="openshift-console/console-5b5fbf8df9-k8pzt" Mar 10 19:03:17 crc kubenswrapper[4861]: I0310 19:03:17.628199 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/c06e99e6-3892-49c3-a290-f66a86a3993b-console-serving-cert\") pod \"console-5b5fbf8df9-k8pzt\" (UID: \"c06e99e6-3892-49c3-a290-f66a86a3993b\") " pod="openshift-console/console-5b5fbf8df9-k8pzt" Mar 10 19:03:17 crc kubenswrapper[4861]: I0310 19:03:17.641550 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-578rl\" (UniqueName: \"kubernetes.io/projected/c06e99e6-3892-49c3-a290-f66a86a3993b-kube-api-access-578rl\") pod \"console-5b5fbf8df9-k8pzt\" (UID: \"c06e99e6-3892-49c3-a290-f66a86a3993b\") " pod="openshift-console/console-5b5fbf8df9-k8pzt" Mar 10 19:03:17 crc kubenswrapper[4861]: I0310 19:03:17.688573 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-69594cc75-grnnr"] Mar 10 19:03:17 crc kubenswrapper[4861]: W0310 
19:03:17.692557 4861 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1a246ed4_9906_4d0b_85bd_fab444e3e5e8.slice/crio-3489ca33b572f35753e0d82ed7be2e3425e24d5775a97f8e5191b14422ac4429 WatchSource:0}: Error finding container 3489ca33b572f35753e0d82ed7be2e3425e24d5775a97f8e5191b14422ac4429: Status 404 returned error can't find the container with id 3489ca33b572f35753e0d82ed7be2e3425e24d5775a97f8e5191b14422ac4429 Mar 10 19:03:17 crc kubenswrapper[4861]: I0310 19:03:17.728474 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-786f45cff4-l6z4v"] Mar 10 19:03:17 crc kubenswrapper[4861]: W0310 19:03:17.739682 4861 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod511c207e_fabc_45ab_8da9_287d3a7d3889.slice/crio-d2700fbfc60ee41f6e5b37f6dd423f1eda185016bd7da9f738d3a3d3b833e659 WatchSource:0}: Error finding container d2700fbfc60ee41f6e5b37f6dd423f1eda185016bd7da9f738d3a3d3b833e659: Status 404 returned error can't find the container with id d2700fbfc60ee41f6e5b37f6dd423f1eda185016bd7da9f738d3a3d3b833e659 Mar 10 19:03:17 crc kubenswrapper[4861]: I0310 19:03:17.765634 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-5dcbbd79cf-q2p6w"] Mar 10 19:03:17 crc kubenswrapper[4861]: W0310 19:03:17.770771 4861 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod31bb99e7_c4f3_4a27_99a8_e81527592813.slice/crio-eec1024d9d5e37080f15661a9759618cd25ab411b20285f56263350028600bab WatchSource:0}: Error finding container eec1024d9d5e37080f15661a9759618cd25ab411b20285f56263350028600bab: Status 404 returned error can't find the container with id eec1024d9d5e37080f15661a9759618cd25ab411b20285f56263350028600bab Mar 10 19:03:17 crc kubenswrapper[4861]: I0310 19:03:17.794972 4861 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-5b5fbf8df9-k8pzt" Mar 10 19:03:17 crc kubenswrapper[4861]: I0310 19:03:17.939854 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-69594cc75-grnnr" event={"ID":"1a246ed4-9906-4d0b-85bd-fab444e3e5e8","Type":"ContainerStarted","Data":"3489ca33b572f35753e0d82ed7be2e3425e24d5775a97f8e5191b14422ac4429"} Mar 10 19:03:17 crc kubenswrapper[4861]: I0310 19:03:17.943450 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-786f45cff4-l6z4v" event={"ID":"511c207e-fabc-45ab-8da9-287d3a7d3889","Type":"ContainerStarted","Data":"d2700fbfc60ee41f6e5b37f6dd423f1eda185016bd7da9f738d3a3d3b833e659"} Mar 10 19:03:17 crc kubenswrapper[4861]: I0310 19:03:17.944420 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-5dcbbd79cf-q2p6w" event={"ID":"31bb99e7-c4f3-4a27-99a8-e81527592813","Type":"ContainerStarted","Data":"eec1024d9d5e37080f15661a9759618cd25ab411b20285f56263350028600bab"} Mar 10 19:03:17 crc kubenswrapper[4861]: I0310 19:03:17.946023 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-ptwwk" event={"ID":"885ff0f1-fcc9-4add-8284-2af1e15a4c2f","Type":"ContainerStarted","Data":"42cccd447ecc5d3c244dec9ae9c9593a077bba24904eab13adc9170a05150706"} Mar 10 19:03:17 crc kubenswrapper[4861]: I0310 19:03:17.972233 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-5b5fbf8df9-k8pzt"] Mar 10 19:03:17 crc kubenswrapper[4861]: W0310 19:03:17.980557 4861 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc06e99e6_3892_49c3_a290_f66a86a3993b.slice/crio-0d92ff928c796304fef8f8a3e8a4beb6dce735a914a9b16348bd9814ff32e05f WatchSource:0}: Error finding container 
0d92ff928c796304fef8f8a3e8a4beb6dce735a914a9b16348bd9814ff32e05f: Status 404 returned error can't find the container with id 0d92ff928c796304fef8f8a3e8a4beb6dce735a914a9b16348bd9814ff32e05f Mar 10 19:03:18 crc kubenswrapper[4861]: I0310 19:03:18.746446 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-vcbcm" Mar 10 19:03:18 crc kubenswrapper[4861]: I0310 19:03:18.746513 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-vcbcm" Mar 10 19:03:18 crc kubenswrapper[4861]: I0310 19:03:18.832133 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-vcbcm" Mar 10 19:03:18 crc kubenswrapper[4861]: I0310 19:03:18.952601 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-5b5fbf8df9-k8pzt" event={"ID":"c06e99e6-3892-49c3-a290-f66a86a3993b","Type":"ContainerStarted","Data":"16746926bd5e67970852e42e235d444e67e48e28652b710983473af5d4ec61e8"} Mar 10 19:03:18 crc kubenswrapper[4861]: I0310 19:03:18.952646 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-5b5fbf8df9-k8pzt" event={"ID":"c06e99e6-3892-49c3-a290-f66a86a3993b","Type":"ContainerStarted","Data":"0d92ff928c796304fef8f8a3e8a4beb6dce735a914a9b16348bd9814ff32e05f"} Mar 10 19:03:18 crc kubenswrapper[4861]: I0310 19:03:18.980069 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-5b5fbf8df9-k8pzt" podStartSLOduration=1.980050738 podStartE2EDuration="1.980050738s" podCreationTimestamp="2026-03-10 19:03:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-10 19:03:18.973842787 +0000 UTC m=+942.737278767" watchObservedRunningTime="2026-03-10 19:03:18.980050738 +0000 UTC m=+942.743486688" Mar 10 19:03:19 crc 
kubenswrapper[4861]: I0310 19:03:19.017827 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-vcbcm" Mar 10 19:03:21 crc kubenswrapper[4861]: I0310 19:03:21.078418 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-8v9ps" Mar 10 19:03:21 crc kubenswrapper[4861]: I0310 19:03:21.127784 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-8v9ps" Mar 10 19:03:21 crc kubenswrapper[4861]: I0310 19:03:21.227593 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-vcbcm"] Mar 10 19:03:21 crc kubenswrapper[4861]: I0310 19:03:21.983800 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-vcbcm" podUID="81e5c465-c161-4d8f-8f21-508b7e56cd95" containerName="registry-server" containerID="cri-o://6561cf0314ff33615d4a5d5a2f86289ddc2cf8bbaa936aae5b6a7ad679bb1e38" gracePeriod=2 Mar 10 19:03:21 crc kubenswrapper[4861]: I0310 19:03:21.992418 4861 patch_prober.go:28] interesting pod/machine-config-daemon-qttbr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 10 19:03:21 crc kubenswrapper[4861]: I0310 19:03:21.992485 4861 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qttbr" podUID="771189c2-452d-4204-a0b7-abfe9ba62bd0" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 10 19:03:21 crc kubenswrapper[4861]: I0310 19:03:21.992542 4861 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" 
pod="openshift-machine-config-operator/machine-config-daemon-qttbr" Mar 10 19:03:21 crc kubenswrapper[4861]: I0310 19:03:21.993043 4861 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"524ff77cbb2ece0094ac12e0d38120bb8410468f37ff11e7edde1dc4c7082951"} pod="openshift-machine-config-operator/machine-config-daemon-qttbr" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 10 19:03:21 crc kubenswrapper[4861]: I0310 19:03:21.993112 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-qttbr" podUID="771189c2-452d-4204-a0b7-abfe9ba62bd0" containerName="machine-config-daemon" containerID="cri-o://524ff77cbb2ece0094ac12e0d38120bb8410468f37ff11e7edde1dc4c7082951" gracePeriod=600 Mar 10 19:03:22 crc kubenswrapper[4861]: I0310 19:03:22.995680 4861 generic.go:334] "Generic (PLEG): container finished" podID="81e5c465-c161-4d8f-8f21-508b7e56cd95" containerID="6561cf0314ff33615d4a5d5a2f86289ddc2cf8bbaa936aae5b6a7ad679bb1e38" exitCode=0 Mar 10 19:03:22 crc kubenswrapper[4861]: I0310 19:03:22.995754 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vcbcm" event={"ID":"81e5c465-c161-4d8f-8f21-508b7e56cd95","Type":"ContainerDied","Data":"6561cf0314ff33615d4a5d5a2f86289ddc2cf8bbaa936aae5b6a7ad679bb1e38"} Mar 10 19:03:23 crc kubenswrapper[4861]: I0310 19:03:23.000386 4861 generic.go:334] "Generic (PLEG): container finished" podID="771189c2-452d-4204-a0b7-abfe9ba62bd0" containerID="524ff77cbb2ece0094ac12e0d38120bb8410468f37ff11e7edde1dc4c7082951" exitCode=0 Mar 10 19:03:23 crc kubenswrapper[4861]: I0310 19:03:23.000448 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qttbr" 
event={"ID":"771189c2-452d-4204-a0b7-abfe9ba62bd0","Type":"ContainerDied","Data":"524ff77cbb2ece0094ac12e0d38120bb8410468f37ff11e7edde1dc4c7082951"} Mar 10 19:03:23 crc kubenswrapper[4861]: I0310 19:03:23.000480 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qttbr" event={"ID":"771189c2-452d-4204-a0b7-abfe9ba62bd0","Type":"ContainerStarted","Data":"ad21d8f400c7b7525b1502f8f827dc33dea8661c3e4157be937c7ca43fd01014"} Mar 10 19:03:23 crc kubenswrapper[4861]: I0310 19:03:23.000498 4861 scope.go:117] "RemoveContainer" containerID="e68bdef632db05ab7bbd34b45c2c1a40878e3b61d13aac06e84c64a884230517" Mar 10 19:03:23 crc kubenswrapper[4861]: I0310 19:03:23.007313 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-786f45cff4-l6z4v" event={"ID":"511c207e-fabc-45ab-8da9-287d3a7d3889","Type":"ContainerStarted","Data":"b21632077fd424b6a498fe4a5355657ff5cee7ce8e49c4feb5de828be3773a8e"} Mar 10 19:03:23 crc kubenswrapper[4861]: I0310 19:03:23.007971 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-786f45cff4-l6z4v" Mar 10 19:03:23 crc kubenswrapper[4861]: I0310 19:03:23.009868 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-5dcbbd79cf-q2p6w" event={"ID":"31bb99e7-c4f3-4a27-99a8-e81527592813","Type":"ContainerStarted","Data":"22a3fe53349eed6bb6c64f6485e13863c4ac4eaee80a5a2a3930b2e9fea357c8"} Mar 10 19:03:23 crc kubenswrapper[4861]: I0310 19:03:23.012132 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-ptwwk" event={"ID":"885ff0f1-fcc9-4add-8284-2af1e15a4c2f","Type":"ContainerStarted","Data":"30798c66f224e6abdf798dfa30e246cc1119e1b2e9c77b24bb0917851b3b37d4"} Mar 10 19:03:23 crc kubenswrapper[4861]: I0310 19:03:23.012204 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-nmstate/nmstate-handler-ptwwk" Mar 10 19:03:23 crc kubenswrapper[4861]: I0310 19:03:23.023915 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-69594cc75-grnnr" event={"ID":"1a246ed4-9906-4d0b-85bd-fab444e3e5e8","Type":"ContainerStarted","Data":"262d78f7604b1d3b79d910605c3451d1193b154bbd2d0d4534fdbd5e69c89e92"} Mar 10 19:03:23 crc kubenswrapper[4861]: I0310 19:03:23.066066 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-webhook-786f45cff4-l6z4v" podStartSLOduration=2.888202907 podStartE2EDuration="6.066049591s" podCreationTimestamp="2026-03-10 19:03:17 +0000 UTC" firstStartedPulling="2026-03-10 19:03:17.741091541 +0000 UTC m=+941.504527511" lastFinishedPulling="2026-03-10 19:03:20.918938195 +0000 UTC m=+944.682374195" observedRunningTime="2026-03-10 19:03:23.033287947 +0000 UTC m=+946.796723937" watchObservedRunningTime="2026-03-10 19:03:23.066049591 +0000 UTC m=+946.829485541" Mar 10 19:03:23 crc kubenswrapper[4861]: I0310 19:03:23.071196 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-ptwwk" podStartSLOduration=2.66765616 podStartE2EDuration="6.07117215s" podCreationTimestamp="2026-03-10 19:03:17 +0000 UTC" firstStartedPulling="2026-03-10 19:03:17.520298356 +0000 UTC m=+941.283734316" lastFinishedPulling="2026-03-10 19:03:20.923814306 +0000 UTC m=+944.687250306" observedRunningTime="2026-03-10 19:03:23.05742104 +0000 UTC m=+946.820857030" watchObservedRunningTime="2026-03-10 19:03:23.07117215 +0000 UTC m=+946.834608130" Mar 10 19:03:23 crc kubenswrapper[4861]: I0310 19:03:23.074538 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-5dcbbd79cf-q2p6w" podStartSLOduration=2.9505452720000003 podStartE2EDuration="6.074528118s" podCreationTimestamp="2026-03-10 19:03:17 +0000 UTC" firstStartedPulling="2026-03-10 
19:03:17.772683561 +0000 UTC m=+941.536119521" lastFinishedPulling="2026-03-10 19:03:20.896666397 +0000 UTC m=+944.660102367" observedRunningTime="2026-03-10 19:03:23.071884761 +0000 UTC m=+946.835320751" watchObservedRunningTime="2026-03-10 19:03:23.074528118 +0000 UTC m=+946.837964098" Mar 10 19:03:23 crc kubenswrapper[4861]: I0310 19:03:23.101011 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vcbcm" Mar 10 19:03:23 crc kubenswrapper[4861]: I0310 19:03:23.236949 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4rhq\" (UniqueName: \"kubernetes.io/projected/81e5c465-c161-4d8f-8f21-508b7e56cd95-kube-api-access-s4rhq\") pod \"81e5c465-c161-4d8f-8f21-508b7e56cd95\" (UID: \"81e5c465-c161-4d8f-8f21-508b7e56cd95\") " Mar 10 19:03:23 crc kubenswrapper[4861]: I0310 19:03:23.237105 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/81e5c465-c161-4d8f-8f21-508b7e56cd95-utilities\") pod \"81e5c465-c161-4d8f-8f21-508b7e56cd95\" (UID: \"81e5c465-c161-4d8f-8f21-508b7e56cd95\") " Mar 10 19:03:23 crc kubenswrapper[4861]: I0310 19:03:23.237166 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/81e5c465-c161-4d8f-8f21-508b7e56cd95-catalog-content\") pod \"81e5c465-c161-4d8f-8f21-508b7e56cd95\" (UID: \"81e5c465-c161-4d8f-8f21-508b7e56cd95\") " Mar 10 19:03:23 crc kubenswrapper[4861]: I0310 19:03:23.238028 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/81e5c465-c161-4d8f-8f21-508b7e56cd95-utilities" (OuterVolumeSpecName: "utilities") pod "81e5c465-c161-4d8f-8f21-508b7e56cd95" (UID: "81e5c465-c161-4d8f-8f21-508b7e56cd95"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 10 19:03:23 crc kubenswrapper[4861]: I0310 19:03:23.258682 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/81e5c465-c161-4d8f-8f21-508b7e56cd95-kube-api-access-s4rhq" (OuterVolumeSpecName: "kube-api-access-s4rhq") pod "81e5c465-c161-4d8f-8f21-508b7e56cd95" (UID: "81e5c465-c161-4d8f-8f21-508b7e56cd95"). InnerVolumeSpecName "kube-api-access-s4rhq". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 10 19:03:23 crc kubenswrapper[4861]: I0310 19:03:23.261885 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/81e5c465-c161-4d8f-8f21-508b7e56cd95-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "81e5c465-c161-4d8f-8f21-508b7e56cd95" (UID: "81e5c465-c161-4d8f-8f21-508b7e56cd95"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 10 19:03:23 crc kubenswrapper[4861]: I0310 19:03:23.338513 4861 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/81e5c465-c161-4d8f-8f21-508b7e56cd95-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 10 19:03:23 crc kubenswrapper[4861]: I0310 19:03:23.338545 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4rhq\" (UniqueName: \"kubernetes.io/projected/81e5c465-c161-4d8f-8f21-508b7e56cd95-kube-api-access-s4rhq\") on node \"crc\" DevicePath \"\"" Mar 10 19:03:23 crc kubenswrapper[4861]: I0310 19:03:23.338556 4861 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/81e5c465-c161-4d8f-8f21-508b7e56cd95-utilities\") on node \"crc\" DevicePath \"\"" Mar 10 19:03:24 crc kubenswrapper[4861]: I0310 19:03:24.045399 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vcbcm" 
event={"ID":"81e5c465-c161-4d8f-8f21-508b7e56cd95","Type":"ContainerDied","Data":"4d344ae292cfd3451cbfaf5d04f951d917f5cb6bd4f129ba871fe1faaf655b84"} Mar 10 19:03:24 crc kubenswrapper[4861]: I0310 19:03:24.045752 4861 scope.go:117] "RemoveContainer" containerID="6561cf0314ff33615d4a5d5a2f86289ddc2cf8bbaa936aae5b6a7ad679bb1e38" Mar 10 19:03:24 crc kubenswrapper[4861]: I0310 19:03:24.045458 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vcbcm" Mar 10 19:03:24 crc kubenswrapper[4861]: I0310 19:03:24.076819 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-vcbcm"] Mar 10 19:03:24 crc kubenswrapper[4861]: I0310 19:03:24.079380 4861 scope.go:117] "RemoveContainer" containerID="013dda81a397e03bc4f6f7458add65617c91d637c2d3833ebabcb6dda570b699" Mar 10 19:03:24 crc kubenswrapper[4861]: I0310 19:03:24.080862 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-vcbcm"] Mar 10 19:03:24 crc kubenswrapper[4861]: I0310 19:03:24.114687 4861 scope.go:117] "RemoveContainer" containerID="c07acce8dc51583f0863e490021e48be8de6012a2af527352b7560bc8e117e9b" Mar 10 19:03:24 crc kubenswrapper[4861]: I0310 19:03:24.631899 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-8v9ps"] Mar 10 19:03:24 crc kubenswrapper[4861]: I0310 19:03:24.632582 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-8v9ps" podUID="d3df69b3-b943-423c-8317-147c63c7e1c5" containerName="registry-server" containerID="cri-o://0262ea853e304ee042db16b158931b9ca3487a195aa799b26b7a8b72d0f98f0d" gracePeriod=2 Mar 10 19:03:24 crc kubenswrapper[4861]: I0310 19:03:24.966876 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="81e5c465-c161-4d8f-8f21-508b7e56cd95" 
path="/var/lib/kubelet/pods/81e5c465-c161-4d8f-8f21-508b7e56cd95/volumes" Mar 10 19:03:25 crc kubenswrapper[4861]: I0310 19:03:25.000763 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-8v9ps" Mar 10 19:03:25 crc kubenswrapper[4861]: I0310 19:03:25.057042 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-69594cc75-grnnr" event={"ID":"1a246ed4-9906-4d0b-85bd-fab444e3e5e8","Type":"ContainerStarted","Data":"718c4d3de1c173c4d42462328690242a4de970389a71089a8cfa8749aebb7314"} Mar 10 19:03:25 crc kubenswrapper[4861]: I0310 19:03:25.086174 4861 generic.go:334] "Generic (PLEG): container finished" podID="d3df69b3-b943-423c-8317-147c63c7e1c5" containerID="0262ea853e304ee042db16b158931b9ca3487a195aa799b26b7a8b72d0f98f0d" exitCode=0 Mar 10 19:03:25 crc kubenswrapper[4861]: I0310 19:03:25.086794 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-8v9ps" Mar 10 19:03:25 crc kubenswrapper[4861]: I0310 19:03:25.086908 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8v9ps" event={"ID":"d3df69b3-b943-423c-8317-147c63c7e1c5","Type":"ContainerDied","Data":"0262ea853e304ee042db16b158931b9ca3487a195aa799b26b7a8b72d0f98f0d"} Mar 10 19:03:25 crc kubenswrapper[4861]: I0310 19:03:25.086936 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8v9ps" event={"ID":"d3df69b3-b943-423c-8317-147c63c7e1c5","Type":"ContainerDied","Data":"e7abfecdf4ae6e8ed7bfdc181771d171ed334492425b8e14a75ac64fc31a9931"} Mar 10 19:03:25 crc kubenswrapper[4861]: I0310 19:03:25.086953 4861 scope.go:117] "RemoveContainer" containerID="0262ea853e304ee042db16b158931b9ca3487a195aa799b26b7a8b72d0f98f0d" Mar 10 19:03:25 crc kubenswrapper[4861]: I0310 19:03:25.117323 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-nmstate/nmstate-metrics-69594cc75-grnnr" podStartSLOduration=1.160350973 podStartE2EDuration="8.117307948s" podCreationTimestamp="2026-03-10 19:03:17 +0000 UTC" firstStartedPulling="2026-03-10 19:03:17.695241557 +0000 UTC m=+941.458677527" lastFinishedPulling="2026-03-10 19:03:24.652198552 +0000 UTC m=+948.415634502" observedRunningTime="2026-03-10 19:03:25.113882178 +0000 UTC m=+948.877318148" watchObservedRunningTime="2026-03-10 19:03:25.117307948 +0000 UTC m=+948.880743908" Mar 10 19:03:25 crc kubenswrapper[4861]: I0310 19:03:25.127860 4861 scope.go:117] "RemoveContainer" containerID="1c6ffc2d7d81beb18b08fa1977b64f00af4bf39ad74f2f2c70954b150988a4f6" Mar 10 19:03:25 crc kubenswrapper[4861]: I0310 19:03:25.156168 4861 scope.go:117] "RemoveContainer" containerID="f9f4a8f18524e235fa399b1f87ce48b76da0e982fb02fe17c19a93b18d9fd463" Mar 10 19:03:25 crc kubenswrapper[4861]: I0310 19:03:25.170986 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d3df69b3-b943-423c-8317-147c63c7e1c5-catalog-content\") pod \"d3df69b3-b943-423c-8317-147c63c7e1c5\" (UID: \"d3df69b3-b943-423c-8317-147c63c7e1c5\") " Mar 10 19:03:25 crc kubenswrapper[4861]: I0310 19:03:25.171068 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vv8x4\" (UniqueName: \"kubernetes.io/projected/d3df69b3-b943-423c-8317-147c63c7e1c5-kube-api-access-vv8x4\") pod \"d3df69b3-b943-423c-8317-147c63c7e1c5\" (UID: \"d3df69b3-b943-423c-8317-147c63c7e1c5\") " Mar 10 19:03:25 crc kubenswrapper[4861]: I0310 19:03:25.171107 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d3df69b3-b943-423c-8317-147c63c7e1c5-utilities\") pod \"d3df69b3-b943-423c-8317-147c63c7e1c5\" (UID: \"d3df69b3-b943-423c-8317-147c63c7e1c5\") " Mar 10 19:03:25 crc kubenswrapper[4861]: I0310 19:03:25.172922 4861 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d3df69b3-b943-423c-8317-147c63c7e1c5-utilities" (OuterVolumeSpecName: "utilities") pod "d3df69b3-b943-423c-8317-147c63c7e1c5" (UID: "d3df69b3-b943-423c-8317-147c63c7e1c5"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 10 19:03:25 crc kubenswrapper[4861]: I0310 19:03:25.183003 4861 scope.go:117] "RemoveContainer" containerID="0262ea853e304ee042db16b158931b9ca3487a195aa799b26b7a8b72d0f98f0d" Mar 10 19:03:25 crc kubenswrapper[4861]: I0310 19:03:25.190947 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d3df69b3-b943-423c-8317-147c63c7e1c5-kube-api-access-vv8x4" (OuterVolumeSpecName: "kube-api-access-vv8x4") pod "d3df69b3-b943-423c-8317-147c63c7e1c5" (UID: "d3df69b3-b943-423c-8317-147c63c7e1c5"). InnerVolumeSpecName "kube-api-access-vv8x4". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 10 19:03:25 crc kubenswrapper[4861]: E0310 19:03:25.194565 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0262ea853e304ee042db16b158931b9ca3487a195aa799b26b7a8b72d0f98f0d\": container with ID starting with 0262ea853e304ee042db16b158931b9ca3487a195aa799b26b7a8b72d0f98f0d not found: ID does not exist" containerID="0262ea853e304ee042db16b158931b9ca3487a195aa799b26b7a8b72d0f98f0d" Mar 10 19:03:25 crc kubenswrapper[4861]: I0310 19:03:25.194601 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0262ea853e304ee042db16b158931b9ca3487a195aa799b26b7a8b72d0f98f0d"} err="failed to get container status \"0262ea853e304ee042db16b158931b9ca3487a195aa799b26b7a8b72d0f98f0d\": rpc error: code = NotFound desc = could not find container \"0262ea853e304ee042db16b158931b9ca3487a195aa799b26b7a8b72d0f98f0d\": container with ID starting with 
0262ea853e304ee042db16b158931b9ca3487a195aa799b26b7a8b72d0f98f0d not found: ID does not exist" Mar 10 19:03:25 crc kubenswrapper[4861]: I0310 19:03:25.194633 4861 scope.go:117] "RemoveContainer" containerID="1c6ffc2d7d81beb18b08fa1977b64f00af4bf39ad74f2f2c70954b150988a4f6" Mar 10 19:03:25 crc kubenswrapper[4861]: E0310 19:03:25.201067 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1c6ffc2d7d81beb18b08fa1977b64f00af4bf39ad74f2f2c70954b150988a4f6\": container with ID starting with 1c6ffc2d7d81beb18b08fa1977b64f00af4bf39ad74f2f2c70954b150988a4f6 not found: ID does not exist" containerID="1c6ffc2d7d81beb18b08fa1977b64f00af4bf39ad74f2f2c70954b150988a4f6" Mar 10 19:03:25 crc kubenswrapper[4861]: I0310 19:03:25.201109 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1c6ffc2d7d81beb18b08fa1977b64f00af4bf39ad74f2f2c70954b150988a4f6"} err="failed to get container status \"1c6ffc2d7d81beb18b08fa1977b64f00af4bf39ad74f2f2c70954b150988a4f6\": rpc error: code = NotFound desc = could not find container \"1c6ffc2d7d81beb18b08fa1977b64f00af4bf39ad74f2f2c70954b150988a4f6\": container with ID starting with 1c6ffc2d7d81beb18b08fa1977b64f00af4bf39ad74f2f2c70954b150988a4f6 not found: ID does not exist" Mar 10 19:03:25 crc kubenswrapper[4861]: I0310 19:03:25.201137 4861 scope.go:117] "RemoveContainer" containerID="f9f4a8f18524e235fa399b1f87ce48b76da0e982fb02fe17c19a93b18d9fd463" Mar 10 19:03:25 crc kubenswrapper[4861]: E0310 19:03:25.203753 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f9f4a8f18524e235fa399b1f87ce48b76da0e982fb02fe17c19a93b18d9fd463\": container with ID starting with f9f4a8f18524e235fa399b1f87ce48b76da0e982fb02fe17c19a93b18d9fd463 not found: ID does not exist" containerID="f9f4a8f18524e235fa399b1f87ce48b76da0e982fb02fe17c19a93b18d9fd463" Mar 10 19:03:25 crc 
kubenswrapper[4861]: I0310 19:03:25.203781 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f9f4a8f18524e235fa399b1f87ce48b76da0e982fb02fe17c19a93b18d9fd463"} err="failed to get container status \"f9f4a8f18524e235fa399b1f87ce48b76da0e982fb02fe17c19a93b18d9fd463\": rpc error: code = NotFound desc = could not find container \"f9f4a8f18524e235fa399b1f87ce48b76da0e982fb02fe17c19a93b18d9fd463\": container with ID starting with f9f4a8f18524e235fa399b1f87ce48b76da0e982fb02fe17c19a93b18d9fd463 not found: ID does not exist" Mar 10 19:03:25 crc kubenswrapper[4861]: I0310 19:03:25.272935 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vv8x4\" (UniqueName: \"kubernetes.io/projected/d3df69b3-b943-423c-8317-147c63c7e1c5-kube-api-access-vv8x4\") on node \"crc\" DevicePath \"\"" Mar 10 19:03:25 crc kubenswrapper[4861]: I0310 19:03:25.272977 4861 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d3df69b3-b943-423c-8317-147c63c7e1c5-utilities\") on node \"crc\" DevicePath \"\"" Mar 10 19:03:25 crc kubenswrapper[4861]: I0310 19:03:25.318608 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d3df69b3-b943-423c-8317-147c63c7e1c5-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d3df69b3-b943-423c-8317-147c63c7e1c5" (UID: "d3df69b3-b943-423c-8317-147c63c7e1c5"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 10 19:03:25 crc kubenswrapper[4861]: I0310 19:03:25.374704 4861 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d3df69b3-b943-423c-8317-147c63c7e1c5-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 10 19:03:25 crc kubenswrapper[4861]: I0310 19:03:25.425804 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-8v9ps"] Mar 10 19:03:25 crc kubenswrapper[4861]: I0310 19:03:25.433028 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-8v9ps"] Mar 10 19:03:26 crc kubenswrapper[4861]: I0310 19:03:26.972514 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d3df69b3-b943-423c-8317-147c63c7e1c5" path="/var/lib/kubelet/pods/d3df69b3-b943-423c-8317-147c63c7e1c5/volumes" Mar 10 19:03:27 crc kubenswrapper[4861]: I0310 19:03:27.512874 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-ptwwk" Mar 10 19:03:27 crc kubenswrapper[4861]: I0310 19:03:27.796222 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-5b5fbf8df9-k8pzt" Mar 10 19:03:27 crc kubenswrapper[4861]: I0310 19:03:27.796327 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-5b5fbf8df9-k8pzt" Mar 10 19:03:27 crc kubenswrapper[4861]: I0310 19:03:27.803719 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-5b5fbf8df9-k8pzt" Mar 10 19:03:28 crc kubenswrapper[4861]: I0310 19:03:28.117993 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-5b5fbf8df9-k8pzt" Mar 10 19:03:28 crc kubenswrapper[4861]: I0310 19:03:28.201019 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-console/console-f9d7485db-zg9g7"] Mar 10 19:03:37 crc kubenswrapper[4861]: I0310 19:03:37.482735 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-786f45cff4-l6z4v" Mar 10 19:03:51 crc kubenswrapper[4861]: I0310 19:03:51.989429 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f49ljx8"] Mar 10 19:03:51 crc kubenswrapper[4861]: E0310 19:03:51.990329 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="81e5c465-c161-4d8f-8f21-508b7e56cd95" containerName="extract-utilities" Mar 10 19:03:51 crc kubenswrapper[4861]: I0310 19:03:51.990352 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="81e5c465-c161-4d8f-8f21-508b7e56cd95" containerName="extract-utilities" Mar 10 19:03:51 crc kubenswrapper[4861]: E0310 19:03:51.990382 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d3df69b3-b943-423c-8317-147c63c7e1c5" containerName="extract-content" Mar 10 19:03:51 crc kubenswrapper[4861]: I0310 19:03:51.990395 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="d3df69b3-b943-423c-8317-147c63c7e1c5" containerName="extract-content" Mar 10 19:03:51 crc kubenswrapper[4861]: E0310 19:03:51.990429 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d3df69b3-b943-423c-8317-147c63c7e1c5" containerName="registry-server" Mar 10 19:03:51 crc kubenswrapper[4861]: I0310 19:03:51.990443 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="d3df69b3-b943-423c-8317-147c63c7e1c5" containerName="registry-server" Mar 10 19:03:51 crc kubenswrapper[4861]: E0310 19:03:51.990461 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d3df69b3-b943-423c-8317-147c63c7e1c5" containerName="extract-utilities" Mar 10 19:03:51 crc kubenswrapper[4861]: I0310 19:03:51.990474 4861 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="d3df69b3-b943-423c-8317-147c63c7e1c5" containerName="extract-utilities" Mar 10 19:03:51 crc kubenswrapper[4861]: E0310 19:03:51.990503 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="81e5c465-c161-4d8f-8f21-508b7e56cd95" containerName="extract-content" Mar 10 19:03:51 crc kubenswrapper[4861]: I0310 19:03:51.990516 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="81e5c465-c161-4d8f-8f21-508b7e56cd95" containerName="extract-content" Mar 10 19:03:51 crc kubenswrapper[4861]: E0310 19:03:51.990536 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="81e5c465-c161-4d8f-8f21-508b7e56cd95" containerName="registry-server" Mar 10 19:03:51 crc kubenswrapper[4861]: I0310 19:03:51.990549 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="81e5c465-c161-4d8f-8f21-508b7e56cd95" containerName="registry-server" Mar 10 19:03:51 crc kubenswrapper[4861]: I0310 19:03:51.990827 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="81e5c465-c161-4d8f-8f21-508b7e56cd95" containerName="registry-server" Mar 10 19:03:51 crc kubenswrapper[4861]: I0310 19:03:51.990869 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="d3df69b3-b943-423c-8317-147c63c7e1c5" containerName="registry-server" Mar 10 19:03:51 crc kubenswrapper[4861]: I0310 19:03:51.992286 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f49ljx8" Mar 10 19:03:51 crc kubenswrapper[4861]: I0310 19:03:51.994285 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Mar 10 19:03:52 crc kubenswrapper[4861]: I0310 19:03:52.003352 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f49ljx8"] Mar 10 19:03:52 crc kubenswrapper[4861]: I0310 19:03:52.007145 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/53bcff5b-e791-43cc-a898-51474367544d-bundle\") pod \"d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f49ljx8\" (UID: \"53bcff5b-e791-43cc-a898-51474367544d\") " pod="openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f49ljx8" Mar 10 19:03:52 crc kubenswrapper[4861]: I0310 19:03:52.007245 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-djlph\" (UniqueName: \"kubernetes.io/projected/53bcff5b-e791-43cc-a898-51474367544d-kube-api-access-djlph\") pod \"d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f49ljx8\" (UID: \"53bcff5b-e791-43cc-a898-51474367544d\") " pod="openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f49ljx8" Mar 10 19:03:52 crc kubenswrapper[4861]: I0310 19:03:52.007324 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/53bcff5b-e791-43cc-a898-51474367544d-util\") pod \"d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f49ljx8\" (UID: \"53bcff5b-e791-43cc-a898-51474367544d\") " pod="openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f49ljx8" Mar 10 19:03:52 crc kubenswrapper[4861]: 
I0310 19:03:52.108649 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-djlph\" (UniqueName: \"kubernetes.io/projected/53bcff5b-e791-43cc-a898-51474367544d-kube-api-access-djlph\") pod \"d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f49ljx8\" (UID: \"53bcff5b-e791-43cc-a898-51474367544d\") " pod="openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f49ljx8" Mar 10 19:03:52 crc kubenswrapper[4861]: I0310 19:03:52.108983 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/53bcff5b-e791-43cc-a898-51474367544d-util\") pod \"d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f49ljx8\" (UID: \"53bcff5b-e791-43cc-a898-51474367544d\") " pod="openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f49ljx8" Mar 10 19:03:52 crc kubenswrapper[4861]: I0310 19:03:52.109034 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/53bcff5b-e791-43cc-a898-51474367544d-bundle\") pod \"d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f49ljx8\" (UID: \"53bcff5b-e791-43cc-a898-51474367544d\") " pod="openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f49ljx8" Mar 10 19:03:52 crc kubenswrapper[4861]: I0310 19:03:52.109544 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/53bcff5b-e791-43cc-a898-51474367544d-bundle\") pod \"d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f49ljx8\" (UID: \"53bcff5b-e791-43cc-a898-51474367544d\") " pod="openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f49ljx8" Mar 10 19:03:52 crc kubenswrapper[4861]: I0310 19:03:52.109825 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/53bcff5b-e791-43cc-a898-51474367544d-util\") pod \"d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f49ljx8\" (UID: \"53bcff5b-e791-43cc-a898-51474367544d\") " pod="openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f49ljx8" Mar 10 19:03:52 crc kubenswrapper[4861]: I0310 19:03:52.137656 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-djlph\" (UniqueName: \"kubernetes.io/projected/53bcff5b-e791-43cc-a898-51474367544d-kube-api-access-djlph\") pod \"d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f49ljx8\" (UID: \"53bcff5b-e791-43cc-a898-51474367544d\") " pod="openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f49ljx8" Mar 10 19:03:52 crc kubenswrapper[4861]: I0310 19:03:52.316554 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f49ljx8" Mar 10 19:03:52 crc kubenswrapper[4861]: I0310 19:03:52.584448 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f49ljx8"] Mar 10 19:03:53 crc kubenswrapper[4861]: I0310 19:03:53.254902 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-f9d7485db-zg9g7" podUID="3db8cb04-007c-48f9-a986-7a503ca1c077" containerName="console" containerID="cri-o://c5f81b5bc6a77ca3c489c1e6b13f6893dc8f7c11b4bf61e262e56934e7a644f5" gracePeriod=15 Mar 10 19:03:53 crc kubenswrapper[4861]: I0310 19:03:53.316221 4861 generic.go:334] "Generic (PLEG): container finished" podID="53bcff5b-e791-43cc-a898-51474367544d" containerID="fc94a5b009066cb20a2252ad5de9f87434a1a0d762dbe4bcdc59669f3b8a39ea" exitCode=0 Mar 10 19:03:53 crc kubenswrapper[4861]: I0310 19:03:53.316363 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f49ljx8" event={"ID":"53bcff5b-e791-43cc-a898-51474367544d","Type":"ContainerDied","Data":"fc94a5b009066cb20a2252ad5de9f87434a1a0d762dbe4bcdc59669f3b8a39ea"} Mar 10 19:03:53 crc kubenswrapper[4861]: I0310 19:03:53.316796 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f49ljx8" event={"ID":"53bcff5b-e791-43cc-a898-51474367544d","Type":"ContainerStarted","Data":"28d376f71a2e88f69412ab84dfeaea0c79f94d999cdf4ed7446476fabc738975"} Mar 10 19:03:53 crc kubenswrapper[4861]: I0310 19:03:53.701551 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-zg9g7_3db8cb04-007c-48f9-a986-7a503ca1c077/console/0.log" Mar 10 19:03:53 crc kubenswrapper[4861]: I0310 19:03:53.701659 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-zg9g7" Mar 10 19:03:53 crc kubenswrapper[4861]: I0310 19:03:53.838066 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3db8cb04-007c-48f9-a986-7a503ca1c077-trusted-ca-bundle\") pod \"3db8cb04-007c-48f9-a986-7a503ca1c077\" (UID: \"3db8cb04-007c-48f9-a986-7a503ca1c077\") " Mar 10 19:03:53 crc kubenswrapper[4861]: I0310 19:03:53.838120 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/3db8cb04-007c-48f9-a986-7a503ca1c077-service-ca\") pod \"3db8cb04-007c-48f9-a986-7a503ca1c077\" (UID: \"3db8cb04-007c-48f9-a986-7a503ca1c077\") " Mar 10 19:03:53 crc kubenswrapper[4861]: I0310 19:03:53.838155 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/3db8cb04-007c-48f9-a986-7a503ca1c077-console-serving-cert\") pod 
\"3db8cb04-007c-48f9-a986-7a503ca1c077\" (UID: \"3db8cb04-007c-48f9-a986-7a503ca1c077\") " Mar 10 19:03:53 crc kubenswrapper[4861]: I0310 19:03:53.838212 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/3db8cb04-007c-48f9-a986-7a503ca1c077-console-config\") pod \"3db8cb04-007c-48f9-a986-7a503ca1c077\" (UID: \"3db8cb04-007c-48f9-a986-7a503ca1c077\") " Mar 10 19:03:53 crc kubenswrapper[4861]: I0310 19:03:53.838269 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/3db8cb04-007c-48f9-a986-7a503ca1c077-console-oauth-config\") pod \"3db8cb04-007c-48f9-a986-7a503ca1c077\" (UID: \"3db8cb04-007c-48f9-a986-7a503ca1c077\") " Mar 10 19:03:53 crc kubenswrapper[4861]: I0310 19:03:53.838295 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-22mdp\" (UniqueName: \"kubernetes.io/projected/3db8cb04-007c-48f9-a986-7a503ca1c077-kube-api-access-22mdp\") pod \"3db8cb04-007c-48f9-a986-7a503ca1c077\" (UID: \"3db8cb04-007c-48f9-a986-7a503ca1c077\") " Mar 10 19:03:53 crc kubenswrapper[4861]: I0310 19:03:53.838325 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/3db8cb04-007c-48f9-a986-7a503ca1c077-oauth-serving-cert\") pod \"3db8cb04-007c-48f9-a986-7a503ca1c077\" (UID: \"3db8cb04-007c-48f9-a986-7a503ca1c077\") " Mar 10 19:03:53 crc kubenswrapper[4861]: I0310 19:03:53.839356 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3db8cb04-007c-48f9-a986-7a503ca1c077-console-config" (OuterVolumeSpecName: "console-config") pod "3db8cb04-007c-48f9-a986-7a503ca1c077" (UID: "3db8cb04-007c-48f9-a986-7a503ca1c077"). InnerVolumeSpecName "console-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 10 19:03:53 crc kubenswrapper[4861]: I0310 19:03:53.839391 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3db8cb04-007c-48f9-a986-7a503ca1c077-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "3db8cb04-007c-48f9-a986-7a503ca1c077" (UID: "3db8cb04-007c-48f9-a986-7a503ca1c077"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 10 19:03:53 crc kubenswrapper[4861]: I0310 19:03:53.839474 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3db8cb04-007c-48f9-a986-7a503ca1c077-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "3db8cb04-007c-48f9-a986-7a503ca1c077" (UID: "3db8cb04-007c-48f9-a986-7a503ca1c077"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 10 19:03:53 crc kubenswrapper[4861]: I0310 19:03:53.839525 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3db8cb04-007c-48f9-a986-7a503ca1c077-service-ca" (OuterVolumeSpecName: "service-ca") pod "3db8cb04-007c-48f9-a986-7a503ca1c077" (UID: "3db8cb04-007c-48f9-a986-7a503ca1c077"). InnerVolumeSpecName "service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 10 19:03:53 crc kubenswrapper[4861]: I0310 19:03:53.839857 4861 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3db8cb04-007c-48f9-a986-7a503ca1c077-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 10 19:03:53 crc kubenswrapper[4861]: I0310 19:03:53.839884 4861 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/3db8cb04-007c-48f9-a986-7a503ca1c077-service-ca\") on node \"crc\" DevicePath \"\"" Mar 10 19:03:53 crc kubenswrapper[4861]: I0310 19:03:53.839897 4861 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/3db8cb04-007c-48f9-a986-7a503ca1c077-console-config\") on node \"crc\" DevicePath \"\"" Mar 10 19:03:53 crc kubenswrapper[4861]: I0310 19:03:53.839911 4861 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/3db8cb04-007c-48f9-a986-7a503ca1c077-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 10 19:03:53 crc kubenswrapper[4861]: I0310 19:03:53.848554 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3db8cb04-007c-48f9-a986-7a503ca1c077-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "3db8cb04-007c-48f9-a986-7a503ca1c077" (UID: "3db8cb04-007c-48f9-a986-7a503ca1c077"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 10 19:03:53 crc kubenswrapper[4861]: I0310 19:03:53.850230 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3db8cb04-007c-48f9-a986-7a503ca1c077-kube-api-access-22mdp" (OuterVolumeSpecName: "kube-api-access-22mdp") pod "3db8cb04-007c-48f9-a986-7a503ca1c077" (UID: "3db8cb04-007c-48f9-a986-7a503ca1c077"). InnerVolumeSpecName "kube-api-access-22mdp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 10 19:03:53 crc kubenswrapper[4861]: I0310 19:03:53.850489 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3db8cb04-007c-48f9-a986-7a503ca1c077-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "3db8cb04-007c-48f9-a986-7a503ca1c077" (UID: "3db8cb04-007c-48f9-a986-7a503ca1c077"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 10 19:03:53 crc kubenswrapper[4861]: I0310 19:03:53.941813 4861 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/3db8cb04-007c-48f9-a986-7a503ca1c077-console-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 10 19:03:53 crc kubenswrapper[4861]: I0310 19:03:53.941880 4861 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/3db8cb04-007c-48f9-a986-7a503ca1c077-console-oauth-config\") on node \"crc\" DevicePath \"\"" Mar 10 19:03:53 crc kubenswrapper[4861]: I0310 19:03:53.941903 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-22mdp\" (UniqueName: \"kubernetes.io/projected/3db8cb04-007c-48f9-a986-7a503ca1c077-kube-api-access-22mdp\") on node \"crc\" DevicePath \"\"" Mar 10 19:03:54 crc kubenswrapper[4861]: I0310 19:03:54.328492 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-zg9g7_3db8cb04-007c-48f9-a986-7a503ca1c077/console/0.log" Mar 10 19:03:54 crc kubenswrapper[4861]: I0310 19:03:54.328898 4861 generic.go:334] "Generic (PLEG): container finished" podID="3db8cb04-007c-48f9-a986-7a503ca1c077" containerID="c5f81b5bc6a77ca3c489c1e6b13f6893dc8f7c11b4bf61e262e56934e7a644f5" exitCode=2 Mar 10 19:03:54 crc kubenswrapper[4861]: I0310 19:03:54.328929 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-zg9g7" 
event={"ID":"3db8cb04-007c-48f9-a986-7a503ca1c077","Type":"ContainerDied","Data":"c5f81b5bc6a77ca3c489c1e6b13f6893dc8f7c11b4bf61e262e56934e7a644f5"} Mar 10 19:03:54 crc kubenswrapper[4861]: I0310 19:03:54.328964 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-zg9g7" event={"ID":"3db8cb04-007c-48f9-a986-7a503ca1c077","Type":"ContainerDied","Data":"0dd392ccb57f2021dad9ccfdeb0473b3525c0082ab5f36fae8bf1056dd441631"} Mar 10 19:03:54 crc kubenswrapper[4861]: I0310 19:03:54.328994 4861 scope.go:117] "RemoveContainer" containerID="c5f81b5bc6a77ca3c489c1e6b13f6893dc8f7c11b4bf61e262e56934e7a644f5" Mar 10 19:03:54 crc kubenswrapper[4861]: I0310 19:03:54.329014 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-zg9g7" Mar 10 19:03:54 crc kubenswrapper[4861]: I0310 19:03:54.377270 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-zg9g7"] Mar 10 19:03:54 crc kubenswrapper[4861]: I0310 19:03:54.381354 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-f9d7485db-zg9g7"] Mar 10 19:03:54 crc kubenswrapper[4861]: I0310 19:03:54.432397 4861 scope.go:117] "RemoveContainer" containerID="c5f81b5bc6a77ca3c489c1e6b13f6893dc8f7c11b4bf61e262e56934e7a644f5" Mar 10 19:03:54 crc kubenswrapper[4861]: E0310 19:03:54.433176 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c5f81b5bc6a77ca3c489c1e6b13f6893dc8f7c11b4bf61e262e56934e7a644f5\": container with ID starting with c5f81b5bc6a77ca3c489c1e6b13f6893dc8f7c11b4bf61e262e56934e7a644f5 not found: ID does not exist" containerID="c5f81b5bc6a77ca3c489c1e6b13f6893dc8f7c11b4bf61e262e56934e7a644f5" Mar 10 19:03:54 crc kubenswrapper[4861]: I0310 19:03:54.433250 4861 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"c5f81b5bc6a77ca3c489c1e6b13f6893dc8f7c11b4bf61e262e56934e7a644f5"} err="failed to get container status \"c5f81b5bc6a77ca3c489c1e6b13f6893dc8f7c11b4bf61e262e56934e7a644f5\": rpc error: code = NotFound desc = could not find container \"c5f81b5bc6a77ca3c489c1e6b13f6893dc8f7c11b4bf61e262e56934e7a644f5\": container with ID starting with c5f81b5bc6a77ca3c489c1e6b13f6893dc8f7c11b4bf61e262e56934e7a644f5 not found: ID does not exist" Mar 10 19:03:54 crc kubenswrapper[4861]: I0310 19:03:54.970101 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3db8cb04-007c-48f9-a986-7a503ca1c077" path="/var/lib/kubelet/pods/3db8cb04-007c-48f9-a986-7a503ca1c077/volumes" Mar 10 19:03:55 crc kubenswrapper[4861]: I0310 19:03:55.339622 4861 generic.go:334] "Generic (PLEG): container finished" podID="53bcff5b-e791-43cc-a898-51474367544d" containerID="c1382278bfa6d53b03cc7fd07ada521593f9c5d6b2e3813a490ccfee135dbe7a" exitCode=0 Mar 10 19:03:55 crc kubenswrapper[4861]: I0310 19:03:55.339745 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f49ljx8" event={"ID":"53bcff5b-e791-43cc-a898-51474367544d","Type":"ContainerDied","Data":"c1382278bfa6d53b03cc7fd07ada521593f9c5d6b2e3813a490ccfee135dbe7a"} Mar 10 19:03:56 crc kubenswrapper[4861]: I0310 19:03:56.350177 4861 generic.go:334] "Generic (PLEG): container finished" podID="53bcff5b-e791-43cc-a898-51474367544d" containerID="74c0b92d58346e1e22cfe584c3cbb1b2da8b3d4a015c96903682c167d4047e03" exitCode=0 Mar 10 19:03:56 crc kubenswrapper[4861]: I0310 19:03:56.350240 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f49ljx8" event={"ID":"53bcff5b-e791-43cc-a898-51474367544d","Type":"ContainerDied","Data":"74c0b92d58346e1e22cfe584c3cbb1b2da8b3d4a015c96903682c167d4047e03"} Mar 10 19:03:57 crc kubenswrapper[4861]: I0310 
19:03:57.675853 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f49ljx8" Mar 10 19:03:57 crc kubenswrapper[4861]: I0310 19:03:57.703474 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-djlph\" (UniqueName: \"kubernetes.io/projected/53bcff5b-e791-43cc-a898-51474367544d-kube-api-access-djlph\") pod \"53bcff5b-e791-43cc-a898-51474367544d\" (UID: \"53bcff5b-e791-43cc-a898-51474367544d\") " Mar 10 19:03:57 crc kubenswrapper[4861]: I0310 19:03:57.703736 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/53bcff5b-e791-43cc-a898-51474367544d-bundle\") pod \"53bcff5b-e791-43cc-a898-51474367544d\" (UID: \"53bcff5b-e791-43cc-a898-51474367544d\") " Mar 10 19:03:57 crc kubenswrapper[4861]: I0310 19:03:57.703810 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/53bcff5b-e791-43cc-a898-51474367544d-util\") pod \"53bcff5b-e791-43cc-a898-51474367544d\" (UID: \"53bcff5b-e791-43cc-a898-51474367544d\") " Mar 10 19:03:57 crc kubenswrapper[4861]: I0310 19:03:57.707964 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/53bcff5b-e791-43cc-a898-51474367544d-bundle" (OuterVolumeSpecName: "bundle") pod "53bcff5b-e791-43cc-a898-51474367544d" (UID: "53bcff5b-e791-43cc-a898-51474367544d"). InnerVolumeSpecName "bundle". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 10 19:03:57 crc kubenswrapper[4861]: I0310 19:03:57.710130 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/53bcff5b-e791-43cc-a898-51474367544d-kube-api-access-djlph" (OuterVolumeSpecName: "kube-api-access-djlph") pod "53bcff5b-e791-43cc-a898-51474367544d" (UID: "53bcff5b-e791-43cc-a898-51474367544d"). InnerVolumeSpecName "kube-api-access-djlph". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 10 19:03:57 crc kubenswrapper[4861]: I0310 19:03:57.738882 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/53bcff5b-e791-43cc-a898-51474367544d-util" (OuterVolumeSpecName: "util") pod "53bcff5b-e791-43cc-a898-51474367544d" (UID: "53bcff5b-e791-43cc-a898-51474367544d"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 10 19:03:57 crc kubenswrapper[4861]: I0310 19:03:57.805699 4861 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/53bcff5b-e791-43cc-a898-51474367544d-util\") on node \"crc\" DevicePath \"\"" Mar 10 19:03:57 crc kubenswrapper[4861]: I0310 19:03:57.805783 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-djlph\" (UniqueName: \"kubernetes.io/projected/53bcff5b-e791-43cc-a898-51474367544d-kube-api-access-djlph\") on node \"crc\" DevicePath \"\"" Mar 10 19:03:57 crc kubenswrapper[4861]: I0310 19:03:57.805805 4861 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/53bcff5b-e791-43cc-a898-51474367544d-bundle\") on node \"crc\" DevicePath \"\"" Mar 10 19:03:58 crc kubenswrapper[4861]: I0310 19:03:58.367874 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f49ljx8" 
event={"ID":"53bcff5b-e791-43cc-a898-51474367544d","Type":"ContainerDied","Data":"28d376f71a2e88f69412ab84dfeaea0c79f94d999cdf4ed7446476fabc738975"} Mar 10 19:03:58 crc kubenswrapper[4861]: I0310 19:03:58.367934 4861 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="28d376f71a2e88f69412ab84dfeaea0c79f94d999cdf4ed7446476fabc738975" Mar 10 19:03:58 crc kubenswrapper[4861]: I0310 19:03:58.367970 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f49ljx8" Mar 10 19:04:00 crc kubenswrapper[4861]: I0310 19:04:00.149611 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29552824-ftlrt"] Mar 10 19:04:00 crc kubenswrapper[4861]: E0310 19:04:00.150457 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3db8cb04-007c-48f9-a986-7a503ca1c077" containerName="console" Mar 10 19:04:00 crc kubenswrapper[4861]: I0310 19:04:00.150485 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="3db8cb04-007c-48f9-a986-7a503ca1c077" containerName="console" Mar 10 19:04:00 crc kubenswrapper[4861]: E0310 19:04:00.150504 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="53bcff5b-e791-43cc-a898-51474367544d" containerName="extract" Mar 10 19:04:00 crc kubenswrapper[4861]: I0310 19:04:00.150517 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="53bcff5b-e791-43cc-a898-51474367544d" containerName="extract" Mar 10 19:04:00 crc kubenswrapper[4861]: E0310 19:04:00.150541 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="53bcff5b-e791-43cc-a898-51474367544d" containerName="util" Mar 10 19:04:00 crc kubenswrapper[4861]: I0310 19:04:00.150554 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="53bcff5b-e791-43cc-a898-51474367544d" containerName="util" Mar 10 19:04:00 crc kubenswrapper[4861]: E0310 19:04:00.150573 4861 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="53bcff5b-e791-43cc-a898-51474367544d" containerName="pull" Mar 10 19:04:00 crc kubenswrapper[4861]: I0310 19:04:00.150585 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="53bcff5b-e791-43cc-a898-51474367544d" containerName="pull" Mar 10 19:04:00 crc kubenswrapper[4861]: I0310 19:04:00.150826 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="53bcff5b-e791-43cc-a898-51474367544d" containerName="extract" Mar 10 19:04:00 crc kubenswrapper[4861]: I0310 19:04:00.150846 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="3db8cb04-007c-48f9-a986-7a503ca1c077" containerName="console" Mar 10 19:04:00 crc kubenswrapper[4861]: I0310 19:04:00.151430 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29552824-ftlrt" Mar 10 19:04:00 crc kubenswrapper[4861]: I0310 19:04:00.153962 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 10 19:04:00 crc kubenswrapper[4861]: I0310 19:04:00.158375 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-gfbj2" Mar 10 19:04:00 crc kubenswrapper[4861]: I0310 19:04:00.161808 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29552824-ftlrt"] Mar 10 19:04:00 crc kubenswrapper[4861]: I0310 19:04:00.163141 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 10 19:04:00 crc kubenswrapper[4861]: I0310 19:04:00.241513 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fvbdj\" (UniqueName: \"kubernetes.io/projected/e3ad2728-5796-4d31-ab8d-2a18a41b5687-kube-api-access-fvbdj\") pod \"auto-csr-approver-29552824-ftlrt\" (UID: \"e3ad2728-5796-4d31-ab8d-2a18a41b5687\") " 
pod="openshift-infra/auto-csr-approver-29552824-ftlrt" Mar 10 19:04:00 crc kubenswrapper[4861]: I0310 19:04:00.342924 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fvbdj\" (UniqueName: \"kubernetes.io/projected/e3ad2728-5796-4d31-ab8d-2a18a41b5687-kube-api-access-fvbdj\") pod \"auto-csr-approver-29552824-ftlrt\" (UID: \"e3ad2728-5796-4d31-ab8d-2a18a41b5687\") " pod="openshift-infra/auto-csr-approver-29552824-ftlrt" Mar 10 19:04:00 crc kubenswrapper[4861]: I0310 19:04:00.362771 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fvbdj\" (UniqueName: \"kubernetes.io/projected/e3ad2728-5796-4d31-ab8d-2a18a41b5687-kube-api-access-fvbdj\") pod \"auto-csr-approver-29552824-ftlrt\" (UID: \"e3ad2728-5796-4d31-ab8d-2a18a41b5687\") " pod="openshift-infra/auto-csr-approver-29552824-ftlrt" Mar 10 19:04:00 crc kubenswrapper[4861]: I0310 19:04:00.481024 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29552824-ftlrt" Mar 10 19:04:00 crc kubenswrapper[4861]: I0310 19:04:00.797660 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29552824-ftlrt"] Mar 10 19:04:01 crc kubenswrapper[4861]: I0310 19:04:01.390036 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29552824-ftlrt" event={"ID":"e3ad2728-5796-4d31-ab8d-2a18a41b5687","Type":"ContainerStarted","Data":"2f64b89b39d9838d2e31e32642955e363166912c4c78dc77c6fa3ffd4d1df266"} Mar 10 19:04:02 crc kubenswrapper[4861]: I0310 19:04:02.400691 4861 generic.go:334] "Generic (PLEG): container finished" podID="e3ad2728-5796-4d31-ab8d-2a18a41b5687" containerID="4eb3c3c03c4cfc98f63a24b33a56fccb9c3dc632a0be220ab913b7c8c5d3a577" exitCode=0 Mar 10 19:04:02 crc kubenswrapper[4861]: I0310 19:04:02.400792 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-infra/auto-csr-approver-29552824-ftlrt" event={"ID":"e3ad2728-5796-4d31-ab8d-2a18a41b5687","Type":"ContainerDied","Data":"4eb3c3c03c4cfc98f63a24b33a56fccb9c3dc632a0be220ab913b7c8c5d3a577"} Mar 10 19:04:03 crc kubenswrapper[4861]: I0310 19:04:03.726277 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29552824-ftlrt" Mar 10 19:04:03 crc kubenswrapper[4861]: I0310 19:04:03.791984 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fvbdj\" (UniqueName: \"kubernetes.io/projected/e3ad2728-5796-4d31-ab8d-2a18a41b5687-kube-api-access-fvbdj\") pod \"e3ad2728-5796-4d31-ab8d-2a18a41b5687\" (UID: \"e3ad2728-5796-4d31-ab8d-2a18a41b5687\") " Mar 10 19:04:03 crc kubenswrapper[4861]: I0310 19:04:03.798462 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e3ad2728-5796-4d31-ab8d-2a18a41b5687-kube-api-access-fvbdj" (OuterVolumeSpecName: "kube-api-access-fvbdj") pod "e3ad2728-5796-4d31-ab8d-2a18a41b5687" (UID: "e3ad2728-5796-4d31-ab8d-2a18a41b5687"). InnerVolumeSpecName "kube-api-access-fvbdj". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 10 19:04:03 crc kubenswrapper[4861]: I0310 19:04:03.893116 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fvbdj\" (UniqueName: \"kubernetes.io/projected/e3ad2728-5796-4d31-ab8d-2a18a41b5687-kube-api-access-fvbdj\") on node \"crc\" DevicePath \"\"" Mar 10 19:04:04 crc kubenswrapper[4861]: I0310 19:04:04.416656 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29552824-ftlrt" event={"ID":"e3ad2728-5796-4d31-ab8d-2a18a41b5687","Type":"ContainerDied","Data":"2f64b89b39d9838d2e31e32642955e363166912c4c78dc77c6fa3ffd4d1df266"} Mar 10 19:04:04 crc kubenswrapper[4861]: I0310 19:04:04.416726 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29552824-ftlrt" Mar 10 19:04:04 crc kubenswrapper[4861]: I0310 19:04:04.416747 4861 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2f64b89b39d9838d2e31e32642955e363166912c4c78dc77c6fa3ffd4d1df266" Mar 10 19:04:04 crc kubenswrapper[4861]: I0310 19:04:04.822107 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29552818-2xtjk"] Mar 10 19:04:04 crc kubenswrapper[4861]: I0310 19:04:04.829620 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29552818-2xtjk"] Mar 10 19:04:04 crc kubenswrapper[4861]: I0310 19:04:04.966500 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1b460b8b-19f4-4c19-a908-24cf3ceda286" path="/var/lib/kubelet/pods/1b460b8b-19f4-4c19-a908-24cf3ceda286/volumes" Mar 10 19:04:08 crc kubenswrapper[4861]: I0310 19:04:08.160336 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-569d9ccd5-tf6qj"] Mar 10 19:04:08 crc kubenswrapper[4861]: E0310 19:04:08.160794 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e3ad2728-5796-4d31-ab8d-2a18a41b5687" containerName="oc" Mar 10 19:04:08 crc kubenswrapper[4861]: I0310 19:04:08.160806 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="e3ad2728-5796-4d31-ab8d-2a18a41b5687" containerName="oc" Mar 10 19:04:08 crc kubenswrapper[4861]: I0310 19:04:08.160918 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="e3ad2728-5796-4d31-ab8d-2a18a41b5687" containerName="oc" Mar 10 19:04:08 crc kubenswrapper[4861]: I0310 19:04:08.161267 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-569d9ccd5-tf6qj" Mar 10 19:04:08 crc kubenswrapper[4861]: I0310 19:04:08.168080 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt" Mar 10 19:04:08 crc kubenswrapper[4861]: I0310 19:04:08.168467 4861 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert" Mar 10 19:04:08 crc kubenswrapper[4861]: I0310 19:04:08.168496 4861 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert" Mar 10 19:04:08 crc kubenswrapper[4861]: I0310 19:04:08.168918 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt" Mar 10 19:04:08 crc kubenswrapper[4861]: I0310 19:04:08.168969 4861 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-p74kv" Mar 10 19:04:08 crc kubenswrapper[4861]: I0310 19:04:08.180062 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-569d9ccd5-tf6qj"] Mar 10 19:04:08 crc kubenswrapper[4861]: I0310 19:04:08.358010 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/71fe4ef2-5b4c-4842-978e-3fa1b4d71ade-webhook-cert\") pod \"metallb-operator-controller-manager-569d9ccd5-tf6qj\" (UID: \"71fe4ef2-5b4c-4842-978e-3fa1b4d71ade\") " pod="metallb-system/metallb-operator-controller-manager-569d9ccd5-tf6qj" Mar 10 19:04:08 crc kubenswrapper[4861]: I0310 19:04:08.358049 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-69fsc\" (UniqueName: \"kubernetes.io/projected/71fe4ef2-5b4c-4842-978e-3fa1b4d71ade-kube-api-access-69fsc\") pod 
\"metallb-operator-controller-manager-569d9ccd5-tf6qj\" (UID: \"71fe4ef2-5b4c-4842-978e-3fa1b4d71ade\") " pod="metallb-system/metallb-operator-controller-manager-569d9ccd5-tf6qj" Mar 10 19:04:08 crc kubenswrapper[4861]: I0310 19:04:08.358087 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/71fe4ef2-5b4c-4842-978e-3fa1b4d71ade-apiservice-cert\") pod \"metallb-operator-controller-manager-569d9ccd5-tf6qj\" (UID: \"71fe4ef2-5b4c-4842-978e-3fa1b4d71ade\") " pod="metallb-system/metallb-operator-controller-manager-569d9ccd5-tf6qj" Mar 10 19:04:08 crc kubenswrapper[4861]: I0310 19:04:08.459597 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/71fe4ef2-5b4c-4842-978e-3fa1b4d71ade-webhook-cert\") pod \"metallb-operator-controller-manager-569d9ccd5-tf6qj\" (UID: \"71fe4ef2-5b4c-4842-978e-3fa1b4d71ade\") " pod="metallb-system/metallb-operator-controller-manager-569d9ccd5-tf6qj" Mar 10 19:04:08 crc kubenswrapper[4861]: I0310 19:04:08.459649 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-69fsc\" (UniqueName: \"kubernetes.io/projected/71fe4ef2-5b4c-4842-978e-3fa1b4d71ade-kube-api-access-69fsc\") pod \"metallb-operator-controller-manager-569d9ccd5-tf6qj\" (UID: \"71fe4ef2-5b4c-4842-978e-3fa1b4d71ade\") " pod="metallb-system/metallb-operator-controller-manager-569d9ccd5-tf6qj" Mar 10 19:04:08 crc kubenswrapper[4861]: I0310 19:04:08.459696 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/71fe4ef2-5b4c-4842-978e-3fa1b4d71ade-apiservice-cert\") pod \"metallb-operator-controller-manager-569d9ccd5-tf6qj\" (UID: \"71fe4ef2-5b4c-4842-978e-3fa1b4d71ade\") " pod="metallb-system/metallb-operator-controller-manager-569d9ccd5-tf6qj" Mar 10 19:04:08 crc 
kubenswrapper[4861]: I0310 19:04:08.463970 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/71fe4ef2-5b4c-4842-978e-3fa1b4d71ade-apiservice-cert\") pod \"metallb-operator-controller-manager-569d9ccd5-tf6qj\" (UID: \"71fe4ef2-5b4c-4842-978e-3fa1b4d71ade\") " pod="metallb-system/metallb-operator-controller-manager-569d9ccd5-tf6qj" Mar 10 19:04:08 crc kubenswrapper[4861]: I0310 19:04:08.464164 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/71fe4ef2-5b4c-4842-978e-3fa1b4d71ade-webhook-cert\") pod \"metallb-operator-controller-manager-569d9ccd5-tf6qj\" (UID: \"71fe4ef2-5b4c-4842-978e-3fa1b4d71ade\") " pod="metallb-system/metallb-operator-controller-manager-569d9ccd5-tf6qj" Mar 10 19:04:08 crc kubenswrapper[4861]: I0310 19:04:08.484496 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-69fsc\" (UniqueName: \"kubernetes.io/projected/71fe4ef2-5b4c-4842-978e-3fa1b4d71ade-kube-api-access-69fsc\") pod \"metallb-operator-controller-manager-569d9ccd5-tf6qj\" (UID: \"71fe4ef2-5b4c-4842-978e-3fa1b4d71ade\") " pod="metallb-system/metallb-operator-controller-manager-569d9ccd5-tf6qj" Mar 10 19:04:08 crc kubenswrapper[4861]: I0310 19:04:08.502383 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-79b7ddc8f8-stpjm"] Mar 10 19:04:08 crc kubenswrapper[4861]: I0310 19:04:08.503027 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-79b7ddc8f8-stpjm" Mar 10 19:04:08 crc kubenswrapper[4861]: I0310 19:04:08.505504 4861 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert" Mar 10 19:04:08 crc kubenswrapper[4861]: I0310 19:04:08.505743 4861 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-vhrpx" Mar 10 19:04:08 crc kubenswrapper[4861]: I0310 19:04:08.506530 4861 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Mar 10 19:04:08 crc kubenswrapper[4861]: I0310 19:04:08.556601 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-79b7ddc8f8-stpjm"] Mar 10 19:04:08 crc kubenswrapper[4861]: I0310 19:04:08.662257 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2mqpg\" (UniqueName: \"kubernetes.io/projected/a772e923-abe7-448d-978c-de1cf0020a82-kube-api-access-2mqpg\") pod \"metallb-operator-webhook-server-79b7ddc8f8-stpjm\" (UID: \"a772e923-abe7-448d-978c-de1cf0020a82\") " pod="metallb-system/metallb-operator-webhook-server-79b7ddc8f8-stpjm" Mar 10 19:04:08 crc kubenswrapper[4861]: I0310 19:04:08.662344 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/a772e923-abe7-448d-978c-de1cf0020a82-apiservice-cert\") pod \"metallb-operator-webhook-server-79b7ddc8f8-stpjm\" (UID: \"a772e923-abe7-448d-978c-de1cf0020a82\") " pod="metallb-system/metallb-operator-webhook-server-79b7ddc8f8-stpjm" Mar 10 19:04:08 crc kubenswrapper[4861]: I0310 19:04:08.662381 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: 
\"kubernetes.io/secret/a772e923-abe7-448d-978c-de1cf0020a82-webhook-cert\") pod \"metallb-operator-webhook-server-79b7ddc8f8-stpjm\" (UID: \"a772e923-abe7-448d-978c-de1cf0020a82\") " pod="metallb-system/metallb-operator-webhook-server-79b7ddc8f8-stpjm" Mar 10 19:04:08 crc kubenswrapper[4861]: I0310 19:04:08.763679 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2mqpg\" (UniqueName: \"kubernetes.io/projected/a772e923-abe7-448d-978c-de1cf0020a82-kube-api-access-2mqpg\") pod \"metallb-operator-webhook-server-79b7ddc8f8-stpjm\" (UID: \"a772e923-abe7-448d-978c-de1cf0020a82\") " pod="metallb-system/metallb-operator-webhook-server-79b7ddc8f8-stpjm" Mar 10 19:04:08 crc kubenswrapper[4861]: I0310 19:04:08.764222 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/a772e923-abe7-448d-978c-de1cf0020a82-apiservice-cert\") pod \"metallb-operator-webhook-server-79b7ddc8f8-stpjm\" (UID: \"a772e923-abe7-448d-978c-de1cf0020a82\") " pod="metallb-system/metallb-operator-webhook-server-79b7ddc8f8-stpjm" Mar 10 19:04:08 crc kubenswrapper[4861]: I0310 19:04:08.764417 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/a772e923-abe7-448d-978c-de1cf0020a82-webhook-cert\") pod \"metallb-operator-webhook-server-79b7ddc8f8-stpjm\" (UID: \"a772e923-abe7-448d-978c-de1cf0020a82\") " pod="metallb-system/metallb-operator-webhook-server-79b7ddc8f8-stpjm" Mar 10 19:04:08 crc kubenswrapper[4861]: I0310 19:04:08.768744 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/a772e923-abe7-448d-978c-de1cf0020a82-webhook-cert\") pod \"metallb-operator-webhook-server-79b7ddc8f8-stpjm\" (UID: \"a772e923-abe7-448d-978c-de1cf0020a82\") " pod="metallb-system/metallb-operator-webhook-server-79b7ddc8f8-stpjm" Mar 10 19:04:08 crc 
kubenswrapper[4861]: I0310 19:04:08.772170 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/a772e923-abe7-448d-978c-de1cf0020a82-apiservice-cert\") pod \"metallb-operator-webhook-server-79b7ddc8f8-stpjm\" (UID: \"a772e923-abe7-448d-978c-de1cf0020a82\") " pod="metallb-system/metallb-operator-webhook-server-79b7ddc8f8-stpjm" Mar 10 19:04:08 crc kubenswrapper[4861]: I0310 19:04:08.783130 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-569d9ccd5-tf6qj" Mar 10 19:04:08 crc kubenswrapper[4861]: I0310 19:04:08.812990 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2mqpg\" (UniqueName: \"kubernetes.io/projected/a772e923-abe7-448d-978c-de1cf0020a82-kube-api-access-2mqpg\") pod \"metallb-operator-webhook-server-79b7ddc8f8-stpjm\" (UID: \"a772e923-abe7-448d-978c-de1cf0020a82\") " pod="metallb-system/metallb-operator-webhook-server-79b7ddc8f8-stpjm" Mar 10 19:04:08 crc kubenswrapper[4861]: I0310 19:04:08.825866 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-79b7ddc8f8-stpjm" Mar 10 19:04:09 crc kubenswrapper[4861]: I0310 19:04:09.211672 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-569d9ccd5-tf6qj"] Mar 10 19:04:09 crc kubenswrapper[4861]: I0310 19:04:09.316972 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-79b7ddc8f8-stpjm"] Mar 10 19:04:09 crc kubenswrapper[4861]: W0310 19:04:09.319737 4861 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda772e923_abe7_448d_978c_de1cf0020a82.slice/crio-0e4c263717987ccec6586360e35722f9bf00d6d0502048c36d68ab65953c5390 WatchSource:0}: Error finding container 0e4c263717987ccec6586360e35722f9bf00d6d0502048c36d68ab65953c5390: Status 404 returned error can't find the container with id 0e4c263717987ccec6586360e35722f9bf00d6d0502048c36d68ab65953c5390 Mar 10 19:04:09 crc kubenswrapper[4861]: I0310 19:04:09.446419 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-569d9ccd5-tf6qj" event={"ID":"71fe4ef2-5b4c-4842-978e-3fa1b4d71ade","Type":"ContainerStarted","Data":"6e5487be4c7751929923eac9244e52c110d464dfe25a50650063644593c666a6"} Mar 10 19:04:09 crc kubenswrapper[4861]: I0310 19:04:09.447717 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-79b7ddc8f8-stpjm" event={"ID":"a772e923-abe7-448d-978c-de1cf0020a82","Type":"ContainerStarted","Data":"0e4c263717987ccec6586360e35722f9bf00d6d0502048c36d68ab65953c5390"} Mar 10 19:04:14 crc kubenswrapper[4861]: I0310 19:04:14.489359 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-569d9ccd5-tf6qj" 
event={"ID":"71fe4ef2-5b4c-4842-978e-3fa1b4d71ade","Type":"ContainerStarted","Data":"836a3ddc3df7bc45c66b9f11dbd135dc45f38bfe0f42f8831832dc4114ddb052"} Mar 10 19:04:14 crc kubenswrapper[4861]: I0310 19:04:14.490388 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-569d9ccd5-tf6qj" Mar 10 19:04:14 crc kubenswrapper[4861]: I0310 19:04:14.493434 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-79b7ddc8f8-stpjm" event={"ID":"a772e923-abe7-448d-978c-de1cf0020a82","Type":"ContainerStarted","Data":"cb4d845a7c2340ae77516cffc510d3c7292313ae1e0aabfd187df679685f4704"} Mar 10 19:04:14 crc kubenswrapper[4861]: I0310 19:04:14.493599 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-79b7ddc8f8-stpjm" Mar 10 19:04:14 crc kubenswrapper[4861]: I0310 19:04:14.526518 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-569d9ccd5-tf6qj" podStartSLOduration=1.5874462889999998 podStartE2EDuration="6.526499209s" podCreationTimestamp="2026-03-10 19:04:08 +0000 UTC" firstStartedPulling="2026-03-10 19:04:09.216904053 +0000 UTC m=+992.980340013" lastFinishedPulling="2026-03-10 19:04:14.155956933 +0000 UTC m=+997.919392933" observedRunningTime="2026-03-10 19:04:14.526007815 +0000 UTC m=+998.289443845" watchObservedRunningTime="2026-03-10 19:04:14.526499209 +0000 UTC m=+998.289935179" Mar 10 19:04:14 crc kubenswrapper[4861]: I0310 19:04:14.556090 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-79b7ddc8f8-stpjm" podStartSLOduration=1.6993183090000001 podStartE2EDuration="6.55606519s" podCreationTimestamp="2026-03-10 19:04:08 +0000 UTC" firstStartedPulling="2026-03-10 19:04:09.323097017 +0000 UTC m=+993.086532977" lastFinishedPulling="2026-03-10 
19:04:14.179843858 +0000 UTC m=+997.943279858" observedRunningTime="2026-03-10 19:04:14.5502045 +0000 UTC m=+998.313640520" watchObservedRunningTime="2026-03-10 19:04:14.55606519 +0000 UTC m=+998.319501190" Mar 10 19:04:28 crc kubenswrapper[4861]: I0310 19:04:28.838456 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-79b7ddc8f8-stpjm" Mar 10 19:04:37 crc kubenswrapper[4861]: I0310 19:04:37.788389 4861 scope.go:117] "RemoveContainer" containerID="638526d5befcc48a5a0588cdbda3db4c549bdf9d39e06f68c5abd13aff080118" Mar 10 19:04:48 crc kubenswrapper[4861]: I0310 19:04:48.786799 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-569d9ccd5-tf6qj" Mar 10 19:04:49 crc kubenswrapper[4861]: I0310 19:04:49.462112 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-mxnp2"] Mar 10 19:04:49 crc kubenswrapper[4861]: I0310 19:04:49.464462 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-mxnp2" Mar 10 19:04:49 crc kubenswrapper[4861]: I0310 19:04:49.467609 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-7f989f654f-mbb6b"] Mar 10 19:04:49 crc kubenswrapper[4861]: I0310 19:04:49.468373 4861 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-wzgt2" Mar 10 19:04:49 crc kubenswrapper[4861]: I0310 19:04:49.468571 4861 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret" Mar 10 19:04:49 crc kubenswrapper[4861]: I0310 19:04:49.468629 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7f989f654f-mbb6b" Mar 10 19:04:49 crc kubenswrapper[4861]: I0310 19:04:49.468688 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup" Mar 10 19:04:49 crc kubenswrapper[4861]: I0310 19:04:49.473172 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7f989f654f-mbb6b"] Mar 10 19:04:49 crc kubenswrapper[4861]: I0310 19:04:49.478031 4861 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert" Mar 10 19:04:49 crc kubenswrapper[4861]: I0310 19:04:49.535060 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-bvtm5"] Mar 10 19:04:49 crc kubenswrapper[4861]: I0310 19:04:49.535927 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-bvtm5" Mar 10 19:04:49 crc kubenswrapper[4861]: I0310 19:04:49.538851 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2" Mar 10 19:04:49 crc kubenswrapper[4861]: I0310 19:04:49.538869 4861 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-8fvbp" Mar 10 19:04:49 crc kubenswrapper[4861]: I0310 19:04:49.538856 4861 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret" Mar 10 19:04:49 crc kubenswrapper[4861]: I0310 19:04:49.539219 4861 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist" Mar 10 19:04:49 crc kubenswrapper[4861]: I0310 19:04:49.567009 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-86ddb6bd46-zfmmn"] Mar 10 19:04:49 crc kubenswrapper[4861]: I0310 19:04:49.569875 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-86ddb6bd46-zfmmn" Mar 10 19:04:49 crc kubenswrapper[4861]: I0310 19:04:49.571229 4861 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret" Mar 10 19:04:49 crc kubenswrapper[4861]: I0310 19:04:49.585437 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-86ddb6bd46-zfmmn"] Mar 10 19:04:49 crc kubenswrapper[4861]: I0310 19:04:49.631176 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/b7c09d64-564e-4dad-91c3-ecaf75f1a6a4-cert\") pod \"frr-k8s-webhook-server-7f989f654f-mbb6b\" (UID: \"b7c09d64-564e-4dad-91c3-ecaf75f1a6a4\") " pod="metallb-system/frr-k8s-webhook-server-7f989f654f-mbb6b" Mar 10 19:04:49 crc kubenswrapper[4861]: I0310 19:04:49.631214 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9mmxh\" (UniqueName: \"kubernetes.io/projected/b7c09d64-564e-4dad-91c3-ecaf75f1a6a4-kube-api-access-9mmxh\") pod \"frr-k8s-webhook-server-7f989f654f-mbb6b\" (UID: \"b7c09d64-564e-4dad-91c3-ecaf75f1a6a4\") " pod="metallb-system/frr-k8s-webhook-server-7f989f654f-mbb6b" Mar 10 19:04:49 crc kubenswrapper[4861]: I0310 19:04:49.631245 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/d76161e5-e488-4365-976f-5487ba4fa265-metrics\") pod \"frr-k8s-mxnp2\" (UID: \"d76161e5-e488-4365-976f-5487ba4fa265\") " pod="metallb-system/frr-k8s-mxnp2" Mar 10 19:04:49 crc kubenswrapper[4861]: I0310 19:04:49.631271 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rf24q\" (UniqueName: \"kubernetes.io/projected/f8951eba-63cf-4bd6-a2d6-6829c198ac80-kube-api-access-rf24q\") pod \"speaker-bvtm5\" (UID: 
\"f8951eba-63cf-4bd6-a2d6-6829c198ac80\") " pod="metallb-system/speaker-bvtm5" Mar 10 19:04:49 crc kubenswrapper[4861]: I0310 19:04:49.631292 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/f8951eba-63cf-4bd6-a2d6-6829c198ac80-memberlist\") pod \"speaker-bvtm5\" (UID: \"f8951eba-63cf-4bd6-a2d6-6829c198ac80\") " pod="metallb-system/speaker-bvtm5" Mar 10 19:04:49 crc kubenswrapper[4861]: I0310 19:04:49.631309 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/d76161e5-e488-4365-976f-5487ba4fa265-frr-conf\") pod \"frr-k8s-mxnp2\" (UID: \"d76161e5-e488-4365-976f-5487ba4fa265\") " pod="metallb-system/frr-k8s-mxnp2" Mar 10 19:04:49 crc kubenswrapper[4861]: I0310 19:04:49.631328 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f8951eba-63cf-4bd6-a2d6-6829c198ac80-metrics-certs\") pod \"speaker-bvtm5\" (UID: \"f8951eba-63cf-4bd6-a2d6-6829c198ac80\") " pod="metallb-system/speaker-bvtm5" Mar 10 19:04:49 crc kubenswrapper[4861]: I0310 19:04:49.631341 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/d76161e5-e488-4365-976f-5487ba4fa265-reloader\") pod \"frr-k8s-mxnp2\" (UID: \"d76161e5-e488-4365-976f-5487ba4fa265\") " pod="metallb-system/frr-k8s-mxnp2" Mar 10 19:04:49 crc kubenswrapper[4861]: I0310 19:04:49.631356 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/d76161e5-e488-4365-976f-5487ba4fa265-frr-startup\") pod \"frr-k8s-mxnp2\" (UID: \"d76161e5-e488-4365-976f-5487ba4fa265\") " pod="metallb-system/frr-k8s-mxnp2" Mar 10 19:04:49 crc kubenswrapper[4861]: I0310 
19:04:49.631371 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x54xp\" (UniqueName: \"kubernetes.io/projected/d76161e5-e488-4365-976f-5487ba4fa265-kube-api-access-x54xp\") pod \"frr-k8s-mxnp2\" (UID: \"d76161e5-e488-4365-976f-5487ba4fa265\") " pod="metallb-system/frr-k8s-mxnp2" Mar 10 19:04:49 crc kubenswrapper[4861]: I0310 19:04:49.631391 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/f8951eba-63cf-4bd6-a2d6-6829c198ac80-metallb-excludel2\") pod \"speaker-bvtm5\" (UID: \"f8951eba-63cf-4bd6-a2d6-6829c198ac80\") " pod="metallb-system/speaker-bvtm5" Mar 10 19:04:49 crc kubenswrapper[4861]: I0310 19:04:49.631411 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/d76161e5-e488-4365-976f-5487ba4fa265-frr-sockets\") pod \"frr-k8s-mxnp2\" (UID: \"d76161e5-e488-4365-976f-5487ba4fa265\") " pod="metallb-system/frr-k8s-mxnp2" Mar 10 19:04:49 crc kubenswrapper[4861]: I0310 19:04:49.631430 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d76161e5-e488-4365-976f-5487ba4fa265-metrics-certs\") pod \"frr-k8s-mxnp2\" (UID: \"d76161e5-e488-4365-976f-5487ba4fa265\") " pod="metallb-system/frr-k8s-mxnp2" Mar 10 19:04:49 crc kubenswrapper[4861]: I0310 19:04:49.732271 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/d76161e5-e488-4365-976f-5487ba4fa265-metrics\") pod \"frr-k8s-mxnp2\" (UID: \"d76161e5-e488-4365-976f-5487ba4fa265\") " pod="metallb-system/frr-k8s-mxnp2" Mar 10 19:04:49 crc kubenswrapper[4861]: I0310 19:04:49.732316 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"cert\" (UniqueName: \"kubernetes.io/secret/25270671-6926-47af-bb35-43c48308f5fd-cert\") pod \"controller-86ddb6bd46-zfmmn\" (UID: \"25270671-6926-47af-bb35-43c48308f5fd\") " pod="metallb-system/controller-86ddb6bd46-zfmmn" Mar 10 19:04:49 crc kubenswrapper[4861]: I0310 19:04:49.732342 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rf24q\" (UniqueName: \"kubernetes.io/projected/f8951eba-63cf-4bd6-a2d6-6829c198ac80-kube-api-access-rf24q\") pod \"speaker-bvtm5\" (UID: \"f8951eba-63cf-4bd6-a2d6-6829c198ac80\") " pod="metallb-system/speaker-bvtm5" Mar 10 19:04:49 crc kubenswrapper[4861]: I0310 19:04:49.732364 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/f8951eba-63cf-4bd6-a2d6-6829c198ac80-memberlist\") pod \"speaker-bvtm5\" (UID: \"f8951eba-63cf-4bd6-a2d6-6829c198ac80\") " pod="metallb-system/speaker-bvtm5" Mar 10 19:04:49 crc kubenswrapper[4861]: I0310 19:04:49.732381 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/25270671-6926-47af-bb35-43c48308f5fd-metrics-certs\") pod \"controller-86ddb6bd46-zfmmn\" (UID: \"25270671-6926-47af-bb35-43c48308f5fd\") " pod="metallb-system/controller-86ddb6bd46-zfmmn" Mar 10 19:04:49 crc kubenswrapper[4861]: I0310 19:04:49.732397 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/d76161e5-e488-4365-976f-5487ba4fa265-frr-conf\") pod \"frr-k8s-mxnp2\" (UID: \"d76161e5-e488-4365-976f-5487ba4fa265\") " pod="metallb-system/frr-k8s-mxnp2" Mar 10 19:04:49 crc kubenswrapper[4861]: I0310 19:04:49.732412 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-29kvr\" (UniqueName: 
\"kubernetes.io/projected/25270671-6926-47af-bb35-43c48308f5fd-kube-api-access-29kvr\") pod \"controller-86ddb6bd46-zfmmn\" (UID: \"25270671-6926-47af-bb35-43c48308f5fd\") " pod="metallb-system/controller-86ddb6bd46-zfmmn" Mar 10 19:04:49 crc kubenswrapper[4861]: I0310 19:04:49.732431 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f8951eba-63cf-4bd6-a2d6-6829c198ac80-metrics-certs\") pod \"speaker-bvtm5\" (UID: \"f8951eba-63cf-4bd6-a2d6-6829c198ac80\") " pod="metallb-system/speaker-bvtm5" Mar 10 19:04:49 crc kubenswrapper[4861]: I0310 19:04:49.732446 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/d76161e5-e488-4365-976f-5487ba4fa265-reloader\") pod \"frr-k8s-mxnp2\" (UID: \"d76161e5-e488-4365-976f-5487ba4fa265\") " pod="metallb-system/frr-k8s-mxnp2" Mar 10 19:04:49 crc kubenswrapper[4861]: I0310 19:04:49.732462 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/d76161e5-e488-4365-976f-5487ba4fa265-frr-startup\") pod \"frr-k8s-mxnp2\" (UID: \"d76161e5-e488-4365-976f-5487ba4fa265\") " pod="metallb-system/frr-k8s-mxnp2" Mar 10 19:04:49 crc kubenswrapper[4861]: I0310 19:04:49.732476 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x54xp\" (UniqueName: \"kubernetes.io/projected/d76161e5-e488-4365-976f-5487ba4fa265-kube-api-access-x54xp\") pod \"frr-k8s-mxnp2\" (UID: \"d76161e5-e488-4365-976f-5487ba4fa265\") " pod="metallb-system/frr-k8s-mxnp2" Mar 10 19:04:49 crc kubenswrapper[4861]: I0310 19:04:49.732497 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/f8951eba-63cf-4bd6-a2d6-6829c198ac80-metallb-excludel2\") pod \"speaker-bvtm5\" (UID: 
\"f8951eba-63cf-4bd6-a2d6-6829c198ac80\") " pod="metallb-system/speaker-bvtm5" Mar 10 19:04:49 crc kubenswrapper[4861]: I0310 19:04:49.732518 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/d76161e5-e488-4365-976f-5487ba4fa265-frr-sockets\") pod \"frr-k8s-mxnp2\" (UID: \"d76161e5-e488-4365-976f-5487ba4fa265\") " pod="metallb-system/frr-k8s-mxnp2" Mar 10 19:04:49 crc kubenswrapper[4861]: I0310 19:04:49.732539 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d76161e5-e488-4365-976f-5487ba4fa265-metrics-certs\") pod \"frr-k8s-mxnp2\" (UID: \"d76161e5-e488-4365-976f-5487ba4fa265\") " pod="metallb-system/frr-k8s-mxnp2" Mar 10 19:04:49 crc kubenswrapper[4861]: I0310 19:04:49.732561 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/b7c09d64-564e-4dad-91c3-ecaf75f1a6a4-cert\") pod \"frr-k8s-webhook-server-7f989f654f-mbb6b\" (UID: \"b7c09d64-564e-4dad-91c3-ecaf75f1a6a4\") " pod="metallb-system/frr-k8s-webhook-server-7f989f654f-mbb6b" Mar 10 19:04:49 crc kubenswrapper[4861]: I0310 19:04:49.732577 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9mmxh\" (UniqueName: \"kubernetes.io/projected/b7c09d64-564e-4dad-91c3-ecaf75f1a6a4-kube-api-access-9mmxh\") pod \"frr-k8s-webhook-server-7f989f654f-mbb6b\" (UID: \"b7c09d64-564e-4dad-91c3-ecaf75f1a6a4\") " pod="metallb-system/frr-k8s-webhook-server-7f989f654f-mbb6b" Mar 10 19:04:49 crc kubenswrapper[4861]: I0310 19:04:49.733291 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/d76161e5-e488-4365-976f-5487ba4fa265-metrics\") pod \"frr-k8s-mxnp2\" (UID: \"d76161e5-e488-4365-976f-5487ba4fa265\") " pod="metallb-system/frr-k8s-mxnp2" Mar 10 19:04:49 crc 
kubenswrapper[4861]: E0310 19:04:49.733497 4861 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Mar 10 19:04:49 crc kubenswrapper[4861]: E0310 19:04:49.733541 4861 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f8951eba-63cf-4bd6-a2d6-6829c198ac80-memberlist podName:f8951eba-63cf-4bd6-a2d6-6829c198ac80 nodeName:}" failed. No retries permitted until 2026-03-10 19:04:50.233526717 +0000 UTC m=+1033.996962677 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/f8951eba-63cf-4bd6-a2d6-6829c198ac80-memberlist") pod "speaker-bvtm5" (UID: "f8951eba-63cf-4bd6-a2d6-6829c198ac80") : secret "metallb-memberlist" not found Mar 10 19:04:49 crc kubenswrapper[4861]: I0310 19:04:49.733861 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/d76161e5-e488-4365-976f-5487ba4fa265-frr-conf\") pod \"frr-k8s-mxnp2\" (UID: \"d76161e5-e488-4365-976f-5487ba4fa265\") " pod="metallb-system/frr-k8s-mxnp2" Mar 10 19:04:49 crc kubenswrapper[4861]: I0310 19:04:49.734404 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/d76161e5-e488-4365-976f-5487ba4fa265-reloader\") pod \"frr-k8s-mxnp2\" (UID: \"d76161e5-e488-4365-976f-5487ba4fa265\") " pod="metallb-system/frr-k8s-mxnp2" Mar 10 19:04:49 crc kubenswrapper[4861]: I0310 19:04:49.735305 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/d76161e5-e488-4365-976f-5487ba4fa265-frr-startup\") pod \"frr-k8s-mxnp2\" (UID: \"d76161e5-e488-4365-976f-5487ba4fa265\") " pod="metallb-system/frr-k8s-mxnp2" Mar 10 19:04:49 crc kubenswrapper[4861]: E0310 19:04:49.735519 4861 secret.go:188] Couldn't get secret metallb-system/frr-k8s-webhook-server-cert: secret "frr-k8s-webhook-server-cert" 
not found Mar 10 19:04:49 crc kubenswrapper[4861]: E0310 19:04:49.735582 4861 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b7c09d64-564e-4dad-91c3-ecaf75f1a6a4-cert podName:b7c09d64-564e-4dad-91c3-ecaf75f1a6a4 nodeName:}" failed. No retries permitted until 2026-03-10 19:04:50.235566297 +0000 UTC m=+1033.999002257 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/b7c09d64-564e-4dad-91c3-ecaf75f1a6a4-cert") pod "frr-k8s-webhook-server-7f989f654f-mbb6b" (UID: "b7c09d64-564e-4dad-91c3-ecaf75f1a6a4") : secret "frr-k8s-webhook-server-cert" not found Mar 10 19:04:49 crc kubenswrapper[4861]: I0310 19:04:49.736183 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/f8951eba-63cf-4bd6-a2d6-6829c198ac80-metallb-excludel2\") pod \"speaker-bvtm5\" (UID: \"f8951eba-63cf-4bd6-a2d6-6829c198ac80\") " pod="metallb-system/speaker-bvtm5" Mar 10 19:04:49 crc kubenswrapper[4861]: I0310 19:04:49.736218 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/d76161e5-e488-4365-976f-5487ba4fa265-frr-sockets\") pod \"frr-k8s-mxnp2\" (UID: \"d76161e5-e488-4365-976f-5487ba4fa265\") " pod="metallb-system/frr-k8s-mxnp2" Mar 10 19:04:49 crc kubenswrapper[4861]: I0310 19:04:49.741765 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f8951eba-63cf-4bd6-a2d6-6829c198ac80-metrics-certs\") pod \"speaker-bvtm5\" (UID: \"f8951eba-63cf-4bd6-a2d6-6829c198ac80\") " pod="metallb-system/speaker-bvtm5" Mar 10 19:04:49 crc kubenswrapper[4861]: I0310 19:04:49.742159 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d76161e5-e488-4365-976f-5487ba4fa265-metrics-certs\") pod \"frr-k8s-mxnp2\" (UID: 
\"d76161e5-e488-4365-976f-5487ba4fa265\") " pod="metallb-system/frr-k8s-mxnp2" Mar 10 19:04:49 crc kubenswrapper[4861]: I0310 19:04:49.750009 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rf24q\" (UniqueName: \"kubernetes.io/projected/f8951eba-63cf-4bd6-a2d6-6829c198ac80-kube-api-access-rf24q\") pod \"speaker-bvtm5\" (UID: \"f8951eba-63cf-4bd6-a2d6-6829c198ac80\") " pod="metallb-system/speaker-bvtm5" Mar 10 19:04:49 crc kubenswrapper[4861]: I0310 19:04:49.752968 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x54xp\" (UniqueName: \"kubernetes.io/projected/d76161e5-e488-4365-976f-5487ba4fa265-kube-api-access-x54xp\") pod \"frr-k8s-mxnp2\" (UID: \"d76161e5-e488-4365-976f-5487ba4fa265\") " pod="metallb-system/frr-k8s-mxnp2" Mar 10 19:04:49 crc kubenswrapper[4861]: I0310 19:04:49.759935 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9mmxh\" (UniqueName: \"kubernetes.io/projected/b7c09d64-564e-4dad-91c3-ecaf75f1a6a4-kube-api-access-9mmxh\") pod \"frr-k8s-webhook-server-7f989f654f-mbb6b\" (UID: \"b7c09d64-564e-4dad-91c3-ecaf75f1a6a4\") " pod="metallb-system/frr-k8s-webhook-server-7f989f654f-mbb6b" Mar 10 19:04:49 crc kubenswrapper[4861]: I0310 19:04:49.790640 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-mxnp2" Mar 10 19:04:49 crc kubenswrapper[4861]: I0310 19:04:49.833855 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/25270671-6926-47af-bb35-43c48308f5fd-cert\") pod \"controller-86ddb6bd46-zfmmn\" (UID: \"25270671-6926-47af-bb35-43c48308f5fd\") " pod="metallb-system/controller-86ddb6bd46-zfmmn" Mar 10 19:04:49 crc kubenswrapper[4861]: I0310 19:04:49.834074 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/25270671-6926-47af-bb35-43c48308f5fd-metrics-certs\") pod \"controller-86ddb6bd46-zfmmn\" (UID: \"25270671-6926-47af-bb35-43c48308f5fd\") " pod="metallb-system/controller-86ddb6bd46-zfmmn" Mar 10 19:04:49 crc kubenswrapper[4861]: I0310 19:04:49.834108 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-29kvr\" (UniqueName: \"kubernetes.io/projected/25270671-6926-47af-bb35-43c48308f5fd-kube-api-access-29kvr\") pod \"controller-86ddb6bd46-zfmmn\" (UID: \"25270671-6926-47af-bb35-43c48308f5fd\") " pod="metallb-system/controller-86ddb6bd46-zfmmn" Mar 10 19:04:49 crc kubenswrapper[4861]: I0310 19:04:49.835878 4861 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Mar 10 19:04:49 crc kubenswrapper[4861]: I0310 19:04:49.844211 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/25270671-6926-47af-bb35-43c48308f5fd-metrics-certs\") pod \"controller-86ddb6bd46-zfmmn\" (UID: \"25270671-6926-47af-bb35-43c48308f5fd\") " pod="metallb-system/controller-86ddb6bd46-zfmmn" Mar 10 19:04:49 crc kubenswrapper[4861]: I0310 19:04:49.846807 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/25270671-6926-47af-bb35-43c48308f5fd-cert\") 
pod \"controller-86ddb6bd46-zfmmn\" (UID: \"25270671-6926-47af-bb35-43c48308f5fd\") " pod="metallb-system/controller-86ddb6bd46-zfmmn" Mar 10 19:04:49 crc kubenswrapper[4861]: I0310 19:04:49.849476 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-29kvr\" (UniqueName: \"kubernetes.io/projected/25270671-6926-47af-bb35-43c48308f5fd-kube-api-access-29kvr\") pod \"controller-86ddb6bd46-zfmmn\" (UID: \"25270671-6926-47af-bb35-43c48308f5fd\") " pod="metallb-system/controller-86ddb6bd46-zfmmn" Mar 10 19:04:49 crc kubenswrapper[4861]: I0310 19:04:49.881162 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/controller-86ddb6bd46-zfmmn" Mar 10 19:04:50 crc kubenswrapper[4861]: I0310 19:04:50.241181 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/f8951eba-63cf-4bd6-a2d6-6829c198ac80-memberlist\") pod \"speaker-bvtm5\" (UID: \"f8951eba-63cf-4bd6-a2d6-6829c198ac80\") " pod="metallb-system/speaker-bvtm5" Mar 10 19:04:50 crc kubenswrapper[4861]: E0310 19:04:50.241456 4861 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Mar 10 19:04:50 crc kubenswrapper[4861]: I0310 19:04:50.241679 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/b7c09d64-564e-4dad-91c3-ecaf75f1a6a4-cert\") pod \"frr-k8s-webhook-server-7f989f654f-mbb6b\" (UID: \"b7c09d64-564e-4dad-91c3-ecaf75f1a6a4\") " pod="metallb-system/frr-k8s-webhook-server-7f989f654f-mbb6b" Mar 10 19:04:50 crc kubenswrapper[4861]: E0310 19:04:50.241703 4861 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f8951eba-63cf-4bd6-a2d6-6829c198ac80-memberlist podName:f8951eba-63cf-4bd6-a2d6-6829c198ac80 nodeName:}" failed. 
No retries permitted until 2026-03-10 19:04:51.241676253 +0000 UTC m=+1035.005112223 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/f8951eba-63cf-4bd6-a2d6-6829c198ac80-memberlist") pod "speaker-bvtm5" (UID: "f8951eba-63cf-4bd6-a2d6-6829c198ac80") : secret "metallb-memberlist" not found Mar 10 19:04:50 crc kubenswrapper[4861]: I0310 19:04:50.247674 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/b7c09d64-564e-4dad-91c3-ecaf75f1a6a4-cert\") pod \"frr-k8s-webhook-server-7f989f654f-mbb6b\" (UID: \"b7c09d64-564e-4dad-91c3-ecaf75f1a6a4\") " pod="metallb-system/frr-k8s-webhook-server-7f989f654f-mbb6b" Mar 10 19:04:50 crc kubenswrapper[4861]: I0310 19:04:50.324767 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-86ddb6bd46-zfmmn"] Mar 10 19:04:50 crc kubenswrapper[4861]: W0310 19:04:50.331679 4861 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod25270671_6926_47af_bb35_43c48308f5fd.slice/crio-23bbfacf2d9046d8fb13287905b758504067a6a56b8cbf7d92d0bfd2a5da257b WatchSource:0}: Error finding container 23bbfacf2d9046d8fb13287905b758504067a6a56b8cbf7d92d0bfd2a5da257b: Status 404 returned error can't find the container with id 23bbfacf2d9046d8fb13287905b758504067a6a56b8cbf7d92d0bfd2a5da257b Mar 10 19:04:50 crc kubenswrapper[4861]: I0310 19:04:50.399625 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7f989f654f-mbb6b" Mar 10 19:04:50 crc kubenswrapper[4861]: W0310 19:04:50.672862 4861 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb7c09d64_564e_4dad_91c3_ecaf75f1a6a4.slice/crio-a18a8850a0b7b69d57a4d37a5a8af68c056f8bc46726618f144eabf8ce4e7d68 WatchSource:0}: Error finding container a18a8850a0b7b69d57a4d37a5a8af68c056f8bc46726618f144eabf8ce4e7d68: Status 404 returned error can't find the container with id a18a8850a0b7b69d57a4d37a5a8af68c056f8bc46726618f144eabf8ce4e7d68 Mar 10 19:04:50 crc kubenswrapper[4861]: I0310 19:04:50.677266 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7f989f654f-mbb6b"] Mar 10 19:04:50 crc kubenswrapper[4861]: I0310 19:04:50.737463 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-86ddb6bd46-zfmmn" event={"ID":"25270671-6926-47af-bb35-43c48308f5fd","Type":"ContainerStarted","Data":"54eedfaa3e0eb2c2b61bad5dcc94d551d3df6105cfca2d6f7574afa46921d826"} Mar 10 19:04:50 crc kubenswrapper[4861]: I0310 19:04:50.737539 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-86ddb6bd46-zfmmn" event={"ID":"25270671-6926-47af-bb35-43c48308f5fd","Type":"ContainerStarted","Data":"220b7aeb60cbc15235d1f889bdf74d730d141d529b306b5fbe0931b7f7bf7844"} Mar 10 19:04:50 crc kubenswrapper[4861]: I0310 19:04:50.737548 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-86ddb6bd46-zfmmn" event={"ID":"25270671-6926-47af-bb35-43c48308f5fd","Type":"ContainerStarted","Data":"23bbfacf2d9046d8fb13287905b758504067a6a56b8cbf7d92d0bfd2a5da257b"} Mar 10 19:04:50 crc kubenswrapper[4861]: I0310 19:04:50.737627 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-86ddb6bd46-zfmmn" Mar 10 19:04:50 crc kubenswrapper[4861]: I0310 
19:04:50.738607 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-mxnp2" event={"ID":"d76161e5-e488-4365-976f-5487ba4fa265","Type":"ContainerStarted","Data":"a4ce10578c560b858c5b7c6a5c801da2b3193f1a19f7afdf34f6f87c05c27096"} Mar 10 19:04:50 crc kubenswrapper[4861]: I0310 19:04:50.739565 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7f989f654f-mbb6b" event={"ID":"b7c09d64-564e-4dad-91c3-ecaf75f1a6a4","Type":"ContainerStarted","Data":"a18a8850a0b7b69d57a4d37a5a8af68c056f8bc46726618f144eabf8ce4e7d68"} Mar 10 19:04:50 crc kubenswrapper[4861]: I0310 19:04:50.756370 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-86ddb6bd46-zfmmn" podStartSLOduration=1.7563464 podStartE2EDuration="1.7563464s" podCreationTimestamp="2026-03-10 19:04:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-10 19:04:50.755528265 +0000 UTC m=+1034.518964275" watchObservedRunningTime="2026-03-10 19:04:50.7563464 +0000 UTC m=+1034.519782410" Mar 10 19:04:51 crc kubenswrapper[4861]: I0310 19:04:51.258222 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/f8951eba-63cf-4bd6-a2d6-6829c198ac80-memberlist\") pod \"speaker-bvtm5\" (UID: \"f8951eba-63cf-4bd6-a2d6-6829c198ac80\") " pod="metallb-system/speaker-bvtm5" Mar 10 19:04:51 crc kubenswrapper[4861]: I0310 19:04:51.268981 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/f8951eba-63cf-4bd6-a2d6-6829c198ac80-memberlist\") pod \"speaker-bvtm5\" (UID: \"f8951eba-63cf-4bd6-a2d6-6829c198ac80\") " pod="metallb-system/speaker-bvtm5" Mar 10 19:04:51 crc kubenswrapper[4861]: I0310 19:04:51.349987 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/speaker-bvtm5" Mar 10 19:04:51 crc kubenswrapper[4861]: I0310 19:04:51.749426 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-bvtm5" event={"ID":"f8951eba-63cf-4bd6-a2d6-6829c198ac80","Type":"ContainerStarted","Data":"d6bca5bcc2a44b3e5d12f59e381961f2298e571a10ae280aa857031b782db9f9"} Mar 10 19:04:51 crc kubenswrapper[4861]: I0310 19:04:51.749476 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-bvtm5" event={"ID":"f8951eba-63cf-4bd6-a2d6-6829c198ac80","Type":"ContainerStarted","Data":"648002cb3b01dceab94366635edf69e69de62ca9e701d6bf302dbb056019fbd6"} Mar 10 19:04:52 crc kubenswrapper[4861]: I0310 19:04:52.756514 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-bvtm5" event={"ID":"f8951eba-63cf-4bd6-a2d6-6829c198ac80","Type":"ContainerStarted","Data":"0e953483734f559ae3c5cb91ccbfb7dc673f01d1004824d423c5681a77034414"} Mar 10 19:04:52 crc kubenswrapper[4861]: I0310 19:04:52.756798 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-bvtm5" Mar 10 19:04:52 crc kubenswrapper[4861]: I0310 19:04:52.777375 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-bvtm5" podStartSLOduration=3.777349616 podStartE2EDuration="3.777349616s" podCreationTimestamp="2026-03-10 19:04:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-10 19:04:52.776042738 +0000 UTC m=+1036.539478708" watchObservedRunningTime="2026-03-10 19:04:52.777349616 +0000 UTC m=+1036.540785616" Mar 10 19:04:57 crc kubenswrapper[4861]: I0310 19:04:57.823686 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7f989f654f-mbb6b" 
event={"ID":"b7c09d64-564e-4dad-91c3-ecaf75f1a6a4","Type":"ContainerStarted","Data":"3e56571ccc267c329ff17c68a64a9681d1f738b92ecdfcf5ad9a00e4fae939a8"} Mar 10 19:04:57 crc kubenswrapper[4861]: I0310 19:04:57.824486 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-7f989f654f-mbb6b" Mar 10 19:04:57 crc kubenswrapper[4861]: I0310 19:04:57.825980 4861 generic.go:334] "Generic (PLEG): container finished" podID="d76161e5-e488-4365-976f-5487ba4fa265" containerID="b15f520c71df16d9a3306737a77ae22cd12f76bf1aa9895e7511c7d7d7abe0a6" exitCode=0 Mar 10 19:04:57 crc kubenswrapper[4861]: I0310 19:04:57.826023 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-mxnp2" event={"ID":"d76161e5-e488-4365-976f-5487ba4fa265","Type":"ContainerDied","Data":"b15f520c71df16d9a3306737a77ae22cd12f76bf1aa9895e7511c7d7d7abe0a6"} Mar 10 19:04:57 crc kubenswrapper[4861]: I0310 19:04:57.843355 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-7f989f654f-mbb6b" podStartSLOduration=2.412112388 podStartE2EDuration="8.843332044s" podCreationTimestamp="2026-03-10 19:04:49 +0000 UTC" firstStartedPulling="2026-03-10 19:04:50.675555636 +0000 UTC m=+1034.438991596" lastFinishedPulling="2026-03-10 19:04:57.106775272 +0000 UTC m=+1040.870211252" observedRunningTime="2026-03-10 19:04:57.840528272 +0000 UTC m=+1041.603964292" watchObservedRunningTime="2026-03-10 19:04:57.843332044 +0000 UTC m=+1041.606768034" Mar 10 19:04:58 crc kubenswrapper[4861]: I0310 19:04:58.837224 4861 generic.go:334] "Generic (PLEG): container finished" podID="d76161e5-e488-4365-976f-5487ba4fa265" containerID="6754b0161b26a3c30f599ce0c0753067a344d954b90ab7fdbf2cb50ff12a817b" exitCode=0 Mar 10 19:04:58 crc kubenswrapper[4861]: I0310 19:04:58.838879 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-mxnp2" 
event={"ID":"d76161e5-e488-4365-976f-5487ba4fa265","Type":"ContainerDied","Data":"6754b0161b26a3c30f599ce0c0753067a344d954b90ab7fdbf2cb50ff12a817b"} Mar 10 19:04:59 crc kubenswrapper[4861]: I0310 19:04:59.845237 4861 generic.go:334] "Generic (PLEG): container finished" podID="d76161e5-e488-4365-976f-5487ba4fa265" containerID="9de43f5fc172155d13c28639e96f118c44e37887096105cdc0de070b2a2ee10f" exitCode=0 Mar 10 19:04:59 crc kubenswrapper[4861]: I0310 19:04:59.845317 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-mxnp2" event={"ID":"d76161e5-e488-4365-976f-5487ba4fa265","Type":"ContainerDied","Data":"9de43f5fc172155d13c28639e96f118c44e37887096105cdc0de070b2a2ee10f"} Mar 10 19:05:00 crc kubenswrapper[4861]: I0310 19:05:00.856490 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-mxnp2" event={"ID":"d76161e5-e488-4365-976f-5487ba4fa265","Type":"ContainerStarted","Data":"691b3aa0ce5acc82d91b86c37a2d2b73c548a6c2c37b790903800377447dd297"} Mar 10 19:05:00 crc kubenswrapper[4861]: I0310 19:05:00.856892 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-mxnp2" event={"ID":"d76161e5-e488-4365-976f-5487ba4fa265","Type":"ContainerStarted","Data":"a1cb21e65122814eb3909c9605c1204025c521f7d2d431a7de071e943357c0fe"} Mar 10 19:05:00 crc kubenswrapper[4861]: I0310 19:05:00.856908 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-mxnp2" event={"ID":"d76161e5-e488-4365-976f-5487ba4fa265","Type":"ContainerStarted","Data":"97f35d1e47946d8088b593a814c89d565c32bf3502ba29ef7be8f3fa7dd0621d"} Mar 10 19:05:00 crc kubenswrapper[4861]: I0310 19:05:00.856919 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-mxnp2" event={"ID":"d76161e5-e488-4365-976f-5487ba4fa265","Type":"ContainerStarted","Data":"59002b3ad6f94ab26553bbbfa8f3478ba57b7bb4919221df55d4da19bfdcfd99"} Mar 10 19:05:00 crc kubenswrapper[4861]: I0310 19:05:00.856929 4861 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-mxnp2" event={"ID":"d76161e5-e488-4365-976f-5487ba4fa265","Type":"ContainerStarted","Data":"2238df06c5515962ae09cd6063692a7da04877c2bea9252f80d24c8d90e8a50c"} Mar 10 19:05:01 crc kubenswrapper[4861]: I0310 19:05:01.355756 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-bvtm5" Mar 10 19:05:01 crc kubenswrapper[4861]: I0310 19:05:01.872688 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-mxnp2" event={"ID":"d76161e5-e488-4365-976f-5487ba4fa265","Type":"ContainerStarted","Data":"87cce6c9f076cca26810a659b0cc64d32e1b06c5d8a787078fe1f02ccd8405d8"} Mar 10 19:05:01 crc kubenswrapper[4861]: I0310 19:05:01.873488 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-mxnp2" Mar 10 19:05:01 crc kubenswrapper[4861]: I0310 19:05:01.913383 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-mxnp2" podStartSLOduration=5.776606077 podStartE2EDuration="12.913360012s" podCreationTimestamp="2026-03-10 19:04:49 +0000 UTC" firstStartedPulling="2026-03-10 19:04:49.976011683 +0000 UTC m=+1033.739447673" lastFinishedPulling="2026-03-10 19:04:57.112765638 +0000 UTC m=+1040.876201608" observedRunningTime="2026-03-10 19:05:01.910994713 +0000 UTC m=+1045.674430723" watchObservedRunningTime="2026-03-10 19:05:01.913360012 +0000 UTC m=+1045.676795982" Mar 10 19:05:02 crc kubenswrapper[4861]: I0310 19:05:02.989679 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5sbjzt"] Mar 10 19:05:02 crc kubenswrapper[4861]: I0310 19:05:02.991041 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5sbjzt" Mar 10 19:05:02 crc kubenswrapper[4861]: I0310 19:05:02.993976 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Mar 10 19:05:03 crc kubenswrapper[4861]: I0310 19:05:03.006139 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5sbjzt"] Mar 10 19:05:03 crc kubenswrapper[4861]: I0310 19:05:03.070533 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/1d40514c-e06d-4c94-a8fb-17a30c1755a8-util\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5sbjzt\" (UID: \"1d40514c-e06d-4c94-a8fb-17a30c1755a8\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5sbjzt" Mar 10 19:05:03 crc kubenswrapper[4861]: I0310 19:05:03.070608 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/1d40514c-e06d-4c94-a8fb-17a30c1755a8-bundle\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5sbjzt\" (UID: \"1d40514c-e06d-4c94-a8fb-17a30c1755a8\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5sbjzt" Mar 10 19:05:03 crc kubenswrapper[4861]: I0310 19:05:03.070669 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t7kgs\" (UniqueName: \"kubernetes.io/projected/1d40514c-e06d-4c94-a8fb-17a30c1755a8-kube-api-access-t7kgs\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5sbjzt\" (UID: \"1d40514c-e06d-4c94-a8fb-17a30c1755a8\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5sbjzt" Mar 10 19:05:03 crc kubenswrapper[4861]: 
I0310 19:05:03.172444 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/1d40514c-e06d-4c94-a8fb-17a30c1755a8-util\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5sbjzt\" (UID: \"1d40514c-e06d-4c94-a8fb-17a30c1755a8\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5sbjzt" Mar 10 19:05:03 crc kubenswrapper[4861]: I0310 19:05:03.172503 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/1d40514c-e06d-4c94-a8fb-17a30c1755a8-bundle\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5sbjzt\" (UID: \"1d40514c-e06d-4c94-a8fb-17a30c1755a8\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5sbjzt" Mar 10 19:05:03 crc kubenswrapper[4861]: I0310 19:05:03.172551 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t7kgs\" (UniqueName: \"kubernetes.io/projected/1d40514c-e06d-4c94-a8fb-17a30c1755a8-kube-api-access-t7kgs\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5sbjzt\" (UID: \"1d40514c-e06d-4c94-a8fb-17a30c1755a8\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5sbjzt" Mar 10 19:05:03 crc kubenswrapper[4861]: I0310 19:05:03.173362 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/1d40514c-e06d-4c94-a8fb-17a30c1755a8-util\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5sbjzt\" (UID: \"1d40514c-e06d-4c94-a8fb-17a30c1755a8\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5sbjzt" Mar 10 19:05:03 crc kubenswrapper[4861]: I0310 19:05:03.173512 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/1d40514c-e06d-4c94-a8fb-17a30c1755a8-bundle\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5sbjzt\" (UID: \"1d40514c-e06d-4c94-a8fb-17a30c1755a8\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5sbjzt" Mar 10 19:05:03 crc kubenswrapper[4861]: I0310 19:05:03.200294 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t7kgs\" (UniqueName: \"kubernetes.io/projected/1d40514c-e06d-4c94-a8fb-17a30c1755a8-kube-api-access-t7kgs\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5sbjzt\" (UID: \"1d40514c-e06d-4c94-a8fb-17a30c1755a8\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5sbjzt" Mar 10 19:05:03 crc kubenswrapper[4861]: I0310 19:05:03.327645 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5sbjzt" Mar 10 19:05:03 crc kubenswrapper[4861]: I0310 19:05:03.764034 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5sbjzt"] Mar 10 19:05:03 crc kubenswrapper[4861]: I0310 19:05:03.891808 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5sbjzt" event={"ID":"1d40514c-e06d-4c94-a8fb-17a30c1755a8","Type":"ContainerStarted","Data":"4c07807a8e862750935d99a62a2159dfc96bc259998d5b910ada68a1dec8df3a"} Mar 10 19:05:04 crc kubenswrapper[4861]: I0310 19:05:04.791606 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-mxnp2" Mar 10 19:05:04 crc kubenswrapper[4861]: I0310 19:05:04.851925 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-mxnp2" Mar 10 19:05:04 crc kubenswrapper[4861]: I0310 19:05:04.901009 4861 generic.go:334] 
"Generic (PLEG): container finished" podID="1d40514c-e06d-4c94-a8fb-17a30c1755a8" containerID="ec59b8df39f3f3b3f9f094c7295bd44ef6a874889cffdf702c70602129992e86" exitCode=0 Mar 10 19:05:04 crc kubenswrapper[4861]: I0310 19:05:04.901166 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5sbjzt" event={"ID":"1d40514c-e06d-4c94-a8fb-17a30c1755a8","Type":"ContainerDied","Data":"ec59b8df39f3f3b3f9f094c7295bd44ef6a874889cffdf702c70602129992e86"} Mar 10 19:05:08 crc kubenswrapper[4861]: I0310 19:05:08.944765 4861 generic.go:334] "Generic (PLEG): container finished" podID="1d40514c-e06d-4c94-a8fb-17a30c1755a8" containerID="60b240dcd903f1f2126e38f2a7e24212fb5cba669acab77c3e5c442ee633344c" exitCode=0 Mar 10 19:05:08 crc kubenswrapper[4861]: I0310 19:05:08.944906 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5sbjzt" event={"ID":"1d40514c-e06d-4c94-a8fb-17a30c1755a8","Type":"ContainerDied","Data":"60b240dcd903f1f2126e38f2a7e24212fb5cba669acab77c3e5c442ee633344c"} Mar 10 19:05:09 crc kubenswrapper[4861]: I0310 19:05:09.885618 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-86ddb6bd46-zfmmn" Mar 10 19:05:09 crc kubenswrapper[4861]: I0310 19:05:09.966980 4861 generic.go:334] "Generic (PLEG): container finished" podID="1d40514c-e06d-4c94-a8fb-17a30c1755a8" containerID="f9f1b68b368fdb96acec2a40fe7ababe8f62889ef7350270179da3cfcf21a8da" exitCode=0 Mar 10 19:05:09 crc kubenswrapper[4861]: I0310 19:05:09.967015 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5sbjzt" event={"ID":"1d40514c-e06d-4c94-a8fb-17a30c1755a8","Type":"ContainerDied","Data":"f9f1b68b368fdb96acec2a40fe7ababe8f62889ef7350270179da3cfcf21a8da"} Mar 10 19:05:10 crc kubenswrapper[4861]: I0310 
19:05:10.407406 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-7f989f654f-mbb6b" Mar 10 19:05:11 crc kubenswrapper[4861]: I0310 19:05:11.438006 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5sbjzt" Mar 10 19:05:11 crc kubenswrapper[4861]: I0310 19:05:11.495965 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/1d40514c-e06d-4c94-a8fb-17a30c1755a8-util\") pod \"1d40514c-e06d-4c94-a8fb-17a30c1755a8\" (UID: \"1d40514c-e06d-4c94-a8fb-17a30c1755a8\") " Mar 10 19:05:11 crc kubenswrapper[4861]: I0310 19:05:11.496033 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t7kgs\" (UniqueName: \"kubernetes.io/projected/1d40514c-e06d-4c94-a8fb-17a30c1755a8-kube-api-access-t7kgs\") pod \"1d40514c-e06d-4c94-a8fb-17a30c1755a8\" (UID: \"1d40514c-e06d-4c94-a8fb-17a30c1755a8\") " Mar 10 19:05:11 crc kubenswrapper[4861]: I0310 19:05:11.503969 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d40514c-e06d-4c94-a8fb-17a30c1755a8-kube-api-access-t7kgs" (OuterVolumeSpecName: "kube-api-access-t7kgs") pod "1d40514c-e06d-4c94-a8fb-17a30c1755a8" (UID: "1d40514c-e06d-4c94-a8fb-17a30c1755a8"). InnerVolumeSpecName "kube-api-access-t7kgs". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 10 19:05:11 crc kubenswrapper[4861]: I0310 19:05:11.505390 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d40514c-e06d-4c94-a8fb-17a30c1755a8-util" (OuterVolumeSpecName: "util") pod "1d40514c-e06d-4c94-a8fb-17a30c1755a8" (UID: "1d40514c-e06d-4c94-a8fb-17a30c1755a8"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 10 19:05:11 crc kubenswrapper[4861]: I0310 19:05:11.597226 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/1d40514c-e06d-4c94-a8fb-17a30c1755a8-bundle\") pod \"1d40514c-e06d-4c94-a8fb-17a30c1755a8\" (UID: \"1d40514c-e06d-4c94-a8fb-17a30c1755a8\") " Mar 10 19:05:11 crc kubenswrapper[4861]: I0310 19:05:11.597483 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t7kgs\" (UniqueName: \"kubernetes.io/projected/1d40514c-e06d-4c94-a8fb-17a30c1755a8-kube-api-access-t7kgs\") on node \"crc\" DevicePath \"\"" Mar 10 19:05:11 crc kubenswrapper[4861]: I0310 19:05:11.597499 4861 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/1d40514c-e06d-4c94-a8fb-17a30c1755a8-util\") on node \"crc\" DevicePath \"\"" Mar 10 19:05:11 crc kubenswrapper[4861]: I0310 19:05:11.598361 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d40514c-e06d-4c94-a8fb-17a30c1755a8-bundle" (OuterVolumeSpecName: "bundle") pod "1d40514c-e06d-4c94-a8fb-17a30c1755a8" (UID: "1d40514c-e06d-4c94-a8fb-17a30c1755a8"). InnerVolumeSpecName "bundle". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 10 19:05:11 crc kubenswrapper[4861]: I0310 19:05:11.699183 4861 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/1d40514c-e06d-4c94-a8fb-17a30c1755a8-bundle\") on node \"crc\" DevicePath \"\"" Mar 10 19:05:12 crc kubenswrapper[4861]: I0310 19:05:12.130843 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5sbjzt" event={"ID":"1d40514c-e06d-4c94-a8fb-17a30c1755a8","Type":"ContainerDied","Data":"4c07807a8e862750935d99a62a2159dfc96bc259998d5b910ada68a1dec8df3a"} Mar 10 19:05:12 crc kubenswrapper[4861]: I0310 19:05:12.130907 4861 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4c07807a8e862750935d99a62a2159dfc96bc259998d5b910ada68a1dec8df3a" Mar 10 19:05:12 crc kubenswrapper[4861]: I0310 19:05:12.130984 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5sbjzt" Mar 10 19:05:15 crc kubenswrapper[4861]: I0310 19:05:15.375071 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-k7ht8"] Mar 10 19:05:15 crc kubenswrapper[4861]: E0310 19:05:15.375648 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1d40514c-e06d-4c94-a8fb-17a30c1755a8" containerName="util" Mar 10 19:05:15 crc kubenswrapper[4861]: I0310 19:05:15.375658 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="1d40514c-e06d-4c94-a8fb-17a30c1755a8" containerName="util" Mar 10 19:05:15 crc kubenswrapper[4861]: E0310 19:05:15.375668 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1d40514c-e06d-4c94-a8fb-17a30c1755a8" containerName="extract" Mar 10 19:05:15 crc kubenswrapper[4861]: I0310 19:05:15.375674 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="1d40514c-e06d-4c94-a8fb-17a30c1755a8" containerName="extract" Mar 10 19:05:15 crc kubenswrapper[4861]: E0310 19:05:15.375681 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1d40514c-e06d-4c94-a8fb-17a30c1755a8" containerName="pull" Mar 10 19:05:15 crc kubenswrapper[4861]: I0310 19:05:15.375687 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="1d40514c-e06d-4c94-a8fb-17a30c1755a8" containerName="pull" Mar 10 19:05:15 crc kubenswrapper[4861]: I0310 19:05:15.375889 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="1d40514c-e06d-4c94-a8fb-17a30c1755a8" containerName="extract" Mar 10 19:05:15 crc kubenswrapper[4861]: I0310 19:05:15.376273 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-k7ht8" Mar 10 19:05:15 crc kubenswrapper[4861]: I0310 19:05:15.388356 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager-operator"/"openshift-service-ca.crt" Mar 10 19:05:15 crc kubenswrapper[4861]: I0310 19:05:15.390902 4861 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager-operator"/"cert-manager-operator-controller-manager-dockercfg-wdx9q" Mar 10 19:05:15 crc kubenswrapper[4861]: I0310 19:05:15.390932 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager-operator"/"kube-root-ca.crt" Mar 10 19:05:15 crc kubenswrapper[4861]: I0310 19:05:15.408914 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-k7ht8"] Mar 10 19:05:15 crc kubenswrapper[4861]: I0310 19:05:15.555114 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/5c2e37e1-24af-4300-bb19-1a6aa63198e4-tmp\") pod \"cert-manager-operator-controller-manager-66c8bdd694-k7ht8\" (UID: \"5c2e37e1-24af-4300-bb19-1a6aa63198e4\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-k7ht8" Mar 10 19:05:15 crc kubenswrapper[4861]: I0310 19:05:15.555172 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fmh94\" (UniqueName: \"kubernetes.io/projected/5c2e37e1-24af-4300-bb19-1a6aa63198e4-kube-api-access-fmh94\") pod \"cert-manager-operator-controller-manager-66c8bdd694-k7ht8\" (UID: \"5c2e37e1-24af-4300-bb19-1a6aa63198e4\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-k7ht8" Mar 10 19:05:15 crc kubenswrapper[4861]: I0310 19:05:15.656185 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmp\" 
(UniqueName: \"kubernetes.io/empty-dir/5c2e37e1-24af-4300-bb19-1a6aa63198e4-tmp\") pod \"cert-manager-operator-controller-manager-66c8bdd694-k7ht8\" (UID: \"5c2e37e1-24af-4300-bb19-1a6aa63198e4\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-k7ht8" Mar 10 19:05:15 crc kubenswrapper[4861]: I0310 19:05:15.656256 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fmh94\" (UniqueName: \"kubernetes.io/projected/5c2e37e1-24af-4300-bb19-1a6aa63198e4-kube-api-access-fmh94\") pod \"cert-manager-operator-controller-manager-66c8bdd694-k7ht8\" (UID: \"5c2e37e1-24af-4300-bb19-1a6aa63198e4\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-k7ht8" Mar 10 19:05:15 crc kubenswrapper[4861]: I0310 19:05:15.656669 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/5c2e37e1-24af-4300-bb19-1a6aa63198e4-tmp\") pod \"cert-manager-operator-controller-manager-66c8bdd694-k7ht8\" (UID: \"5c2e37e1-24af-4300-bb19-1a6aa63198e4\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-k7ht8" Mar 10 19:05:15 crc kubenswrapper[4861]: I0310 19:05:15.675591 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fmh94\" (UniqueName: \"kubernetes.io/projected/5c2e37e1-24af-4300-bb19-1a6aa63198e4-kube-api-access-fmh94\") pod \"cert-manager-operator-controller-manager-66c8bdd694-k7ht8\" (UID: \"5c2e37e1-24af-4300-bb19-1a6aa63198e4\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-k7ht8" Mar 10 19:05:15 crc kubenswrapper[4861]: I0310 19:05:15.690206 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-k7ht8" Mar 10 19:05:16 crc kubenswrapper[4861]: I0310 19:05:16.133678 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-k7ht8"] Mar 10 19:05:16 crc kubenswrapper[4861]: W0310 19:05:16.142463 4861 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5c2e37e1_24af_4300_bb19_1a6aa63198e4.slice/crio-e1f2e413b7959b0c2c4da91c097905905fa0675c506471d351897f04ff8d48f9 WatchSource:0}: Error finding container e1f2e413b7959b0c2c4da91c097905905fa0675c506471d351897f04ff8d48f9: Status 404 returned error can't find the container with id e1f2e413b7959b0c2c4da91c097905905fa0675c506471d351897f04ff8d48f9 Mar 10 19:05:16 crc kubenswrapper[4861]: I0310 19:05:16.159501 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-k7ht8" event={"ID":"5c2e37e1-24af-4300-bb19-1a6aa63198e4","Type":"ContainerStarted","Data":"e1f2e413b7959b0c2c4da91c097905905fa0675c506471d351897f04ff8d48f9"} Mar 10 19:05:19 crc kubenswrapper[4861]: I0310 19:05:19.797543 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-mxnp2" Mar 10 19:05:20 crc kubenswrapper[4861]: I0310 19:05:20.186506 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-k7ht8" event={"ID":"5c2e37e1-24af-4300-bb19-1a6aa63198e4","Type":"ContainerStarted","Data":"9e4694bc044509fc0e70ce384837dd569e145794ba4ce5d830f1ed0b80b698dc"} Mar 10 19:05:20 crc kubenswrapper[4861]: I0310 19:05:20.210770 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-k7ht8" podStartSLOduration=1.7202288719999999 
podStartE2EDuration="5.210690814s" podCreationTimestamp="2026-03-10 19:05:15 +0000 UTC" firstStartedPulling="2026-03-10 19:05:16.147007729 +0000 UTC m=+1059.910443699" lastFinishedPulling="2026-03-10 19:05:19.637469641 +0000 UTC m=+1063.400905641" observedRunningTime="2026-03-10 19:05:20.205861992 +0000 UTC m=+1063.969298012" watchObservedRunningTime="2026-03-10 19:05:20.210690814 +0000 UTC m=+1063.974126804" Mar 10 19:05:23 crc kubenswrapper[4861]: I0310 19:05:23.627056 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-6888856db4-7lqrv"] Mar 10 19:05:23 crc kubenswrapper[4861]: I0310 19:05:23.628984 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-6888856db4-7lqrv" Mar 10 19:05:23 crc kubenswrapper[4861]: I0310 19:05:23.631246 4861 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-vhg2j" Mar 10 19:05:23 crc kubenswrapper[4861]: I0310 19:05:23.632446 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt" Mar 10 19:05:23 crc kubenswrapper[4861]: I0310 19:05:23.632956 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt" Mar 10 19:05:23 crc kubenswrapper[4861]: I0310 19:05:23.643163 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-6888856db4-7lqrv"] Mar 10 19:05:23 crc kubenswrapper[4861]: I0310 19:05:23.793396 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/5a8cb2e6-0b14-4b01-9515-43bba8e79f1b-bound-sa-token\") pod \"cert-manager-webhook-6888856db4-7lqrv\" (UID: \"5a8cb2e6-0b14-4b01-9515-43bba8e79f1b\") " pod="cert-manager/cert-manager-webhook-6888856db4-7lqrv" Mar 10 19:05:23 crc kubenswrapper[4861]: I0310 19:05:23.793489 4861 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bnjr7\" (UniqueName: \"kubernetes.io/projected/5a8cb2e6-0b14-4b01-9515-43bba8e79f1b-kube-api-access-bnjr7\") pod \"cert-manager-webhook-6888856db4-7lqrv\" (UID: \"5a8cb2e6-0b14-4b01-9515-43bba8e79f1b\") " pod="cert-manager/cert-manager-webhook-6888856db4-7lqrv" Mar 10 19:05:23 crc kubenswrapper[4861]: I0310 19:05:23.895314 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bnjr7\" (UniqueName: \"kubernetes.io/projected/5a8cb2e6-0b14-4b01-9515-43bba8e79f1b-kube-api-access-bnjr7\") pod \"cert-manager-webhook-6888856db4-7lqrv\" (UID: \"5a8cb2e6-0b14-4b01-9515-43bba8e79f1b\") " pod="cert-manager/cert-manager-webhook-6888856db4-7lqrv" Mar 10 19:05:23 crc kubenswrapper[4861]: I0310 19:05:23.895473 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/5a8cb2e6-0b14-4b01-9515-43bba8e79f1b-bound-sa-token\") pod \"cert-manager-webhook-6888856db4-7lqrv\" (UID: \"5a8cb2e6-0b14-4b01-9515-43bba8e79f1b\") " pod="cert-manager/cert-manager-webhook-6888856db4-7lqrv" Mar 10 19:05:23 crc kubenswrapper[4861]: I0310 19:05:23.922974 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/5a8cb2e6-0b14-4b01-9515-43bba8e79f1b-bound-sa-token\") pod \"cert-manager-webhook-6888856db4-7lqrv\" (UID: \"5a8cb2e6-0b14-4b01-9515-43bba8e79f1b\") " pod="cert-manager/cert-manager-webhook-6888856db4-7lqrv" Mar 10 19:05:23 crc kubenswrapper[4861]: I0310 19:05:23.932759 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bnjr7\" (UniqueName: \"kubernetes.io/projected/5a8cb2e6-0b14-4b01-9515-43bba8e79f1b-kube-api-access-bnjr7\") pod \"cert-manager-webhook-6888856db4-7lqrv\" (UID: \"5a8cb2e6-0b14-4b01-9515-43bba8e79f1b\") " 
pod="cert-manager/cert-manager-webhook-6888856db4-7lqrv" Mar 10 19:05:23 crc kubenswrapper[4861]: I0310 19:05:23.992922 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-6888856db4-7lqrv" Mar 10 19:05:24 crc kubenswrapper[4861]: I0310 19:05:24.554406 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-6888856db4-7lqrv"] Mar 10 19:05:25 crc kubenswrapper[4861]: I0310 19:05:25.220667 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-6888856db4-7lqrv" event={"ID":"5a8cb2e6-0b14-4b01-9515-43bba8e79f1b","Type":"ContainerStarted","Data":"172e274f32bcdbf15bc4e2934a891ef3b88ee615eecaa6216e6ff284f558765b"} Mar 10 19:05:26 crc kubenswrapper[4861]: I0310 19:05:26.340616 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-pl7wf"] Mar 10 19:05:26 crc kubenswrapper[4861]: I0310 19:05:26.343509 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-pl7wf" Mar 10 19:05:26 crc kubenswrapper[4861]: I0310 19:05:26.363518 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-pl7wf"] Mar 10 19:05:26 crc kubenswrapper[4861]: I0310 19:05:26.542022 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6trdn\" (UniqueName: \"kubernetes.io/projected/eb01363e-426e-419f-8c22-af7ccc4aecd5-kube-api-access-6trdn\") pod \"community-operators-pl7wf\" (UID: \"eb01363e-426e-419f-8c22-af7ccc4aecd5\") " pod="openshift-marketplace/community-operators-pl7wf" Mar 10 19:05:26 crc kubenswrapper[4861]: I0310 19:05:26.542088 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eb01363e-426e-419f-8c22-af7ccc4aecd5-utilities\") pod \"community-operators-pl7wf\" (UID: \"eb01363e-426e-419f-8c22-af7ccc4aecd5\") " pod="openshift-marketplace/community-operators-pl7wf" Mar 10 19:05:26 crc kubenswrapper[4861]: I0310 19:05:26.542108 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eb01363e-426e-419f-8c22-af7ccc4aecd5-catalog-content\") pod \"community-operators-pl7wf\" (UID: \"eb01363e-426e-419f-8c22-af7ccc4aecd5\") " pod="openshift-marketplace/community-operators-pl7wf" Mar 10 19:05:26 crc kubenswrapper[4861]: I0310 19:05:26.607869 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-5545bd876-v4x8j"] Mar 10 19:05:26 crc kubenswrapper[4861]: I0310 19:05:26.608531 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-cainjector-5545bd876-v4x8j" Mar 10 19:05:26 crc kubenswrapper[4861]: I0310 19:05:26.610560 4861 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-5lb6b" Mar 10 19:05:26 crc kubenswrapper[4861]: I0310 19:05:26.620129 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-5545bd876-v4x8j"] Mar 10 19:05:26 crc kubenswrapper[4861]: I0310 19:05:26.643200 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/b7207043-ee01-49f4-b006-fa5a3a671508-bound-sa-token\") pod \"cert-manager-cainjector-5545bd876-v4x8j\" (UID: \"b7207043-ee01-49f4-b006-fa5a3a671508\") " pod="cert-manager/cert-manager-cainjector-5545bd876-v4x8j" Mar 10 19:05:26 crc kubenswrapper[4861]: I0310 19:05:26.643248 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7h7jh\" (UniqueName: \"kubernetes.io/projected/b7207043-ee01-49f4-b006-fa5a3a671508-kube-api-access-7h7jh\") pod \"cert-manager-cainjector-5545bd876-v4x8j\" (UID: \"b7207043-ee01-49f4-b006-fa5a3a671508\") " pod="cert-manager/cert-manager-cainjector-5545bd876-v4x8j" Mar 10 19:05:26 crc kubenswrapper[4861]: I0310 19:05:26.643291 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6trdn\" (UniqueName: \"kubernetes.io/projected/eb01363e-426e-419f-8c22-af7ccc4aecd5-kube-api-access-6trdn\") pod \"community-operators-pl7wf\" (UID: \"eb01363e-426e-419f-8c22-af7ccc4aecd5\") " pod="openshift-marketplace/community-operators-pl7wf" Mar 10 19:05:26 crc kubenswrapper[4861]: I0310 19:05:26.643327 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eb01363e-426e-419f-8c22-af7ccc4aecd5-utilities\") pod 
\"community-operators-pl7wf\" (UID: \"eb01363e-426e-419f-8c22-af7ccc4aecd5\") " pod="openshift-marketplace/community-operators-pl7wf" Mar 10 19:05:26 crc kubenswrapper[4861]: I0310 19:05:26.643345 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eb01363e-426e-419f-8c22-af7ccc4aecd5-catalog-content\") pod \"community-operators-pl7wf\" (UID: \"eb01363e-426e-419f-8c22-af7ccc4aecd5\") " pod="openshift-marketplace/community-operators-pl7wf" Mar 10 19:05:26 crc kubenswrapper[4861]: I0310 19:05:26.643917 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eb01363e-426e-419f-8c22-af7ccc4aecd5-utilities\") pod \"community-operators-pl7wf\" (UID: \"eb01363e-426e-419f-8c22-af7ccc4aecd5\") " pod="openshift-marketplace/community-operators-pl7wf" Mar 10 19:05:26 crc kubenswrapper[4861]: I0310 19:05:26.644098 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eb01363e-426e-419f-8c22-af7ccc4aecd5-catalog-content\") pod \"community-operators-pl7wf\" (UID: \"eb01363e-426e-419f-8c22-af7ccc4aecd5\") " pod="openshift-marketplace/community-operators-pl7wf" Mar 10 19:05:26 crc kubenswrapper[4861]: I0310 19:05:26.659619 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6trdn\" (UniqueName: \"kubernetes.io/projected/eb01363e-426e-419f-8c22-af7ccc4aecd5-kube-api-access-6trdn\") pod \"community-operators-pl7wf\" (UID: \"eb01363e-426e-419f-8c22-af7ccc4aecd5\") " pod="openshift-marketplace/community-operators-pl7wf" Mar 10 19:05:26 crc kubenswrapper[4861]: I0310 19:05:26.663130 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-pl7wf" Mar 10 19:05:26 crc kubenswrapper[4861]: I0310 19:05:26.744338 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/b7207043-ee01-49f4-b006-fa5a3a671508-bound-sa-token\") pod \"cert-manager-cainjector-5545bd876-v4x8j\" (UID: \"b7207043-ee01-49f4-b006-fa5a3a671508\") " pod="cert-manager/cert-manager-cainjector-5545bd876-v4x8j" Mar 10 19:05:26 crc kubenswrapper[4861]: I0310 19:05:26.744389 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7h7jh\" (UniqueName: \"kubernetes.io/projected/b7207043-ee01-49f4-b006-fa5a3a671508-kube-api-access-7h7jh\") pod \"cert-manager-cainjector-5545bd876-v4x8j\" (UID: \"b7207043-ee01-49f4-b006-fa5a3a671508\") " pod="cert-manager/cert-manager-cainjector-5545bd876-v4x8j" Mar 10 19:05:26 crc kubenswrapper[4861]: I0310 19:05:26.784099 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7h7jh\" (UniqueName: \"kubernetes.io/projected/b7207043-ee01-49f4-b006-fa5a3a671508-kube-api-access-7h7jh\") pod \"cert-manager-cainjector-5545bd876-v4x8j\" (UID: \"b7207043-ee01-49f4-b006-fa5a3a671508\") " pod="cert-manager/cert-manager-cainjector-5545bd876-v4x8j" Mar 10 19:05:26 crc kubenswrapper[4861]: I0310 19:05:26.786126 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/b7207043-ee01-49f4-b006-fa5a3a671508-bound-sa-token\") pod \"cert-manager-cainjector-5545bd876-v4x8j\" (UID: \"b7207043-ee01-49f4-b006-fa5a3a671508\") " pod="cert-manager/cert-manager-cainjector-5545bd876-v4x8j" Mar 10 19:05:26 crc kubenswrapper[4861]: I0310 19:05:26.929183 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-cainjector-5545bd876-v4x8j" Mar 10 19:05:27 crc kubenswrapper[4861]: I0310 19:05:27.164495 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-pl7wf"] Mar 10 19:05:27 crc kubenswrapper[4861]: W0310 19:05:27.183803 4861 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podeb01363e_426e_419f_8c22_af7ccc4aecd5.slice/crio-c2b8743156264cd02db5cf289b9621fef8793b8e9f226607eeb8b215817f39f8 WatchSource:0}: Error finding container c2b8743156264cd02db5cf289b9621fef8793b8e9f226607eeb8b215817f39f8: Status 404 returned error can't find the container with id c2b8743156264cd02db5cf289b9621fef8793b8e9f226607eeb8b215817f39f8 Mar 10 19:05:27 crc kubenswrapper[4861]: I0310 19:05:27.232235 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pl7wf" event={"ID":"eb01363e-426e-419f-8c22-af7ccc4aecd5","Type":"ContainerStarted","Data":"c2b8743156264cd02db5cf289b9621fef8793b8e9f226607eeb8b215817f39f8"} Mar 10 19:05:27 crc kubenswrapper[4861]: I0310 19:05:27.344814 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-5545bd876-v4x8j"] Mar 10 19:05:27 crc kubenswrapper[4861]: W0310 19:05:27.416257 4861 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb7207043_ee01_49f4_b006_fa5a3a671508.slice/crio-79dcaf3f36c0694d0e774664db9d0222ead7713f7174469c5b16d65529f23bbc WatchSource:0}: Error finding container 79dcaf3f36c0694d0e774664db9d0222ead7713f7174469c5b16d65529f23bbc: Status 404 returned error can't find the container with id 79dcaf3f36c0694d0e774664db9d0222ead7713f7174469c5b16d65529f23bbc Mar 10 19:05:28 crc kubenswrapper[4861]: I0310 19:05:28.241435 4861 generic.go:334] "Generic (PLEG): container finished" 
podID="eb01363e-426e-419f-8c22-af7ccc4aecd5" containerID="376e6c05c90a36c12049142e0bc4edc21b395f75571325640d587dcbfad8f7de" exitCode=0 Mar 10 19:05:28 crc kubenswrapper[4861]: I0310 19:05:28.241622 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pl7wf" event={"ID":"eb01363e-426e-419f-8c22-af7ccc4aecd5","Type":"ContainerDied","Data":"376e6c05c90a36c12049142e0bc4edc21b395f75571325640d587dcbfad8f7de"} Mar 10 19:05:28 crc kubenswrapper[4861]: I0310 19:05:28.243211 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-5545bd876-v4x8j" event={"ID":"b7207043-ee01-49f4-b006-fa5a3a671508","Type":"ContainerStarted","Data":"79dcaf3f36c0694d0e774664db9d0222ead7713f7174469c5b16d65529f23bbc"} Mar 10 19:05:30 crc kubenswrapper[4861]: I0310 19:05:30.257575 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-5545bd876-v4x8j" event={"ID":"b7207043-ee01-49f4-b006-fa5a3a671508","Type":"ContainerStarted","Data":"f9f9e1743bf05eb4bb8699abe65347f74a4923a684802144f217b95e54131810"} Mar 10 19:05:30 crc kubenswrapper[4861]: I0310 19:05:30.259257 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-6888856db4-7lqrv" event={"ID":"5a8cb2e6-0b14-4b01-9515-43bba8e79f1b","Type":"ContainerStarted","Data":"7fbf9d2e7c803252298a092fa37e5b7c8351679efaf5182077187e36b53aefaa"} Mar 10 19:05:30 crc kubenswrapper[4861]: I0310 19:05:30.260251 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-6888856db4-7lqrv" Mar 10 19:05:30 crc kubenswrapper[4861]: I0310 19:05:30.278536 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-5545bd876-v4x8j" podStartSLOduration=1.983194771 podStartE2EDuration="4.27851914s" podCreationTimestamp="2026-03-10 19:05:26 +0000 UTC" firstStartedPulling="2026-03-10 19:05:27.41823546 +0000 
UTC m=+1071.181671430" lastFinishedPulling="2026-03-10 19:05:29.713559799 +0000 UTC m=+1073.476995799" observedRunningTime="2026-03-10 19:05:30.276260394 +0000 UTC m=+1074.039696384" watchObservedRunningTime="2026-03-10 19:05:30.27851914 +0000 UTC m=+1074.041955140" Mar 10 19:05:30 crc kubenswrapper[4861]: I0310 19:05:30.304930 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-6888856db4-7lqrv" podStartSLOduration=2.154851403 podStartE2EDuration="7.304899259s" podCreationTimestamp="2026-03-10 19:05:23 +0000 UTC" firstStartedPulling="2026-03-10 19:05:24.558520877 +0000 UTC m=+1068.321956847" lastFinishedPulling="2026-03-10 19:05:29.708568743 +0000 UTC m=+1073.472004703" observedRunningTime="2026-03-10 19:05:30.301644394 +0000 UTC m=+1074.065080394" watchObservedRunningTime="2026-03-10 19:05:30.304899259 +0000 UTC m=+1074.068335259" Mar 10 19:05:31 crc kubenswrapper[4861]: I0310 19:05:31.271216 4861 generic.go:334] "Generic (PLEG): container finished" podID="eb01363e-426e-419f-8c22-af7ccc4aecd5" containerID="739b27294383679661dfb1d4ed0f346c0a030e52d09d5c20943559e3ae2604e2" exitCode=0 Mar 10 19:05:31 crc kubenswrapper[4861]: I0310 19:05:31.271296 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pl7wf" event={"ID":"eb01363e-426e-419f-8c22-af7ccc4aecd5","Type":"ContainerDied","Data":"739b27294383679661dfb1d4ed0f346c0a030e52d09d5c20943559e3ae2604e2"} Mar 10 19:05:32 crc kubenswrapper[4861]: I0310 19:05:32.283692 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pl7wf" event={"ID":"eb01363e-426e-419f-8c22-af7ccc4aecd5","Type":"ContainerStarted","Data":"c119ee04d6943bc31137a1e0e5a3328439ce76f1e0711d68cbe4df12952047bc"} Mar 10 19:05:32 crc kubenswrapper[4861]: I0310 19:05:32.313565 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-pl7wf" 
podStartSLOduration=4.120213567 podStartE2EDuration="6.313540534s" podCreationTimestamp="2026-03-10 19:05:26 +0000 UTC" firstStartedPulling="2026-03-10 19:05:29.623458214 +0000 UTC m=+1073.386894184" lastFinishedPulling="2026-03-10 19:05:31.816785161 +0000 UTC m=+1075.580221151" observedRunningTime="2026-03-10 19:05:32.309636671 +0000 UTC m=+1076.073072671" watchObservedRunningTime="2026-03-10 19:05:32.313540534 +0000 UTC m=+1076.076976524" Mar 10 19:05:36 crc kubenswrapper[4861]: I0310 19:05:36.664182 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-pl7wf" Mar 10 19:05:36 crc kubenswrapper[4861]: I0310 19:05:36.665991 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-pl7wf" Mar 10 19:05:36 crc kubenswrapper[4861]: I0310 19:05:36.731671 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-pl7wf" Mar 10 19:05:37 crc kubenswrapper[4861]: I0310 19:05:37.395884 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-pl7wf" Mar 10 19:05:37 crc kubenswrapper[4861]: I0310 19:05:37.467989 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-pl7wf"] Mar 10 19:05:38 crc kubenswrapper[4861]: I0310 19:05:38.996965 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-6888856db4-7lqrv" Mar 10 19:05:39 crc kubenswrapper[4861]: I0310 19:05:39.334680 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-pl7wf" podUID="eb01363e-426e-419f-8c22-af7ccc4aecd5" containerName="registry-server" containerID="cri-o://c119ee04d6943bc31137a1e0e5a3328439ce76f1e0711d68cbe4df12952047bc" gracePeriod=2 Mar 10 19:05:39 crc kubenswrapper[4861]: I0310 
19:05:39.780318 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-pl7wf" Mar 10 19:05:39 crc kubenswrapper[4861]: I0310 19:05:39.872061 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eb01363e-426e-419f-8c22-af7ccc4aecd5-catalog-content\") pod \"eb01363e-426e-419f-8c22-af7ccc4aecd5\" (UID: \"eb01363e-426e-419f-8c22-af7ccc4aecd5\") " Mar 10 19:05:39 crc kubenswrapper[4861]: I0310 19:05:39.872155 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6trdn\" (UniqueName: \"kubernetes.io/projected/eb01363e-426e-419f-8c22-af7ccc4aecd5-kube-api-access-6trdn\") pod \"eb01363e-426e-419f-8c22-af7ccc4aecd5\" (UID: \"eb01363e-426e-419f-8c22-af7ccc4aecd5\") " Mar 10 19:05:39 crc kubenswrapper[4861]: I0310 19:05:39.872185 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eb01363e-426e-419f-8c22-af7ccc4aecd5-utilities\") pod \"eb01363e-426e-419f-8c22-af7ccc4aecd5\" (UID: \"eb01363e-426e-419f-8c22-af7ccc4aecd5\") " Mar 10 19:05:39 crc kubenswrapper[4861]: I0310 19:05:39.873742 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/eb01363e-426e-419f-8c22-af7ccc4aecd5-utilities" (OuterVolumeSpecName: "utilities") pod "eb01363e-426e-419f-8c22-af7ccc4aecd5" (UID: "eb01363e-426e-419f-8c22-af7ccc4aecd5"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 10 19:05:39 crc kubenswrapper[4861]: I0310 19:05:39.881297 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eb01363e-426e-419f-8c22-af7ccc4aecd5-kube-api-access-6trdn" (OuterVolumeSpecName: "kube-api-access-6trdn") pod "eb01363e-426e-419f-8c22-af7ccc4aecd5" (UID: "eb01363e-426e-419f-8c22-af7ccc4aecd5"). InnerVolumeSpecName "kube-api-access-6trdn". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 10 19:05:39 crc kubenswrapper[4861]: I0310 19:05:39.928389 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/eb01363e-426e-419f-8c22-af7ccc4aecd5-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "eb01363e-426e-419f-8c22-af7ccc4aecd5" (UID: "eb01363e-426e-419f-8c22-af7ccc4aecd5"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 10 19:05:39 crc kubenswrapper[4861]: I0310 19:05:39.974480 4861 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eb01363e-426e-419f-8c22-af7ccc4aecd5-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 10 19:05:39 crc kubenswrapper[4861]: I0310 19:05:39.974534 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6trdn\" (UniqueName: \"kubernetes.io/projected/eb01363e-426e-419f-8c22-af7ccc4aecd5-kube-api-access-6trdn\") on node \"crc\" DevicePath \"\"" Mar 10 19:05:39 crc kubenswrapper[4861]: I0310 19:05:39.974555 4861 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eb01363e-426e-419f-8c22-af7ccc4aecd5-utilities\") on node \"crc\" DevicePath \"\"" Mar 10 19:05:40 crc kubenswrapper[4861]: I0310 19:05:40.346420 4861 generic.go:334] "Generic (PLEG): container finished" podID="eb01363e-426e-419f-8c22-af7ccc4aecd5" 
containerID="c119ee04d6943bc31137a1e0e5a3328439ce76f1e0711d68cbe4df12952047bc" exitCode=0 Mar 10 19:05:40 crc kubenswrapper[4861]: I0310 19:05:40.346485 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pl7wf" event={"ID":"eb01363e-426e-419f-8c22-af7ccc4aecd5","Type":"ContainerDied","Data":"c119ee04d6943bc31137a1e0e5a3328439ce76f1e0711d68cbe4df12952047bc"} Mar 10 19:05:40 crc kubenswrapper[4861]: I0310 19:05:40.346534 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pl7wf" event={"ID":"eb01363e-426e-419f-8c22-af7ccc4aecd5","Type":"ContainerDied","Data":"c2b8743156264cd02db5cf289b9621fef8793b8e9f226607eeb8b215817f39f8"} Mar 10 19:05:40 crc kubenswrapper[4861]: I0310 19:05:40.346563 4861 scope.go:117] "RemoveContainer" containerID="c119ee04d6943bc31137a1e0e5a3328439ce76f1e0711d68cbe4df12952047bc" Mar 10 19:05:40 crc kubenswrapper[4861]: I0310 19:05:40.348198 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-pl7wf" Mar 10 19:05:40 crc kubenswrapper[4861]: I0310 19:05:40.374450 4861 scope.go:117] "RemoveContainer" containerID="739b27294383679661dfb1d4ed0f346c0a030e52d09d5c20943559e3ae2604e2" Mar 10 19:05:40 crc kubenswrapper[4861]: I0310 19:05:40.408382 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-pl7wf"] Mar 10 19:05:40 crc kubenswrapper[4861]: I0310 19:05:40.422371 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-pl7wf"] Mar 10 19:05:40 crc kubenswrapper[4861]: I0310 19:05:40.429042 4861 scope.go:117] "RemoveContainer" containerID="376e6c05c90a36c12049142e0bc4edc21b395f75571325640d587dcbfad8f7de" Mar 10 19:05:40 crc kubenswrapper[4861]: I0310 19:05:40.475052 4861 scope.go:117] "RemoveContainer" containerID="c119ee04d6943bc31137a1e0e5a3328439ce76f1e0711d68cbe4df12952047bc" Mar 10 19:05:40 crc kubenswrapper[4861]: E0310 19:05:40.475778 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c119ee04d6943bc31137a1e0e5a3328439ce76f1e0711d68cbe4df12952047bc\": container with ID starting with c119ee04d6943bc31137a1e0e5a3328439ce76f1e0711d68cbe4df12952047bc not found: ID does not exist" containerID="c119ee04d6943bc31137a1e0e5a3328439ce76f1e0711d68cbe4df12952047bc" Mar 10 19:05:40 crc kubenswrapper[4861]: I0310 19:05:40.475861 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c119ee04d6943bc31137a1e0e5a3328439ce76f1e0711d68cbe4df12952047bc"} err="failed to get container status \"c119ee04d6943bc31137a1e0e5a3328439ce76f1e0711d68cbe4df12952047bc\": rpc error: code = NotFound desc = could not find container \"c119ee04d6943bc31137a1e0e5a3328439ce76f1e0711d68cbe4df12952047bc\": container with ID starting with c119ee04d6943bc31137a1e0e5a3328439ce76f1e0711d68cbe4df12952047bc not 
found: ID does not exist" Mar 10 19:05:40 crc kubenswrapper[4861]: I0310 19:05:40.475929 4861 scope.go:117] "RemoveContainer" containerID="739b27294383679661dfb1d4ed0f346c0a030e52d09d5c20943559e3ae2604e2" Mar 10 19:05:40 crc kubenswrapper[4861]: E0310 19:05:40.476452 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"739b27294383679661dfb1d4ed0f346c0a030e52d09d5c20943559e3ae2604e2\": container with ID starting with 739b27294383679661dfb1d4ed0f346c0a030e52d09d5c20943559e3ae2604e2 not found: ID does not exist" containerID="739b27294383679661dfb1d4ed0f346c0a030e52d09d5c20943559e3ae2604e2" Mar 10 19:05:40 crc kubenswrapper[4861]: I0310 19:05:40.476514 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"739b27294383679661dfb1d4ed0f346c0a030e52d09d5c20943559e3ae2604e2"} err="failed to get container status \"739b27294383679661dfb1d4ed0f346c0a030e52d09d5c20943559e3ae2604e2\": rpc error: code = NotFound desc = could not find container \"739b27294383679661dfb1d4ed0f346c0a030e52d09d5c20943559e3ae2604e2\": container with ID starting with 739b27294383679661dfb1d4ed0f346c0a030e52d09d5c20943559e3ae2604e2 not found: ID does not exist" Mar 10 19:05:40 crc kubenswrapper[4861]: I0310 19:05:40.476556 4861 scope.go:117] "RemoveContainer" containerID="376e6c05c90a36c12049142e0bc4edc21b395f75571325640d587dcbfad8f7de" Mar 10 19:05:40 crc kubenswrapper[4861]: E0310 19:05:40.477042 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"376e6c05c90a36c12049142e0bc4edc21b395f75571325640d587dcbfad8f7de\": container with ID starting with 376e6c05c90a36c12049142e0bc4edc21b395f75571325640d587dcbfad8f7de not found: ID does not exist" containerID="376e6c05c90a36c12049142e0bc4edc21b395f75571325640d587dcbfad8f7de" Mar 10 19:05:40 crc kubenswrapper[4861]: I0310 19:05:40.477151 4861 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"376e6c05c90a36c12049142e0bc4edc21b395f75571325640d587dcbfad8f7de"} err="failed to get container status \"376e6c05c90a36c12049142e0bc4edc21b395f75571325640d587dcbfad8f7de\": rpc error: code = NotFound desc = could not find container \"376e6c05c90a36c12049142e0bc4edc21b395f75571325640d587dcbfad8f7de\": container with ID starting with 376e6c05c90a36c12049142e0bc4edc21b395f75571325640d587dcbfad8f7de not found: ID does not exist" Mar 10 19:05:40 crc kubenswrapper[4861]: I0310 19:05:40.971142 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eb01363e-426e-419f-8c22-af7ccc4aecd5" path="/var/lib/kubelet/pods/eb01363e-426e-419f-8c22-af7ccc4aecd5/volumes" Mar 10 19:05:42 crc kubenswrapper[4861]: I0310 19:05:42.550150 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-545d4d4674-hkkq9"] Mar 10 19:05:42 crc kubenswrapper[4861]: E0310 19:05:42.551031 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb01363e-426e-419f-8c22-af7ccc4aecd5" containerName="registry-server" Mar 10 19:05:42 crc kubenswrapper[4861]: I0310 19:05:42.551063 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb01363e-426e-419f-8c22-af7ccc4aecd5" containerName="registry-server" Mar 10 19:05:42 crc kubenswrapper[4861]: E0310 19:05:42.551094 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb01363e-426e-419f-8c22-af7ccc4aecd5" containerName="extract-content" Mar 10 19:05:42 crc kubenswrapper[4861]: I0310 19:05:42.551110 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb01363e-426e-419f-8c22-af7ccc4aecd5" containerName="extract-content" Mar 10 19:05:42 crc kubenswrapper[4861]: E0310 19:05:42.551131 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb01363e-426e-419f-8c22-af7ccc4aecd5" containerName="extract-utilities" Mar 10 19:05:42 crc kubenswrapper[4861]: I0310 
19:05:42.551146 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb01363e-426e-419f-8c22-af7ccc4aecd5" containerName="extract-utilities" Mar 10 19:05:42 crc kubenswrapper[4861]: I0310 19:05:42.551440 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="eb01363e-426e-419f-8c22-af7ccc4aecd5" containerName="registry-server" Mar 10 19:05:42 crc kubenswrapper[4861]: I0310 19:05:42.552404 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-545d4d4674-hkkq9" Mar 10 19:05:42 crc kubenswrapper[4861]: I0310 19:05:42.556171 4861 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-vhxb7" Mar 10 19:05:42 crc kubenswrapper[4861]: I0310 19:05:42.564286 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-545d4d4674-hkkq9"] Mar 10 19:05:42 crc kubenswrapper[4861]: I0310 19:05:42.618407 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5mlxt\" (UniqueName: \"kubernetes.io/projected/17cbe57a-85b5-4996-b7ad-c43119116d78-kube-api-access-5mlxt\") pod \"cert-manager-545d4d4674-hkkq9\" (UID: \"17cbe57a-85b5-4996-b7ad-c43119116d78\") " pod="cert-manager/cert-manager-545d4d4674-hkkq9" Mar 10 19:05:42 crc kubenswrapper[4861]: I0310 19:05:42.618607 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/17cbe57a-85b5-4996-b7ad-c43119116d78-bound-sa-token\") pod \"cert-manager-545d4d4674-hkkq9\" (UID: \"17cbe57a-85b5-4996-b7ad-c43119116d78\") " pod="cert-manager/cert-manager-545d4d4674-hkkq9" Mar 10 19:05:42 crc kubenswrapper[4861]: I0310 19:05:42.719955 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/17cbe57a-85b5-4996-b7ad-c43119116d78-bound-sa-token\") pod 
\"cert-manager-545d4d4674-hkkq9\" (UID: \"17cbe57a-85b5-4996-b7ad-c43119116d78\") " pod="cert-manager/cert-manager-545d4d4674-hkkq9" Mar 10 19:05:42 crc kubenswrapper[4861]: I0310 19:05:42.720141 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5mlxt\" (UniqueName: \"kubernetes.io/projected/17cbe57a-85b5-4996-b7ad-c43119116d78-kube-api-access-5mlxt\") pod \"cert-manager-545d4d4674-hkkq9\" (UID: \"17cbe57a-85b5-4996-b7ad-c43119116d78\") " pod="cert-manager/cert-manager-545d4d4674-hkkq9" Mar 10 19:05:42 crc kubenswrapper[4861]: I0310 19:05:42.772446 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5mlxt\" (UniqueName: \"kubernetes.io/projected/17cbe57a-85b5-4996-b7ad-c43119116d78-kube-api-access-5mlxt\") pod \"cert-manager-545d4d4674-hkkq9\" (UID: \"17cbe57a-85b5-4996-b7ad-c43119116d78\") " pod="cert-manager/cert-manager-545d4d4674-hkkq9" Mar 10 19:05:42 crc kubenswrapper[4861]: I0310 19:05:42.773457 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/17cbe57a-85b5-4996-b7ad-c43119116d78-bound-sa-token\") pod \"cert-manager-545d4d4674-hkkq9\" (UID: \"17cbe57a-85b5-4996-b7ad-c43119116d78\") " pod="cert-manager/cert-manager-545d4d4674-hkkq9" Mar 10 19:05:42 crc kubenswrapper[4861]: I0310 19:05:42.882366 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-545d4d4674-hkkq9" Mar 10 19:05:43 crc kubenswrapper[4861]: I0310 19:05:43.189399 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-545d4d4674-hkkq9"] Mar 10 19:05:43 crc kubenswrapper[4861]: I0310 19:05:43.374894 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-545d4d4674-hkkq9" event={"ID":"17cbe57a-85b5-4996-b7ad-c43119116d78","Type":"ContainerStarted","Data":"5b562db8767102cc5a0d056f7022bb7aa9203ee946ad7e4c8f3b7a269c764f20"} Mar 10 19:05:43 crc kubenswrapper[4861]: I0310 19:05:43.375181 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-545d4d4674-hkkq9" event={"ID":"17cbe57a-85b5-4996-b7ad-c43119116d78","Type":"ContainerStarted","Data":"71cb21ba95958fdeecd53c246c2b64e5369a36634cdbebb72dc058e5baaabbbb"} Mar 10 19:05:43 crc kubenswrapper[4861]: I0310 19:05:43.399893 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-545d4d4674-hkkq9" podStartSLOduration=1.399875598 podStartE2EDuration="1.399875598s" podCreationTimestamp="2026-03-10 19:05:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-10 19:05:43.395768598 +0000 UTC m=+1087.159204588" watchObservedRunningTime="2026-03-10 19:05:43.399875598 +0000 UTC m=+1087.163311558" Mar 10 19:05:51 crc kubenswrapper[4861]: I0310 19:05:51.992359 4861 patch_prober.go:28] interesting pod/machine-config-daemon-qttbr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 10 19:05:51 crc kubenswrapper[4861]: I0310 19:05:51.993037 4861 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qttbr" 
podUID="771189c2-452d-4204-a0b7-abfe9ba62bd0" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 10 19:05:52 crc kubenswrapper[4861]: I0310 19:05:52.673478 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-v8wch"] Mar 10 19:05:52 crc kubenswrapper[4861]: I0310 19:05:52.674343 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-v8wch" Mar 10 19:05:52 crc kubenswrapper[4861]: I0310 19:05:52.676980 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt" Mar 10 19:05:52 crc kubenswrapper[4861]: I0310 19:05:52.677361 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-index-dockercfg-gwrhb" Mar 10 19:05:52 crc kubenswrapper[4861]: I0310 19:05:52.677592 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt" Mar 10 19:05:52 crc kubenswrapper[4861]: I0310 19:05:52.702623 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-v8wch"] Mar 10 19:05:52 crc kubenswrapper[4861]: I0310 19:05:52.769472 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wx5d5\" (UniqueName: \"kubernetes.io/projected/da6cce3a-8d39-4a9b-b2fe-29e53724b37a-kube-api-access-wx5d5\") pod \"openstack-operator-index-v8wch\" (UID: \"da6cce3a-8d39-4a9b-b2fe-29e53724b37a\") " pod="openstack-operators/openstack-operator-index-v8wch" Mar 10 19:05:52 crc kubenswrapper[4861]: I0310 19:05:52.871242 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wx5d5\" (UniqueName: 
\"kubernetes.io/projected/da6cce3a-8d39-4a9b-b2fe-29e53724b37a-kube-api-access-wx5d5\") pod \"openstack-operator-index-v8wch\" (UID: \"da6cce3a-8d39-4a9b-b2fe-29e53724b37a\") " pod="openstack-operators/openstack-operator-index-v8wch" Mar 10 19:05:52 crc kubenswrapper[4861]: I0310 19:05:52.890804 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wx5d5\" (UniqueName: \"kubernetes.io/projected/da6cce3a-8d39-4a9b-b2fe-29e53724b37a-kube-api-access-wx5d5\") pod \"openstack-operator-index-v8wch\" (UID: \"da6cce3a-8d39-4a9b-b2fe-29e53724b37a\") " pod="openstack-operators/openstack-operator-index-v8wch" Mar 10 19:05:53 crc kubenswrapper[4861]: I0310 19:05:53.005302 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-v8wch" Mar 10 19:05:53 crc kubenswrapper[4861]: I0310 19:05:53.247675 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-v8wch"] Mar 10 19:05:53 crc kubenswrapper[4861]: I0310 19:05:53.456596 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-v8wch" event={"ID":"da6cce3a-8d39-4a9b-b2fe-29e53724b37a","Type":"ContainerStarted","Data":"a05cfd80f599d7b9f5bab71ddf909e03e44150110b101d72a49f8cd83e2ff2a2"} Mar 10 19:05:55 crc kubenswrapper[4861]: I0310 19:05:55.493452 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-v8wch" event={"ID":"da6cce3a-8d39-4a9b-b2fe-29e53724b37a","Type":"ContainerStarted","Data":"8bb537fac47c4a94799908e2c62b385728ba5206e7df843b8a7108b93b730ff0"} Mar 10 19:05:55 crc kubenswrapper[4861]: I0310 19:05:55.522140 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-v8wch" podStartSLOduration=2.273137163 podStartE2EDuration="3.522115664s" podCreationTimestamp="2026-03-10 19:05:52 +0000 UTC" 
firstStartedPulling="2026-03-10 19:05:53.251616448 +0000 UTC m=+1097.015052418" lastFinishedPulling="2026-03-10 19:05:54.500594919 +0000 UTC m=+1098.264030919" observedRunningTime="2026-03-10 19:05:55.514939594 +0000 UTC m=+1099.278375594" watchObservedRunningTime="2026-03-10 19:05:55.522115664 +0000 UTC m=+1099.285551664" Mar 10 19:05:55 crc kubenswrapper[4861]: I0310 19:05:55.832604 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-v8wch"] Mar 10 19:05:56 crc kubenswrapper[4861]: I0310 19:05:56.441457 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-kfhjj"] Mar 10 19:05:56 crc kubenswrapper[4861]: I0310 19:05:56.442657 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-kfhjj" Mar 10 19:05:56 crc kubenswrapper[4861]: I0310 19:05:56.449978 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-kfhjj"] Mar 10 19:05:56 crc kubenswrapper[4861]: I0310 19:05:56.531521 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n79gt\" (UniqueName: \"kubernetes.io/projected/0f595f08-5e94-415c-929c-d8c076efa590-kube-api-access-n79gt\") pod \"openstack-operator-index-kfhjj\" (UID: \"0f595f08-5e94-415c-929c-d8c076efa590\") " pod="openstack-operators/openstack-operator-index-kfhjj" Mar 10 19:05:56 crc kubenswrapper[4861]: I0310 19:05:56.633080 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n79gt\" (UniqueName: \"kubernetes.io/projected/0f595f08-5e94-415c-929c-d8c076efa590-kube-api-access-n79gt\") pod \"openstack-operator-index-kfhjj\" (UID: \"0f595f08-5e94-415c-929c-d8c076efa590\") " pod="openstack-operators/openstack-operator-index-kfhjj" Mar 10 19:05:56 crc kubenswrapper[4861]: I0310 19:05:56.664144 4861 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-n79gt\" (UniqueName: \"kubernetes.io/projected/0f595f08-5e94-415c-929c-d8c076efa590-kube-api-access-n79gt\") pod \"openstack-operator-index-kfhjj\" (UID: \"0f595f08-5e94-415c-929c-d8c076efa590\") " pod="openstack-operators/openstack-operator-index-kfhjj" Mar 10 19:05:56 crc kubenswrapper[4861]: I0310 19:05:56.777795 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-kfhjj" Mar 10 19:05:57 crc kubenswrapper[4861]: I0310 19:05:57.265197 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-kfhjj"] Mar 10 19:05:57 crc kubenswrapper[4861]: W0310 19:05:57.272736 4861 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0f595f08_5e94_415c_929c_d8c076efa590.slice/crio-d3db5a926ed7fe67aa7d11dc01a6c0c175fe3971a7e4d44a3bdb11b2b602c4a3 WatchSource:0}: Error finding container d3db5a926ed7fe67aa7d11dc01a6c0c175fe3971a7e4d44a3bdb11b2b602c4a3: Status 404 returned error can't find the container with id d3db5a926ed7fe67aa7d11dc01a6c0c175fe3971a7e4d44a3bdb11b2b602c4a3 Mar 10 19:05:57 crc kubenswrapper[4861]: I0310 19:05:57.511673 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/openstack-operator-index-v8wch" podUID="da6cce3a-8d39-4a9b-b2fe-29e53724b37a" containerName="registry-server" containerID="cri-o://8bb537fac47c4a94799908e2c62b385728ba5206e7df843b8a7108b93b730ff0" gracePeriod=2 Mar 10 19:05:57 crc kubenswrapper[4861]: I0310 19:05:57.512841 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-kfhjj" event={"ID":"0f595f08-5e94-415c-929c-d8c076efa590","Type":"ContainerStarted","Data":"d3db5a926ed7fe67aa7d11dc01a6c0c175fe3971a7e4d44a3bdb11b2b602c4a3"} Mar 10 19:05:57 crc kubenswrapper[4861]: I0310 19:05:57.944307 4861 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-v8wch" Mar 10 19:05:58 crc kubenswrapper[4861]: I0310 19:05:58.056425 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wx5d5\" (UniqueName: \"kubernetes.io/projected/da6cce3a-8d39-4a9b-b2fe-29e53724b37a-kube-api-access-wx5d5\") pod \"da6cce3a-8d39-4a9b-b2fe-29e53724b37a\" (UID: \"da6cce3a-8d39-4a9b-b2fe-29e53724b37a\") " Mar 10 19:05:58 crc kubenswrapper[4861]: I0310 19:05:58.069024 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/da6cce3a-8d39-4a9b-b2fe-29e53724b37a-kube-api-access-wx5d5" (OuterVolumeSpecName: "kube-api-access-wx5d5") pod "da6cce3a-8d39-4a9b-b2fe-29e53724b37a" (UID: "da6cce3a-8d39-4a9b-b2fe-29e53724b37a"). InnerVolumeSpecName "kube-api-access-wx5d5". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 10 19:05:58 crc kubenswrapper[4861]: I0310 19:05:58.158626 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wx5d5\" (UniqueName: \"kubernetes.io/projected/da6cce3a-8d39-4a9b-b2fe-29e53724b37a-kube-api-access-wx5d5\") on node \"crc\" DevicePath \"\"" Mar 10 19:05:58 crc kubenswrapper[4861]: I0310 19:05:58.523476 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-kfhjj" event={"ID":"0f595f08-5e94-415c-929c-d8c076efa590","Type":"ContainerStarted","Data":"32c379e2fe22d3345cabd00bee8821bce11161fde233ade1c379018f3abcb653"} Mar 10 19:05:58 crc kubenswrapper[4861]: I0310 19:05:58.529240 4861 generic.go:334] "Generic (PLEG): container finished" podID="da6cce3a-8d39-4a9b-b2fe-29e53724b37a" containerID="8bb537fac47c4a94799908e2c62b385728ba5206e7df843b8a7108b93b730ff0" exitCode=0 Mar 10 19:05:58 crc kubenswrapper[4861]: I0310 19:05:58.529309 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/openstack-operator-index-v8wch" event={"ID":"da6cce3a-8d39-4a9b-b2fe-29e53724b37a","Type":"ContainerDied","Data":"8bb537fac47c4a94799908e2c62b385728ba5206e7df843b8a7108b93b730ff0"} Mar 10 19:05:58 crc kubenswrapper[4861]: I0310 19:05:58.529347 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-v8wch" event={"ID":"da6cce3a-8d39-4a9b-b2fe-29e53724b37a","Type":"ContainerDied","Data":"a05cfd80f599d7b9f5bab71ddf909e03e44150110b101d72a49f8cd83e2ff2a2"} Mar 10 19:05:58 crc kubenswrapper[4861]: I0310 19:05:58.529377 4861 scope.go:117] "RemoveContainer" containerID="8bb537fac47c4a94799908e2c62b385728ba5206e7df843b8a7108b93b730ff0" Mar 10 19:05:58 crc kubenswrapper[4861]: I0310 19:05:58.529891 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-v8wch" Mar 10 19:05:58 crc kubenswrapper[4861]: I0310 19:05:58.559559 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-kfhjj" podStartSLOduration=2.142785215 podStartE2EDuration="2.559531304s" podCreationTimestamp="2026-03-10 19:05:56 +0000 UTC" firstStartedPulling="2026-03-10 19:05:57.276255484 +0000 UTC m=+1101.039691484" lastFinishedPulling="2026-03-10 19:05:57.693001583 +0000 UTC m=+1101.456437573" observedRunningTime="2026-03-10 19:05:58.549509236 +0000 UTC m=+1102.312945286" watchObservedRunningTime="2026-03-10 19:05:58.559531304 +0000 UTC m=+1102.322967294" Mar 10 19:05:58 crc kubenswrapper[4861]: I0310 19:05:58.565475 4861 scope.go:117] "RemoveContainer" containerID="8bb537fac47c4a94799908e2c62b385728ba5206e7df843b8a7108b93b730ff0" Mar 10 19:05:58 crc kubenswrapper[4861]: E0310 19:05:58.568368 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8bb537fac47c4a94799908e2c62b385728ba5206e7df843b8a7108b93b730ff0\": container 
with ID starting with 8bb537fac47c4a94799908e2c62b385728ba5206e7df843b8a7108b93b730ff0 not found: ID does not exist" containerID="8bb537fac47c4a94799908e2c62b385728ba5206e7df843b8a7108b93b730ff0" Mar 10 19:05:58 crc kubenswrapper[4861]: I0310 19:05:58.568448 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8bb537fac47c4a94799908e2c62b385728ba5206e7df843b8a7108b93b730ff0"} err="failed to get container status \"8bb537fac47c4a94799908e2c62b385728ba5206e7df843b8a7108b93b730ff0\": rpc error: code = NotFound desc = could not find container \"8bb537fac47c4a94799908e2c62b385728ba5206e7df843b8a7108b93b730ff0\": container with ID starting with 8bb537fac47c4a94799908e2c62b385728ba5206e7df843b8a7108b93b730ff0 not found: ID does not exist" Mar 10 19:05:58 crc kubenswrapper[4861]: I0310 19:05:58.588003 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-v8wch"] Mar 10 19:05:58 crc kubenswrapper[4861]: I0310 19:05:58.593239 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/openstack-operator-index-v8wch"] Mar 10 19:05:58 crc kubenswrapper[4861]: I0310 19:05:58.970445 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="da6cce3a-8d39-4a9b-b2fe-29e53724b37a" path="/var/lib/kubelet/pods/da6cce3a-8d39-4a9b-b2fe-29e53724b37a/volumes" Mar 10 19:06:00 crc kubenswrapper[4861]: I0310 19:06:00.136377 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29552826-tw76q"] Mar 10 19:06:00 crc kubenswrapper[4861]: E0310 19:06:00.137059 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="da6cce3a-8d39-4a9b-b2fe-29e53724b37a" containerName="registry-server" Mar 10 19:06:00 crc kubenswrapper[4861]: I0310 19:06:00.137081 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="da6cce3a-8d39-4a9b-b2fe-29e53724b37a" containerName="registry-server" Mar 10 19:06:00 crc kubenswrapper[4861]: 
I0310 19:06:00.137299 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="da6cce3a-8d39-4a9b-b2fe-29e53724b37a" containerName="registry-server" Mar 10 19:06:00 crc kubenswrapper[4861]: I0310 19:06:00.137894 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29552826-tw76q" Mar 10 19:06:00 crc kubenswrapper[4861]: I0310 19:06:00.143637 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-gfbj2" Mar 10 19:06:00 crc kubenswrapper[4861]: I0310 19:06:00.144199 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 10 19:06:00 crc kubenswrapper[4861]: I0310 19:06:00.144547 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 10 19:06:00 crc kubenswrapper[4861]: I0310 19:06:00.152416 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29552826-tw76q"] Mar 10 19:06:00 crc kubenswrapper[4861]: I0310 19:06:00.187769 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vjkj7\" (UniqueName: \"kubernetes.io/projected/68aea368-20a3-44da-9d77-eafce380801e-kube-api-access-vjkj7\") pod \"auto-csr-approver-29552826-tw76q\" (UID: \"68aea368-20a3-44da-9d77-eafce380801e\") " pod="openshift-infra/auto-csr-approver-29552826-tw76q" Mar 10 19:06:00 crc kubenswrapper[4861]: I0310 19:06:00.289347 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vjkj7\" (UniqueName: \"kubernetes.io/projected/68aea368-20a3-44da-9d77-eafce380801e-kube-api-access-vjkj7\") pod \"auto-csr-approver-29552826-tw76q\" (UID: \"68aea368-20a3-44da-9d77-eafce380801e\") " pod="openshift-infra/auto-csr-approver-29552826-tw76q" Mar 10 19:06:00 crc kubenswrapper[4861]: I0310 19:06:00.315151 4861 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vjkj7\" (UniqueName: \"kubernetes.io/projected/68aea368-20a3-44da-9d77-eafce380801e-kube-api-access-vjkj7\") pod \"auto-csr-approver-29552826-tw76q\" (UID: \"68aea368-20a3-44da-9d77-eafce380801e\") " pod="openshift-infra/auto-csr-approver-29552826-tw76q" Mar 10 19:06:00 crc kubenswrapper[4861]: I0310 19:06:00.503246 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29552826-tw76q" Mar 10 19:06:01 crc kubenswrapper[4861]: I0310 19:06:01.003066 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29552826-tw76q"] Mar 10 19:06:01 crc kubenswrapper[4861]: I0310 19:06:01.558842 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29552826-tw76q" event={"ID":"68aea368-20a3-44da-9d77-eafce380801e","Type":"ContainerStarted","Data":"701fa1e780c3a61f1d03f70225a80343697c7c4e1cddc06b34ba229e7b733731"} Mar 10 19:06:06 crc kubenswrapper[4861]: I0310 19:06:06.778238 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-kfhjj" Mar 10 19:06:06 crc kubenswrapper[4861]: I0310 19:06:06.778858 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-kfhjj" Mar 10 19:06:06 crc kubenswrapper[4861]: I0310 19:06:06.840889 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-kfhjj" Mar 10 19:06:07 crc kubenswrapper[4861]: I0310 19:06:07.616802 4861 generic.go:334] "Generic (PLEG): container finished" podID="68aea368-20a3-44da-9d77-eafce380801e" containerID="bf80b003e3d2a7edc4ed173b54216a47f70559f978c7bb9fbc3f53c42142760f" exitCode=0 Mar 10 19:06:07 crc kubenswrapper[4861]: I0310 19:06:07.616910 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-infra/auto-csr-approver-29552826-tw76q" event={"ID":"68aea368-20a3-44da-9d77-eafce380801e","Type":"ContainerDied","Data":"bf80b003e3d2a7edc4ed173b54216a47f70559f978c7bb9fbc3f53c42142760f"} Mar 10 19:06:07 crc kubenswrapper[4861]: I0310 19:06:07.661150 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-kfhjj" Mar 10 19:06:08 crc kubenswrapper[4861]: I0310 19:06:08.968409 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29552826-tw76q" Mar 10 19:06:09 crc kubenswrapper[4861]: I0310 19:06:09.136753 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vjkj7\" (UniqueName: \"kubernetes.io/projected/68aea368-20a3-44da-9d77-eafce380801e-kube-api-access-vjkj7\") pod \"68aea368-20a3-44da-9d77-eafce380801e\" (UID: \"68aea368-20a3-44da-9d77-eafce380801e\") " Mar 10 19:06:09 crc kubenswrapper[4861]: I0310 19:06:09.145956 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/68aea368-20a3-44da-9d77-eafce380801e-kube-api-access-vjkj7" (OuterVolumeSpecName: "kube-api-access-vjkj7") pod "68aea368-20a3-44da-9d77-eafce380801e" (UID: "68aea368-20a3-44da-9d77-eafce380801e"). InnerVolumeSpecName "kube-api-access-vjkj7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 10 19:06:09 crc kubenswrapper[4861]: I0310 19:06:09.238979 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vjkj7\" (UniqueName: \"kubernetes.io/projected/68aea368-20a3-44da-9d77-eafce380801e-kube-api-access-vjkj7\") on node \"crc\" DevicePath \"\"" Mar 10 19:06:09 crc kubenswrapper[4861]: I0310 19:06:09.636054 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29552826-tw76q" event={"ID":"68aea368-20a3-44da-9d77-eafce380801e","Type":"ContainerDied","Data":"701fa1e780c3a61f1d03f70225a80343697c7c4e1cddc06b34ba229e7b733731"} Mar 10 19:06:09 crc kubenswrapper[4861]: I0310 19:06:09.636419 4861 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="701fa1e780c3a61f1d03f70225a80343697c7c4e1cddc06b34ba229e7b733731" Mar 10 19:06:09 crc kubenswrapper[4861]: I0310 19:06:09.636134 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29552826-tw76q" Mar 10 19:06:10 crc kubenswrapper[4861]: I0310 19:06:10.032527 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29552820-xgtf5"] Mar 10 19:06:10 crc kubenswrapper[4861]: I0310 19:06:10.041286 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29552820-xgtf5"] Mar 10 19:06:10 crc kubenswrapper[4861]: I0310 19:06:10.967780 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="abcaa5c6-e4b8-474e-81c9-784dd89f3568" path="/var/lib/kubelet/pods/abcaa5c6-e4b8-474e-81c9-784dd89f3568/volumes" Mar 10 19:06:13 crc kubenswrapper[4861]: I0310 19:06:13.313671 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/951b5e6f67e727348308cf03f2b463d3e2bd27386b453f6699e03a49bahbg8b"] Mar 10 19:06:13 crc kubenswrapper[4861]: E0310 19:06:13.314419 4861 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="68aea368-20a3-44da-9d77-eafce380801e" containerName="oc" Mar 10 19:06:13 crc kubenswrapper[4861]: I0310 19:06:13.314441 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="68aea368-20a3-44da-9d77-eafce380801e" containerName="oc" Mar 10 19:06:13 crc kubenswrapper[4861]: I0310 19:06:13.314667 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="68aea368-20a3-44da-9d77-eafce380801e" containerName="oc" Mar 10 19:06:13 crc kubenswrapper[4861]: I0310 19:06:13.316132 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/951b5e6f67e727348308cf03f2b463d3e2bd27386b453f6699e03a49bahbg8b" Mar 10 19:06:13 crc kubenswrapper[4861]: I0310 19:06:13.322609 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-hdwc6" Mar 10 19:06:13 crc kubenswrapper[4861]: I0310 19:06:13.336283 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/951b5e6f67e727348308cf03f2b463d3e2bd27386b453f6699e03a49bahbg8b"] Mar 10 19:06:13 crc kubenswrapper[4861]: I0310 19:06:13.452789 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/798cdc73-2626-4da6-8104-ba0fe4ec829d-bundle\") pod \"951b5e6f67e727348308cf03f2b463d3e2bd27386b453f6699e03a49bahbg8b\" (UID: \"798cdc73-2626-4da6-8104-ba0fe4ec829d\") " pod="openstack-operators/951b5e6f67e727348308cf03f2b463d3e2bd27386b453f6699e03a49bahbg8b" Mar 10 19:06:13 crc kubenswrapper[4861]: I0310 19:06:13.453540 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d7bt8\" (UniqueName: \"kubernetes.io/projected/798cdc73-2626-4da6-8104-ba0fe4ec829d-kube-api-access-d7bt8\") pod \"951b5e6f67e727348308cf03f2b463d3e2bd27386b453f6699e03a49bahbg8b\" (UID: \"798cdc73-2626-4da6-8104-ba0fe4ec829d\") " 
pod="openstack-operators/951b5e6f67e727348308cf03f2b463d3e2bd27386b453f6699e03a49bahbg8b" Mar 10 19:06:13 crc kubenswrapper[4861]: I0310 19:06:13.453771 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/798cdc73-2626-4da6-8104-ba0fe4ec829d-util\") pod \"951b5e6f67e727348308cf03f2b463d3e2bd27386b453f6699e03a49bahbg8b\" (UID: \"798cdc73-2626-4da6-8104-ba0fe4ec829d\") " pod="openstack-operators/951b5e6f67e727348308cf03f2b463d3e2bd27386b453f6699e03a49bahbg8b" Mar 10 19:06:13 crc kubenswrapper[4861]: I0310 19:06:13.554682 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/798cdc73-2626-4da6-8104-ba0fe4ec829d-bundle\") pod \"951b5e6f67e727348308cf03f2b463d3e2bd27386b453f6699e03a49bahbg8b\" (UID: \"798cdc73-2626-4da6-8104-ba0fe4ec829d\") " pod="openstack-operators/951b5e6f67e727348308cf03f2b463d3e2bd27386b453f6699e03a49bahbg8b" Mar 10 19:06:13 crc kubenswrapper[4861]: I0310 19:06:13.554831 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d7bt8\" (UniqueName: \"kubernetes.io/projected/798cdc73-2626-4da6-8104-ba0fe4ec829d-kube-api-access-d7bt8\") pod \"951b5e6f67e727348308cf03f2b463d3e2bd27386b453f6699e03a49bahbg8b\" (UID: \"798cdc73-2626-4da6-8104-ba0fe4ec829d\") " pod="openstack-operators/951b5e6f67e727348308cf03f2b463d3e2bd27386b453f6699e03a49bahbg8b" Mar 10 19:06:13 crc kubenswrapper[4861]: I0310 19:06:13.554906 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/798cdc73-2626-4da6-8104-ba0fe4ec829d-util\") pod \"951b5e6f67e727348308cf03f2b463d3e2bd27386b453f6699e03a49bahbg8b\" (UID: \"798cdc73-2626-4da6-8104-ba0fe4ec829d\") " pod="openstack-operators/951b5e6f67e727348308cf03f2b463d3e2bd27386b453f6699e03a49bahbg8b" Mar 10 19:06:13 crc kubenswrapper[4861]: I0310 
19:06:13.555838 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/798cdc73-2626-4da6-8104-ba0fe4ec829d-bundle\") pod \"951b5e6f67e727348308cf03f2b463d3e2bd27386b453f6699e03a49bahbg8b\" (UID: \"798cdc73-2626-4da6-8104-ba0fe4ec829d\") " pod="openstack-operators/951b5e6f67e727348308cf03f2b463d3e2bd27386b453f6699e03a49bahbg8b" Mar 10 19:06:13 crc kubenswrapper[4861]: I0310 19:06:13.555871 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/798cdc73-2626-4da6-8104-ba0fe4ec829d-util\") pod \"951b5e6f67e727348308cf03f2b463d3e2bd27386b453f6699e03a49bahbg8b\" (UID: \"798cdc73-2626-4da6-8104-ba0fe4ec829d\") " pod="openstack-operators/951b5e6f67e727348308cf03f2b463d3e2bd27386b453f6699e03a49bahbg8b" Mar 10 19:06:13 crc kubenswrapper[4861]: I0310 19:06:13.588276 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d7bt8\" (UniqueName: \"kubernetes.io/projected/798cdc73-2626-4da6-8104-ba0fe4ec829d-kube-api-access-d7bt8\") pod \"951b5e6f67e727348308cf03f2b463d3e2bd27386b453f6699e03a49bahbg8b\" (UID: \"798cdc73-2626-4da6-8104-ba0fe4ec829d\") " pod="openstack-operators/951b5e6f67e727348308cf03f2b463d3e2bd27386b453f6699e03a49bahbg8b" Mar 10 19:06:13 crc kubenswrapper[4861]: I0310 19:06:13.651053 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/951b5e6f67e727348308cf03f2b463d3e2bd27386b453f6699e03a49bahbg8b" Mar 10 19:06:14 crc kubenswrapper[4861]: I0310 19:06:14.152506 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/951b5e6f67e727348308cf03f2b463d3e2bd27386b453f6699e03a49bahbg8b"] Mar 10 19:06:14 crc kubenswrapper[4861]: I0310 19:06:14.858230 4861 generic.go:334] "Generic (PLEG): container finished" podID="798cdc73-2626-4da6-8104-ba0fe4ec829d" containerID="55fa112c14cde29284380b33f9c92869731ca7433d7ac69805df7107f2a2a1d1" exitCode=0 Mar 10 19:06:14 crc kubenswrapper[4861]: I0310 19:06:14.858340 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/951b5e6f67e727348308cf03f2b463d3e2bd27386b453f6699e03a49bahbg8b" event={"ID":"798cdc73-2626-4da6-8104-ba0fe4ec829d","Type":"ContainerDied","Data":"55fa112c14cde29284380b33f9c92869731ca7433d7ac69805df7107f2a2a1d1"} Mar 10 19:06:14 crc kubenswrapper[4861]: I0310 19:06:14.858922 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/951b5e6f67e727348308cf03f2b463d3e2bd27386b453f6699e03a49bahbg8b" event={"ID":"798cdc73-2626-4da6-8104-ba0fe4ec829d","Type":"ContainerStarted","Data":"edb8193148018ff6a25b73ea1627a89e063c21129b178fd128a72800b44ab3b0"} Mar 10 19:06:15 crc kubenswrapper[4861]: I0310 19:06:15.870161 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/951b5e6f67e727348308cf03f2b463d3e2bd27386b453f6699e03a49bahbg8b" event={"ID":"798cdc73-2626-4da6-8104-ba0fe4ec829d","Type":"ContainerStarted","Data":"aff0e81837a7a0b7bedbbc834f73405a7130e985af0f8607e46a73e8f01822c1"} Mar 10 19:06:16 crc kubenswrapper[4861]: I0310 19:06:16.882008 4861 generic.go:334] "Generic (PLEG): container finished" podID="798cdc73-2626-4da6-8104-ba0fe4ec829d" containerID="aff0e81837a7a0b7bedbbc834f73405a7130e985af0f8607e46a73e8f01822c1" exitCode=0 Mar 10 19:06:16 crc kubenswrapper[4861]: I0310 19:06:16.882068 4861 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack-operators/951b5e6f67e727348308cf03f2b463d3e2bd27386b453f6699e03a49bahbg8b" event={"ID":"798cdc73-2626-4da6-8104-ba0fe4ec829d","Type":"ContainerDied","Data":"aff0e81837a7a0b7bedbbc834f73405a7130e985af0f8607e46a73e8f01822c1"} Mar 10 19:06:17 crc kubenswrapper[4861]: I0310 19:06:17.906945 4861 generic.go:334] "Generic (PLEG): container finished" podID="798cdc73-2626-4da6-8104-ba0fe4ec829d" containerID="91bb5c8a65c45a52b3976bf8e9c188cf6473718d1511432eb3ed1d1e7ab481c7" exitCode=0 Mar 10 19:06:17 crc kubenswrapper[4861]: I0310 19:06:17.907288 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/951b5e6f67e727348308cf03f2b463d3e2bd27386b453f6699e03a49bahbg8b" event={"ID":"798cdc73-2626-4da6-8104-ba0fe4ec829d","Type":"ContainerDied","Data":"91bb5c8a65c45a52b3976bf8e9c188cf6473718d1511432eb3ed1d1e7ab481c7"} Mar 10 19:06:19 crc kubenswrapper[4861]: I0310 19:06:19.250222 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/951b5e6f67e727348308cf03f2b463d3e2bd27386b453f6699e03a49bahbg8b" Mar 10 19:06:19 crc kubenswrapper[4861]: I0310 19:06:19.470778 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/798cdc73-2626-4da6-8104-ba0fe4ec829d-bundle\") pod \"798cdc73-2626-4da6-8104-ba0fe4ec829d\" (UID: \"798cdc73-2626-4da6-8104-ba0fe4ec829d\") " Mar 10 19:06:19 crc kubenswrapper[4861]: I0310 19:06:19.471248 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/798cdc73-2626-4da6-8104-ba0fe4ec829d-util\") pod \"798cdc73-2626-4da6-8104-ba0fe4ec829d\" (UID: \"798cdc73-2626-4da6-8104-ba0fe4ec829d\") " Mar 10 19:06:19 crc kubenswrapper[4861]: I0310 19:06:19.471339 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d7bt8\" (UniqueName: \"kubernetes.io/projected/798cdc73-2626-4da6-8104-ba0fe4ec829d-kube-api-access-d7bt8\") pod \"798cdc73-2626-4da6-8104-ba0fe4ec829d\" (UID: \"798cdc73-2626-4da6-8104-ba0fe4ec829d\") " Mar 10 19:06:19 crc kubenswrapper[4861]: I0310 19:06:19.472360 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/798cdc73-2626-4da6-8104-ba0fe4ec829d-bundle" (OuterVolumeSpecName: "bundle") pod "798cdc73-2626-4da6-8104-ba0fe4ec829d" (UID: "798cdc73-2626-4da6-8104-ba0fe4ec829d"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 10 19:06:19 crc kubenswrapper[4861]: I0310 19:06:19.480408 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/798cdc73-2626-4da6-8104-ba0fe4ec829d-kube-api-access-d7bt8" (OuterVolumeSpecName: "kube-api-access-d7bt8") pod "798cdc73-2626-4da6-8104-ba0fe4ec829d" (UID: "798cdc73-2626-4da6-8104-ba0fe4ec829d"). InnerVolumeSpecName "kube-api-access-d7bt8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 10 19:06:19 crc kubenswrapper[4861]: I0310 19:06:19.503208 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/798cdc73-2626-4da6-8104-ba0fe4ec829d-util" (OuterVolumeSpecName: "util") pod "798cdc73-2626-4da6-8104-ba0fe4ec829d" (UID: "798cdc73-2626-4da6-8104-ba0fe4ec829d"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 10 19:06:19 crc kubenswrapper[4861]: I0310 19:06:19.573178 4861 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/798cdc73-2626-4da6-8104-ba0fe4ec829d-util\") on node \"crc\" DevicePath \"\"" Mar 10 19:06:19 crc kubenswrapper[4861]: I0310 19:06:19.573228 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d7bt8\" (UniqueName: \"kubernetes.io/projected/798cdc73-2626-4da6-8104-ba0fe4ec829d-kube-api-access-d7bt8\") on node \"crc\" DevicePath \"\"" Mar 10 19:06:19 crc kubenswrapper[4861]: I0310 19:06:19.573248 4861 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/798cdc73-2626-4da6-8104-ba0fe4ec829d-bundle\") on node \"crc\" DevicePath \"\"" Mar 10 19:06:19 crc kubenswrapper[4861]: I0310 19:06:19.931128 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/951b5e6f67e727348308cf03f2b463d3e2bd27386b453f6699e03a49bahbg8b" event={"ID":"798cdc73-2626-4da6-8104-ba0fe4ec829d","Type":"ContainerDied","Data":"edb8193148018ff6a25b73ea1627a89e063c21129b178fd128a72800b44ab3b0"} Mar 10 19:06:19 crc kubenswrapper[4861]: I0310 19:06:19.931195 4861 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="edb8193148018ff6a25b73ea1627a89e063c21129b178fd128a72800b44ab3b0" Mar 10 19:06:19 crc kubenswrapper[4861]: I0310 19:06:19.931269 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/951b5e6f67e727348308cf03f2b463d3e2bd27386b453f6699e03a49bahbg8b" Mar 10 19:06:21 crc kubenswrapper[4861]: I0310 19:06:21.992796 4861 patch_prober.go:28] interesting pod/machine-config-daemon-qttbr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 10 19:06:21 crc kubenswrapper[4861]: I0310 19:06:21.993260 4861 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qttbr" podUID="771189c2-452d-4204-a0b7-abfe9ba62bd0" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 10 19:06:26 crc kubenswrapper[4861]: I0310 19:06:26.222844 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-init-6cf8df7788-wv4t4"] Mar 10 19:06:26 crc kubenswrapper[4861]: E0310 19:06:26.223314 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="798cdc73-2626-4da6-8104-ba0fe4ec829d" containerName="util" Mar 10 19:06:26 crc kubenswrapper[4861]: I0310 19:06:26.223326 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="798cdc73-2626-4da6-8104-ba0fe4ec829d" containerName="util" Mar 10 19:06:26 crc kubenswrapper[4861]: E0310 19:06:26.223334 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="798cdc73-2626-4da6-8104-ba0fe4ec829d" containerName="extract" Mar 10 19:06:26 crc kubenswrapper[4861]: I0310 19:06:26.223340 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="798cdc73-2626-4da6-8104-ba0fe4ec829d" containerName="extract" Mar 10 19:06:26 crc kubenswrapper[4861]: E0310 19:06:26.223357 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="798cdc73-2626-4da6-8104-ba0fe4ec829d" 
containerName="pull" Mar 10 19:06:26 crc kubenswrapper[4861]: I0310 19:06:26.223362 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="798cdc73-2626-4da6-8104-ba0fe4ec829d" containerName="pull" Mar 10 19:06:26 crc kubenswrapper[4861]: I0310 19:06:26.223461 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="798cdc73-2626-4da6-8104-ba0fe4ec829d" containerName="extract" Mar 10 19:06:26 crc kubenswrapper[4861]: I0310 19:06:26.223864 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-6cf8df7788-wv4t4" Mar 10 19:06:26 crc kubenswrapper[4861]: I0310 19:06:26.226150 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-init-dockercfg-96lc6" Mar 10 19:06:26 crc kubenswrapper[4861]: I0310 19:06:26.301751 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-6cf8df7788-wv4t4"] Mar 10 19:06:26 crc kubenswrapper[4861]: I0310 19:06:26.376955 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dq95x\" (UniqueName: \"kubernetes.io/projected/33055dc2-3f37-47e1-9550-f601f76f9b2a-kube-api-access-dq95x\") pod \"openstack-operator-controller-init-6cf8df7788-wv4t4\" (UID: \"33055dc2-3f37-47e1-9550-f601f76f9b2a\") " pod="openstack-operators/openstack-operator-controller-init-6cf8df7788-wv4t4" Mar 10 19:06:26 crc kubenswrapper[4861]: I0310 19:06:26.478661 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dq95x\" (UniqueName: \"kubernetes.io/projected/33055dc2-3f37-47e1-9550-f601f76f9b2a-kube-api-access-dq95x\") pod \"openstack-operator-controller-init-6cf8df7788-wv4t4\" (UID: \"33055dc2-3f37-47e1-9550-f601f76f9b2a\") " pod="openstack-operators/openstack-operator-controller-init-6cf8df7788-wv4t4" Mar 10 19:06:26 crc kubenswrapper[4861]: I0310 
19:06:26.508977 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dq95x\" (UniqueName: \"kubernetes.io/projected/33055dc2-3f37-47e1-9550-f601f76f9b2a-kube-api-access-dq95x\") pod \"openstack-operator-controller-init-6cf8df7788-wv4t4\" (UID: \"33055dc2-3f37-47e1-9550-f601f76f9b2a\") " pod="openstack-operators/openstack-operator-controller-init-6cf8df7788-wv4t4" Mar 10 19:06:26 crc kubenswrapper[4861]: I0310 19:06:26.537758 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-6cf8df7788-wv4t4" Mar 10 19:06:26 crc kubenswrapper[4861]: I0310 19:06:26.801478 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-6cf8df7788-wv4t4"] Mar 10 19:06:26 crc kubenswrapper[4861]: W0310 19:06:26.806363 4861 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod33055dc2_3f37_47e1_9550_f601f76f9b2a.slice/crio-56639d5c71e9ac64804c0aa7ab59167d649182db09379ca1376cd021fbcf75a5 WatchSource:0}: Error finding container 56639d5c71e9ac64804c0aa7ab59167d649182db09379ca1376cd021fbcf75a5: Status 404 returned error can't find the container with id 56639d5c71e9ac64804c0aa7ab59167d649182db09379ca1376cd021fbcf75a5 Mar 10 19:06:26 crc kubenswrapper[4861]: I0310 19:06:26.980890 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-6cf8df7788-wv4t4" event={"ID":"33055dc2-3f37-47e1-9550-f601f76f9b2a","Type":"ContainerStarted","Data":"56639d5c71e9ac64804c0aa7ab59167d649182db09379ca1376cd021fbcf75a5"} Mar 10 19:06:32 crc kubenswrapper[4861]: I0310 19:06:32.017056 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-6cf8df7788-wv4t4" 
event={"ID":"33055dc2-3f37-47e1-9550-f601f76f9b2a","Type":"ContainerStarted","Data":"40fd38ddf0f928ed3dfa998409915e6c502af7badff1e00ebae1f6e5b97a5a05"} Mar 10 19:06:32 crc kubenswrapper[4861]: I0310 19:06:32.018514 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-init-6cf8df7788-wv4t4" Mar 10 19:06:32 crc kubenswrapper[4861]: I0310 19:06:32.060600 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-init-6cf8df7788-wv4t4" podStartSLOduration=1.5036801469999999 podStartE2EDuration="6.060582686s" podCreationTimestamp="2026-03-10 19:06:26 +0000 UTC" firstStartedPulling="2026-03-10 19:06:26.80943496 +0000 UTC m=+1130.572870920" lastFinishedPulling="2026-03-10 19:06:31.366337499 +0000 UTC m=+1135.129773459" observedRunningTime="2026-03-10 19:06:32.058929516 +0000 UTC m=+1135.822365486" watchObservedRunningTime="2026-03-10 19:06:32.060582686 +0000 UTC m=+1135.824018656" Mar 10 19:06:36 crc kubenswrapper[4861]: I0310 19:06:36.541766 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-init-6cf8df7788-wv4t4" Mar 10 19:06:37 crc kubenswrapper[4861]: I0310 19:06:37.891386 4861 scope.go:117] "RemoveContainer" containerID="018765a2efa90214dea88dd37bfb14fd89c01e75371017b4a7e19c2e1ba7124e" Mar 10 19:06:51 crc kubenswrapper[4861]: I0310 19:06:51.992154 4861 patch_prober.go:28] interesting pod/machine-config-daemon-qttbr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 10 19:06:51 crc kubenswrapper[4861]: I0310 19:06:51.992873 4861 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qttbr" 
podUID="771189c2-452d-4204-a0b7-abfe9ba62bd0" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 10 19:06:51 crc kubenswrapper[4861]: I0310 19:06:51.992949 4861 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-qttbr" Mar 10 19:06:51 crc kubenswrapper[4861]: I0310 19:06:51.994003 4861 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"ad21d8f400c7b7525b1502f8f827dc33dea8661c3e4157be937c7ca43fd01014"} pod="openshift-machine-config-operator/machine-config-daemon-qttbr" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 10 19:06:51 crc kubenswrapper[4861]: I0310 19:06:51.994109 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-qttbr" podUID="771189c2-452d-4204-a0b7-abfe9ba62bd0" containerName="machine-config-daemon" containerID="cri-o://ad21d8f400c7b7525b1502f8f827dc33dea8661c3e4157be937c7ca43fd01014" gracePeriod=600 Mar 10 19:06:52 crc kubenswrapper[4861]: I0310 19:06:52.181529 4861 generic.go:334] "Generic (PLEG): container finished" podID="771189c2-452d-4204-a0b7-abfe9ba62bd0" containerID="ad21d8f400c7b7525b1502f8f827dc33dea8661c3e4157be937c7ca43fd01014" exitCode=0 Mar 10 19:06:52 crc kubenswrapper[4861]: I0310 19:06:52.181629 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qttbr" event={"ID":"771189c2-452d-4204-a0b7-abfe9ba62bd0","Type":"ContainerDied","Data":"ad21d8f400c7b7525b1502f8f827dc33dea8661c3e4157be937c7ca43fd01014"} Mar 10 19:06:52 crc kubenswrapper[4861]: I0310 19:06:52.181789 4861 scope.go:117] "RemoveContainer" 
containerID="524ff77cbb2ece0094ac12e0d38120bb8410468f37ff11e7edde1dc4c7082951" Mar 10 19:06:53 crc kubenswrapper[4861]: I0310 19:06:53.195064 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qttbr" event={"ID":"771189c2-452d-4204-a0b7-abfe9ba62bd0","Type":"ContainerStarted","Data":"c5cf53ff0c1076e7b20b64dca8f896382ec5b206e350d4b3aabaf2ac26200351"} Mar 10 19:07:15 crc kubenswrapper[4861]: I0310 19:07:15.022325 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-677bd678f7-rzrtw"] Mar 10 19:07:15 crc kubenswrapper[4861]: I0310 19:07:15.032468 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-677bd678f7-rzrtw" Mar 10 19:07:15 crc kubenswrapper[4861]: I0310 19:07:15.043812 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-jzzsr" Mar 10 19:07:15 crc kubenswrapper[4861]: I0310 19:07:15.090781 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-677bd678f7-rzrtw"] Mar 10 19:07:15 crc kubenswrapper[4861]: I0310 19:07:15.103857 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-984cd4dcf-d7ph4"] Mar 10 19:07:15 crc kubenswrapper[4861]: I0310 19:07:15.105258 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-984cd4dcf-d7ph4" Mar 10 19:07:15 crc kubenswrapper[4861]: I0310 19:07:15.113357 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-66d56f6ff4-4mh8f"] Mar 10 19:07:15 crc kubenswrapper[4861]: I0310 19:07:15.114155 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-66d56f6ff4-4mh8f" Mar 10 19:07:15 crc kubenswrapper[4861]: I0310 19:07:15.116853 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-984cd4dcf-d7ph4"] Mar 10 19:07:15 crc kubenswrapper[4861]: I0310 19:07:15.132622 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-r25xz" Mar 10 19:07:15 crc kubenswrapper[4861]: I0310 19:07:15.132938 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-rw7nr" Mar 10 19:07:15 crc kubenswrapper[4861]: I0310 19:07:15.133349 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-66d56f6ff4-4mh8f"] Mar 10 19:07:15 crc kubenswrapper[4861]: I0310 19:07:15.134459 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4xj57\" (UniqueName: \"kubernetes.io/projected/1596b973-4b23-48a6-9924-8e98b7535a61-kube-api-access-4xj57\") pod \"designate-operator-controller-manager-66d56f6ff4-4mh8f\" (UID: \"1596b973-4b23-48a6-9924-8e98b7535a61\") " pod="openstack-operators/designate-operator-controller-manager-66d56f6ff4-4mh8f" Mar 10 19:07:15 crc kubenswrapper[4861]: I0310 19:07:15.159913 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-5964f64c48-9bhqz"] Mar 10 19:07:15 crc kubenswrapper[4861]: I0310 19:07:15.160903 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-5964f64c48-9bhqz" Mar 10 19:07:15 crc kubenswrapper[4861]: I0310 19:07:15.168415 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-glzgd" Mar 10 19:07:15 crc kubenswrapper[4861]: I0310 19:07:15.176859 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-5964f64c48-9bhqz"] Mar 10 19:07:15 crc kubenswrapper[4861]: I0310 19:07:15.184950 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/heat-operator-controller-manager-77b6666d85-q76xg"] Mar 10 19:07:15 crc kubenswrapper[4861]: I0310 19:07:15.185699 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-77b6666d85-q76xg" Mar 10 19:07:15 crc kubenswrapper[4861]: I0310 19:07:15.188904 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-zvqw4" Mar 10 19:07:15 crc kubenswrapper[4861]: I0310 19:07:15.195027 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-77b6666d85-q76xg"] Mar 10 19:07:15 crc kubenswrapper[4861]: I0310 19:07:15.206337 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-5995f4446f-z66mq"] Mar 10 19:07:15 crc kubenswrapper[4861]: I0310 19:07:15.207552 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-5995f4446f-z66mq" Mar 10 19:07:15 crc kubenswrapper[4861]: I0310 19:07:15.214167 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert" Mar 10 19:07:15 crc kubenswrapper[4861]: I0310 19:07:15.214360 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-dg2ls" Mar 10 19:07:15 crc kubenswrapper[4861]: I0310 19:07:15.220558 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-6d9d6b584d-265hs"] Mar 10 19:07:15 crc kubenswrapper[4861]: I0310 19:07:15.221360 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-6d9d6b584d-265hs" Mar 10 19:07:15 crc kubenswrapper[4861]: I0310 19:07:15.228980 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-mf2wf" Mar 10 19:07:15 crc kubenswrapper[4861]: I0310 19:07:15.234049 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ironic-operator-controller-manager-6bbb499bbc-bdscg"] Mar 10 19:07:15 crc kubenswrapper[4861]: I0310 19:07:15.235009 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-6bbb499bbc-bdscg" Mar 10 19:07:15 crc kubenswrapper[4861]: I0310 19:07:15.235979 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7sjrv\" (UniqueName: \"kubernetes.io/projected/239d1170-1e62-44e7-a07d-10ca9adfa28e-kube-api-access-7sjrv\") pod \"cinder-operator-controller-manager-984cd4dcf-d7ph4\" (UID: \"239d1170-1e62-44e7-a07d-10ca9adfa28e\") " pod="openstack-operators/cinder-operator-controller-manager-984cd4dcf-d7ph4" Mar 10 19:07:15 crc kubenswrapper[4861]: I0310 19:07:15.236035 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4xj57\" (UniqueName: \"kubernetes.io/projected/1596b973-4b23-48a6-9924-8e98b7535a61-kube-api-access-4xj57\") pod \"designate-operator-controller-manager-66d56f6ff4-4mh8f\" (UID: \"1596b973-4b23-48a6-9924-8e98b7535a61\") " pod="openstack-operators/designate-operator-controller-manager-66d56f6ff4-4mh8f" Mar 10 19:07:15 crc kubenswrapper[4861]: I0310 19:07:15.236080 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v7nxp\" (UniqueName: \"kubernetes.io/projected/54044555-52fe-44c6-9e47-7f2748a8b114-kube-api-access-v7nxp\") pod \"barbican-operator-controller-manager-677bd678f7-rzrtw\" (UID: \"54044555-52fe-44c6-9e47-7f2748a8b114\") " pod="openstack-operators/barbican-operator-controller-manager-677bd678f7-rzrtw" Mar 10 19:07:15 crc kubenswrapper[4861]: I0310 19:07:15.244767 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-5995f4446f-z66mq"] Mar 10 19:07:15 crc kubenswrapper[4861]: I0310 19:07:15.248132 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-xrfz5" Mar 10 19:07:15 crc kubenswrapper[4861]: I0310 19:07:15.265501 4861 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-6d9d6b584d-265hs"] Mar 10 19:07:15 crc kubenswrapper[4861]: I0310 19:07:15.295474 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4xj57\" (UniqueName: \"kubernetes.io/projected/1596b973-4b23-48a6-9924-8e98b7535a61-kube-api-access-4xj57\") pod \"designate-operator-controller-manager-66d56f6ff4-4mh8f\" (UID: \"1596b973-4b23-48a6-9924-8e98b7535a61\") " pod="openstack-operators/designate-operator-controller-manager-66d56f6ff4-4mh8f" Mar 10 19:07:15 crc kubenswrapper[4861]: I0310 19:07:15.303165 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-684f77d66d-mf7c8"] Mar 10 19:07:15 crc kubenswrapper[4861]: I0310 19:07:15.304158 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-684f77d66d-mf7c8" Mar 10 19:07:15 crc kubenswrapper[4861]: I0310 19:07:15.306849 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-rcvvl" Mar 10 19:07:15 crc kubenswrapper[4861]: I0310 19:07:15.307346 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-6bbb499bbc-bdscg"] Mar 10 19:07:15 crc kubenswrapper[4861]: I0310 19:07:15.320613 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-684f77d66d-mf7c8"] Mar 10 19:07:15 crc kubenswrapper[4861]: I0310 19:07:15.333293 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/manila-operator-controller-manager-68f45f9d9f-864bh"] Mar 10 19:07:15 crc kubenswrapper[4861]: I0310 19:07:15.334322 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-68f45f9d9f-864bh" Mar 10 19:07:15 crc kubenswrapper[4861]: I0310 19:07:15.337141 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4tsbt\" (UniqueName: \"kubernetes.io/projected/56a6874c-f929-4490-a247-52542d4aa8f1-kube-api-access-4tsbt\") pod \"heat-operator-controller-manager-77b6666d85-q76xg\" (UID: \"56a6874c-f929-4490-a247-52542d4aa8f1\") " pod="openstack-operators/heat-operator-controller-manager-77b6666d85-q76xg" Mar 10 19:07:15 crc kubenswrapper[4861]: I0310 19:07:15.337189 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7sjrv\" (UniqueName: \"kubernetes.io/projected/239d1170-1e62-44e7-a07d-10ca9adfa28e-kube-api-access-7sjrv\") pod \"cinder-operator-controller-manager-984cd4dcf-d7ph4\" (UID: \"239d1170-1e62-44e7-a07d-10ca9adfa28e\") " pod="openstack-operators/cinder-operator-controller-manager-984cd4dcf-d7ph4" Mar 10 19:07:15 crc kubenswrapper[4861]: I0310 19:07:15.337220 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j7gwl\" (UniqueName: \"kubernetes.io/projected/acdc9485-305f-401f-91bb-749a9e1e3c89-kube-api-access-j7gwl\") pod \"infra-operator-controller-manager-5995f4446f-z66mq\" (UID: \"acdc9485-305f-401f-91bb-749a9e1e3c89\") " pod="openstack-operators/infra-operator-controller-manager-5995f4446f-z66mq" Mar 10 19:07:15 crc kubenswrapper[4861]: I0310 19:07:15.337238 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7rtkn\" (UniqueName: \"kubernetes.io/projected/94ef6e9b-1270-4992-9d58-902e82b52294-kube-api-access-7rtkn\") pod \"glance-operator-controller-manager-5964f64c48-9bhqz\" (UID: \"94ef6e9b-1270-4992-9d58-902e82b52294\") " pod="openstack-operators/glance-operator-controller-manager-5964f64c48-9bhqz" Mar 10 
19:07:15 crc kubenswrapper[4861]: I0310 19:07:15.337679 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-pq9ch" Mar 10 19:07:15 crc kubenswrapper[4861]: I0310 19:07:15.338274 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9kkjs\" (UniqueName: \"kubernetes.io/projected/ce725c26-b020-402f-ad0c-ae44f307e21e-kube-api-access-9kkjs\") pod \"horizon-operator-controller-manager-6d9d6b584d-265hs\" (UID: \"ce725c26-b020-402f-ad0c-ae44f307e21e\") " pod="openstack-operators/horizon-operator-controller-manager-6d9d6b584d-265hs" Mar 10 19:07:15 crc kubenswrapper[4861]: I0310 19:07:15.338307 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v7nxp\" (UniqueName: \"kubernetes.io/projected/54044555-52fe-44c6-9e47-7f2748a8b114-kube-api-access-v7nxp\") pod \"barbican-operator-controller-manager-677bd678f7-rzrtw\" (UID: \"54044555-52fe-44c6-9e47-7f2748a8b114\") " pod="openstack-operators/barbican-operator-controller-manager-677bd678f7-rzrtw" Mar 10 19:07:15 crc kubenswrapper[4861]: I0310 19:07:15.338325 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/acdc9485-305f-401f-91bb-749a9e1e3c89-cert\") pod \"infra-operator-controller-manager-5995f4446f-z66mq\" (UID: \"acdc9485-305f-401f-91bb-749a9e1e3c89\") " pod="openstack-operators/infra-operator-controller-manager-5995f4446f-z66mq" Mar 10 19:07:15 crc kubenswrapper[4861]: I0310 19:07:15.338359 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ksgc7\" (UniqueName: \"kubernetes.io/projected/16f833b1-3203-4532-a5f4-bc8769cd0932-kube-api-access-ksgc7\") pod \"ironic-operator-controller-manager-6bbb499bbc-bdscg\" (UID: \"16f833b1-3203-4532-a5f4-bc8769cd0932\") " 
pod="openstack-operators/ironic-operator-controller-manager-6bbb499bbc-bdscg" Mar 10 19:07:15 crc kubenswrapper[4861]: I0310 19:07:15.351379 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-68f45f9d9f-864bh"] Mar 10 19:07:15 crc kubenswrapper[4861]: I0310 19:07:15.356327 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7sjrv\" (UniqueName: \"kubernetes.io/projected/239d1170-1e62-44e7-a07d-10ca9adfa28e-kube-api-access-7sjrv\") pod \"cinder-operator-controller-manager-984cd4dcf-d7ph4\" (UID: \"239d1170-1e62-44e7-a07d-10ca9adfa28e\") " pod="openstack-operators/cinder-operator-controller-manager-984cd4dcf-d7ph4" Mar 10 19:07:15 crc kubenswrapper[4861]: I0310 19:07:15.359678 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v7nxp\" (UniqueName: \"kubernetes.io/projected/54044555-52fe-44c6-9e47-7f2748a8b114-kube-api-access-v7nxp\") pod \"barbican-operator-controller-manager-677bd678f7-rzrtw\" (UID: \"54044555-52fe-44c6-9e47-7f2748a8b114\") " pod="openstack-operators/barbican-operator-controller-manager-677bd678f7-rzrtw" Mar 10 19:07:15 crc kubenswrapper[4861]: I0310 19:07:15.363096 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/neutron-operator-controller-manager-776c5696bf-zds26"] Mar 10 19:07:15 crc kubenswrapper[4861]: I0310 19:07:15.363888 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-776c5696bf-zds26" Mar 10 19:07:15 crc kubenswrapper[4861]: I0310 19:07:15.371007 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-rf5wc" Mar 10 19:07:15 crc kubenswrapper[4861]: I0310 19:07:15.371451 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-776c5696bf-zds26"] Mar 10 19:07:15 crc kubenswrapper[4861]: I0310 19:07:15.371479 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-658d4cdd5-qxj54"] Mar 10 19:07:15 crc kubenswrapper[4861]: I0310 19:07:15.372072 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-658d4cdd5-qxj54" Mar 10 19:07:15 crc kubenswrapper[4861]: I0310 19:07:15.375911 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-zb74q" Mar 10 19:07:15 crc kubenswrapper[4861]: I0310 19:07:15.382490 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-658d4cdd5-qxj54"] Mar 10 19:07:15 crc kubenswrapper[4861]: I0310 19:07:15.386839 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/octavia-operator-controller-manager-5f4f55cb5c-wvkvr"] Mar 10 19:07:15 crc kubenswrapper[4861]: I0310 19:07:15.387587 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-5f4f55cb5c-wvkvr" Mar 10 19:07:15 crc kubenswrapper[4861]: I0310 19:07:15.389305 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-jpqmh" Mar 10 19:07:15 crc kubenswrapper[4861]: I0310 19:07:15.392086 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-569cc54c5-qvqhp"] Mar 10 19:07:15 crc kubenswrapper[4861]: I0310 19:07:15.392861 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-569cc54c5-qvqhp" Mar 10 19:07:15 crc kubenswrapper[4861]: I0310 19:07:15.394156 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-l8tq5" Mar 10 19:07:15 crc kubenswrapper[4861]: I0310 19:07:15.395453 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-677bd678f7-rzrtw" Mar 10 19:07:15 crc kubenswrapper[4861]: I0310 19:07:15.405594 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-5f4f55cb5c-wvkvr"] Mar 10 19:07:15 crc kubenswrapper[4861]: I0310 19:07:15.414215 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-569cc54c5-qvqhp"] Mar 10 19:07:15 crc kubenswrapper[4861]: I0310 19:07:15.422852 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-6647d7885fkj9zr"] Mar 10 19:07:15 crc kubenswrapper[4861]: I0310 19:07:15.423885 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6647d7885fkj9zr" Mar 10 19:07:15 crc kubenswrapper[4861]: I0310 19:07:15.425700 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert" Mar 10 19:07:15 crc kubenswrapper[4861]: I0310 19:07:15.426676 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-lrvkq" Mar 10 19:07:15 crc kubenswrapper[4861]: I0310 19:07:15.440516 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ksgc7\" (UniqueName: \"kubernetes.io/projected/16f833b1-3203-4532-a5f4-bc8769cd0932-kube-api-access-ksgc7\") pod \"ironic-operator-controller-manager-6bbb499bbc-bdscg\" (UID: \"16f833b1-3203-4532-a5f4-bc8769cd0932\") " pod="openstack-operators/ironic-operator-controller-manager-6bbb499bbc-bdscg" Mar 10 19:07:15 crc kubenswrapper[4861]: I0310 19:07:15.440571 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wl47n\" (UniqueName: \"kubernetes.io/projected/082164ca-006f-4217-a664-a81f24fb7f9c-kube-api-access-wl47n\") pod \"keystone-operator-controller-manager-684f77d66d-mf7c8\" (UID: \"082164ca-006f-4217-a664-a81f24fb7f9c\") " pod="openstack-operators/keystone-operator-controller-manager-684f77d66d-mf7c8" Mar 10 19:07:15 crc kubenswrapper[4861]: I0310 19:07:15.440592 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4tsbt\" (UniqueName: \"kubernetes.io/projected/56a6874c-f929-4490-a247-52542d4aa8f1-kube-api-access-4tsbt\") pod \"heat-operator-controller-manager-77b6666d85-q76xg\" (UID: \"56a6874c-f929-4490-a247-52542d4aa8f1\") " pod="openstack-operators/heat-operator-controller-manager-77b6666d85-q76xg" Mar 10 19:07:15 crc kubenswrapper[4861]: I0310 19:07:15.440622 4861 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j7gwl\" (UniqueName: \"kubernetes.io/projected/acdc9485-305f-401f-91bb-749a9e1e3c89-kube-api-access-j7gwl\") pod \"infra-operator-controller-manager-5995f4446f-z66mq\" (UID: \"acdc9485-305f-401f-91bb-749a9e1e3c89\") " pod="openstack-operators/infra-operator-controller-manager-5995f4446f-z66mq" Mar 10 19:07:15 crc kubenswrapper[4861]: I0310 19:07:15.440642 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7rtkn\" (UniqueName: \"kubernetes.io/projected/94ef6e9b-1270-4992-9d58-902e82b52294-kube-api-access-7rtkn\") pod \"glance-operator-controller-manager-5964f64c48-9bhqz\" (UID: \"94ef6e9b-1270-4992-9d58-902e82b52294\") " pod="openstack-operators/glance-operator-controller-manager-5964f64c48-9bhqz" Mar 10 19:07:15 crc kubenswrapper[4861]: I0310 19:07:15.440664 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qv6lz\" (UniqueName: \"kubernetes.io/projected/ee7bce9d-5057-4eb7-af07-af66a5bc7473-kube-api-access-qv6lz\") pod \"manila-operator-controller-manager-68f45f9d9f-864bh\" (UID: \"ee7bce9d-5057-4eb7-af07-af66a5bc7473\") " pod="openstack-operators/manila-operator-controller-manager-68f45f9d9f-864bh" Mar 10 19:07:15 crc kubenswrapper[4861]: I0310 19:07:15.441376 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9kkjs\" (UniqueName: \"kubernetes.io/projected/ce725c26-b020-402f-ad0c-ae44f307e21e-kube-api-access-9kkjs\") pod \"horizon-operator-controller-manager-6d9d6b584d-265hs\" (UID: \"ce725c26-b020-402f-ad0c-ae44f307e21e\") " pod="openstack-operators/horizon-operator-controller-manager-6d9d6b584d-265hs" Mar 10 19:07:15 crc kubenswrapper[4861]: I0310 19:07:15.441431 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: 
\"kubernetes.io/secret/acdc9485-305f-401f-91bb-749a9e1e3c89-cert\") pod \"infra-operator-controller-manager-5995f4446f-z66mq\" (UID: \"acdc9485-305f-401f-91bb-749a9e1e3c89\") " pod="openstack-operators/infra-operator-controller-manager-5995f4446f-z66mq" Mar 10 19:07:15 crc kubenswrapper[4861]: E0310 19:07:15.441995 4861 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Mar 10 19:07:15 crc kubenswrapper[4861]: E0310 19:07:15.442042 4861 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/acdc9485-305f-401f-91bb-749a9e1e3c89-cert podName:acdc9485-305f-401f-91bb-749a9e1e3c89 nodeName:}" failed. No retries permitted until 2026-03-10 19:07:15.942026159 +0000 UTC m=+1179.705462109 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/acdc9485-305f-401f-91bb-749a9e1e3c89-cert") pod "infra-operator-controller-manager-5995f4446f-z66mq" (UID: "acdc9485-305f-401f-91bb-749a9e1e3c89") : secret "infra-operator-webhook-server-cert" not found Mar 10 19:07:15 crc kubenswrapper[4861]: I0310 19:07:15.441494 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zgjb9\" (UniqueName: \"kubernetes.io/projected/8b85720c-9adc-46f0-835b-3a1709df2126-kube-api-access-zgjb9\") pod \"octavia-operator-controller-manager-5f4f55cb5c-wvkvr\" (UID: \"8b85720c-9adc-46f0-835b-3a1709df2126\") " pod="openstack-operators/octavia-operator-controller-manager-5f4f55cb5c-wvkvr" Mar 10 19:07:15 crc kubenswrapper[4861]: I0310 19:07:15.450833 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/placement-operator-controller-manager-574d45c66c-sh4lp"] Mar 10 19:07:15 crc kubenswrapper[4861]: I0310 19:07:15.452583 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-574d45c66c-sh4lp" Mar 10 19:07:15 crc kubenswrapper[4861]: I0310 19:07:15.454697 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-ml2tx" Mar 10 19:07:15 crc kubenswrapper[4861]: I0310 19:07:15.455813 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ovn-operator-controller-manager-bbc5b68f9-82tsp"] Mar 10 19:07:15 crc kubenswrapper[4861]: I0310 19:07:15.464198 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j7gwl\" (UniqueName: \"kubernetes.io/projected/acdc9485-305f-401f-91bb-749a9e1e3c89-kube-api-access-j7gwl\") pod \"infra-operator-controller-manager-5995f4446f-z66mq\" (UID: \"acdc9485-305f-401f-91bb-749a9e1e3c89\") " pod="openstack-operators/infra-operator-controller-manager-5995f4446f-z66mq" Mar 10 19:07:15 crc kubenswrapper[4861]: I0310 19:07:15.464451 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9kkjs\" (UniqueName: \"kubernetes.io/projected/ce725c26-b020-402f-ad0c-ae44f307e21e-kube-api-access-9kkjs\") pod \"horizon-operator-controller-manager-6d9d6b584d-265hs\" (UID: \"ce725c26-b020-402f-ad0c-ae44f307e21e\") " pod="openstack-operators/horizon-operator-controller-manager-6d9d6b584d-265hs" Mar 10 19:07:15 crc kubenswrapper[4861]: I0310 19:07:15.464928 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-984cd4dcf-d7ph4" Mar 10 19:07:15 crc kubenswrapper[4861]: I0310 19:07:15.468228 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7rtkn\" (UniqueName: \"kubernetes.io/projected/94ef6e9b-1270-4992-9d58-902e82b52294-kube-api-access-7rtkn\") pod \"glance-operator-controller-manager-5964f64c48-9bhqz\" (UID: \"94ef6e9b-1270-4992-9d58-902e82b52294\") " pod="openstack-operators/glance-operator-controller-manager-5964f64c48-9bhqz" Mar 10 19:07:15 crc kubenswrapper[4861]: I0310 19:07:15.475449 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ksgc7\" (UniqueName: \"kubernetes.io/projected/16f833b1-3203-4532-a5f4-bc8769cd0932-kube-api-access-ksgc7\") pod \"ironic-operator-controller-manager-6bbb499bbc-bdscg\" (UID: \"16f833b1-3203-4532-a5f4-bc8769cd0932\") " pod="openstack-operators/ironic-operator-controller-manager-6bbb499bbc-bdscg" Mar 10 19:07:15 crc kubenswrapper[4861]: I0310 19:07:15.476364 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-574d45c66c-sh4lp"] Mar 10 19:07:15 crc kubenswrapper[4861]: I0310 19:07:15.476470 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-bbc5b68f9-82tsp" Mar 10 19:07:15 crc kubenswrapper[4861]: I0310 19:07:15.476483 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4tsbt\" (UniqueName: \"kubernetes.io/projected/56a6874c-f929-4490-a247-52542d4aa8f1-kube-api-access-4tsbt\") pod \"heat-operator-controller-manager-77b6666d85-q76xg\" (UID: \"56a6874c-f929-4490-a247-52542d4aa8f1\") " pod="openstack-operators/heat-operator-controller-manager-77b6666d85-q76xg" Mar 10 19:07:15 crc kubenswrapper[4861]: I0310 19:07:15.480511 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-t8cnn" Mar 10 19:07:15 crc kubenswrapper[4861]: I0310 19:07:15.482335 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-bbc5b68f9-82tsp"] Mar 10 19:07:15 crc kubenswrapper[4861]: I0310 19:07:15.483680 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-66d56f6ff4-4mh8f" Mar 10 19:07:15 crc kubenswrapper[4861]: I0310 19:07:15.497941 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-6647d7885fkj9zr"] Mar 10 19:07:15 crc kubenswrapper[4861]: I0310 19:07:15.502547 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-5964f64c48-9bhqz" Mar 10 19:07:15 crc kubenswrapper[4861]: I0310 19:07:15.515896 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-77b6666d85-q76xg" Mar 10 19:07:15 crc kubenswrapper[4861]: I0310 19:07:15.519172 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-677c674df7-dgkn6"] Mar 10 19:07:15 crc kubenswrapper[4861]: I0310 19:07:15.519955 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-677c674df7-dgkn6" Mar 10 19:07:15 crc kubenswrapper[4861]: I0310 19:07:15.523374 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-vpqm5" Mar 10 19:07:15 crc kubenswrapper[4861]: I0310 19:07:15.524638 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-677c674df7-dgkn6"] Mar 10 19:07:15 crc kubenswrapper[4861]: I0310 19:07:15.547566 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wl47n\" (UniqueName: \"kubernetes.io/projected/082164ca-006f-4217-a664-a81f24fb7f9c-kube-api-access-wl47n\") pod \"keystone-operator-controller-manager-684f77d66d-mf7c8\" (UID: \"082164ca-006f-4217-a664-a81f24fb7f9c\") " pod="openstack-operators/keystone-operator-controller-manager-684f77d66d-mf7c8" Mar 10 19:07:15 crc kubenswrapper[4861]: I0310 19:07:15.547608 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/1f2fa0f6-38b3-4764-ab09-dc4064f986fe-cert\") pod \"openstack-baremetal-operator-controller-manager-6647d7885fkj9zr\" (UID: \"1f2fa0f6-38b3-4764-ab09-dc4064f986fe\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6647d7885fkj9zr" Mar 10 19:07:15 crc kubenswrapper[4861]: I0310 19:07:15.547638 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"kube-api-access-fzbps\" (UniqueName: \"kubernetes.io/projected/e05b8481-3bf6-416e-a404-61ce227c2350-kube-api-access-fzbps\") pod \"mariadb-operator-controller-manager-658d4cdd5-qxj54\" (UID: \"e05b8481-3bf6-416e-a404-61ce227c2350\") " pod="openstack-operators/mariadb-operator-controller-manager-658d4cdd5-qxj54" Mar 10 19:07:15 crc kubenswrapper[4861]: I0310 19:07:15.547663 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qv6lz\" (UniqueName: \"kubernetes.io/projected/ee7bce9d-5057-4eb7-af07-af66a5bc7473-kube-api-access-qv6lz\") pod \"manila-operator-controller-manager-68f45f9d9f-864bh\" (UID: \"ee7bce9d-5057-4eb7-af07-af66a5bc7473\") " pod="openstack-operators/manila-operator-controller-manager-68f45f9d9f-864bh" Mar 10 19:07:15 crc kubenswrapper[4861]: I0310 19:07:15.547688 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4jjjm\" (UniqueName: \"kubernetes.io/projected/b4bddf8d-6696-491b-9d79-e057d2d18c14-kube-api-access-4jjjm\") pod \"neutron-operator-controller-manager-776c5696bf-zds26\" (UID: \"b4bddf8d-6696-491b-9d79-e057d2d18c14\") " pod="openstack-operators/neutron-operator-controller-manager-776c5696bf-zds26" Mar 10 19:07:15 crc kubenswrapper[4861]: I0310 19:07:15.547947 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4vvl9\" (UniqueName: \"kubernetes.io/projected/9f600d22-c94c-4d46-b024-d4674aca2d8d-kube-api-access-4vvl9\") pod \"nova-operator-controller-manager-569cc54c5-qvqhp\" (UID: \"9f600d22-c94c-4d46-b024-d4674aca2d8d\") " pod="openstack-operators/nova-operator-controller-manager-569cc54c5-qvqhp" Mar 10 19:07:15 crc kubenswrapper[4861]: I0310 19:07:15.548014 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vwb59\" (UniqueName: 
\"kubernetes.io/projected/1f2fa0f6-38b3-4764-ab09-dc4064f986fe-kube-api-access-vwb59\") pod \"openstack-baremetal-operator-controller-manager-6647d7885fkj9zr\" (UID: \"1f2fa0f6-38b3-4764-ab09-dc4064f986fe\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6647d7885fkj9zr" Mar 10 19:07:15 crc kubenswrapper[4861]: I0310 19:07:15.548048 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zgjb9\" (UniqueName: \"kubernetes.io/projected/8b85720c-9adc-46f0-835b-3a1709df2126-kube-api-access-zgjb9\") pod \"octavia-operator-controller-manager-5f4f55cb5c-wvkvr\" (UID: \"8b85720c-9adc-46f0-835b-3a1709df2126\") " pod="openstack-operators/octavia-operator-controller-manager-5f4f55cb5c-wvkvr" Mar 10 19:07:15 crc kubenswrapper[4861]: I0310 19:07:15.551277 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-6d9d6b584d-265hs" Mar 10 19:07:15 crc kubenswrapper[4861]: I0310 19:07:15.566817 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-6cd66dbd4b-5fzrs"] Mar 10 19:07:15 crc kubenswrapper[4861]: I0310 19:07:15.568400 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-6cd66dbd4b-5fzrs" Mar 10 19:07:15 crc kubenswrapper[4861]: I0310 19:07:15.569629 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-6bbb499bbc-bdscg" Mar 10 19:07:15 crc kubenswrapper[4861]: I0310 19:07:15.572526 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-dcls4" Mar 10 19:07:15 crc kubenswrapper[4861]: I0310 19:07:15.574011 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zgjb9\" (UniqueName: \"kubernetes.io/projected/8b85720c-9adc-46f0-835b-3a1709df2126-kube-api-access-zgjb9\") pod \"octavia-operator-controller-manager-5f4f55cb5c-wvkvr\" (UID: \"8b85720c-9adc-46f0-835b-3a1709df2126\") " pod="openstack-operators/octavia-operator-controller-manager-5f4f55cb5c-wvkvr" Mar 10 19:07:15 crc kubenswrapper[4861]: I0310 19:07:15.574537 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wl47n\" (UniqueName: \"kubernetes.io/projected/082164ca-006f-4217-a664-a81f24fb7f9c-kube-api-access-wl47n\") pod \"keystone-operator-controller-manager-684f77d66d-mf7c8\" (UID: \"082164ca-006f-4217-a664-a81f24fb7f9c\") " pod="openstack-operators/keystone-operator-controller-manager-684f77d66d-mf7c8" Mar 10 19:07:15 crc kubenswrapper[4861]: I0310 19:07:15.576061 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qv6lz\" (UniqueName: \"kubernetes.io/projected/ee7bce9d-5057-4eb7-af07-af66a5bc7473-kube-api-access-qv6lz\") pod \"manila-operator-controller-manager-68f45f9d9f-864bh\" (UID: \"ee7bce9d-5057-4eb7-af07-af66a5bc7473\") " pod="openstack-operators/manila-operator-controller-manager-68f45f9d9f-864bh" Mar 10 19:07:15 crc kubenswrapper[4861]: I0310 19:07:15.579900 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-6cd66dbd4b-5fzrs"] Mar 10 19:07:15 crc kubenswrapper[4861]: I0310 19:07:15.629305 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-684f77d66d-mf7c8" Mar 10 19:07:15 crc kubenswrapper[4861]: I0310 19:07:15.653190 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-68f45f9d9f-864bh" Mar 10 19:07:15 crc kubenswrapper[4861]: I0310 19:07:15.656897 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/1f2fa0f6-38b3-4764-ab09-dc4064f986fe-cert\") pod \"openstack-baremetal-operator-controller-manager-6647d7885fkj9zr\" (UID: \"1f2fa0f6-38b3-4764-ab09-dc4064f986fe\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6647d7885fkj9zr" Mar 10 19:07:15 crc kubenswrapper[4861]: I0310 19:07:15.656936 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fzbps\" (UniqueName: \"kubernetes.io/projected/e05b8481-3bf6-416e-a404-61ce227c2350-kube-api-access-fzbps\") pod \"mariadb-operator-controller-manager-658d4cdd5-qxj54\" (UID: \"e05b8481-3bf6-416e-a404-61ce227c2350\") " pod="openstack-operators/mariadb-operator-controller-manager-658d4cdd5-qxj54" Mar 10 19:07:15 crc kubenswrapper[4861]: I0310 19:07:15.656969 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ln849\" (UniqueName: \"kubernetes.io/projected/82bb6884-6681-4382-b784-feadaf2a891f-kube-api-access-ln849\") pod \"swift-operator-controller-manager-677c674df7-dgkn6\" (UID: \"82bb6884-6681-4382-b784-feadaf2a891f\") " pod="openstack-operators/swift-operator-controller-manager-677c674df7-dgkn6" Mar 10 19:07:15 crc kubenswrapper[4861]: I0310 19:07:15.656986 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4jjjm\" (UniqueName: \"kubernetes.io/projected/b4bddf8d-6696-491b-9d79-e057d2d18c14-kube-api-access-4jjjm\") pod 
\"neutron-operator-controller-manager-776c5696bf-zds26\" (UID: \"b4bddf8d-6696-491b-9d79-e057d2d18c14\") " pod="openstack-operators/neutron-operator-controller-manager-776c5696bf-zds26" Mar 10 19:07:15 crc kubenswrapper[4861]: I0310 19:07:15.657005 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4vvl9\" (UniqueName: \"kubernetes.io/projected/9f600d22-c94c-4d46-b024-d4674aca2d8d-kube-api-access-4vvl9\") pod \"nova-operator-controller-manager-569cc54c5-qvqhp\" (UID: \"9f600d22-c94c-4d46-b024-d4674aca2d8d\") " pod="openstack-operators/nova-operator-controller-manager-569cc54c5-qvqhp" Mar 10 19:07:15 crc kubenswrapper[4861]: I0310 19:07:15.657042 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vwb59\" (UniqueName: \"kubernetes.io/projected/1f2fa0f6-38b3-4764-ab09-dc4064f986fe-kube-api-access-vwb59\") pod \"openstack-baremetal-operator-controller-manager-6647d7885fkj9zr\" (UID: \"1f2fa0f6-38b3-4764-ab09-dc4064f986fe\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6647d7885fkj9zr" Mar 10 19:07:15 crc kubenswrapper[4861]: I0310 19:07:15.657061 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mbsrx\" (UniqueName: \"kubernetes.io/projected/b2313526-2f50-43bc-b6cd-45da343f91ae-kube-api-access-mbsrx\") pod \"ovn-operator-controller-manager-bbc5b68f9-82tsp\" (UID: \"b2313526-2f50-43bc-b6cd-45da343f91ae\") " pod="openstack-operators/ovn-operator-controller-manager-bbc5b68f9-82tsp" Mar 10 19:07:15 crc kubenswrapper[4861]: I0310 19:07:15.657082 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9q8xt\" (UniqueName: \"kubernetes.io/projected/425d8351-8597-4696-99f6-8ce744082d84-kube-api-access-9q8xt\") pod \"telemetry-operator-controller-manager-6cd66dbd4b-5fzrs\" (UID: \"425d8351-8597-4696-99f6-8ce744082d84\") " 
pod="openstack-operators/telemetry-operator-controller-manager-6cd66dbd4b-5fzrs" Mar 10 19:07:15 crc kubenswrapper[4861]: I0310 19:07:15.657105 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2xxb4\" (UniqueName: \"kubernetes.io/projected/a55e1a2b-1e10-41ee-b376-c879ec07d2af-kube-api-access-2xxb4\") pod \"placement-operator-controller-manager-574d45c66c-sh4lp\" (UID: \"a55e1a2b-1e10-41ee-b376-c879ec07d2af\") " pod="openstack-operators/placement-operator-controller-manager-574d45c66c-sh4lp" Mar 10 19:07:15 crc kubenswrapper[4861]: E0310 19:07:15.657210 4861 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Mar 10 19:07:15 crc kubenswrapper[4861]: E0310 19:07:15.657244 4861 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1f2fa0f6-38b3-4764-ab09-dc4064f986fe-cert podName:1f2fa0f6-38b3-4764-ab09-dc4064f986fe nodeName:}" failed. No retries permitted until 2026-03-10 19:07:16.157230982 +0000 UTC m=+1179.920666942 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/1f2fa0f6-38b3-4764-ab09-dc4064f986fe-cert") pod "openstack-baremetal-operator-controller-manager-6647d7885fkj9zr" (UID: "1f2fa0f6-38b3-4764-ab09-dc4064f986fe") : secret "openstack-baremetal-operator-webhook-server-cert" not found Mar 10 19:07:15 crc kubenswrapper[4861]: I0310 19:07:15.690921 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-5c5cb9c4d7-kttwb"] Mar 10 19:07:15 crc kubenswrapper[4861]: I0310 19:07:15.691766 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/test-operator-controller-manager-5c5cb9c4d7-kttwb" Mar 10 19:07:15 crc kubenswrapper[4861]: I0310 19:07:15.697092 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"test-operator-controller-manager-dockercfg-jqfn7" Mar 10 19:07:15 crc kubenswrapper[4861]: I0310 19:07:15.699426 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4vvl9\" (UniqueName: \"kubernetes.io/projected/9f600d22-c94c-4d46-b024-d4674aca2d8d-kube-api-access-4vvl9\") pod \"nova-operator-controller-manager-569cc54c5-qvqhp\" (UID: \"9f600d22-c94c-4d46-b024-d4674aca2d8d\") " pod="openstack-operators/nova-operator-controller-manager-569cc54c5-qvqhp" Mar 10 19:07:15 crc kubenswrapper[4861]: I0310 19:07:15.703304 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vwb59\" (UniqueName: \"kubernetes.io/projected/1f2fa0f6-38b3-4764-ab09-dc4064f986fe-kube-api-access-vwb59\") pod \"openstack-baremetal-operator-controller-manager-6647d7885fkj9zr\" (UID: \"1f2fa0f6-38b3-4764-ab09-dc4064f986fe\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6647d7885fkj9zr" Mar 10 19:07:15 crc kubenswrapper[4861]: I0310 19:07:15.706296 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fzbps\" (UniqueName: \"kubernetes.io/projected/e05b8481-3bf6-416e-a404-61ce227c2350-kube-api-access-fzbps\") pod \"mariadb-operator-controller-manager-658d4cdd5-qxj54\" (UID: \"e05b8481-3bf6-416e-a404-61ce227c2350\") " pod="openstack-operators/mariadb-operator-controller-manager-658d4cdd5-qxj54" Mar 10 19:07:15 crc kubenswrapper[4861]: I0310 19:07:15.706400 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-5c5cb9c4d7-kttwb"] Mar 10 19:07:15 crc kubenswrapper[4861]: I0310 19:07:15.712371 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"kube-api-access-4jjjm\" (UniqueName: \"kubernetes.io/projected/b4bddf8d-6696-491b-9d79-e057d2d18c14-kube-api-access-4jjjm\") pod \"neutron-operator-controller-manager-776c5696bf-zds26\" (UID: \"b4bddf8d-6696-491b-9d79-e057d2d18c14\") " pod="openstack-operators/neutron-operator-controller-manager-776c5696bf-zds26" Mar 10 19:07:15 crc kubenswrapper[4861]: I0310 19:07:15.732050 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-776c5696bf-zds26" Mar 10 19:07:15 crc kubenswrapper[4861]: I0310 19:07:15.778926 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ln849\" (UniqueName: \"kubernetes.io/projected/82bb6884-6681-4382-b784-feadaf2a891f-kube-api-access-ln849\") pod \"swift-operator-controller-manager-677c674df7-dgkn6\" (UID: \"82bb6884-6681-4382-b784-feadaf2a891f\") " pod="openstack-operators/swift-operator-controller-manager-677c674df7-dgkn6" Mar 10 19:07:15 crc kubenswrapper[4861]: I0310 19:07:15.779898 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-658d4cdd5-qxj54" Mar 10 19:07:15 crc kubenswrapper[4861]: I0310 19:07:15.780015 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mbsrx\" (UniqueName: \"kubernetes.io/projected/b2313526-2f50-43bc-b6cd-45da343f91ae-kube-api-access-mbsrx\") pod \"ovn-operator-controller-manager-bbc5b68f9-82tsp\" (UID: \"b2313526-2f50-43bc-b6cd-45da343f91ae\") " pod="openstack-operators/ovn-operator-controller-manager-bbc5b68f9-82tsp" Mar 10 19:07:15 crc kubenswrapper[4861]: I0310 19:07:15.780321 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9q8xt\" (UniqueName: \"kubernetes.io/projected/425d8351-8597-4696-99f6-8ce744082d84-kube-api-access-9q8xt\") pod \"telemetry-operator-controller-manager-6cd66dbd4b-5fzrs\" (UID: \"425d8351-8597-4696-99f6-8ce744082d84\") " pod="openstack-operators/telemetry-operator-controller-manager-6cd66dbd4b-5fzrs" Mar 10 19:07:15 crc kubenswrapper[4861]: I0310 19:07:15.780355 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2xxb4\" (UniqueName: \"kubernetes.io/projected/a55e1a2b-1e10-41ee-b376-c879ec07d2af-kube-api-access-2xxb4\") pod \"placement-operator-controller-manager-574d45c66c-sh4lp\" (UID: \"a55e1a2b-1e10-41ee-b376-c879ec07d2af\") " pod="openstack-operators/placement-operator-controller-manager-574d45c66c-sh4lp" Mar 10 19:07:15 crc kubenswrapper[4861]: I0310 19:07:15.800974 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-5f4f55cb5c-wvkvr" Mar 10 19:07:15 crc kubenswrapper[4861]: I0310 19:07:15.805812 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9q8xt\" (UniqueName: \"kubernetes.io/projected/425d8351-8597-4696-99f6-8ce744082d84-kube-api-access-9q8xt\") pod \"telemetry-operator-controller-manager-6cd66dbd4b-5fzrs\" (UID: \"425d8351-8597-4696-99f6-8ce744082d84\") " pod="openstack-operators/telemetry-operator-controller-manager-6cd66dbd4b-5fzrs" Mar 10 19:07:15 crc kubenswrapper[4861]: I0310 19:07:15.806184 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-569cc54c5-qvqhp" Mar 10 19:07:15 crc kubenswrapper[4861]: I0310 19:07:15.808632 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ln849\" (UniqueName: \"kubernetes.io/projected/82bb6884-6681-4382-b784-feadaf2a891f-kube-api-access-ln849\") pod \"swift-operator-controller-manager-677c674df7-dgkn6\" (UID: \"82bb6884-6681-4382-b784-feadaf2a891f\") " pod="openstack-operators/swift-operator-controller-manager-677c674df7-dgkn6" Mar 10 19:07:15 crc kubenswrapper[4861]: I0310 19:07:15.814545 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-6dd88c6f67-72px6"] Mar 10 19:07:15 crc kubenswrapper[4861]: I0310 19:07:15.815515 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-6dd88c6f67-72px6" Mar 10 19:07:15 crc kubenswrapper[4861]: I0310 19:07:15.826435 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2xxb4\" (UniqueName: \"kubernetes.io/projected/a55e1a2b-1e10-41ee-b376-c879ec07d2af-kube-api-access-2xxb4\") pod \"placement-operator-controller-manager-574d45c66c-sh4lp\" (UID: \"a55e1a2b-1e10-41ee-b376-c879ec07d2af\") " pod="openstack-operators/placement-operator-controller-manager-574d45c66c-sh4lp" Mar 10 19:07:15 crc kubenswrapper[4861]: I0310 19:07:15.826541 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-cfjcj" Mar 10 19:07:15 crc kubenswrapper[4861]: I0310 19:07:15.853301 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mbsrx\" (UniqueName: \"kubernetes.io/projected/b2313526-2f50-43bc-b6cd-45da343f91ae-kube-api-access-mbsrx\") pod \"ovn-operator-controller-manager-bbc5b68f9-82tsp\" (UID: \"b2313526-2f50-43bc-b6cd-45da343f91ae\") " pod="openstack-operators/ovn-operator-controller-manager-bbc5b68f9-82tsp" Mar 10 19:07:15 crc kubenswrapper[4861]: I0310 19:07:15.856830 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-6dd88c6f67-72px6"] Mar 10 19:07:15 crc kubenswrapper[4861]: I0310 19:07:15.857140 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-677c674df7-dgkn6" Mar 10 19:07:15 crc kubenswrapper[4861]: I0310 19:07:15.880910 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f6r4r\" (UniqueName: \"kubernetes.io/projected/aa17b421-616d-4842-aefd-f21a47952272-kube-api-access-f6r4r\") pod \"test-operator-controller-manager-5c5cb9c4d7-kttwb\" (UID: \"aa17b421-616d-4842-aefd-f21a47952272\") " pod="openstack-operators/test-operator-controller-manager-5c5cb9c4d7-kttwb" Mar 10 19:07:15 crc kubenswrapper[4861]: I0310 19:07:15.880974 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mqvqr\" (UniqueName: \"kubernetes.io/projected/3e4cfd72-8809-4af1-90da-66587816e46e-kube-api-access-mqvqr\") pod \"watcher-operator-controller-manager-6dd88c6f67-72px6\" (UID: \"3e4cfd72-8809-4af1-90da-66587816e46e\") " pod="openstack-operators/watcher-operator-controller-manager-6dd88c6f67-72px6" Mar 10 19:07:15 crc kubenswrapper[4861]: I0310 19:07:15.886726 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-6cd66dbd4b-5fzrs" Mar 10 19:07:15 crc kubenswrapper[4861]: I0310 19:07:15.904415 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-manager-6679ddfdc7-5sqmh"] Mar 10 19:07:15 crc kubenswrapper[4861]: I0310 19:07:15.905243 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-6679ddfdc7-5sqmh" Mar 10 19:07:15 crc kubenswrapper[4861]: I0310 19:07:15.915089 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-6679ddfdc7-5sqmh"] Mar 10 19:07:15 crc kubenswrapper[4861]: I0310 19:07:15.926131 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"metrics-server-cert" Mar 10 19:07:15 crc kubenswrapper[4861]: I0310 19:07:15.926321 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-wtntn" Mar 10 19:07:15 crc kubenswrapper[4861]: I0310 19:07:15.926726 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert" Mar 10 19:07:15 crc kubenswrapper[4861]: I0310 19:07:15.976303 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-677bd678f7-rzrtw"] Mar 10 19:07:15 crc kubenswrapper[4861]: I0310 19:07:15.981563 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mqvqr\" (UniqueName: \"kubernetes.io/projected/3e4cfd72-8809-4af1-90da-66587816e46e-kube-api-access-mqvqr\") pod \"watcher-operator-controller-manager-6dd88c6f67-72px6\" (UID: \"3e4cfd72-8809-4af1-90da-66587816e46e\") " pod="openstack-operators/watcher-operator-controller-manager-6dd88c6f67-72px6" Mar 10 19:07:15 crc kubenswrapper[4861]: I0310 19:07:15.981594 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/33e7a010-7cd8-4eef-b19f-30ea72eb0a03-metrics-certs\") pod \"openstack-operator-controller-manager-6679ddfdc7-5sqmh\" (UID: \"33e7a010-7cd8-4eef-b19f-30ea72eb0a03\") " pod="openstack-operators/openstack-operator-controller-manager-6679ddfdc7-5sqmh" Mar 10 
19:07:15 crc kubenswrapper[4861]: I0310 19:07:15.981612 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/33e7a010-7cd8-4eef-b19f-30ea72eb0a03-webhook-certs\") pod \"openstack-operator-controller-manager-6679ddfdc7-5sqmh\" (UID: \"33e7a010-7cd8-4eef-b19f-30ea72eb0a03\") " pod="openstack-operators/openstack-operator-controller-manager-6679ddfdc7-5sqmh" Mar 10 19:07:15 crc kubenswrapper[4861]: I0310 19:07:15.981698 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/acdc9485-305f-401f-91bb-749a9e1e3c89-cert\") pod \"infra-operator-controller-manager-5995f4446f-z66mq\" (UID: \"acdc9485-305f-401f-91bb-749a9e1e3c89\") " pod="openstack-operators/infra-operator-controller-manager-5995f4446f-z66mq" Mar 10 19:07:15 crc kubenswrapper[4861]: I0310 19:07:15.981738 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f6r4r\" (UniqueName: \"kubernetes.io/projected/aa17b421-616d-4842-aefd-f21a47952272-kube-api-access-f6r4r\") pod \"test-operator-controller-manager-5c5cb9c4d7-kttwb\" (UID: \"aa17b421-616d-4842-aefd-f21a47952272\") " pod="openstack-operators/test-operator-controller-manager-5c5cb9c4d7-kttwb" Mar 10 19:07:15 crc kubenswrapper[4861]: I0310 19:07:15.981787 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bczlq\" (UniqueName: \"kubernetes.io/projected/33e7a010-7cd8-4eef-b19f-30ea72eb0a03-kube-api-access-bczlq\") pod \"openstack-operator-controller-manager-6679ddfdc7-5sqmh\" (UID: \"33e7a010-7cd8-4eef-b19f-30ea72eb0a03\") " pod="openstack-operators/openstack-operator-controller-manager-6679ddfdc7-5sqmh" Mar 10 19:07:15 crc kubenswrapper[4861]: E0310 19:07:15.983592 4861 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret 
"infra-operator-webhook-server-cert" not found Mar 10 19:07:15 crc kubenswrapper[4861]: E0310 19:07:15.983633 4861 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/acdc9485-305f-401f-91bb-749a9e1e3c89-cert podName:acdc9485-305f-401f-91bb-749a9e1e3c89 nodeName:}" failed. No retries permitted until 2026-03-10 19:07:16.983620658 +0000 UTC m=+1180.747056618 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/acdc9485-305f-401f-91bb-749a9e1e3c89-cert") pod "infra-operator-controller-manager-5995f4446f-z66mq" (UID: "acdc9485-305f-401f-91bb-749a9e1e3c89") : secret "infra-operator-webhook-server-cert" not found Mar 10 19:07:15 crc kubenswrapper[4861]: I0310 19:07:15.987162 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-9bfll"] Mar 10 19:07:15 crc kubenswrapper[4861]: I0310 19:07:15.988096 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-9bfll" Mar 10 19:07:15 crc kubenswrapper[4861]: I0310 19:07:15.994238 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-9bfll"] Mar 10 19:07:15 crc kubenswrapper[4861]: I0310 19:07:15.999792 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-pxvf8" Mar 10 19:07:16 crc kubenswrapper[4861]: I0310 19:07:16.013762 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f6r4r\" (UniqueName: \"kubernetes.io/projected/aa17b421-616d-4842-aefd-f21a47952272-kube-api-access-f6r4r\") pod \"test-operator-controller-manager-5c5cb9c4d7-kttwb\" (UID: \"aa17b421-616d-4842-aefd-f21a47952272\") " pod="openstack-operators/test-operator-controller-manager-5c5cb9c4d7-kttwb" Mar 10 19:07:16 crc kubenswrapper[4861]: I0310 19:07:16.034912 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mqvqr\" (UniqueName: \"kubernetes.io/projected/3e4cfd72-8809-4af1-90da-66587816e46e-kube-api-access-mqvqr\") pod \"watcher-operator-controller-manager-6dd88c6f67-72px6\" (UID: \"3e4cfd72-8809-4af1-90da-66587816e46e\") " pod="openstack-operators/watcher-operator-controller-manager-6dd88c6f67-72px6" Mar 10 19:07:16 crc kubenswrapper[4861]: I0310 19:07:16.048129 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/test-operator-controller-manager-5c5cb9c4d7-kttwb" Mar 10 19:07:16 crc kubenswrapper[4861]: I0310 19:07:16.086209 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bczlq\" (UniqueName: \"kubernetes.io/projected/33e7a010-7cd8-4eef-b19f-30ea72eb0a03-kube-api-access-bczlq\") pod \"openstack-operator-controller-manager-6679ddfdc7-5sqmh\" (UID: \"33e7a010-7cd8-4eef-b19f-30ea72eb0a03\") " pod="openstack-operators/openstack-operator-controller-manager-6679ddfdc7-5sqmh" Mar 10 19:07:16 crc kubenswrapper[4861]: I0310 19:07:16.086482 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/33e7a010-7cd8-4eef-b19f-30ea72eb0a03-metrics-certs\") pod \"openstack-operator-controller-manager-6679ddfdc7-5sqmh\" (UID: \"33e7a010-7cd8-4eef-b19f-30ea72eb0a03\") " pod="openstack-operators/openstack-operator-controller-manager-6679ddfdc7-5sqmh" Mar 10 19:07:16 crc kubenswrapper[4861]: I0310 19:07:16.086517 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/33e7a010-7cd8-4eef-b19f-30ea72eb0a03-webhook-certs\") pod \"openstack-operator-controller-manager-6679ddfdc7-5sqmh\" (UID: \"33e7a010-7cd8-4eef-b19f-30ea72eb0a03\") " pod="openstack-operators/openstack-operator-controller-manager-6679ddfdc7-5sqmh" Mar 10 19:07:16 crc kubenswrapper[4861]: E0310 19:07:16.086676 4861 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Mar 10 19:07:16 crc kubenswrapper[4861]: E0310 19:07:16.086743 4861 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/33e7a010-7cd8-4eef-b19f-30ea72eb0a03-webhook-certs podName:33e7a010-7cd8-4eef-b19f-30ea72eb0a03 nodeName:}" failed. 
No retries permitted until 2026-03-10 19:07:16.586728283 +0000 UTC m=+1180.350164233 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/33e7a010-7cd8-4eef-b19f-30ea72eb0a03-webhook-certs") pod "openstack-operator-controller-manager-6679ddfdc7-5sqmh" (UID: "33e7a010-7cd8-4eef-b19f-30ea72eb0a03") : secret "webhook-server-cert" not found Mar 10 19:07:16 crc kubenswrapper[4861]: E0310 19:07:16.087465 4861 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Mar 10 19:07:16 crc kubenswrapper[4861]: E0310 19:07:16.087494 4861 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/33e7a010-7cd8-4eef-b19f-30ea72eb0a03-metrics-certs podName:33e7a010-7cd8-4eef-b19f-30ea72eb0a03 nodeName:}" failed. No retries permitted until 2026-03-10 19:07:16.587485954 +0000 UTC m=+1180.350921914 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/33e7a010-7cd8-4eef-b19f-30ea72eb0a03-metrics-certs") pod "openstack-operator-controller-manager-6679ddfdc7-5sqmh" (UID: "33e7a010-7cd8-4eef-b19f-30ea72eb0a03") : secret "metrics-server-cert" not found Mar 10 19:07:16 crc kubenswrapper[4861]: I0310 19:07:16.115552 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bczlq\" (UniqueName: \"kubernetes.io/projected/33e7a010-7cd8-4eef-b19f-30ea72eb0a03-kube-api-access-bczlq\") pod \"openstack-operator-controller-manager-6679ddfdc7-5sqmh\" (UID: \"33e7a010-7cd8-4eef-b19f-30ea72eb0a03\") " pod="openstack-operators/openstack-operator-controller-manager-6679ddfdc7-5sqmh" Mar 10 19:07:16 crc kubenswrapper[4861]: I0310 19:07:16.118239 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-574d45c66c-sh4lp" Mar 10 19:07:16 crc kubenswrapper[4861]: I0310 19:07:16.130340 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-bbc5b68f9-82tsp" Mar 10 19:07:16 crc kubenswrapper[4861]: I0310 19:07:16.167625 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-984cd4dcf-d7ph4"] Mar 10 19:07:16 crc kubenswrapper[4861]: I0310 19:07:16.180409 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-6dd88c6f67-72px6" Mar 10 19:07:16 crc kubenswrapper[4861]: I0310 19:07:16.188091 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n4jcd\" (UniqueName: \"kubernetes.io/projected/19b27de0-6f4a-4017-aec7-eea80076189c-kube-api-access-n4jcd\") pod \"rabbitmq-cluster-operator-manager-668c99d594-9bfll\" (UID: \"19b27de0-6f4a-4017-aec7-eea80076189c\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-9bfll" Mar 10 19:07:16 crc kubenswrapper[4861]: I0310 19:07:16.188166 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/1f2fa0f6-38b3-4764-ab09-dc4064f986fe-cert\") pod \"openstack-baremetal-operator-controller-manager-6647d7885fkj9zr\" (UID: \"1f2fa0f6-38b3-4764-ab09-dc4064f986fe\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6647d7885fkj9zr" Mar 10 19:07:16 crc kubenswrapper[4861]: E0310 19:07:16.188312 4861 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Mar 10 19:07:16 crc kubenswrapper[4861]: E0310 19:07:16.188361 4861 nestedpendingoperations.go:348] 
Operation for "{volumeName:kubernetes.io/secret/1f2fa0f6-38b3-4764-ab09-dc4064f986fe-cert podName:1f2fa0f6-38b3-4764-ab09-dc4064f986fe nodeName:}" failed. No retries permitted until 2026-03-10 19:07:17.188348125 +0000 UTC m=+1180.951784085 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/1f2fa0f6-38b3-4764-ab09-dc4064f986fe-cert") pod "openstack-baremetal-operator-controller-manager-6647d7885fkj9zr" (UID: "1f2fa0f6-38b3-4764-ab09-dc4064f986fe") : secret "openstack-baremetal-operator-webhook-server-cert" not found Mar 10 19:07:16 crc kubenswrapper[4861]: I0310 19:07:16.291106 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n4jcd\" (UniqueName: \"kubernetes.io/projected/19b27de0-6f4a-4017-aec7-eea80076189c-kube-api-access-n4jcd\") pod \"rabbitmq-cluster-operator-manager-668c99d594-9bfll\" (UID: \"19b27de0-6f4a-4017-aec7-eea80076189c\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-9bfll" Mar 10 19:07:16 crc kubenswrapper[4861]: I0310 19:07:16.309322 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-66d56f6ff4-4mh8f"] Mar 10 19:07:16 crc kubenswrapper[4861]: I0310 19:07:16.325083 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n4jcd\" (UniqueName: \"kubernetes.io/projected/19b27de0-6f4a-4017-aec7-eea80076189c-kube-api-access-n4jcd\") pod \"rabbitmq-cluster-operator-manager-668c99d594-9bfll\" (UID: \"19b27de0-6f4a-4017-aec7-eea80076189c\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-9bfll" Mar 10 19:07:16 crc kubenswrapper[4861]: I0310 19:07:16.372668 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-984cd4dcf-d7ph4" 
event={"ID":"239d1170-1e62-44e7-a07d-10ca9adfa28e","Type":"ContainerStarted","Data":"f2b6f42ca0b48dbf6a23d29fc8ac97d49c904359c8b00146642da50a1ab50553"} Mar 10 19:07:16 crc kubenswrapper[4861]: I0310 19:07:16.385991 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-677bd678f7-rzrtw" event={"ID":"54044555-52fe-44c6-9e47-7f2748a8b114","Type":"ContainerStarted","Data":"9997377528f1e343c86d9a94ae67184ce34f638edb781af30ee391742ff9dd37"} Mar 10 19:07:16 crc kubenswrapper[4861]: I0310 19:07:16.595064 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/33e7a010-7cd8-4eef-b19f-30ea72eb0a03-metrics-certs\") pod \"openstack-operator-controller-manager-6679ddfdc7-5sqmh\" (UID: \"33e7a010-7cd8-4eef-b19f-30ea72eb0a03\") " pod="openstack-operators/openstack-operator-controller-manager-6679ddfdc7-5sqmh" Mar 10 19:07:16 crc kubenswrapper[4861]: I0310 19:07:16.595329 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/33e7a010-7cd8-4eef-b19f-30ea72eb0a03-webhook-certs\") pod \"openstack-operator-controller-manager-6679ddfdc7-5sqmh\" (UID: \"33e7a010-7cd8-4eef-b19f-30ea72eb0a03\") " pod="openstack-operators/openstack-operator-controller-manager-6679ddfdc7-5sqmh" Mar 10 19:07:16 crc kubenswrapper[4861]: E0310 19:07:16.595231 4861 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Mar 10 19:07:16 crc kubenswrapper[4861]: E0310 19:07:16.595405 4861 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/33e7a010-7cd8-4eef-b19f-30ea72eb0a03-metrics-certs podName:33e7a010-7cd8-4eef-b19f-30ea72eb0a03 nodeName:}" failed. No retries permitted until 2026-03-10 19:07:17.595386252 +0000 UTC m=+1181.358822202 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/33e7a010-7cd8-4eef-b19f-30ea72eb0a03-metrics-certs") pod "openstack-operator-controller-manager-6679ddfdc7-5sqmh" (UID: "33e7a010-7cd8-4eef-b19f-30ea72eb0a03") : secret "metrics-server-cert" not found Mar 10 19:07:16 crc kubenswrapper[4861]: E0310 19:07:16.595524 4861 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Mar 10 19:07:16 crc kubenswrapper[4861]: E0310 19:07:16.595609 4861 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/33e7a010-7cd8-4eef-b19f-30ea72eb0a03-webhook-certs podName:33e7a010-7cd8-4eef-b19f-30ea72eb0a03 nodeName:}" failed. No retries permitted until 2026-03-10 19:07:17.595590657 +0000 UTC m=+1181.359026617 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/33e7a010-7cd8-4eef-b19f-30ea72eb0a03-webhook-certs") pod "openstack-operator-controller-manager-6679ddfdc7-5sqmh" (UID: "33e7a010-7cd8-4eef-b19f-30ea72eb0a03") : secret "webhook-server-cert" not found Mar 10 19:07:16 crc kubenswrapper[4861]: I0310 19:07:16.610487 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-9bfll" Mar 10 19:07:16 crc kubenswrapper[4861]: I0310 19:07:16.660417 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-5964f64c48-9bhqz"] Mar 10 19:07:16 crc kubenswrapper[4861]: I0310 19:07:16.671405 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-77b6666d85-q76xg"] Mar 10 19:07:16 crc kubenswrapper[4861]: I0310 19:07:16.675152 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-68f45f9d9f-864bh"] Mar 10 19:07:16 crc kubenswrapper[4861]: I0310 19:07:16.700470 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-6bbb499bbc-bdscg"] Mar 10 19:07:16 crc kubenswrapper[4861]: I0310 19:07:16.710480 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-5f4f55cb5c-wvkvr"] Mar 10 19:07:16 crc kubenswrapper[4861]: I0310 19:07:16.715550 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-6d9d6b584d-265hs"] Mar 10 19:07:16 crc kubenswrapper[4861]: I0310 19:07:16.721250 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-684f77d66d-mf7c8"] Mar 10 19:07:16 crc kubenswrapper[4861]: I0310 19:07:16.793877 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-658d4cdd5-qxj54"] Mar 10 19:07:16 crc kubenswrapper[4861]: W0310 19:07:16.809975 4861 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode05b8481_3bf6_416e_a404_61ce227c2350.slice/crio-b3ce02e4f69f3cd766f928a4a567c7610d101ca479fd9fd3c4fc9c7bd98f327a 
WatchSource:0}: Error finding container b3ce02e4f69f3cd766f928a4a567c7610d101ca479fd9fd3c4fc9c7bd98f327a: Status 404 returned error can't find the container with id b3ce02e4f69f3cd766f928a4a567c7610d101ca479fd9fd3c4fc9c7bd98f327a Mar 10 19:07:16 crc kubenswrapper[4861]: I0310 19:07:16.816273 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-569cc54c5-qvqhp"] Mar 10 19:07:16 crc kubenswrapper[4861]: I0310 19:07:16.821344 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-776c5696bf-zds26"] Mar 10 19:07:16 crc kubenswrapper[4861]: W0310 19:07:16.826052 4861 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podaa17b421_616d_4842_aefd_f21a47952272.slice/crio-951204b2477a911416dfd6193cc0ae876b1e900cf109f23a06d5176d74c103c8 WatchSource:0}: Error finding container 951204b2477a911416dfd6193cc0ae876b1e900cf109f23a06d5176d74c103c8: Status 404 returned error can't find the container with id 951204b2477a911416dfd6193cc0ae876b1e900cf109f23a06d5176d74c103c8 Mar 10 19:07:16 crc kubenswrapper[4861]: I0310 19:07:16.856603 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-677c674df7-dgkn6"] Mar 10 19:07:16 crc kubenswrapper[4861]: W0310 19:07:16.859670 4861 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda55e1a2b_1e10_41ee_b376_c879ec07d2af.slice/crio-b875bf3783a6367b91eb46e5d8ce29d0825f8097c06dd9251d61ac9bc4cf912e WatchSource:0}: Error finding container b875bf3783a6367b91eb46e5d8ce29d0825f8097c06dd9251d61ac9bc4cf912e: Status 404 returned error can't find the container with id b875bf3783a6367b91eb46e5d8ce29d0825f8097c06dd9251d61ac9bc4cf912e Mar 10 19:07:16 crc kubenswrapper[4861]: E0310 19:07:16.863612 4861 kuberuntime_manager.go:1274] 
"Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/placement-operator@sha256:e7e865363955c670e41b6c042c4f87abceff78f5495ba5c5c82988baad45c978,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-2xxb4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-operator-controller-manager-574d45c66c-sh4lp_openstack-operators(a55e1a2b-1e10-41ee-b376-c879ec07d2af): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Mar 10 19:07:16 crc kubenswrapper[4861]: E0310 19:07:16.865358 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/placement-operator-controller-manager-574d45c66c-sh4lp" podUID="a55e1a2b-1e10-41ee-b376-c879ec07d2af" Mar 10 19:07:16 crc kubenswrapper[4861]: W0310 19:07:16.865774 4861 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod425d8351_8597_4696_99f6_8ce744082d84.slice/crio-070c1d9d1c009fc01c7c7f82a490a65981c66c70ce28663cff6b390c405c3fc6 WatchSource:0}: Error finding container 070c1d9d1c009fc01c7c7f82a490a65981c66c70ce28663cff6b390c405c3fc6: Status 404 returned error can't find the container with id 070c1d9d1c009fc01c7c7f82a490a65981c66c70ce28663cff6b390c405c3fc6 Mar 10 19:07:16 crc kubenswrapper[4861]: W0310 19:07:16.871538 4861 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod82bb6884_6681_4382_b784_feadaf2a891f.slice/crio-1a93674f1501a90bb5adae4ed37e588b1e7abbb6014d21c64a8db1ad368e20f1 WatchSource:0}: Error finding container 1a93674f1501a90bb5adae4ed37e588b1e7abbb6014d21c64a8db1ad368e20f1: Status 404 returned error can't find the container with id 1a93674f1501a90bb5adae4ed37e588b1e7abbb6014d21c64a8db1ad368e20f1 Mar 10 19:07:16 crc kubenswrapper[4861]: E0310 19:07:16.871888 4861 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/telemetry-operator@sha256:27c84b712abc2df6108e22636075eec25fea0229800f38594a492fd41b02c49d,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-9q8xt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod telemetry-operator-controller-manager-6cd66dbd4b-5fzrs_openstack-operators(425d8351-8597-4696-99f6-8ce744082d84): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Mar 10 19:07:16 crc kubenswrapper[4861]: E0310 19:07:16.873135 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/telemetry-operator-controller-manager-6cd66dbd4b-5fzrs" podUID="425d8351-8597-4696-99f6-8ce744082d84" Mar 10 19:07:16 crc kubenswrapper[4861]: E0310 19:07:16.873702 4861 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/swift-operator@sha256:c223309f51714785bd878ad04080f7428567edad793be4f992d492abd77af44c,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-ln849,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod swift-operator-controller-manager-677c674df7-dgkn6_openstack-operators(82bb6884-6681-4382-b784-feadaf2a891f): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Mar 10 19:07:16 crc kubenswrapper[4861]: E0310 19:07:16.874942 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/swift-operator-controller-manager-677c674df7-dgkn6" podUID="82bb6884-6681-4382-b784-feadaf2a891f" Mar 10 19:07:16 crc kubenswrapper[4861]: I0310 19:07:16.876657 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-5c5cb9c4d7-kttwb"] Mar 10 19:07:16 crc kubenswrapper[4861]: W0310 19:07:16.877107 4861 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9f600d22_c94c_4d46_b024_d4674aca2d8d.slice/crio-ea289db1afbe264bc854c3f86d1e4ae012fe980449af204f4040f436e99c1f16 WatchSource:0}: Error finding container ea289db1afbe264bc854c3f86d1e4ae012fe980449af204f4040f436e99c1f16: Status 404 returned error can't find the container with id 
ea289db1afbe264bc854c3f86d1e4ae012fe980449af204f4040f436e99c1f16 Mar 10 19:07:16 crc kubenswrapper[4861]: E0310 19:07:16.881876 4861 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/nova-operator@sha256:2bd37bdd917e3abe72613a734ce5021330242ec8cae9b8da76c57a0765152922,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-4vvl9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nova-operator-controller-manager-569cc54c5-qvqhp_openstack-operators(9f600d22-c94c-4d46-b024-d4674aca2d8d): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Mar 10 19:07:16 crc kubenswrapper[4861]: E0310 19:07:16.883114 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/nova-operator-controller-manager-569cc54c5-qvqhp" podUID="9f600d22-c94c-4d46-b024-d4674aca2d8d" Mar 10 19:07:16 crc kubenswrapper[4861]: I0310 19:07:16.892171 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-574d45c66c-sh4lp"] Mar 10 19:07:16 crc kubenswrapper[4861]: I0310 19:07:16.899130 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-6cd66dbd4b-5fzrs"] Mar 10 19:07:16 crc kubenswrapper[4861]: I0310 19:07:16.977773 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-bbc5b68f9-82tsp"] Mar 10 19:07:16 crc kubenswrapper[4861]: I0310 19:07:16.985639 4861 kubelet.go:2428] "SyncLoop UPDATE" 
source="api" pods=["openstack-operators/watcher-operator-controller-manager-6dd88c6f67-72px6"] Mar 10 19:07:16 crc kubenswrapper[4861]: W0310 19:07:16.986873 4861 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3e4cfd72_8809_4af1_90da_66587816e46e.slice/crio-c25da2dd99d1d4b0aa9f375bb8e3177656731b0bf1bd802934387f3ca323eadf WatchSource:0}: Error finding container c25da2dd99d1d4b0aa9f375bb8e3177656731b0bf1bd802934387f3ca323eadf: Status 404 returned error can't find the container with id c25da2dd99d1d4b0aa9f375bb8e3177656731b0bf1bd802934387f3ca323eadf Mar 10 19:07:16 crc kubenswrapper[4861]: E0310 19:07:16.994448 4861 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/ovn-operator@sha256:2f63ddf5c95c6c82f6e04bc9f7f20d56dc003614647726ab00276239eec40b7f,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-mbsrx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-operator-controller-manager-bbc5b68f9-82tsp_openstack-operators(b2313526-2f50-43bc-b6cd-45da343f91ae): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Mar 10 19:07:16 crc kubenswrapper[4861]: E0310 19:07:16.995942 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/ovn-operator-controller-manager-bbc5b68f9-82tsp" podUID="b2313526-2f50-43bc-b6cd-45da343f91ae" Mar 10 19:07:17 crc kubenswrapper[4861]: I0310 19:07:17.000853 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/acdc9485-305f-401f-91bb-749a9e1e3c89-cert\") pod \"infra-operator-controller-manager-5995f4446f-z66mq\" (UID: \"acdc9485-305f-401f-91bb-749a9e1e3c89\") " 
pod="openstack-operators/infra-operator-controller-manager-5995f4446f-z66mq" Mar 10 19:07:17 crc kubenswrapper[4861]: E0310 19:07:17.001067 4861 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Mar 10 19:07:17 crc kubenswrapper[4861]: E0310 19:07:17.001115 4861 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/acdc9485-305f-401f-91bb-749a9e1e3c89-cert podName:acdc9485-305f-401f-91bb-749a9e1e3c89 nodeName:}" failed. No retries permitted until 2026-03-10 19:07:19.001102229 +0000 UTC m=+1182.764538189 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/acdc9485-305f-401f-91bb-749a9e1e3c89-cert") pod "infra-operator-controller-manager-5995f4446f-z66mq" (UID: "acdc9485-305f-401f-91bb-749a9e1e3c89") : secret "infra-operator-webhook-server-cert" not found Mar 10 19:07:17 crc kubenswrapper[4861]: W0310 19:07:17.171878 4861 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod19b27de0_6f4a_4017_aec7_eea80076189c.slice/crio-98fe2a43fe610a626b521ed92dc8fb35923b7d0ff7c0778f03564360c94450e0 WatchSource:0}: Error finding container 98fe2a43fe610a626b521ed92dc8fb35923b7d0ff7c0778f03564360c94450e0: Status 404 returned error can't find the container with id 98fe2a43fe610a626b521ed92dc8fb35923b7d0ff7c0778f03564360c94450e0 Mar 10 19:07:17 crc kubenswrapper[4861]: I0310 19:07:17.180888 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-9bfll"] Mar 10 19:07:17 crc kubenswrapper[4861]: I0310 19:07:17.204639 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/1f2fa0f6-38b3-4764-ab09-dc4064f986fe-cert\") pod \"openstack-baremetal-operator-controller-manager-6647d7885fkj9zr\" (UID: 
\"1f2fa0f6-38b3-4764-ab09-dc4064f986fe\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6647d7885fkj9zr" Mar 10 19:07:17 crc kubenswrapper[4861]: E0310 19:07:17.204807 4861 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Mar 10 19:07:17 crc kubenswrapper[4861]: E0310 19:07:17.204867 4861 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1f2fa0f6-38b3-4764-ab09-dc4064f986fe-cert podName:1f2fa0f6-38b3-4764-ab09-dc4064f986fe nodeName:}" failed. No retries permitted until 2026-03-10 19:07:19.204850139 +0000 UTC m=+1182.968286099 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/1f2fa0f6-38b3-4764-ab09-dc4064f986fe-cert") pod "openstack-baremetal-operator-controller-manager-6647d7885fkj9zr" (UID: "1f2fa0f6-38b3-4764-ab09-dc4064f986fe") : secret "openstack-baremetal-operator-webhook-server-cert" not found Mar 10 19:07:17 crc kubenswrapper[4861]: I0310 19:07:17.400934 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-5964f64c48-9bhqz" event={"ID":"94ef6e9b-1270-4992-9d58-902e82b52294","Type":"ContainerStarted","Data":"742719c9f481103c5f313df67e70b4e26907b38a7c264efe2a2d06c403b6c2d0"} Mar 10 19:07:17 crc kubenswrapper[4861]: I0310 19:07:17.402999 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-6cd66dbd4b-5fzrs" event={"ID":"425d8351-8597-4696-99f6-8ce744082d84","Type":"ContainerStarted","Data":"070c1d9d1c009fc01c7c7f82a490a65981c66c70ce28663cff6b390c405c3fc6"} Mar 10 19:07:17 crc kubenswrapper[4861]: I0310 19:07:17.405059 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-677c674df7-dgkn6" 
event={"ID":"82bb6884-6681-4382-b784-feadaf2a891f","Type":"ContainerStarted","Data":"1a93674f1501a90bb5adae4ed37e588b1e7abbb6014d21c64a8db1ad368e20f1"} Mar 10 19:07:17 crc kubenswrapper[4861]: E0310 19:07:17.407474 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:c223309f51714785bd878ad04080f7428567edad793be4f992d492abd77af44c\\\"\"" pod="openstack-operators/swift-operator-controller-manager-677c674df7-dgkn6" podUID="82bb6884-6681-4382-b784-feadaf2a891f" Mar 10 19:07:17 crc kubenswrapper[4861]: I0310 19:07:17.407776 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-574d45c66c-sh4lp" event={"ID":"a55e1a2b-1e10-41ee-b376-c879ec07d2af","Type":"ContainerStarted","Data":"b875bf3783a6367b91eb46e5d8ce29d0825f8097c06dd9251d61ac9bc4cf912e"} Mar 10 19:07:17 crc kubenswrapper[4861]: E0310 19:07:17.407827 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/telemetry-operator@sha256:27c84b712abc2df6108e22636075eec25fea0229800f38594a492fd41b02c49d\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-6cd66dbd4b-5fzrs" podUID="425d8351-8597-4696-99f6-8ce744082d84" Mar 10 19:07:17 crc kubenswrapper[4861]: E0310 19:07:17.409405 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:e7e865363955c670e41b6c042c4f87abceff78f5495ba5c5c82988baad45c978\\\"\"" pod="openstack-operators/placement-operator-controller-manager-574d45c66c-sh4lp" podUID="a55e1a2b-1e10-41ee-b376-c879ec07d2af" Mar 10 19:07:17 crc kubenswrapper[4861]: I0310 19:07:17.410233 4861 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-569cc54c5-qvqhp" event={"ID":"9f600d22-c94c-4d46-b024-d4674aca2d8d","Type":"ContainerStarted","Data":"ea289db1afbe264bc854c3f86d1e4ae012fe980449af204f4040f436e99c1f16"} Mar 10 19:07:17 crc kubenswrapper[4861]: E0310 19:07:17.411312 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/nova-operator@sha256:2bd37bdd917e3abe72613a734ce5021330242ec8cae9b8da76c57a0765152922\\\"\"" pod="openstack-operators/nova-operator-controller-manager-569cc54c5-qvqhp" podUID="9f600d22-c94c-4d46-b024-d4674aca2d8d" Mar 10 19:07:17 crc kubenswrapper[4861]: I0310 19:07:17.419648 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-6dd88c6f67-72px6" event={"ID":"3e4cfd72-8809-4af1-90da-66587816e46e","Type":"ContainerStarted","Data":"c25da2dd99d1d4b0aa9f375bb8e3177656731b0bf1bd802934387f3ca323eadf"} Mar 10 19:07:17 crc kubenswrapper[4861]: I0310 19:07:17.424240 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-6bbb499bbc-bdscg" event={"ID":"16f833b1-3203-4532-a5f4-bc8769cd0932","Type":"ContainerStarted","Data":"270824ccde298460c611d138a4e875e36a433bacec19216e879746b994a39218"} Mar 10 19:07:17 crc kubenswrapper[4861]: I0310 19:07:17.426558 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-776c5696bf-zds26" event={"ID":"b4bddf8d-6696-491b-9d79-e057d2d18c14","Type":"ContainerStarted","Data":"7d33f917bdbb52bf5a3f1c1fea61282f825c37ef4009e4a2cb607f5ee7c4dbbd"} Mar 10 19:07:17 crc kubenswrapper[4861]: I0310 19:07:17.432111 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-bbc5b68f9-82tsp" 
event={"ID":"b2313526-2f50-43bc-b6cd-45da343f91ae","Type":"ContainerStarted","Data":"a395f5dbf72f98b28007a936adc705175608913091715145085da948690fa481"} Mar 10 19:07:17 crc kubenswrapper[4861]: I0310 19:07:17.433475 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-5f4f55cb5c-wvkvr" event={"ID":"8b85720c-9adc-46f0-835b-3a1709df2126","Type":"ContainerStarted","Data":"f4caf1bdfe741c4976fcd6eb908cbaf380f5ed1712b3604fe7ccbfb56390ba9a"} Mar 10 19:07:17 crc kubenswrapper[4861]: E0310 19:07:17.433524 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ovn-operator@sha256:2f63ddf5c95c6c82f6e04bc9f7f20d56dc003614647726ab00276239eec40b7f\\\"\"" pod="openstack-operators/ovn-operator-controller-manager-bbc5b68f9-82tsp" podUID="b2313526-2f50-43bc-b6cd-45da343f91ae" Mar 10 19:07:17 crc kubenswrapper[4861]: I0310 19:07:17.435363 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-658d4cdd5-qxj54" event={"ID":"e05b8481-3bf6-416e-a404-61ce227c2350","Type":"ContainerStarted","Data":"b3ce02e4f69f3cd766f928a4a567c7610d101ca479fd9fd3c4fc9c7bd98f327a"} Mar 10 19:07:17 crc kubenswrapper[4861]: I0310 19:07:17.438268 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-66d56f6ff4-4mh8f" event={"ID":"1596b973-4b23-48a6-9924-8e98b7535a61","Type":"ContainerStarted","Data":"d6084be73ca5d3a65435710b4916e400362599e55baf6fc6e82908eb9f3bc1ab"} Mar 10 19:07:17 crc kubenswrapper[4861]: I0310 19:07:17.441013 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-684f77d66d-mf7c8" 
event={"ID":"082164ca-006f-4217-a664-a81f24fb7f9c","Type":"ContainerStarted","Data":"138f6dd028c109bf1d9839210909d263007c97c25c92afdc73eed376207a38e1"} Mar 10 19:07:17 crc kubenswrapper[4861]: I0310 19:07:17.443896 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-6d9d6b584d-265hs" event={"ID":"ce725c26-b020-402f-ad0c-ae44f307e21e","Type":"ContainerStarted","Data":"1825395cc7384cd36dc99bdfda5b10c83d8818dc3a9c82292f20c5cf362437e4"} Mar 10 19:07:17 crc kubenswrapper[4861]: I0310 19:07:17.446971 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-9bfll" event={"ID":"19b27de0-6f4a-4017-aec7-eea80076189c","Type":"ContainerStarted","Data":"98fe2a43fe610a626b521ed92dc8fb35923b7d0ff7c0778f03564360c94450e0"} Mar 10 19:07:17 crc kubenswrapper[4861]: I0310 19:07:17.459485 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-77b6666d85-q76xg" event={"ID":"56a6874c-f929-4490-a247-52542d4aa8f1","Type":"ContainerStarted","Data":"9edfdd2c05910dcab04d91177e77e57fa0ded3812b4f82d1e386984ba33b784e"} Mar 10 19:07:17 crc kubenswrapper[4861]: I0310 19:07:17.466208 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-5c5cb9c4d7-kttwb" event={"ID":"aa17b421-616d-4842-aefd-f21a47952272","Type":"ContainerStarted","Data":"951204b2477a911416dfd6193cc0ae876b1e900cf109f23a06d5176d74c103c8"} Mar 10 19:07:17 crc kubenswrapper[4861]: I0310 19:07:17.469360 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-68f45f9d9f-864bh" event={"ID":"ee7bce9d-5057-4eb7-af07-af66a5bc7473","Type":"ContainerStarted","Data":"c7e24fbf26108372f6b327f62ccd0a7ecc56a8a9849eeb54e0560781da8c3658"} Mar 10 19:07:17 crc kubenswrapper[4861]: I0310 19:07:17.610330 4861 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/33e7a010-7cd8-4eef-b19f-30ea72eb0a03-metrics-certs\") pod \"openstack-operator-controller-manager-6679ddfdc7-5sqmh\" (UID: \"33e7a010-7cd8-4eef-b19f-30ea72eb0a03\") " pod="openstack-operators/openstack-operator-controller-manager-6679ddfdc7-5sqmh" Mar 10 19:07:17 crc kubenswrapper[4861]: I0310 19:07:17.610588 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/33e7a010-7cd8-4eef-b19f-30ea72eb0a03-webhook-certs\") pod \"openstack-operator-controller-manager-6679ddfdc7-5sqmh\" (UID: \"33e7a010-7cd8-4eef-b19f-30ea72eb0a03\") " pod="openstack-operators/openstack-operator-controller-manager-6679ddfdc7-5sqmh" Mar 10 19:07:17 crc kubenswrapper[4861]: E0310 19:07:17.610486 4861 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Mar 10 19:07:17 crc kubenswrapper[4861]: E0310 19:07:17.610730 4861 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Mar 10 19:07:17 crc kubenswrapper[4861]: E0310 19:07:17.610738 4861 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/33e7a010-7cd8-4eef-b19f-30ea72eb0a03-metrics-certs podName:33e7a010-7cd8-4eef-b19f-30ea72eb0a03 nodeName:}" failed. No retries permitted until 2026-03-10 19:07:19.610694171 +0000 UTC m=+1183.374130131 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/33e7a010-7cd8-4eef-b19f-30ea72eb0a03-metrics-certs") pod "openstack-operator-controller-manager-6679ddfdc7-5sqmh" (UID: "33e7a010-7cd8-4eef-b19f-30ea72eb0a03") : secret "metrics-server-cert" not found Mar 10 19:07:17 crc kubenswrapper[4861]: E0310 19:07:17.610778 4861 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/33e7a010-7cd8-4eef-b19f-30ea72eb0a03-webhook-certs podName:33e7a010-7cd8-4eef-b19f-30ea72eb0a03 nodeName:}" failed. No retries permitted until 2026-03-10 19:07:19.610763093 +0000 UTC m=+1183.374199053 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/33e7a010-7cd8-4eef-b19f-30ea72eb0a03-webhook-certs") pod "openstack-operator-controller-manager-6679ddfdc7-5sqmh" (UID: "33e7a010-7cd8-4eef-b19f-30ea72eb0a03") : secret "webhook-server-cert" not found Mar 10 19:07:18 crc kubenswrapper[4861]: E0310 19:07:18.478314 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/telemetry-operator@sha256:27c84b712abc2df6108e22636075eec25fea0229800f38594a492fd41b02c49d\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-6cd66dbd4b-5fzrs" podUID="425d8351-8597-4696-99f6-8ce744082d84" Mar 10 19:07:18 crc kubenswrapper[4861]: E0310 19:07:18.479211 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:e7e865363955c670e41b6c042c4f87abceff78f5495ba5c5c82988baad45c978\\\"\"" pod="openstack-operators/placement-operator-controller-manager-574d45c66c-sh4lp" podUID="a55e1a2b-1e10-41ee-b376-c879ec07d2af" Mar 10 19:07:18 crc kubenswrapper[4861]: E0310 19:07:18.479285 4861 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/nova-operator@sha256:2bd37bdd917e3abe72613a734ce5021330242ec8cae9b8da76c57a0765152922\\\"\"" pod="openstack-operators/nova-operator-controller-manager-569cc54c5-qvqhp" podUID="9f600d22-c94c-4d46-b024-d4674aca2d8d" Mar 10 19:07:18 crc kubenswrapper[4861]: E0310 19:07:18.479985 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:c223309f51714785bd878ad04080f7428567edad793be4f992d492abd77af44c\\\"\"" pod="openstack-operators/swift-operator-controller-manager-677c674df7-dgkn6" podUID="82bb6884-6681-4382-b784-feadaf2a891f" Mar 10 19:07:18 crc kubenswrapper[4861]: E0310 19:07:18.480149 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ovn-operator@sha256:2f63ddf5c95c6c82f6e04bc9f7f20d56dc003614647726ab00276239eec40b7f\\\"\"" pod="openstack-operators/ovn-operator-controller-manager-bbc5b68f9-82tsp" podUID="b2313526-2f50-43bc-b6cd-45da343f91ae" Mar 10 19:07:19 crc kubenswrapper[4861]: I0310 19:07:19.032669 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/acdc9485-305f-401f-91bb-749a9e1e3c89-cert\") pod \"infra-operator-controller-manager-5995f4446f-z66mq\" (UID: \"acdc9485-305f-401f-91bb-749a9e1e3c89\") " pod="openstack-operators/infra-operator-controller-manager-5995f4446f-z66mq" Mar 10 19:07:19 crc kubenswrapper[4861]: E0310 19:07:19.032865 4861 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Mar 10 19:07:19 crc 
kubenswrapper[4861]: E0310 19:07:19.032969 4861 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/acdc9485-305f-401f-91bb-749a9e1e3c89-cert podName:acdc9485-305f-401f-91bb-749a9e1e3c89 nodeName:}" failed. No retries permitted until 2026-03-10 19:07:23.032943215 +0000 UTC m=+1186.796379205 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/acdc9485-305f-401f-91bb-749a9e1e3c89-cert") pod "infra-operator-controller-manager-5995f4446f-z66mq" (UID: "acdc9485-305f-401f-91bb-749a9e1e3c89") : secret "infra-operator-webhook-server-cert" not found Mar 10 19:07:19 crc kubenswrapper[4861]: I0310 19:07:19.235836 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/1f2fa0f6-38b3-4764-ab09-dc4064f986fe-cert\") pod \"openstack-baremetal-operator-controller-manager-6647d7885fkj9zr\" (UID: \"1f2fa0f6-38b3-4764-ab09-dc4064f986fe\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6647d7885fkj9zr" Mar 10 19:07:19 crc kubenswrapper[4861]: E0310 19:07:19.236063 4861 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Mar 10 19:07:19 crc kubenswrapper[4861]: E0310 19:07:19.236122 4861 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1f2fa0f6-38b3-4764-ab09-dc4064f986fe-cert podName:1f2fa0f6-38b3-4764-ab09-dc4064f986fe nodeName:}" failed. No retries permitted until 2026-03-10 19:07:23.236106597 +0000 UTC m=+1186.999542577 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/1f2fa0f6-38b3-4764-ab09-dc4064f986fe-cert") pod "openstack-baremetal-operator-controller-manager-6647d7885fkj9zr" (UID: "1f2fa0f6-38b3-4764-ab09-dc4064f986fe") : secret "openstack-baremetal-operator-webhook-server-cert" not found Mar 10 19:07:19 crc kubenswrapper[4861]: I0310 19:07:19.644476 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/33e7a010-7cd8-4eef-b19f-30ea72eb0a03-metrics-certs\") pod \"openstack-operator-controller-manager-6679ddfdc7-5sqmh\" (UID: \"33e7a010-7cd8-4eef-b19f-30ea72eb0a03\") " pod="openstack-operators/openstack-operator-controller-manager-6679ddfdc7-5sqmh" Mar 10 19:07:19 crc kubenswrapper[4861]: I0310 19:07:19.644552 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/33e7a010-7cd8-4eef-b19f-30ea72eb0a03-webhook-certs\") pod \"openstack-operator-controller-manager-6679ddfdc7-5sqmh\" (UID: \"33e7a010-7cd8-4eef-b19f-30ea72eb0a03\") " pod="openstack-operators/openstack-operator-controller-manager-6679ddfdc7-5sqmh" Mar 10 19:07:19 crc kubenswrapper[4861]: E0310 19:07:19.644661 4861 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Mar 10 19:07:19 crc kubenswrapper[4861]: E0310 19:07:19.644755 4861 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/33e7a010-7cd8-4eef-b19f-30ea72eb0a03-metrics-certs podName:33e7a010-7cd8-4eef-b19f-30ea72eb0a03 nodeName:}" failed. No retries permitted until 2026-03-10 19:07:23.644735768 +0000 UTC m=+1187.408171738 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/33e7a010-7cd8-4eef-b19f-30ea72eb0a03-metrics-certs") pod "openstack-operator-controller-manager-6679ddfdc7-5sqmh" (UID: "33e7a010-7cd8-4eef-b19f-30ea72eb0a03") : secret "metrics-server-cert" not found Mar 10 19:07:19 crc kubenswrapper[4861]: E0310 19:07:19.644752 4861 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Mar 10 19:07:19 crc kubenswrapper[4861]: E0310 19:07:19.644855 4861 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/33e7a010-7cd8-4eef-b19f-30ea72eb0a03-webhook-certs podName:33e7a010-7cd8-4eef-b19f-30ea72eb0a03 nodeName:}" failed. No retries permitted until 2026-03-10 19:07:23.644836501 +0000 UTC m=+1187.408272461 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/33e7a010-7cd8-4eef-b19f-30ea72eb0a03-webhook-certs") pod "openstack-operator-controller-manager-6679ddfdc7-5sqmh" (UID: "33e7a010-7cd8-4eef-b19f-30ea72eb0a03") : secret "webhook-server-cert" not found Mar 10 19:07:23 crc kubenswrapper[4861]: I0310 19:07:23.102612 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/acdc9485-305f-401f-91bb-749a9e1e3c89-cert\") pod \"infra-operator-controller-manager-5995f4446f-z66mq\" (UID: \"acdc9485-305f-401f-91bb-749a9e1e3c89\") " pod="openstack-operators/infra-operator-controller-manager-5995f4446f-z66mq" Mar 10 19:07:23 crc kubenswrapper[4861]: E0310 19:07:23.102838 4861 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Mar 10 19:07:23 crc kubenswrapper[4861]: E0310 19:07:23.103008 4861 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/acdc9485-305f-401f-91bb-749a9e1e3c89-cert 
podName:acdc9485-305f-401f-91bb-749a9e1e3c89 nodeName:}" failed. No retries permitted until 2026-03-10 19:07:31.102994026 +0000 UTC m=+1194.866429986 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/acdc9485-305f-401f-91bb-749a9e1e3c89-cert") pod "infra-operator-controller-manager-5995f4446f-z66mq" (UID: "acdc9485-305f-401f-91bb-749a9e1e3c89") : secret "infra-operator-webhook-server-cert" not found Mar 10 19:07:23 crc kubenswrapper[4861]: I0310 19:07:23.305542 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/1f2fa0f6-38b3-4764-ab09-dc4064f986fe-cert\") pod \"openstack-baremetal-operator-controller-manager-6647d7885fkj9zr\" (UID: \"1f2fa0f6-38b3-4764-ab09-dc4064f986fe\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6647d7885fkj9zr" Mar 10 19:07:23 crc kubenswrapper[4861]: E0310 19:07:23.305949 4861 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Mar 10 19:07:23 crc kubenswrapper[4861]: E0310 19:07:23.306006 4861 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1f2fa0f6-38b3-4764-ab09-dc4064f986fe-cert podName:1f2fa0f6-38b3-4764-ab09-dc4064f986fe nodeName:}" failed. No retries permitted until 2026-03-10 19:07:31.305993023 +0000 UTC m=+1195.069428983 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/1f2fa0f6-38b3-4764-ab09-dc4064f986fe-cert") pod "openstack-baremetal-operator-controller-manager-6647d7885fkj9zr" (UID: "1f2fa0f6-38b3-4764-ab09-dc4064f986fe") : secret "openstack-baremetal-operator-webhook-server-cert" not found Mar 10 19:07:23 crc kubenswrapper[4861]: I0310 19:07:23.711840 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/33e7a010-7cd8-4eef-b19f-30ea72eb0a03-metrics-certs\") pod \"openstack-operator-controller-manager-6679ddfdc7-5sqmh\" (UID: \"33e7a010-7cd8-4eef-b19f-30ea72eb0a03\") " pod="openstack-operators/openstack-operator-controller-manager-6679ddfdc7-5sqmh" Mar 10 19:07:23 crc kubenswrapper[4861]: I0310 19:07:23.711890 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/33e7a010-7cd8-4eef-b19f-30ea72eb0a03-webhook-certs\") pod \"openstack-operator-controller-manager-6679ddfdc7-5sqmh\" (UID: \"33e7a010-7cd8-4eef-b19f-30ea72eb0a03\") " pod="openstack-operators/openstack-operator-controller-manager-6679ddfdc7-5sqmh" Mar 10 19:07:23 crc kubenswrapper[4861]: E0310 19:07:23.712034 4861 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Mar 10 19:07:23 crc kubenswrapper[4861]: E0310 19:07:23.712054 4861 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Mar 10 19:07:23 crc kubenswrapper[4861]: E0310 19:07:23.712121 4861 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/33e7a010-7cd8-4eef-b19f-30ea72eb0a03-webhook-certs podName:33e7a010-7cd8-4eef-b19f-30ea72eb0a03 nodeName:}" failed. No retries permitted until 2026-03-10 19:07:31.712103123 +0000 UTC m=+1195.475539083 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/33e7a010-7cd8-4eef-b19f-30ea72eb0a03-webhook-certs") pod "openstack-operator-controller-manager-6679ddfdc7-5sqmh" (UID: "33e7a010-7cd8-4eef-b19f-30ea72eb0a03") : secret "webhook-server-cert" not found Mar 10 19:07:23 crc kubenswrapper[4861]: E0310 19:07:23.712140 4861 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/33e7a010-7cd8-4eef-b19f-30ea72eb0a03-metrics-certs podName:33e7a010-7cd8-4eef-b19f-30ea72eb0a03 nodeName:}" failed. No retries permitted until 2026-03-10 19:07:31.712131024 +0000 UTC m=+1195.475566984 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/33e7a010-7cd8-4eef-b19f-30ea72eb0a03-metrics-certs") pod "openstack-operator-controller-manager-6679ddfdc7-5sqmh" (UID: "33e7a010-7cd8-4eef-b19f-30ea72eb0a03") : secret "metrics-server-cert" not found Mar 10 19:07:29 crc kubenswrapper[4861]: E0310 19:07:29.603070 4861 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/heat-operator@sha256:6c9aef12f50be0b974f5e35b0d69303e7f7b95e6db5d41bcdb2d9d1100e921a6" Mar 10 19:07:29 crc kubenswrapper[4861]: E0310 19:07:29.604003 4861 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/heat-operator@sha256:6c9aef12f50be0b974f5e35b0d69303e7f7b95e6db5d41bcdb2d9d1100e921a6,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-4tsbt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-operator-controller-manager-77b6666d85-q76xg_openstack-operators(56a6874c-f929-4490-a247-52542d4aa8f1): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Mar 10 19:07:29 crc kubenswrapper[4861]: E0310 19:07:29.605197 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/heat-operator-controller-manager-77b6666d85-q76xg" podUID="56a6874c-f929-4490-a247-52542d4aa8f1" Mar 10 19:07:30 crc kubenswrapper[4861]: E0310 19:07:30.230469 4861 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/neutron-operator@sha256:5fe5351a3de5e1267112d52cd81477a01d47f90be713cc5439c76543a4c33721" Mar 10 19:07:30 crc kubenswrapper[4861]: E0310 19:07:30.230902 4861 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:manager,Image:quay.io/openstack-k8s-operators/neutron-operator@sha256:5fe5351a3de5e1267112d52cd81477a01d47f90be713cc5439c76543a4c33721,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-4jjjm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod neutron-operator-controller-manager-776c5696bf-zds26_openstack-operators(b4bddf8d-6696-491b-9d79-e057d2d18c14): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Mar 10 19:07:30 crc kubenswrapper[4861]: E0310 19:07:30.232186 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/neutron-operator-controller-manager-776c5696bf-zds26" podUID="b4bddf8d-6696-491b-9d79-e057d2d18c14" Mar 10 19:07:30 crc kubenswrapper[4861]: E0310 19:07:30.580794 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/neutron-operator@sha256:5fe5351a3de5e1267112d52cd81477a01d47f90be713cc5439c76543a4c33721\\\"\"" pod="openstack-operators/neutron-operator-controller-manager-776c5696bf-zds26" podUID="b4bddf8d-6696-491b-9d79-e057d2d18c14" Mar 10 19:07:30 crc kubenswrapper[4861]: E0310 19:07:30.582354 4861 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/heat-operator@sha256:6c9aef12f50be0b974f5e35b0d69303e7f7b95e6db5d41bcdb2d9d1100e921a6\\\"\"" pod="openstack-operators/heat-operator-controller-manager-77b6666d85-q76xg" podUID="56a6874c-f929-4490-a247-52542d4aa8f1" Mar 10 19:07:30 crc kubenswrapper[4861]: E0310 19:07:30.840737 4861 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/ironic-operator@sha256:9182d1816c6fdb093d6328f1b0bf39296b9eccfa495f35e2198ec4764fa6288f" Mar 10 19:07:30 crc kubenswrapper[4861]: E0310 19:07:30.841084 4861 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/ironic-operator@sha256:9182d1816c6fdb093d6328f1b0bf39296b9eccfa495f35e2198ec4764fa6288f,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-ksgc7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ironic-operator-controller-manager-6bbb499bbc-bdscg_openstack-operators(16f833b1-3203-4532-a5f4-bc8769cd0932): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Mar 10 19:07:30 crc kubenswrapper[4861]: E0310 19:07:30.842892 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/ironic-operator-controller-manager-6bbb499bbc-bdscg" podUID="16f833b1-3203-4532-a5f4-bc8769cd0932" Mar 10 19:07:31 crc kubenswrapper[4861]: I0310 19:07:31.150566 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/acdc9485-305f-401f-91bb-749a9e1e3c89-cert\") pod 
\"infra-operator-controller-manager-5995f4446f-z66mq\" (UID: \"acdc9485-305f-401f-91bb-749a9e1e3c89\") " pod="openstack-operators/infra-operator-controller-manager-5995f4446f-z66mq" Mar 10 19:07:31 crc kubenswrapper[4861]: E0310 19:07:31.150776 4861 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Mar 10 19:07:31 crc kubenswrapper[4861]: E0310 19:07:31.150843 4861 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/acdc9485-305f-401f-91bb-749a9e1e3c89-cert podName:acdc9485-305f-401f-91bb-749a9e1e3c89 nodeName:}" failed. No retries permitted until 2026-03-10 19:07:47.150823976 +0000 UTC m=+1210.914259946 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/acdc9485-305f-401f-91bb-749a9e1e3c89-cert") pod "infra-operator-controller-manager-5995f4446f-z66mq" (UID: "acdc9485-305f-401f-91bb-749a9e1e3c89") : secret "infra-operator-webhook-server-cert" not found Mar 10 19:07:31 crc kubenswrapper[4861]: I0310 19:07:31.355167 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/1f2fa0f6-38b3-4764-ab09-dc4064f986fe-cert\") pod \"openstack-baremetal-operator-controller-manager-6647d7885fkj9zr\" (UID: \"1f2fa0f6-38b3-4764-ab09-dc4064f986fe\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6647d7885fkj9zr" Mar 10 19:07:31 crc kubenswrapper[4861]: I0310 19:07:31.363024 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/1f2fa0f6-38b3-4764-ab09-dc4064f986fe-cert\") pod \"openstack-baremetal-operator-controller-manager-6647d7885fkj9zr\" (UID: \"1f2fa0f6-38b3-4764-ab09-dc4064f986fe\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6647d7885fkj9zr" Mar 10 19:07:31 crc kubenswrapper[4861]: I0310 19:07:31.405483 
4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6647d7885fkj9zr" Mar 10 19:07:31 crc kubenswrapper[4861]: E0310 19:07:31.588869 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ironic-operator@sha256:9182d1816c6fdb093d6328f1b0bf39296b9eccfa495f35e2198ec4764fa6288f\\\"\"" pod="openstack-operators/ironic-operator-controller-manager-6bbb499bbc-bdscg" podUID="16f833b1-3203-4532-a5f4-bc8769cd0932" Mar 10 19:07:31 crc kubenswrapper[4861]: I0310 19:07:31.760947 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/33e7a010-7cd8-4eef-b19f-30ea72eb0a03-metrics-certs\") pod \"openstack-operator-controller-manager-6679ddfdc7-5sqmh\" (UID: \"33e7a010-7cd8-4eef-b19f-30ea72eb0a03\") " pod="openstack-operators/openstack-operator-controller-manager-6679ddfdc7-5sqmh" Mar 10 19:07:31 crc kubenswrapper[4861]: I0310 19:07:31.760991 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/33e7a010-7cd8-4eef-b19f-30ea72eb0a03-webhook-certs\") pod \"openstack-operator-controller-manager-6679ddfdc7-5sqmh\" (UID: \"33e7a010-7cd8-4eef-b19f-30ea72eb0a03\") " pod="openstack-operators/openstack-operator-controller-manager-6679ddfdc7-5sqmh" Mar 10 19:07:31 crc kubenswrapper[4861]: I0310 19:07:31.766284 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/33e7a010-7cd8-4eef-b19f-30ea72eb0a03-metrics-certs\") pod \"openstack-operator-controller-manager-6679ddfdc7-5sqmh\" (UID: \"33e7a010-7cd8-4eef-b19f-30ea72eb0a03\") " pod="openstack-operators/openstack-operator-controller-manager-6679ddfdc7-5sqmh" Mar 10 19:07:31 crc 
kubenswrapper[4861]: I0310 19:07:31.776481 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/33e7a010-7cd8-4eef-b19f-30ea72eb0a03-webhook-certs\") pod \"openstack-operator-controller-manager-6679ddfdc7-5sqmh\" (UID: \"33e7a010-7cd8-4eef-b19f-30ea72eb0a03\") " pod="openstack-operators/openstack-operator-controller-manager-6679ddfdc7-5sqmh" Mar 10 19:07:31 crc kubenswrapper[4861]: I0310 19:07:31.880690 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-6679ddfdc7-5sqmh" Mar 10 19:07:32 crc kubenswrapper[4861]: E0310 19:07:32.124963 4861 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/manila-operator@sha256:d89f3ca6e909f34d145a880829f5e63f1b6b2d11c520a9c5bea7ed1c30ce38f4" Mar 10 19:07:32 crc kubenswrapper[4861]: E0310 19:07:32.125128 4861 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/manila-operator@sha256:d89f3ca6e909f34d145a880829f5e63f1b6b2d11c520a9c5bea7ed1c30ce38f4,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-qv6lz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod manila-operator-controller-manager-68f45f9d9f-864bh_openstack-operators(ee7bce9d-5057-4eb7-af07-af66a5bc7473): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Mar 10 19:07:32 crc kubenswrapper[4861]: E0310 19:07:32.126305 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" 
pod="openstack-operators/manila-operator-controller-manager-68f45f9d9f-864bh" podUID="ee7bce9d-5057-4eb7-af07-af66a5bc7473" Mar 10 19:07:32 crc kubenswrapper[4861]: E0310 19:07:32.594214 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/manila-operator@sha256:d89f3ca6e909f34d145a880829f5e63f1b6b2d11c520a9c5bea7ed1c30ce38f4\\\"\"" pod="openstack-operators/manila-operator-controller-manager-68f45f9d9f-864bh" podUID="ee7bce9d-5057-4eb7-af07-af66a5bc7473" Mar 10 19:07:34 crc kubenswrapper[4861]: E0310 19:07:34.159791 4861 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/keystone-operator@sha256:40b84319f2f12a1c7ee478fd86a8b1aa5ac2ea8e24f5ce0f1ca78ad879dea8ca" Mar 10 19:07:34 crc kubenswrapper[4861]: E0310 19:07:34.160203 4861 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/keystone-operator@sha256:40b84319f2f12a1c7ee478fd86a8b1aa5ac2ea8e24f5ce0f1ca78ad879dea8ca,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-wl47n,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod keystone-operator-controller-manager-684f77d66d-mf7c8_openstack-operators(082164ca-006f-4217-a664-a81f24fb7f9c): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Mar 10 19:07:34 crc kubenswrapper[4861]: E0310 19:07:34.161428 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" 
pod="openstack-operators/keystone-operator-controller-manager-684f77d66d-mf7c8" podUID="082164ca-006f-4217-a664-a81f24fb7f9c" Mar 10 19:07:34 crc kubenswrapper[4861]: E0310 19:07:34.608046 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/keystone-operator@sha256:40b84319f2f12a1c7ee478fd86a8b1aa5ac2ea8e24f5ce0f1ca78ad879dea8ca\\\"\"" pod="openstack-operators/keystone-operator-controller-manager-684f77d66d-mf7c8" podUID="082164ca-006f-4217-a664-a81f24fb7f9c" Mar 10 19:07:35 crc kubenswrapper[4861]: E0310 19:07:35.417269 4861 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2" Mar 10 19:07:35 crc kubenswrapper[4861]: E0310 19:07:35.417381 4861 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: 
{{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-n4jcd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-668c99d594-9bfll_openstack-operators(19b27de0-6f4a-4017-aec7-eea80076189c): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Mar 10 19:07:35 crc kubenswrapper[4861]: E0310 19:07:35.418516 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-9bfll" podUID="19b27de0-6f4a-4017-aec7-eea80076189c" Mar 10 19:07:35 crc kubenswrapper[4861]: E0310 19:07:35.616152 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-9bfll" 
podUID="19b27de0-6f4a-4017-aec7-eea80076189c" Mar 10 19:07:38 crc kubenswrapper[4861]: I0310 19:07:38.789873 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-6647d7885fkj9zr"] Mar 10 19:07:39 crc kubenswrapper[4861]: I0310 19:07:39.058015 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-6679ddfdc7-5sqmh"] Mar 10 19:07:39 crc kubenswrapper[4861]: W0310 19:07:39.074952 4861 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod33e7a010_7cd8_4eef_b19f_30ea72eb0a03.slice/crio-b86fa729154a5e06512933d8d3d2328352b9331ce2ab6560ed6559f0b98c0f39 WatchSource:0}: Error finding container b86fa729154a5e06512933d8d3d2328352b9331ce2ab6560ed6559f0b98c0f39: Status 404 returned error can't find the container with id b86fa729154a5e06512933d8d3d2328352b9331ce2ab6560ed6559f0b98c0f39 Mar 10 19:07:39 crc kubenswrapper[4861]: I0310 19:07:39.655427 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-5c5cb9c4d7-kttwb" event={"ID":"aa17b421-616d-4842-aefd-f21a47952272","Type":"ContainerStarted","Data":"07f5fb1d4bece7273359365186d514e61d6b5b1488366425219e77d71510fb0f"} Mar 10 19:07:39 crc kubenswrapper[4861]: I0310 19:07:39.656428 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-5c5cb9c4d7-kttwb" Mar 10 19:07:39 crc kubenswrapper[4861]: I0310 19:07:39.657814 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-6cd66dbd4b-5fzrs" event={"ID":"425d8351-8597-4696-99f6-8ce744082d84","Type":"ContainerStarted","Data":"878081f8003874024091258436e80ae30ccc82032d0b28ca7281465e30f81ea2"} Mar 10 19:07:39 crc kubenswrapper[4861]: I0310 19:07:39.658125 4861 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-6cd66dbd4b-5fzrs" Mar 10 19:07:39 crc kubenswrapper[4861]: I0310 19:07:39.660877 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-677c674df7-dgkn6" event={"ID":"82bb6884-6681-4382-b784-feadaf2a891f","Type":"ContainerStarted","Data":"10dfcb0d8c9c0d88bbcfc47a6f361240065d1c83192361b34e9a0b1623f0ad3a"} Mar 10 19:07:39 crc kubenswrapper[4861]: I0310 19:07:39.661240 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-677c674df7-dgkn6" Mar 10 19:07:39 crc kubenswrapper[4861]: I0310 19:07:39.663409 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-5f4f55cb5c-wvkvr" event={"ID":"8b85720c-9adc-46f0-835b-3a1709df2126","Type":"ContainerStarted","Data":"96ecc3278704d2570d3b4db0624cff35ec5cfd71a81c791cc4d7aab202d531ac"} Mar 10 19:07:39 crc kubenswrapper[4861]: I0310 19:07:39.663753 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-5f4f55cb5c-wvkvr" Mar 10 19:07:39 crc kubenswrapper[4861]: I0310 19:07:39.668994 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-66d56f6ff4-4mh8f" event={"ID":"1596b973-4b23-48a6-9924-8e98b7535a61","Type":"ContainerStarted","Data":"d52c2fb3c6212082f45981a104148a3585b582537d1405dbe1ed1211153bf7bb"} Mar 10 19:07:39 crc kubenswrapper[4861]: I0310 19:07:39.669364 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-66d56f6ff4-4mh8f" Mar 10 19:07:39 crc kubenswrapper[4861]: I0310 19:07:39.672970 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-5964f64c48-9bhqz" 
event={"ID":"94ef6e9b-1270-4992-9d58-902e82b52294","Type":"ContainerStarted","Data":"15002ecb690ae0919d876722c16e60ea7038d92bda281d26e6c6774cf8644c2c"} Mar 10 19:07:39 crc kubenswrapper[4861]: I0310 19:07:39.673313 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-5964f64c48-9bhqz" Mar 10 19:07:39 crc kubenswrapper[4861]: I0310 19:07:39.675231 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-6dd88c6f67-72px6" event={"ID":"3e4cfd72-8809-4af1-90da-66587816e46e","Type":"ContainerStarted","Data":"17177934dfe8b534a7da49f3e0e3d7bd18f5d83801cd26b046ae798056329980"} Mar 10 19:07:39 crc kubenswrapper[4861]: I0310 19:07:39.675573 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-6dd88c6f67-72px6" Mar 10 19:07:39 crc kubenswrapper[4861]: I0310 19:07:39.678322 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-bbc5b68f9-82tsp" event={"ID":"b2313526-2f50-43bc-b6cd-45da343f91ae","Type":"ContainerStarted","Data":"eeafcbf7147bbed89b9c12254855e2c9dd27e6742e8549ea7e1991c30d93dca4"} Mar 10 19:07:39 crc kubenswrapper[4861]: I0310 19:07:39.678635 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-bbc5b68f9-82tsp" Mar 10 19:07:39 crc kubenswrapper[4861]: I0310 19:07:39.684572 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-658d4cdd5-qxj54" event={"ID":"e05b8481-3bf6-416e-a404-61ce227c2350","Type":"ContainerStarted","Data":"85208d38508da43b54c0dd07f020cb2d49d60b5c374bfc4ea0405ec0f8ef41bf"} Mar 10 19:07:39 crc kubenswrapper[4861]: I0310 19:07:39.685071 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack-operators/mariadb-operator-controller-manager-658d4cdd5-qxj54" Mar 10 19:07:39 crc kubenswrapper[4861]: I0310 19:07:39.688258 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-677bd678f7-rzrtw" event={"ID":"54044555-52fe-44c6-9e47-7f2748a8b114","Type":"ContainerStarted","Data":"143ee5614d9d02312683d1f96904dc44c6735b68fdb01bb8b688a399b52d0edf"} Mar 10 19:07:39 crc kubenswrapper[4861]: I0310 19:07:39.688574 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-677bd678f7-rzrtw" Mar 10 19:07:39 crc kubenswrapper[4861]: I0310 19:07:39.690759 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-6d9d6b584d-265hs" event={"ID":"ce725c26-b020-402f-ad0c-ae44f307e21e","Type":"ContainerStarted","Data":"6f219602311261967fa743310784e5d96d2cb9bd1b5e203e6e92852440f52a68"} Mar 10 19:07:39 crc kubenswrapper[4861]: I0310 19:07:39.691069 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-6d9d6b584d-265hs" Mar 10 19:07:39 crc kubenswrapper[4861]: I0310 19:07:39.692262 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-984cd4dcf-d7ph4" event={"ID":"239d1170-1e62-44e7-a07d-10ca9adfa28e","Type":"ContainerStarted","Data":"550fab47260ff04141fcaa5049a01d799cd79f6fe0939dacc2c44246d46166a9"} Mar 10 19:07:39 crc kubenswrapper[4861]: I0310 19:07:39.692580 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-984cd4dcf-d7ph4" Mar 10 19:07:39 crc kubenswrapper[4861]: I0310 19:07:39.693393 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6647d7885fkj9zr" 
event={"ID":"1f2fa0f6-38b3-4764-ab09-dc4064f986fe","Type":"ContainerStarted","Data":"0839fe38c82202bd70e6f9e78ae0983027c34a125ea1d4905100298630c8da3e"} Mar 10 19:07:39 crc kubenswrapper[4861]: I0310 19:07:39.695802 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/test-operator-controller-manager-5c5cb9c4d7-kttwb" podStartSLOduration=6.112718314 podStartE2EDuration="24.695793912s" podCreationTimestamp="2026-03-10 19:07:15 +0000 UTC" firstStartedPulling="2026-03-10 19:07:16.829520709 +0000 UTC m=+1180.592956669" lastFinishedPulling="2026-03-10 19:07:35.412596307 +0000 UTC m=+1199.176032267" observedRunningTime="2026-03-10 19:07:39.689597436 +0000 UTC m=+1203.453033396" watchObservedRunningTime="2026-03-10 19:07:39.695793912 +0000 UTC m=+1203.459229872" Mar 10 19:07:39 crc kubenswrapper[4861]: I0310 19:07:39.711225 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-574d45c66c-sh4lp" event={"ID":"a55e1a2b-1e10-41ee-b376-c879ec07d2af","Type":"ContainerStarted","Data":"eac60decb981f740a74f367b2d816f0aacb19af658c19767770ee9be268e26f4"} Mar 10 19:07:39 crc kubenswrapper[4861]: I0310 19:07:39.711866 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-574d45c66c-sh4lp" Mar 10 19:07:39 crc kubenswrapper[4861]: I0310 19:07:39.719276 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-569cc54c5-qvqhp" event={"ID":"9f600d22-c94c-4d46-b024-d4674aca2d8d","Type":"ContainerStarted","Data":"879bfddc6d713cb8fcfa9e90c26fcf5f8f2ef2079e3dfcc88f0aac3db95a196b"} Mar 10 19:07:39 crc kubenswrapper[4861]: I0310 19:07:39.719899 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-569cc54c5-qvqhp" Mar 10 19:07:39 crc kubenswrapper[4861]: I0310 19:07:39.730018 4861 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-6679ddfdc7-5sqmh" event={"ID":"33e7a010-7cd8-4eef-b19f-30ea72eb0a03","Type":"ContainerStarted","Data":"83178733bc33aaed3504d0b0034fd7ce71cf0c8926fb31e7785094b1c07fed1e"} Mar 10 19:07:39 crc kubenswrapper[4861]: I0310 19:07:39.730056 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-6679ddfdc7-5sqmh" event={"ID":"33e7a010-7cd8-4eef-b19f-30ea72eb0a03","Type":"ContainerStarted","Data":"b86fa729154a5e06512933d8d3d2328352b9331ce2ab6560ed6559f0b98c0f39"} Mar 10 19:07:39 crc kubenswrapper[4861]: I0310 19:07:39.730639 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-6679ddfdc7-5sqmh" Mar 10 19:07:39 crc kubenswrapper[4861]: I0310 19:07:39.744599 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ovn-operator-controller-manager-bbc5b68f9-82tsp" podStartSLOduration=3.270594395 podStartE2EDuration="24.74458267s" podCreationTimestamp="2026-03-10 19:07:15 +0000 UTC" firstStartedPulling="2026-03-10 19:07:16.994297477 +0000 UTC m=+1180.757733437" lastFinishedPulling="2026-03-10 19:07:38.468285732 +0000 UTC m=+1202.231721712" observedRunningTime="2026-03-10 19:07:39.735673039 +0000 UTC m=+1203.499109009" watchObservedRunningTime="2026-03-10 19:07:39.74458267 +0000 UTC m=+1203.508018630" Mar 10 19:07:39 crc kubenswrapper[4861]: I0310 19:07:39.777935 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/designate-operator-controller-manager-66d56f6ff4-4mh8f" podStartSLOduration=7.029371586 podStartE2EDuration="24.777922343s" podCreationTimestamp="2026-03-10 19:07:15 +0000 UTC" firstStartedPulling="2026-03-10 19:07:16.371024999 +0000 UTC m=+1180.134460949" lastFinishedPulling="2026-03-10 19:07:34.119575746 +0000 UTC m=+1197.883011706" 
observedRunningTime="2026-03-10 19:07:39.775248437 +0000 UTC m=+1203.538684407" watchObservedRunningTime="2026-03-10 19:07:39.777922343 +0000 UTC m=+1203.541358303" Mar 10 19:07:39 crc kubenswrapper[4861]: I0310 19:07:39.814691 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-controller-manager-5964f64c48-9bhqz" podStartSLOduration=6.094630414 podStartE2EDuration="24.814674892s" podCreationTimestamp="2026-03-10 19:07:15 +0000 UTC" firstStartedPulling="2026-03-10 19:07:16.699864845 +0000 UTC m=+1180.463300795" lastFinishedPulling="2026-03-10 19:07:35.419909313 +0000 UTC m=+1199.183345273" observedRunningTime="2026-03-10 19:07:39.807261542 +0000 UTC m=+1203.570697512" watchObservedRunningTime="2026-03-10 19:07:39.814674892 +0000 UTC m=+1203.578110852" Mar 10 19:07:39 crc kubenswrapper[4861]: I0310 19:07:39.843225 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/octavia-operator-controller-manager-5f4f55cb5c-wvkvr" podStartSLOduration=6.188698602 podStartE2EDuration="24.843205348s" podCreationTimestamp="2026-03-10 19:07:15 +0000 UTC" firstStartedPulling="2026-03-10 19:07:16.756818344 +0000 UTC m=+1180.520254304" lastFinishedPulling="2026-03-10 19:07:35.41132509 +0000 UTC m=+1199.174761050" observedRunningTime="2026-03-10 19:07:39.837588069 +0000 UTC m=+1203.601024039" watchObservedRunningTime="2026-03-10 19:07:39.843205348 +0000 UTC m=+1203.606641308" Mar 10 19:07:39 crc kubenswrapper[4861]: I0310 19:07:39.867803 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/telemetry-operator-controller-manager-6cd66dbd4b-5fzrs" podStartSLOduration=3.124375411 podStartE2EDuration="24.867726221s" podCreationTimestamp="2026-03-10 19:07:15 +0000 UTC" firstStartedPulling="2026-03-10 19:07:16.871778614 +0000 UTC m=+1180.635214574" lastFinishedPulling="2026-03-10 19:07:38.615129414 +0000 UTC m=+1202.378565384" 
observedRunningTime="2026-03-10 19:07:39.862252097 +0000 UTC m=+1203.625688067" watchObservedRunningTime="2026-03-10 19:07:39.867726221 +0000 UTC m=+1203.631162181" Mar 10 19:07:39 crc kubenswrapper[4861]: I0310 19:07:39.886284 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-controller-manager-677c674df7-dgkn6" podStartSLOduration=3.192148248 podStartE2EDuration="24.886265666s" podCreationTimestamp="2026-03-10 19:07:15 +0000 UTC" firstStartedPulling="2026-03-10 19:07:16.873606736 +0000 UTC m=+1180.637042696" lastFinishedPulling="2026-03-10 19:07:38.567724154 +0000 UTC m=+1202.331160114" observedRunningTime="2026-03-10 19:07:39.882298403 +0000 UTC m=+1203.645734373" watchObservedRunningTime="2026-03-10 19:07:39.886265666 +0000 UTC m=+1203.649701626" Mar 10 19:07:39 crc kubenswrapper[4861]: I0310 19:07:39.907872 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-6dd88c6f67-72px6" podStartSLOduration=6.4841582540000005 podStartE2EDuration="24.907856546s" podCreationTimestamp="2026-03-10 19:07:15 +0000 UTC" firstStartedPulling="2026-03-10 19:07:16.989351597 +0000 UTC m=+1180.752787557" lastFinishedPulling="2026-03-10 19:07:35.413049889 +0000 UTC m=+1199.176485849" observedRunningTime="2026-03-10 19:07:39.902375611 +0000 UTC m=+1203.665811591" watchObservedRunningTime="2026-03-10 19:07:39.907856546 +0000 UTC m=+1203.671292506" Mar 10 19:07:39 crc kubenswrapper[4861]: I0310 19:07:39.937270 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-controller-manager-677bd678f7-rzrtw" podStartSLOduration=6.242613997 podStartE2EDuration="24.937249467s" podCreationTimestamp="2026-03-10 19:07:15 +0000 UTC" firstStartedPulling="2026-03-10 19:07:15.985054639 +0000 UTC m=+1179.748490599" lastFinishedPulling="2026-03-10 19:07:34.679690109 +0000 UTC m=+1198.443126069" 
observedRunningTime="2026-03-10 19:07:39.935217109 +0000 UTC m=+1203.698653079" watchObservedRunningTime="2026-03-10 19:07:39.937249467 +0000 UTC m=+1203.700685427" Mar 10 19:07:39 crc kubenswrapper[4861]: I0310 19:07:39.961940 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-controller-manager-658d4cdd5-qxj54" podStartSLOduration=6.353645455 podStartE2EDuration="24.961925445s" podCreationTimestamp="2026-03-10 19:07:15 +0000 UTC" firstStartedPulling="2026-03-10 19:07:16.812316033 +0000 UTC m=+1180.575751993" lastFinishedPulling="2026-03-10 19:07:35.420596033 +0000 UTC m=+1199.184031983" observedRunningTime="2026-03-10 19:07:39.9557844 +0000 UTC m=+1203.719220370" watchObservedRunningTime="2026-03-10 19:07:39.961925445 +0000 UTC m=+1203.725361405" Mar 10 19:07:40 crc kubenswrapper[4861]: I0310 19:07:40.011503 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/placement-operator-controller-manager-574d45c66c-sh4lp" podStartSLOduration=3.315401301 podStartE2EDuration="25.011488545s" podCreationTimestamp="2026-03-10 19:07:15 +0000 UTC" firstStartedPulling="2026-03-10 19:07:16.863490879 +0000 UTC m=+1180.626926839" lastFinishedPulling="2026-03-10 19:07:38.559578123 +0000 UTC m=+1202.323014083" observedRunningTime="2026-03-10 19:07:40.009291064 +0000 UTC m=+1203.772727034" watchObservedRunningTime="2026-03-10 19:07:40.011488545 +0000 UTC m=+1203.774924505" Mar 10 19:07:40 crc kubenswrapper[4861]: I0310 19:07:40.097834 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/cinder-operator-controller-manager-984cd4dcf-d7ph4" podStartSLOduration=5.909798319 podStartE2EDuration="25.097820316s" podCreationTimestamp="2026-03-10 19:07:15 +0000 UTC" firstStartedPulling="2026-03-10 19:07:16.225433274 +0000 UTC m=+1179.988869234" lastFinishedPulling="2026-03-10 19:07:35.413455261 +0000 UTC m=+1199.176891231" 
observedRunningTime="2026-03-10 19:07:40.056030744 +0000 UTC m=+1203.819466724" watchObservedRunningTime="2026-03-10 19:07:40.097820316 +0000 UTC m=+1203.861256266" Mar 10 19:07:40 crc kubenswrapper[4861]: I0310 19:07:40.099494 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-manager-6679ddfdc7-5sqmh" podStartSLOduration=25.099488143 podStartE2EDuration="25.099488143s" podCreationTimestamp="2026-03-10 19:07:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-10 19:07:40.095965563 +0000 UTC m=+1203.859401543" watchObservedRunningTime="2026-03-10 19:07:40.099488143 +0000 UTC m=+1203.862924103" Mar 10 19:07:40 crc kubenswrapper[4861]: I0310 19:07:40.146153 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-controller-manager-569cc54c5-qvqhp" podStartSLOduration=3.487107245 podStartE2EDuration="25.146138831s" podCreationTimestamp="2026-03-10 19:07:15 +0000 UTC" firstStartedPulling="2026-03-10 19:07:16.881684804 +0000 UTC m=+1180.645120764" lastFinishedPulling="2026-03-10 19:07:38.54071639 +0000 UTC m=+1202.304152350" observedRunningTime="2026-03-10 19:07:40.143093945 +0000 UTC m=+1203.906529915" watchObservedRunningTime="2026-03-10 19:07:40.146138831 +0000 UTC m=+1203.909574791" Mar 10 19:07:40 crc kubenswrapper[4861]: I0310 19:07:40.164765 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-controller-manager-6d9d6b584d-265hs" podStartSLOduration=6.507601147 podStartE2EDuration="25.164749738s" podCreationTimestamp="2026-03-10 19:07:15 +0000 UTC" firstStartedPulling="2026-03-10 19:07:16.754977313 +0000 UTC m=+1180.518413273" lastFinishedPulling="2026-03-10 19:07:35.412125904 +0000 UTC m=+1199.175561864" observedRunningTime="2026-03-10 19:07:40.162171195 +0000 UTC 
m=+1203.925607165" watchObservedRunningTime="2026-03-10 19:07:40.164749738 +0000 UTC m=+1203.928185698" Mar 10 19:07:41 crc kubenswrapper[4861]: I0310 19:07:41.776616 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6647d7885fkj9zr" event={"ID":"1f2fa0f6-38b3-4764-ab09-dc4064f986fe","Type":"ContainerStarted","Data":"be6efefaa5f421273ae6a78328378f8f76cf068dbf3edbd281f9ad4a69a3e99d"} Mar 10 19:07:41 crc kubenswrapper[4861]: I0310 19:07:41.828113 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6647d7885fkj9zr" podStartSLOduration=23.999159259 podStartE2EDuration="26.828083676s" podCreationTimestamp="2026-03-10 19:07:15 +0000 UTC" firstStartedPulling="2026-03-10 19:07:38.799501516 +0000 UTC m=+1202.562937466" lastFinishedPulling="2026-03-10 19:07:41.628425913 +0000 UTC m=+1205.391861883" observedRunningTime="2026-03-10 19:07:41.814529403 +0000 UTC m=+1205.577965383" watchObservedRunningTime="2026-03-10 19:07:41.828083676 +0000 UTC m=+1205.591519676" Mar 10 19:07:42 crc kubenswrapper[4861]: I0310 19:07:42.785313 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-6bbb499bbc-bdscg" event={"ID":"16f833b1-3203-4532-a5f4-bc8769cd0932","Type":"ContainerStarted","Data":"a99d602e3892263665f687a14ecdd6901ddb80fee7f1949afb347fbc176a3626"} Mar 10 19:07:42 crc kubenswrapper[4861]: I0310 19:07:42.785794 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6647d7885fkj9zr" Mar 10 19:07:42 crc kubenswrapper[4861]: I0310 19:07:42.786002 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-6bbb499bbc-bdscg" Mar 10 19:07:42 crc kubenswrapper[4861]: I0310 19:07:42.827159 4861 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ironic-operator-controller-manager-6bbb499bbc-bdscg" podStartSLOduration=2.055321192 podStartE2EDuration="27.827136726s" podCreationTimestamp="2026-03-10 19:07:15 +0000 UTC" firstStartedPulling="2026-03-10 19:07:16.728780792 +0000 UTC m=+1180.492216772" lastFinishedPulling="2026-03-10 19:07:42.500596316 +0000 UTC m=+1206.264032306" observedRunningTime="2026-03-10 19:07:42.823794232 +0000 UTC m=+1206.587230242" watchObservedRunningTime="2026-03-10 19:07:42.827136726 +0000 UTC m=+1206.590572726" Mar 10 19:07:45 crc kubenswrapper[4861]: I0310 19:07:45.400050 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-677bd678f7-rzrtw" Mar 10 19:07:45 crc kubenswrapper[4861]: I0310 19:07:45.474763 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-984cd4dcf-d7ph4" Mar 10 19:07:45 crc kubenswrapper[4861]: I0310 19:07:45.486181 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-66d56f6ff4-4mh8f" Mar 10 19:07:45 crc kubenswrapper[4861]: I0310 19:07:45.506025 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-5964f64c48-9bhqz" Mar 10 19:07:45 crc kubenswrapper[4861]: I0310 19:07:45.554974 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-6d9d6b584d-265hs" Mar 10 19:07:45 crc kubenswrapper[4861]: I0310 19:07:45.784292 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-658d4cdd5-qxj54" Mar 10 19:07:45 crc kubenswrapper[4861]: I0310 19:07:45.805895 4861 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-5f4f55cb5c-wvkvr" Mar 10 19:07:45 crc kubenswrapper[4861]: I0310 19:07:45.809301 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-569cc54c5-qvqhp" Mar 10 19:07:45 crc kubenswrapper[4861]: I0310 19:07:45.860647 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-677c674df7-dgkn6" Mar 10 19:07:45 crc kubenswrapper[4861]: I0310 19:07:45.890492 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-6cd66dbd4b-5fzrs" Mar 10 19:07:46 crc kubenswrapper[4861]: I0310 19:07:46.051457 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-5c5cb9c4d7-kttwb" Mar 10 19:07:46 crc kubenswrapper[4861]: I0310 19:07:46.121297 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-574d45c66c-sh4lp" Mar 10 19:07:46 crc kubenswrapper[4861]: I0310 19:07:46.133024 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-bbc5b68f9-82tsp" Mar 10 19:07:46 crc kubenswrapper[4861]: I0310 19:07:46.191581 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-6dd88c6f67-72px6" Mar 10 19:07:46 crc kubenswrapper[4861]: I0310 19:07:46.848668 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-776c5696bf-zds26" event={"ID":"b4bddf8d-6696-491b-9d79-e057d2d18c14","Type":"ContainerStarted","Data":"b000d1abd2d969fbc674bf59dd4dbe40a56b045ab2537037a6b185bf8d8d923d"} Mar 10 19:07:46 crc 
kubenswrapper[4861]: I0310 19:07:46.849974 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-776c5696bf-zds26" Mar 10 19:07:46 crc kubenswrapper[4861]: I0310 19:07:46.864845 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-9bfll" event={"ID":"19b27de0-6f4a-4017-aec7-eea80076189c","Type":"ContainerStarted","Data":"59f6a90f3c2d449ab76edd369308a92ddd58ffc8378f98858d999a006510f350"} Mar 10 19:07:46 crc kubenswrapper[4861]: I0310 19:07:46.870873 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-684f77d66d-mf7c8" event={"ID":"082164ca-006f-4217-a664-a81f24fb7f9c","Type":"ContainerStarted","Data":"ce67466dc91e5732f1352222e20f850b34c1fe5604877faf93052669f976d53c"} Mar 10 19:07:46 crc kubenswrapper[4861]: I0310 19:07:46.871479 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-684f77d66d-mf7c8" Mar 10 19:07:46 crc kubenswrapper[4861]: I0310 19:07:46.874053 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/neutron-operator-controller-manager-776c5696bf-zds26" podStartSLOduration=2.846454105 podStartE2EDuration="31.874034782s" podCreationTimestamp="2026-03-10 19:07:15 +0000 UTC" firstStartedPulling="2026-03-10 19:07:16.823699885 +0000 UTC m=+1180.587135845" lastFinishedPulling="2026-03-10 19:07:45.851280552 +0000 UTC m=+1209.614716522" observedRunningTime="2026-03-10 19:07:46.864932675 +0000 UTC m=+1210.628368645" watchObservedRunningTime="2026-03-10 19:07:46.874034782 +0000 UTC m=+1210.637470742" Mar 10 19:07:46 crc kubenswrapper[4861]: I0310 19:07:46.879096 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-77b6666d85-q76xg" 
event={"ID":"56a6874c-f929-4490-a247-52542d4aa8f1","Type":"ContainerStarted","Data":"13fefe34a2ba218e6bef0b8e759c5d6005181add38ff4170ba23915850c698ee"} Mar 10 19:07:46 crc kubenswrapper[4861]: I0310 19:07:46.879693 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-77b6666d85-q76xg" Mar 10 19:07:46 crc kubenswrapper[4861]: I0310 19:07:46.888748 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-9bfll" podStartSLOduration=2.6574779939999997 podStartE2EDuration="31.888731707s" podCreationTimestamp="2026-03-10 19:07:15 +0000 UTC" firstStartedPulling="2026-03-10 19:07:17.175997724 +0000 UTC m=+1180.939433684" lastFinishedPulling="2026-03-10 19:07:46.407251427 +0000 UTC m=+1210.170687397" observedRunningTime="2026-03-10 19:07:46.886469163 +0000 UTC m=+1210.649905123" watchObservedRunningTime="2026-03-10 19:07:46.888731707 +0000 UTC m=+1210.652167667" Mar 10 19:07:46 crc kubenswrapper[4861]: I0310 19:07:46.936482 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/heat-operator-controller-manager-77b6666d85-q76xg" podStartSLOduration=2.969558005 podStartE2EDuration="31.936461076s" podCreationTimestamp="2026-03-10 19:07:15 +0000 UTC" firstStartedPulling="2026-03-10 19:07:16.719176571 +0000 UTC m=+1180.482612541" lastFinishedPulling="2026-03-10 19:07:45.686079642 +0000 UTC m=+1209.449515612" observedRunningTime="2026-03-10 19:07:46.909967218 +0000 UTC m=+1210.673403188" watchObservedRunningTime="2026-03-10 19:07:46.936461076 +0000 UTC m=+1210.699897036" Mar 10 19:07:46 crc kubenswrapper[4861]: I0310 19:07:46.938679 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-684f77d66d-mf7c8" podStartSLOduration=2.212103964 podStartE2EDuration="31.938673819s" 
podCreationTimestamp="2026-03-10 19:07:15 +0000 UTC" firstStartedPulling="2026-03-10 19:07:16.732886068 +0000 UTC m=+1180.496322028" lastFinishedPulling="2026-03-10 19:07:46.459455913 +0000 UTC m=+1210.222891883" observedRunningTime="2026-03-10 19:07:46.932913436 +0000 UTC m=+1210.696349416" watchObservedRunningTime="2026-03-10 19:07:46.938673819 +0000 UTC m=+1210.702109779" Mar 10 19:07:47 crc kubenswrapper[4861]: I0310 19:07:47.204335 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/acdc9485-305f-401f-91bb-749a9e1e3c89-cert\") pod \"infra-operator-controller-manager-5995f4446f-z66mq\" (UID: \"acdc9485-305f-401f-91bb-749a9e1e3c89\") " pod="openstack-operators/infra-operator-controller-manager-5995f4446f-z66mq" Mar 10 19:07:47 crc kubenswrapper[4861]: I0310 19:07:47.210693 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/acdc9485-305f-401f-91bb-749a9e1e3c89-cert\") pod \"infra-operator-controller-manager-5995f4446f-z66mq\" (UID: \"acdc9485-305f-401f-91bb-749a9e1e3c89\") " pod="openstack-operators/infra-operator-controller-manager-5995f4446f-z66mq" Mar 10 19:07:47 crc kubenswrapper[4861]: I0310 19:07:47.339014 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-dg2ls" Mar 10 19:07:47 crc kubenswrapper[4861]: I0310 19:07:47.348400 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-5995f4446f-z66mq" Mar 10 19:07:47 crc kubenswrapper[4861]: I0310 19:07:47.857917 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-5995f4446f-z66mq"] Mar 10 19:07:47 crc kubenswrapper[4861]: W0310 19:07:47.859913 4861 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podacdc9485_305f_401f_91bb_749a9e1e3c89.slice/crio-e98b5ea41295d38847b0800677a36a26c61eaec84d37e3dacdde4ae09d9b823e WatchSource:0}: Error finding container e98b5ea41295d38847b0800677a36a26c61eaec84d37e3dacdde4ae09d9b823e: Status 404 returned error can't find the container with id e98b5ea41295d38847b0800677a36a26c61eaec84d37e3dacdde4ae09d9b823e Mar 10 19:07:47 crc kubenswrapper[4861]: I0310 19:07:47.889217 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-68f45f9d9f-864bh" event={"ID":"ee7bce9d-5057-4eb7-af07-af66a5bc7473","Type":"ContainerStarted","Data":"d171f26bc0c1143eca459c9fc60b06c3f7eb94cbe8c1fef16fd85d07aca624cf"} Mar 10 19:07:47 crc kubenswrapper[4861]: I0310 19:07:47.889489 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-68f45f9d9f-864bh" Mar 10 19:07:47 crc kubenswrapper[4861]: I0310 19:07:47.890855 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-5995f4446f-z66mq" event={"ID":"acdc9485-305f-401f-91bb-749a9e1e3c89","Type":"ContainerStarted","Data":"e98b5ea41295d38847b0800677a36a26c61eaec84d37e3dacdde4ae09d9b823e"} Mar 10 19:07:47 crc kubenswrapper[4861]: I0310 19:07:47.905766 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/manila-operator-controller-manager-68f45f9d9f-864bh" podStartSLOduration=2.137144626 
podStartE2EDuration="32.905748006s" podCreationTimestamp="2026-03-10 19:07:15 +0000 UTC" firstStartedPulling="2026-03-10 19:07:16.722951237 +0000 UTC m=+1180.486387197" lastFinishedPulling="2026-03-10 19:07:47.491554607 +0000 UTC m=+1211.254990577" observedRunningTime="2026-03-10 19:07:47.903169433 +0000 UTC m=+1211.666605403" watchObservedRunningTime="2026-03-10 19:07:47.905748006 +0000 UTC m=+1211.669183966" Mar 10 19:07:51 crc kubenswrapper[4861]: I0310 19:07:51.415661 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6647d7885fkj9zr" Mar 10 19:07:51 crc kubenswrapper[4861]: I0310 19:07:51.889583 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-6679ddfdc7-5sqmh" Mar 10 19:07:55 crc kubenswrapper[4861]: I0310 19:07:55.519995 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-77b6666d85-q76xg" Mar 10 19:07:55 crc kubenswrapper[4861]: I0310 19:07:55.576898 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-6bbb499bbc-bdscg" Mar 10 19:07:55 crc kubenswrapper[4861]: I0310 19:07:55.632430 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-684f77d66d-mf7c8" Mar 10 19:07:55 crc kubenswrapper[4861]: I0310 19:07:55.659215 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-68f45f9d9f-864bh" Mar 10 19:07:55 crc kubenswrapper[4861]: I0310 19:07:55.735867 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-776c5696bf-zds26" Mar 10 19:08:00 crc kubenswrapper[4861]: I0310 19:08:00.158495 
4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29552828-8qbl9"] Mar 10 19:08:00 crc kubenswrapper[4861]: I0310 19:08:00.161401 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29552828-8qbl9" Mar 10 19:08:00 crc kubenswrapper[4861]: I0310 19:08:00.164891 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 10 19:08:00 crc kubenswrapper[4861]: I0310 19:08:00.165367 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-gfbj2" Mar 10 19:08:00 crc kubenswrapper[4861]: I0310 19:08:00.165765 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 10 19:08:00 crc kubenswrapper[4861]: I0310 19:08:00.174549 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29552828-8qbl9"] Mar 10 19:08:00 crc kubenswrapper[4861]: I0310 19:08:00.246063 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zzs7s\" (UniqueName: \"kubernetes.io/projected/4dfe3138-3c9e-401e-8b34-7c0990829691-kube-api-access-zzs7s\") pod \"auto-csr-approver-29552828-8qbl9\" (UID: \"4dfe3138-3c9e-401e-8b34-7c0990829691\") " pod="openshift-infra/auto-csr-approver-29552828-8qbl9" Mar 10 19:08:00 crc kubenswrapper[4861]: I0310 19:08:00.347135 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zzs7s\" (UniqueName: \"kubernetes.io/projected/4dfe3138-3c9e-401e-8b34-7c0990829691-kube-api-access-zzs7s\") pod \"auto-csr-approver-29552828-8qbl9\" (UID: \"4dfe3138-3c9e-401e-8b34-7c0990829691\") " pod="openshift-infra/auto-csr-approver-29552828-8qbl9" Mar 10 19:08:00 crc kubenswrapper[4861]: I0310 19:08:00.377591 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-zzs7s\" (UniqueName: \"kubernetes.io/projected/4dfe3138-3c9e-401e-8b34-7c0990829691-kube-api-access-zzs7s\") pod \"auto-csr-approver-29552828-8qbl9\" (UID: \"4dfe3138-3c9e-401e-8b34-7c0990829691\") " pod="openshift-infra/auto-csr-approver-29552828-8qbl9" Mar 10 19:08:00 crc kubenswrapper[4861]: I0310 19:08:00.570155 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29552828-8qbl9" Mar 10 19:08:01 crc kubenswrapper[4861]: W0310 19:08:01.065526 4861 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4dfe3138_3c9e_401e_8b34_7c0990829691.slice/crio-4c2ee38a85c62324ec44a9b8f27851a7257bcddbd657c1f2a83b24896947a44f WatchSource:0}: Error finding container 4c2ee38a85c62324ec44a9b8f27851a7257bcddbd657c1f2a83b24896947a44f: Status 404 returned error can't find the container with id 4c2ee38a85c62324ec44a9b8f27851a7257bcddbd657c1f2a83b24896947a44f Mar 10 19:08:01 crc kubenswrapper[4861]: I0310 19:08:01.067100 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29552828-8qbl9"] Mar 10 19:08:02 crc kubenswrapper[4861]: I0310 19:08:02.005937 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-5995f4446f-z66mq" event={"ID":"acdc9485-305f-401f-91bb-749a9e1e3c89","Type":"ContainerStarted","Data":"771331475bd6d8105042ff4d4aa133da1dde4c2d9e9f43a9ffe80d967af685c4"} Mar 10 19:08:02 crc kubenswrapper[4861]: I0310 19:08:02.006320 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-5995f4446f-z66mq" Mar 10 19:08:02 crc kubenswrapper[4861]: I0310 19:08:02.007427 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29552828-8qbl9" 
event={"ID":"4dfe3138-3c9e-401e-8b34-7c0990829691","Type":"ContainerStarted","Data":"4c2ee38a85c62324ec44a9b8f27851a7257bcddbd657c1f2a83b24896947a44f"} Mar 10 19:08:02 crc kubenswrapper[4861]: I0310 19:08:02.038964 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-5995f4446f-z66mq" podStartSLOduration=34.091455032 podStartE2EDuration="47.038928585s" podCreationTimestamp="2026-03-10 19:07:15 +0000 UTC" firstStartedPulling="2026-03-10 19:07:47.86237452 +0000 UTC m=+1211.625810480" lastFinishedPulling="2026-03-10 19:08:00.809848043 +0000 UTC m=+1224.573284033" observedRunningTime="2026-03-10 19:08:02.029759336 +0000 UTC m=+1225.793195356" watchObservedRunningTime="2026-03-10 19:08:02.038928585 +0000 UTC m=+1225.802364585" Mar 10 19:08:03 crc kubenswrapper[4861]: I0310 19:08:03.026610 4861 generic.go:334] "Generic (PLEG): container finished" podID="4dfe3138-3c9e-401e-8b34-7c0990829691" containerID="204dd70fb98d293a2d4953f3f39ed6ce4896a345296694989ea6431bab405965" exitCode=0 Mar 10 19:08:03 crc kubenswrapper[4861]: I0310 19:08:03.030061 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29552828-8qbl9" event={"ID":"4dfe3138-3c9e-401e-8b34-7c0990829691","Type":"ContainerDied","Data":"204dd70fb98d293a2d4953f3f39ed6ce4896a345296694989ea6431bab405965"} Mar 10 19:08:04 crc kubenswrapper[4861]: I0310 19:08:04.443319 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29552828-8qbl9" Mar 10 19:08:04 crc kubenswrapper[4861]: I0310 19:08:04.612989 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zzs7s\" (UniqueName: \"kubernetes.io/projected/4dfe3138-3c9e-401e-8b34-7c0990829691-kube-api-access-zzs7s\") pod \"4dfe3138-3c9e-401e-8b34-7c0990829691\" (UID: \"4dfe3138-3c9e-401e-8b34-7c0990829691\") " Mar 10 19:08:04 crc kubenswrapper[4861]: I0310 19:08:04.622095 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4dfe3138-3c9e-401e-8b34-7c0990829691-kube-api-access-zzs7s" (OuterVolumeSpecName: "kube-api-access-zzs7s") pod "4dfe3138-3c9e-401e-8b34-7c0990829691" (UID: "4dfe3138-3c9e-401e-8b34-7c0990829691"). InnerVolumeSpecName "kube-api-access-zzs7s". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 10 19:08:04 crc kubenswrapper[4861]: I0310 19:08:04.715516 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zzs7s\" (UniqueName: \"kubernetes.io/projected/4dfe3138-3c9e-401e-8b34-7c0990829691-kube-api-access-zzs7s\") on node \"crc\" DevicePath \"\"" Mar 10 19:08:05 crc kubenswrapper[4861]: I0310 19:08:05.049956 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29552828-8qbl9" event={"ID":"4dfe3138-3c9e-401e-8b34-7c0990829691","Type":"ContainerDied","Data":"4c2ee38a85c62324ec44a9b8f27851a7257bcddbd657c1f2a83b24896947a44f"} Mar 10 19:08:05 crc kubenswrapper[4861]: I0310 19:08:05.050018 4861 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4c2ee38a85c62324ec44a9b8f27851a7257bcddbd657c1f2a83b24896947a44f" Mar 10 19:08:05 crc kubenswrapper[4861]: I0310 19:08:05.050602 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29552828-8qbl9" Mar 10 19:08:05 crc kubenswrapper[4861]: I0310 19:08:05.540472 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29552822-4blvc"] Mar 10 19:08:05 crc kubenswrapper[4861]: I0310 19:08:05.550767 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29552822-4blvc"] Mar 10 19:08:06 crc kubenswrapper[4861]: I0310 19:08:06.989106 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e63a5345-9a31-4af3-844d-d7766ae8413d" path="/var/lib/kubelet/pods/e63a5345-9a31-4af3-844d-d7766ae8413d/volumes" Mar 10 19:08:07 crc kubenswrapper[4861]: I0310 19:08:07.358461 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-5995f4446f-z66mq" Mar 10 19:08:24 crc kubenswrapper[4861]: I0310 19:08:24.965741 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-589db6c89c-gz25g"] Mar 10 19:08:24 crc kubenswrapper[4861]: E0310 19:08:24.967764 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4dfe3138-3c9e-401e-8b34-7c0990829691" containerName="oc" Mar 10 19:08:24 crc kubenswrapper[4861]: I0310 19:08:24.967865 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="4dfe3138-3c9e-401e-8b34-7c0990829691" containerName="oc" Mar 10 19:08:24 crc kubenswrapper[4861]: I0310 19:08:24.968129 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="4dfe3138-3c9e-401e-8b34-7c0990829691" containerName="oc" Mar 10 19:08:24 crc kubenswrapper[4861]: I0310 19:08:24.969084 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-589db6c89c-gz25g" Mar 10 19:08:24 crc kubenswrapper[4861]: I0310 19:08:24.971367 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns" Mar 10 19:08:24 crc kubenswrapper[4861]: I0310 19:08:24.971680 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt" Mar 10 19:08:24 crc kubenswrapper[4861]: I0310 19:08:24.971855 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-5stnj" Mar 10 19:08:24 crc kubenswrapper[4861]: I0310 19:08:24.971971 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt" Mar 10 19:08:24 crc kubenswrapper[4861]: I0310 19:08:24.983310 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-589db6c89c-gz25g"] Mar 10 19:08:25 crc kubenswrapper[4861]: I0310 19:08:25.020005 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-86bbd886cf-j8cqt"] Mar 10 19:08:25 crc kubenswrapper[4861]: I0310 19:08:25.021155 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-86bbd886cf-j8cqt" Mar 10 19:08:25 crc kubenswrapper[4861]: I0310 19:08:25.027317 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc" Mar 10 19:08:25 crc kubenswrapper[4861]: I0310 19:08:25.030350 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-86bbd886cf-j8cqt"] Mar 10 19:08:25 crc kubenswrapper[4861]: I0310 19:08:25.051294 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/56879dcf-80eb-4c91-8331-b659e3317080-config\") pod \"dnsmasq-dns-86bbd886cf-j8cqt\" (UID: \"56879dcf-80eb-4c91-8331-b659e3317080\") " pod="openstack/dnsmasq-dns-86bbd886cf-j8cqt" Mar 10 19:08:25 crc kubenswrapper[4861]: I0310 19:08:25.051344 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5fjsm\" (UniqueName: \"kubernetes.io/projected/56879dcf-80eb-4c91-8331-b659e3317080-kube-api-access-5fjsm\") pod \"dnsmasq-dns-86bbd886cf-j8cqt\" (UID: \"56879dcf-80eb-4c91-8331-b659e3317080\") " pod="openstack/dnsmasq-dns-86bbd886cf-j8cqt" Mar 10 19:08:25 crc kubenswrapper[4861]: I0310 19:08:25.051382 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5f317cf6-a4e5-42c3-bf6c-a0546f9671d1-config\") pod \"dnsmasq-dns-589db6c89c-gz25g\" (UID: \"5f317cf6-a4e5-42c3-bf6c-a0546f9671d1\") " pod="openstack/dnsmasq-dns-589db6c89c-gz25g" Mar 10 19:08:25 crc kubenswrapper[4861]: I0310 19:08:25.051427 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sxnr9\" (UniqueName: \"kubernetes.io/projected/5f317cf6-a4e5-42c3-bf6c-a0546f9671d1-kube-api-access-sxnr9\") pod \"dnsmasq-dns-589db6c89c-gz25g\" (UID: \"5f317cf6-a4e5-42c3-bf6c-a0546f9671d1\") " 
pod="openstack/dnsmasq-dns-589db6c89c-gz25g" Mar 10 19:08:25 crc kubenswrapper[4861]: I0310 19:08:25.051459 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/56879dcf-80eb-4c91-8331-b659e3317080-dns-svc\") pod \"dnsmasq-dns-86bbd886cf-j8cqt\" (UID: \"56879dcf-80eb-4c91-8331-b659e3317080\") " pod="openstack/dnsmasq-dns-86bbd886cf-j8cqt" Mar 10 19:08:25 crc kubenswrapper[4861]: I0310 19:08:25.152256 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sxnr9\" (UniqueName: \"kubernetes.io/projected/5f317cf6-a4e5-42c3-bf6c-a0546f9671d1-kube-api-access-sxnr9\") pod \"dnsmasq-dns-589db6c89c-gz25g\" (UID: \"5f317cf6-a4e5-42c3-bf6c-a0546f9671d1\") " pod="openstack/dnsmasq-dns-589db6c89c-gz25g" Mar 10 19:08:25 crc kubenswrapper[4861]: I0310 19:08:25.152317 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/56879dcf-80eb-4c91-8331-b659e3317080-dns-svc\") pod \"dnsmasq-dns-86bbd886cf-j8cqt\" (UID: \"56879dcf-80eb-4c91-8331-b659e3317080\") " pod="openstack/dnsmasq-dns-86bbd886cf-j8cqt" Mar 10 19:08:25 crc kubenswrapper[4861]: I0310 19:08:25.152351 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/56879dcf-80eb-4c91-8331-b659e3317080-config\") pod \"dnsmasq-dns-86bbd886cf-j8cqt\" (UID: \"56879dcf-80eb-4c91-8331-b659e3317080\") " pod="openstack/dnsmasq-dns-86bbd886cf-j8cqt" Mar 10 19:08:25 crc kubenswrapper[4861]: I0310 19:08:25.152367 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5fjsm\" (UniqueName: \"kubernetes.io/projected/56879dcf-80eb-4c91-8331-b659e3317080-kube-api-access-5fjsm\") pod \"dnsmasq-dns-86bbd886cf-j8cqt\" (UID: \"56879dcf-80eb-4c91-8331-b659e3317080\") " pod="openstack/dnsmasq-dns-86bbd886cf-j8cqt" Mar 10 
19:08:25 crc kubenswrapper[4861]: I0310 19:08:25.152399 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5f317cf6-a4e5-42c3-bf6c-a0546f9671d1-config\") pod \"dnsmasq-dns-589db6c89c-gz25g\" (UID: \"5f317cf6-a4e5-42c3-bf6c-a0546f9671d1\") " pod="openstack/dnsmasq-dns-589db6c89c-gz25g" Mar 10 19:08:25 crc kubenswrapper[4861]: I0310 19:08:25.153428 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/56879dcf-80eb-4c91-8331-b659e3317080-dns-svc\") pod \"dnsmasq-dns-86bbd886cf-j8cqt\" (UID: \"56879dcf-80eb-4c91-8331-b659e3317080\") " pod="openstack/dnsmasq-dns-86bbd886cf-j8cqt" Mar 10 19:08:25 crc kubenswrapper[4861]: I0310 19:08:25.153448 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5f317cf6-a4e5-42c3-bf6c-a0546f9671d1-config\") pod \"dnsmasq-dns-589db6c89c-gz25g\" (UID: \"5f317cf6-a4e5-42c3-bf6c-a0546f9671d1\") " pod="openstack/dnsmasq-dns-589db6c89c-gz25g" Mar 10 19:08:25 crc kubenswrapper[4861]: I0310 19:08:25.154789 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/56879dcf-80eb-4c91-8331-b659e3317080-config\") pod \"dnsmasq-dns-86bbd886cf-j8cqt\" (UID: \"56879dcf-80eb-4c91-8331-b659e3317080\") " pod="openstack/dnsmasq-dns-86bbd886cf-j8cqt" Mar 10 19:08:25 crc kubenswrapper[4861]: I0310 19:08:25.170884 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sxnr9\" (UniqueName: \"kubernetes.io/projected/5f317cf6-a4e5-42c3-bf6c-a0546f9671d1-kube-api-access-sxnr9\") pod \"dnsmasq-dns-589db6c89c-gz25g\" (UID: \"5f317cf6-a4e5-42c3-bf6c-a0546f9671d1\") " pod="openstack/dnsmasq-dns-589db6c89c-gz25g" Mar 10 19:08:25 crc kubenswrapper[4861]: I0310 19:08:25.170886 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-5fjsm\" (UniqueName: \"kubernetes.io/projected/56879dcf-80eb-4c91-8331-b659e3317080-kube-api-access-5fjsm\") pod \"dnsmasq-dns-86bbd886cf-j8cqt\" (UID: \"56879dcf-80eb-4c91-8331-b659e3317080\") " pod="openstack/dnsmasq-dns-86bbd886cf-j8cqt" Mar 10 19:08:25 crc kubenswrapper[4861]: I0310 19:08:25.286957 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-589db6c89c-gz25g" Mar 10 19:08:25 crc kubenswrapper[4861]: I0310 19:08:25.339289 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-86bbd886cf-j8cqt" Mar 10 19:08:25 crc kubenswrapper[4861]: I0310 19:08:25.888799 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-589db6c89c-gz25g"] Mar 10 19:08:25 crc kubenswrapper[4861]: W0310 19:08:25.893325 4861 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5f317cf6_a4e5_42c3_bf6c_a0546f9671d1.slice/crio-10546adf6b90035b89346660602c57f64096ea964a86d98a1fdfcb2b750eb340 WatchSource:0}: Error finding container 10546adf6b90035b89346660602c57f64096ea964a86d98a1fdfcb2b750eb340: Status 404 returned error can't find the container with id 10546adf6b90035b89346660602c57f64096ea964a86d98a1fdfcb2b750eb340 Mar 10 19:08:25 crc kubenswrapper[4861]: I0310 19:08:25.897242 4861 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 10 19:08:25 crc kubenswrapper[4861]: I0310 19:08:25.937362 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-86bbd886cf-j8cqt"] Mar 10 19:08:25 crc kubenswrapper[4861]: W0310 19:08:25.948491 4861 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod56879dcf_80eb_4c91_8331_b659e3317080.slice/crio-05571df1b2bba304066be1db853e7d29ca3588186da8e05e326f7aec2ca732a3 WatchSource:0}: Error 
finding container 05571df1b2bba304066be1db853e7d29ca3588186da8e05e326f7aec2ca732a3: Status 404 returned error can't find the container with id 05571df1b2bba304066be1db853e7d29ca3588186da8e05e326f7aec2ca732a3 Mar 10 19:08:26 crc kubenswrapper[4861]: I0310 19:08:26.249001 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86bbd886cf-j8cqt" event={"ID":"56879dcf-80eb-4c91-8331-b659e3317080","Type":"ContainerStarted","Data":"05571df1b2bba304066be1db853e7d29ca3588186da8e05e326f7aec2ca732a3"} Mar 10 19:08:26 crc kubenswrapper[4861]: I0310 19:08:26.250712 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-589db6c89c-gz25g" event={"ID":"5f317cf6-a4e5-42c3-bf6c-a0546f9671d1","Type":"ContainerStarted","Data":"10546adf6b90035b89346660602c57f64096ea964a86d98a1fdfcb2b750eb340"} Mar 10 19:08:27 crc kubenswrapper[4861]: I0310 19:08:27.249956 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-589db6c89c-gz25g"] Mar 10 19:08:27 crc kubenswrapper[4861]: I0310 19:08:27.273181 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-78cb4465c9-psclv"] Mar 10 19:08:27 crc kubenswrapper[4861]: I0310 19:08:27.280062 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78cb4465c9-psclv" Mar 10 19:08:27 crc kubenswrapper[4861]: I0310 19:08:27.286578 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78cb4465c9-psclv"] Mar 10 19:08:27 crc kubenswrapper[4861]: I0310 19:08:27.392801 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6a51c99a-f2a2-4db2-8ac4-adf514cad8a3-dns-svc\") pod \"dnsmasq-dns-78cb4465c9-psclv\" (UID: \"6a51c99a-f2a2-4db2-8ac4-adf514cad8a3\") " pod="openstack/dnsmasq-dns-78cb4465c9-psclv" Mar 10 19:08:27 crc kubenswrapper[4861]: I0310 19:08:27.392855 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9qzsj\" (UniqueName: \"kubernetes.io/projected/6a51c99a-f2a2-4db2-8ac4-adf514cad8a3-kube-api-access-9qzsj\") pod \"dnsmasq-dns-78cb4465c9-psclv\" (UID: \"6a51c99a-f2a2-4db2-8ac4-adf514cad8a3\") " pod="openstack/dnsmasq-dns-78cb4465c9-psclv" Mar 10 19:08:27 crc kubenswrapper[4861]: I0310 19:08:27.392960 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6a51c99a-f2a2-4db2-8ac4-adf514cad8a3-config\") pod \"dnsmasq-dns-78cb4465c9-psclv\" (UID: \"6a51c99a-f2a2-4db2-8ac4-adf514cad8a3\") " pod="openstack/dnsmasq-dns-78cb4465c9-psclv" Mar 10 19:08:27 crc kubenswrapper[4861]: I0310 19:08:27.494918 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6a51c99a-f2a2-4db2-8ac4-adf514cad8a3-dns-svc\") pod \"dnsmasq-dns-78cb4465c9-psclv\" (UID: \"6a51c99a-f2a2-4db2-8ac4-adf514cad8a3\") " pod="openstack/dnsmasq-dns-78cb4465c9-psclv" Mar 10 19:08:27 crc kubenswrapper[4861]: I0310 19:08:27.494973 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9qzsj\" (UniqueName: 
\"kubernetes.io/projected/6a51c99a-f2a2-4db2-8ac4-adf514cad8a3-kube-api-access-9qzsj\") pod \"dnsmasq-dns-78cb4465c9-psclv\" (UID: \"6a51c99a-f2a2-4db2-8ac4-adf514cad8a3\") " pod="openstack/dnsmasq-dns-78cb4465c9-psclv" Mar 10 19:08:27 crc kubenswrapper[4861]: I0310 19:08:27.495039 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6a51c99a-f2a2-4db2-8ac4-adf514cad8a3-config\") pod \"dnsmasq-dns-78cb4465c9-psclv\" (UID: \"6a51c99a-f2a2-4db2-8ac4-adf514cad8a3\") " pod="openstack/dnsmasq-dns-78cb4465c9-psclv" Mar 10 19:08:27 crc kubenswrapper[4861]: I0310 19:08:27.495871 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6a51c99a-f2a2-4db2-8ac4-adf514cad8a3-config\") pod \"dnsmasq-dns-78cb4465c9-psclv\" (UID: \"6a51c99a-f2a2-4db2-8ac4-adf514cad8a3\") " pod="openstack/dnsmasq-dns-78cb4465c9-psclv" Mar 10 19:08:27 crc kubenswrapper[4861]: I0310 19:08:27.496396 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6a51c99a-f2a2-4db2-8ac4-adf514cad8a3-dns-svc\") pod \"dnsmasq-dns-78cb4465c9-psclv\" (UID: \"6a51c99a-f2a2-4db2-8ac4-adf514cad8a3\") " pod="openstack/dnsmasq-dns-78cb4465c9-psclv" Mar 10 19:08:27 crc kubenswrapper[4861]: I0310 19:08:27.547869 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9qzsj\" (UniqueName: \"kubernetes.io/projected/6a51c99a-f2a2-4db2-8ac4-adf514cad8a3-kube-api-access-9qzsj\") pod \"dnsmasq-dns-78cb4465c9-psclv\" (UID: \"6a51c99a-f2a2-4db2-8ac4-adf514cad8a3\") " pod="openstack/dnsmasq-dns-78cb4465c9-psclv" Mar 10 19:08:27 crc kubenswrapper[4861]: I0310 19:08:27.575386 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-86bbd886cf-j8cqt"] Mar 10 19:08:27 crc kubenswrapper[4861]: I0310 19:08:27.594175 4861 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/dnsmasq-dns-7c47bcb9f9-w2smt"] Mar 10 19:08:27 crc kubenswrapper[4861]: I0310 19:08:27.595820 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7c47bcb9f9-w2smt" Mar 10 19:08:27 crc kubenswrapper[4861]: I0310 19:08:27.599812 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7c47bcb9f9-w2smt"] Mar 10 19:08:27 crc kubenswrapper[4861]: I0310 19:08:27.604936 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78cb4465c9-psclv" Mar 10 19:08:27 crc kubenswrapper[4861]: I0310 19:08:27.802697 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5adbfde9-b062-424b-8ab9-641e35ace118-dns-svc\") pod \"dnsmasq-dns-7c47bcb9f9-w2smt\" (UID: \"5adbfde9-b062-424b-8ab9-641e35ace118\") " pod="openstack/dnsmasq-dns-7c47bcb9f9-w2smt" Mar 10 19:08:27 crc kubenswrapper[4861]: I0310 19:08:27.803208 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kkmbd\" (UniqueName: \"kubernetes.io/projected/5adbfde9-b062-424b-8ab9-641e35ace118-kube-api-access-kkmbd\") pod \"dnsmasq-dns-7c47bcb9f9-w2smt\" (UID: \"5adbfde9-b062-424b-8ab9-641e35ace118\") " pod="openstack/dnsmasq-dns-7c47bcb9f9-w2smt" Mar 10 19:08:27 crc kubenswrapper[4861]: I0310 19:08:27.803282 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5adbfde9-b062-424b-8ab9-641e35ace118-config\") pod \"dnsmasq-dns-7c47bcb9f9-w2smt\" (UID: \"5adbfde9-b062-424b-8ab9-641e35ace118\") " pod="openstack/dnsmasq-dns-7c47bcb9f9-w2smt" Mar 10 19:08:27 crc kubenswrapper[4861]: I0310 19:08:27.904775 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/5adbfde9-b062-424b-8ab9-641e35ace118-dns-svc\") pod \"dnsmasq-dns-7c47bcb9f9-w2smt\" (UID: \"5adbfde9-b062-424b-8ab9-641e35ace118\") " pod="openstack/dnsmasq-dns-7c47bcb9f9-w2smt" Mar 10 19:08:27 crc kubenswrapper[4861]: I0310 19:08:27.904865 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kkmbd\" (UniqueName: \"kubernetes.io/projected/5adbfde9-b062-424b-8ab9-641e35ace118-kube-api-access-kkmbd\") pod \"dnsmasq-dns-7c47bcb9f9-w2smt\" (UID: \"5adbfde9-b062-424b-8ab9-641e35ace118\") " pod="openstack/dnsmasq-dns-7c47bcb9f9-w2smt" Mar 10 19:08:27 crc kubenswrapper[4861]: I0310 19:08:27.904932 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5adbfde9-b062-424b-8ab9-641e35ace118-config\") pod \"dnsmasq-dns-7c47bcb9f9-w2smt\" (UID: \"5adbfde9-b062-424b-8ab9-641e35ace118\") " pod="openstack/dnsmasq-dns-7c47bcb9f9-w2smt" Mar 10 19:08:27 crc kubenswrapper[4861]: I0310 19:08:27.906314 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5adbfde9-b062-424b-8ab9-641e35ace118-dns-svc\") pod \"dnsmasq-dns-7c47bcb9f9-w2smt\" (UID: \"5adbfde9-b062-424b-8ab9-641e35ace118\") " pod="openstack/dnsmasq-dns-7c47bcb9f9-w2smt" Mar 10 19:08:27 crc kubenswrapper[4861]: I0310 19:08:27.906435 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5adbfde9-b062-424b-8ab9-641e35ace118-config\") pod \"dnsmasq-dns-7c47bcb9f9-w2smt\" (UID: \"5adbfde9-b062-424b-8ab9-641e35ace118\") " pod="openstack/dnsmasq-dns-7c47bcb9f9-w2smt" Mar 10 19:08:27 crc kubenswrapper[4861]: I0310 19:08:27.926995 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kkmbd\" (UniqueName: \"kubernetes.io/projected/5adbfde9-b062-424b-8ab9-641e35ace118-kube-api-access-kkmbd\") pod 
\"dnsmasq-dns-7c47bcb9f9-w2smt\" (UID: \"5adbfde9-b062-424b-8ab9-641e35ace118\") " pod="openstack/dnsmasq-dns-7c47bcb9f9-w2smt" Mar 10 19:08:27 crc kubenswrapper[4861]: I0310 19:08:27.937775 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7c47bcb9f9-w2smt" Mar 10 19:08:28 crc kubenswrapper[4861]: I0310 19:08:28.090610 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78cb4465c9-psclv"] Mar 10 19:08:28 crc kubenswrapper[4861]: I0310 19:08:28.271091 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78cb4465c9-psclv" event={"ID":"6a51c99a-f2a2-4db2-8ac4-adf514cad8a3","Type":"ContainerStarted","Data":"517f9a00476508c3e3ad180f63537b41e61bcf858505a78352cbf91301833393"} Mar 10 19:08:28 crc kubenswrapper[4861]: I0310 19:08:28.375374 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7c47bcb9f9-w2smt"] Mar 10 19:08:28 crc kubenswrapper[4861]: W0310 19:08:28.388328 4861 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5adbfde9_b062_424b_8ab9_641e35ace118.slice/crio-02d33a97bff4c357ae0acea20eb36df9cff4d2966945df049d1d73e22bea1990 WatchSource:0}: Error finding container 02d33a97bff4c357ae0acea20eb36df9cff4d2966945df049d1d73e22bea1990: Status 404 returned error can't find the container with id 02d33a97bff4c357ae0acea20eb36df9cff4d2966945df049d1d73e22bea1990 Mar 10 19:08:28 crc kubenswrapper[4861]: I0310 19:08:28.451010 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Mar 10 19:08:28 crc kubenswrapper[4861]: I0310 19:08:28.452622 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Mar 10 19:08:28 crc kubenswrapper[4861]: I0310 19:08:28.455321 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Mar 10 19:08:28 crc kubenswrapper[4861]: I0310 19:08:28.455433 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-mrvxr" Mar 10 19:08:28 crc kubenswrapper[4861]: I0310 19:08:28.456257 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Mar 10 19:08:28 crc kubenswrapper[4861]: I0310 19:08:28.456598 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Mar 10 19:08:28 crc kubenswrapper[4861]: I0310 19:08:28.456758 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Mar 10 19:08:28 crc kubenswrapper[4861]: I0310 19:08:28.456835 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Mar 10 19:08:28 crc kubenswrapper[4861]: I0310 19:08:28.457709 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Mar 10 19:08:28 crc kubenswrapper[4861]: I0310 19:08:28.475736 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Mar 10 19:08:28 crc kubenswrapper[4861]: I0310 19:08:28.617391 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/9fa4a97d-682a-40eb-93e0-5f5167ddb0a0-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"9fa4a97d-682a-40eb-93e0-5f5167ddb0a0\") " pod="openstack/rabbitmq-cell1-server-0" Mar 10 19:08:28 crc kubenswrapper[4861]: I0310 19:08:28.617781 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/9fa4a97d-682a-40eb-93e0-5f5167ddb0a0-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"9fa4a97d-682a-40eb-93e0-5f5167ddb0a0\") " pod="openstack/rabbitmq-cell1-server-0" Mar 10 19:08:28 crc kubenswrapper[4861]: I0310 19:08:28.617821 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/9fa4a97d-682a-40eb-93e0-5f5167ddb0a0-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"9fa4a97d-682a-40eb-93e0-5f5167ddb0a0\") " pod="openstack/rabbitmq-cell1-server-0" Mar 10 19:08:28 crc kubenswrapper[4861]: I0310 19:08:28.617870 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/9fa4a97d-682a-40eb-93e0-5f5167ddb0a0-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"9fa4a97d-682a-40eb-93e0-5f5167ddb0a0\") " pod="openstack/rabbitmq-cell1-server-0" Mar 10 19:08:28 crc kubenswrapper[4861]: I0310 19:08:28.618060 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9fa4a97d-682a-40eb-93e0-5f5167ddb0a0-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"9fa4a97d-682a-40eb-93e0-5f5167ddb0a0\") " pod="openstack/rabbitmq-cell1-server-0" Mar 10 19:08:28 crc kubenswrapper[4861]: I0310 19:08:28.618256 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7q5nk\" (UniqueName: \"kubernetes.io/projected/9fa4a97d-682a-40eb-93e0-5f5167ddb0a0-kube-api-access-7q5nk\") pod \"rabbitmq-cell1-server-0\" (UID: \"9fa4a97d-682a-40eb-93e0-5f5167ddb0a0\") " pod="openstack/rabbitmq-cell1-server-0" Mar 10 19:08:28 crc kubenswrapper[4861]: I0310 19:08:28.618288 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/9fa4a97d-682a-40eb-93e0-5f5167ddb0a0-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"9fa4a97d-682a-40eb-93e0-5f5167ddb0a0\") " pod="openstack/rabbitmq-cell1-server-0" Mar 10 19:08:28 crc kubenswrapper[4861]: I0310 19:08:28.618318 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"9fa4a97d-682a-40eb-93e0-5f5167ddb0a0\") " pod="openstack/rabbitmq-cell1-server-0" Mar 10 19:08:28 crc kubenswrapper[4861]: I0310 19:08:28.618349 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/9fa4a97d-682a-40eb-93e0-5f5167ddb0a0-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"9fa4a97d-682a-40eb-93e0-5f5167ddb0a0\") " pod="openstack/rabbitmq-cell1-server-0" Mar 10 19:08:28 crc kubenswrapper[4861]: I0310 19:08:28.618446 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/9fa4a97d-682a-40eb-93e0-5f5167ddb0a0-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"9fa4a97d-682a-40eb-93e0-5f5167ddb0a0\") " pod="openstack/rabbitmq-cell1-server-0" Mar 10 19:08:28 crc kubenswrapper[4861]: I0310 19:08:28.618632 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/9fa4a97d-682a-40eb-93e0-5f5167ddb0a0-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"9fa4a97d-682a-40eb-93e0-5f5167ddb0a0\") " pod="openstack/rabbitmq-cell1-server-0" Mar 10 19:08:28 crc kubenswrapper[4861]: I0310 19:08:28.720454 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: 
\"kubernetes.io/projected/9fa4a97d-682a-40eb-93e0-5f5167ddb0a0-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"9fa4a97d-682a-40eb-93e0-5f5167ddb0a0\") " pod="openstack/rabbitmq-cell1-server-0" Mar 10 19:08:28 crc kubenswrapper[4861]: I0310 19:08:28.720501 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"9fa4a97d-682a-40eb-93e0-5f5167ddb0a0\") " pod="openstack/rabbitmq-cell1-server-0" Mar 10 19:08:28 crc kubenswrapper[4861]: I0310 19:08:28.720530 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/9fa4a97d-682a-40eb-93e0-5f5167ddb0a0-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"9fa4a97d-682a-40eb-93e0-5f5167ddb0a0\") " pod="openstack/rabbitmq-cell1-server-0" Mar 10 19:08:28 crc kubenswrapper[4861]: I0310 19:08:28.720546 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/9fa4a97d-682a-40eb-93e0-5f5167ddb0a0-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"9fa4a97d-682a-40eb-93e0-5f5167ddb0a0\") " pod="openstack/rabbitmq-cell1-server-0" Mar 10 19:08:28 crc kubenswrapper[4861]: I0310 19:08:28.720578 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/9fa4a97d-682a-40eb-93e0-5f5167ddb0a0-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"9fa4a97d-682a-40eb-93e0-5f5167ddb0a0\") " pod="openstack/rabbitmq-cell1-server-0" Mar 10 19:08:28 crc kubenswrapper[4861]: I0310 19:08:28.720602 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/9fa4a97d-682a-40eb-93e0-5f5167ddb0a0-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: 
\"9fa4a97d-682a-40eb-93e0-5f5167ddb0a0\") " pod="openstack/rabbitmq-cell1-server-0" Mar 10 19:08:28 crc kubenswrapper[4861]: I0310 19:08:28.720621 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/9fa4a97d-682a-40eb-93e0-5f5167ddb0a0-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"9fa4a97d-682a-40eb-93e0-5f5167ddb0a0\") " pod="openstack/rabbitmq-cell1-server-0" Mar 10 19:08:28 crc kubenswrapper[4861]: I0310 19:08:28.720640 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/9fa4a97d-682a-40eb-93e0-5f5167ddb0a0-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"9fa4a97d-682a-40eb-93e0-5f5167ddb0a0\") " pod="openstack/rabbitmq-cell1-server-0" Mar 10 19:08:28 crc kubenswrapper[4861]: I0310 19:08:28.720668 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/9fa4a97d-682a-40eb-93e0-5f5167ddb0a0-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"9fa4a97d-682a-40eb-93e0-5f5167ddb0a0\") " pod="openstack/rabbitmq-cell1-server-0" Mar 10 19:08:28 crc kubenswrapper[4861]: I0310 19:08:28.720692 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9fa4a97d-682a-40eb-93e0-5f5167ddb0a0-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"9fa4a97d-682a-40eb-93e0-5f5167ddb0a0\") " pod="openstack/rabbitmq-cell1-server-0" Mar 10 19:08:28 crc kubenswrapper[4861]: I0310 19:08:28.720741 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7q5nk\" (UniqueName: \"kubernetes.io/projected/9fa4a97d-682a-40eb-93e0-5f5167ddb0a0-kube-api-access-7q5nk\") pod \"rabbitmq-cell1-server-0\" (UID: \"9fa4a97d-682a-40eb-93e0-5f5167ddb0a0\") " pod="openstack/rabbitmq-cell1-server-0" Mar 
10 19:08:28 crc kubenswrapper[4861]: I0310 19:08:28.721265 4861 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"9fa4a97d-682a-40eb-93e0-5f5167ddb0a0\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/rabbitmq-cell1-server-0" Mar 10 19:08:28 crc kubenswrapper[4861]: I0310 19:08:28.723547 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/9fa4a97d-682a-40eb-93e0-5f5167ddb0a0-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"9fa4a97d-682a-40eb-93e0-5f5167ddb0a0\") " pod="openstack/rabbitmq-cell1-server-0" Mar 10 19:08:28 crc kubenswrapper[4861]: I0310 19:08:28.724152 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/9fa4a97d-682a-40eb-93e0-5f5167ddb0a0-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"9fa4a97d-682a-40eb-93e0-5f5167ddb0a0\") " pod="openstack/rabbitmq-cell1-server-0" Mar 10 19:08:28 crc kubenswrapper[4861]: I0310 19:08:28.724400 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/9fa4a97d-682a-40eb-93e0-5f5167ddb0a0-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"9fa4a97d-682a-40eb-93e0-5f5167ddb0a0\") " pod="openstack/rabbitmq-cell1-server-0" Mar 10 19:08:28 crc kubenswrapper[4861]: I0310 19:08:28.724983 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9fa4a97d-682a-40eb-93e0-5f5167ddb0a0-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"9fa4a97d-682a-40eb-93e0-5f5167ddb0a0\") " pod="openstack/rabbitmq-cell1-server-0" Mar 10 19:08:28 crc kubenswrapper[4861]: I0310 19:08:28.727877 4861 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/9fa4a97d-682a-40eb-93e0-5f5167ddb0a0-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"9fa4a97d-682a-40eb-93e0-5f5167ddb0a0\") " pod="openstack/rabbitmq-cell1-server-0" Mar 10 19:08:28 crc kubenswrapper[4861]: I0310 19:08:28.731290 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/9fa4a97d-682a-40eb-93e0-5f5167ddb0a0-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"9fa4a97d-682a-40eb-93e0-5f5167ddb0a0\") " pod="openstack/rabbitmq-cell1-server-0" Mar 10 19:08:28 crc kubenswrapper[4861]: I0310 19:08:28.731309 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/9fa4a97d-682a-40eb-93e0-5f5167ddb0a0-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"9fa4a97d-682a-40eb-93e0-5f5167ddb0a0\") " pod="openstack/rabbitmq-cell1-server-0" Mar 10 19:08:28 crc kubenswrapper[4861]: I0310 19:08:28.741279 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/9fa4a97d-682a-40eb-93e0-5f5167ddb0a0-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"9fa4a97d-682a-40eb-93e0-5f5167ddb0a0\") " pod="openstack/rabbitmq-cell1-server-0" Mar 10 19:08:28 crc kubenswrapper[4861]: I0310 19:08:28.746333 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/9fa4a97d-682a-40eb-93e0-5f5167ddb0a0-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"9fa4a97d-682a-40eb-93e0-5f5167ddb0a0\") " pod="openstack/rabbitmq-cell1-server-0" Mar 10 19:08:28 crc kubenswrapper[4861]: I0310 19:08:28.746617 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7q5nk\" (UniqueName: \"kubernetes.io/projected/9fa4a97d-682a-40eb-93e0-5f5167ddb0a0-kube-api-access-7q5nk\") pod 
\"rabbitmq-cell1-server-0\" (UID: \"9fa4a97d-682a-40eb-93e0-5f5167ddb0a0\") " pod="openstack/rabbitmq-cell1-server-0" Mar 10 19:08:28 crc kubenswrapper[4861]: I0310 19:08:28.749632 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Mar 10 19:08:28 crc kubenswrapper[4861]: I0310 19:08:28.750895 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Mar 10 19:08:28 crc kubenswrapper[4861]: I0310 19:08:28.752994 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-4lndv" Mar 10 19:08:28 crc kubenswrapper[4861]: I0310 19:08:28.753191 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Mar 10 19:08:28 crc kubenswrapper[4861]: I0310 19:08:28.753351 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Mar 10 19:08:28 crc kubenswrapper[4861]: I0310 19:08:28.753488 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Mar 10 19:08:28 crc kubenswrapper[4861]: I0310 19:08:28.753599 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Mar 10 19:08:28 crc kubenswrapper[4861]: I0310 19:08:28.753865 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Mar 10 19:08:28 crc kubenswrapper[4861]: I0310 19:08:28.755874 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Mar 10 19:08:28 crc kubenswrapper[4861]: I0310 19:08:28.757038 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Mar 10 19:08:28 crc kubenswrapper[4861]: I0310 19:08:28.762876 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod 
\"rabbitmq-cell1-server-0\" (UID: \"9fa4a97d-682a-40eb-93e0-5f5167ddb0a0\") " pod="openstack/rabbitmq-cell1-server-0" Mar 10 19:08:28 crc kubenswrapper[4861]: I0310 19:08:28.785861 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Mar 10 19:08:28 crc kubenswrapper[4861]: I0310 19:08:28.925543 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/0ba95f55-3cea-4f0b-8f09-c6b4027789f8-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"0ba95f55-3cea-4f0b-8f09-c6b4027789f8\") " pod="openstack/rabbitmq-server-0" Mar 10 19:08:28 crc kubenswrapper[4861]: I0310 19:08:28.925578 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-server-0\" (UID: \"0ba95f55-3cea-4f0b-8f09-c6b4027789f8\") " pod="openstack/rabbitmq-server-0" Mar 10 19:08:28 crc kubenswrapper[4861]: I0310 19:08:28.925598 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/0ba95f55-3cea-4f0b-8f09-c6b4027789f8-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"0ba95f55-3cea-4f0b-8f09-c6b4027789f8\") " pod="openstack/rabbitmq-server-0" Mar 10 19:08:28 crc kubenswrapper[4861]: I0310 19:08:28.925620 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/0ba95f55-3cea-4f0b-8f09-c6b4027789f8-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"0ba95f55-3cea-4f0b-8f09-c6b4027789f8\") " pod="openstack/rabbitmq-server-0" Mar 10 19:08:28 crc kubenswrapper[4861]: I0310 19:08:28.925641 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" 
(UniqueName: \"kubernetes.io/downward-api/0ba95f55-3cea-4f0b-8f09-c6b4027789f8-pod-info\") pod \"rabbitmq-server-0\" (UID: \"0ba95f55-3cea-4f0b-8f09-c6b4027789f8\") " pod="openstack/rabbitmq-server-0" Mar 10 19:08:28 crc kubenswrapper[4861]: I0310 19:08:28.925675 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/0ba95f55-3cea-4f0b-8f09-c6b4027789f8-config-data\") pod \"rabbitmq-server-0\" (UID: \"0ba95f55-3cea-4f0b-8f09-c6b4027789f8\") " pod="openstack/rabbitmq-server-0" Mar 10 19:08:28 crc kubenswrapper[4861]: I0310 19:08:28.925699 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/0ba95f55-3cea-4f0b-8f09-c6b4027789f8-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"0ba95f55-3cea-4f0b-8f09-c6b4027789f8\") " pod="openstack/rabbitmq-server-0" Mar 10 19:08:28 crc kubenswrapper[4861]: I0310 19:08:28.925728 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/0ba95f55-3cea-4f0b-8f09-c6b4027789f8-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"0ba95f55-3cea-4f0b-8f09-c6b4027789f8\") " pod="openstack/rabbitmq-server-0" Mar 10 19:08:28 crc kubenswrapper[4861]: I0310 19:08:28.925748 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/0ba95f55-3cea-4f0b-8f09-c6b4027789f8-server-conf\") pod \"rabbitmq-server-0\" (UID: \"0ba95f55-3cea-4f0b-8f09-c6b4027789f8\") " pod="openstack/rabbitmq-server-0" Mar 10 19:08:28 crc kubenswrapper[4861]: I0310 19:08:28.925767 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lhrlg\" (UniqueName: 
\"kubernetes.io/projected/0ba95f55-3cea-4f0b-8f09-c6b4027789f8-kube-api-access-lhrlg\") pod \"rabbitmq-server-0\" (UID: \"0ba95f55-3cea-4f0b-8f09-c6b4027789f8\") " pod="openstack/rabbitmq-server-0" Mar 10 19:08:28 crc kubenswrapper[4861]: I0310 19:08:28.925797 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/0ba95f55-3cea-4f0b-8f09-c6b4027789f8-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"0ba95f55-3cea-4f0b-8f09-c6b4027789f8\") " pod="openstack/rabbitmq-server-0" Mar 10 19:08:29 crc kubenswrapper[4861]: I0310 19:08:29.027323 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/0ba95f55-3cea-4f0b-8f09-c6b4027789f8-pod-info\") pod \"rabbitmq-server-0\" (UID: \"0ba95f55-3cea-4f0b-8f09-c6b4027789f8\") " pod="openstack/rabbitmq-server-0" Mar 10 19:08:29 crc kubenswrapper[4861]: I0310 19:08:29.027370 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/0ba95f55-3cea-4f0b-8f09-c6b4027789f8-config-data\") pod \"rabbitmq-server-0\" (UID: \"0ba95f55-3cea-4f0b-8f09-c6b4027789f8\") " pod="openstack/rabbitmq-server-0" Mar 10 19:08:29 crc kubenswrapper[4861]: I0310 19:08:29.027395 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/0ba95f55-3cea-4f0b-8f09-c6b4027789f8-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"0ba95f55-3cea-4f0b-8f09-c6b4027789f8\") " pod="openstack/rabbitmq-server-0" Mar 10 19:08:29 crc kubenswrapper[4861]: I0310 19:08:29.027412 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/0ba95f55-3cea-4f0b-8f09-c6b4027789f8-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: 
\"0ba95f55-3cea-4f0b-8f09-c6b4027789f8\") " pod="openstack/rabbitmq-server-0" Mar 10 19:08:29 crc kubenswrapper[4861]: I0310 19:08:29.027431 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/0ba95f55-3cea-4f0b-8f09-c6b4027789f8-server-conf\") pod \"rabbitmq-server-0\" (UID: \"0ba95f55-3cea-4f0b-8f09-c6b4027789f8\") " pod="openstack/rabbitmq-server-0" Mar 10 19:08:29 crc kubenswrapper[4861]: I0310 19:08:29.027450 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lhrlg\" (UniqueName: \"kubernetes.io/projected/0ba95f55-3cea-4f0b-8f09-c6b4027789f8-kube-api-access-lhrlg\") pod \"rabbitmq-server-0\" (UID: \"0ba95f55-3cea-4f0b-8f09-c6b4027789f8\") " pod="openstack/rabbitmq-server-0" Mar 10 19:08:29 crc kubenswrapper[4861]: I0310 19:08:29.027484 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/0ba95f55-3cea-4f0b-8f09-c6b4027789f8-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"0ba95f55-3cea-4f0b-8f09-c6b4027789f8\") " pod="openstack/rabbitmq-server-0" Mar 10 19:08:29 crc kubenswrapper[4861]: I0310 19:08:29.027530 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/0ba95f55-3cea-4f0b-8f09-c6b4027789f8-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"0ba95f55-3cea-4f0b-8f09-c6b4027789f8\") " pod="openstack/rabbitmq-server-0" Mar 10 19:08:29 crc kubenswrapper[4861]: I0310 19:08:29.027547 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-server-0\" (UID: \"0ba95f55-3cea-4f0b-8f09-c6b4027789f8\") " pod="openstack/rabbitmq-server-0" Mar 10 19:08:29 crc kubenswrapper[4861]: I0310 19:08:29.027571 4861 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/0ba95f55-3cea-4f0b-8f09-c6b4027789f8-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"0ba95f55-3cea-4f0b-8f09-c6b4027789f8\") " pod="openstack/rabbitmq-server-0" Mar 10 19:08:29 crc kubenswrapper[4861]: I0310 19:08:29.027593 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/0ba95f55-3cea-4f0b-8f09-c6b4027789f8-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"0ba95f55-3cea-4f0b-8f09-c6b4027789f8\") " pod="openstack/rabbitmq-server-0" Mar 10 19:08:29 crc kubenswrapper[4861]: I0310 19:08:29.029234 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/0ba95f55-3cea-4f0b-8f09-c6b4027789f8-server-conf\") pod \"rabbitmq-server-0\" (UID: \"0ba95f55-3cea-4f0b-8f09-c6b4027789f8\") " pod="openstack/rabbitmq-server-0" Mar 10 19:08:29 crc kubenswrapper[4861]: I0310 19:08:29.031083 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/0ba95f55-3cea-4f0b-8f09-c6b4027789f8-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"0ba95f55-3cea-4f0b-8f09-c6b4027789f8\") " pod="openstack/rabbitmq-server-0" Mar 10 19:08:29 crc kubenswrapper[4861]: I0310 19:08:29.032267 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/0ba95f55-3cea-4f0b-8f09-c6b4027789f8-config-data\") pod \"rabbitmq-server-0\" (UID: \"0ba95f55-3cea-4f0b-8f09-c6b4027789f8\") " pod="openstack/rabbitmq-server-0" Mar 10 19:08:29 crc kubenswrapper[4861]: I0310 19:08:29.032395 4861 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod 
\"rabbitmq-server-0\" (UID: \"0ba95f55-3cea-4f0b-8f09-c6b4027789f8\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/rabbitmq-server-0" Mar 10 19:08:29 crc kubenswrapper[4861]: I0310 19:08:29.032827 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/0ba95f55-3cea-4f0b-8f09-c6b4027789f8-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"0ba95f55-3cea-4f0b-8f09-c6b4027789f8\") " pod="openstack/rabbitmq-server-0" Mar 10 19:08:29 crc kubenswrapper[4861]: I0310 19:08:29.033261 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/0ba95f55-3cea-4f0b-8f09-c6b4027789f8-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"0ba95f55-3cea-4f0b-8f09-c6b4027789f8\") " pod="openstack/rabbitmq-server-0" Mar 10 19:08:29 crc kubenswrapper[4861]: I0310 19:08:29.039440 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/0ba95f55-3cea-4f0b-8f09-c6b4027789f8-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"0ba95f55-3cea-4f0b-8f09-c6b4027789f8\") " pod="openstack/rabbitmq-server-0" Mar 10 19:08:29 crc kubenswrapper[4861]: I0310 19:08:29.039509 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/0ba95f55-3cea-4f0b-8f09-c6b4027789f8-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"0ba95f55-3cea-4f0b-8f09-c6b4027789f8\") " pod="openstack/rabbitmq-server-0" Mar 10 19:08:29 crc kubenswrapper[4861]: I0310 19:08:29.041735 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/0ba95f55-3cea-4f0b-8f09-c6b4027789f8-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"0ba95f55-3cea-4f0b-8f09-c6b4027789f8\") " pod="openstack/rabbitmq-server-0" Mar 10 19:08:29 crc kubenswrapper[4861]: 
I0310 19:08:29.051021 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lhrlg\" (UniqueName: \"kubernetes.io/projected/0ba95f55-3cea-4f0b-8f09-c6b4027789f8-kube-api-access-lhrlg\") pod \"rabbitmq-server-0\" (UID: \"0ba95f55-3cea-4f0b-8f09-c6b4027789f8\") " pod="openstack/rabbitmq-server-0" Mar 10 19:08:29 crc kubenswrapper[4861]: I0310 19:08:29.052603 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/0ba95f55-3cea-4f0b-8f09-c6b4027789f8-pod-info\") pod \"rabbitmq-server-0\" (UID: \"0ba95f55-3cea-4f0b-8f09-c6b4027789f8\") " pod="openstack/rabbitmq-server-0" Mar 10 19:08:29 crc kubenswrapper[4861]: I0310 19:08:29.061981 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-server-0\" (UID: \"0ba95f55-3cea-4f0b-8f09-c6b4027789f8\") " pod="openstack/rabbitmq-server-0" Mar 10 19:08:29 crc kubenswrapper[4861]: I0310 19:08:29.127598 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Mar 10 19:08:29 crc kubenswrapper[4861]: I0310 19:08:29.305655 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7c47bcb9f9-w2smt" event={"ID":"5adbfde9-b062-424b-8ab9-641e35ace118","Type":"ContainerStarted","Data":"02d33a97bff4c357ae0acea20eb36df9cff4d2966945df049d1d73e22bea1990"} Mar 10 19:08:29 crc kubenswrapper[4861]: I0310 19:08:29.336317 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Mar 10 19:08:29 crc kubenswrapper[4861]: W0310 19:08:29.360221 4861 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9fa4a97d_682a_40eb_93e0_5f5167ddb0a0.slice/crio-549b58ce7d4f3883bb5009154ba5810253e897aaa8b7405cbf0c188e6b32d565 WatchSource:0}: Error finding container 549b58ce7d4f3883bb5009154ba5810253e897aaa8b7405cbf0c188e6b32d565: Status 404 returned error can't find the container with id 549b58ce7d4f3883bb5009154ba5810253e897aaa8b7405cbf0c188e6b32d565 Mar 10 19:08:29 crc kubenswrapper[4861]: I0310 19:08:29.656044 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Mar 10 19:08:29 crc kubenswrapper[4861]: W0310 19:08:29.659341 4861 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0ba95f55_3cea_4f0b_8f09_c6b4027789f8.slice/crio-efa6be39b016a0cada00444a802197641418fc51b79619b8bd800a40a8fb12ec WatchSource:0}: Error finding container efa6be39b016a0cada00444a802197641418fc51b79619b8bd800a40a8fb12ec: Status 404 returned error can't find the container with id efa6be39b016a0cada00444a802197641418fc51b79619b8bd800a40a8fb12ec Mar 10 19:08:30 crc kubenswrapper[4861]: I0310 19:08:30.052944 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"] Mar 10 19:08:30 crc kubenswrapper[4861]: I0310 19:08:30.054619 4861 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Mar 10 19:08:30 crc kubenswrapper[4861]: I0310 19:08:30.057550 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-mt9h4" Mar 10 19:08:30 crc kubenswrapper[4861]: I0310 19:08:30.058090 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data" Mar 10 19:08:30 crc kubenswrapper[4861]: I0310 19:08:30.059203 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc" Mar 10 19:08:30 crc kubenswrapper[4861]: I0310 19:08:30.060412 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts" Mar 10 19:08:30 crc kubenswrapper[4861]: I0310 19:08:30.065950 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle" Mar 10 19:08:30 crc kubenswrapper[4861]: I0310 19:08:30.068730 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Mar 10 19:08:30 crc kubenswrapper[4861]: I0310 19:08:30.148692 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/a0b9c71c-6fe7-4eb9-9f95-87fce0f70caa-kolla-config\") pod \"openstack-galera-0\" (UID: \"a0b9c71c-6fe7-4eb9-9f95-87fce0f70caa\") " pod="openstack/openstack-galera-0" Mar 10 19:08:30 crc kubenswrapper[4861]: I0310 19:08:30.148768 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xsvk9\" (UniqueName: \"kubernetes.io/projected/a0b9c71c-6fe7-4eb9-9f95-87fce0f70caa-kube-api-access-xsvk9\") pod \"openstack-galera-0\" (UID: \"a0b9c71c-6fe7-4eb9-9f95-87fce0f70caa\") " pod="openstack/openstack-galera-0" Mar 10 19:08:30 crc kubenswrapper[4861]: I0310 19:08:30.148793 4861 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/a0b9c71c-6fe7-4eb9-9f95-87fce0f70caa-config-data-default\") pod \"openstack-galera-0\" (UID: \"a0b9c71c-6fe7-4eb9-9f95-87fce0f70caa\") " pod="openstack/openstack-galera-0" Mar 10 19:08:30 crc kubenswrapper[4861]: I0310 19:08:30.148808 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a0b9c71c-6fe7-4eb9-9f95-87fce0f70caa-operator-scripts\") pod \"openstack-galera-0\" (UID: \"a0b9c71c-6fe7-4eb9-9f95-87fce0f70caa\") " pod="openstack/openstack-galera-0" Mar 10 19:08:30 crc kubenswrapper[4861]: I0310 19:08:30.148839 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"openstack-galera-0\" (UID: \"a0b9c71c-6fe7-4eb9-9f95-87fce0f70caa\") " pod="openstack/openstack-galera-0" Mar 10 19:08:30 crc kubenswrapper[4861]: I0310 19:08:30.149055 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a0b9c71c-6fe7-4eb9-9f95-87fce0f70caa-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"a0b9c71c-6fe7-4eb9-9f95-87fce0f70caa\") " pod="openstack/openstack-galera-0" Mar 10 19:08:30 crc kubenswrapper[4861]: I0310 19:08:30.149113 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/a0b9c71c-6fe7-4eb9-9f95-87fce0f70caa-config-data-generated\") pod \"openstack-galera-0\" (UID: \"a0b9c71c-6fe7-4eb9-9f95-87fce0f70caa\") " pod="openstack/openstack-galera-0" Mar 10 19:08:30 crc kubenswrapper[4861]: I0310 19:08:30.149261 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/a0b9c71c-6fe7-4eb9-9f95-87fce0f70caa-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"a0b9c71c-6fe7-4eb9-9f95-87fce0f70caa\") " pod="openstack/openstack-galera-0" Mar 10 19:08:30 crc kubenswrapper[4861]: I0310 19:08:30.250282 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a0b9c71c-6fe7-4eb9-9f95-87fce0f70caa-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"a0b9c71c-6fe7-4eb9-9f95-87fce0f70caa\") " pod="openstack/openstack-galera-0" Mar 10 19:08:30 crc kubenswrapper[4861]: I0310 19:08:30.250325 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/a0b9c71c-6fe7-4eb9-9f95-87fce0f70caa-config-data-generated\") pod \"openstack-galera-0\" (UID: \"a0b9c71c-6fe7-4eb9-9f95-87fce0f70caa\") " pod="openstack/openstack-galera-0" Mar 10 19:08:30 crc kubenswrapper[4861]: I0310 19:08:30.250382 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/a0b9c71c-6fe7-4eb9-9f95-87fce0f70caa-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"a0b9c71c-6fe7-4eb9-9f95-87fce0f70caa\") " pod="openstack/openstack-galera-0" Mar 10 19:08:30 crc kubenswrapper[4861]: I0310 19:08:30.250416 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/a0b9c71c-6fe7-4eb9-9f95-87fce0f70caa-kolla-config\") pod \"openstack-galera-0\" (UID: \"a0b9c71c-6fe7-4eb9-9f95-87fce0f70caa\") " pod="openstack/openstack-galera-0" Mar 10 19:08:30 crc kubenswrapper[4861]: I0310 19:08:30.250441 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xsvk9\" (UniqueName: 
\"kubernetes.io/projected/a0b9c71c-6fe7-4eb9-9f95-87fce0f70caa-kube-api-access-xsvk9\") pod \"openstack-galera-0\" (UID: \"a0b9c71c-6fe7-4eb9-9f95-87fce0f70caa\") " pod="openstack/openstack-galera-0" Mar 10 19:08:30 crc kubenswrapper[4861]: I0310 19:08:30.250457 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/a0b9c71c-6fe7-4eb9-9f95-87fce0f70caa-config-data-default\") pod \"openstack-galera-0\" (UID: \"a0b9c71c-6fe7-4eb9-9f95-87fce0f70caa\") " pod="openstack/openstack-galera-0" Mar 10 19:08:30 crc kubenswrapper[4861]: I0310 19:08:30.250470 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a0b9c71c-6fe7-4eb9-9f95-87fce0f70caa-operator-scripts\") pod \"openstack-galera-0\" (UID: \"a0b9c71c-6fe7-4eb9-9f95-87fce0f70caa\") " pod="openstack/openstack-galera-0" Mar 10 19:08:30 crc kubenswrapper[4861]: I0310 19:08:30.250495 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"openstack-galera-0\" (UID: \"a0b9c71c-6fe7-4eb9-9f95-87fce0f70caa\") " pod="openstack/openstack-galera-0" Mar 10 19:08:30 crc kubenswrapper[4861]: I0310 19:08:30.250737 4861 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"openstack-galera-0\" (UID: \"a0b9c71c-6fe7-4eb9-9f95-87fce0f70caa\") device mount path \"/mnt/openstack/pv11\"" pod="openstack/openstack-galera-0" Mar 10 19:08:30 crc kubenswrapper[4861]: I0310 19:08:30.251013 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/a0b9c71c-6fe7-4eb9-9f95-87fce0f70caa-config-data-generated\") pod \"openstack-galera-0\" (UID: 
\"a0b9c71c-6fe7-4eb9-9f95-87fce0f70caa\") " pod="openstack/openstack-galera-0" Mar 10 19:08:30 crc kubenswrapper[4861]: I0310 19:08:30.251965 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/a0b9c71c-6fe7-4eb9-9f95-87fce0f70caa-kolla-config\") pod \"openstack-galera-0\" (UID: \"a0b9c71c-6fe7-4eb9-9f95-87fce0f70caa\") " pod="openstack/openstack-galera-0" Mar 10 19:08:30 crc kubenswrapper[4861]: I0310 19:08:30.253344 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a0b9c71c-6fe7-4eb9-9f95-87fce0f70caa-operator-scripts\") pod \"openstack-galera-0\" (UID: \"a0b9c71c-6fe7-4eb9-9f95-87fce0f70caa\") " pod="openstack/openstack-galera-0" Mar 10 19:08:30 crc kubenswrapper[4861]: I0310 19:08:30.257852 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/a0b9c71c-6fe7-4eb9-9f95-87fce0f70caa-config-data-default\") pod \"openstack-galera-0\" (UID: \"a0b9c71c-6fe7-4eb9-9f95-87fce0f70caa\") " pod="openstack/openstack-galera-0" Mar 10 19:08:30 crc kubenswrapper[4861]: I0310 19:08:30.258778 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/a0b9c71c-6fe7-4eb9-9f95-87fce0f70caa-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"a0b9c71c-6fe7-4eb9-9f95-87fce0f70caa\") " pod="openstack/openstack-galera-0" Mar 10 19:08:30 crc kubenswrapper[4861]: I0310 19:08:30.267570 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xsvk9\" (UniqueName: \"kubernetes.io/projected/a0b9c71c-6fe7-4eb9-9f95-87fce0f70caa-kube-api-access-xsvk9\") pod \"openstack-galera-0\" (UID: \"a0b9c71c-6fe7-4eb9-9f95-87fce0f70caa\") " pod="openstack/openstack-galera-0" Mar 10 19:08:30 crc kubenswrapper[4861]: I0310 19:08:30.275033 4861 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a0b9c71c-6fe7-4eb9-9f95-87fce0f70caa-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"a0b9c71c-6fe7-4eb9-9f95-87fce0f70caa\") " pod="openstack/openstack-galera-0" Mar 10 19:08:30 crc kubenswrapper[4861]: I0310 19:08:30.283112 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"openstack-galera-0\" (UID: \"a0b9c71c-6fe7-4eb9-9f95-87fce0f70caa\") " pod="openstack/openstack-galera-0" Mar 10 19:08:30 crc kubenswrapper[4861]: I0310 19:08:30.371993 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"0ba95f55-3cea-4f0b-8f09-c6b4027789f8","Type":"ContainerStarted","Data":"efa6be39b016a0cada00444a802197641418fc51b79619b8bd800a40a8fb12ec"} Mar 10 19:08:30 crc kubenswrapper[4861]: I0310 19:08:30.374162 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"9fa4a97d-682a-40eb-93e0-5f5167ddb0a0","Type":"ContainerStarted","Data":"549b58ce7d4f3883bb5009154ba5810253e897aaa8b7405cbf0c188e6b32d565"} Mar 10 19:08:30 crc kubenswrapper[4861]: I0310 19:08:30.375130 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Mar 10 19:08:31 crc kubenswrapper[4861]: I0310 19:08:31.398620 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"] Mar 10 19:08:31 crc kubenswrapper[4861]: I0310 19:08:31.401478 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-cell1-galera-0" Mar 10 19:08:31 crc kubenswrapper[4861]: I0310 19:08:31.404127 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data" Mar 10 19:08:31 crc kubenswrapper[4861]: I0310 19:08:31.404432 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc" Mar 10 19:08:31 crc kubenswrapper[4861]: I0310 19:08:31.404598 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-2xqvk" Mar 10 19:08:31 crc kubenswrapper[4861]: I0310 19:08:31.406638 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts" Mar 10 19:08:31 crc kubenswrapper[4861]: I0310 19:08:31.411527 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Mar 10 19:08:31 crc kubenswrapper[4861]: I0310 19:08:31.581447 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/2683e959-ecff-478e-aa0a-acf18f482d39-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"2683e959-ecff-478e-aa0a-acf18f482d39\") " pod="openstack/openstack-cell1-galera-0" Mar 10 19:08:31 crc kubenswrapper[4861]: I0310 19:08:31.581511 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/2683e959-ecff-478e-aa0a-acf18f482d39-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"2683e959-ecff-478e-aa0a-acf18f482d39\") " pod="openstack/openstack-cell1-galera-0" Mar 10 19:08:31 crc kubenswrapper[4861]: I0310 19:08:31.581541 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/2683e959-ecff-478e-aa0a-acf18f482d39-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"2683e959-ecff-478e-aa0a-acf18f482d39\") " pod="openstack/openstack-cell1-galera-0" Mar 10 19:08:31 crc kubenswrapper[4861]: I0310 19:08:31.581567 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"openstack-cell1-galera-0\" (UID: \"2683e959-ecff-478e-aa0a-acf18f482d39\") " pod="openstack/openstack-cell1-galera-0" Mar 10 19:08:31 crc kubenswrapper[4861]: I0310 19:08:31.581593 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tcwmd\" (UniqueName: \"kubernetes.io/projected/2683e959-ecff-478e-aa0a-acf18f482d39-kube-api-access-tcwmd\") pod \"openstack-cell1-galera-0\" (UID: \"2683e959-ecff-478e-aa0a-acf18f482d39\") " pod="openstack/openstack-cell1-galera-0" Mar 10 19:08:31 crc kubenswrapper[4861]: I0310 19:08:31.581610 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2683e959-ecff-478e-aa0a-acf18f482d39-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"2683e959-ecff-478e-aa0a-acf18f482d39\") " pod="openstack/openstack-cell1-galera-0" Mar 10 19:08:31 crc kubenswrapper[4861]: I0310 19:08:31.581637 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/2683e959-ecff-478e-aa0a-acf18f482d39-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"2683e959-ecff-478e-aa0a-acf18f482d39\") " pod="openstack/openstack-cell1-galera-0" Mar 10 19:08:31 crc kubenswrapper[4861]: I0310 19:08:31.581653 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/2683e959-ecff-478e-aa0a-acf18f482d39-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"2683e959-ecff-478e-aa0a-acf18f482d39\") " pod="openstack/openstack-cell1-galera-0" Mar 10 19:08:31 crc kubenswrapper[4861]: I0310 19:08:31.638276 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"] Mar 10 19:08:31 crc kubenswrapper[4861]: I0310 19:08:31.640572 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Mar 10 19:08:31 crc kubenswrapper[4861]: I0310 19:08:31.644789 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data" Mar 10 19:08:31 crc kubenswrapper[4861]: I0310 19:08:31.644978 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-bq9nx" Mar 10 19:08:31 crc kubenswrapper[4861]: I0310 19:08:31.645219 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc" Mar 10 19:08:31 crc kubenswrapper[4861]: I0310 19:08:31.646209 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Mar 10 19:08:31 crc kubenswrapper[4861]: I0310 19:08:31.683089 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/2683e959-ecff-478e-aa0a-acf18f482d39-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"2683e959-ecff-478e-aa0a-acf18f482d39\") " pod="openstack/openstack-cell1-galera-0" Mar 10 19:08:31 crc kubenswrapper[4861]: I0310 19:08:31.683138 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/2683e959-ecff-478e-aa0a-acf18f482d39-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"2683e959-ecff-478e-aa0a-acf18f482d39\") " pod="openstack/openstack-cell1-galera-0" Mar 10 19:08:31 crc kubenswrapper[4861]: I0310 
19:08:31.683161 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"openstack-cell1-galera-0\" (UID: \"2683e959-ecff-478e-aa0a-acf18f482d39\") " pod="openstack/openstack-cell1-galera-0" Mar 10 19:08:31 crc kubenswrapper[4861]: I0310 19:08:31.683185 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tcwmd\" (UniqueName: \"kubernetes.io/projected/2683e959-ecff-478e-aa0a-acf18f482d39-kube-api-access-tcwmd\") pod \"openstack-cell1-galera-0\" (UID: \"2683e959-ecff-478e-aa0a-acf18f482d39\") " pod="openstack/openstack-cell1-galera-0" Mar 10 19:08:31 crc kubenswrapper[4861]: I0310 19:08:31.683202 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2683e959-ecff-478e-aa0a-acf18f482d39-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"2683e959-ecff-478e-aa0a-acf18f482d39\") " pod="openstack/openstack-cell1-galera-0" Mar 10 19:08:31 crc kubenswrapper[4861]: I0310 19:08:31.683229 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/2683e959-ecff-478e-aa0a-acf18f482d39-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"2683e959-ecff-478e-aa0a-acf18f482d39\") " pod="openstack/openstack-cell1-galera-0" Mar 10 19:08:31 crc kubenswrapper[4861]: I0310 19:08:31.683269 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2683e959-ecff-478e-aa0a-acf18f482d39-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"2683e959-ecff-478e-aa0a-acf18f482d39\") " pod="openstack/openstack-cell1-galera-0" Mar 10 19:08:31 crc kubenswrapper[4861]: I0310 19:08:31.683310 4861 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/2683e959-ecff-478e-aa0a-acf18f482d39-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"2683e959-ecff-478e-aa0a-acf18f482d39\") " pod="openstack/openstack-cell1-galera-0" Mar 10 19:08:31 crc kubenswrapper[4861]: I0310 19:08:31.684363 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/2683e959-ecff-478e-aa0a-acf18f482d39-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"2683e959-ecff-478e-aa0a-acf18f482d39\") " pod="openstack/openstack-cell1-galera-0" Mar 10 19:08:31 crc kubenswrapper[4861]: I0310 19:08:31.684408 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/2683e959-ecff-478e-aa0a-acf18f482d39-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"2683e959-ecff-478e-aa0a-acf18f482d39\") " pod="openstack/openstack-cell1-galera-0" Mar 10 19:08:31 crc kubenswrapper[4861]: I0310 19:08:31.685568 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2683e959-ecff-478e-aa0a-acf18f482d39-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"2683e959-ecff-478e-aa0a-acf18f482d39\") " pod="openstack/openstack-cell1-galera-0" Mar 10 19:08:31 crc kubenswrapper[4861]: I0310 19:08:31.686365 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/2683e959-ecff-478e-aa0a-acf18f482d39-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"2683e959-ecff-478e-aa0a-acf18f482d39\") " pod="openstack/openstack-cell1-galera-0" Mar 10 19:08:31 crc kubenswrapper[4861]: I0310 19:08:31.687587 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/2683e959-ecff-478e-aa0a-acf18f482d39-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"2683e959-ecff-478e-aa0a-acf18f482d39\") " pod="openstack/openstack-cell1-galera-0" Mar 10 19:08:31 crc kubenswrapper[4861]: I0310 19:08:31.687941 4861 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"openstack-cell1-galera-0\" (UID: \"2683e959-ecff-478e-aa0a-acf18f482d39\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/openstack-cell1-galera-0" Mar 10 19:08:31 crc kubenswrapper[4861]: I0310 19:08:31.689880 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2683e959-ecff-478e-aa0a-acf18f482d39-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"2683e959-ecff-478e-aa0a-acf18f482d39\") " pod="openstack/openstack-cell1-galera-0" Mar 10 19:08:31 crc kubenswrapper[4861]: I0310 19:08:31.705100 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tcwmd\" (UniqueName: \"kubernetes.io/projected/2683e959-ecff-478e-aa0a-acf18f482d39-kube-api-access-tcwmd\") pod \"openstack-cell1-galera-0\" (UID: \"2683e959-ecff-478e-aa0a-acf18f482d39\") " pod="openstack/openstack-cell1-galera-0" Mar 10 19:08:31 crc kubenswrapper[4861]: I0310 19:08:31.705340 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"openstack-cell1-galera-0\" (UID: \"2683e959-ecff-478e-aa0a-acf18f482d39\") " pod="openstack/openstack-cell1-galera-0" Mar 10 19:08:31 crc kubenswrapper[4861]: I0310 19:08:31.727561 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-cell1-galera-0" Mar 10 19:08:31 crc kubenswrapper[4861]: I0310 19:08:31.784146 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41fdd1f2-c0e5-4dbe-aa18-bd6dd1a4754d-combined-ca-bundle\") pod \"memcached-0\" (UID: \"41fdd1f2-c0e5-4dbe-aa18-bd6dd1a4754d\") " pod="openstack/memcached-0" Mar 10 19:08:31 crc kubenswrapper[4861]: I0310 19:08:31.784532 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/41fdd1f2-c0e5-4dbe-aa18-bd6dd1a4754d-kolla-config\") pod \"memcached-0\" (UID: \"41fdd1f2-c0e5-4dbe-aa18-bd6dd1a4754d\") " pod="openstack/memcached-0" Mar 10 19:08:31 crc kubenswrapper[4861]: I0310 19:08:31.784568 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/41fdd1f2-c0e5-4dbe-aa18-bd6dd1a4754d-config-data\") pod \"memcached-0\" (UID: \"41fdd1f2-c0e5-4dbe-aa18-bd6dd1a4754d\") " pod="openstack/memcached-0" Mar 10 19:08:31 crc kubenswrapper[4861]: I0310 19:08:31.784609 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/41fdd1f2-c0e5-4dbe-aa18-bd6dd1a4754d-memcached-tls-certs\") pod \"memcached-0\" (UID: \"41fdd1f2-c0e5-4dbe-aa18-bd6dd1a4754d\") " pod="openstack/memcached-0" Mar 10 19:08:31 crc kubenswrapper[4861]: I0310 19:08:31.784630 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fjqgx\" (UniqueName: \"kubernetes.io/projected/41fdd1f2-c0e5-4dbe-aa18-bd6dd1a4754d-kube-api-access-fjqgx\") pod \"memcached-0\" (UID: \"41fdd1f2-c0e5-4dbe-aa18-bd6dd1a4754d\") " pod="openstack/memcached-0" Mar 10 19:08:31 crc kubenswrapper[4861]: I0310 
19:08:31.885849 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41fdd1f2-c0e5-4dbe-aa18-bd6dd1a4754d-combined-ca-bundle\") pod \"memcached-0\" (UID: \"41fdd1f2-c0e5-4dbe-aa18-bd6dd1a4754d\") " pod="openstack/memcached-0" Mar 10 19:08:31 crc kubenswrapper[4861]: I0310 19:08:31.885899 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/41fdd1f2-c0e5-4dbe-aa18-bd6dd1a4754d-kolla-config\") pod \"memcached-0\" (UID: \"41fdd1f2-c0e5-4dbe-aa18-bd6dd1a4754d\") " pod="openstack/memcached-0" Mar 10 19:08:31 crc kubenswrapper[4861]: I0310 19:08:31.885966 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/41fdd1f2-c0e5-4dbe-aa18-bd6dd1a4754d-config-data\") pod \"memcached-0\" (UID: \"41fdd1f2-c0e5-4dbe-aa18-bd6dd1a4754d\") " pod="openstack/memcached-0" Mar 10 19:08:31 crc kubenswrapper[4861]: I0310 19:08:31.886029 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/41fdd1f2-c0e5-4dbe-aa18-bd6dd1a4754d-memcached-tls-certs\") pod \"memcached-0\" (UID: \"41fdd1f2-c0e5-4dbe-aa18-bd6dd1a4754d\") " pod="openstack/memcached-0" Mar 10 19:08:31 crc kubenswrapper[4861]: I0310 19:08:31.886055 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fjqgx\" (UniqueName: \"kubernetes.io/projected/41fdd1f2-c0e5-4dbe-aa18-bd6dd1a4754d-kube-api-access-fjqgx\") pod \"memcached-0\" (UID: \"41fdd1f2-c0e5-4dbe-aa18-bd6dd1a4754d\") " pod="openstack/memcached-0" Mar 10 19:08:31 crc kubenswrapper[4861]: I0310 19:08:31.886931 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/41fdd1f2-c0e5-4dbe-aa18-bd6dd1a4754d-kolla-config\") pod 
\"memcached-0\" (UID: \"41fdd1f2-c0e5-4dbe-aa18-bd6dd1a4754d\") " pod="openstack/memcached-0" Mar 10 19:08:31 crc kubenswrapper[4861]: I0310 19:08:31.886977 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/41fdd1f2-c0e5-4dbe-aa18-bd6dd1a4754d-config-data\") pod \"memcached-0\" (UID: \"41fdd1f2-c0e5-4dbe-aa18-bd6dd1a4754d\") " pod="openstack/memcached-0" Mar 10 19:08:31 crc kubenswrapper[4861]: I0310 19:08:31.889323 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41fdd1f2-c0e5-4dbe-aa18-bd6dd1a4754d-combined-ca-bundle\") pod \"memcached-0\" (UID: \"41fdd1f2-c0e5-4dbe-aa18-bd6dd1a4754d\") " pod="openstack/memcached-0" Mar 10 19:08:31 crc kubenswrapper[4861]: I0310 19:08:31.889541 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/41fdd1f2-c0e5-4dbe-aa18-bd6dd1a4754d-memcached-tls-certs\") pod \"memcached-0\" (UID: \"41fdd1f2-c0e5-4dbe-aa18-bd6dd1a4754d\") " pod="openstack/memcached-0" Mar 10 19:08:31 crc kubenswrapper[4861]: I0310 19:08:31.903796 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fjqgx\" (UniqueName: \"kubernetes.io/projected/41fdd1f2-c0e5-4dbe-aa18-bd6dd1a4754d-kube-api-access-fjqgx\") pod \"memcached-0\" (UID: \"41fdd1f2-c0e5-4dbe-aa18-bd6dd1a4754d\") " pod="openstack/memcached-0" Mar 10 19:08:31 crc kubenswrapper[4861]: I0310 19:08:31.985287 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Mar 10 19:08:34 crc kubenswrapper[4861]: I0310 19:08:34.023986 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Mar 10 19:08:34 crc kubenswrapper[4861]: I0310 19:08:34.025076 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Mar 10 19:08:34 crc kubenswrapper[4861]: I0310 19:08:34.035869 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-pj7fs" Mar 10 19:08:34 crc kubenswrapper[4861]: I0310 19:08:34.037509 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Mar 10 19:08:34 crc kubenswrapper[4861]: I0310 19:08:34.128852 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bkg4b\" (UniqueName: \"kubernetes.io/projected/4793319d-2e64-4fda-9df8-9a97ff264050-kube-api-access-bkg4b\") pod \"kube-state-metrics-0\" (UID: \"4793319d-2e64-4fda-9df8-9a97ff264050\") " pod="openstack/kube-state-metrics-0" Mar 10 19:08:34 crc kubenswrapper[4861]: I0310 19:08:34.231378 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bkg4b\" (UniqueName: \"kubernetes.io/projected/4793319d-2e64-4fda-9df8-9a97ff264050-kube-api-access-bkg4b\") pod \"kube-state-metrics-0\" (UID: \"4793319d-2e64-4fda-9df8-9a97ff264050\") " pod="openstack/kube-state-metrics-0" Mar 10 19:08:34 crc kubenswrapper[4861]: I0310 19:08:34.262090 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bkg4b\" (UniqueName: \"kubernetes.io/projected/4793319d-2e64-4fda-9df8-9a97ff264050-kube-api-access-bkg4b\") pod \"kube-state-metrics-0\" (UID: \"4793319d-2e64-4fda-9df8-9a97ff264050\") " pod="openstack/kube-state-metrics-0" Mar 10 19:08:34 crc kubenswrapper[4861]: I0310 19:08:34.392502 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Mar 10 19:08:36 crc kubenswrapper[4861]: I0310 19:08:36.192314 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-zvlgw"] Mar 10 19:08:36 crc kubenswrapper[4861]: I0310 19:08:36.194373 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-zvlgw" Mar 10 19:08:36 crc kubenswrapper[4861]: I0310 19:08:36.197726 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncontroller-ovncontroller-dockercfg-cvrvb" Mar 10 19:08:36 crc kubenswrapper[4861]: I0310 19:08:36.197893 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovncontroller-ovndbs" Mar 10 19:08:36 crc kubenswrapper[4861]: I0310 19:08:36.198176 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts" Mar 10 19:08:36 crc kubenswrapper[4861]: I0310 19:08:36.209058 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-zvlgw"] Mar 10 19:08:36 crc kubenswrapper[4861]: I0310 19:08:36.221513 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ovs-cw7x8"] Mar 10 19:08:36 crc kubenswrapper[4861]: I0310 19:08:36.223359 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-ovs-cw7x8" Mar 10 19:08:36 crc kubenswrapper[4861]: I0310 19:08:36.294235 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-cw7x8"] Mar 10 19:08:36 crc kubenswrapper[4861]: I0310 19:08:36.365832 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/726cec08-5661-4b62-8a44-028b015119e4-combined-ca-bundle\") pod \"ovn-controller-zvlgw\" (UID: \"726cec08-5661-4b62-8a44-028b015119e4\") " pod="openstack/ovn-controller-zvlgw" Mar 10 19:08:36 crc kubenswrapper[4861]: I0310 19:08:36.365914 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/726cec08-5661-4b62-8a44-028b015119e4-var-run\") pod \"ovn-controller-zvlgw\" (UID: \"726cec08-5661-4b62-8a44-028b015119e4\") " pod="openstack/ovn-controller-zvlgw" Mar 10 19:08:36 crc kubenswrapper[4861]: I0310 19:08:36.366006 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/726cec08-5661-4b62-8a44-028b015119e4-scripts\") pod \"ovn-controller-zvlgw\" (UID: \"726cec08-5661-4b62-8a44-028b015119e4\") " pod="openstack/ovn-controller-zvlgw" Mar 10 19:08:36 crc kubenswrapper[4861]: I0310 19:08:36.366074 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nsfvl\" (UniqueName: \"kubernetes.io/projected/2f72ec66-0d64-4a5f-b1c6-17d62a735065-kube-api-access-nsfvl\") pod \"ovn-controller-ovs-cw7x8\" (UID: \"2f72ec66-0d64-4a5f-b1c6-17d62a735065\") " pod="openstack/ovn-controller-ovs-cw7x8" Mar 10 19:08:36 crc kubenswrapper[4861]: I0310 19:08:36.366111 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: 
\"kubernetes.io/host-path/2f72ec66-0d64-4a5f-b1c6-17d62a735065-var-lib\") pod \"ovn-controller-ovs-cw7x8\" (UID: \"2f72ec66-0d64-4a5f-b1c6-17d62a735065\") " pod="openstack/ovn-controller-ovs-cw7x8" Mar 10 19:08:36 crc kubenswrapper[4861]: I0310 19:08:36.366143 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/726cec08-5661-4b62-8a44-028b015119e4-ovn-controller-tls-certs\") pod \"ovn-controller-zvlgw\" (UID: \"726cec08-5661-4b62-8a44-028b015119e4\") " pod="openstack/ovn-controller-zvlgw" Mar 10 19:08:36 crc kubenswrapper[4861]: I0310 19:08:36.366202 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/2f72ec66-0d64-4a5f-b1c6-17d62a735065-var-run\") pod \"ovn-controller-ovs-cw7x8\" (UID: \"2f72ec66-0d64-4a5f-b1c6-17d62a735065\") " pod="openstack/ovn-controller-ovs-cw7x8" Mar 10 19:08:36 crc kubenswrapper[4861]: I0310 19:08:36.366218 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/2f72ec66-0d64-4a5f-b1c6-17d62a735065-var-log\") pod \"ovn-controller-ovs-cw7x8\" (UID: \"2f72ec66-0d64-4a5f-b1c6-17d62a735065\") " pod="openstack/ovn-controller-ovs-cw7x8" Mar 10 19:08:36 crc kubenswrapper[4861]: I0310 19:08:36.366232 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/726cec08-5661-4b62-8a44-028b015119e4-var-log-ovn\") pod \"ovn-controller-zvlgw\" (UID: \"726cec08-5661-4b62-8a44-028b015119e4\") " pod="openstack/ovn-controller-zvlgw" Mar 10 19:08:36 crc kubenswrapper[4861]: I0310 19:08:36.366313 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/2f72ec66-0d64-4a5f-b1c6-17d62a735065-scripts\") pod \"ovn-controller-ovs-cw7x8\" (UID: \"2f72ec66-0d64-4a5f-b1c6-17d62a735065\") " pod="openstack/ovn-controller-ovs-cw7x8" Mar 10 19:08:36 crc kubenswrapper[4861]: I0310 19:08:36.366338 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/726cec08-5661-4b62-8a44-028b015119e4-var-run-ovn\") pod \"ovn-controller-zvlgw\" (UID: \"726cec08-5661-4b62-8a44-028b015119e4\") " pod="openstack/ovn-controller-zvlgw" Mar 10 19:08:36 crc kubenswrapper[4861]: I0310 19:08:36.366377 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6kwpj\" (UniqueName: \"kubernetes.io/projected/726cec08-5661-4b62-8a44-028b015119e4-kube-api-access-6kwpj\") pod \"ovn-controller-zvlgw\" (UID: \"726cec08-5661-4b62-8a44-028b015119e4\") " pod="openstack/ovn-controller-zvlgw" Mar 10 19:08:36 crc kubenswrapper[4861]: I0310 19:08:36.366445 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/2f72ec66-0d64-4a5f-b1c6-17d62a735065-etc-ovs\") pod \"ovn-controller-ovs-cw7x8\" (UID: \"2f72ec66-0d64-4a5f-b1c6-17d62a735065\") " pod="openstack/ovn-controller-ovs-cw7x8" Mar 10 19:08:36 crc kubenswrapper[4861]: I0310 19:08:36.467643 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/726cec08-5661-4b62-8a44-028b015119e4-scripts\") pod \"ovn-controller-zvlgw\" (UID: \"726cec08-5661-4b62-8a44-028b015119e4\") " pod="openstack/ovn-controller-zvlgw" Mar 10 19:08:36 crc kubenswrapper[4861]: I0310 19:08:36.467730 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nsfvl\" (UniqueName: 
\"kubernetes.io/projected/2f72ec66-0d64-4a5f-b1c6-17d62a735065-kube-api-access-nsfvl\") pod \"ovn-controller-ovs-cw7x8\" (UID: \"2f72ec66-0d64-4a5f-b1c6-17d62a735065\") " pod="openstack/ovn-controller-ovs-cw7x8" Mar 10 19:08:36 crc kubenswrapper[4861]: I0310 19:08:36.468459 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/2f72ec66-0d64-4a5f-b1c6-17d62a735065-var-lib\") pod \"ovn-controller-ovs-cw7x8\" (UID: \"2f72ec66-0d64-4a5f-b1c6-17d62a735065\") " pod="openstack/ovn-controller-ovs-cw7x8" Mar 10 19:08:36 crc kubenswrapper[4861]: I0310 19:08:36.468501 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/726cec08-5661-4b62-8a44-028b015119e4-ovn-controller-tls-certs\") pod \"ovn-controller-zvlgw\" (UID: \"726cec08-5661-4b62-8a44-028b015119e4\") " pod="openstack/ovn-controller-zvlgw" Mar 10 19:08:36 crc kubenswrapper[4861]: I0310 19:08:36.468546 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/2f72ec66-0d64-4a5f-b1c6-17d62a735065-var-run\") pod \"ovn-controller-ovs-cw7x8\" (UID: \"2f72ec66-0d64-4a5f-b1c6-17d62a735065\") " pod="openstack/ovn-controller-ovs-cw7x8" Mar 10 19:08:36 crc kubenswrapper[4861]: I0310 19:08:36.468598 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/2f72ec66-0d64-4a5f-b1c6-17d62a735065-var-log\") pod \"ovn-controller-ovs-cw7x8\" (UID: \"2f72ec66-0d64-4a5f-b1c6-17d62a735065\") " pod="openstack/ovn-controller-ovs-cw7x8" Mar 10 19:08:36 crc kubenswrapper[4861]: I0310 19:08:36.468619 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/726cec08-5661-4b62-8a44-028b015119e4-var-log-ovn\") pod \"ovn-controller-zvlgw\" (UID: 
\"726cec08-5661-4b62-8a44-028b015119e4\") " pod="openstack/ovn-controller-zvlgw" Mar 10 19:08:36 crc kubenswrapper[4861]: I0310 19:08:36.469081 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2f72ec66-0d64-4a5f-b1c6-17d62a735065-scripts\") pod \"ovn-controller-ovs-cw7x8\" (UID: \"2f72ec66-0d64-4a5f-b1c6-17d62a735065\") " pod="openstack/ovn-controller-ovs-cw7x8" Mar 10 19:08:36 crc kubenswrapper[4861]: I0310 19:08:36.469119 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/726cec08-5661-4b62-8a44-028b015119e4-var-run-ovn\") pod \"ovn-controller-zvlgw\" (UID: \"726cec08-5661-4b62-8a44-028b015119e4\") " pod="openstack/ovn-controller-zvlgw" Mar 10 19:08:36 crc kubenswrapper[4861]: I0310 19:08:36.469146 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6kwpj\" (UniqueName: \"kubernetes.io/projected/726cec08-5661-4b62-8a44-028b015119e4-kube-api-access-6kwpj\") pod \"ovn-controller-zvlgw\" (UID: \"726cec08-5661-4b62-8a44-028b015119e4\") " pod="openstack/ovn-controller-zvlgw" Mar 10 19:08:36 crc kubenswrapper[4861]: I0310 19:08:36.469357 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/2f72ec66-0d64-4a5f-b1c6-17d62a735065-var-lib\") pod \"ovn-controller-ovs-cw7x8\" (UID: \"2f72ec66-0d64-4a5f-b1c6-17d62a735065\") " pod="openstack/ovn-controller-ovs-cw7x8" Mar 10 19:08:36 crc kubenswrapper[4861]: I0310 19:08:36.470048 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/2f72ec66-0d64-4a5f-b1c6-17d62a735065-etc-ovs\") pod \"ovn-controller-ovs-cw7x8\" (UID: \"2f72ec66-0d64-4a5f-b1c6-17d62a735065\") " pod="openstack/ovn-controller-ovs-cw7x8" Mar 10 19:08:36 crc kubenswrapper[4861]: I0310 19:08:36.470216 4861 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/726cec08-5661-4b62-8a44-028b015119e4-scripts\") pod \"ovn-controller-zvlgw\" (UID: \"726cec08-5661-4b62-8a44-028b015119e4\") " pod="openstack/ovn-controller-zvlgw" Mar 10 19:08:36 crc kubenswrapper[4861]: I0310 19:08:36.470936 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/726cec08-5661-4b62-8a44-028b015119e4-combined-ca-bundle\") pod \"ovn-controller-zvlgw\" (UID: \"726cec08-5661-4b62-8a44-028b015119e4\") " pod="openstack/ovn-controller-zvlgw" Mar 10 19:08:36 crc kubenswrapper[4861]: I0310 19:08:36.470990 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/726cec08-5661-4b62-8a44-028b015119e4-var-run\") pod \"ovn-controller-zvlgw\" (UID: \"726cec08-5661-4b62-8a44-028b015119e4\") " pod="openstack/ovn-controller-zvlgw" Mar 10 19:08:36 crc kubenswrapper[4861]: I0310 19:08:36.471366 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/726cec08-5661-4b62-8a44-028b015119e4-var-run\") pod \"ovn-controller-zvlgw\" (UID: \"726cec08-5661-4b62-8a44-028b015119e4\") " pod="openstack/ovn-controller-zvlgw" Mar 10 19:08:36 crc kubenswrapper[4861]: I0310 19:08:36.471502 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/726cec08-5661-4b62-8a44-028b015119e4-var-log-ovn\") pod \"ovn-controller-zvlgw\" (UID: \"726cec08-5661-4b62-8a44-028b015119e4\") " pod="openstack/ovn-controller-zvlgw" Mar 10 19:08:36 crc kubenswrapper[4861]: I0310 19:08:36.471590 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/726cec08-5661-4b62-8a44-028b015119e4-var-run-ovn\") pod 
\"ovn-controller-zvlgw\" (UID: \"726cec08-5661-4b62-8a44-028b015119e4\") " pod="openstack/ovn-controller-zvlgw" Mar 10 19:08:36 crc kubenswrapper[4861]: I0310 19:08:36.472252 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/2f72ec66-0d64-4a5f-b1c6-17d62a735065-etc-ovs\") pod \"ovn-controller-ovs-cw7x8\" (UID: \"2f72ec66-0d64-4a5f-b1c6-17d62a735065\") " pod="openstack/ovn-controller-ovs-cw7x8" Mar 10 19:08:36 crc kubenswrapper[4861]: I0310 19:08:36.472407 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2f72ec66-0d64-4a5f-b1c6-17d62a735065-scripts\") pod \"ovn-controller-ovs-cw7x8\" (UID: \"2f72ec66-0d64-4a5f-b1c6-17d62a735065\") " pod="openstack/ovn-controller-ovs-cw7x8" Mar 10 19:08:36 crc kubenswrapper[4861]: I0310 19:08:36.472461 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/2f72ec66-0d64-4a5f-b1c6-17d62a735065-var-run\") pod \"ovn-controller-ovs-cw7x8\" (UID: \"2f72ec66-0d64-4a5f-b1c6-17d62a735065\") " pod="openstack/ovn-controller-ovs-cw7x8" Mar 10 19:08:36 crc kubenswrapper[4861]: I0310 19:08:36.472648 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/2f72ec66-0d64-4a5f-b1c6-17d62a735065-var-log\") pod \"ovn-controller-ovs-cw7x8\" (UID: \"2f72ec66-0d64-4a5f-b1c6-17d62a735065\") " pod="openstack/ovn-controller-ovs-cw7x8" Mar 10 19:08:36 crc kubenswrapper[4861]: I0310 19:08:36.487854 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/726cec08-5661-4b62-8a44-028b015119e4-combined-ca-bundle\") pod \"ovn-controller-zvlgw\" (UID: \"726cec08-5661-4b62-8a44-028b015119e4\") " pod="openstack/ovn-controller-zvlgw" Mar 10 19:08:36 crc kubenswrapper[4861]: I0310 19:08:36.490645 4861 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/726cec08-5661-4b62-8a44-028b015119e4-ovn-controller-tls-certs\") pod \"ovn-controller-zvlgw\" (UID: \"726cec08-5661-4b62-8a44-028b015119e4\") " pod="openstack/ovn-controller-zvlgw" Mar 10 19:08:36 crc kubenswrapper[4861]: I0310 19:08:36.491197 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6kwpj\" (UniqueName: \"kubernetes.io/projected/726cec08-5661-4b62-8a44-028b015119e4-kube-api-access-6kwpj\") pod \"ovn-controller-zvlgw\" (UID: \"726cec08-5661-4b62-8a44-028b015119e4\") " pod="openstack/ovn-controller-zvlgw" Mar 10 19:08:36 crc kubenswrapper[4861]: I0310 19:08:36.492929 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nsfvl\" (UniqueName: \"kubernetes.io/projected/2f72ec66-0d64-4a5f-b1c6-17d62a735065-kube-api-access-nsfvl\") pod \"ovn-controller-ovs-cw7x8\" (UID: \"2f72ec66-0d64-4a5f-b1c6-17d62a735065\") " pod="openstack/ovn-controller-ovs-cw7x8" Mar 10 19:08:36 crc kubenswrapper[4861]: I0310 19:08:36.563914 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-zvlgw" Mar 10 19:08:36 crc kubenswrapper[4861]: I0310 19:08:36.582328 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-cw7x8" Mar 10 19:08:37 crc kubenswrapper[4861]: I0310 19:08:37.898953 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"] Mar 10 19:08:37 crc kubenswrapper[4861]: I0310 19:08:37.901343 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-0" Mar 10 19:08:37 crc kubenswrapper[4861]: I0310 19:08:37.904832 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config" Mar 10 19:08:37 crc kubenswrapper[4861]: I0310 19:08:37.904947 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-wdmhp" Mar 10 19:08:37 crc kubenswrapper[4861]: I0310 19:08:37.904947 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs" Mar 10 19:08:37 crc kubenswrapper[4861]: I0310 19:08:37.905238 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics" Mar 10 19:08:37 crc kubenswrapper[4861]: I0310 19:08:37.922193 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts" Mar 10 19:08:37 crc kubenswrapper[4861]: I0310 19:08:37.923341 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Mar 10 19:08:37 crc kubenswrapper[4861]: I0310 19:08:37.977020 4861 scope.go:117] "RemoveContainer" containerID="1f6ab33cf920175a47ebe2665cdf82ea8479039d71240ea679f36a3b34114d9d" Mar 10 19:08:37 crc kubenswrapper[4861]: I0310 19:08:37.996141 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/533092ca-8a4b-4005-909c-32736cde1a1e-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"533092ca-8a4b-4005-909c-32736cde1a1e\") " pod="openstack/ovsdbserver-nb-0" Mar 10 19:08:37 crc kubenswrapper[4861]: I0310 19:08:37.996222 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/533092ca-8a4b-4005-909c-32736cde1a1e-config\") pod \"ovsdbserver-nb-0\" (UID: \"533092ca-8a4b-4005-909c-32736cde1a1e\") " pod="openstack/ovsdbserver-nb-0" Mar 10 
19:08:37 crc kubenswrapper[4861]: I0310 19:08:37.996271 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/533092ca-8a4b-4005-909c-32736cde1a1e-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"533092ca-8a4b-4005-909c-32736cde1a1e\") " pod="openstack/ovsdbserver-nb-0" Mar 10 19:08:37 crc kubenswrapper[4861]: I0310 19:08:37.996532 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/533092ca-8a4b-4005-909c-32736cde1a1e-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"533092ca-8a4b-4005-909c-32736cde1a1e\") " pod="openstack/ovsdbserver-nb-0" Mar 10 19:08:37 crc kubenswrapper[4861]: I0310 19:08:37.996595 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/533092ca-8a4b-4005-909c-32736cde1a1e-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"533092ca-8a4b-4005-909c-32736cde1a1e\") " pod="openstack/ovsdbserver-nb-0" Mar 10 19:08:37 crc kubenswrapper[4861]: I0310 19:08:37.996672 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"ovsdbserver-nb-0\" (UID: \"533092ca-8a4b-4005-909c-32736cde1a1e\") " pod="openstack/ovsdbserver-nb-0" Mar 10 19:08:37 crc kubenswrapper[4861]: I0310 19:08:37.996849 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kt8qv\" (UniqueName: \"kubernetes.io/projected/533092ca-8a4b-4005-909c-32736cde1a1e-kube-api-access-kt8qv\") pod \"ovsdbserver-nb-0\" (UID: \"533092ca-8a4b-4005-909c-32736cde1a1e\") " pod="openstack/ovsdbserver-nb-0" Mar 10 19:08:37 crc kubenswrapper[4861]: I0310 19:08:37.996968 4861 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/533092ca-8a4b-4005-909c-32736cde1a1e-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"533092ca-8a4b-4005-909c-32736cde1a1e\") " pod="openstack/ovsdbserver-nb-0" Mar 10 19:08:38 crc kubenswrapper[4861]: I0310 19:08:38.098582 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/533092ca-8a4b-4005-909c-32736cde1a1e-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"533092ca-8a4b-4005-909c-32736cde1a1e\") " pod="openstack/ovsdbserver-nb-0" Mar 10 19:08:38 crc kubenswrapper[4861]: I0310 19:08:38.098675 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/533092ca-8a4b-4005-909c-32736cde1a1e-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"533092ca-8a4b-4005-909c-32736cde1a1e\") " pod="openstack/ovsdbserver-nb-0" Mar 10 19:08:38 crc kubenswrapper[4861]: I0310 19:08:38.098789 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/533092ca-8a4b-4005-909c-32736cde1a1e-config\") pod \"ovsdbserver-nb-0\" (UID: \"533092ca-8a4b-4005-909c-32736cde1a1e\") " pod="openstack/ovsdbserver-nb-0" Mar 10 19:08:38 crc kubenswrapper[4861]: I0310 19:08:38.098847 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/533092ca-8a4b-4005-909c-32736cde1a1e-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"533092ca-8a4b-4005-909c-32736cde1a1e\") " pod="openstack/ovsdbserver-nb-0" Mar 10 19:08:38 crc kubenswrapper[4861]: I0310 19:08:38.098958 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/533092ca-8a4b-4005-909c-32736cde1a1e-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"533092ca-8a4b-4005-909c-32736cde1a1e\") " pod="openstack/ovsdbserver-nb-0" Mar 10 19:08:38 crc kubenswrapper[4861]: I0310 19:08:38.098980 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/533092ca-8a4b-4005-909c-32736cde1a1e-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"533092ca-8a4b-4005-909c-32736cde1a1e\") " pod="openstack/ovsdbserver-nb-0" Mar 10 19:08:38 crc kubenswrapper[4861]: I0310 19:08:38.099007 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"ovsdbserver-nb-0\" (UID: \"533092ca-8a4b-4005-909c-32736cde1a1e\") " pod="openstack/ovsdbserver-nb-0" Mar 10 19:08:38 crc kubenswrapper[4861]: I0310 19:08:38.099047 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kt8qv\" (UniqueName: \"kubernetes.io/projected/533092ca-8a4b-4005-909c-32736cde1a1e-kube-api-access-kt8qv\") pod \"ovsdbserver-nb-0\" (UID: \"533092ca-8a4b-4005-909c-32736cde1a1e\") " pod="openstack/ovsdbserver-nb-0" Mar 10 19:08:38 crc kubenswrapper[4861]: I0310 19:08:38.099741 4861 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"ovsdbserver-nb-0\" (UID: \"533092ca-8a4b-4005-909c-32736cde1a1e\") device mount path \"/mnt/openstack/pv12\"" pod="openstack/ovsdbserver-nb-0" Mar 10 19:08:38 crc kubenswrapper[4861]: I0310 19:08:38.099953 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/533092ca-8a4b-4005-909c-32736cde1a1e-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"533092ca-8a4b-4005-909c-32736cde1a1e\") " 
pod="openstack/ovsdbserver-nb-0" Mar 10 19:08:38 crc kubenswrapper[4861]: I0310 19:08:38.100417 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/533092ca-8a4b-4005-909c-32736cde1a1e-config\") pod \"ovsdbserver-nb-0\" (UID: \"533092ca-8a4b-4005-909c-32736cde1a1e\") " pod="openstack/ovsdbserver-nb-0" Mar 10 19:08:38 crc kubenswrapper[4861]: I0310 19:08:38.101697 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/533092ca-8a4b-4005-909c-32736cde1a1e-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"533092ca-8a4b-4005-909c-32736cde1a1e\") " pod="openstack/ovsdbserver-nb-0" Mar 10 19:08:38 crc kubenswrapper[4861]: I0310 19:08:38.106108 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/533092ca-8a4b-4005-909c-32736cde1a1e-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"533092ca-8a4b-4005-909c-32736cde1a1e\") " pod="openstack/ovsdbserver-nb-0" Mar 10 19:08:38 crc kubenswrapper[4861]: I0310 19:08:38.109489 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/533092ca-8a4b-4005-909c-32736cde1a1e-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"533092ca-8a4b-4005-909c-32736cde1a1e\") " pod="openstack/ovsdbserver-nb-0" Mar 10 19:08:38 crc kubenswrapper[4861]: I0310 19:08:38.114467 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/533092ca-8a4b-4005-909c-32736cde1a1e-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"533092ca-8a4b-4005-909c-32736cde1a1e\") " pod="openstack/ovsdbserver-nb-0" Mar 10 19:08:38 crc kubenswrapper[4861]: I0310 19:08:38.132801 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kt8qv\" (UniqueName: 
\"kubernetes.io/projected/533092ca-8a4b-4005-909c-32736cde1a1e-kube-api-access-kt8qv\") pod \"ovsdbserver-nb-0\" (UID: \"533092ca-8a4b-4005-909c-32736cde1a1e\") " pod="openstack/ovsdbserver-nb-0" Mar 10 19:08:38 crc kubenswrapper[4861]: I0310 19:08:38.138065 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"ovsdbserver-nb-0\" (UID: \"533092ca-8a4b-4005-909c-32736cde1a1e\") " pod="openstack/ovsdbserver-nb-0" Mar 10 19:08:38 crc kubenswrapper[4861]: I0310 19:08:38.268564 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Mar 10 19:08:40 crc kubenswrapper[4861]: I0310 19:08:40.573548 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"] Mar 10 19:08:40 crc kubenswrapper[4861]: I0310 19:08:40.575384 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Mar 10 19:08:40 crc kubenswrapper[4861]: I0310 19:08:40.579194 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts" Mar 10 19:08:40 crc kubenswrapper[4861]: I0310 19:08:40.579735 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs" Mar 10 19:08:40 crc kubenswrapper[4861]: I0310 19:08:40.579807 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-f6k64" Mar 10 19:08:40 crc kubenswrapper[4861]: I0310 19:08:40.579904 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config" Mar 10 19:08:40 crc kubenswrapper[4861]: I0310 19:08:40.589332 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Mar 10 19:08:40 crc kubenswrapper[4861]: I0310 19:08:40.642277 4861 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f75q9\" (UniqueName: \"kubernetes.io/projected/bb836e5b-f1a1-4d7a-8de0-03cddd650c4a-kube-api-access-f75q9\") pod \"ovsdbserver-sb-0\" (UID: \"bb836e5b-f1a1-4d7a-8de0-03cddd650c4a\") " pod="openstack/ovsdbserver-sb-0" Mar 10 19:08:40 crc kubenswrapper[4861]: I0310 19:08:40.642338 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/bb836e5b-f1a1-4d7a-8de0-03cddd650c4a-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"bb836e5b-f1a1-4d7a-8de0-03cddd650c4a\") " pod="openstack/ovsdbserver-sb-0" Mar 10 19:08:40 crc kubenswrapper[4861]: I0310 19:08:40.642537 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bb836e5b-f1a1-4d7a-8de0-03cddd650c4a-config\") pod \"ovsdbserver-sb-0\" (UID: \"bb836e5b-f1a1-4d7a-8de0-03cddd650c4a\") " pod="openstack/ovsdbserver-sb-0" Mar 10 19:08:40 crc kubenswrapper[4861]: I0310 19:08:40.642650 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/bb836e5b-f1a1-4d7a-8de0-03cddd650c4a-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"bb836e5b-f1a1-4d7a-8de0-03cddd650c4a\") " pod="openstack/ovsdbserver-sb-0" Mar 10 19:08:40 crc kubenswrapper[4861]: I0310 19:08:40.642799 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/bb836e5b-f1a1-4d7a-8de0-03cddd650c4a-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"bb836e5b-f1a1-4d7a-8de0-03cddd650c4a\") " pod="openstack/ovsdbserver-sb-0" Mar 10 19:08:40 crc kubenswrapper[4861]: I0310 19:08:40.642913 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"ovsdbserver-sb-0\" (UID: \"bb836e5b-f1a1-4d7a-8de0-03cddd650c4a\") " pod="openstack/ovsdbserver-sb-0" Mar 10 19:08:40 crc kubenswrapper[4861]: I0310 19:08:40.642945 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/bb836e5b-f1a1-4d7a-8de0-03cddd650c4a-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"bb836e5b-f1a1-4d7a-8de0-03cddd650c4a\") " pod="openstack/ovsdbserver-sb-0" Mar 10 19:08:40 crc kubenswrapper[4861]: I0310 19:08:40.643087 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb836e5b-f1a1-4d7a-8de0-03cddd650c4a-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"bb836e5b-f1a1-4d7a-8de0-03cddd650c4a\") " pod="openstack/ovsdbserver-sb-0" Mar 10 19:08:40 crc kubenswrapper[4861]: I0310 19:08:40.745575 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/bb836e5b-f1a1-4d7a-8de0-03cddd650c4a-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"bb836e5b-f1a1-4d7a-8de0-03cddd650c4a\") " pod="openstack/ovsdbserver-sb-0" Mar 10 19:08:40 crc kubenswrapper[4861]: I0310 19:08:40.745640 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"ovsdbserver-sb-0\" (UID: \"bb836e5b-f1a1-4d7a-8de0-03cddd650c4a\") " pod="openstack/ovsdbserver-sb-0" Mar 10 19:08:40 crc kubenswrapper[4861]: I0310 19:08:40.745663 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/bb836e5b-f1a1-4d7a-8de0-03cddd650c4a-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: 
\"bb836e5b-f1a1-4d7a-8de0-03cddd650c4a\") " pod="openstack/ovsdbserver-sb-0" Mar 10 19:08:40 crc kubenswrapper[4861]: I0310 19:08:40.745723 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb836e5b-f1a1-4d7a-8de0-03cddd650c4a-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"bb836e5b-f1a1-4d7a-8de0-03cddd650c4a\") " pod="openstack/ovsdbserver-sb-0" Mar 10 19:08:40 crc kubenswrapper[4861]: I0310 19:08:40.745757 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f75q9\" (UniqueName: \"kubernetes.io/projected/bb836e5b-f1a1-4d7a-8de0-03cddd650c4a-kube-api-access-f75q9\") pod \"ovsdbserver-sb-0\" (UID: \"bb836e5b-f1a1-4d7a-8de0-03cddd650c4a\") " pod="openstack/ovsdbserver-sb-0" Mar 10 19:08:40 crc kubenswrapper[4861]: I0310 19:08:40.745781 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/bb836e5b-f1a1-4d7a-8de0-03cddd650c4a-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"bb836e5b-f1a1-4d7a-8de0-03cddd650c4a\") " pod="openstack/ovsdbserver-sb-0" Mar 10 19:08:40 crc kubenswrapper[4861]: I0310 19:08:40.745814 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bb836e5b-f1a1-4d7a-8de0-03cddd650c4a-config\") pod \"ovsdbserver-sb-0\" (UID: \"bb836e5b-f1a1-4d7a-8de0-03cddd650c4a\") " pod="openstack/ovsdbserver-sb-0" Mar 10 19:08:40 crc kubenswrapper[4861]: I0310 19:08:40.745840 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/bb836e5b-f1a1-4d7a-8de0-03cddd650c4a-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"bb836e5b-f1a1-4d7a-8de0-03cddd650c4a\") " pod="openstack/ovsdbserver-sb-0" Mar 10 19:08:40 crc kubenswrapper[4861]: I0310 19:08:40.745989 4861 
operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"ovsdbserver-sb-0\" (UID: \"bb836e5b-f1a1-4d7a-8de0-03cddd650c4a\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/ovsdbserver-sb-0" Mar 10 19:08:40 crc kubenswrapper[4861]: I0310 19:08:40.747005 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/bb836e5b-f1a1-4d7a-8de0-03cddd650c4a-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"bb836e5b-f1a1-4d7a-8de0-03cddd650c4a\") " pod="openstack/ovsdbserver-sb-0" Mar 10 19:08:40 crc kubenswrapper[4861]: I0310 19:08:40.747264 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bb836e5b-f1a1-4d7a-8de0-03cddd650c4a-config\") pod \"ovsdbserver-sb-0\" (UID: \"bb836e5b-f1a1-4d7a-8de0-03cddd650c4a\") " pod="openstack/ovsdbserver-sb-0" Mar 10 19:08:40 crc kubenswrapper[4861]: I0310 19:08:40.748764 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/bb836e5b-f1a1-4d7a-8de0-03cddd650c4a-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"bb836e5b-f1a1-4d7a-8de0-03cddd650c4a\") " pod="openstack/ovsdbserver-sb-0" Mar 10 19:08:40 crc kubenswrapper[4861]: I0310 19:08:40.752486 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/bb836e5b-f1a1-4d7a-8de0-03cddd650c4a-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"bb836e5b-f1a1-4d7a-8de0-03cddd650c4a\") " pod="openstack/ovsdbserver-sb-0" Mar 10 19:08:40 crc kubenswrapper[4861]: I0310 19:08:40.752852 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/bb836e5b-f1a1-4d7a-8de0-03cddd650c4a-metrics-certs-tls-certs\") pod 
\"ovsdbserver-sb-0\" (UID: \"bb836e5b-f1a1-4d7a-8de0-03cddd650c4a\") " pod="openstack/ovsdbserver-sb-0" Mar 10 19:08:40 crc kubenswrapper[4861]: I0310 19:08:40.767130 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f75q9\" (UniqueName: \"kubernetes.io/projected/bb836e5b-f1a1-4d7a-8de0-03cddd650c4a-kube-api-access-f75q9\") pod \"ovsdbserver-sb-0\" (UID: \"bb836e5b-f1a1-4d7a-8de0-03cddd650c4a\") " pod="openstack/ovsdbserver-sb-0" Mar 10 19:08:40 crc kubenswrapper[4861]: I0310 19:08:40.774376 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"ovsdbserver-sb-0\" (UID: \"bb836e5b-f1a1-4d7a-8de0-03cddd650c4a\") " pod="openstack/ovsdbserver-sb-0" Mar 10 19:08:40 crc kubenswrapper[4861]: I0310 19:08:40.776881 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb836e5b-f1a1-4d7a-8de0-03cddd650c4a-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"bb836e5b-f1a1-4d7a-8de0-03cddd650c4a\") " pod="openstack/ovsdbserver-sb-0" Mar 10 19:08:40 crc kubenswrapper[4861]: I0310 19:08:40.919331 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-0" Mar 10 19:08:47 crc kubenswrapper[4861]: E0310 19:08:47.478924 4861 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:944833b50342d462c10637342bc85197a8cf099a3650df12e23854dde99af514" Mar 10 19:08:47 crc kubenswrapper[4861]: E0310 19:08:47.479590 4861 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:944833b50342d462c10637342bc85197a8cf099a3650df12e23854dde99af514,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n659h4h664hbh658h587h67ch89h587h8fh679hc6hf9h55fh644h5d5h698h68dh5cdh5ffh669h54ch9h689hb8hd4h5bfhd8h5d7h5fh665h574q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kkmbd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProb
e:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-7c47bcb9f9-w2smt_openstack(5adbfde9-b062-424b-8ab9-641e35ace118): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Mar 10 19:08:47 crc kubenswrapper[4861]: E0310 19:08:47.480832 4861 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:944833b50342d462c10637342bc85197a8cf099a3650df12e23854dde99af514" Mar 10 19:08:47 crc kubenswrapper[4861]: E0310 19:08:47.480951 4861 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:944833b50342d462c10637342bc85197a8cf099a3650df12e23854dde99af514,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nffh5bdhf4h5f8h79h55h77h58fh56dh7bh6fh578hbch55dh68h56bhd9h65dh57ch658hc9h566h666h688h58h65dh684h5d7h6ch575h5d6h88q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-sxnr9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-589db6c89c-gz25g_openstack(5f317cf6-a4e5-42c3-bf6c-a0546f9671d1): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Mar 10 19:08:47 crc kubenswrapper[4861]: E0310 19:08:47.481241 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" 
pod="openstack/dnsmasq-dns-7c47bcb9f9-w2smt" podUID="5adbfde9-b062-424b-8ab9-641e35ace118" Mar 10 19:08:47 crc kubenswrapper[4861]: E0310 19:08:47.482067 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-589db6c89c-gz25g" podUID="5f317cf6-a4e5-42c3-bf6c-a0546f9671d1" Mar 10 19:08:47 crc kubenswrapper[4861]: E0310 19:08:47.489092 4861 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:944833b50342d462c10637342bc85197a8cf099a3650df12e23854dde99af514" Mar 10 19:08:47 crc kubenswrapper[4861]: E0310 19:08:47.489210 4861 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:944833b50342d462c10637342bc85197a8cf099a3650df12e23854dde99af514,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:ndfhb5h667h568h584h5f9h58dh565h664h587h597h577h64bh5c4h66fh647hbdh68ch5c5h68dh686h5f7h64hd7hc6h55fh57bh98h57fh87h5fh57fq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5fjsm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-86bbd886cf-j8cqt_openstack(56879dcf-80eb-4c91-8331-b659e3317080): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Mar 10 19:08:47 crc kubenswrapper[4861]: E0310 19:08:47.490339 4861 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-86bbd886cf-j8cqt" podUID="56879dcf-80eb-4c91-8331-b659e3317080" Mar 10 19:08:47 crc kubenswrapper[4861]: E0310 19:08:47.516861 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:944833b50342d462c10637342bc85197a8cf099a3650df12e23854dde99af514\\\"\"" pod="openstack/dnsmasq-dns-7c47bcb9f9-w2smt" podUID="5adbfde9-b062-424b-8ab9-641e35ace118" Mar 10 19:08:47 crc kubenswrapper[4861]: E0310 19:08:47.517357 4861 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:944833b50342d462c10637342bc85197a8cf099a3650df12e23854dde99af514" Mar 10 19:08:47 crc kubenswrapper[4861]: E0310 19:08:47.517553 4861 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:944833b50342d462c10637342bc85197a8cf099a3650df12e23854dde99af514,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nfdh5dfhb6h64h676hc4h78h97h669h54chfbh696hb5h54bh5d4h6bh64h644h677h584h5cbh698h9dh5bbh5f8h5b8hcdh644h5c7h694hbfh589q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9qzsj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-78cb4465c9-psclv_openstack(6a51c99a-f2a2-4db2-8ac4-adf514cad8a3): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Mar 10 19:08:47 crc kubenswrapper[4861]: E0310 19:08:47.518765 4861 pod_workers.go:1301] "Error syncing 
pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-78cb4465c9-psclv" podUID="6a51c99a-f2a2-4db2-8ac4-adf514cad8a3" Mar 10 19:08:48 crc kubenswrapper[4861]: I0310 19:08:48.009904 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Mar 10 19:08:48 crc kubenswrapper[4861]: E0310 19:08:48.523581 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:944833b50342d462c10637342bc85197a8cf099a3650df12e23854dde99af514\\\"\"" pod="openstack/dnsmasq-dns-78cb4465c9-psclv" podUID="6a51c99a-f2a2-4db2-8ac4-adf514cad8a3" Mar 10 19:08:48 crc kubenswrapper[4861]: E0310 19:08:48.670838 4861 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-rabbitmq@sha256:9e7397d61095b02a8c1deb24bca874bc0032aa18019f12d53e0eda8998b85447" Mar 10 19:08:48 crc kubenswrapper[4861]: E0310 19:08:48.671121 4861 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:setup-container,Image:quay.io/podified-antelope-centos9/openstack-rabbitmq@sha256:9e7397d61095b02a8c1deb24bca874bc0032aa18019f12d53e0eda8998b85447,Command:[sh -c cp /tmp/erlang-cookie-secret/.erlang.cookie /var/lib/rabbitmq/.erlang.cookie && chmod 600 /var/lib/rabbitmq/.erlang.cookie ; cp /tmp/rabbitmq-plugins/enabled_plugins /operator/enabled_plugins ; echo '[default]' > /var/lib/rabbitmq/.rabbitmqadmin.conf && sed -e 's/default_user/username/' -e 's/default_pass/password/' /tmp/default_user.conf >> /var/lib/rabbitmq/.rabbitmqadmin.conf && chmod 600 /var/lib/rabbitmq/.rabbitmqadmin.conf ; sleep 
30],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:plugins-conf,ReadOnly:false,MountPath:/tmp/rabbitmq-plugins/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-erlang-cookie,ReadOnly:false,MountPath:/var/lib/rabbitmq/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:erlang-cookie-secret,ReadOnly:false,MountPath:/tmp/erlang-cookie-secret/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-plugins,ReadOnly:false,MountPath:/operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:persistence,ReadOnly:false,MountPath:/var/lib/rabbitmq/mnesia/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-confd,ReadOnly:false,MountPath:/tmp/default_user.conf,SubPath:default_user.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7q5nk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerR
esizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cell1-server-0_openstack(9fa4a97d-682a-40eb-93e0-5f5167ddb0a0): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Mar 10 19:08:48 crc kubenswrapper[4861]: E0310 19:08:48.672383 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/rabbitmq-cell1-server-0" podUID="9fa4a97d-682a-40eb-93e0-5f5167ddb0a0" Mar 10 19:08:48 crc kubenswrapper[4861]: E0310 19:08:48.710373 4861 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-rabbitmq@sha256:9e7397d61095b02a8c1deb24bca874bc0032aa18019f12d53e0eda8998b85447" Mar 10 19:08:48 crc kubenswrapper[4861]: E0310 19:08:48.710614 4861 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:setup-container,Image:quay.io/podified-antelope-centos9/openstack-rabbitmq@sha256:9e7397d61095b02a8c1deb24bca874bc0032aa18019f12d53e0eda8998b85447,Command:[sh -c cp /tmp/erlang-cookie-secret/.erlang.cookie /var/lib/rabbitmq/.erlang.cookie && chmod 600 /var/lib/rabbitmq/.erlang.cookie ; cp /tmp/rabbitmq-plugins/enabled_plugins /operator/enabled_plugins ; echo '[default]' > /var/lib/rabbitmq/.rabbitmqadmin.conf && sed -e 's/default_user/username/' -e 's/default_pass/password/' /tmp/default_user.conf >> /var/lib/rabbitmq/.rabbitmqadmin.conf && chmod 600 /var/lib/rabbitmq/.rabbitmqadmin.conf ; sleep 30],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:plugins-conf,ReadOnly:false,MountPath:/tmp/rabbitmq-plugins/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-erlang-cookie,ReadOnly:false,MountPath:/var/lib/rabbitmq/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:erlang-cookie-secret,ReadOnly:false,MountPath:/tmp/erlang-cookie-secret/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-plugins,ReadOnly:false,MountPath:/operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:persistence,ReadOnly:false,MountPath:/var/lib/rabbitmq/mnesia/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-confd,ReadOnly:false,MountPath:/tmp/default_user.conf,SubPath:default_user.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lhrlg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-server-0_openstack(0ba95f55-3cea-4f0b-8f09-c6b4027789f8): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Mar 10 19:08:48 crc 
kubenswrapper[4861]: E0310 19:08:48.712159 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/rabbitmq-server-0" podUID="0ba95f55-3cea-4f0b-8f09-c6b4027789f8" Mar 10 19:08:48 crc kubenswrapper[4861]: W0310 19:08:48.724681 4861 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda0b9c71c_6fe7_4eb9_9f95_87fce0f70caa.slice/crio-d57a1cf8c3616e6222bf4788727bf8702b35c72a689146bc27443a42321c57b5 WatchSource:0}: Error finding container d57a1cf8c3616e6222bf4788727bf8702b35c72a689146bc27443a42321c57b5: Status 404 returned error can't find the container with id d57a1cf8c3616e6222bf4788727bf8702b35c72a689146bc27443a42321c57b5 Mar 10 19:08:48 crc kubenswrapper[4861]: I0310 19:08:48.891302 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-86bbd886cf-j8cqt" Mar 10 19:08:48 crc kubenswrapper[4861]: I0310 19:08:48.891764 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-589db6c89c-gz25g" Mar 10 19:08:48 crc kubenswrapper[4861]: I0310 19:08:48.996274 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/56879dcf-80eb-4c91-8331-b659e3317080-config\") pod \"56879dcf-80eb-4c91-8331-b659e3317080\" (UID: \"56879dcf-80eb-4c91-8331-b659e3317080\") " Mar 10 19:08:48 crc kubenswrapper[4861]: I0310 19:08:48.996632 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/56879dcf-80eb-4c91-8331-b659e3317080-dns-svc\") pod \"56879dcf-80eb-4c91-8331-b659e3317080\" (UID: \"56879dcf-80eb-4c91-8331-b659e3317080\") " Mar 10 19:08:48 crc kubenswrapper[4861]: I0310 19:08:48.996722 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sxnr9\" (UniqueName: \"kubernetes.io/projected/5f317cf6-a4e5-42c3-bf6c-a0546f9671d1-kube-api-access-sxnr9\") pod \"5f317cf6-a4e5-42c3-bf6c-a0546f9671d1\" (UID: \"5f317cf6-a4e5-42c3-bf6c-a0546f9671d1\") " Mar 10 19:08:48 crc kubenswrapper[4861]: I0310 19:08:48.996756 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5f317cf6-a4e5-42c3-bf6c-a0546f9671d1-config\") pod \"5f317cf6-a4e5-42c3-bf6c-a0546f9671d1\" (UID: \"5f317cf6-a4e5-42c3-bf6c-a0546f9671d1\") " Mar 10 19:08:48 crc kubenswrapper[4861]: I0310 19:08:48.996762 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/56879dcf-80eb-4c91-8331-b659e3317080-config" (OuterVolumeSpecName: "config") pod "56879dcf-80eb-4c91-8331-b659e3317080" (UID: "56879dcf-80eb-4c91-8331-b659e3317080"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 10 19:08:48 crc kubenswrapper[4861]: I0310 19:08:48.996828 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5fjsm\" (UniqueName: \"kubernetes.io/projected/56879dcf-80eb-4c91-8331-b659e3317080-kube-api-access-5fjsm\") pod \"56879dcf-80eb-4c91-8331-b659e3317080\" (UID: \"56879dcf-80eb-4c91-8331-b659e3317080\") " Mar 10 19:08:48 crc kubenswrapper[4861]: I0310 19:08:48.997088 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/56879dcf-80eb-4c91-8331-b659e3317080-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "56879dcf-80eb-4c91-8331-b659e3317080" (UID: "56879dcf-80eb-4c91-8331-b659e3317080"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 10 19:08:48 crc kubenswrapper[4861]: I0310 19:08:48.997319 4861 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/56879dcf-80eb-4c91-8331-b659e3317080-config\") on node \"crc\" DevicePath \"\"" Mar 10 19:08:48 crc kubenswrapper[4861]: I0310 19:08:48.997352 4861 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/56879dcf-80eb-4c91-8331-b659e3317080-dns-svc\") on node \"crc\" DevicePath \"\"" Mar 10 19:08:48 crc kubenswrapper[4861]: I0310 19:08:48.998127 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5f317cf6-a4e5-42c3-bf6c-a0546f9671d1-config" (OuterVolumeSpecName: "config") pod "5f317cf6-a4e5-42c3-bf6c-a0546f9671d1" (UID: "5f317cf6-a4e5-42c3-bf6c-a0546f9671d1"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 10 19:08:49 crc kubenswrapper[4861]: I0310 19:08:49.002896 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5f317cf6-a4e5-42c3-bf6c-a0546f9671d1-kube-api-access-sxnr9" (OuterVolumeSpecName: "kube-api-access-sxnr9") pod "5f317cf6-a4e5-42c3-bf6c-a0546f9671d1" (UID: "5f317cf6-a4e5-42c3-bf6c-a0546f9671d1"). InnerVolumeSpecName "kube-api-access-sxnr9". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 10 19:08:49 crc kubenswrapper[4861]: I0310 19:08:49.006045 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/56879dcf-80eb-4c91-8331-b659e3317080-kube-api-access-5fjsm" (OuterVolumeSpecName: "kube-api-access-5fjsm") pod "56879dcf-80eb-4c91-8331-b659e3317080" (UID: "56879dcf-80eb-4c91-8331-b659e3317080"). InnerVolumeSpecName "kube-api-access-5fjsm". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 10 19:08:49 crc kubenswrapper[4861]: I0310 19:08:49.108856 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sxnr9\" (UniqueName: \"kubernetes.io/projected/5f317cf6-a4e5-42c3-bf6c-a0546f9671d1-kube-api-access-sxnr9\") on node \"crc\" DevicePath \"\"" Mar 10 19:08:49 crc kubenswrapper[4861]: I0310 19:08:49.108891 4861 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5f317cf6-a4e5-42c3-bf6c-a0546f9671d1-config\") on node \"crc\" DevicePath \"\"" Mar 10 19:08:49 crc kubenswrapper[4861]: I0310 19:08:49.108901 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5fjsm\" (UniqueName: \"kubernetes.io/projected/56879dcf-80eb-4c91-8331-b659e3317080-kube-api-access-5fjsm\") on node \"crc\" DevicePath \"\"" Mar 10 19:08:49 crc kubenswrapper[4861]: I0310 19:08:49.230698 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Mar 10 19:08:49 crc kubenswrapper[4861]: 
W0310 19:08:49.238663 4861 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod726cec08_5661_4b62_8a44_028b015119e4.slice/crio-8eaff57e28da9ce732afb2c632b2cda4683b7db5fc5eb375d5d59e8e4a189f40 WatchSource:0}: Error finding container 8eaff57e28da9ce732afb2c632b2cda4683b7db5fc5eb375d5d59e8e4a189f40: Status 404 returned error can't find the container with id 8eaff57e28da9ce732afb2c632b2cda4683b7db5fc5eb375d5d59e8e4a189f40 Mar 10 19:08:49 crc kubenswrapper[4861]: I0310 19:08:49.239452 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-zvlgw"] Mar 10 19:08:49 crc kubenswrapper[4861]: I0310 19:08:49.405879 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Mar 10 19:08:49 crc kubenswrapper[4861]: W0310 19:08:49.422018 4861 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4793319d_2e64_4fda_9df8_9a97ff264050.slice/crio-2d8aaad856b4488c5eb5d11201c50f387cf6749255371f7a400f16c06feddca3 WatchSource:0}: Error finding container 2d8aaad856b4488c5eb5d11201c50f387cf6749255371f7a400f16c06feddca3: Status 404 returned error can't find the container with id 2d8aaad856b4488c5eb5d11201c50f387cf6749255371f7a400f16c06feddca3 Mar 10 19:08:49 crc kubenswrapper[4861]: I0310 19:08:49.512231 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Mar 10 19:08:49 crc kubenswrapper[4861]: W0310 19:08:49.520591 4861 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod533092ca_8a4b_4005_909c_32736cde1a1e.slice/crio-9249c9f0d5bf5899851668ab241bf4ebb4bd98e751d012356261ed81f1b846b2 WatchSource:0}: Error finding container 9249c9f0d5bf5899851668ab241bf4ebb4bd98e751d012356261ed81f1b846b2: Status 404 returned error can't find the container with id 
9249c9f0d5bf5899851668ab241bf4ebb4bd98e751d012356261ed81f1b846b2 Mar 10 19:08:49 crc kubenswrapper[4861]: I0310 19:08:49.529038 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"a0b9c71c-6fe7-4eb9-9f95-87fce0f70caa","Type":"ContainerStarted","Data":"d57a1cf8c3616e6222bf4788727bf8702b35c72a689146bc27443a42321c57b5"} Mar 10 19:08:49 crc kubenswrapper[4861]: I0310 19:08:49.529762 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-589db6c89c-gz25g" event={"ID":"5f317cf6-a4e5-42c3-bf6c-a0546f9671d1","Type":"ContainerDied","Data":"10546adf6b90035b89346660602c57f64096ea964a86d98a1fdfcb2b750eb340"} Mar 10 19:08:49 crc kubenswrapper[4861]: I0310 19:08:49.529808 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Mar 10 19:08:49 crc kubenswrapper[4861]: I0310 19:08:49.529870 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-589db6c89c-gz25g" Mar 10 19:08:49 crc kubenswrapper[4861]: I0310 19:08:49.530958 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"2683e959-ecff-478e-aa0a-acf18f482d39","Type":"ContainerStarted","Data":"0ac2105ab841fa2f06009a7ad42a27505a728c3e994a1f87d8941b356bea3825"} Mar 10 19:08:49 crc kubenswrapper[4861]: W0310 19:08:49.532129 4861 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod41fdd1f2_c0e5_4dbe_aa18_bd6dd1a4754d.slice/crio-2ee5d55f504e4dc95112ec49dcf7ce204b8fe01a76fea22f469ef08561767e35 WatchSource:0}: Error finding container 2ee5d55f504e4dc95112ec49dcf7ce204b8fe01a76fea22f469ef08561767e35: Status 404 returned error can't find the container with id 2ee5d55f504e4dc95112ec49dcf7ce204b8fe01a76fea22f469ef08561767e35 Mar 10 19:08:49 crc kubenswrapper[4861]: I0310 19:08:49.534262 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-86bbd886cf-j8cqt" Mar 10 19:08:49 crc kubenswrapper[4861]: I0310 19:08:49.534278 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86bbd886cf-j8cqt" event={"ID":"56879dcf-80eb-4c91-8331-b659e3317080","Type":"ContainerDied","Data":"05571df1b2bba304066be1db853e7d29ca3588186da8e05e326f7aec2ca732a3"} Mar 10 19:08:49 crc kubenswrapper[4861]: I0310 19:08:49.536069 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-zvlgw" event={"ID":"726cec08-5661-4b62-8a44-028b015119e4","Type":"ContainerStarted","Data":"8eaff57e28da9ce732afb2c632b2cda4683b7db5fc5eb375d5d59e8e4a189f40"} Mar 10 19:08:49 crc kubenswrapper[4861]: I0310 19:08:49.543417 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"4793319d-2e64-4fda-9df8-9a97ff264050","Type":"ContainerStarted","Data":"2d8aaad856b4488c5eb5d11201c50f387cf6749255371f7a400f16c06feddca3"} Mar 10 19:08:49 crc kubenswrapper[4861]: I0310 19:08:49.544472 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"533092ca-8a4b-4005-909c-32736cde1a1e","Type":"ContainerStarted","Data":"9249c9f0d5bf5899851668ab241bf4ebb4bd98e751d012356261ed81f1b846b2"} Mar 10 19:08:49 crc kubenswrapper[4861]: E0310 19:08:49.565209 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-rabbitmq@sha256:9e7397d61095b02a8c1deb24bca874bc0032aa18019f12d53e0eda8998b85447\\\"\"" pod="openstack/rabbitmq-cell1-server-0" podUID="9fa4a97d-682a-40eb-93e0-5f5167ddb0a0" Mar 10 19:08:49 crc kubenswrapper[4861]: E0310 19:08:49.566086 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/podified-antelope-centos9/openstack-rabbitmq@sha256:9e7397d61095b02a8c1deb24bca874bc0032aa18019f12d53e0eda8998b85447\\\"\"" pod="openstack/rabbitmq-server-0" podUID="0ba95f55-3cea-4f0b-8f09-c6b4027789f8" Mar 10 19:08:49 crc kubenswrapper[4861]: I0310 19:08:49.566147 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-589db6c89c-gz25g"] Mar 10 19:08:49 crc kubenswrapper[4861]: I0310 19:08:49.575266 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-589db6c89c-gz25g"] Mar 10 19:08:49 crc kubenswrapper[4861]: I0310 19:08:49.588144 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-86bbd886cf-j8cqt"] Mar 10 19:08:49 crc kubenswrapper[4861]: I0310 19:08:49.593889 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-86bbd886cf-j8cqt"] Mar 10 19:08:50 crc kubenswrapper[4861]: I0310 19:08:50.247601 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-cw7x8"] Mar 10 19:08:50 crc kubenswrapper[4861]: I0310 19:08:50.379612 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Mar 10 19:08:50 crc kubenswrapper[4861]: I0310 19:08:50.554659 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"41fdd1f2-c0e5-4dbe-aa18-bd6dd1a4754d","Type":"ContainerStarted","Data":"2ee5d55f504e4dc95112ec49dcf7ce204b8fe01a76fea22f469ef08561767e35"} Mar 10 19:08:50 crc kubenswrapper[4861]: I0310 19:08:50.556083 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-cw7x8" event={"ID":"2f72ec66-0d64-4a5f-b1c6-17d62a735065","Type":"ContainerStarted","Data":"a2ee8caa35388a3dd3f25affde057556c4b2eaa817152c08ef7ee1c73dd4be03"} Mar 10 19:08:50 crc kubenswrapper[4861]: I0310 19:08:50.557158 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" 
event={"ID":"bb836e5b-f1a1-4d7a-8de0-03cddd650c4a","Type":"ContainerStarted","Data":"5e4a26d7e404af97cbdcaf5e1dc8f3742995e948494f6e7689506efa72981cfc"} Mar 10 19:08:50 crc kubenswrapper[4861]: I0310 19:08:50.970195 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="56879dcf-80eb-4c91-8331-b659e3317080" path="/var/lib/kubelet/pods/56879dcf-80eb-4c91-8331-b659e3317080/volumes" Mar 10 19:08:50 crc kubenswrapper[4861]: I0310 19:08:50.970834 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5f317cf6-a4e5-42c3-bf6c-a0546f9671d1" path="/var/lib/kubelet/pods/5f317cf6-a4e5-42c3-bf6c-a0546f9671d1/volumes" Mar 10 19:08:56 crc kubenswrapper[4861]: I0310 19:08:56.610309 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"4793319d-2e64-4fda-9df8-9a97ff264050","Type":"ContainerStarted","Data":"9b6a3cf511e38106b8f47160427dc36b1287d8d3fd9359ffd37bdde81a64a865"} Mar 10 19:08:56 crc kubenswrapper[4861]: I0310 19:08:56.610766 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Mar 10 19:08:56 crc kubenswrapper[4861]: I0310 19:08:56.611598 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"533092ca-8a4b-4005-909c-32736cde1a1e","Type":"ContainerStarted","Data":"699e5be84dcd8a39bd7ffd4f5ee267f814286d3ca759b64a5eb0a24bee7e3fdd"} Mar 10 19:08:56 crc kubenswrapper[4861]: I0310 19:08:56.613886 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"a0b9c71c-6fe7-4eb9-9f95-87fce0f70caa","Type":"ContainerStarted","Data":"e73a58f4d035911e32cee3d7d02558931bf510fafad2ff8b6f525e7198721cb1"} Mar 10 19:08:56 crc kubenswrapper[4861]: I0310 19:08:56.615926 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" 
event={"ID":"2683e959-ecff-478e-aa0a-acf18f482d39","Type":"ContainerStarted","Data":"4e50a9575b6dbc384ba88128df76db7281b44b74dee1550f25daab4f4ad79abc"} Mar 10 19:08:56 crc kubenswrapper[4861]: I0310 19:08:56.617173 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-zvlgw" event={"ID":"726cec08-5661-4b62-8a44-028b015119e4","Type":"ContainerStarted","Data":"0a5e5aa6cf14c251a6b1ce301e42515d655ffa02b6bf032ecb113664146e1930"} Mar 10 19:08:56 crc kubenswrapper[4861]: I0310 19:08:56.617270 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-zvlgw" Mar 10 19:08:56 crc kubenswrapper[4861]: I0310 19:08:56.618370 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-cw7x8" event={"ID":"2f72ec66-0d64-4a5f-b1c6-17d62a735065","Type":"ContainerStarted","Data":"594bf60bf37e70def6a72f43c087341154de5553af17dbc6db5e3efddc38f18f"} Mar 10 19:08:56 crc kubenswrapper[4861]: I0310 19:08:56.619640 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"bb836e5b-f1a1-4d7a-8de0-03cddd650c4a","Type":"ContainerStarted","Data":"d0df70b068b7e2c89545406ff08c2cfd24ffaf8b8337d1abdd32af448ab278bc"} Mar 10 19:08:56 crc kubenswrapper[4861]: I0310 19:08:56.621097 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"41fdd1f2-c0e5-4dbe-aa18-bd6dd1a4754d","Type":"ContainerStarted","Data":"e208585919e167041ae926b5358edd36d959771ed35af5c54fe2d53fe8efa41a"} Mar 10 19:08:56 crc kubenswrapper[4861]: I0310 19:08:56.621261 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0" Mar 10 19:08:56 crc kubenswrapper[4861]: I0310 19:08:56.634285 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=15.905755098 podStartE2EDuration="22.634266654s" podCreationTimestamp="2026-03-10 19:08:34 +0000 UTC" 
firstStartedPulling="2026-03-10 19:08:49.424358446 +0000 UTC m=+1273.187794406" lastFinishedPulling="2026-03-10 19:08:56.152869962 +0000 UTC m=+1279.916305962" observedRunningTime="2026-03-10 19:08:56.628580936 +0000 UTC m=+1280.392016916" watchObservedRunningTime="2026-03-10 19:08:56.634266654 +0000 UTC m=+1280.397702614" Mar 10 19:08:56 crc kubenswrapper[4861]: I0310 19:08:56.650806 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=20.093632818 podStartE2EDuration="25.650789073s" podCreationTimestamp="2026-03-10 19:08:31 +0000 UTC" firstStartedPulling="2026-03-10 19:08:49.533387937 +0000 UTC m=+1273.296823897" lastFinishedPulling="2026-03-10 19:08:55.090544192 +0000 UTC m=+1278.853980152" observedRunningTime="2026-03-10 19:08:56.64600656 +0000 UTC m=+1280.409442530" watchObservedRunningTime="2026-03-10 19:08:56.650789073 +0000 UTC m=+1280.414225023" Mar 10 19:08:56 crc kubenswrapper[4861]: I0310 19:08:56.690820 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-zvlgw" podStartSLOduration=14.448014762 podStartE2EDuration="20.690801835s" podCreationTimestamp="2026-03-10 19:08:36 +0000 UTC" firstStartedPulling="2026-03-10 19:08:49.242686577 +0000 UTC m=+1273.006122537" lastFinishedPulling="2026-03-10 19:08:55.48547365 +0000 UTC m=+1279.248909610" observedRunningTime="2026-03-10 19:08:56.689913231 +0000 UTC m=+1280.453349191" watchObservedRunningTime="2026-03-10 19:08:56.690801835 +0000 UTC m=+1280.454237795" Mar 10 19:08:57 crc kubenswrapper[4861]: I0310 19:08:57.628086 4861 generic.go:334] "Generic (PLEG): container finished" podID="2f72ec66-0d64-4a5f-b1c6-17d62a735065" containerID="594bf60bf37e70def6a72f43c087341154de5553af17dbc6db5e3efddc38f18f" exitCode=0 Mar 10 19:08:57 crc kubenswrapper[4861]: I0310 19:08:57.629339 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-cw7x8" 
event={"ID":"2f72ec66-0d64-4a5f-b1c6-17d62a735065","Type":"ContainerDied","Data":"594bf60bf37e70def6a72f43c087341154de5553af17dbc6db5e3efddc38f18f"} Mar 10 19:08:58 crc kubenswrapper[4861]: I0310 19:08:58.637828 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-cw7x8" event={"ID":"2f72ec66-0d64-4a5f-b1c6-17d62a735065","Type":"ContainerStarted","Data":"ab5fdb8f04b86479e7de65efd45e1b3abeeeef9346b04a04ab27c54b7f1b81fc"} Mar 10 19:08:59 crc kubenswrapper[4861]: I0310 19:08:59.650871 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"bb836e5b-f1a1-4d7a-8de0-03cddd650c4a","Type":"ContainerStarted","Data":"fe5ee7032ce5d91ee136401e0e5a8ee13f7696e2e7c6cc9ccb1c1348c985d7d0"} Mar 10 19:08:59 crc kubenswrapper[4861]: I0310 19:08:59.653669 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"533092ca-8a4b-4005-909c-32736cde1a1e","Type":"ContainerStarted","Data":"563ee7a1f5b83d7d42165f459dccecfc60f2f551d3f0e59ace85417509a8ec48"} Mar 10 19:08:59 crc kubenswrapper[4861]: I0310 19:08:59.656734 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-cw7x8" Mar 10 19:08:59 crc kubenswrapper[4861]: I0310 19:08:59.656765 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-cw7x8" Mar 10 19:08:59 crc kubenswrapper[4861]: I0310 19:08:59.676362 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=11.857812792 podStartE2EDuration="20.676348096s" podCreationTimestamp="2026-03-10 19:08:39 +0000 UTC" firstStartedPulling="2026-03-10 19:08:50.538744393 +0000 UTC m=+1274.302180353" lastFinishedPulling="2026-03-10 19:08:59.357279667 +0000 UTC m=+1283.120715657" observedRunningTime="2026-03-10 19:08:59.674136384 +0000 UTC m=+1283.437572374" watchObservedRunningTime="2026-03-10 19:08:59.676348096 
+0000 UTC m=+1283.439784066" Mar 10 19:08:59 crc kubenswrapper[4861]: I0310 19:08:59.705814 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ovs-cw7x8" podStartSLOduration=18.177542143 podStartE2EDuration="23.705793504s" podCreationTimestamp="2026-03-10 19:08:36 +0000 UTC" firstStartedPulling="2026-03-10 19:08:50.537716225 +0000 UTC m=+1274.301152185" lastFinishedPulling="2026-03-10 19:08:56.065967566 +0000 UTC m=+1279.829403546" observedRunningTime="2026-03-10 19:08:59.699433378 +0000 UTC m=+1283.462869358" watchObservedRunningTime="2026-03-10 19:08:59.705793504 +0000 UTC m=+1283.469229474" Mar 10 19:08:59 crc kubenswrapper[4861]: I0310 19:08:59.728609 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=13.915787057 podStartE2EDuration="23.728590479s" podCreationTimestamp="2026-03-10 19:08:36 +0000 UTC" firstStartedPulling="2026-03-10 19:08:49.52666576 +0000 UTC m=+1273.290101730" lastFinishedPulling="2026-03-10 19:08:59.339469172 +0000 UTC m=+1283.102905152" observedRunningTime="2026-03-10 19:08:59.725626036 +0000 UTC m=+1283.489062026" watchObservedRunningTime="2026-03-10 19:08:59.728590479 +0000 UTC m=+1283.492026449" Mar 10 19:09:00 crc kubenswrapper[4861]: I0310 19:09:00.669013 4861 generic.go:334] "Generic (PLEG): container finished" podID="a0b9c71c-6fe7-4eb9-9f95-87fce0f70caa" containerID="e73a58f4d035911e32cee3d7d02558931bf510fafad2ff8b6f525e7198721cb1" exitCode=0 Mar 10 19:09:00 crc kubenswrapper[4861]: I0310 19:09:00.669366 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"a0b9c71c-6fe7-4eb9-9f95-87fce0f70caa","Type":"ContainerDied","Data":"e73a58f4d035911e32cee3d7d02558931bf510fafad2ff8b6f525e7198721cb1"} Mar 10 19:09:00 crc kubenswrapper[4861]: I0310 19:09:00.672994 4861 generic.go:334] "Generic (PLEG): container finished" podID="2683e959-ecff-478e-aa0a-acf18f482d39" 
containerID="4e50a9575b6dbc384ba88128df76db7281b44b74dee1550f25daab4f4ad79abc" exitCode=0 Mar 10 19:09:00 crc kubenswrapper[4861]: I0310 19:09:00.673045 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"2683e959-ecff-478e-aa0a-acf18f482d39","Type":"ContainerDied","Data":"4e50a9575b6dbc384ba88128df76db7281b44b74dee1550f25daab4f4ad79abc"} Mar 10 19:09:00 crc kubenswrapper[4861]: I0310 19:09:00.691731 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-cw7x8" event={"ID":"2f72ec66-0d64-4a5f-b1c6-17d62a735065","Type":"ContainerStarted","Data":"3b23a95a1a463eb82f2475fdcea1a6070b6a64b1c3a03f9d3000d377b6e7bebb"} Mar 10 19:09:00 crc kubenswrapper[4861]: I0310 19:09:00.920659 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0" Mar 10 19:09:01 crc kubenswrapper[4861]: I0310 19:09:01.703141 4861 generic.go:334] "Generic (PLEG): container finished" podID="6a51c99a-f2a2-4db2-8ac4-adf514cad8a3" containerID="da972686c489a400bdd60917f7e987ae7c70062d472054aa6bd5ec1b2be8a101" exitCode=0 Mar 10 19:09:01 crc kubenswrapper[4861]: I0310 19:09:01.703231 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78cb4465c9-psclv" event={"ID":"6a51c99a-f2a2-4db2-8ac4-adf514cad8a3","Type":"ContainerDied","Data":"da972686c489a400bdd60917f7e987ae7c70062d472054aa6bd5ec1b2be8a101"} Mar 10 19:09:01 crc kubenswrapper[4861]: I0310 19:09:01.706774 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"a0b9c71c-6fe7-4eb9-9f95-87fce0f70caa","Type":"ContainerStarted","Data":"c6f99dc7de3f814aa99b28dc2642760e573b793380b5e139e7ad48b602edda7e"} Mar 10 19:09:01 crc kubenswrapper[4861]: I0310 19:09:01.709231 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" 
event={"ID":"2683e959-ecff-478e-aa0a-acf18f482d39","Type":"ContainerStarted","Data":"2a8c0c46e768dafa2c342d24605ecb36c119cc70e7586a4c3f59339b29b329fe"} Mar 10 19:09:01 crc kubenswrapper[4861]: I0310 19:09:01.728077 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Mar 10 19:09:01 crc kubenswrapper[4861]: I0310 19:09:01.728182 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0" Mar 10 19:09:01 crc kubenswrapper[4861]: I0310 19:09:01.765367 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=26.106401363 podStartE2EDuration="32.765342455s" podCreationTimestamp="2026-03-10 19:08:29 +0000 UTC" firstStartedPulling="2026-03-10 19:08:48.7305413 +0000 UTC m=+1272.493977300" lastFinishedPulling="2026-03-10 19:08:55.389482422 +0000 UTC m=+1279.152918392" observedRunningTime="2026-03-10 19:09:01.754583066 +0000 UTC m=+1285.518019066" watchObservedRunningTime="2026-03-10 19:09:01.765342455 +0000 UTC m=+1285.528778445" Mar 10 19:09:01 crc kubenswrapper[4861]: I0310 19:09:01.779405 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=25.430259777 podStartE2EDuration="31.779379456s" podCreationTimestamp="2026-03-10 19:08:30 +0000 UTC" firstStartedPulling="2026-03-10 19:08:49.237847012 +0000 UTC m=+1273.001282982" lastFinishedPulling="2026-03-10 19:08:55.586966701 +0000 UTC m=+1279.350402661" observedRunningTime="2026-03-10 19:09:01.776689561 +0000 UTC m=+1285.540125561" watchObservedRunningTime="2026-03-10 19:09:01.779379456 +0000 UTC m=+1285.542815426" Mar 10 19:09:01 crc kubenswrapper[4861]: I0310 19:09:01.923356 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0" Mar 10 19:09:01 crc kubenswrapper[4861]: I0310 19:09:01.972032 4861 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0" Mar 10 19:09:02 crc kubenswrapper[4861]: I0310 19:09:02.005001 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0" Mar 10 19:09:02 crc kubenswrapper[4861]: I0310 19:09:02.269084 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0" Mar 10 19:09:02 crc kubenswrapper[4861]: I0310 19:09:02.315218 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0" Mar 10 19:09:02 crc kubenswrapper[4861]: I0310 19:09:02.719952 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78cb4465c9-psclv" event={"ID":"6a51c99a-f2a2-4db2-8ac4-adf514cad8a3","Type":"ContainerStarted","Data":"092c35e59e4e9b4ab57e38a9d2a27794808d5eb36665230282f1036b357e7f67"} Mar 10 19:09:02 crc kubenswrapper[4861]: I0310 19:09:02.720192 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-78cb4465c9-psclv" Mar 10 19:09:02 crc kubenswrapper[4861]: I0310 19:09:02.722442 4861 generic.go:334] "Generic (PLEG): container finished" podID="5adbfde9-b062-424b-8ab9-641e35ace118" containerID="3d6970b81612809ffbfdf841eee0ee6beb2fde50772d889e9f0cad3832397f8a" exitCode=0 Mar 10 19:09:02 crc kubenswrapper[4861]: I0310 19:09:02.722504 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7c47bcb9f9-w2smt" event={"ID":"5adbfde9-b062-424b-8ab9-641e35ace118","Type":"ContainerDied","Data":"3d6970b81612809ffbfdf841eee0ee6beb2fde50772d889e9f0cad3832397f8a"} Mar 10 19:09:02 crc kubenswrapper[4861]: I0310 19:09:02.723334 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0" Mar 10 19:09:02 crc kubenswrapper[4861]: I0310 19:09:02.757683 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/dnsmasq-dns-78cb4465c9-psclv" podStartSLOduration=3.408551153 podStartE2EDuration="35.757661359s" podCreationTimestamp="2026-03-10 19:08:27 +0000 UTC" firstStartedPulling="2026-03-10 19:08:28.106650926 +0000 UTC m=+1251.870086896" lastFinishedPulling="2026-03-10 19:09:00.455761112 +0000 UTC m=+1284.219197102" observedRunningTime="2026-03-10 19:09:02.752955758 +0000 UTC m=+1286.516391758" watchObservedRunningTime="2026-03-10 19:09:02.757661359 +0000 UTC m=+1286.521097349" Mar 10 19:09:02 crc kubenswrapper[4861]: I0310 19:09:02.792310 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0" Mar 10 19:09:02 crc kubenswrapper[4861]: I0310 19:09:02.795523 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0" Mar 10 19:09:02 crc kubenswrapper[4861]: I0310 19:09:02.972305 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78cb4465c9-psclv"] Mar 10 19:09:02 crc kubenswrapper[4861]: I0310 19:09:02.990030 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6444958b7f-8hgpw"] Mar 10 19:09:02 crc kubenswrapper[4861]: I0310 19:09:02.991260 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6444958b7f-8hgpw" Mar 10 19:09:02 crc kubenswrapper[4861]: I0310 19:09:02.992813 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb" Mar 10 19:09:03 crc kubenswrapper[4861]: I0310 19:09:03.044885 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6444958b7f-8hgpw"] Mar 10 19:09:03 crc kubenswrapper[4861]: I0310 19:09:03.049933 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-metrics-8h5gk"] Mar 10 19:09:03 crc kubenswrapper[4861]: I0310 19:09:03.050906 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-metrics-8h5gk" Mar 10 19:09:03 crc kubenswrapper[4861]: I0310 19:09:03.054696 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/da02c50a-02b8-4363-9e5b-8a8c7c3c9b83-ovsdbserver-nb\") pod \"dnsmasq-dns-6444958b7f-8hgpw\" (UID: \"da02c50a-02b8-4363-9e5b-8a8c7c3c9b83\") " pod="openstack/dnsmasq-dns-6444958b7f-8hgpw" Mar 10 19:09:03 crc kubenswrapper[4861]: I0310 19:09:03.054803 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cqxtp\" (UniqueName: \"kubernetes.io/projected/da02c50a-02b8-4363-9e5b-8a8c7c3c9b83-kube-api-access-cqxtp\") pod \"dnsmasq-dns-6444958b7f-8hgpw\" (UID: \"da02c50a-02b8-4363-9e5b-8a8c7c3c9b83\") " pod="openstack/dnsmasq-dns-6444958b7f-8hgpw" Mar 10 19:09:03 crc kubenswrapper[4861]: I0310 19:09:03.054922 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/da02c50a-02b8-4363-9e5b-8a8c7c3c9b83-config\") pod \"dnsmasq-dns-6444958b7f-8hgpw\" (UID: \"da02c50a-02b8-4363-9e5b-8a8c7c3c9b83\") " pod="openstack/dnsmasq-dns-6444958b7f-8hgpw" Mar 10 19:09:03 crc kubenswrapper[4861]: I0310 19:09:03.054959 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/da02c50a-02b8-4363-9e5b-8a8c7c3c9b83-dns-svc\") pod \"dnsmasq-dns-6444958b7f-8hgpw\" (UID: \"da02c50a-02b8-4363-9e5b-8a8c7c3c9b83\") " pod="openstack/dnsmasq-dns-6444958b7f-8hgpw" Mar 10 19:09:03 crc kubenswrapper[4861]: I0310 19:09:03.061036 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config" Mar 10 19:09:03 crc kubenswrapper[4861]: I0310 19:09:03.068606 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/ovn-controller-metrics-8h5gk"] Mar 10 19:09:03 crc kubenswrapper[4861]: I0310 19:09:03.134936 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7c47bcb9f9-w2smt"] Mar 10 19:09:03 crc kubenswrapper[4861]: I0310 19:09:03.154938 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7b57d9888c-jd4f8"] Mar 10 19:09:03 crc kubenswrapper[4861]: I0310 19:09:03.156335 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7b57d9888c-jd4f8" Mar 10 19:09:03 crc kubenswrapper[4861]: I0310 19:09:03.156582 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d0c73a13-57f7-43aa-8e0a-ba36a3195653-combined-ca-bundle\") pod \"ovn-controller-metrics-8h5gk\" (UID: \"d0c73a13-57f7-43aa-8e0a-ba36a3195653\") " pod="openstack/ovn-controller-metrics-8h5gk" Mar 10 19:09:03 crc kubenswrapper[4861]: I0310 19:09:03.156634 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/d0c73a13-57f7-43aa-8e0a-ba36a3195653-ovn-rundir\") pod \"ovn-controller-metrics-8h5gk\" (UID: \"d0c73a13-57f7-43aa-8e0a-ba36a3195653\") " pod="openstack/ovn-controller-metrics-8h5gk" Mar 10 19:09:03 crc kubenswrapper[4861]: I0310 19:09:03.156672 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/da02c50a-02b8-4363-9e5b-8a8c7c3c9b83-config\") pod \"dnsmasq-dns-6444958b7f-8hgpw\" (UID: \"da02c50a-02b8-4363-9e5b-8a8c7c3c9b83\") " pod="openstack/dnsmasq-dns-6444958b7f-8hgpw" Mar 10 19:09:03 crc kubenswrapper[4861]: I0310 19:09:03.156691 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/da02c50a-02b8-4363-9e5b-8a8c7c3c9b83-dns-svc\") pod 
\"dnsmasq-dns-6444958b7f-8hgpw\" (UID: \"da02c50a-02b8-4363-9e5b-8a8c7c3c9b83\") " pod="openstack/dnsmasq-dns-6444958b7f-8hgpw" Mar 10 19:09:03 crc kubenswrapper[4861]: I0310 19:09:03.156731 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d0c73a13-57f7-43aa-8e0a-ba36a3195653-config\") pod \"ovn-controller-metrics-8h5gk\" (UID: \"d0c73a13-57f7-43aa-8e0a-ba36a3195653\") " pod="openstack/ovn-controller-metrics-8h5gk" Mar 10 19:09:03 crc kubenswrapper[4861]: I0310 19:09:03.156754 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/d0c73a13-57f7-43aa-8e0a-ba36a3195653-ovs-rundir\") pod \"ovn-controller-metrics-8h5gk\" (UID: \"d0c73a13-57f7-43aa-8e0a-ba36a3195653\") " pod="openstack/ovn-controller-metrics-8h5gk" Mar 10 19:09:03 crc kubenswrapper[4861]: I0310 19:09:03.156784 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/d0c73a13-57f7-43aa-8e0a-ba36a3195653-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-8h5gk\" (UID: \"d0c73a13-57f7-43aa-8e0a-ba36a3195653\") " pod="openstack/ovn-controller-metrics-8h5gk" Mar 10 19:09:03 crc kubenswrapper[4861]: I0310 19:09:03.156805 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hz75m\" (UniqueName: \"kubernetes.io/projected/d0c73a13-57f7-43aa-8e0a-ba36a3195653-kube-api-access-hz75m\") pod \"ovn-controller-metrics-8h5gk\" (UID: \"d0c73a13-57f7-43aa-8e0a-ba36a3195653\") " pod="openstack/ovn-controller-metrics-8h5gk" Mar 10 19:09:03 crc kubenswrapper[4861]: I0310 19:09:03.156829 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/da02c50a-02b8-4363-9e5b-8a8c7c3c9b83-ovsdbserver-nb\") pod \"dnsmasq-dns-6444958b7f-8hgpw\" (UID: \"da02c50a-02b8-4363-9e5b-8a8c7c3c9b83\") " pod="openstack/dnsmasq-dns-6444958b7f-8hgpw" Mar 10 19:09:03 crc kubenswrapper[4861]: I0310 19:09:03.156856 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqxtp\" (UniqueName: \"kubernetes.io/projected/da02c50a-02b8-4363-9e5b-8a8c7c3c9b83-kube-api-access-cqxtp\") pod \"dnsmasq-dns-6444958b7f-8hgpw\" (UID: \"da02c50a-02b8-4363-9e5b-8a8c7c3c9b83\") " pod="openstack/dnsmasq-dns-6444958b7f-8hgpw" Mar 10 19:09:03 crc kubenswrapper[4861]: I0310 19:09:03.157822 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/da02c50a-02b8-4363-9e5b-8a8c7c3c9b83-config\") pod \"dnsmasq-dns-6444958b7f-8hgpw\" (UID: \"da02c50a-02b8-4363-9e5b-8a8c7c3c9b83\") " pod="openstack/dnsmasq-dns-6444958b7f-8hgpw" Mar 10 19:09:03 crc kubenswrapper[4861]: I0310 19:09:03.157954 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/da02c50a-02b8-4363-9e5b-8a8c7c3c9b83-ovsdbserver-nb\") pod \"dnsmasq-dns-6444958b7f-8hgpw\" (UID: \"da02c50a-02b8-4363-9e5b-8a8c7c3c9b83\") " pod="openstack/dnsmasq-dns-6444958b7f-8hgpw" Mar 10 19:09:03 crc kubenswrapper[4861]: I0310 19:09:03.158163 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/da02c50a-02b8-4363-9e5b-8a8c7c3c9b83-dns-svc\") pod \"dnsmasq-dns-6444958b7f-8hgpw\" (UID: \"da02c50a-02b8-4363-9e5b-8a8c7c3c9b83\") " pod="openstack/dnsmasq-dns-6444958b7f-8hgpw" Mar 10 19:09:03 crc kubenswrapper[4861]: I0310 19:09:03.163236 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb" Mar 10 19:09:03 crc kubenswrapper[4861]: I0310 19:09:03.185049 4861 kubelet.go:2428] "SyncLoop UPDATE" 
source="api" pods=["openstack/dnsmasq-dns-7b57d9888c-jd4f8"] Mar 10 19:09:03 crc kubenswrapper[4861]: I0310 19:09:03.191565 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqxtp\" (UniqueName: \"kubernetes.io/projected/da02c50a-02b8-4363-9e5b-8a8c7c3c9b83-kube-api-access-cqxtp\") pod \"dnsmasq-dns-6444958b7f-8hgpw\" (UID: \"da02c50a-02b8-4363-9e5b-8a8c7c3c9b83\") " pod="openstack/dnsmasq-dns-6444958b7f-8hgpw" Mar 10 19:09:03 crc kubenswrapper[4861]: I0310 19:09:03.218227 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"] Mar 10 19:09:03 crc kubenswrapper[4861]: I0310 19:09:03.219650 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Mar 10 19:09:03 crc kubenswrapper[4861]: I0310 19:09:03.221625 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-5t49z" Mar 10 19:09:03 crc kubenswrapper[4861]: I0310 19:09:03.224733 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts" Mar 10 19:09:03 crc kubenswrapper[4861]: I0310 19:09:03.224738 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Mar 10 19:09:03 crc kubenswrapper[4861]: I0310 19:09:03.224941 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config" Mar 10 19:09:03 crc kubenswrapper[4861]: I0310 19:09:03.227182 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs" Mar 10 19:09:03 crc kubenswrapper[4861]: I0310 19:09:03.258325 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d0c73a13-57f7-43aa-8e0a-ba36a3195653-config\") pod \"ovn-controller-metrics-8h5gk\" (UID: \"d0c73a13-57f7-43aa-8e0a-ba36a3195653\") " pod="openstack/ovn-controller-metrics-8h5gk" Mar 10 19:09:03 crc 
kubenswrapper[4861]: I0310 19:09:03.258366 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/8c1ba054-6941-4e52-b792-250287f25d92-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"8c1ba054-6941-4e52-b792-250287f25d92\") " pod="openstack/ovn-northd-0" Mar 10 19:09:03 crc kubenswrapper[4861]: I0310 19:09:03.258387 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/d0c73a13-57f7-43aa-8e0a-ba36a3195653-ovs-rundir\") pod \"ovn-controller-metrics-8h5gk\" (UID: \"d0c73a13-57f7-43aa-8e0a-ba36a3195653\") " pod="openstack/ovn-controller-metrics-8h5gk" Mar 10 19:09:03 crc kubenswrapper[4861]: I0310 19:09:03.258406 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8c1ba054-6941-4e52-b792-250287f25d92-scripts\") pod \"ovn-northd-0\" (UID: \"8c1ba054-6941-4e52-b792-250287f25d92\") " pod="openstack/ovn-northd-0" Mar 10 19:09:03 crc kubenswrapper[4861]: I0310 19:09:03.258424 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b33ea429-fdf5-482e-99b6-b4f7d0103988-config\") pod \"dnsmasq-dns-7b57d9888c-jd4f8\" (UID: \"b33ea429-fdf5-482e-99b6-b4f7d0103988\") " pod="openstack/dnsmasq-dns-7b57d9888c-jd4f8" Mar 10 19:09:03 crc kubenswrapper[4861]: I0310 19:09:03.258448 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/d0c73a13-57f7-43aa-8e0a-ba36a3195653-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-8h5gk\" (UID: \"d0c73a13-57f7-43aa-8e0a-ba36a3195653\") " pod="openstack/ovn-controller-metrics-8h5gk" Mar 10 19:09:03 crc kubenswrapper[4861]: I0310 19:09:03.258463 4861 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8c1ba054-6941-4e52-b792-250287f25d92-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"8c1ba054-6941-4e52-b792-250287f25d92\") " pod="openstack/ovn-northd-0" Mar 10 19:09:03 crc kubenswrapper[4861]: I0310 19:09:03.258480 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hz75m\" (UniqueName: \"kubernetes.io/projected/d0c73a13-57f7-43aa-8e0a-ba36a3195653-kube-api-access-hz75m\") pod \"ovn-controller-metrics-8h5gk\" (UID: \"d0c73a13-57f7-43aa-8e0a-ba36a3195653\") " pod="openstack/ovn-controller-metrics-8h5gk" Mar 10 19:09:03 crc kubenswrapper[4861]: I0310 19:09:03.258497 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b33ea429-fdf5-482e-99b6-b4f7d0103988-dns-svc\") pod \"dnsmasq-dns-7b57d9888c-jd4f8\" (UID: \"b33ea429-fdf5-482e-99b6-b4f7d0103988\") " pod="openstack/dnsmasq-dns-7b57d9888c-jd4f8" Mar 10 19:09:03 crc kubenswrapper[4861]: I0310 19:09:03.258513 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b33ea429-fdf5-482e-99b6-b4f7d0103988-ovsdbserver-nb\") pod \"dnsmasq-dns-7b57d9888c-jd4f8\" (UID: \"b33ea429-fdf5-482e-99b6-b4f7d0103988\") " pod="openstack/dnsmasq-dns-7b57d9888c-jd4f8" Mar 10 19:09:03 crc kubenswrapper[4861]: I0310 19:09:03.258545 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/8c1ba054-6941-4e52-b792-250287f25d92-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"8c1ba054-6941-4e52-b792-250287f25d92\") " pod="openstack/ovn-northd-0" Mar 10 19:09:03 crc kubenswrapper[4861]: I0310 19:09:03.258587 4861 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b33ea429-fdf5-482e-99b6-b4f7d0103988-ovsdbserver-sb\") pod \"dnsmasq-dns-7b57d9888c-jd4f8\" (UID: \"b33ea429-fdf5-482e-99b6-b4f7d0103988\") " pod="openstack/dnsmasq-dns-7b57d9888c-jd4f8" Mar 10 19:09:03 crc kubenswrapper[4861]: I0310 19:09:03.258608 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-75gwj\" (UniqueName: \"kubernetes.io/projected/8c1ba054-6941-4e52-b792-250287f25d92-kube-api-access-75gwj\") pod \"ovn-northd-0\" (UID: \"8c1ba054-6941-4e52-b792-250287f25d92\") " pod="openstack/ovn-northd-0" Mar 10 19:09:03 crc kubenswrapper[4861]: I0310 19:09:03.258629 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8c1ba054-6941-4e52-b792-250287f25d92-config\") pod \"ovn-northd-0\" (UID: \"8c1ba054-6941-4e52-b792-250287f25d92\") " pod="openstack/ovn-northd-0" Mar 10 19:09:03 crc kubenswrapper[4861]: I0310 19:09:03.258645 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6w7nl\" (UniqueName: \"kubernetes.io/projected/b33ea429-fdf5-482e-99b6-b4f7d0103988-kube-api-access-6w7nl\") pod \"dnsmasq-dns-7b57d9888c-jd4f8\" (UID: \"b33ea429-fdf5-482e-99b6-b4f7d0103988\") " pod="openstack/dnsmasq-dns-7b57d9888c-jd4f8" Mar 10 19:09:03 crc kubenswrapper[4861]: I0310 19:09:03.258667 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/8c1ba054-6941-4e52-b792-250287f25d92-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"8c1ba054-6941-4e52-b792-250287f25d92\") " pod="openstack/ovn-northd-0" Mar 10 19:09:03 crc kubenswrapper[4861]: I0310 19:09:03.258691 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d0c73a13-57f7-43aa-8e0a-ba36a3195653-combined-ca-bundle\") pod \"ovn-controller-metrics-8h5gk\" (UID: \"d0c73a13-57f7-43aa-8e0a-ba36a3195653\") " pod="openstack/ovn-controller-metrics-8h5gk" Mar 10 19:09:03 crc kubenswrapper[4861]: I0310 19:09:03.258729 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/d0c73a13-57f7-43aa-8e0a-ba36a3195653-ovn-rundir\") pod \"ovn-controller-metrics-8h5gk\" (UID: \"d0c73a13-57f7-43aa-8e0a-ba36a3195653\") " pod="openstack/ovn-controller-metrics-8h5gk" Mar 10 19:09:03 crc kubenswrapper[4861]: I0310 19:09:03.259019 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/d0c73a13-57f7-43aa-8e0a-ba36a3195653-ovn-rundir\") pod \"ovn-controller-metrics-8h5gk\" (UID: \"d0c73a13-57f7-43aa-8e0a-ba36a3195653\") " pod="openstack/ovn-controller-metrics-8h5gk" Mar 10 19:09:03 crc kubenswrapper[4861]: I0310 19:09:03.259075 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/d0c73a13-57f7-43aa-8e0a-ba36a3195653-ovs-rundir\") pod \"ovn-controller-metrics-8h5gk\" (UID: \"d0c73a13-57f7-43aa-8e0a-ba36a3195653\") " pod="openstack/ovn-controller-metrics-8h5gk" Mar 10 19:09:03 crc kubenswrapper[4861]: I0310 19:09:03.259655 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d0c73a13-57f7-43aa-8e0a-ba36a3195653-config\") pod \"ovn-controller-metrics-8h5gk\" (UID: \"d0c73a13-57f7-43aa-8e0a-ba36a3195653\") " pod="openstack/ovn-controller-metrics-8h5gk" Mar 10 19:09:03 crc kubenswrapper[4861]: I0310 19:09:03.262286 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/d0c73a13-57f7-43aa-8e0a-ba36a3195653-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-8h5gk\" (UID: \"d0c73a13-57f7-43aa-8e0a-ba36a3195653\") " pod="openstack/ovn-controller-metrics-8h5gk" Mar 10 19:09:03 crc kubenswrapper[4861]: I0310 19:09:03.262487 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d0c73a13-57f7-43aa-8e0a-ba36a3195653-combined-ca-bundle\") pod \"ovn-controller-metrics-8h5gk\" (UID: \"d0c73a13-57f7-43aa-8e0a-ba36a3195653\") " pod="openstack/ovn-controller-metrics-8h5gk" Mar 10 19:09:03 crc kubenswrapper[4861]: I0310 19:09:03.277128 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hz75m\" (UniqueName: \"kubernetes.io/projected/d0c73a13-57f7-43aa-8e0a-ba36a3195653-kube-api-access-hz75m\") pod \"ovn-controller-metrics-8h5gk\" (UID: \"d0c73a13-57f7-43aa-8e0a-ba36a3195653\") " pod="openstack/ovn-controller-metrics-8h5gk" Mar 10 19:09:03 crc kubenswrapper[4861]: I0310 19:09:03.331144 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6444958b7f-8hgpw" Mar 10 19:09:03 crc kubenswrapper[4861]: I0310 19:09:03.360111 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8c1ba054-6941-4e52-b792-250287f25d92-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"8c1ba054-6941-4e52-b792-250287f25d92\") " pod="openstack/ovn-northd-0" Mar 10 19:09:03 crc kubenswrapper[4861]: I0310 19:09:03.360162 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b33ea429-fdf5-482e-99b6-b4f7d0103988-dns-svc\") pod \"dnsmasq-dns-7b57d9888c-jd4f8\" (UID: \"b33ea429-fdf5-482e-99b6-b4f7d0103988\") " pod="openstack/dnsmasq-dns-7b57d9888c-jd4f8" Mar 10 19:09:03 crc kubenswrapper[4861]: I0310 19:09:03.360179 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b33ea429-fdf5-482e-99b6-b4f7d0103988-ovsdbserver-nb\") pod \"dnsmasq-dns-7b57d9888c-jd4f8\" (UID: \"b33ea429-fdf5-482e-99b6-b4f7d0103988\") " pod="openstack/dnsmasq-dns-7b57d9888c-jd4f8" Mar 10 19:09:03 crc kubenswrapper[4861]: I0310 19:09:03.360216 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/8c1ba054-6941-4e52-b792-250287f25d92-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"8c1ba054-6941-4e52-b792-250287f25d92\") " pod="openstack/ovn-northd-0" Mar 10 19:09:03 crc kubenswrapper[4861]: I0310 19:09:03.360245 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b33ea429-fdf5-482e-99b6-b4f7d0103988-ovsdbserver-sb\") pod \"dnsmasq-dns-7b57d9888c-jd4f8\" (UID: \"b33ea429-fdf5-482e-99b6-b4f7d0103988\") " pod="openstack/dnsmasq-dns-7b57d9888c-jd4f8" Mar 10 19:09:03 crc kubenswrapper[4861]: 
I0310 19:09:03.360267 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-75gwj\" (UniqueName: \"kubernetes.io/projected/8c1ba054-6941-4e52-b792-250287f25d92-kube-api-access-75gwj\") pod \"ovn-northd-0\" (UID: \"8c1ba054-6941-4e52-b792-250287f25d92\") " pod="openstack/ovn-northd-0" Mar 10 19:09:03 crc kubenswrapper[4861]: I0310 19:09:03.360289 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8c1ba054-6941-4e52-b792-250287f25d92-config\") pod \"ovn-northd-0\" (UID: \"8c1ba054-6941-4e52-b792-250287f25d92\") " pod="openstack/ovn-northd-0" Mar 10 19:09:03 crc kubenswrapper[4861]: I0310 19:09:03.360306 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6w7nl\" (UniqueName: \"kubernetes.io/projected/b33ea429-fdf5-482e-99b6-b4f7d0103988-kube-api-access-6w7nl\") pod \"dnsmasq-dns-7b57d9888c-jd4f8\" (UID: \"b33ea429-fdf5-482e-99b6-b4f7d0103988\") " pod="openstack/dnsmasq-dns-7b57d9888c-jd4f8" Mar 10 19:09:03 crc kubenswrapper[4861]: I0310 19:09:03.360328 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/8c1ba054-6941-4e52-b792-250287f25d92-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"8c1ba054-6941-4e52-b792-250287f25d92\") " pod="openstack/ovn-northd-0" Mar 10 19:09:03 crc kubenswrapper[4861]: I0310 19:09:03.360376 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/8c1ba054-6941-4e52-b792-250287f25d92-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"8c1ba054-6941-4e52-b792-250287f25d92\") " pod="openstack/ovn-northd-0" Mar 10 19:09:03 crc kubenswrapper[4861]: I0310 19:09:03.360399 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/8c1ba054-6941-4e52-b792-250287f25d92-scripts\") pod \"ovn-northd-0\" (UID: \"8c1ba054-6941-4e52-b792-250287f25d92\") " pod="openstack/ovn-northd-0" Mar 10 19:09:03 crc kubenswrapper[4861]: I0310 19:09:03.360417 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b33ea429-fdf5-482e-99b6-b4f7d0103988-config\") pod \"dnsmasq-dns-7b57d9888c-jd4f8\" (UID: \"b33ea429-fdf5-482e-99b6-b4f7d0103988\") " pod="openstack/dnsmasq-dns-7b57d9888c-jd4f8" Mar 10 19:09:03 crc kubenswrapper[4861]: I0310 19:09:03.361281 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b33ea429-fdf5-482e-99b6-b4f7d0103988-config\") pod \"dnsmasq-dns-7b57d9888c-jd4f8\" (UID: \"b33ea429-fdf5-482e-99b6-b4f7d0103988\") " pod="openstack/dnsmasq-dns-7b57d9888c-jd4f8" Mar 10 19:09:03 crc kubenswrapper[4861]: I0310 19:09:03.361927 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8c1ba054-6941-4e52-b792-250287f25d92-config\") pod \"ovn-northd-0\" (UID: \"8c1ba054-6941-4e52-b792-250287f25d92\") " pod="openstack/ovn-northd-0" Mar 10 19:09:03 crc kubenswrapper[4861]: I0310 19:09:03.362156 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/8c1ba054-6941-4e52-b792-250287f25d92-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"8c1ba054-6941-4e52-b792-250287f25d92\") " pod="openstack/ovn-northd-0" Mar 10 19:09:03 crc kubenswrapper[4861]: I0310 19:09:03.362874 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8c1ba054-6941-4e52-b792-250287f25d92-scripts\") pod \"ovn-northd-0\" (UID: \"8c1ba054-6941-4e52-b792-250287f25d92\") " pod="openstack/ovn-northd-0" Mar 10 19:09:03 crc kubenswrapper[4861]: I0310 19:09:03.362951 4861 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b33ea429-fdf5-482e-99b6-b4f7d0103988-ovsdbserver-sb\") pod \"dnsmasq-dns-7b57d9888c-jd4f8\" (UID: \"b33ea429-fdf5-482e-99b6-b4f7d0103988\") " pod="openstack/dnsmasq-dns-7b57d9888c-jd4f8" Mar 10 19:09:03 crc kubenswrapper[4861]: I0310 19:09:03.363056 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b33ea429-fdf5-482e-99b6-b4f7d0103988-dns-svc\") pod \"dnsmasq-dns-7b57d9888c-jd4f8\" (UID: \"b33ea429-fdf5-482e-99b6-b4f7d0103988\") " pod="openstack/dnsmasq-dns-7b57d9888c-jd4f8" Mar 10 19:09:03 crc kubenswrapper[4861]: I0310 19:09:03.364025 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b33ea429-fdf5-482e-99b6-b4f7d0103988-ovsdbserver-nb\") pod \"dnsmasq-dns-7b57d9888c-jd4f8\" (UID: \"b33ea429-fdf5-482e-99b6-b4f7d0103988\") " pod="openstack/dnsmasq-dns-7b57d9888c-jd4f8" Mar 10 19:09:03 crc kubenswrapper[4861]: I0310 19:09:03.364478 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8c1ba054-6941-4e52-b792-250287f25d92-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"8c1ba054-6941-4e52-b792-250287f25d92\") " pod="openstack/ovn-northd-0" Mar 10 19:09:03 crc kubenswrapper[4861]: I0310 19:09:03.366236 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/8c1ba054-6941-4e52-b792-250287f25d92-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"8c1ba054-6941-4e52-b792-250287f25d92\") " pod="openstack/ovn-northd-0" Mar 10 19:09:03 crc kubenswrapper[4861]: I0310 19:09:03.367063 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-northd-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/8c1ba054-6941-4e52-b792-250287f25d92-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"8c1ba054-6941-4e52-b792-250287f25d92\") " pod="openstack/ovn-northd-0" Mar 10 19:09:03 crc kubenswrapper[4861]: I0310 19:09:03.377124 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-8h5gk" Mar 10 19:09:03 crc kubenswrapper[4861]: I0310 19:09:03.379732 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6w7nl\" (UniqueName: \"kubernetes.io/projected/b33ea429-fdf5-482e-99b6-b4f7d0103988-kube-api-access-6w7nl\") pod \"dnsmasq-dns-7b57d9888c-jd4f8\" (UID: \"b33ea429-fdf5-482e-99b6-b4f7d0103988\") " pod="openstack/dnsmasq-dns-7b57d9888c-jd4f8" Mar 10 19:09:03 crc kubenswrapper[4861]: I0310 19:09:03.382059 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-75gwj\" (UniqueName: \"kubernetes.io/projected/8c1ba054-6941-4e52-b792-250287f25d92-kube-api-access-75gwj\") pod \"ovn-northd-0\" (UID: \"8c1ba054-6941-4e52-b792-250287f25d92\") " pod="openstack/ovn-northd-0" Mar 10 19:09:03 crc kubenswrapper[4861]: I0310 19:09:03.480960 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7b57d9888c-jd4f8" Mar 10 19:09:03 crc kubenswrapper[4861]: I0310 19:09:03.544224 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-northd-0" Mar 10 19:09:03 crc kubenswrapper[4861]: I0310 19:09:03.742515 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7c47bcb9f9-w2smt" podUID="5adbfde9-b062-424b-8ab9-641e35ace118" containerName="dnsmasq-dns" containerID="cri-o://259e1d3fcad10e4523abbd7f110faaf6bcd0cf38b15a1593075b0f17c618c534" gracePeriod=10 Mar 10 19:09:03 crc kubenswrapper[4861]: I0310 19:09:03.742848 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7c47bcb9f9-w2smt" event={"ID":"5adbfde9-b062-424b-8ab9-641e35ace118","Type":"ContainerStarted","Data":"259e1d3fcad10e4523abbd7f110faaf6bcd0cf38b15a1593075b0f17c618c534"} Mar 10 19:09:03 crc kubenswrapper[4861]: I0310 19:09:03.744158 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7c47bcb9f9-w2smt" Mar 10 19:09:03 crc kubenswrapper[4861]: I0310 19:09:03.767614 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7c47bcb9f9-w2smt" podStartSLOduration=-9223372000.087181 podStartE2EDuration="36.767594202s" podCreationTimestamp="2026-03-10 19:08:27 +0000 UTC" firstStartedPulling="2026-03-10 19:08:28.394337722 +0000 UTC m=+1252.157773682" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-10 19:09:03.761629217 +0000 UTC m=+1287.525065177" watchObservedRunningTime="2026-03-10 19:09:03.767594202 +0000 UTC m=+1287.531030162" Mar 10 19:09:03 crc kubenswrapper[4861]: W0310 19:09:03.826030 4861 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podda02c50a_02b8_4363_9e5b_8a8c7c3c9b83.slice/crio-6c7a2e152bd974329182875f6cd9cb4d9bcf8cd231a421821280fb3fd6b88fca WatchSource:0}: Error finding container 6c7a2e152bd974329182875f6cd9cb4d9bcf8cd231a421821280fb3fd6b88fca: Status 404 returned error can't find the container with id 
6c7a2e152bd974329182875f6cd9cb4d9bcf8cd231a421821280fb3fd6b88fca Mar 10 19:09:03 crc kubenswrapper[4861]: I0310 19:09:03.852630 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6444958b7f-8hgpw"] Mar 10 19:09:03 crc kubenswrapper[4861]: I0310 19:09:03.881919 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-8h5gk"] Mar 10 19:09:03 crc kubenswrapper[4861]: W0310 19:09:03.904678 4861 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd0c73a13_57f7_43aa_8e0a_ba36a3195653.slice/crio-2be44c1ed9c4193f97620f49ef77cea816423619f970e2c9516f5443a2a25111 WatchSource:0}: Error finding container 2be44c1ed9c4193f97620f49ef77cea816423619f970e2c9516f5443a2a25111: Status 404 returned error can't find the container with id 2be44c1ed9c4193f97620f49ef77cea816423619f970e2c9516f5443a2a25111 Mar 10 19:09:03 crc kubenswrapper[4861]: I0310 19:09:03.984613 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7b57d9888c-jd4f8"] Mar 10 19:09:03 crc kubenswrapper[4861]: W0310 19:09:03.985847 4861 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb33ea429_fdf5_482e_99b6_b4f7d0103988.slice/crio-ba2273cab8ab542d49b241913ebc44b41d6051c88397757b776990ec30276c5f WatchSource:0}: Error finding container ba2273cab8ab542d49b241913ebc44b41d6051c88397757b776990ec30276c5f: Status 404 returned error can't find the container with id ba2273cab8ab542d49b241913ebc44b41d6051c88397757b776990ec30276c5f Mar 10 19:09:04 crc kubenswrapper[4861]: I0310 19:09:04.134436 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Mar 10 19:09:04 crc kubenswrapper[4861]: W0310 19:09:04.153217 4861 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8c1ba054_6941_4e52_b792_250287f25d92.slice/crio-2cc47d55fa9b32dc5f059325991ac722d74e10227789869c3cdcd5c79c8737e8 WatchSource:0}: Error finding container 2cc47d55fa9b32dc5f059325991ac722d74e10227789869c3cdcd5c79c8737e8: Status 404 returned error can't find the container with id 2cc47d55fa9b32dc5f059325991ac722d74e10227789869c3cdcd5c79c8737e8 Mar 10 19:09:04 crc kubenswrapper[4861]: I0310 19:09:04.165479 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7c47bcb9f9-w2smt" Mar 10 19:09:04 crc kubenswrapper[4861]: I0310 19:09:04.289007 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5adbfde9-b062-424b-8ab9-641e35ace118-config\") pod \"5adbfde9-b062-424b-8ab9-641e35ace118\" (UID: \"5adbfde9-b062-424b-8ab9-641e35ace118\") " Mar 10 19:09:04 crc kubenswrapper[4861]: I0310 19:09:04.289079 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kkmbd\" (UniqueName: \"kubernetes.io/projected/5adbfde9-b062-424b-8ab9-641e35ace118-kube-api-access-kkmbd\") pod \"5adbfde9-b062-424b-8ab9-641e35ace118\" (UID: \"5adbfde9-b062-424b-8ab9-641e35ace118\") " Mar 10 19:09:04 crc kubenswrapper[4861]: I0310 19:09:04.289172 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5adbfde9-b062-424b-8ab9-641e35ace118-dns-svc\") pod \"5adbfde9-b062-424b-8ab9-641e35ace118\" (UID: \"5adbfde9-b062-424b-8ab9-641e35ace118\") " Mar 10 19:09:04 crc kubenswrapper[4861]: I0310 19:09:04.294453 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5adbfde9-b062-424b-8ab9-641e35ace118-kube-api-access-kkmbd" (OuterVolumeSpecName: "kube-api-access-kkmbd") pod "5adbfde9-b062-424b-8ab9-641e35ace118" (UID: 
"5adbfde9-b062-424b-8ab9-641e35ace118"). InnerVolumeSpecName "kube-api-access-kkmbd". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 10 19:09:04 crc kubenswrapper[4861]: I0310 19:09:04.351499 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6444958b7f-8hgpw"] Mar 10 19:09:04 crc kubenswrapper[4861]: I0310 19:09:04.389105 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-675f7dd995-bdz4g"] Mar 10 19:09:04 crc kubenswrapper[4861]: E0310 19:09:04.390488 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5adbfde9-b062-424b-8ab9-641e35ace118" containerName="dnsmasq-dns" Mar 10 19:09:04 crc kubenswrapper[4861]: I0310 19:09:04.390509 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="5adbfde9-b062-424b-8ab9-641e35ace118" containerName="dnsmasq-dns" Mar 10 19:09:04 crc kubenswrapper[4861]: E0310 19:09:04.390554 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5adbfde9-b062-424b-8ab9-641e35ace118" containerName="init" Mar 10 19:09:04 crc kubenswrapper[4861]: I0310 19:09:04.390561 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="5adbfde9-b062-424b-8ab9-641e35ace118" containerName="init" Mar 10 19:09:04 crc kubenswrapper[4861]: I0310 19:09:04.390720 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="5adbfde9-b062-424b-8ab9-641e35ace118" containerName="dnsmasq-dns" Mar 10 19:09:04 crc kubenswrapper[4861]: I0310 19:09:04.391650 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-675f7dd995-bdz4g" Mar 10 19:09:04 crc kubenswrapper[4861]: I0310 19:09:04.391958 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kkmbd\" (UniqueName: \"kubernetes.io/projected/5adbfde9-b062-424b-8ab9-641e35ace118-kube-api-access-kkmbd\") on node \"crc\" DevicePath \"\"" Mar 10 19:09:04 crc kubenswrapper[4861]: I0310 19:09:04.392853 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5adbfde9-b062-424b-8ab9-641e35ace118-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "5adbfde9-b062-424b-8ab9-641e35ace118" (UID: "5adbfde9-b062-424b-8ab9-641e35ace118"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 10 19:09:04 crc kubenswrapper[4861]: I0310 19:09:04.397050 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Mar 10 19:09:04 crc kubenswrapper[4861]: I0310 19:09:04.406757 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f7dd995-bdz4g"] Mar 10 19:09:04 crc kubenswrapper[4861]: I0310 19:09:04.410321 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5adbfde9-b062-424b-8ab9-641e35ace118-config" (OuterVolumeSpecName: "config") pod "5adbfde9-b062-424b-8ab9-641e35ace118" (UID: "5adbfde9-b062-424b-8ab9-641e35ace118"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 10 19:09:04 crc kubenswrapper[4861]: I0310 19:09:04.493552 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f3cf759d-7dce-473b-b790-ae9e344c2245-ovsdbserver-sb\") pod \"dnsmasq-dns-675f7dd995-bdz4g\" (UID: \"f3cf759d-7dce-473b-b790-ae9e344c2245\") " pod="openstack/dnsmasq-dns-675f7dd995-bdz4g" Mar 10 19:09:04 crc kubenswrapper[4861]: I0310 19:09:04.493839 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cgtkm\" (UniqueName: \"kubernetes.io/projected/f3cf759d-7dce-473b-b790-ae9e344c2245-kube-api-access-cgtkm\") pod \"dnsmasq-dns-675f7dd995-bdz4g\" (UID: \"f3cf759d-7dce-473b-b790-ae9e344c2245\") " pod="openstack/dnsmasq-dns-675f7dd995-bdz4g" Mar 10 19:09:04 crc kubenswrapper[4861]: I0310 19:09:04.493858 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f3cf759d-7dce-473b-b790-ae9e344c2245-ovsdbserver-nb\") pod \"dnsmasq-dns-675f7dd995-bdz4g\" (UID: \"f3cf759d-7dce-473b-b790-ae9e344c2245\") " pod="openstack/dnsmasq-dns-675f7dd995-bdz4g" Mar 10 19:09:04 crc kubenswrapper[4861]: I0310 19:09:04.493946 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f3cf759d-7dce-473b-b790-ae9e344c2245-config\") pod \"dnsmasq-dns-675f7dd995-bdz4g\" (UID: \"f3cf759d-7dce-473b-b790-ae9e344c2245\") " pod="openstack/dnsmasq-dns-675f7dd995-bdz4g" Mar 10 19:09:04 crc kubenswrapper[4861]: I0310 19:09:04.493988 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f3cf759d-7dce-473b-b790-ae9e344c2245-dns-svc\") pod \"dnsmasq-dns-675f7dd995-bdz4g\" (UID: 
\"f3cf759d-7dce-473b-b790-ae9e344c2245\") " pod="openstack/dnsmasq-dns-675f7dd995-bdz4g" Mar 10 19:09:04 crc kubenswrapper[4861]: I0310 19:09:04.494125 4861 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5adbfde9-b062-424b-8ab9-641e35ace118-config\") on node \"crc\" DevicePath \"\"" Mar 10 19:09:04 crc kubenswrapper[4861]: I0310 19:09:04.494141 4861 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5adbfde9-b062-424b-8ab9-641e35ace118-dns-svc\") on node \"crc\" DevicePath \"\"" Mar 10 19:09:04 crc kubenswrapper[4861]: I0310 19:09:04.595115 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f3cf759d-7dce-473b-b790-ae9e344c2245-ovsdbserver-nb\") pod \"dnsmasq-dns-675f7dd995-bdz4g\" (UID: \"f3cf759d-7dce-473b-b790-ae9e344c2245\") " pod="openstack/dnsmasq-dns-675f7dd995-bdz4g" Mar 10 19:09:04 crc kubenswrapper[4861]: I0310 19:09:04.595426 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f3cf759d-7dce-473b-b790-ae9e344c2245-config\") pod \"dnsmasq-dns-675f7dd995-bdz4g\" (UID: \"f3cf759d-7dce-473b-b790-ae9e344c2245\") " pod="openstack/dnsmasq-dns-675f7dd995-bdz4g" Mar 10 19:09:04 crc kubenswrapper[4861]: I0310 19:09:04.595464 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f3cf759d-7dce-473b-b790-ae9e344c2245-dns-svc\") pod \"dnsmasq-dns-675f7dd995-bdz4g\" (UID: \"f3cf759d-7dce-473b-b790-ae9e344c2245\") " pod="openstack/dnsmasq-dns-675f7dd995-bdz4g" Mar 10 19:09:04 crc kubenswrapper[4861]: I0310 19:09:04.595554 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f3cf759d-7dce-473b-b790-ae9e344c2245-ovsdbserver-sb\") pod 
\"dnsmasq-dns-675f7dd995-bdz4g\" (UID: \"f3cf759d-7dce-473b-b790-ae9e344c2245\") " pod="openstack/dnsmasq-dns-675f7dd995-bdz4g" Mar 10 19:09:04 crc kubenswrapper[4861]: I0310 19:09:04.595585 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cgtkm\" (UniqueName: \"kubernetes.io/projected/f3cf759d-7dce-473b-b790-ae9e344c2245-kube-api-access-cgtkm\") pod \"dnsmasq-dns-675f7dd995-bdz4g\" (UID: \"f3cf759d-7dce-473b-b790-ae9e344c2245\") " pod="openstack/dnsmasq-dns-675f7dd995-bdz4g" Mar 10 19:09:04 crc kubenswrapper[4861]: I0310 19:09:04.596237 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f3cf759d-7dce-473b-b790-ae9e344c2245-config\") pod \"dnsmasq-dns-675f7dd995-bdz4g\" (UID: \"f3cf759d-7dce-473b-b790-ae9e344c2245\") " pod="openstack/dnsmasq-dns-675f7dd995-bdz4g" Mar 10 19:09:04 crc kubenswrapper[4861]: I0310 19:09:04.596700 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f3cf759d-7dce-473b-b790-ae9e344c2245-ovsdbserver-sb\") pod \"dnsmasq-dns-675f7dd995-bdz4g\" (UID: \"f3cf759d-7dce-473b-b790-ae9e344c2245\") " pod="openstack/dnsmasq-dns-675f7dd995-bdz4g" Mar 10 19:09:04 crc kubenswrapper[4861]: I0310 19:09:04.596836 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f3cf759d-7dce-473b-b790-ae9e344c2245-dns-svc\") pod \"dnsmasq-dns-675f7dd995-bdz4g\" (UID: \"f3cf759d-7dce-473b-b790-ae9e344c2245\") " pod="openstack/dnsmasq-dns-675f7dd995-bdz4g" Mar 10 19:09:04 crc kubenswrapper[4861]: I0310 19:09:04.597687 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f3cf759d-7dce-473b-b790-ae9e344c2245-ovsdbserver-nb\") pod \"dnsmasq-dns-675f7dd995-bdz4g\" (UID: \"f3cf759d-7dce-473b-b790-ae9e344c2245\") " 
pod="openstack/dnsmasq-dns-675f7dd995-bdz4g" Mar 10 19:09:04 crc kubenswrapper[4861]: I0310 19:09:04.613278 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cgtkm\" (UniqueName: \"kubernetes.io/projected/f3cf759d-7dce-473b-b790-ae9e344c2245-kube-api-access-cgtkm\") pod \"dnsmasq-dns-675f7dd995-bdz4g\" (UID: \"f3cf759d-7dce-473b-b790-ae9e344c2245\") " pod="openstack/dnsmasq-dns-675f7dd995-bdz4g" Mar 10 19:09:04 crc kubenswrapper[4861]: I0310 19:09:04.745471 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f7dd995-bdz4g" Mar 10 19:09:04 crc kubenswrapper[4861]: I0310 19:09:04.749893 4861 generic.go:334] "Generic (PLEG): container finished" podID="da02c50a-02b8-4363-9e5b-8a8c7c3c9b83" containerID="89c07d2c5456b1aaee8d060db586fcee7a840d3278a1f77243a25ca9d94be56a" exitCode=0 Mar 10 19:09:04 crc kubenswrapper[4861]: I0310 19:09:04.749973 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6444958b7f-8hgpw" event={"ID":"da02c50a-02b8-4363-9e5b-8a8c7c3c9b83","Type":"ContainerDied","Data":"89c07d2c5456b1aaee8d060db586fcee7a840d3278a1f77243a25ca9d94be56a"} Mar 10 19:09:04 crc kubenswrapper[4861]: I0310 19:09:04.750007 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6444958b7f-8hgpw" event={"ID":"da02c50a-02b8-4363-9e5b-8a8c7c3c9b83","Type":"ContainerStarted","Data":"6c7a2e152bd974329182875f6cd9cb4d9bcf8cd231a421821280fb3fd6b88fca"} Mar 10 19:09:04 crc kubenswrapper[4861]: I0310 19:09:04.751631 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-8h5gk" event={"ID":"d0c73a13-57f7-43aa-8e0a-ba36a3195653","Type":"ContainerStarted","Data":"fd5a7b828f5536384606fbd147ae01f6df41721f8f15e86eef7446e25089e846"} Mar 10 19:09:04 crc kubenswrapper[4861]: I0310 19:09:04.751656 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-8h5gk" 
event={"ID":"d0c73a13-57f7-43aa-8e0a-ba36a3195653","Type":"ContainerStarted","Data":"2be44c1ed9c4193f97620f49ef77cea816423619f970e2c9516f5443a2a25111"} Mar 10 19:09:04 crc kubenswrapper[4861]: I0310 19:09:04.762261 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"8c1ba054-6941-4e52-b792-250287f25d92","Type":"ContainerStarted","Data":"2cc47d55fa9b32dc5f059325991ac722d74e10227789869c3cdcd5c79c8737e8"} Mar 10 19:09:04 crc kubenswrapper[4861]: I0310 19:09:04.763857 4861 generic.go:334] "Generic (PLEG): container finished" podID="b33ea429-fdf5-482e-99b6-b4f7d0103988" containerID="975c505e713b68ae4a324f48e5d603ec69d77687651e04e9c35d417d5c0b831b" exitCode=0 Mar 10 19:09:04 crc kubenswrapper[4861]: I0310 19:09:04.763901 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7b57d9888c-jd4f8" event={"ID":"b33ea429-fdf5-482e-99b6-b4f7d0103988","Type":"ContainerDied","Data":"975c505e713b68ae4a324f48e5d603ec69d77687651e04e9c35d417d5c0b831b"} Mar 10 19:09:04 crc kubenswrapper[4861]: I0310 19:09:04.763917 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7b57d9888c-jd4f8" event={"ID":"b33ea429-fdf5-482e-99b6-b4f7d0103988","Type":"ContainerStarted","Data":"ba2273cab8ab542d49b241913ebc44b41d6051c88397757b776990ec30276c5f"} Mar 10 19:09:04 crc kubenswrapper[4861]: I0310 19:09:04.766954 4861 generic.go:334] "Generic (PLEG): container finished" podID="5adbfde9-b062-424b-8ab9-641e35ace118" containerID="259e1d3fcad10e4523abbd7f110faaf6bcd0cf38b15a1593075b0f17c618c534" exitCode=0 Mar 10 19:09:04 crc kubenswrapper[4861]: I0310 19:09:04.767102 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7c47bcb9f9-w2smt" event={"ID":"5adbfde9-b062-424b-8ab9-641e35ace118","Type":"ContainerDied","Data":"259e1d3fcad10e4523abbd7f110faaf6bcd0cf38b15a1593075b0f17c618c534"} Mar 10 19:09:04 crc kubenswrapper[4861]: I0310 19:09:04.767153 4861 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/dnsmasq-dns-7c47bcb9f9-w2smt" event={"ID":"5adbfde9-b062-424b-8ab9-641e35ace118","Type":"ContainerDied","Data":"02d33a97bff4c357ae0acea20eb36df9cff4d2966945df049d1d73e22bea1990"} Mar 10 19:09:04 crc kubenswrapper[4861]: I0310 19:09:04.767171 4861 scope.go:117] "RemoveContainer" containerID="259e1d3fcad10e4523abbd7f110faaf6bcd0cf38b15a1593075b0f17c618c534" Mar 10 19:09:04 crc kubenswrapper[4861]: I0310 19:09:04.767338 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7c47bcb9f9-w2smt" Mar 10 19:09:04 crc kubenswrapper[4861]: I0310 19:09:04.767941 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-78cb4465c9-psclv" podUID="6a51c99a-f2a2-4db2-8ac4-adf514cad8a3" containerName="dnsmasq-dns" containerID="cri-o://092c35e59e4e9b4ab57e38a9d2a27794808d5eb36665230282f1036b357e7f67" gracePeriod=10 Mar 10 19:09:04 crc kubenswrapper[4861]: I0310 19:09:04.883312 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-metrics-8h5gk" podStartSLOduration=1.883293057 podStartE2EDuration="1.883293057s" podCreationTimestamp="2026-03-10 19:09:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-10 19:09:04.818065533 +0000 UTC m=+1288.581501503" watchObservedRunningTime="2026-03-10 19:09:04.883293057 +0000 UTC m=+1288.646729017" Mar 10 19:09:04 crc kubenswrapper[4861]: I0310 19:09:04.931167 4861 scope.go:117] "RemoveContainer" containerID="3d6970b81612809ffbfdf841eee0ee6beb2fde50772d889e9f0cad3832397f8a" Mar 10 19:09:04 crc kubenswrapper[4861]: I0310 19:09:04.954194 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7c47bcb9f9-w2smt"] Mar 10 19:09:04 crc kubenswrapper[4861]: I0310 19:09:04.976194 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openstack/dnsmasq-dns-7c47bcb9f9-w2smt"] Mar 10 19:09:05 crc kubenswrapper[4861]: E0310 19:09:05.030569 4861 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5adbfde9_b062_424b_8ab9_641e35ace118.slice/crio-02d33a97bff4c357ae0acea20eb36df9cff4d2966945df049d1d73e22bea1990\": RecentStats: unable to find data in memory cache]" Mar 10 19:09:05 crc kubenswrapper[4861]: I0310 19:09:05.033022 4861 scope.go:117] "RemoveContainer" containerID="259e1d3fcad10e4523abbd7f110faaf6bcd0cf38b15a1593075b0f17c618c534" Mar 10 19:09:05 crc kubenswrapper[4861]: E0310 19:09:05.036670 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"259e1d3fcad10e4523abbd7f110faaf6bcd0cf38b15a1593075b0f17c618c534\": container with ID starting with 259e1d3fcad10e4523abbd7f110faaf6bcd0cf38b15a1593075b0f17c618c534 not found: ID does not exist" containerID="259e1d3fcad10e4523abbd7f110faaf6bcd0cf38b15a1593075b0f17c618c534" Mar 10 19:09:05 crc kubenswrapper[4861]: I0310 19:09:05.036736 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"259e1d3fcad10e4523abbd7f110faaf6bcd0cf38b15a1593075b0f17c618c534"} err="failed to get container status \"259e1d3fcad10e4523abbd7f110faaf6bcd0cf38b15a1593075b0f17c618c534\": rpc error: code = NotFound desc = could not find container \"259e1d3fcad10e4523abbd7f110faaf6bcd0cf38b15a1593075b0f17c618c534\": container with ID starting with 259e1d3fcad10e4523abbd7f110faaf6bcd0cf38b15a1593075b0f17c618c534 not found: ID does not exist" Mar 10 19:09:05 crc kubenswrapper[4861]: I0310 19:09:05.036769 4861 scope.go:117] "RemoveContainer" containerID="3d6970b81612809ffbfdf841eee0ee6beb2fde50772d889e9f0cad3832397f8a" Mar 10 19:09:05 crc kubenswrapper[4861]: E0310 19:09:05.037019 4861 log.go:32] "ContainerStatus from runtime service failed" 
err="rpc error: code = NotFound desc = could not find container \"3d6970b81612809ffbfdf841eee0ee6beb2fde50772d889e9f0cad3832397f8a\": container with ID starting with 3d6970b81612809ffbfdf841eee0ee6beb2fde50772d889e9f0cad3832397f8a not found: ID does not exist" containerID="3d6970b81612809ffbfdf841eee0ee6beb2fde50772d889e9f0cad3832397f8a" Mar 10 19:09:05 crc kubenswrapper[4861]: I0310 19:09:05.037047 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3d6970b81612809ffbfdf841eee0ee6beb2fde50772d889e9f0cad3832397f8a"} err="failed to get container status \"3d6970b81612809ffbfdf841eee0ee6beb2fde50772d889e9f0cad3832397f8a\": rpc error: code = NotFound desc = could not find container \"3d6970b81612809ffbfdf841eee0ee6beb2fde50772d889e9f0cad3832397f8a\": container with ID starting with 3d6970b81612809ffbfdf841eee0ee6beb2fde50772d889e9f0cad3832397f8a not found: ID does not exist" Mar 10 19:09:05 crc kubenswrapper[4861]: E0310 19:09:05.110870 4861 log.go:32] "CreateContainer in sandbox from runtime service failed" err=< Mar 10 19:09:05 crc kubenswrapper[4861]: rpc error: code = Unknown desc = container create failed: mount `/var/lib/kubelet/pods/da02c50a-02b8-4363-9e5b-8a8c7c3c9b83/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory Mar 10 19:09:05 crc kubenswrapper[4861]: > podSandboxID="6c7a2e152bd974329182875f6cd9cb4d9bcf8cd231a421821280fb3fd6b88fca" Mar 10 19:09:05 crc kubenswrapper[4861]: E0310 19:09:05.111414 4861 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 10 19:09:05 crc kubenswrapper[4861]: container &Container{Name:dnsmasq-dns,Image:quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:944833b50342d462c10637342bc85197a8cf099a3650df12e23854dde99af514,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 
--log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n5bfh5d7h8hd8h664h564hfbh5d4h5f5h55h5fch66h675hb8h65bh64dhbh5dchc9h66fh5dbhf4h658h64ch55bhbh65h55dh597h68dh579hbdq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovsdbserver-nb,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/ovsdbserver-nb,SubPath:ovsdbserver-nb,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-cqxtp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 5353 },Host:,},GRPC:nil,},InitialDelaySeconds:3,TimeoutSeconds:5,PeriodSeconds:3,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 5353 
},Host:,},GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-6444958b7f-8hgpw_openstack(da02c50a-02b8-4363-9e5b-8a8c7c3c9b83): CreateContainerError: container create failed: mount `/var/lib/kubelet/pods/da02c50a-02b8-4363-9e5b-8a8c7c3c9b83/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory Mar 10 19:09:05 crc kubenswrapper[4861]: > logger="UnhandledError" Mar 10 19:09:05 crc kubenswrapper[4861]: E0310 19:09:05.113342 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dnsmasq-dns\" with CreateContainerError: \"container create failed: mount `/var/lib/kubelet/pods/da02c50a-02b8-4363-9e5b-8a8c7c3c9b83/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory\\n\"" pod="openstack/dnsmasq-dns-6444958b7f-8hgpw" podUID="da02c50a-02b8-4363-9e5b-8a8c7c3c9b83" Mar 10 19:09:05 crc kubenswrapper[4861]: E0310 19:09:05.127179 4861 log.go:32] "CreateContainer in sandbox from runtime service failed" err=< Mar 10 19:09:05 crc kubenswrapper[4861]: rpc error: code = Unknown desc = container create failed: mount 
`/var/lib/kubelet/pods/b33ea429-fdf5-482e-99b6-b4f7d0103988/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory Mar 10 19:09:05 crc kubenswrapper[4861]: > podSandboxID="ba2273cab8ab542d49b241913ebc44b41d6051c88397757b776990ec30276c5f" Mar 10 19:09:05 crc kubenswrapper[4861]: E0310 19:09:05.127289 4861 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 10 19:09:05 crc kubenswrapper[4861]: container &Container{Name:dnsmasq-dns,Image:quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:944833b50342d462c10637342bc85197a8cf099a3650df12e23854dde99af514,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n599h5cbh7ch5d4h66fh676hdbh546h95h88h5ffh55ch7fhch57ch687hddhc7h5fdh57dh674h56fh64ch98h9bh557h55dh646h54ch54fh5c4h597q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovsdbserver-nb,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/ovsdbserver-nb,SubPath:ovsdbserver-nb,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovsdbserver-sb,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/ovsdbserver-sb,SubPath:o
vsdbserver-sb,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6w7nl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 5353 },Host:,},GRPC:nil,},InitialDelaySeconds:3,TimeoutSeconds:5,PeriodSeconds:3,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 5353 },Host:,},GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-7b57d9888c-jd4f8_openstack(b33ea429-fdf5-482e-99b6-b4f7d0103988): CreateContainerError: container create failed: mount `/var/lib/kubelet/pods/b33ea429-fdf5-482e-99b6-b4f7d0103988/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory Mar 10 19:09:05 crc kubenswrapper[4861]: > logger="UnhandledError" Mar 10 19:09:05 crc kubenswrapper[4861]: E0310 19:09:05.128408 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dnsmasq-dns\" with CreateContainerError: 
\"container create failed: mount `/var/lib/kubelet/pods/b33ea429-fdf5-482e-99b6-b4f7d0103988/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory\\n\"" pod="openstack/dnsmasq-dns-7b57d9888c-jd4f8" podUID="b33ea429-fdf5-482e-99b6-b4f7d0103988" Mar 10 19:09:05 crc kubenswrapper[4861]: I0310 19:09:05.333387 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f7dd995-bdz4g"] Mar 10 19:09:05 crc kubenswrapper[4861]: I0310 19:09:05.334043 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78cb4465c9-psclv" Mar 10 19:09:05 crc kubenswrapper[4861]: I0310 19:09:05.420517 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9qzsj\" (UniqueName: \"kubernetes.io/projected/6a51c99a-f2a2-4db2-8ac4-adf514cad8a3-kube-api-access-9qzsj\") pod \"6a51c99a-f2a2-4db2-8ac4-adf514cad8a3\" (UID: \"6a51c99a-f2a2-4db2-8ac4-adf514cad8a3\") " Mar 10 19:09:05 crc kubenswrapper[4861]: I0310 19:09:05.420781 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6a51c99a-f2a2-4db2-8ac4-adf514cad8a3-config\") pod \"6a51c99a-f2a2-4db2-8ac4-adf514cad8a3\" (UID: \"6a51c99a-f2a2-4db2-8ac4-adf514cad8a3\") " Mar 10 19:09:05 crc kubenswrapper[4861]: I0310 19:09:05.420874 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6a51c99a-f2a2-4db2-8ac4-adf514cad8a3-dns-svc\") pod \"6a51c99a-f2a2-4db2-8ac4-adf514cad8a3\" (UID: \"6a51c99a-f2a2-4db2-8ac4-adf514cad8a3\") " Mar 10 19:09:05 crc kubenswrapper[4861]: I0310 19:09:05.426029 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6a51c99a-f2a2-4db2-8ac4-adf514cad8a3-kube-api-access-9qzsj" (OuterVolumeSpecName: "kube-api-access-9qzsj") pod 
"6a51c99a-f2a2-4db2-8ac4-adf514cad8a3" (UID: "6a51c99a-f2a2-4db2-8ac4-adf514cad8a3"). InnerVolumeSpecName "kube-api-access-9qzsj". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 10 19:09:05 crc kubenswrapper[4861]: I0310 19:09:05.458058 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6a51c99a-f2a2-4db2-8ac4-adf514cad8a3-config" (OuterVolumeSpecName: "config") pod "6a51c99a-f2a2-4db2-8ac4-adf514cad8a3" (UID: "6a51c99a-f2a2-4db2-8ac4-adf514cad8a3"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 10 19:09:05 crc kubenswrapper[4861]: I0310 19:09:05.479201 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6a51c99a-f2a2-4db2-8ac4-adf514cad8a3-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "6a51c99a-f2a2-4db2-8ac4-adf514cad8a3" (UID: "6a51c99a-f2a2-4db2-8ac4-adf514cad8a3"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 10 19:09:05 crc kubenswrapper[4861]: I0310 19:09:05.522364 4861 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6a51c99a-f2a2-4db2-8ac4-adf514cad8a3-dns-svc\") on node \"crc\" DevicePath \"\"" Mar 10 19:09:05 crc kubenswrapper[4861]: I0310 19:09:05.522391 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9qzsj\" (UniqueName: \"kubernetes.io/projected/6a51c99a-f2a2-4db2-8ac4-adf514cad8a3-kube-api-access-9qzsj\") on node \"crc\" DevicePath \"\"" Mar 10 19:09:05 crc kubenswrapper[4861]: I0310 19:09:05.522401 4861 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6a51c99a-f2a2-4db2-8ac4-adf514cad8a3-config\") on node \"crc\" DevicePath \"\"" Mar 10 19:09:05 crc kubenswrapper[4861]: I0310 19:09:05.575500 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-storage-0"] Mar 10 19:09:05 crc 
kubenswrapper[4861]: E0310 19:09:05.576106 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6a51c99a-f2a2-4db2-8ac4-adf514cad8a3" containerName="init" Mar 10 19:09:05 crc kubenswrapper[4861]: I0310 19:09:05.576125 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="6a51c99a-f2a2-4db2-8ac4-adf514cad8a3" containerName="init" Mar 10 19:09:05 crc kubenswrapper[4861]: E0310 19:09:05.576144 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6a51c99a-f2a2-4db2-8ac4-adf514cad8a3" containerName="dnsmasq-dns" Mar 10 19:09:05 crc kubenswrapper[4861]: I0310 19:09:05.576151 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="6a51c99a-f2a2-4db2-8ac4-adf514cad8a3" containerName="dnsmasq-dns" Mar 10 19:09:05 crc kubenswrapper[4861]: I0310 19:09:05.576301 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="6a51c99a-f2a2-4db2-8ac4-adf514cad8a3" containerName="dnsmasq-dns" Mar 10 19:09:05 crc kubenswrapper[4861]: I0310 19:09:05.580643 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-storage-0" Mar 10 19:09:05 crc kubenswrapper[4861]: I0310 19:09:05.581353 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Mar 10 19:09:05 crc kubenswrapper[4861]: I0310 19:09:05.584396 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-conf" Mar 10 19:09:05 crc kubenswrapper[4861]: I0310 19:09:05.584527 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-files" Mar 10 19:09:05 crc kubenswrapper[4861]: I0310 19:09:05.584640 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-swift-dockercfg-d2tvz" Mar 10 19:09:05 crc kubenswrapper[4861]: I0310 19:09:05.584762 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-storage-config-data" Mar 10 19:09:05 crc kubenswrapper[4861]: I0310 19:09:05.724451 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"swift-storage-0\" (UID: \"04bbfc10-7f55-45a5-8a53-70e994a09bc9\") " pod="openstack/swift-storage-0" Mar 10 19:09:05 crc kubenswrapper[4861]: I0310 19:09:05.724495 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04bbfc10-7f55-45a5-8a53-70e994a09bc9-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"04bbfc10-7f55-45a5-8a53-70e994a09bc9\") " pod="openstack/swift-storage-0" Mar 10 19:09:05 crc kubenswrapper[4861]: I0310 19:09:05.724522 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/04bbfc10-7f55-45a5-8a53-70e994a09bc9-cache\") pod \"swift-storage-0\" (UID: \"04bbfc10-7f55-45a5-8a53-70e994a09bc9\") " pod="openstack/swift-storage-0" Mar 10 19:09:05 
crc kubenswrapper[4861]: I0310 19:09:05.724599 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/04bbfc10-7f55-45a5-8a53-70e994a09bc9-etc-swift\") pod \"swift-storage-0\" (UID: \"04bbfc10-7f55-45a5-8a53-70e994a09bc9\") " pod="openstack/swift-storage-0" Mar 10 19:09:05 crc kubenswrapper[4861]: I0310 19:09:05.724640 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9mz6p\" (UniqueName: \"kubernetes.io/projected/04bbfc10-7f55-45a5-8a53-70e994a09bc9-kube-api-access-9mz6p\") pod \"swift-storage-0\" (UID: \"04bbfc10-7f55-45a5-8a53-70e994a09bc9\") " pod="openstack/swift-storage-0" Mar 10 19:09:05 crc kubenswrapper[4861]: I0310 19:09:05.724664 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/04bbfc10-7f55-45a5-8a53-70e994a09bc9-lock\") pod \"swift-storage-0\" (UID: \"04bbfc10-7f55-45a5-8a53-70e994a09bc9\") " pod="openstack/swift-storage-0" Mar 10 19:09:05 crc kubenswrapper[4861]: I0310 19:09:05.774840 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f7dd995-bdz4g" event={"ID":"f3cf759d-7dce-473b-b790-ae9e344c2245","Type":"ContainerStarted","Data":"e131c30fec362d60a9a318de68170014a5e1b763eaf17decb87081cd0e783c25"} Mar 10 19:09:05 crc kubenswrapper[4861]: I0310 19:09:05.777517 4861 generic.go:334] "Generic (PLEG): container finished" podID="6a51c99a-f2a2-4db2-8ac4-adf514cad8a3" containerID="092c35e59e4e9b4ab57e38a9d2a27794808d5eb36665230282f1036b357e7f67" exitCode=0 Mar 10 19:09:05 crc kubenswrapper[4861]: I0310 19:09:05.777583 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78cb4465c9-psclv" Mar 10 19:09:05 crc kubenswrapper[4861]: I0310 19:09:05.777658 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78cb4465c9-psclv" event={"ID":"6a51c99a-f2a2-4db2-8ac4-adf514cad8a3","Type":"ContainerDied","Data":"092c35e59e4e9b4ab57e38a9d2a27794808d5eb36665230282f1036b357e7f67"} Mar 10 19:09:05 crc kubenswrapper[4861]: I0310 19:09:05.777696 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78cb4465c9-psclv" event={"ID":"6a51c99a-f2a2-4db2-8ac4-adf514cad8a3","Type":"ContainerDied","Data":"517f9a00476508c3e3ad180f63537b41e61bcf858505a78352cbf91301833393"} Mar 10 19:09:05 crc kubenswrapper[4861]: I0310 19:09:05.777723 4861 scope.go:117] "RemoveContainer" containerID="092c35e59e4e9b4ab57e38a9d2a27794808d5eb36665230282f1036b357e7f67" Mar 10 19:09:05 crc kubenswrapper[4861]: I0310 19:09:05.817370 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78cb4465c9-psclv"] Mar 10 19:09:05 crc kubenswrapper[4861]: I0310 19:09:05.822792 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-78cb4465c9-psclv"] Mar 10 19:09:05 crc kubenswrapper[4861]: I0310 19:09:05.827316 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/04bbfc10-7f55-45a5-8a53-70e994a09bc9-etc-swift\") pod \"swift-storage-0\" (UID: \"04bbfc10-7f55-45a5-8a53-70e994a09bc9\") " pod="openstack/swift-storage-0" Mar 10 19:09:05 crc kubenswrapper[4861]: I0310 19:09:05.827390 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9mz6p\" (UniqueName: \"kubernetes.io/projected/04bbfc10-7f55-45a5-8a53-70e994a09bc9-kube-api-access-9mz6p\") pod \"swift-storage-0\" (UID: \"04bbfc10-7f55-45a5-8a53-70e994a09bc9\") " pod="openstack/swift-storage-0" Mar 10 19:09:05 crc kubenswrapper[4861]: I0310 19:09:05.827415 4861 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/04bbfc10-7f55-45a5-8a53-70e994a09bc9-lock\") pod \"swift-storage-0\" (UID: \"04bbfc10-7f55-45a5-8a53-70e994a09bc9\") " pod="openstack/swift-storage-0" Mar 10 19:09:05 crc kubenswrapper[4861]: I0310 19:09:05.827444 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"swift-storage-0\" (UID: \"04bbfc10-7f55-45a5-8a53-70e994a09bc9\") " pod="openstack/swift-storage-0" Mar 10 19:09:05 crc kubenswrapper[4861]: I0310 19:09:05.827465 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04bbfc10-7f55-45a5-8a53-70e994a09bc9-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"04bbfc10-7f55-45a5-8a53-70e994a09bc9\") " pod="openstack/swift-storage-0" Mar 10 19:09:05 crc kubenswrapper[4861]: I0310 19:09:05.827484 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/04bbfc10-7f55-45a5-8a53-70e994a09bc9-cache\") pod \"swift-storage-0\" (UID: \"04bbfc10-7f55-45a5-8a53-70e994a09bc9\") " pod="openstack/swift-storage-0" Mar 10 19:09:05 crc kubenswrapper[4861]: I0310 19:09:05.828240 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/04bbfc10-7f55-45a5-8a53-70e994a09bc9-cache\") pod \"swift-storage-0\" (UID: \"04bbfc10-7f55-45a5-8a53-70e994a09bc9\") " pod="openstack/swift-storage-0" Mar 10 19:09:05 crc kubenswrapper[4861]: E0310 19:09:05.828340 4861 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Mar 10 19:09:05 crc kubenswrapper[4861]: E0310 19:09:05.828358 4861 projected.go:194] Error preparing data for projected volume etc-swift for pod 
openstack/swift-storage-0: configmap "swift-ring-files" not found Mar 10 19:09:05 crc kubenswrapper[4861]: E0310 19:09:05.828394 4861 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/04bbfc10-7f55-45a5-8a53-70e994a09bc9-etc-swift podName:04bbfc10-7f55-45a5-8a53-70e994a09bc9 nodeName:}" failed. No retries permitted until 2026-03-10 19:09:06.328380257 +0000 UTC m=+1290.091816217 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/04bbfc10-7f55-45a5-8a53-70e994a09bc9-etc-swift") pod "swift-storage-0" (UID: "04bbfc10-7f55-45a5-8a53-70e994a09bc9") : configmap "swift-ring-files" not found Mar 10 19:09:05 crc kubenswrapper[4861]: I0310 19:09:05.828798 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/04bbfc10-7f55-45a5-8a53-70e994a09bc9-lock\") pod \"swift-storage-0\" (UID: \"04bbfc10-7f55-45a5-8a53-70e994a09bc9\") " pod="openstack/swift-storage-0" Mar 10 19:09:05 crc kubenswrapper[4861]: I0310 19:09:05.828991 4861 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"swift-storage-0\" (UID: \"04bbfc10-7f55-45a5-8a53-70e994a09bc9\") device mount path \"/mnt/openstack/pv05\"" pod="openstack/swift-storage-0" Mar 10 19:09:05 crc kubenswrapper[4861]: I0310 19:09:05.842823 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04bbfc10-7f55-45a5-8a53-70e994a09bc9-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"04bbfc10-7f55-45a5-8a53-70e994a09bc9\") " pod="openstack/swift-storage-0" Mar 10 19:09:05 crc kubenswrapper[4861]: I0310 19:09:05.843338 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9mz6p\" (UniqueName: 
\"kubernetes.io/projected/04bbfc10-7f55-45a5-8a53-70e994a09bc9-kube-api-access-9mz6p\") pod \"swift-storage-0\" (UID: \"04bbfc10-7f55-45a5-8a53-70e994a09bc9\") " pod="openstack/swift-storage-0" Mar 10 19:09:05 crc kubenswrapper[4861]: I0310 19:09:05.851645 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"swift-storage-0\" (UID: \"04bbfc10-7f55-45a5-8a53-70e994a09bc9\") " pod="openstack/swift-storage-0" Mar 10 19:09:05 crc kubenswrapper[4861]: I0310 19:09:05.963241 4861 scope.go:117] "RemoveContainer" containerID="da972686c489a400bdd60917f7e987ae7c70062d472054aa6bd5ec1b2be8a101" Mar 10 19:09:06 crc kubenswrapper[4861]: I0310 19:09:06.111359 4861 scope.go:117] "RemoveContainer" containerID="092c35e59e4e9b4ab57e38a9d2a27794808d5eb36665230282f1036b357e7f67" Mar 10 19:09:06 crc kubenswrapper[4861]: E0310 19:09:06.111942 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"092c35e59e4e9b4ab57e38a9d2a27794808d5eb36665230282f1036b357e7f67\": container with ID starting with 092c35e59e4e9b4ab57e38a9d2a27794808d5eb36665230282f1036b357e7f67 not found: ID does not exist" containerID="092c35e59e4e9b4ab57e38a9d2a27794808d5eb36665230282f1036b357e7f67" Mar 10 19:09:06 crc kubenswrapper[4861]: I0310 19:09:06.111987 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"092c35e59e4e9b4ab57e38a9d2a27794808d5eb36665230282f1036b357e7f67"} err="failed to get container status \"092c35e59e4e9b4ab57e38a9d2a27794808d5eb36665230282f1036b357e7f67\": rpc error: code = NotFound desc = could not find container \"092c35e59e4e9b4ab57e38a9d2a27794808d5eb36665230282f1036b357e7f67\": container with ID starting with 092c35e59e4e9b4ab57e38a9d2a27794808d5eb36665230282f1036b357e7f67 not found: ID does not exist" Mar 10 19:09:06 crc kubenswrapper[4861]: I0310 
19:09:06.112013 4861 scope.go:117] "RemoveContainer" containerID="da972686c489a400bdd60917f7e987ae7c70062d472054aa6bd5ec1b2be8a101" Mar 10 19:09:06 crc kubenswrapper[4861]: E0310 19:09:06.113330 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"da972686c489a400bdd60917f7e987ae7c70062d472054aa6bd5ec1b2be8a101\": container with ID starting with da972686c489a400bdd60917f7e987ae7c70062d472054aa6bd5ec1b2be8a101 not found: ID does not exist" containerID="da972686c489a400bdd60917f7e987ae7c70062d472054aa6bd5ec1b2be8a101" Mar 10 19:09:06 crc kubenswrapper[4861]: I0310 19:09:06.113357 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"da972686c489a400bdd60917f7e987ae7c70062d472054aa6bd5ec1b2be8a101"} err="failed to get container status \"da972686c489a400bdd60917f7e987ae7c70062d472054aa6bd5ec1b2be8a101\": rpc error: code = NotFound desc = could not find container \"da972686c489a400bdd60917f7e987ae7c70062d472054aa6bd5ec1b2be8a101\": container with ID starting with da972686c489a400bdd60917f7e987ae7c70062d472054aa6bd5ec1b2be8a101 not found: ID does not exist" Mar 10 19:09:06 crc kubenswrapper[4861]: I0310 19:09:06.150470 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6444958b7f-8hgpw" Mar 10 19:09:06 crc kubenswrapper[4861]: I0310 19:09:06.164965 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-jvlmr"] Mar 10 19:09:06 crc kubenswrapper[4861]: E0310 19:09:06.165337 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="da02c50a-02b8-4363-9e5b-8a8c7c3c9b83" containerName="init" Mar 10 19:09:06 crc kubenswrapper[4861]: I0310 19:09:06.165354 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="da02c50a-02b8-4363-9e5b-8a8c7c3c9b83" containerName="init" Mar 10 19:09:06 crc kubenswrapper[4861]: I0310 19:09:06.165527 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="da02c50a-02b8-4363-9e5b-8a8c7c3c9b83" containerName="init" Mar 10 19:09:06 crc kubenswrapper[4861]: I0310 19:09:06.174179 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-jvlmr" Mar 10 19:09:06 crc kubenswrapper[4861]: I0310 19:09:06.185147 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Mar 10 19:09:06 crc kubenswrapper[4861]: I0310 19:09:06.186776 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-config-data" Mar 10 19:09:06 crc kubenswrapper[4861]: I0310 19:09:06.187591 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-jvlmr"] Mar 10 19:09:06 crc kubenswrapper[4861]: I0310 19:09:06.189676 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-scripts" Mar 10 19:09:06 crc kubenswrapper[4861]: I0310 19:09:06.232004 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/da02c50a-02b8-4363-9e5b-8a8c7c3c9b83-ovsdbserver-nb\") pod \"da02c50a-02b8-4363-9e5b-8a8c7c3c9b83\" (UID: \"da02c50a-02b8-4363-9e5b-8a8c7c3c9b83\") " 
Mar 10 19:09:06 crc kubenswrapper[4861]: I0310 19:09:06.232089 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cqxtp\" (UniqueName: \"kubernetes.io/projected/da02c50a-02b8-4363-9e5b-8a8c7c3c9b83-kube-api-access-cqxtp\") pod \"da02c50a-02b8-4363-9e5b-8a8c7c3c9b83\" (UID: \"da02c50a-02b8-4363-9e5b-8a8c7c3c9b83\") " Mar 10 19:09:06 crc kubenswrapper[4861]: I0310 19:09:06.232198 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/da02c50a-02b8-4363-9e5b-8a8c7c3c9b83-dns-svc\") pod \"da02c50a-02b8-4363-9e5b-8a8c7c3c9b83\" (UID: \"da02c50a-02b8-4363-9e5b-8a8c7c3c9b83\") " Mar 10 19:09:06 crc kubenswrapper[4861]: I0310 19:09:06.232289 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/da02c50a-02b8-4363-9e5b-8a8c7c3c9b83-config\") pod \"da02c50a-02b8-4363-9e5b-8a8c7c3c9b83\" (UID: \"da02c50a-02b8-4363-9e5b-8a8c7c3c9b83\") " Mar 10 19:09:06 crc kubenswrapper[4861]: I0310 19:09:06.237575 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/da02c50a-02b8-4363-9e5b-8a8c7c3c9b83-kube-api-access-cqxtp" (OuterVolumeSpecName: "kube-api-access-cqxtp") pod "da02c50a-02b8-4363-9e5b-8a8c7c3c9b83" (UID: "da02c50a-02b8-4363-9e5b-8a8c7c3c9b83"). InnerVolumeSpecName "kube-api-access-cqxtp". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 10 19:09:06 crc kubenswrapper[4861]: I0310 19:09:06.267595 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/da02c50a-02b8-4363-9e5b-8a8c7c3c9b83-config" (OuterVolumeSpecName: "config") pod "da02c50a-02b8-4363-9e5b-8a8c7c3c9b83" (UID: "da02c50a-02b8-4363-9e5b-8a8c7c3c9b83"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 10 19:09:06 crc kubenswrapper[4861]: I0310 19:09:06.283652 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/da02c50a-02b8-4363-9e5b-8a8c7c3c9b83-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "da02c50a-02b8-4363-9e5b-8a8c7c3c9b83" (UID: "da02c50a-02b8-4363-9e5b-8a8c7c3c9b83"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 10 19:09:06 crc kubenswrapper[4861]: I0310 19:09:06.284167 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/da02c50a-02b8-4363-9e5b-8a8c7c3c9b83-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "da02c50a-02b8-4363-9e5b-8a8c7c3c9b83" (UID: "da02c50a-02b8-4363-9e5b-8a8c7c3c9b83"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 10 19:09:06 crc kubenswrapper[4861]: I0310 19:09:06.333515 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/04bbfc10-7f55-45a5-8a53-70e994a09bc9-etc-swift\") pod \"swift-storage-0\" (UID: \"04bbfc10-7f55-45a5-8a53-70e994a09bc9\") " pod="openstack/swift-storage-0" Mar 10 19:09:06 crc kubenswrapper[4861]: I0310 19:09:06.333565 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/6f6cb862-e812-41b0-bf94-1c3b5ebb51a4-etc-swift\") pod \"swift-ring-rebalance-jvlmr\" (UID: \"6f6cb862-e812-41b0-bf94-1c3b5ebb51a4\") " pod="openstack/swift-ring-rebalance-jvlmr" Mar 10 19:09:06 crc kubenswrapper[4861]: I0310 19:09:06.333583 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6f6cb862-e812-41b0-bf94-1c3b5ebb51a4-scripts\") pod \"swift-ring-rebalance-jvlmr\" (UID: 
\"6f6cb862-e812-41b0-bf94-1c3b5ebb51a4\") " pod="openstack/swift-ring-rebalance-jvlmr" Mar 10 19:09:06 crc kubenswrapper[4861]: I0310 19:09:06.333615 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n7qvd\" (UniqueName: \"kubernetes.io/projected/6f6cb862-e812-41b0-bf94-1c3b5ebb51a4-kube-api-access-n7qvd\") pod \"swift-ring-rebalance-jvlmr\" (UID: \"6f6cb862-e812-41b0-bf94-1c3b5ebb51a4\") " pod="openstack/swift-ring-rebalance-jvlmr" Mar 10 19:09:06 crc kubenswrapper[4861]: I0310 19:09:06.333756 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/6f6cb862-e812-41b0-bf94-1c3b5ebb51a4-swiftconf\") pod \"swift-ring-rebalance-jvlmr\" (UID: \"6f6cb862-e812-41b0-bf94-1c3b5ebb51a4\") " pod="openstack/swift-ring-rebalance-jvlmr" Mar 10 19:09:06 crc kubenswrapper[4861]: I0310 19:09:06.333792 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/6f6cb862-e812-41b0-bf94-1c3b5ebb51a4-dispersionconf\") pod \"swift-ring-rebalance-jvlmr\" (UID: \"6f6cb862-e812-41b0-bf94-1c3b5ebb51a4\") " pod="openstack/swift-ring-rebalance-jvlmr" Mar 10 19:09:06 crc kubenswrapper[4861]: E0310 19:09:06.333814 4861 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Mar 10 19:09:06 crc kubenswrapper[4861]: E0310 19:09:06.333838 4861 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Mar 10 19:09:06 crc kubenswrapper[4861]: E0310 19:09:06.333883 4861 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/04bbfc10-7f55-45a5-8a53-70e994a09bc9-etc-swift podName:04bbfc10-7f55-45a5-8a53-70e994a09bc9 nodeName:}" failed. 
No retries permitted until 2026-03-10 19:09:07.333866098 +0000 UTC m=+1291.097302158 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/04bbfc10-7f55-45a5-8a53-70e994a09bc9-etc-swift") pod "swift-storage-0" (UID: "04bbfc10-7f55-45a5-8a53-70e994a09bc9") : configmap "swift-ring-files" not found Mar 10 19:09:06 crc kubenswrapper[4861]: I0310 19:09:06.333906 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6f6cb862-e812-41b0-bf94-1c3b5ebb51a4-combined-ca-bundle\") pod \"swift-ring-rebalance-jvlmr\" (UID: \"6f6cb862-e812-41b0-bf94-1c3b5ebb51a4\") " pod="openstack/swift-ring-rebalance-jvlmr" Mar 10 19:09:06 crc kubenswrapper[4861]: I0310 19:09:06.333938 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/6f6cb862-e812-41b0-bf94-1c3b5ebb51a4-ring-data-devices\") pod \"swift-ring-rebalance-jvlmr\" (UID: \"6f6cb862-e812-41b0-bf94-1c3b5ebb51a4\") " pod="openstack/swift-ring-rebalance-jvlmr" Mar 10 19:09:06 crc kubenswrapper[4861]: I0310 19:09:06.334094 4861 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/da02c50a-02b8-4363-9e5b-8a8c7c3c9b83-config\") on node \"crc\" DevicePath \"\"" Mar 10 19:09:06 crc kubenswrapper[4861]: I0310 19:09:06.334108 4861 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/da02c50a-02b8-4363-9e5b-8a8c7c3c9b83-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Mar 10 19:09:06 crc kubenswrapper[4861]: I0310 19:09:06.334118 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cqxtp\" (UniqueName: \"kubernetes.io/projected/da02c50a-02b8-4363-9e5b-8a8c7c3c9b83-kube-api-access-cqxtp\") on node \"crc\" DevicePath \"\"" Mar 10 19:09:06 
crc kubenswrapper[4861]: I0310 19:09:06.334127 4861 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/da02c50a-02b8-4363-9e5b-8a8c7c3c9b83-dns-svc\") on node \"crc\" DevicePath \"\"" Mar 10 19:09:06 crc kubenswrapper[4861]: I0310 19:09:06.435533 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n7qvd\" (UniqueName: \"kubernetes.io/projected/6f6cb862-e812-41b0-bf94-1c3b5ebb51a4-kube-api-access-n7qvd\") pod \"swift-ring-rebalance-jvlmr\" (UID: \"6f6cb862-e812-41b0-bf94-1c3b5ebb51a4\") " pod="openstack/swift-ring-rebalance-jvlmr" Mar 10 19:09:06 crc kubenswrapper[4861]: I0310 19:09:06.435759 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/6f6cb862-e812-41b0-bf94-1c3b5ebb51a4-swiftconf\") pod \"swift-ring-rebalance-jvlmr\" (UID: \"6f6cb862-e812-41b0-bf94-1c3b5ebb51a4\") " pod="openstack/swift-ring-rebalance-jvlmr" Mar 10 19:09:06 crc kubenswrapper[4861]: I0310 19:09:06.435854 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/6f6cb862-e812-41b0-bf94-1c3b5ebb51a4-dispersionconf\") pod \"swift-ring-rebalance-jvlmr\" (UID: \"6f6cb862-e812-41b0-bf94-1c3b5ebb51a4\") " pod="openstack/swift-ring-rebalance-jvlmr" Mar 10 19:09:06 crc kubenswrapper[4861]: I0310 19:09:06.435935 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6f6cb862-e812-41b0-bf94-1c3b5ebb51a4-combined-ca-bundle\") pod \"swift-ring-rebalance-jvlmr\" (UID: \"6f6cb862-e812-41b0-bf94-1c3b5ebb51a4\") " pod="openstack/swift-ring-rebalance-jvlmr" Mar 10 19:09:06 crc kubenswrapper[4861]: I0310 19:09:06.436024 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: 
\"kubernetes.io/configmap/6f6cb862-e812-41b0-bf94-1c3b5ebb51a4-ring-data-devices\") pod \"swift-ring-rebalance-jvlmr\" (UID: \"6f6cb862-e812-41b0-bf94-1c3b5ebb51a4\") " pod="openstack/swift-ring-rebalance-jvlmr" Mar 10 19:09:06 crc kubenswrapper[4861]: I0310 19:09:06.436176 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/6f6cb862-e812-41b0-bf94-1c3b5ebb51a4-etc-swift\") pod \"swift-ring-rebalance-jvlmr\" (UID: \"6f6cb862-e812-41b0-bf94-1c3b5ebb51a4\") " pod="openstack/swift-ring-rebalance-jvlmr" Mar 10 19:09:06 crc kubenswrapper[4861]: I0310 19:09:06.436245 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6f6cb862-e812-41b0-bf94-1c3b5ebb51a4-scripts\") pod \"swift-ring-rebalance-jvlmr\" (UID: \"6f6cb862-e812-41b0-bf94-1c3b5ebb51a4\") " pod="openstack/swift-ring-rebalance-jvlmr" Mar 10 19:09:06 crc kubenswrapper[4861]: I0310 19:09:06.436687 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/6f6cb862-e812-41b0-bf94-1c3b5ebb51a4-etc-swift\") pod \"swift-ring-rebalance-jvlmr\" (UID: \"6f6cb862-e812-41b0-bf94-1c3b5ebb51a4\") " pod="openstack/swift-ring-rebalance-jvlmr" Mar 10 19:09:06 crc kubenswrapper[4861]: I0310 19:09:06.436882 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/6f6cb862-e812-41b0-bf94-1c3b5ebb51a4-ring-data-devices\") pod \"swift-ring-rebalance-jvlmr\" (UID: \"6f6cb862-e812-41b0-bf94-1c3b5ebb51a4\") " pod="openstack/swift-ring-rebalance-jvlmr" Mar 10 19:09:06 crc kubenswrapper[4861]: I0310 19:09:06.437099 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6f6cb862-e812-41b0-bf94-1c3b5ebb51a4-scripts\") pod \"swift-ring-rebalance-jvlmr\" (UID: 
\"6f6cb862-e812-41b0-bf94-1c3b5ebb51a4\") " pod="openstack/swift-ring-rebalance-jvlmr" Mar 10 19:09:06 crc kubenswrapper[4861]: I0310 19:09:06.439087 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/6f6cb862-e812-41b0-bf94-1c3b5ebb51a4-swiftconf\") pod \"swift-ring-rebalance-jvlmr\" (UID: \"6f6cb862-e812-41b0-bf94-1c3b5ebb51a4\") " pod="openstack/swift-ring-rebalance-jvlmr" Mar 10 19:09:06 crc kubenswrapper[4861]: I0310 19:09:06.440084 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6f6cb862-e812-41b0-bf94-1c3b5ebb51a4-combined-ca-bundle\") pod \"swift-ring-rebalance-jvlmr\" (UID: \"6f6cb862-e812-41b0-bf94-1c3b5ebb51a4\") " pod="openstack/swift-ring-rebalance-jvlmr" Mar 10 19:09:06 crc kubenswrapper[4861]: I0310 19:09:06.440165 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/6f6cb862-e812-41b0-bf94-1c3b5ebb51a4-dispersionconf\") pod \"swift-ring-rebalance-jvlmr\" (UID: \"6f6cb862-e812-41b0-bf94-1c3b5ebb51a4\") " pod="openstack/swift-ring-rebalance-jvlmr" Mar 10 19:09:06 crc kubenswrapper[4861]: I0310 19:09:06.451382 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n7qvd\" (UniqueName: \"kubernetes.io/projected/6f6cb862-e812-41b0-bf94-1c3b5ebb51a4-kube-api-access-n7qvd\") pod \"swift-ring-rebalance-jvlmr\" (UID: \"6f6cb862-e812-41b0-bf94-1c3b5ebb51a4\") " pod="openstack/swift-ring-rebalance-jvlmr" Mar 10 19:09:06 crc kubenswrapper[4861]: I0310 19:09:06.496095 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-jvlmr" Mar 10 19:09:06 crc kubenswrapper[4861]: I0310 19:09:06.787612 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6444958b7f-8hgpw" Mar 10 19:09:06 crc kubenswrapper[4861]: I0310 19:09:06.790851 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6444958b7f-8hgpw" event={"ID":"da02c50a-02b8-4363-9e5b-8a8c7c3c9b83","Type":"ContainerDied","Data":"6c7a2e152bd974329182875f6cd9cb4d9bcf8cd231a421821280fb3fd6b88fca"} Mar 10 19:09:06 crc kubenswrapper[4861]: I0310 19:09:06.790915 4861 scope.go:117] "RemoveContainer" containerID="89c07d2c5456b1aaee8d060db586fcee7a840d3278a1f77243a25ca9d94be56a" Mar 10 19:09:06 crc kubenswrapper[4861]: I0310 19:09:06.793271 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"0ba95f55-3cea-4f0b-8f09-c6b4027789f8","Type":"ContainerStarted","Data":"6d99d0616ecc46c18b64d18128318acd7044610e3f4e749ae04d9ed4478404a9"} Mar 10 19:09:06 crc kubenswrapper[4861]: I0310 19:09:06.803473 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"8c1ba054-6941-4e52-b792-250287f25d92","Type":"ContainerStarted","Data":"68b9f900ee27424079091d09f2809c5e2d49434a357e98e6fdeb385cd8222d86"} Mar 10 19:09:06 crc kubenswrapper[4861]: I0310 19:09:06.803509 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"8c1ba054-6941-4e52-b792-250287f25d92","Type":"ContainerStarted","Data":"454e9ba506fcf1b8f98215867245437014efc66d7c6d48c37e44b356c6a592d8"} Mar 10 19:09:06 crc kubenswrapper[4861]: I0310 19:09:06.804219 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0" Mar 10 19:09:06 crc kubenswrapper[4861]: I0310 19:09:06.807165 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7b57d9888c-jd4f8" event={"ID":"b33ea429-fdf5-482e-99b6-b4f7d0103988","Type":"ContainerStarted","Data":"5ee7bbb3af9d004520d5286c816a8234f0eb60c0e151bd1f7661638abcb2d8da"} Mar 10 19:09:06 crc kubenswrapper[4861]: I0310 
19:09:06.808117 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7b57d9888c-jd4f8" Mar 10 19:09:06 crc kubenswrapper[4861]: I0310 19:09:06.812814 4861 generic.go:334] "Generic (PLEG): container finished" podID="f3cf759d-7dce-473b-b790-ae9e344c2245" containerID="5503e1e395d471251b6899856a4db0244076b1b7ddb519e38ef971737aa6ee17" exitCode=0 Mar 10 19:09:06 crc kubenswrapper[4861]: I0310 19:09:06.813577 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f7dd995-bdz4g" event={"ID":"f3cf759d-7dce-473b-b790-ae9e344c2245","Type":"ContainerDied","Data":"5503e1e395d471251b6899856a4db0244076b1b7ddb519e38ef971737aa6ee17"} Mar 10 19:09:06 crc kubenswrapper[4861]: I0310 19:09:06.909074 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6444958b7f-8hgpw"] Mar 10 19:09:06 crc kubenswrapper[4861]: I0310 19:09:06.916545 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6444958b7f-8hgpw"] Mar 10 19:09:06 crc kubenswrapper[4861]: I0310 19:09:06.917615 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7b57d9888c-jd4f8" podStartSLOduration=3.917597985 podStartE2EDuration="3.917597985s" podCreationTimestamp="2026-03-10 19:09:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-10 19:09:06.90987133 +0000 UTC m=+1290.673307300" watchObservedRunningTime="2026-03-10 19:09:06.917597985 +0000 UTC m=+1290.681033945" Mar 10 19:09:06 crc kubenswrapper[4861]: I0310 19:09:06.952196 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=2.001568995 podStartE2EDuration="3.952172556s" podCreationTimestamp="2026-03-10 19:09:03 +0000 UTC" firstStartedPulling="2026-03-10 19:09:04.161878813 +0000 UTC m=+1287.925314773" lastFinishedPulling="2026-03-10 
19:09:06.112482384 +0000 UTC m=+1289.875918334" observedRunningTime="2026-03-10 19:09:06.933749344 +0000 UTC m=+1290.697185304" watchObservedRunningTime="2026-03-10 19:09:06.952172556 +0000 UTC m=+1290.715608516" Mar 10 19:09:06 crc kubenswrapper[4861]: I0310 19:09:06.972961 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5adbfde9-b062-424b-8ab9-641e35ace118" path="/var/lib/kubelet/pods/5adbfde9-b062-424b-8ab9-641e35ace118/volumes" Mar 10 19:09:06 crc kubenswrapper[4861]: I0310 19:09:06.973507 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6a51c99a-f2a2-4db2-8ac4-adf514cad8a3" path="/var/lib/kubelet/pods/6a51c99a-f2a2-4db2-8ac4-adf514cad8a3/volumes" Mar 10 19:09:06 crc kubenswrapper[4861]: I0310 19:09:06.974061 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="da02c50a-02b8-4363-9e5b-8a8c7c3c9b83" path="/var/lib/kubelet/pods/da02c50a-02b8-4363-9e5b-8a8c7c3c9b83/volumes" Mar 10 19:09:06 crc kubenswrapper[4861]: I0310 19:09:06.975041 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-jvlmr"] Mar 10 19:09:07 crc kubenswrapper[4861]: I0310 19:09:07.350919 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/04bbfc10-7f55-45a5-8a53-70e994a09bc9-etc-swift\") pod \"swift-storage-0\" (UID: \"04bbfc10-7f55-45a5-8a53-70e994a09bc9\") " pod="openstack/swift-storage-0" Mar 10 19:09:07 crc kubenswrapper[4861]: E0310 19:09:07.351515 4861 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Mar 10 19:09:07 crc kubenswrapper[4861]: E0310 19:09:07.351585 4861 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Mar 10 19:09:07 crc kubenswrapper[4861]: E0310 19:09:07.351700 4861 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/projected/04bbfc10-7f55-45a5-8a53-70e994a09bc9-etc-swift podName:04bbfc10-7f55-45a5-8a53-70e994a09bc9 nodeName:}" failed. No retries permitted until 2026-03-10 19:09:09.351664571 +0000 UTC m=+1293.115100581 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/04bbfc10-7f55-45a5-8a53-70e994a09bc9-etc-swift") pod "swift-storage-0" (UID: "04bbfc10-7f55-45a5-8a53-70e994a09bc9") : configmap "swift-ring-files" not found Mar 10 19:09:07 crc kubenswrapper[4861]: I0310 19:09:07.829476 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f7dd995-bdz4g" event={"ID":"f3cf759d-7dce-473b-b790-ae9e344c2245","Type":"ContainerStarted","Data":"ba81fb9f78de7159f39585f455b8f2a47d296bdae53d5967f444e268d0eb57e4"} Mar 10 19:09:07 crc kubenswrapper[4861]: I0310 19:09:07.829672 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-675f7dd995-bdz4g" Mar 10 19:09:07 crc kubenswrapper[4861]: I0310 19:09:07.833764 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"9fa4a97d-682a-40eb-93e0-5f5167ddb0a0","Type":"ContainerStarted","Data":"95f08c47746d5695816808d7eca3f02837aedc052d446257d2df3f9e5efa4899"} Mar 10 19:09:07 crc kubenswrapper[4861]: I0310 19:09:07.835347 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-jvlmr" event={"ID":"6f6cb862-e812-41b0-bf94-1c3b5ebb51a4","Type":"ContainerStarted","Data":"0be2c4e6acaa5639fd83bdbeee71d5f3993f89f61d8f39176c09fc5082400c15"} Mar 10 19:09:07 crc kubenswrapper[4861]: I0310 19:09:07.893359 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-675f7dd995-bdz4g" podStartSLOduration=3.893337648 podStartE2EDuration="3.893337648s" podCreationTimestamp="2026-03-10 19:09:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-10 19:09:07.884146573 +0000 UTC m=+1291.647582563" watchObservedRunningTime="2026-03-10 19:09:07.893337648 +0000 UTC m=+1291.656773628" Mar 10 19:09:07 crc kubenswrapper[4861]: I0310 19:09:07.939343 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0" Mar 10 19:09:08 crc kubenswrapper[4861]: I0310 19:09:08.073456 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0" Mar 10 19:09:09 crc kubenswrapper[4861]: I0310 19:09:09.405734 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/04bbfc10-7f55-45a5-8a53-70e994a09bc9-etc-swift\") pod \"swift-storage-0\" (UID: \"04bbfc10-7f55-45a5-8a53-70e994a09bc9\") " pod="openstack/swift-storage-0" Mar 10 19:09:09 crc kubenswrapper[4861]: E0310 19:09:09.405877 4861 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Mar 10 19:09:09 crc kubenswrapper[4861]: E0310 19:09:09.406218 4861 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Mar 10 19:09:09 crc kubenswrapper[4861]: E0310 19:09:09.406296 4861 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/04bbfc10-7f55-45a5-8a53-70e994a09bc9-etc-swift podName:04bbfc10-7f55-45a5-8a53-70e994a09bc9 nodeName:}" failed. No retries permitted until 2026-03-10 19:09:13.406274144 +0000 UTC m=+1297.169710114 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/04bbfc10-7f55-45a5-8a53-70e994a09bc9-etc-swift") pod "swift-storage-0" (UID: "04bbfc10-7f55-45a5-8a53-70e994a09bc9") : configmap "swift-ring-files" not found Mar 10 19:09:10 crc kubenswrapper[4861]: I0310 19:09:10.375781 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0" Mar 10 19:09:10 crc kubenswrapper[4861]: I0310 19:09:10.375947 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0" Mar 10 19:09:10 crc kubenswrapper[4861]: I0310 19:09:10.480142 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0" Mar 10 19:09:10 crc kubenswrapper[4861]: I0310 19:09:10.489508 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-vjxc7"] Mar 10 19:09:10 crc kubenswrapper[4861]: I0310 19:09:10.490584 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-vjxc7" Mar 10 19:09:10 crc kubenswrapper[4861]: I0310 19:09:10.492499 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-mariadb-root-db-secret" Mar 10 19:09:10 crc kubenswrapper[4861]: I0310 19:09:10.497185 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-vjxc7"] Mar 10 19:09:10 crc kubenswrapper[4861]: I0310 19:09:10.529730 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-frrtl\" (UniqueName: \"kubernetes.io/projected/8710a812-40f8-4a31-ac00-99eebd8601a4-kube-api-access-frrtl\") pod \"root-account-create-update-vjxc7\" (UID: \"8710a812-40f8-4a31-ac00-99eebd8601a4\") " pod="openstack/root-account-create-update-vjxc7" Mar 10 19:09:10 crc kubenswrapper[4861]: I0310 19:09:10.529841 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8710a812-40f8-4a31-ac00-99eebd8601a4-operator-scripts\") pod \"root-account-create-update-vjxc7\" (UID: \"8710a812-40f8-4a31-ac00-99eebd8601a4\") " pod="openstack/root-account-create-update-vjxc7" Mar 10 19:09:10 crc kubenswrapper[4861]: I0310 19:09:10.631780 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8710a812-40f8-4a31-ac00-99eebd8601a4-operator-scripts\") pod \"root-account-create-update-vjxc7\" (UID: \"8710a812-40f8-4a31-ac00-99eebd8601a4\") " pod="openstack/root-account-create-update-vjxc7" Mar 10 19:09:10 crc kubenswrapper[4861]: I0310 19:09:10.631939 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-frrtl\" (UniqueName: \"kubernetes.io/projected/8710a812-40f8-4a31-ac00-99eebd8601a4-kube-api-access-frrtl\") pod \"root-account-create-update-vjxc7\" (UID: 
\"8710a812-40f8-4a31-ac00-99eebd8601a4\") " pod="openstack/root-account-create-update-vjxc7" Mar 10 19:09:10 crc kubenswrapper[4861]: I0310 19:09:10.633069 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8710a812-40f8-4a31-ac00-99eebd8601a4-operator-scripts\") pod \"root-account-create-update-vjxc7\" (UID: \"8710a812-40f8-4a31-ac00-99eebd8601a4\") " pod="openstack/root-account-create-update-vjxc7" Mar 10 19:09:10 crc kubenswrapper[4861]: I0310 19:09:10.652029 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-frrtl\" (UniqueName: \"kubernetes.io/projected/8710a812-40f8-4a31-ac00-99eebd8601a4-kube-api-access-frrtl\") pod \"root-account-create-update-vjxc7\" (UID: \"8710a812-40f8-4a31-ac00-99eebd8601a4\") " pod="openstack/root-account-create-update-vjxc7" Mar 10 19:09:10 crc kubenswrapper[4861]: I0310 19:09:10.840520 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-vjxc7" Mar 10 19:09:11 crc kubenswrapper[4861]: I0310 19:09:11.056695 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0" Mar 10 19:09:11 crc kubenswrapper[4861]: W0310 19:09:11.312992 4861 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8710a812_40f8_4a31_ac00_99eebd8601a4.slice/crio-add66bb6bcd53339e05fe4ed5405bff2c9e505bc08be05c7f9f8ad274ed22dad WatchSource:0}: Error finding container add66bb6bcd53339e05fe4ed5405bff2c9e505bc08be05c7f9f8ad274ed22dad: Status 404 returned error can't find the container with id add66bb6bcd53339e05fe4ed5405bff2c9e505bc08be05c7f9f8ad274ed22dad Mar 10 19:09:11 crc kubenswrapper[4861]: I0310 19:09:11.317023 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-vjxc7"] Mar 10 19:09:11 crc kubenswrapper[4861]: I0310 19:09:11.905222 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-vjxc7" event={"ID":"8710a812-40f8-4a31-ac00-99eebd8601a4","Type":"ContainerStarted","Data":"63cfc8310694fae8a67d0927f3abb52f73bbeca623136d9dda1e0c74522bd5e6"} Mar 10 19:09:11 crc kubenswrapper[4861]: I0310 19:09:11.905562 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-vjxc7" event={"ID":"8710a812-40f8-4a31-ac00-99eebd8601a4","Type":"ContainerStarted","Data":"add66bb6bcd53339e05fe4ed5405bff2c9e505bc08be05c7f9f8ad274ed22dad"} Mar 10 19:09:11 crc kubenswrapper[4861]: I0310 19:09:11.909202 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-jvlmr" event={"ID":"6f6cb862-e812-41b0-bf94-1c3b5ebb51a4","Type":"ContainerStarted","Data":"ee002433f78bbf5cdee3b5fdcbfa95ad2359ab94c9a3617972159ab8bb34f811"} Mar 10 19:09:11 crc kubenswrapper[4861]: I0310 19:09:11.987093 4861 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-ring-rebalance-jvlmr" podStartSLOduration=2.127151777 podStartE2EDuration="5.987070823s" podCreationTimestamp="2026-03-10 19:09:06 +0000 UTC" firstStartedPulling="2026-03-10 19:09:07.00158849 +0000 UTC m=+1290.765024450" lastFinishedPulling="2026-03-10 19:09:10.861507496 +0000 UTC m=+1294.624943496" observedRunningTime="2026-03-10 19:09:11.979385971 +0000 UTC m=+1295.742821961" watchObservedRunningTime="2026-03-10 19:09:11.987070823 +0000 UTC m=+1295.750506793" Mar 10 19:09:12 crc kubenswrapper[4861]: I0310 19:09:12.208288 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-sl4sb"] Mar 10 19:09:12 crc kubenswrapper[4861]: I0310 19:09:12.210314 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-sl4sb" Mar 10 19:09:12 crc kubenswrapper[4861]: I0310 19:09:12.224313 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-sl4sb"] Mar 10 19:09:12 crc kubenswrapper[4861]: I0310 19:09:12.317354 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-822e-account-create-update-rcx69"] Mar 10 19:09:12 crc kubenswrapper[4861]: I0310 19:09:12.318278 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-822e-account-create-update-rcx69" Mar 10 19:09:12 crc kubenswrapper[4861]: I0310 19:09:12.323464 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret" Mar 10 19:09:12 crc kubenswrapper[4861]: I0310 19:09:12.333488 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-822e-account-create-update-rcx69"] Mar 10 19:09:12 crc kubenswrapper[4861]: I0310 19:09:12.359982 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8z972\" (UniqueName: \"kubernetes.io/projected/a0377327-c5da-4e36-b9a3-462513bbd9d2-kube-api-access-8z972\") pod \"glance-db-create-sl4sb\" (UID: \"a0377327-c5da-4e36-b9a3-462513bbd9d2\") " pod="openstack/glance-db-create-sl4sb" Mar 10 19:09:12 crc kubenswrapper[4861]: I0310 19:09:12.360157 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a0377327-c5da-4e36-b9a3-462513bbd9d2-operator-scripts\") pod \"glance-db-create-sl4sb\" (UID: \"a0377327-c5da-4e36-b9a3-462513bbd9d2\") " pod="openstack/glance-db-create-sl4sb" Mar 10 19:09:12 crc kubenswrapper[4861]: I0310 19:09:12.462281 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a0377327-c5da-4e36-b9a3-462513bbd9d2-operator-scripts\") pod \"glance-db-create-sl4sb\" (UID: \"a0377327-c5da-4e36-b9a3-462513bbd9d2\") " pod="openstack/glance-db-create-sl4sb" Mar 10 19:09:12 crc kubenswrapper[4861]: I0310 19:09:12.462759 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dq622\" (UniqueName: \"kubernetes.io/projected/a9ec2818-0613-4fd1-8373-8c06c0b24489-kube-api-access-dq622\") pod \"glance-822e-account-create-update-rcx69\" (UID: \"a9ec2818-0613-4fd1-8373-8c06c0b24489\") " 
pod="openstack/glance-822e-account-create-update-rcx69" Mar 10 19:09:12 crc kubenswrapper[4861]: I0310 19:09:12.462833 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a9ec2818-0613-4fd1-8373-8c06c0b24489-operator-scripts\") pod \"glance-822e-account-create-update-rcx69\" (UID: \"a9ec2818-0613-4fd1-8373-8c06c0b24489\") " pod="openstack/glance-822e-account-create-update-rcx69" Mar 10 19:09:12 crc kubenswrapper[4861]: I0310 19:09:12.462904 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8z972\" (UniqueName: \"kubernetes.io/projected/a0377327-c5da-4e36-b9a3-462513bbd9d2-kube-api-access-8z972\") pod \"glance-db-create-sl4sb\" (UID: \"a0377327-c5da-4e36-b9a3-462513bbd9d2\") " pod="openstack/glance-db-create-sl4sb" Mar 10 19:09:12 crc kubenswrapper[4861]: I0310 19:09:12.463746 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a0377327-c5da-4e36-b9a3-462513bbd9d2-operator-scripts\") pod \"glance-db-create-sl4sb\" (UID: \"a0377327-c5da-4e36-b9a3-462513bbd9d2\") " pod="openstack/glance-db-create-sl4sb" Mar 10 19:09:12 crc kubenswrapper[4861]: I0310 19:09:12.495373 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8z972\" (UniqueName: \"kubernetes.io/projected/a0377327-c5da-4e36-b9a3-462513bbd9d2-kube-api-access-8z972\") pod \"glance-db-create-sl4sb\" (UID: \"a0377327-c5da-4e36-b9a3-462513bbd9d2\") " pod="openstack/glance-db-create-sl4sb" Mar 10 19:09:12 crc kubenswrapper[4861]: I0310 19:09:12.531521 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-sl4sb" Mar 10 19:09:12 crc kubenswrapper[4861]: I0310 19:09:12.564795 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dq622\" (UniqueName: \"kubernetes.io/projected/a9ec2818-0613-4fd1-8373-8c06c0b24489-kube-api-access-dq622\") pod \"glance-822e-account-create-update-rcx69\" (UID: \"a9ec2818-0613-4fd1-8373-8c06c0b24489\") " pod="openstack/glance-822e-account-create-update-rcx69" Mar 10 19:09:12 crc kubenswrapper[4861]: I0310 19:09:12.564854 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a9ec2818-0613-4fd1-8373-8c06c0b24489-operator-scripts\") pod \"glance-822e-account-create-update-rcx69\" (UID: \"a9ec2818-0613-4fd1-8373-8c06c0b24489\") " pod="openstack/glance-822e-account-create-update-rcx69" Mar 10 19:09:12 crc kubenswrapper[4861]: I0310 19:09:12.565656 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a9ec2818-0613-4fd1-8373-8c06c0b24489-operator-scripts\") pod \"glance-822e-account-create-update-rcx69\" (UID: \"a9ec2818-0613-4fd1-8373-8c06c0b24489\") " pod="openstack/glance-822e-account-create-update-rcx69" Mar 10 19:09:12 crc kubenswrapper[4861]: I0310 19:09:12.586259 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dq622\" (UniqueName: \"kubernetes.io/projected/a9ec2818-0613-4fd1-8373-8c06c0b24489-kube-api-access-dq622\") pod \"glance-822e-account-create-update-rcx69\" (UID: \"a9ec2818-0613-4fd1-8373-8c06c0b24489\") " pod="openstack/glance-822e-account-create-update-rcx69" Mar 10 19:09:12 crc kubenswrapper[4861]: I0310 19:09:12.633596 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-822e-account-create-update-rcx69" Mar 10 19:09:12 crc kubenswrapper[4861]: I0310 19:09:12.925136 4861 generic.go:334] "Generic (PLEG): container finished" podID="8710a812-40f8-4a31-ac00-99eebd8601a4" containerID="63cfc8310694fae8a67d0927f3abb52f73bbeca623136d9dda1e0c74522bd5e6" exitCode=0 Mar 10 19:09:12 crc kubenswrapper[4861]: I0310 19:09:12.927252 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-vjxc7" event={"ID":"8710a812-40f8-4a31-ac00-99eebd8601a4","Type":"ContainerDied","Data":"63cfc8310694fae8a67d0927f3abb52f73bbeca623136d9dda1e0c74522bd5e6"} Mar 10 19:09:12 crc kubenswrapper[4861]: I0310 19:09:12.929866 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-822e-account-create-update-rcx69"] Mar 10 19:09:12 crc kubenswrapper[4861]: W0310 19:09:12.940072 4861 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda9ec2818_0613_4fd1_8373_8c06c0b24489.slice/crio-79cd664a82bd76a87be2c33e70ef527f122f245038c511c114300976d9b9fa8a WatchSource:0}: Error finding container 79cd664a82bd76a87be2c33e70ef527f122f245038c511c114300976d9b9fa8a: Status 404 returned error can't find the container with id 79cd664a82bd76a87be2c33e70ef527f122f245038c511c114300976d9b9fa8a Mar 10 19:09:13 crc kubenswrapper[4861]: I0310 19:09:13.006948 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-6l9h7"] Mar 10 19:09:13 crc kubenswrapper[4861]: I0310 19:09:13.008419 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-6l9h7" Mar 10 19:09:13 crc kubenswrapper[4861]: I0310 19:09:13.013726 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-6l9h7"] Mar 10 19:09:13 crc kubenswrapper[4861]: W0310 19:09:13.069874 4861 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda0377327_c5da_4e36_b9a3_462513bbd9d2.slice/crio-22940bfae441aa646a62966b1eec2a6e2e66c5a79ec2745d7abf2baf0fc1f435 WatchSource:0}: Error finding container 22940bfae441aa646a62966b1eec2a6e2e66c5a79ec2745d7abf2baf0fc1f435: Status 404 returned error can't find the container with id 22940bfae441aa646a62966b1eec2a6e2e66c5a79ec2745d7abf2baf0fc1f435 Mar 10 19:09:13 crc kubenswrapper[4861]: I0310 19:09:13.075822 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-sl4sb"] Mar 10 19:09:13 crc kubenswrapper[4861]: I0310 19:09:13.116545 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-6638-account-create-update-kc2pp"] Mar 10 19:09:13 crc kubenswrapper[4861]: I0310 19:09:13.117543 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-6638-account-create-update-kc2pp" Mar 10 19:09:13 crc kubenswrapper[4861]: I0310 19:09:13.119787 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret" Mar 10 19:09:13 crc kubenswrapper[4861]: I0310 19:09:13.129375 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-6638-account-create-update-kc2pp"] Mar 10 19:09:13 crc kubenswrapper[4861]: I0310 19:09:13.177429 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ec186926-88f6-4c2f-b44d-e44d62d9d02d-operator-scripts\") pod \"keystone-db-create-6l9h7\" (UID: \"ec186926-88f6-4c2f-b44d-e44d62d9d02d\") " pod="openstack/keystone-db-create-6l9h7" Mar 10 19:09:13 crc kubenswrapper[4861]: I0310 19:09:13.177614 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8dkmn\" (UniqueName: \"kubernetes.io/projected/ec186926-88f6-4c2f-b44d-e44d62d9d02d-kube-api-access-8dkmn\") pod \"keystone-db-create-6l9h7\" (UID: \"ec186926-88f6-4c2f-b44d-e44d62d9d02d\") " pod="openstack/keystone-db-create-6l9h7" Mar 10 19:09:13 crc kubenswrapper[4861]: I0310 19:09:13.220635 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-frs8p"] Mar 10 19:09:13 crc kubenswrapper[4861]: I0310 19:09:13.221936 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-frs8p" Mar 10 19:09:13 crc kubenswrapper[4861]: I0310 19:09:13.239454 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-frs8p"] Mar 10 19:09:13 crc kubenswrapper[4861]: I0310 19:09:13.278436 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ec186926-88f6-4c2f-b44d-e44d62d9d02d-operator-scripts\") pod \"keystone-db-create-6l9h7\" (UID: \"ec186926-88f6-4c2f-b44d-e44d62d9d02d\") " pod="openstack/keystone-db-create-6l9h7" Mar 10 19:09:13 crc kubenswrapper[4861]: I0310 19:09:13.278507 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7fc483f5-15c5-4a67-b7e1-adab3b97cec7-operator-scripts\") pod \"keystone-6638-account-create-update-kc2pp\" (UID: \"7fc483f5-15c5-4a67-b7e1-adab3b97cec7\") " pod="openstack/keystone-6638-account-create-update-kc2pp" Mar 10 19:09:13 crc kubenswrapper[4861]: I0310 19:09:13.278556 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5jthz\" (UniqueName: \"kubernetes.io/projected/7fc483f5-15c5-4a67-b7e1-adab3b97cec7-kube-api-access-5jthz\") pod \"keystone-6638-account-create-update-kc2pp\" (UID: \"7fc483f5-15c5-4a67-b7e1-adab3b97cec7\") " pod="openstack/keystone-6638-account-create-update-kc2pp" Mar 10 19:09:13 crc kubenswrapper[4861]: I0310 19:09:13.278629 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8dkmn\" (UniqueName: \"kubernetes.io/projected/ec186926-88f6-4c2f-b44d-e44d62d9d02d-kube-api-access-8dkmn\") pod \"keystone-db-create-6l9h7\" (UID: \"ec186926-88f6-4c2f-b44d-e44d62d9d02d\") " pod="openstack/keystone-db-create-6l9h7" Mar 10 19:09:13 crc kubenswrapper[4861]: I0310 19:09:13.281434 4861 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ec186926-88f6-4c2f-b44d-e44d62d9d02d-operator-scripts\") pod \"keystone-db-create-6l9h7\" (UID: \"ec186926-88f6-4c2f-b44d-e44d62d9d02d\") " pod="openstack/keystone-db-create-6l9h7" Mar 10 19:09:13 crc kubenswrapper[4861]: I0310 19:09:13.298017 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8dkmn\" (UniqueName: \"kubernetes.io/projected/ec186926-88f6-4c2f-b44d-e44d62d9d02d-kube-api-access-8dkmn\") pod \"keystone-db-create-6l9h7\" (UID: \"ec186926-88f6-4c2f-b44d-e44d62d9d02d\") " pod="openstack/keystone-db-create-6l9h7" Mar 10 19:09:13 crc kubenswrapper[4861]: I0310 19:09:13.314428 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-a4bb-account-create-update-dblgj"] Mar 10 19:09:13 crc kubenswrapper[4861]: I0310 19:09:13.315340 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-a4bb-account-create-update-dblgj" Mar 10 19:09:13 crc kubenswrapper[4861]: I0310 19:09:13.317311 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret" Mar 10 19:09:13 crc kubenswrapper[4861]: I0310 19:09:13.321611 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-a4bb-account-create-update-dblgj"] Mar 10 19:09:13 crc kubenswrapper[4861]: I0310 19:09:13.380054 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8msw9\" (UniqueName: \"kubernetes.io/projected/37e93e0a-bc82-45fa-a340-a7eb189f2657-kube-api-access-8msw9\") pod \"placement-db-create-frs8p\" (UID: \"37e93e0a-bc82-45fa-a340-a7eb189f2657\") " pod="openstack/placement-db-create-frs8p" Mar 10 19:09:13 crc kubenswrapper[4861]: I0310 19:09:13.380168 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/7fc483f5-15c5-4a67-b7e1-adab3b97cec7-operator-scripts\") pod \"keystone-6638-account-create-update-kc2pp\" (UID: \"7fc483f5-15c5-4a67-b7e1-adab3b97cec7\") " pod="openstack/keystone-6638-account-create-update-kc2pp" Mar 10 19:09:13 crc kubenswrapper[4861]: I0310 19:09:13.380227 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5jthz\" (UniqueName: \"kubernetes.io/projected/7fc483f5-15c5-4a67-b7e1-adab3b97cec7-kube-api-access-5jthz\") pod \"keystone-6638-account-create-update-kc2pp\" (UID: \"7fc483f5-15c5-4a67-b7e1-adab3b97cec7\") " pod="openstack/keystone-6638-account-create-update-kc2pp" Mar 10 19:09:13 crc kubenswrapper[4861]: I0310 19:09:13.380252 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/37e93e0a-bc82-45fa-a340-a7eb189f2657-operator-scripts\") pod \"placement-db-create-frs8p\" (UID: \"37e93e0a-bc82-45fa-a340-a7eb189f2657\") " pod="openstack/placement-db-create-frs8p" Mar 10 19:09:13 crc kubenswrapper[4861]: I0310 19:09:13.381283 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7fc483f5-15c5-4a67-b7e1-adab3b97cec7-operator-scripts\") pod \"keystone-6638-account-create-update-kc2pp\" (UID: \"7fc483f5-15c5-4a67-b7e1-adab3b97cec7\") " pod="openstack/keystone-6638-account-create-update-kc2pp" Mar 10 19:09:13 crc kubenswrapper[4861]: I0310 19:09:13.396367 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5jthz\" (UniqueName: \"kubernetes.io/projected/7fc483f5-15c5-4a67-b7e1-adab3b97cec7-kube-api-access-5jthz\") pod \"keystone-6638-account-create-update-kc2pp\" (UID: \"7fc483f5-15c5-4a67-b7e1-adab3b97cec7\") " pod="openstack/keystone-6638-account-create-update-kc2pp" Mar 10 19:09:13 crc kubenswrapper[4861]: I0310 19:09:13.427876 4861 util.go:30] "No sandbox 
for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-6l9h7" Mar 10 19:09:13 crc kubenswrapper[4861]: I0310 19:09:13.444262 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-6638-account-create-update-kc2pp" Mar 10 19:09:13 crc kubenswrapper[4861]: I0310 19:09:13.481215 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ce1014a0-2a80-40e4-8e5c-ed810afd2320-operator-scripts\") pod \"placement-a4bb-account-create-update-dblgj\" (UID: \"ce1014a0-2a80-40e4-8e5c-ed810afd2320\") " pod="openstack/placement-a4bb-account-create-update-dblgj" Mar 10 19:09:13 crc kubenswrapper[4861]: I0310 19:09:13.481264 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8msw9\" (UniqueName: \"kubernetes.io/projected/37e93e0a-bc82-45fa-a340-a7eb189f2657-kube-api-access-8msw9\") pod \"placement-db-create-frs8p\" (UID: \"37e93e0a-bc82-45fa-a340-a7eb189f2657\") " pod="openstack/placement-db-create-frs8p" Mar 10 19:09:13 crc kubenswrapper[4861]: I0310 19:09:13.481545 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/04bbfc10-7f55-45a5-8a53-70e994a09bc9-etc-swift\") pod \"swift-storage-0\" (UID: \"04bbfc10-7f55-45a5-8a53-70e994a09bc9\") " pod="openstack/swift-storage-0" Mar 10 19:09:13 crc kubenswrapper[4861]: E0310 19:09:13.481628 4861 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Mar 10 19:09:13 crc kubenswrapper[4861]: E0310 19:09:13.481643 4861 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Mar 10 19:09:13 crc kubenswrapper[4861]: I0310 19:09:13.481675 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/37e93e0a-bc82-45fa-a340-a7eb189f2657-operator-scripts\") pod \"placement-db-create-frs8p\" (UID: \"37e93e0a-bc82-45fa-a340-a7eb189f2657\") " pod="openstack/placement-db-create-frs8p" Mar 10 19:09:13 crc kubenswrapper[4861]: E0310 19:09:13.481874 4861 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/04bbfc10-7f55-45a5-8a53-70e994a09bc9-etc-swift podName:04bbfc10-7f55-45a5-8a53-70e994a09bc9 nodeName:}" failed. No retries permitted until 2026-03-10 19:09:21.481829905 +0000 UTC m=+1305.245265885 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/04bbfc10-7f55-45a5-8a53-70e994a09bc9-etc-swift") pod "swift-storage-0" (UID: "04bbfc10-7f55-45a5-8a53-70e994a09bc9") : configmap "swift-ring-files" not found Mar 10 19:09:13 crc kubenswrapper[4861]: I0310 19:09:13.482399 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cqx4h\" (UniqueName: \"kubernetes.io/projected/ce1014a0-2a80-40e4-8e5c-ed810afd2320-kube-api-access-cqx4h\") pod \"placement-a4bb-account-create-update-dblgj\" (UID: \"ce1014a0-2a80-40e4-8e5c-ed810afd2320\") " pod="openstack/placement-a4bb-account-create-update-dblgj" Mar 10 19:09:13 crc kubenswrapper[4861]: I0310 19:09:13.482601 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/37e93e0a-bc82-45fa-a340-a7eb189f2657-operator-scripts\") pod \"placement-db-create-frs8p\" (UID: \"37e93e0a-bc82-45fa-a340-a7eb189f2657\") " pod="openstack/placement-db-create-frs8p" Mar 10 19:09:13 crc kubenswrapper[4861]: I0310 19:09:13.488993 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7b57d9888c-jd4f8" Mar 10 19:09:13 crc kubenswrapper[4861]: I0310 19:09:13.518378 4861 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-8msw9\" (UniqueName: \"kubernetes.io/projected/37e93e0a-bc82-45fa-a340-a7eb189f2657-kube-api-access-8msw9\") pod \"placement-db-create-frs8p\" (UID: \"37e93e0a-bc82-45fa-a340-a7eb189f2657\") " pod="openstack/placement-db-create-frs8p" Mar 10 19:09:13 crc kubenswrapper[4861]: I0310 19:09:13.537252 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-frs8p" Mar 10 19:09:13 crc kubenswrapper[4861]: I0310 19:09:13.584079 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqx4h\" (UniqueName: \"kubernetes.io/projected/ce1014a0-2a80-40e4-8e5c-ed810afd2320-kube-api-access-cqx4h\") pod \"placement-a4bb-account-create-update-dblgj\" (UID: \"ce1014a0-2a80-40e4-8e5c-ed810afd2320\") " pod="openstack/placement-a4bb-account-create-update-dblgj" Mar 10 19:09:13 crc kubenswrapper[4861]: I0310 19:09:13.585782 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ce1014a0-2a80-40e4-8e5c-ed810afd2320-operator-scripts\") pod \"placement-a4bb-account-create-update-dblgj\" (UID: \"ce1014a0-2a80-40e4-8e5c-ed810afd2320\") " pod="openstack/placement-a4bb-account-create-update-dblgj" Mar 10 19:09:13 crc kubenswrapper[4861]: I0310 19:09:13.586676 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ce1014a0-2a80-40e4-8e5c-ed810afd2320-operator-scripts\") pod \"placement-a4bb-account-create-update-dblgj\" (UID: \"ce1014a0-2a80-40e4-8e5c-ed810afd2320\") " pod="openstack/placement-a4bb-account-create-update-dblgj" Mar 10 19:09:13 crc kubenswrapper[4861]: I0310 19:09:13.611461 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqx4h\" (UniqueName: 
\"kubernetes.io/projected/ce1014a0-2a80-40e4-8e5c-ed810afd2320-kube-api-access-cqx4h\") pod \"placement-a4bb-account-create-update-dblgj\" (UID: \"ce1014a0-2a80-40e4-8e5c-ed810afd2320\") " pod="openstack/placement-a4bb-account-create-update-dblgj" Mar 10 19:09:13 crc kubenswrapper[4861]: I0310 19:09:13.643907 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-a4bb-account-create-update-dblgj" Mar 10 19:09:15 crc kubenswrapper[4861]: I0310 19:09:13.934331 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-sl4sb" event={"ID":"a0377327-c5da-4e36-b9a3-462513bbd9d2","Type":"ContainerStarted","Data":"b9c33f292294bee8955347aea33194f1e9cfcaee85d9226a6d22f5e1cc7c48e1"} Mar 10 19:09:15 crc kubenswrapper[4861]: I0310 19:09:13.934369 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-sl4sb" event={"ID":"a0377327-c5da-4e36-b9a3-462513bbd9d2","Type":"ContainerStarted","Data":"22940bfae441aa646a62966b1eec2a6e2e66c5a79ec2745d7abf2baf0fc1f435"} Mar 10 19:09:15 crc kubenswrapper[4861]: I0310 19:09:13.938323 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-822e-account-create-update-rcx69" event={"ID":"a9ec2818-0613-4fd1-8373-8c06c0b24489","Type":"ContainerStarted","Data":"979a346e65c8ce0258d18338e42cfc894cd1f93119f4e9b0471bb23704ff0183"} Mar 10 19:09:15 crc kubenswrapper[4861]: I0310 19:09:13.938400 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-822e-account-create-update-rcx69" event={"ID":"a9ec2818-0613-4fd1-8373-8c06c0b24489","Type":"ContainerStarted","Data":"79cd664a82bd76a87be2c33e70ef527f122f245038c511c114300976d9b9fa8a"} Mar 10 19:09:15 crc kubenswrapper[4861]: I0310 19:09:13.956384 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-create-sl4sb" podStartSLOduration=1.9563683649999999 podStartE2EDuration="1.956368365s" podCreationTimestamp="2026-03-10 
19:09:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-10 19:09:13.950889044 +0000 UTC m=+1297.714325004" watchObservedRunningTime="2026-03-10 19:09:13.956368365 +0000 UTC m=+1297.719804325" Mar 10 19:09:15 crc kubenswrapper[4861]: I0310 19:09:13.965638 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-822e-account-create-update-rcx69" podStartSLOduration=1.965618093 podStartE2EDuration="1.965618093s" podCreationTimestamp="2026-03-10 19:09:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-10 19:09:13.963086023 +0000 UTC m=+1297.726521983" watchObservedRunningTime="2026-03-10 19:09:13.965618093 +0000 UTC m=+1297.729054053" Mar 10 19:09:15 crc kubenswrapper[4861]: I0310 19:09:13.993426 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-6638-account-create-update-kc2pp"] Mar 10 19:09:15 crc kubenswrapper[4861]: I0310 19:09:14.006660 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-6l9h7"] Mar 10 19:09:15 crc kubenswrapper[4861]: W0310 19:09:14.035526 4861 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podec186926_88f6_4c2f_b44d_e44d62d9d02d.slice/crio-c7661e5ae67884198abe442efb1976a4ffae5334059a46a0649ffd0783e3fb9f WatchSource:0}: Error finding container c7661e5ae67884198abe442efb1976a4ffae5334059a46a0649ffd0783e3fb9f: Status 404 returned error can't find the container with id c7661e5ae67884198abe442efb1976a4ffae5334059a46a0649ffd0783e3fb9f Mar 10 19:09:15 crc kubenswrapper[4861]: I0310 19:09:14.746940 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-675f7dd995-bdz4g" Mar 10 19:09:15 crc kubenswrapper[4861]: I0310 19:09:14.825013 4861 
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7b57d9888c-jd4f8"] Mar 10 19:09:15 crc kubenswrapper[4861]: I0310 19:09:14.825267 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7b57d9888c-jd4f8" podUID="b33ea429-fdf5-482e-99b6-b4f7d0103988" containerName="dnsmasq-dns" containerID="cri-o://5ee7bbb3af9d004520d5286c816a8234f0eb60c0e151bd1f7661638abcb2d8da" gracePeriod=10 Mar 10 19:09:15 crc kubenswrapper[4861]: I0310 19:09:14.957636 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-6638-account-create-update-kc2pp" event={"ID":"7fc483f5-15c5-4a67-b7e1-adab3b97cec7","Type":"ContainerStarted","Data":"d0f2796a31ad17b58880cd786e5deccbce67bb62aee3d065d561b2ecb1de5145"} Mar 10 19:09:15 crc kubenswrapper[4861]: I0310 19:09:14.957734 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-6638-account-create-update-kc2pp" event={"ID":"7fc483f5-15c5-4a67-b7e1-adab3b97cec7","Type":"ContainerStarted","Data":"23756c088332f877fb00dddd5782fc3f31b3310972620905ded7dda7efa9e01e"} Mar 10 19:09:15 crc kubenswrapper[4861]: I0310 19:09:14.974146 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-6638-account-create-update-kc2pp" podStartSLOduration=1.974109347 podStartE2EDuration="1.974109347s" podCreationTimestamp="2026-03-10 19:09:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-10 19:09:14.973844619 +0000 UTC m=+1298.737280589" watchObservedRunningTime="2026-03-10 19:09:14.974109347 +0000 UTC m=+1298.737545317" Mar 10 19:09:15 crc kubenswrapper[4861]: I0310 19:09:14.975594 4861 generic.go:334] "Generic (PLEG): container finished" podID="a9ec2818-0613-4fd1-8373-8c06c0b24489" containerID="979a346e65c8ce0258d18338e42cfc894cd1f93119f4e9b0471bb23704ff0183" exitCode=0 Mar 10 19:09:15 crc kubenswrapper[4861]: I0310 
19:09:14.978103 4861 generic.go:334] "Generic (PLEG): container finished" podID="b33ea429-fdf5-482e-99b6-b4f7d0103988" containerID="5ee7bbb3af9d004520d5286c816a8234f0eb60c0e151bd1f7661638abcb2d8da" exitCode=0 Mar 10 19:09:15 crc kubenswrapper[4861]: I0310 19:09:14.979599 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-822e-account-create-update-rcx69" event={"ID":"a9ec2818-0613-4fd1-8373-8c06c0b24489","Type":"ContainerDied","Data":"979a346e65c8ce0258d18338e42cfc894cd1f93119f4e9b0471bb23704ff0183"} Mar 10 19:09:15 crc kubenswrapper[4861]: I0310 19:09:14.979643 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7b57d9888c-jd4f8" event={"ID":"b33ea429-fdf5-482e-99b6-b4f7d0103988","Type":"ContainerDied","Data":"5ee7bbb3af9d004520d5286c816a8234f0eb60c0e151bd1f7661638abcb2d8da"} Mar 10 19:09:15 crc kubenswrapper[4861]: I0310 19:09:14.981243 4861 generic.go:334] "Generic (PLEG): container finished" podID="a0377327-c5da-4e36-b9a3-462513bbd9d2" containerID="b9c33f292294bee8955347aea33194f1e9cfcaee85d9226a6d22f5e1cc7c48e1" exitCode=0 Mar 10 19:09:15 crc kubenswrapper[4861]: I0310 19:09:14.981320 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-sl4sb" event={"ID":"a0377327-c5da-4e36-b9a3-462513bbd9d2","Type":"ContainerDied","Data":"b9c33f292294bee8955347aea33194f1e9cfcaee85d9226a6d22f5e1cc7c48e1"} Mar 10 19:09:15 crc kubenswrapper[4861]: I0310 19:09:14.983249 4861 generic.go:334] "Generic (PLEG): container finished" podID="ec186926-88f6-4c2f-b44d-e44d62d9d02d" containerID="71ebac9dc4f5ed0bb97bc92230d256314e35d04d7a598c26018aa0d732b96161" exitCode=0 Mar 10 19:09:15 crc kubenswrapper[4861]: I0310 19:09:14.983277 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-6l9h7" event={"ID":"ec186926-88f6-4c2f-b44d-e44d62d9d02d","Type":"ContainerDied","Data":"71ebac9dc4f5ed0bb97bc92230d256314e35d04d7a598c26018aa0d732b96161"} Mar 10 19:09:15 crc 
kubenswrapper[4861]: I0310 19:09:14.983294 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-6l9h7" event={"ID":"ec186926-88f6-4c2f-b44d-e44d62d9d02d","Type":"ContainerStarted","Data":"c7661e5ae67884198abe442efb1976a4ffae5334059a46a0649ffd0783e3fb9f"} Mar 10 19:09:15 crc kubenswrapper[4861]: E0310 19:09:15.217733 4861 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7fc483f5_15c5_4a67_b7e1_adab3b97cec7.slice/crio-d0f2796a31ad17b58880cd786e5deccbce67bb62aee3d065d561b2ecb1de5145.scope\": RecentStats: unable to find data in memory cache]" Mar 10 19:09:15 crc kubenswrapper[4861]: I0310 19:09:15.736835 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-vjxc7" Mar 10 19:09:15 crc kubenswrapper[4861]: I0310 19:09:15.769447 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-a4bb-account-create-update-dblgj"] Mar 10 19:09:15 crc kubenswrapper[4861]: I0310 19:09:15.776060 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7b57d9888c-jd4f8" Mar 10 19:09:15 crc kubenswrapper[4861]: I0310 19:09:15.798083 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-frs8p"] Mar 10 19:09:15 crc kubenswrapper[4861]: I0310 19:09:15.845351 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6w7nl\" (UniqueName: \"kubernetes.io/projected/b33ea429-fdf5-482e-99b6-b4f7d0103988-kube-api-access-6w7nl\") pod \"b33ea429-fdf5-482e-99b6-b4f7d0103988\" (UID: \"b33ea429-fdf5-482e-99b6-b4f7d0103988\") " Mar 10 19:09:15 crc kubenswrapper[4861]: I0310 19:09:15.845431 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8710a812-40f8-4a31-ac00-99eebd8601a4-operator-scripts\") pod \"8710a812-40f8-4a31-ac00-99eebd8601a4\" (UID: \"8710a812-40f8-4a31-ac00-99eebd8601a4\") " Mar 10 19:09:15 crc kubenswrapper[4861]: I0310 19:09:15.845487 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b33ea429-fdf5-482e-99b6-b4f7d0103988-ovsdbserver-nb\") pod \"b33ea429-fdf5-482e-99b6-b4f7d0103988\" (UID: \"b33ea429-fdf5-482e-99b6-b4f7d0103988\") " Mar 10 19:09:15 crc kubenswrapper[4861]: I0310 19:09:15.845567 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-frrtl\" (UniqueName: \"kubernetes.io/projected/8710a812-40f8-4a31-ac00-99eebd8601a4-kube-api-access-frrtl\") pod \"8710a812-40f8-4a31-ac00-99eebd8601a4\" (UID: \"8710a812-40f8-4a31-ac00-99eebd8601a4\") " Mar 10 19:09:15 crc kubenswrapper[4861]: I0310 19:09:15.845625 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b33ea429-fdf5-482e-99b6-b4f7d0103988-ovsdbserver-sb\") pod \"b33ea429-fdf5-482e-99b6-b4f7d0103988\" (UID: 
\"b33ea429-fdf5-482e-99b6-b4f7d0103988\") " Mar 10 19:09:15 crc kubenswrapper[4861]: I0310 19:09:15.845683 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b33ea429-fdf5-482e-99b6-b4f7d0103988-dns-svc\") pod \"b33ea429-fdf5-482e-99b6-b4f7d0103988\" (UID: \"b33ea429-fdf5-482e-99b6-b4f7d0103988\") " Mar 10 19:09:15 crc kubenswrapper[4861]: I0310 19:09:15.845747 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b33ea429-fdf5-482e-99b6-b4f7d0103988-config\") pod \"b33ea429-fdf5-482e-99b6-b4f7d0103988\" (UID: \"b33ea429-fdf5-482e-99b6-b4f7d0103988\") " Mar 10 19:09:15 crc kubenswrapper[4861]: I0310 19:09:15.850729 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b33ea429-fdf5-482e-99b6-b4f7d0103988-kube-api-access-6w7nl" (OuterVolumeSpecName: "kube-api-access-6w7nl") pod "b33ea429-fdf5-482e-99b6-b4f7d0103988" (UID: "b33ea429-fdf5-482e-99b6-b4f7d0103988"). InnerVolumeSpecName "kube-api-access-6w7nl". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 10 19:09:15 crc kubenswrapper[4861]: I0310 19:09:15.851160 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8710a812-40f8-4a31-ac00-99eebd8601a4-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "8710a812-40f8-4a31-ac00-99eebd8601a4" (UID: "8710a812-40f8-4a31-ac00-99eebd8601a4"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 10 19:09:15 crc kubenswrapper[4861]: I0310 19:09:15.852399 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8710a812-40f8-4a31-ac00-99eebd8601a4-kube-api-access-frrtl" (OuterVolumeSpecName: "kube-api-access-frrtl") pod "8710a812-40f8-4a31-ac00-99eebd8601a4" (UID: "8710a812-40f8-4a31-ac00-99eebd8601a4"). 
InnerVolumeSpecName "kube-api-access-frrtl". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 10 19:09:15 crc kubenswrapper[4861]: I0310 19:09:15.887797 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b33ea429-fdf5-482e-99b6-b4f7d0103988-config" (OuterVolumeSpecName: "config") pod "b33ea429-fdf5-482e-99b6-b4f7d0103988" (UID: "b33ea429-fdf5-482e-99b6-b4f7d0103988"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 10 19:09:15 crc kubenswrapper[4861]: I0310 19:09:15.895352 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b33ea429-fdf5-482e-99b6-b4f7d0103988-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "b33ea429-fdf5-482e-99b6-b4f7d0103988" (UID: "b33ea429-fdf5-482e-99b6-b4f7d0103988"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 10 19:09:15 crc kubenswrapper[4861]: I0310 19:09:15.900077 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b33ea429-fdf5-482e-99b6-b4f7d0103988-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "b33ea429-fdf5-482e-99b6-b4f7d0103988" (UID: "b33ea429-fdf5-482e-99b6-b4f7d0103988"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 10 19:09:15 crc kubenswrapper[4861]: I0310 19:09:15.901634 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b33ea429-fdf5-482e-99b6-b4f7d0103988-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "b33ea429-fdf5-482e-99b6-b4f7d0103988" (UID: "b33ea429-fdf5-482e-99b6-b4f7d0103988"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 10 19:09:15 crc kubenswrapper[4861]: I0310 19:09:15.947904 4861 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b33ea429-fdf5-482e-99b6-b4f7d0103988-config\") on node \"crc\" DevicePath \"\"" Mar 10 19:09:15 crc kubenswrapper[4861]: I0310 19:09:15.948067 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6w7nl\" (UniqueName: \"kubernetes.io/projected/b33ea429-fdf5-482e-99b6-b4f7d0103988-kube-api-access-6w7nl\") on node \"crc\" DevicePath \"\"" Mar 10 19:09:15 crc kubenswrapper[4861]: I0310 19:09:15.948134 4861 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8710a812-40f8-4a31-ac00-99eebd8601a4-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 10 19:09:15 crc kubenswrapper[4861]: I0310 19:09:15.948226 4861 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b33ea429-fdf5-482e-99b6-b4f7d0103988-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Mar 10 19:09:15 crc kubenswrapper[4861]: I0310 19:09:15.948275 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-frrtl\" (UniqueName: \"kubernetes.io/projected/8710a812-40f8-4a31-ac00-99eebd8601a4-kube-api-access-frrtl\") on node \"crc\" DevicePath \"\"" Mar 10 19:09:15 crc kubenswrapper[4861]: I0310 19:09:15.948321 4861 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b33ea429-fdf5-482e-99b6-b4f7d0103988-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Mar 10 19:09:15 crc kubenswrapper[4861]: I0310 19:09:15.948374 4861 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b33ea429-fdf5-482e-99b6-b4f7d0103988-dns-svc\") on node \"crc\" DevicePath \"\"" Mar 10 19:09:15 crc kubenswrapper[4861]: I0310 19:09:15.990744 
4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-frs8p" event={"ID":"37e93e0a-bc82-45fa-a340-a7eb189f2657","Type":"ContainerStarted","Data":"bc9707b6901732303cb7926eee46191715b22462440f7e5d13b04d9979f6ad15"} Mar 10 19:09:15 crc kubenswrapper[4861]: I0310 19:09:15.990781 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-frs8p" event={"ID":"37e93e0a-bc82-45fa-a340-a7eb189f2657","Type":"ContainerStarted","Data":"02d8c64e0a0071007ef7c6f50a7d05efa09d53f6453677b77440e8dc3e585b1b"} Mar 10 19:09:15 crc kubenswrapper[4861]: I0310 19:09:15.993816 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-vjxc7" Mar 10 19:09:15 crc kubenswrapper[4861]: I0310 19:09:15.993849 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-vjxc7" event={"ID":"8710a812-40f8-4a31-ac00-99eebd8601a4","Type":"ContainerDied","Data":"add66bb6bcd53339e05fe4ed5405bff2c9e505bc08be05c7f9f8ad274ed22dad"} Mar 10 19:09:15 crc kubenswrapper[4861]: I0310 19:09:15.993884 4861 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="add66bb6bcd53339e05fe4ed5405bff2c9e505bc08be05c7f9f8ad274ed22dad" Mar 10 19:09:15 crc kubenswrapper[4861]: I0310 19:09:15.995310 4861 generic.go:334] "Generic (PLEG): container finished" podID="7fc483f5-15c5-4a67-b7e1-adab3b97cec7" containerID="d0f2796a31ad17b58880cd786e5deccbce67bb62aee3d065d561b2ecb1de5145" exitCode=0 Mar 10 19:09:15 crc kubenswrapper[4861]: I0310 19:09:15.995366 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-6638-account-create-update-kc2pp" event={"ID":"7fc483f5-15c5-4a67-b7e1-adab3b97cec7","Type":"ContainerDied","Data":"d0f2796a31ad17b58880cd786e5deccbce67bb62aee3d065d561b2ecb1de5145"} Mar 10 19:09:15 crc kubenswrapper[4861]: I0310 19:09:15.996278 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/placement-a4bb-account-create-update-dblgj" event={"ID":"ce1014a0-2a80-40e4-8e5c-ed810afd2320","Type":"ContainerStarted","Data":"b8ab7c40d9ea1021c05dabfee1dbe230b90ce92bb3c83b4fcb3e6a5f1f04b3dc"} Mar 10 19:09:15 crc kubenswrapper[4861]: I0310 19:09:15.996300 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-a4bb-account-create-update-dblgj" event={"ID":"ce1014a0-2a80-40e4-8e5c-ed810afd2320","Type":"ContainerStarted","Data":"0dc2b3ad5ce9285875876fa8144c829d35c96f767e2dd8aa347bcb2f1271c491"} Mar 10 19:09:15 crc kubenswrapper[4861]: I0310 19:09:15.998075 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7b57d9888c-jd4f8" Mar 10 19:09:15 crc kubenswrapper[4861]: I0310 19:09:15.998119 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7b57d9888c-jd4f8" event={"ID":"b33ea429-fdf5-482e-99b6-b4f7d0103988","Type":"ContainerDied","Data":"ba2273cab8ab542d49b241913ebc44b41d6051c88397757b776990ec30276c5f"} Mar 10 19:09:15 crc kubenswrapper[4861]: I0310 19:09:15.998156 4861 scope.go:117] "RemoveContainer" containerID="5ee7bbb3af9d004520d5286c816a8234f0eb60c0e151bd1f7661638abcb2d8da" Mar 10 19:09:16 crc kubenswrapper[4861]: I0310 19:09:16.010911 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-create-frs8p" podStartSLOduration=3.010895807 podStartE2EDuration="3.010895807s" podCreationTimestamp="2026-03-10 19:09:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-10 19:09:16.004861139 +0000 UTC m=+1299.768297099" watchObservedRunningTime="2026-03-10 19:09:16.010895807 +0000 UTC m=+1299.774331757" Mar 10 19:09:16 crc kubenswrapper[4861]: I0310 19:09:16.033605 4861 scope.go:117] "RemoveContainer" containerID="975c505e713b68ae4a324f48e5d603ec69d77687651e04e9c35d417d5c0b831b" Mar 10 19:09:16 crc 
kubenswrapper[4861]: I0310 19:09:16.052809 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-a4bb-account-create-update-dblgj" podStartSLOduration=3.052790121 podStartE2EDuration="3.052790121s" podCreationTimestamp="2026-03-10 19:09:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-10 19:09:16.040733756 +0000 UTC m=+1299.804169716" watchObservedRunningTime="2026-03-10 19:09:16.052790121 +0000 UTC m=+1299.816226221" Mar 10 19:09:16 crc kubenswrapper[4861]: I0310 19:09:16.067113 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7b57d9888c-jd4f8"] Mar 10 19:09:16 crc kubenswrapper[4861]: I0310 19:09:16.073537 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7b57d9888c-jd4f8"] Mar 10 19:09:16 crc kubenswrapper[4861]: I0310 19:09:16.261584 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-822e-account-create-update-rcx69" Mar 10 19:09:16 crc kubenswrapper[4861]: I0310 19:09:16.350082 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-sl4sb" Mar 10 19:09:16 crc kubenswrapper[4861]: I0310 19:09:16.356563 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a9ec2818-0613-4fd1-8373-8c06c0b24489-operator-scripts\") pod \"a9ec2818-0613-4fd1-8373-8c06c0b24489\" (UID: \"a9ec2818-0613-4fd1-8373-8c06c0b24489\") " Mar 10 19:09:16 crc kubenswrapper[4861]: I0310 19:09:16.356669 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dq622\" (UniqueName: \"kubernetes.io/projected/a9ec2818-0613-4fd1-8373-8c06c0b24489-kube-api-access-dq622\") pod \"a9ec2818-0613-4fd1-8373-8c06c0b24489\" (UID: \"a9ec2818-0613-4fd1-8373-8c06c0b24489\") " Mar 10 19:09:16 crc kubenswrapper[4861]: I0310 19:09:16.357125 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a9ec2818-0613-4fd1-8373-8c06c0b24489-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "a9ec2818-0613-4fd1-8373-8c06c0b24489" (UID: "a9ec2818-0613-4fd1-8373-8c06c0b24489"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 10 19:09:16 crc kubenswrapper[4861]: I0310 19:09:16.358154 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-6l9h7" Mar 10 19:09:16 crc kubenswrapper[4861]: I0310 19:09:16.360240 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a9ec2818-0613-4fd1-8373-8c06c0b24489-kube-api-access-dq622" (OuterVolumeSpecName: "kube-api-access-dq622") pod "a9ec2818-0613-4fd1-8373-8c06c0b24489" (UID: "a9ec2818-0613-4fd1-8373-8c06c0b24489"). InnerVolumeSpecName "kube-api-access-dq622". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 10 19:09:16 crc kubenswrapper[4861]: I0310 19:09:16.458249 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8z972\" (UniqueName: \"kubernetes.io/projected/a0377327-c5da-4e36-b9a3-462513bbd9d2-kube-api-access-8z972\") pod \"a0377327-c5da-4e36-b9a3-462513bbd9d2\" (UID: \"a0377327-c5da-4e36-b9a3-462513bbd9d2\") " Mar 10 19:09:16 crc kubenswrapper[4861]: I0310 19:09:16.458369 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ec186926-88f6-4c2f-b44d-e44d62d9d02d-operator-scripts\") pod \"ec186926-88f6-4c2f-b44d-e44d62d9d02d\" (UID: \"ec186926-88f6-4c2f-b44d-e44d62d9d02d\") " Mar 10 19:09:16 crc kubenswrapper[4861]: I0310 19:09:16.458472 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a0377327-c5da-4e36-b9a3-462513bbd9d2-operator-scripts\") pod \"a0377327-c5da-4e36-b9a3-462513bbd9d2\" (UID: \"a0377327-c5da-4e36-b9a3-462513bbd9d2\") " Mar 10 19:09:16 crc kubenswrapper[4861]: I0310 19:09:16.458556 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8dkmn\" (UniqueName: \"kubernetes.io/projected/ec186926-88f6-4c2f-b44d-e44d62d9d02d-kube-api-access-8dkmn\") pod \"ec186926-88f6-4c2f-b44d-e44d62d9d02d\" (UID: \"ec186926-88f6-4c2f-b44d-e44d62d9d02d\") " Mar 10 19:09:16 crc kubenswrapper[4861]: I0310 19:09:16.458919 4861 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a9ec2818-0613-4fd1-8373-8c06c0b24489-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 10 19:09:16 crc kubenswrapper[4861]: I0310 19:09:16.458932 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dq622\" (UniqueName: 
\"kubernetes.io/projected/a9ec2818-0613-4fd1-8373-8c06c0b24489-kube-api-access-dq622\") on node \"crc\" DevicePath \"\"" Mar 10 19:09:16 crc kubenswrapper[4861]: I0310 19:09:16.459189 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a0377327-c5da-4e36-b9a3-462513bbd9d2-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "a0377327-c5da-4e36-b9a3-462513bbd9d2" (UID: "a0377327-c5da-4e36-b9a3-462513bbd9d2"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 10 19:09:16 crc kubenswrapper[4861]: I0310 19:09:16.459211 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ec186926-88f6-4c2f-b44d-e44d62d9d02d-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "ec186926-88f6-4c2f-b44d-e44d62d9d02d" (UID: "ec186926-88f6-4c2f-b44d-e44d62d9d02d"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 10 19:09:16 crc kubenswrapper[4861]: I0310 19:09:16.460695 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0377327-c5da-4e36-b9a3-462513bbd9d2-kube-api-access-8z972" (OuterVolumeSpecName: "kube-api-access-8z972") pod "a0377327-c5da-4e36-b9a3-462513bbd9d2" (UID: "a0377327-c5da-4e36-b9a3-462513bbd9d2"). InnerVolumeSpecName "kube-api-access-8z972". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 10 19:09:16 crc kubenswrapper[4861]: I0310 19:09:16.465936 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ec186926-88f6-4c2f-b44d-e44d62d9d02d-kube-api-access-8dkmn" (OuterVolumeSpecName: "kube-api-access-8dkmn") pod "ec186926-88f6-4c2f-b44d-e44d62d9d02d" (UID: "ec186926-88f6-4c2f-b44d-e44d62d9d02d"). InnerVolumeSpecName "kube-api-access-8dkmn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 10 19:09:16 crc kubenswrapper[4861]: I0310 19:09:16.560453 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8dkmn\" (UniqueName: \"kubernetes.io/projected/ec186926-88f6-4c2f-b44d-e44d62d9d02d-kube-api-access-8dkmn\") on node \"crc\" DevicePath \"\"" Mar 10 19:09:16 crc kubenswrapper[4861]: I0310 19:09:16.560617 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8z972\" (UniqueName: \"kubernetes.io/projected/a0377327-c5da-4e36-b9a3-462513bbd9d2-kube-api-access-8z972\") on node \"crc\" DevicePath \"\"" Mar 10 19:09:16 crc kubenswrapper[4861]: I0310 19:09:16.560795 4861 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ec186926-88f6-4c2f-b44d-e44d62d9d02d-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 10 19:09:16 crc kubenswrapper[4861]: I0310 19:09:16.560906 4861 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a0377327-c5da-4e36-b9a3-462513bbd9d2-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 10 19:09:16 crc kubenswrapper[4861]: I0310 19:09:16.977334 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b33ea429-fdf5-482e-99b6-b4f7d0103988" path="/var/lib/kubelet/pods/b33ea429-fdf5-482e-99b6-b4f7d0103988/volumes" Mar 10 19:09:17 crc kubenswrapper[4861]: I0310 19:09:17.022134 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-sl4sb" event={"ID":"a0377327-c5da-4e36-b9a3-462513bbd9d2","Type":"ContainerDied","Data":"22940bfae441aa646a62966b1eec2a6e2e66c5a79ec2745d7abf2baf0fc1f435"} Mar 10 19:09:17 crc kubenswrapper[4861]: I0310 19:09:17.022194 4861 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="22940bfae441aa646a62966b1eec2a6e2e66c5a79ec2745d7abf2baf0fc1f435" Mar 10 19:09:17 crc kubenswrapper[4861]: I0310 
19:09:17.022266 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-sl4sb" Mar 10 19:09:17 crc kubenswrapper[4861]: I0310 19:09:17.024217 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-6l9h7" event={"ID":"ec186926-88f6-4c2f-b44d-e44d62d9d02d","Type":"ContainerDied","Data":"c7661e5ae67884198abe442efb1976a4ffae5334059a46a0649ffd0783e3fb9f"} Mar 10 19:09:17 crc kubenswrapper[4861]: I0310 19:09:17.024246 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-6l9h7" Mar 10 19:09:17 crc kubenswrapper[4861]: I0310 19:09:17.024253 4861 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c7661e5ae67884198abe442efb1976a4ffae5334059a46a0649ffd0783e3fb9f" Mar 10 19:09:17 crc kubenswrapper[4861]: I0310 19:09:17.026136 4861 generic.go:334] "Generic (PLEG): container finished" podID="37e93e0a-bc82-45fa-a340-a7eb189f2657" containerID="bc9707b6901732303cb7926eee46191715b22462440f7e5d13b04d9979f6ad15" exitCode=0 Mar 10 19:09:17 crc kubenswrapper[4861]: I0310 19:09:17.026207 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-frs8p" event={"ID":"37e93e0a-bc82-45fa-a340-a7eb189f2657","Type":"ContainerDied","Data":"bc9707b6901732303cb7926eee46191715b22462440f7e5d13b04d9979f6ad15"} Mar 10 19:09:17 crc kubenswrapper[4861]: I0310 19:09:17.030013 4861 generic.go:334] "Generic (PLEG): container finished" podID="ce1014a0-2a80-40e4-8e5c-ed810afd2320" containerID="b8ab7c40d9ea1021c05dabfee1dbe230b90ce92bb3c83b4fcb3e6a5f1f04b3dc" exitCode=0 Mar 10 19:09:17 crc kubenswrapper[4861]: I0310 19:09:17.030111 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-a4bb-account-create-update-dblgj" event={"ID":"ce1014a0-2a80-40e4-8e5c-ed810afd2320","Type":"ContainerDied","Data":"b8ab7c40d9ea1021c05dabfee1dbe230b90ce92bb3c83b4fcb3e6a5f1f04b3dc"} Mar 
10 19:09:17 crc kubenswrapper[4861]: I0310 19:09:17.033097 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-822e-account-create-update-rcx69" Mar 10 19:09:17 crc kubenswrapper[4861]: I0310 19:09:17.033860 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-822e-account-create-update-rcx69" event={"ID":"a9ec2818-0613-4fd1-8373-8c06c0b24489","Type":"ContainerDied","Data":"79cd664a82bd76a87be2c33e70ef527f122f245038c511c114300976d9b9fa8a"} Mar 10 19:09:17 crc kubenswrapper[4861]: I0310 19:09:17.033887 4861 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="79cd664a82bd76a87be2c33e70ef527f122f245038c511c114300976d9b9fa8a" Mar 10 19:09:17 crc kubenswrapper[4861]: I0310 19:09:17.404253 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-6638-account-create-update-kc2pp" Mar 10 19:09:17 crc kubenswrapper[4861]: I0310 19:09:17.475981 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7fc483f5-15c5-4a67-b7e1-adab3b97cec7-operator-scripts\") pod \"7fc483f5-15c5-4a67-b7e1-adab3b97cec7\" (UID: \"7fc483f5-15c5-4a67-b7e1-adab3b97cec7\") " Mar 10 19:09:17 crc kubenswrapper[4861]: I0310 19:09:17.476032 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5jthz\" (UniqueName: \"kubernetes.io/projected/7fc483f5-15c5-4a67-b7e1-adab3b97cec7-kube-api-access-5jthz\") pod \"7fc483f5-15c5-4a67-b7e1-adab3b97cec7\" (UID: \"7fc483f5-15c5-4a67-b7e1-adab3b97cec7\") " Mar 10 19:09:17 crc kubenswrapper[4861]: I0310 19:09:17.476619 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7fc483f5-15c5-4a67-b7e1-adab3b97cec7-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "7fc483f5-15c5-4a67-b7e1-adab3b97cec7" (UID: 
"7fc483f5-15c5-4a67-b7e1-adab3b97cec7"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 10 19:09:17 crc kubenswrapper[4861]: I0310 19:09:17.493101 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7fc483f5-15c5-4a67-b7e1-adab3b97cec7-kube-api-access-5jthz" (OuterVolumeSpecName: "kube-api-access-5jthz") pod "7fc483f5-15c5-4a67-b7e1-adab3b97cec7" (UID: "7fc483f5-15c5-4a67-b7e1-adab3b97cec7"). InnerVolumeSpecName "kube-api-access-5jthz". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 10 19:09:17 crc kubenswrapper[4861]: I0310 19:09:17.551271 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-qw5zr"] Mar 10 19:09:17 crc kubenswrapper[4861]: E0310 19:09:17.551683 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8710a812-40f8-4a31-ac00-99eebd8601a4" containerName="mariadb-account-create-update" Mar 10 19:09:17 crc kubenswrapper[4861]: I0310 19:09:17.551720 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="8710a812-40f8-4a31-ac00-99eebd8601a4" containerName="mariadb-account-create-update" Mar 10 19:09:17 crc kubenswrapper[4861]: E0310 19:09:17.551737 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7fc483f5-15c5-4a67-b7e1-adab3b97cec7" containerName="mariadb-account-create-update" Mar 10 19:09:17 crc kubenswrapper[4861]: I0310 19:09:17.551760 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="7fc483f5-15c5-4a67-b7e1-adab3b97cec7" containerName="mariadb-account-create-update" Mar 10 19:09:17 crc kubenswrapper[4861]: E0310 19:09:17.551780 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ec186926-88f6-4c2f-b44d-e44d62d9d02d" containerName="mariadb-database-create" Mar 10 19:09:17 crc kubenswrapper[4861]: I0310 19:09:17.551789 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="ec186926-88f6-4c2f-b44d-e44d62d9d02d" 
containerName="mariadb-database-create" Mar 10 19:09:17 crc kubenswrapper[4861]: E0310 19:09:17.551816 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b33ea429-fdf5-482e-99b6-b4f7d0103988" containerName="init" Mar 10 19:09:17 crc kubenswrapper[4861]: I0310 19:09:17.551825 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="b33ea429-fdf5-482e-99b6-b4f7d0103988" containerName="init" Mar 10 19:09:17 crc kubenswrapper[4861]: E0310 19:09:17.551844 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b33ea429-fdf5-482e-99b6-b4f7d0103988" containerName="dnsmasq-dns" Mar 10 19:09:17 crc kubenswrapper[4861]: I0310 19:09:17.551852 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="b33ea429-fdf5-482e-99b6-b4f7d0103988" containerName="dnsmasq-dns" Mar 10 19:09:17 crc kubenswrapper[4861]: E0310 19:09:17.551872 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a9ec2818-0613-4fd1-8373-8c06c0b24489" containerName="mariadb-account-create-update" Mar 10 19:09:17 crc kubenswrapper[4861]: I0310 19:09:17.551880 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="a9ec2818-0613-4fd1-8373-8c06c0b24489" containerName="mariadb-account-create-update" Mar 10 19:09:17 crc kubenswrapper[4861]: E0310 19:09:17.551896 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a0377327-c5da-4e36-b9a3-462513bbd9d2" containerName="mariadb-database-create" Mar 10 19:09:17 crc kubenswrapper[4861]: I0310 19:09:17.551903 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="a0377327-c5da-4e36-b9a3-462513bbd9d2" containerName="mariadb-database-create" Mar 10 19:09:17 crc kubenswrapper[4861]: I0310 19:09:17.552083 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="7fc483f5-15c5-4a67-b7e1-adab3b97cec7" containerName="mariadb-account-create-update" Mar 10 19:09:17 crc kubenswrapper[4861]: I0310 19:09:17.552103 4861 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="a0377327-c5da-4e36-b9a3-462513bbd9d2" containerName="mariadb-database-create" Mar 10 19:09:17 crc kubenswrapper[4861]: I0310 19:09:17.552113 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="8710a812-40f8-4a31-ac00-99eebd8601a4" containerName="mariadb-account-create-update" Mar 10 19:09:17 crc kubenswrapper[4861]: I0310 19:09:17.552123 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="ec186926-88f6-4c2f-b44d-e44d62d9d02d" containerName="mariadb-database-create" Mar 10 19:09:17 crc kubenswrapper[4861]: I0310 19:09:17.552139 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="a9ec2818-0613-4fd1-8373-8c06c0b24489" containerName="mariadb-account-create-update" Mar 10 19:09:17 crc kubenswrapper[4861]: I0310 19:09:17.552152 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="b33ea429-fdf5-482e-99b6-b4f7d0103988" containerName="dnsmasq-dns" Mar 10 19:09:17 crc kubenswrapper[4861]: I0310 19:09:17.552784 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-qw5zr" Mar 10 19:09:17 crc kubenswrapper[4861]: I0310 19:09:17.557049 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-config-data" Mar 10 19:09:17 crc kubenswrapper[4861]: I0310 19:09:17.557236 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-6lrgm" Mar 10 19:09:17 crc kubenswrapper[4861]: I0310 19:09:17.564842 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-qw5zr"] Mar 10 19:09:17 crc kubenswrapper[4861]: I0310 19:09:17.578697 4861 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7fc483f5-15c5-4a67-b7e1-adab3b97cec7-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 10 19:09:17 crc kubenswrapper[4861]: I0310 19:09:17.578753 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5jthz\" (UniqueName: \"kubernetes.io/projected/7fc483f5-15c5-4a67-b7e1-adab3b97cec7-kube-api-access-5jthz\") on node \"crc\" DevicePath \"\"" Mar 10 19:09:17 crc kubenswrapper[4861]: I0310 19:09:17.679775 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f3d9436f-f09c-45b4-945a-4b5f372724d6-combined-ca-bundle\") pod \"glance-db-sync-qw5zr\" (UID: \"f3d9436f-f09c-45b4-945a-4b5f372724d6\") " pod="openstack/glance-db-sync-qw5zr" Mar 10 19:09:17 crc kubenswrapper[4861]: I0310 19:09:17.679840 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f3d9436f-f09c-45b4-945a-4b5f372724d6-config-data\") pod \"glance-db-sync-qw5zr\" (UID: \"f3d9436f-f09c-45b4-945a-4b5f372724d6\") " pod="openstack/glance-db-sync-qw5zr" Mar 10 19:09:17 crc kubenswrapper[4861]: I0310 19:09:17.679870 4861 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l4ck8\" (UniqueName: \"kubernetes.io/projected/f3d9436f-f09c-45b4-945a-4b5f372724d6-kube-api-access-l4ck8\") pod \"glance-db-sync-qw5zr\" (UID: \"f3d9436f-f09c-45b4-945a-4b5f372724d6\") " pod="openstack/glance-db-sync-qw5zr" Mar 10 19:09:17 crc kubenswrapper[4861]: I0310 19:09:17.679971 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/f3d9436f-f09c-45b4-945a-4b5f372724d6-db-sync-config-data\") pod \"glance-db-sync-qw5zr\" (UID: \"f3d9436f-f09c-45b4-945a-4b5f372724d6\") " pod="openstack/glance-db-sync-qw5zr" Mar 10 19:09:17 crc kubenswrapper[4861]: I0310 19:09:17.781807 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f3d9436f-f09c-45b4-945a-4b5f372724d6-combined-ca-bundle\") pod \"glance-db-sync-qw5zr\" (UID: \"f3d9436f-f09c-45b4-945a-4b5f372724d6\") " pod="openstack/glance-db-sync-qw5zr" Mar 10 19:09:17 crc kubenswrapper[4861]: I0310 19:09:17.782495 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f3d9436f-f09c-45b4-945a-4b5f372724d6-config-data\") pod \"glance-db-sync-qw5zr\" (UID: \"f3d9436f-f09c-45b4-945a-4b5f372724d6\") " pod="openstack/glance-db-sync-qw5zr" Mar 10 19:09:17 crc kubenswrapper[4861]: I0310 19:09:17.782539 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l4ck8\" (UniqueName: \"kubernetes.io/projected/f3d9436f-f09c-45b4-945a-4b5f372724d6-kube-api-access-l4ck8\") pod \"glance-db-sync-qw5zr\" (UID: \"f3d9436f-f09c-45b4-945a-4b5f372724d6\") " pod="openstack/glance-db-sync-qw5zr" Mar 10 19:09:17 crc kubenswrapper[4861]: I0310 19:09:17.782682 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/f3d9436f-f09c-45b4-945a-4b5f372724d6-db-sync-config-data\") pod \"glance-db-sync-qw5zr\" (UID: \"f3d9436f-f09c-45b4-945a-4b5f372724d6\") " pod="openstack/glance-db-sync-qw5zr" Mar 10 19:09:17 crc kubenswrapper[4861]: I0310 19:09:17.793688 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f3d9436f-f09c-45b4-945a-4b5f372724d6-config-data\") pod \"glance-db-sync-qw5zr\" (UID: \"f3d9436f-f09c-45b4-945a-4b5f372724d6\") " pod="openstack/glance-db-sync-qw5zr" Mar 10 19:09:17 crc kubenswrapper[4861]: I0310 19:09:17.794518 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f3d9436f-f09c-45b4-945a-4b5f372724d6-combined-ca-bundle\") pod \"glance-db-sync-qw5zr\" (UID: \"f3d9436f-f09c-45b4-945a-4b5f372724d6\") " pod="openstack/glance-db-sync-qw5zr" Mar 10 19:09:17 crc kubenswrapper[4861]: I0310 19:09:17.794833 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/f3d9436f-f09c-45b4-945a-4b5f372724d6-db-sync-config-data\") pod \"glance-db-sync-qw5zr\" (UID: \"f3d9436f-f09c-45b4-945a-4b5f372724d6\") " pod="openstack/glance-db-sync-qw5zr" Mar 10 19:09:17 crc kubenswrapper[4861]: I0310 19:09:17.812778 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l4ck8\" (UniqueName: \"kubernetes.io/projected/f3d9436f-f09c-45b4-945a-4b5f372724d6-kube-api-access-l4ck8\") pod \"glance-db-sync-qw5zr\" (UID: \"f3d9436f-f09c-45b4-945a-4b5f372724d6\") " pod="openstack/glance-db-sync-qw5zr" Mar 10 19:09:17 crc kubenswrapper[4861]: I0310 19:09:17.872735 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-qw5zr" Mar 10 19:09:18 crc kubenswrapper[4861]: I0310 19:09:18.042008 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-6638-account-create-update-kc2pp" event={"ID":"7fc483f5-15c5-4a67-b7e1-adab3b97cec7","Type":"ContainerDied","Data":"23756c088332f877fb00dddd5782fc3f31b3310972620905ded7dda7efa9e01e"} Mar 10 19:09:18 crc kubenswrapper[4861]: I0310 19:09:18.042262 4861 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="23756c088332f877fb00dddd5782fc3f31b3310972620905ded7dda7efa9e01e" Mar 10 19:09:18 crc kubenswrapper[4861]: I0310 19:09:18.042065 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-6638-account-create-update-kc2pp" Mar 10 19:09:18 crc kubenswrapper[4861]: I0310 19:09:18.043341 4861 generic.go:334] "Generic (PLEG): container finished" podID="6f6cb862-e812-41b0-bf94-1c3b5ebb51a4" containerID="ee002433f78bbf5cdee3b5fdcbfa95ad2359ab94c9a3617972159ab8bb34f811" exitCode=0 Mar 10 19:09:18 crc kubenswrapper[4861]: I0310 19:09:18.043466 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-jvlmr" event={"ID":"6f6cb862-e812-41b0-bf94-1c3b5ebb51a4","Type":"ContainerDied","Data":"ee002433f78bbf5cdee3b5fdcbfa95ad2359ab94c9a3617972159ab8bb34f811"} Mar 10 19:09:18 crc kubenswrapper[4861]: I0310 19:09:18.213894 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-qw5zr"] Mar 10 19:09:18 crc kubenswrapper[4861]: I0310 19:09:18.360885 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-a4bb-account-create-update-dblgj" Mar 10 19:09:18 crc kubenswrapper[4861]: I0310 19:09:18.404637 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-frs8p" Mar 10 19:09:18 crc kubenswrapper[4861]: I0310 19:09:18.495936 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8msw9\" (UniqueName: \"kubernetes.io/projected/37e93e0a-bc82-45fa-a340-a7eb189f2657-kube-api-access-8msw9\") pod \"37e93e0a-bc82-45fa-a340-a7eb189f2657\" (UID: \"37e93e0a-bc82-45fa-a340-a7eb189f2657\") " Mar 10 19:09:18 crc kubenswrapper[4861]: I0310 19:09:18.496018 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ce1014a0-2a80-40e4-8e5c-ed810afd2320-operator-scripts\") pod \"ce1014a0-2a80-40e4-8e5c-ed810afd2320\" (UID: \"ce1014a0-2a80-40e4-8e5c-ed810afd2320\") " Mar 10 19:09:18 crc kubenswrapper[4861]: I0310 19:09:18.496115 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cqx4h\" (UniqueName: \"kubernetes.io/projected/ce1014a0-2a80-40e4-8e5c-ed810afd2320-kube-api-access-cqx4h\") pod \"ce1014a0-2a80-40e4-8e5c-ed810afd2320\" (UID: \"ce1014a0-2a80-40e4-8e5c-ed810afd2320\") " Mar 10 19:09:18 crc kubenswrapper[4861]: I0310 19:09:18.496172 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/37e93e0a-bc82-45fa-a340-a7eb189f2657-operator-scripts\") pod \"37e93e0a-bc82-45fa-a340-a7eb189f2657\" (UID: \"37e93e0a-bc82-45fa-a340-a7eb189f2657\") " Mar 10 19:09:18 crc kubenswrapper[4861]: I0310 19:09:18.496865 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/37e93e0a-bc82-45fa-a340-a7eb189f2657-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "37e93e0a-bc82-45fa-a340-a7eb189f2657" (UID: "37e93e0a-bc82-45fa-a340-a7eb189f2657"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 10 19:09:18 crc kubenswrapper[4861]: I0310 19:09:18.497015 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ce1014a0-2a80-40e4-8e5c-ed810afd2320-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "ce1014a0-2a80-40e4-8e5c-ed810afd2320" (UID: "ce1014a0-2a80-40e4-8e5c-ed810afd2320"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 10 19:09:18 crc kubenswrapper[4861]: I0310 19:09:18.500746 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/37e93e0a-bc82-45fa-a340-a7eb189f2657-kube-api-access-8msw9" (OuterVolumeSpecName: "kube-api-access-8msw9") pod "37e93e0a-bc82-45fa-a340-a7eb189f2657" (UID: "37e93e0a-bc82-45fa-a340-a7eb189f2657"). InnerVolumeSpecName "kube-api-access-8msw9". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 10 19:09:18 crc kubenswrapper[4861]: I0310 19:09:18.502208 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ce1014a0-2a80-40e4-8e5c-ed810afd2320-kube-api-access-cqx4h" (OuterVolumeSpecName: "kube-api-access-cqx4h") pod "ce1014a0-2a80-40e4-8e5c-ed810afd2320" (UID: "ce1014a0-2a80-40e4-8e5c-ed810afd2320"). InnerVolumeSpecName "kube-api-access-cqx4h". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 10 19:09:18 crc kubenswrapper[4861]: I0310 19:09:18.598814 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8msw9\" (UniqueName: \"kubernetes.io/projected/37e93e0a-bc82-45fa-a340-a7eb189f2657-kube-api-access-8msw9\") on node \"crc\" DevicePath \"\"" Mar 10 19:09:18 crc kubenswrapper[4861]: I0310 19:09:18.598874 4861 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ce1014a0-2a80-40e4-8e5c-ed810afd2320-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 10 19:09:18 crc kubenswrapper[4861]: I0310 19:09:18.598897 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cqx4h\" (UniqueName: \"kubernetes.io/projected/ce1014a0-2a80-40e4-8e5c-ed810afd2320-kube-api-access-cqx4h\") on node \"crc\" DevicePath \"\"" Mar 10 19:09:18 crc kubenswrapper[4861]: I0310 19:09:18.598917 4861 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/37e93e0a-bc82-45fa-a340-a7eb189f2657-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 10 19:09:19 crc kubenswrapper[4861]: I0310 19:09:19.025397 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-vjxc7"] Mar 10 19:09:19 crc kubenswrapper[4861]: I0310 19:09:19.036573 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-vjxc7"] Mar 10 19:09:19 crc kubenswrapper[4861]: I0310 19:09:19.045979 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-5ss65"] Mar 10 19:09:19 crc kubenswrapper[4861]: E0310 19:09:19.046450 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="37e93e0a-bc82-45fa-a340-a7eb189f2657" containerName="mariadb-database-create" Mar 10 19:09:19 crc kubenswrapper[4861]: I0310 19:09:19.046477 4861 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="37e93e0a-bc82-45fa-a340-a7eb189f2657" containerName="mariadb-database-create" Mar 10 19:09:19 crc kubenswrapper[4861]: E0310 19:09:19.046515 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ce1014a0-2a80-40e4-8e5c-ed810afd2320" containerName="mariadb-account-create-update" Mar 10 19:09:19 crc kubenswrapper[4861]: I0310 19:09:19.046524 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce1014a0-2a80-40e4-8e5c-ed810afd2320" containerName="mariadb-account-create-update" Mar 10 19:09:19 crc kubenswrapper[4861]: I0310 19:09:19.046741 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="37e93e0a-bc82-45fa-a340-a7eb189f2657" containerName="mariadb-database-create" Mar 10 19:09:19 crc kubenswrapper[4861]: I0310 19:09:19.046769 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="ce1014a0-2a80-40e4-8e5c-ed810afd2320" containerName="mariadb-account-create-update" Mar 10 19:09:19 crc kubenswrapper[4861]: I0310 19:09:19.047478 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-5ss65" Mar 10 19:09:19 crc kubenswrapper[4861]: I0310 19:09:19.049647 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret" Mar 10 19:09:19 crc kubenswrapper[4861]: I0310 19:09:19.054976 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-5ss65"] Mar 10 19:09:19 crc kubenswrapper[4861]: I0310 19:09:19.064551 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-a4bb-account-create-update-dblgj" Mar 10 19:09:19 crc kubenswrapper[4861]: I0310 19:09:19.064544 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-a4bb-account-create-update-dblgj" event={"ID":"ce1014a0-2a80-40e4-8e5c-ed810afd2320","Type":"ContainerDied","Data":"0dc2b3ad5ce9285875876fa8144c829d35c96f767e2dd8aa347bcb2f1271c491"} Mar 10 19:09:19 crc kubenswrapper[4861]: I0310 19:09:19.064727 4861 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0dc2b3ad5ce9285875876fa8144c829d35c96f767e2dd8aa347bcb2f1271c491" Mar 10 19:09:19 crc kubenswrapper[4861]: I0310 19:09:19.066653 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-qw5zr" event={"ID":"f3d9436f-f09c-45b4-945a-4b5f372724d6","Type":"ContainerStarted","Data":"2a07d0a9092d8cc5f27010e43735111b8d21b4408ad88f4053292628ef656ecc"} Mar 10 19:09:19 crc kubenswrapper[4861]: I0310 19:09:19.068337 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-frs8p" Mar 10 19:09:19 crc kubenswrapper[4861]: I0310 19:09:19.068699 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-frs8p" event={"ID":"37e93e0a-bc82-45fa-a340-a7eb189f2657","Type":"ContainerDied","Data":"02d8c64e0a0071007ef7c6f50a7d05efa09d53f6453677b77440e8dc3e585b1b"} Mar 10 19:09:19 crc kubenswrapper[4861]: I0310 19:09:19.068742 4861 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="02d8c64e0a0071007ef7c6f50a7d05efa09d53f6453677b77440e8dc3e585b1b" Mar 10 19:09:19 crc kubenswrapper[4861]: I0310 19:09:19.107191 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/44eba218-b623-44c9-9254-70f10e3b685a-operator-scripts\") pod \"root-account-create-update-5ss65\" (UID: \"44eba218-b623-44c9-9254-70f10e3b685a\") " pod="openstack/root-account-create-update-5ss65" Mar 10 19:09:19 crc kubenswrapper[4861]: I0310 19:09:19.107253 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r5v5f\" (UniqueName: \"kubernetes.io/projected/44eba218-b623-44c9-9254-70f10e3b685a-kube-api-access-r5v5f\") pod \"root-account-create-update-5ss65\" (UID: \"44eba218-b623-44c9-9254-70f10e3b685a\") " pod="openstack/root-account-create-update-5ss65" Mar 10 19:09:19 crc kubenswrapper[4861]: I0310 19:09:19.208990 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r5v5f\" (UniqueName: \"kubernetes.io/projected/44eba218-b623-44c9-9254-70f10e3b685a-kube-api-access-r5v5f\") pod \"root-account-create-update-5ss65\" (UID: \"44eba218-b623-44c9-9254-70f10e3b685a\") " pod="openstack/root-account-create-update-5ss65" Mar 10 19:09:19 crc kubenswrapper[4861]: I0310 19:09:19.209439 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/44eba218-b623-44c9-9254-70f10e3b685a-operator-scripts\") pod \"root-account-create-update-5ss65\" (UID: \"44eba218-b623-44c9-9254-70f10e3b685a\") " pod="openstack/root-account-create-update-5ss65" Mar 10 19:09:19 crc kubenswrapper[4861]: I0310 19:09:19.215338 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/44eba218-b623-44c9-9254-70f10e3b685a-operator-scripts\") pod \"root-account-create-update-5ss65\" (UID: \"44eba218-b623-44c9-9254-70f10e3b685a\") " pod="openstack/root-account-create-update-5ss65" Mar 10 19:09:19 crc kubenswrapper[4861]: I0310 19:09:19.235304 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r5v5f\" (UniqueName: \"kubernetes.io/projected/44eba218-b623-44c9-9254-70f10e3b685a-kube-api-access-r5v5f\") pod \"root-account-create-update-5ss65\" (UID: \"44eba218-b623-44c9-9254-70f10e3b685a\") " pod="openstack/root-account-create-update-5ss65" Mar 10 19:09:19 crc kubenswrapper[4861]: I0310 19:09:19.391534 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-jvlmr" Mar 10 19:09:19 crc kubenswrapper[4861]: I0310 19:09:19.411585 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-5ss65" Mar 10 19:09:19 crc kubenswrapper[4861]: I0310 19:09:19.514957 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/6f6cb862-e812-41b0-bf94-1c3b5ebb51a4-etc-swift\") pod \"6f6cb862-e812-41b0-bf94-1c3b5ebb51a4\" (UID: \"6f6cb862-e812-41b0-bf94-1c3b5ebb51a4\") " Mar 10 19:09:19 crc kubenswrapper[4861]: I0310 19:09:19.515169 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/6f6cb862-e812-41b0-bf94-1c3b5ebb51a4-swiftconf\") pod \"6f6cb862-e812-41b0-bf94-1c3b5ebb51a4\" (UID: \"6f6cb862-e812-41b0-bf94-1c3b5ebb51a4\") " Mar 10 19:09:19 crc kubenswrapper[4861]: I0310 19:09:19.515221 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/6f6cb862-e812-41b0-bf94-1c3b5ebb51a4-dispersionconf\") pod \"6f6cb862-e812-41b0-bf94-1c3b5ebb51a4\" (UID: \"6f6cb862-e812-41b0-bf94-1c3b5ebb51a4\") " Mar 10 19:09:19 crc kubenswrapper[4861]: I0310 19:09:19.515257 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/6f6cb862-e812-41b0-bf94-1c3b5ebb51a4-ring-data-devices\") pod \"6f6cb862-e812-41b0-bf94-1c3b5ebb51a4\" (UID: \"6f6cb862-e812-41b0-bf94-1c3b5ebb51a4\") " Mar 10 19:09:19 crc kubenswrapper[4861]: I0310 19:09:19.515328 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6f6cb862-e812-41b0-bf94-1c3b5ebb51a4-combined-ca-bundle\") pod \"6f6cb862-e812-41b0-bf94-1c3b5ebb51a4\" (UID: \"6f6cb862-e812-41b0-bf94-1c3b5ebb51a4\") " Mar 10 19:09:19 crc kubenswrapper[4861]: I0310 19:09:19.515446 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-n7qvd\" (UniqueName: \"kubernetes.io/projected/6f6cb862-e812-41b0-bf94-1c3b5ebb51a4-kube-api-access-n7qvd\") pod \"6f6cb862-e812-41b0-bf94-1c3b5ebb51a4\" (UID: \"6f6cb862-e812-41b0-bf94-1c3b5ebb51a4\") " Mar 10 19:09:19 crc kubenswrapper[4861]: I0310 19:09:19.515461 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6f6cb862-e812-41b0-bf94-1c3b5ebb51a4-scripts\") pod \"6f6cb862-e812-41b0-bf94-1c3b5ebb51a4\" (UID: \"6f6cb862-e812-41b0-bf94-1c3b5ebb51a4\") " Mar 10 19:09:19 crc kubenswrapper[4861]: I0310 19:09:19.516010 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6f6cb862-e812-41b0-bf94-1c3b5ebb51a4-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "6f6cb862-e812-41b0-bf94-1c3b5ebb51a4" (UID: "6f6cb862-e812-41b0-bf94-1c3b5ebb51a4"). InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 10 19:09:19 crc kubenswrapper[4861]: I0310 19:09:19.516438 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6f6cb862-e812-41b0-bf94-1c3b5ebb51a4-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "6f6cb862-e812-41b0-bf94-1c3b5ebb51a4" (UID: "6f6cb862-e812-41b0-bf94-1c3b5ebb51a4"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 10 19:09:19 crc kubenswrapper[4861]: I0310 19:09:19.520462 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6f6cb862-e812-41b0-bf94-1c3b5ebb51a4-kube-api-access-n7qvd" (OuterVolumeSpecName: "kube-api-access-n7qvd") pod "6f6cb862-e812-41b0-bf94-1c3b5ebb51a4" (UID: "6f6cb862-e812-41b0-bf94-1c3b5ebb51a4"). InnerVolumeSpecName "kube-api-access-n7qvd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 10 19:09:19 crc kubenswrapper[4861]: I0310 19:09:19.523058 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6f6cb862-e812-41b0-bf94-1c3b5ebb51a4-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "6f6cb862-e812-41b0-bf94-1c3b5ebb51a4" (UID: "6f6cb862-e812-41b0-bf94-1c3b5ebb51a4"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 10 19:09:19 crc kubenswrapper[4861]: I0310 19:09:19.536979 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6f6cb862-e812-41b0-bf94-1c3b5ebb51a4-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "6f6cb862-e812-41b0-bf94-1c3b5ebb51a4" (UID: "6f6cb862-e812-41b0-bf94-1c3b5ebb51a4"). InnerVolumeSpecName "swiftconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 10 19:09:19 crc kubenswrapper[4861]: I0310 19:09:19.541998 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6f6cb862-e812-41b0-bf94-1c3b5ebb51a4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6f6cb862-e812-41b0-bf94-1c3b5ebb51a4" (UID: "6f6cb862-e812-41b0-bf94-1c3b5ebb51a4"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 10 19:09:19 crc kubenswrapper[4861]: I0310 19:09:19.551908 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6f6cb862-e812-41b0-bf94-1c3b5ebb51a4-scripts" (OuterVolumeSpecName: "scripts") pod "6f6cb862-e812-41b0-bf94-1c3b5ebb51a4" (UID: "6f6cb862-e812-41b0-bf94-1c3b5ebb51a4"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 10 19:09:19 crc kubenswrapper[4861]: I0310 19:09:19.622280 4861 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6f6cb862-e812-41b0-bf94-1c3b5ebb51a4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 10 19:09:19 crc kubenswrapper[4861]: I0310 19:09:19.622315 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n7qvd\" (UniqueName: \"kubernetes.io/projected/6f6cb862-e812-41b0-bf94-1c3b5ebb51a4-kube-api-access-n7qvd\") on node \"crc\" DevicePath \"\"" Mar 10 19:09:19 crc kubenswrapper[4861]: I0310 19:09:19.622327 4861 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6f6cb862-e812-41b0-bf94-1c3b5ebb51a4-scripts\") on node \"crc\" DevicePath \"\"" Mar 10 19:09:19 crc kubenswrapper[4861]: I0310 19:09:19.622335 4861 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/6f6cb862-e812-41b0-bf94-1c3b5ebb51a4-etc-swift\") on node \"crc\" DevicePath \"\"" Mar 10 19:09:19 crc kubenswrapper[4861]: I0310 19:09:19.622443 4861 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/6f6cb862-e812-41b0-bf94-1c3b5ebb51a4-swiftconf\") on node \"crc\" DevicePath \"\"" Mar 10 19:09:19 crc kubenswrapper[4861]: I0310 19:09:19.622453 4861 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/6f6cb862-e812-41b0-bf94-1c3b5ebb51a4-dispersionconf\") on node \"crc\" DevicePath \"\"" Mar 10 19:09:19 crc kubenswrapper[4861]: I0310 19:09:19.622461 4861 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/6f6cb862-e812-41b0-bf94-1c3b5ebb51a4-ring-data-devices\") on node \"crc\" DevicePath \"\"" Mar 10 19:09:19 crc kubenswrapper[4861]: I0310 19:09:19.627674 4861 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-5ss65"] Mar 10 19:09:19 crc kubenswrapper[4861]: W0310 19:09:19.634745 4861 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod44eba218_b623_44c9_9254_70f10e3b685a.slice/crio-98169c5370fd5689f46fc07869ff2d0a4395f5272bd515dd29e08a07bf783650 WatchSource:0}: Error finding container 98169c5370fd5689f46fc07869ff2d0a4395f5272bd515dd29e08a07bf783650: Status 404 returned error can't find the container with id 98169c5370fd5689f46fc07869ff2d0a4395f5272bd515dd29e08a07bf783650 Mar 10 19:09:20 crc kubenswrapper[4861]: I0310 19:09:20.083496 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-5ss65" event={"ID":"44eba218-b623-44c9-9254-70f10e3b685a","Type":"ContainerStarted","Data":"a975e4595b3b7dd38d3d3df2c54181679f5a2f3131f2fe87bf6bb8bc23d88904"} Mar 10 19:09:20 crc kubenswrapper[4861]: I0310 19:09:20.084031 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-5ss65" event={"ID":"44eba218-b623-44c9-9254-70f10e3b685a","Type":"ContainerStarted","Data":"98169c5370fd5689f46fc07869ff2d0a4395f5272bd515dd29e08a07bf783650"} Mar 10 19:09:20 crc kubenswrapper[4861]: I0310 19:09:20.086927 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-jvlmr" event={"ID":"6f6cb862-e812-41b0-bf94-1c3b5ebb51a4","Type":"ContainerDied","Data":"0be2c4e6acaa5639fd83bdbeee71d5f3993f89f61d8f39176c09fc5082400c15"} Mar 10 19:09:20 crc kubenswrapper[4861]: I0310 19:09:20.086972 4861 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0be2c4e6acaa5639fd83bdbeee71d5f3993f89f61d8f39176c09fc5082400c15" Mar 10 19:09:20 crc kubenswrapper[4861]: I0310 19:09:20.087069 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-jvlmr" Mar 10 19:09:20 crc kubenswrapper[4861]: I0310 19:09:20.134311 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/root-account-create-update-5ss65" podStartSLOduration=1.134285427 podStartE2EDuration="1.134285427s" podCreationTimestamp="2026-03-10 19:09:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-10 19:09:20.12036309 +0000 UTC m=+1303.883799050" watchObservedRunningTime="2026-03-10 19:09:20.134285427 +0000 UTC m=+1303.897721407" Mar 10 19:09:20 crc kubenswrapper[4861]: I0310 19:09:20.975675 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8710a812-40f8-4a31-ac00-99eebd8601a4" path="/var/lib/kubelet/pods/8710a812-40f8-4a31-ac00-99eebd8601a4/volumes" Mar 10 19:09:21 crc kubenswrapper[4861]: I0310 19:09:21.102968 4861 generic.go:334] "Generic (PLEG): container finished" podID="44eba218-b623-44c9-9254-70f10e3b685a" containerID="a975e4595b3b7dd38d3d3df2c54181679f5a2f3131f2fe87bf6bb8bc23d88904" exitCode=0 Mar 10 19:09:21 crc kubenswrapper[4861]: I0310 19:09:21.103027 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-5ss65" event={"ID":"44eba218-b623-44c9-9254-70f10e3b685a","Type":"ContainerDied","Data":"a975e4595b3b7dd38d3d3df2c54181679f5a2f3131f2fe87bf6bb8bc23d88904"} Mar 10 19:09:21 crc kubenswrapper[4861]: I0310 19:09:21.558097 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/04bbfc10-7f55-45a5-8a53-70e994a09bc9-etc-swift\") pod \"swift-storage-0\" (UID: \"04bbfc10-7f55-45a5-8a53-70e994a09bc9\") " pod="openstack/swift-storage-0" Mar 10 19:09:21 crc kubenswrapper[4861]: I0310 19:09:21.572839 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: 
\"kubernetes.io/projected/04bbfc10-7f55-45a5-8a53-70e994a09bc9-etc-swift\") pod \"swift-storage-0\" (UID: \"04bbfc10-7f55-45a5-8a53-70e994a09bc9\") " pod="openstack/swift-storage-0" Mar 10 19:09:21 crc kubenswrapper[4861]: I0310 19:09:21.808207 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0" Mar 10 19:09:21 crc kubenswrapper[4861]: I0310 19:09:21.992081 4861 patch_prober.go:28] interesting pod/machine-config-daemon-qttbr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 10 19:09:21 crc kubenswrapper[4861]: I0310 19:09:21.992658 4861 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qttbr" podUID="771189c2-452d-4204-a0b7-abfe9ba62bd0" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 10 19:09:22 crc kubenswrapper[4861]: I0310 19:09:22.380643 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Mar 10 19:09:22 crc kubenswrapper[4861]: W0310 19:09:22.390247 4861 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod04bbfc10_7f55_45a5_8a53_70e994a09bc9.slice/crio-d51f9d05fd5eeb22e0ba24dfab305bb12907b0352462df401de8b75db408e3c4 WatchSource:0}: Error finding container d51f9d05fd5eeb22e0ba24dfab305bb12907b0352462df401de8b75db408e3c4: Status 404 returned error can't find the container with id d51f9d05fd5eeb22e0ba24dfab305bb12907b0352462df401de8b75db408e3c4 Mar 10 19:09:22 crc kubenswrapper[4861]: I0310 19:09:22.447415 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-5ss65" Mar 10 19:09:22 crc kubenswrapper[4861]: I0310 19:09:22.577442 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r5v5f\" (UniqueName: \"kubernetes.io/projected/44eba218-b623-44c9-9254-70f10e3b685a-kube-api-access-r5v5f\") pod \"44eba218-b623-44c9-9254-70f10e3b685a\" (UID: \"44eba218-b623-44c9-9254-70f10e3b685a\") " Mar 10 19:09:22 crc kubenswrapper[4861]: I0310 19:09:22.577512 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/44eba218-b623-44c9-9254-70f10e3b685a-operator-scripts\") pod \"44eba218-b623-44c9-9254-70f10e3b685a\" (UID: \"44eba218-b623-44c9-9254-70f10e3b685a\") " Mar 10 19:09:22 crc kubenswrapper[4861]: I0310 19:09:22.578509 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/44eba218-b623-44c9-9254-70f10e3b685a-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "44eba218-b623-44c9-9254-70f10e3b685a" (UID: "44eba218-b623-44c9-9254-70f10e3b685a"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 10 19:09:22 crc kubenswrapper[4861]: I0310 19:09:22.585525 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44eba218-b623-44c9-9254-70f10e3b685a-kube-api-access-r5v5f" (OuterVolumeSpecName: "kube-api-access-r5v5f") pod "44eba218-b623-44c9-9254-70f10e3b685a" (UID: "44eba218-b623-44c9-9254-70f10e3b685a"). InnerVolumeSpecName "kube-api-access-r5v5f". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 10 19:09:22 crc kubenswrapper[4861]: I0310 19:09:22.679512 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r5v5f\" (UniqueName: \"kubernetes.io/projected/44eba218-b623-44c9-9254-70f10e3b685a-kube-api-access-r5v5f\") on node \"crc\" DevicePath \"\"" Mar 10 19:09:22 crc kubenswrapper[4861]: I0310 19:09:22.679546 4861 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/44eba218-b623-44c9-9254-70f10e3b685a-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 10 19:09:23 crc kubenswrapper[4861]: I0310 19:09:23.125823 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-5ss65" event={"ID":"44eba218-b623-44c9-9254-70f10e3b685a","Type":"ContainerDied","Data":"98169c5370fd5689f46fc07869ff2d0a4395f5272bd515dd29e08a07bf783650"} Mar 10 19:09:23 crc kubenswrapper[4861]: I0310 19:09:23.126146 4861 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="98169c5370fd5689f46fc07869ff2d0a4395f5272bd515dd29e08a07bf783650" Mar 10 19:09:23 crc kubenswrapper[4861]: I0310 19:09:23.125880 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-5ss65" Mar 10 19:09:23 crc kubenswrapper[4861]: I0310 19:09:23.128914 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"04bbfc10-7f55-45a5-8a53-70e994a09bc9","Type":"ContainerStarted","Data":"d51f9d05fd5eeb22e0ba24dfab305bb12907b0352462df401de8b75db408e3c4"} Mar 10 19:09:23 crc kubenswrapper[4861]: I0310 19:09:23.608032 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0" Mar 10 19:09:25 crc kubenswrapper[4861]: I0310 19:09:25.512206 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-5ss65"] Mar 10 19:09:25 crc kubenswrapper[4861]: I0310 19:09:25.519851 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-5ss65"] Mar 10 19:09:26 crc kubenswrapper[4861]: I0310 19:09:26.614700 4861 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-zvlgw" podUID="726cec08-5661-4b62-8a44-028b015119e4" containerName="ovn-controller" probeResult="failure" output=< Mar 10 19:09:26 crc kubenswrapper[4861]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Mar 10 19:09:26 crc kubenswrapper[4861]: > Mar 10 19:09:26 crc kubenswrapper[4861]: I0310 19:09:26.974300 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44eba218-b623-44c9-9254-70f10e3b685a" path="/var/lib/kubelet/pods/44eba218-b623-44c9-9254-70f10e3b685a/volumes" Mar 10 19:09:30 crc kubenswrapper[4861]: I0310 19:09:30.534339 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-m956k"] Mar 10 19:09:30 crc kubenswrapper[4861]: E0310 19:09:30.535308 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="44eba218-b623-44c9-9254-70f10e3b685a" containerName="mariadb-account-create-update" Mar 10 19:09:30 crc kubenswrapper[4861]: I0310 
19:09:30.535333 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="44eba218-b623-44c9-9254-70f10e3b685a" containerName="mariadb-account-create-update" Mar 10 19:09:30 crc kubenswrapper[4861]: E0310 19:09:30.535366 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6f6cb862-e812-41b0-bf94-1c3b5ebb51a4" containerName="swift-ring-rebalance" Mar 10 19:09:30 crc kubenswrapper[4861]: I0310 19:09:30.535378 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="6f6cb862-e812-41b0-bf94-1c3b5ebb51a4" containerName="swift-ring-rebalance" Mar 10 19:09:30 crc kubenswrapper[4861]: I0310 19:09:30.535589 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="44eba218-b623-44c9-9254-70f10e3b685a" containerName="mariadb-account-create-update" Mar 10 19:09:30 crc kubenswrapper[4861]: I0310 19:09:30.535622 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="6f6cb862-e812-41b0-bf94-1c3b5ebb51a4" containerName="swift-ring-rebalance" Mar 10 19:09:30 crc kubenswrapper[4861]: I0310 19:09:30.536297 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-m956k" Mar 10 19:09:30 crc kubenswrapper[4861]: I0310 19:09:30.538970 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-mariadb-root-db-secret" Mar 10 19:09:30 crc kubenswrapper[4861]: I0310 19:09:30.542520 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-m956k"] Mar 10 19:09:30 crc kubenswrapper[4861]: I0310 19:09:30.666024 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-29z4r\" (UniqueName: \"kubernetes.io/projected/da07cfa9-9a8f-4a60-8aa5-ceba369b81d9-kube-api-access-29z4r\") pod \"root-account-create-update-m956k\" (UID: \"da07cfa9-9a8f-4a60-8aa5-ceba369b81d9\") " pod="openstack/root-account-create-update-m956k" Mar 10 19:09:30 crc kubenswrapper[4861]: I0310 19:09:30.666084 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/da07cfa9-9a8f-4a60-8aa5-ceba369b81d9-operator-scripts\") pod \"root-account-create-update-m956k\" (UID: \"da07cfa9-9a8f-4a60-8aa5-ceba369b81d9\") " pod="openstack/root-account-create-update-m956k" Mar 10 19:09:30 crc kubenswrapper[4861]: I0310 19:09:30.768059 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-29z4r\" (UniqueName: \"kubernetes.io/projected/da07cfa9-9a8f-4a60-8aa5-ceba369b81d9-kube-api-access-29z4r\") pod \"root-account-create-update-m956k\" (UID: \"da07cfa9-9a8f-4a60-8aa5-ceba369b81d9\") " pod="openstack/root-account-create-update-m956k" Mar 10 19:09:30 crc kubenswrapper[4861]: I0310 19:09:30.768142 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/da07cfa9-9a8f-4a60-8aa5-ceba369b81d9-operator-scripts\") pod \"root-account-create-update-m956k\" (UID: 
\"da07cfa9-9a8f-4a60-8aa5-ceba369b81d9\") " pod="openstack/root-account-create-update-m956k" Mar 10 19:09:30 crc kubenswrapper[4861]: I0310 19:09:30.769250 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/da07cfa9-9a8f-4a60-8aa5-ceba369b81d9-operator-scripts\") pod \"root-account-create-update-m956k\" (UID: \"da07cfa9-9a8f-4a60-8aa5-ceba369b81d9\") " pod="openstack/root-account-create-update-m956k" Mar 10 19:09:30 crc kubenswrapper[4861]: I0310 19:09:30.802460 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-29z4r\" (UniqueName: \"kubernetes.io/projected/da07cfa9-9a8f-4a60-8aa5-ceba369b81d9-kube-api-access-29z4r\") pod \"root-account-create-update-m956k\" (UID: \"da07cfa9-9a8f-4a60-8aa5-ceba369b81d9\") " pod="openstack/root-account-create-update-m956k" Mar 10 19:09:30 crc kubenswrapper[4861]: I0310 19:09:30.870569 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-m956k" Mar 10 19:09:31 crc kubenswrapper[4861]: I0310 19:09:31.629996 4861 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-zvlgw" podUID="726cec08-5661-4b62-8a44-028b015119e4" containerName="ovn-controller" probeResult="failure" output=< Mar 10 19:09:31 crc kubenswrapper[4861]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Mar 10 19:09:31 crc kubenswrapper[4861]: > Mar 10 19:09:31 crc kubenswrapper[4861]: I0310 19:09:31.643004 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-cw7x8" Mar 10 19:09:31 crc kubenswrapper[4861]: I0310 19:09:31.689893 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-cw7x8" Mar 10 19:09:31 crc kubenswrapper[4861]: I0310 19:09:31.907845 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-zvlgw-config-qpqwl"] Mar 10 19:09:31 crc kubenswrapper[4861]: I0310 19:09:31.909621 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-zvlgw-config-qpqwl" Mar 10 19:09:31 crc kubenswrapper[4861]: I0310 19:09:31.915097 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Mar 10 19:09:31 crc kubenswrapper[4861]: I0310 19:09:31.928346 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-zvlgw-config-qpqwl"] Mar 10 19:09:32 crc kubenswrapper[4861]: I0310 19:09:32.089981 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/45966ce2-f555-418c-bcdd-005388bb0938-scripts\") pod \"ovn-controller-zvlgw-config-qpqwl\" (UID: \"45966ce2-f555-418c-bcdd-005388bb0938\") " pod="openstack/ovn-controller-zvlgw-config-qpqwl" Mar 10 19:09:32 crc kubenswrapper[4861]: I0310 19:09:32.090065 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/45966ce2-f555-418c-bcdd-005388bb0938-var-run\") pod \"ovn-controller-zvlgw-config-qpqwl\" (UID: \"45966ce2-f555-418c-bcdd-005388bb0938\") " pod="openstack/ovn-controller-zvlgw-config-qpqwl" Mar 10 19:09:32 crc kubenswrapper[4861]: I0310 19:09:32.090098 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/45966ce2-f555-418c-bcdd-005388bb0938-var-run-ovn\") pod \"ovn-controller-zvlgw-config-qpqwl\" (UID: \"45966ce2-f555-418c-bcdd-005388bb0938\") " pod="openstack/ovn-controller-zvlgw-config-qpqwl" Mar 10 19:09:32 crc kubenswrapper[4861]: I0310 19:09:32.090121 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/45966ce2-f555-418c-bcdd-005388bb0938-additional-scripts\") pod \"ovn-controller-zvlgw-config-qpqwl\" (UID: 
\"45966ce2-f555-418c-bcdd-005388bb0938\") " pod="openstack/ovn-controller-zvlgw-config-qpqwl" Mar 10 19:09:32 crc kubenswrapper[4861]: I0310 19:09:32.090178 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wxjkf\" (UniqueName: \"kubernetes.io/projected/45966ce2-f555-418c-bcdd-005388bb0938-kube-api-access-wxjkf\") pod \"ovn-controller-zvlgw-config-qpqwl\" (UID: \"45966ce2-f555-418c-bcdd-005388bb0938\") " pod="openstack/ovn-controller-zvlgw-config-qpqwl" Mar 10 19:09:32 crc kubenswrapper[4861]: I0310 19:09:32.090230 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/45966ce2-f555-418c-bcdd-005388bb0938-var-log-ovn\") pod \"ovn-controller-zvlgw-config-qpqwl\" (UID: \"45966ce2-f555-418c-bcdd-005388bb0938\") " pod="openstack/ovn-controller-zvlgw-config-qpqwl" Mar 10 19:09:32 crc kubenswrapper[4861]: I0310 19:09:32.192029 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/45966ce2-f555-418c-bcdd-005388bb0938-scripts\") pod \"ovn-controller-zvlgw-config-qpqwl\" (UID: \"45966ce2-f555-418c-bcdd-005388bb0938\") " pod="openstack/ovn-controller-zvlgw-config-qpqwl" Mar 10 19:09:32 crc kubenswrapper[4861]: I0310 19:09:32.192190 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/45966ce2-f555-418c-bcdd-005388bb0938-var-run\") pod \"ovn-controller-zvlgw-config-qpqwl\" (UID: \"45966ce2-f555-418c-bcdd-005388bb0938\") " pod="openstack/ovn-controller-zvlgw-config-qpqwl" Mar 10 19:09:32 crc kubenswrapper[4861]: I0310 19:09:32.192261 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/45966ce2-f555-418c-bcdd-005388bb0938-var-run-ovn\") pod 
\"ovn-controller-zvlgw-config-qpqwl\" (UID: \"45966ce2-f555-418c-bcdd-005388bb0938\") " pod="openstack/ovn-controller-zvlgw-config-qpqwl" Mar 10 19:09:32 crc kubenswrapper[4861]: I0310 19:09:32.192327 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/45966ce2-f555-418c-bcdd-005388bb0938-additional-scripts\") pod \"ovn-controller-zvlgw-config-qpqwl\" (UID: \"45966ce2-f555-418c-bcdd-005388bb0938\") " pod="openstack/ovn-controller-zvlgw-config-qpqwl" Mar 10 19:09:32 crc kubenswrapper[4861]: I0310 19:09:32.192435 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wxjkf\" (UniqueName: \"kubernetes.io/projected/45966ce2-f555-418c-bcdd-005388bb0938-kube-api-access-wxjkf\") pod \"ovn-controller-zvlgw-config-qpqwl\" (UID: \"45966ce2-f555-418c-bcdd-005388bb0938\") " pod="openstack/ovn-controller-zvlgw-config-qpqwl" Mar 10 19:09:32 crc kubenswrapper[4861]: I0310 19:09:32.192534 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/45966ce2-f555-418c-bcdd-005388bb0938-var-log-ovn\") pod \"ovn-controller-zvlgw-config-qpqwl\" (UID: \"45966ce2-f555-418c-bcdd-005388bb0938\") " pod="openstack/ovn-controller-zvlgw-config-qpqwl" Mar 10 19:09:32 crc kubenswrapper[4861]: I0310 19:09:32.192620 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/45966ce2-f555-418c-bcdd-005388bb0938-var-run\") pod \"ovn-controller-zvlgw-config-qpqwl\" (UID: \"45966ce2-f555-418c-bcdd-005388bb0938\") " pod="openstack/ovn-controller-zvlgw-config-qpqwl" Mar 10 19:09:32 crc kubenswrapper[4861]: I0310 19:09:32.192625 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/45966ce2-f555-418c-bcdd-005388bb0938-var-run-ovn\") pod 
\"ovn-controller-zvlgw-config-qpqwl\" (UID: \"45966ce2-f555-418c-bcdd-005388bb0938\") " pod="openstack/ovn-controller-zvlgw-config-qpqwl" Mar 10 19:09:32 crc kubenswrapper[4861]: I0310 19:09:32.193212 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/45966ce2-f555-418c-bcdd-005388bb0938-var-log-ovn\") pod \"ovn-controller-zvlgw-config-qpqwl\" (UID: \"45966ce2-f555-418c-bcdd-005388bb0938\") " pod="openstack/ovn-controller-zvlgw-config-qpqwl" Mar 10 19:09:32 crc kubenswrapper[4861]: I0310 19:09:32.193671 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/45966ce2-f555-418c-bcdd-005388bb0938-additional-scripts\") pod \"ovn-controller-zvlgw-config-qpqwl\" (UID: \"45966ce2-f555-418c-bcdd-005388bb0938\") " pod="openstack/ovn-controller-zvlgw-config-qpqwl" Mar 10 19:09:32 crc kubenswrapper[4861]: I0310 19:09:32.194558 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/45966ce2-f555-418c-bcdd-005388bb0938-scripts\") pod \"ovn-controller-zvlgw-config-qpqwl\" (UID: \"45966ce2-f555-418c-bcdd-005388bb0938\") " pod="openstack/ovn-controller-zvlgw-config-qpqwl" Mar 10 19:09:32 crc kubenswrapper[4861]: I0310 19:09:32.229428 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wxjkf\" (UniqueName: \"kubernetes.io/projected/45966ce2-f555-418c-bcdd-005388bb0938-kube-api-access-wxjkf\") pod \"ovn-controller-zvlgw-config-qpqwl\" (UID: \"45966ce2-f555-418c-bcdd-005388bb0938\") " pod="openstack/ovn-controller-zvlgw-config-qpqwl" Mar 10 19:09:32 crc kubenswrapper[4861]: I0310 19:09:32.257312 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-zvlgw-config-qpqwl" Mar 10 19:09:33 crc kubenswrapper[4861]: E0310 19:09:33.805444 4861 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-swift-account@sha256:6b75226d63980ff4a0dd49f490031ca563324b792940a9e453c9e3bd34456645" Mar 10 19:09:33 crc kubenswrapper[4861]: E0310 19:09:33.805783 4861 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:account-server,Image:quay.io/podified-antelope-centos9/openstack-swift-account@sha256:6b75226d63980ff4a0dd49f490031ca563324b792940a9e453c9e3bd34456645,Command:[/usr/bin/swift-account-server /etc/swift/account-server.conf.d -v],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:account,HostPort:0,ContainerPort:6202,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n5cbh578h5b6h79h557hcfh586h594h67fh5fbh5b9h587h67bh698h5f8h58hdch5f9h5dfh5fch668h66bh565h646h56dh565h87hfdh5f6h5f9h589hc9q,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:swift,ReadOnly:false,MountPath:/srv/node/pv,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:etc-swift,ReadOnly:false,MountPath:/etc/swift,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cache,ReadOnly:false,MountPath:/var/cache/swift,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:lock,ReadOnly:false,MountPath:/var/lock,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9mz6p,ReadOnly:true,MountPath:/var/run/secre
ts/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42445,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod swift-storage-0_openstack(04bbfc10-7f55-45a5-8a53-70e994a09bc9): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Mar 10 19:09:34 crc kubenswrapper[4861]: I0310 19:09:34.395288 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-zvlgw-config-qpqwl"] Mar 10 19:09:34 crc kubenswrapper[4861]: W0310 19:09:34.407335 4861 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod45966ce2_f555_418c_bcdd_005388bb0938.slice/crio-f19d0a5a4656175a49f333326bc8ed13ea849b52395ac6a138cb3aa9e799e366 WatchSource:0}: Error finding container f19d0a5a4656175a49f333326bc8ed13ea849b52395ac6a138cb3aa9e799e366: Status 404 returned error can't find the container with id f19d0a5a4656175a49f333326bc8ed13ea849b52395ac6a138cb3aa9e799e366 Mar 10 19:09:34 crc kubenswrapper[4861]: I0310 19:09:34.445973 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-m956k"] Mar 10 19:09:35 crc kubenswrapper[4861]: I0310 19:09:35.261223 4861 generic.go:334] "Generic (PLEG): container finished" podID="da07cfa9-9a8f-4a60-8aa5-ceba369b81d9" 
containerID="2db2d992b0ae0a470655ee33ae92a70c36f104d929da6f63364353103bc2cf93" exitCode=0 Mar 10 19:09:35 crc kubenswrapper[4861]: I0310 19:09:35.261441 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-m956k" event={"ID":"da07cfa9-9a8f-4a60-8aa5-ceba369b81d9","Type":"ContainerDied","Data":"2db2d992b0ae0a470655ee33ae92a70c36f104d929da6f63364353103bc2cf93"} Mar 10 19:09:35 crc kubenswrapper[4861]: I0310 19:09:35.261678 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-m956k" event={"ID":"da07cfa9-9a8f-4a60-8aa5-ceba369b81d9","Type":"ContainerStarted","Data":"98ab702c33bca93bac281e65d6c35d43ab169e0ce6a0152a785416c9cefe3b61"} Mar 10 19:09:35 crc kubenswrapper[4861]: I0310 19:09:35.264377 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-qw5zr" event={"ID":"f3d9436f-f09c-45b4-945a-4b5f372724d6","Type":"ContainerStarted","Data":"8489d2057b4b67b210b16c9a8105ade751f1d08e6e38f44ca2aeeea6b87c0618"} Mar 10 19:09:35 crc kubenswrapper[4861]: I0310 19:09:35.266321 4861 generic.go:334] "Generic (PLEG): container finished" podID="45966ce2-f555-418c-bcdd-005388bb0938" containerID="f45dc011d5769ebd55f59d263575ef76c5b9b2f8cb40538674f36bf984d56643" exitCode=0 Mar 10 19:09:35 crc kubenswrapper[4861]: I0310 19:09:35.266373 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-zvlgw-config-qpqwl" event={"ID":"45966ce2-f555-418c-bcdd-005388bb0938","Type":"ContainerDied","Data":"f45dc011d5769ebd55f59d263575ef76c5b9b2f8cb40538674f36bf984d56643"} Mar 10 19:09:35 crc kubenswrapper[4861]: I0310 19:09:35.266389 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-zvlgw-config-qpqwl" event={"ID":"45966ce2-f555-418c-bcdd-005388bb0938","Type":"ContainerStarted","Data":"f19d0a5a4656175a49f333326bc8ed13ea849b52395ac6a138cb3aa9e799e366"} Mar 10 19:09:35 crc kubenswrapper[4861]: I0310 19:09:35.328700 4861 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-qw5zr" podStartSLOduration=2.657757211 podStartE2EDuration="18.328681594s" podCreationTimestamp="2026-03-10 19:09:17 +0000 UTC" firstStartedPulling="2026-03-10 19:09:18.227465272 +0000 UTC m=+1301.990901232" lastFinishedPulling="2026-03-10 19:09:33.898389635 +0000 UTC m=+1317.661825615" observedRunningTime="2026-03-10 19:09:35.301176059 +0000 UTC m=+1319.064612029" watchObservedRunningTime="2026-03-10 19:09:35.328681594 +0000 UTC m=+1319.092117564" Mar 10 19:09:36 crc kubenswrapper[4861]: I0310 19:09:36.277478 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"04bbfc10-7f55-45a5-8a53-70e994a09bc9","Type":"ContainerStarted","Data":"ab8c0e84aed350a4ab30debf28ea1e963b8e3e21aaeb08b63288cae676be3729"} Mar 10 19:09:36 crc kubenswrapper[4861]: I0310 19:09:36.277794 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"04bbfc10-7f55-45a5-8a53-70e994a09bc9","Type":"ContainerStarted","Data":"97b623396baf8ceeff4e0ed2a9e396771f9bf11b23a843a8a998619ba053f542"} Mar 10 19:09:36 crc kubenswrapper[4861]: I0310 19:09:36.277810 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"04bbfc10-7f55-45a5-8a53-70e994a09bc9","Type":"ContainerStarted","Data":"d2ff3b6b89cfd7c1dc305fb22ed53d7ed15f44f949992d564c643552f9f8274d"} Mar 10 19:09:36 crc kubenswrapper[4861]: I0310 19:09:36.277823 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"04bbfc10-7f55-45a5-8a53-70e994a09bc9","Type":"ContainerStarted","Data":"b3ade23dd33552771452468d0c1f332e38b2c8795245f8a42e5b5c4e2ed70338"} Mar 10 19:09:36 crc kubenswrapper[4861]: I0310 19:09:36.616494 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-zvlgw" Mar 10 19:09:37 crc kubenswrapper[4861]: I0310 19:09:37.664826 
4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-zvlgw-config-qpqwl" Mar 10 19:09:37 crc kubenswrapper[4861]: I0310 19:09:37.671083 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-m956k" Mar 10 19:09:37 crc kubenswrapper[4861]: I0310 19:09:37.701029 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/45966ce2-f555-418c-bcdd-005388bb0938-additional-scripts\") pod \"45966ce2-f555-418c-bcdd-005388bb0938\" (UID: \"45966ce2-f555-418c-bcdd-005388bb0938\") " Mar 10 19:09:37 crc kubenswrapper[4861]: I0310 19:09:37.701115 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/45966ce2-f555-418c-bcdd-005388bb0938-var-run\") pod \"45966ce2-f555-418c-bcdd-005388bb0938\" (UID: \"45966ce2-f555-418c-bcdd-005388bb0938\") " Mar 10 19:09:37 crc kubenswrapper[4861]: I0310 19:09:37.701149 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/45966ce2-f555-418c-bcdd-005388bb0938-var-run-ovn\") pod \"45966ce2-f555-418c-bcdd-005388bb0938\" (UID: \"45966ce2-f555-418c-bcdd-005388bb0938\") " Mar 10 19:09:37 crc kubenswrapper[4861]: I0310 19:09:37.701177 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/45966ce2-f555-418c-bcdd-005388bb0938-var-log-ovn\") pod \"45966ce2-f555-418c-bcdd-005388bb0938\" (UID: \"45966ce2-f555-418c-bcdd-005388bb0938\") " Mar 10 19:09:37 crc kubenswrapper[4861]: I0310 19:09:37.701254 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxjkf\" (UniqueName: \"kubernetes.io/projected/45966ce2-f555-418c-bcdd-005388bb0938-kube-api-access-wxjkf\") 
pod \"45966ce2-f555-418c-bcdd-005388bb0938\" (UID: \"45966ce2-f555-418c-bcdd-005388bb0938\") " Mar 10 19:09:37 crc kubenswrapper[4861]: I0310 19:09:37.701281 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/45966ce2-f555-418c-bcdd-005388bb0938-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "45966ce2-f555-418c-bcdd-005388bb0938" (UID: "45966ce2-f555-418c-bcdd-005388bb0938"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 10 19:09:37 crc kubenswrapper[4861]: I0310 19:09:37.701323 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/45966ce2-f555-418c-bcdd-005388bb0938-scripts\") pod \"45966ce2-f555-418c-bcdd-005388bb0938\" (UID: \"45966ce2-f555-418c-bcdd-005388bb0938\") " Mar 10 19:09:37 crc kubenswrapper[4861]: I0310 19:09:37.701327 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/45966ce2-f555-418c-bcdd-005388bb0938-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "45966ce2-f555-418c-bcdd-005388bb0938" (UID: "45966ce2-f555-418c-bcdd-005388bb0938"). InnerVolumeSpecName "var-log-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 10 19:09:37 crc kubenswrapper[4861]: I0310 19:09:37.701388 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-29z4r\" (UniqueName: \"kubernetes.io/projected/da07cfa9-9a8f-4a60-8aa5-ceba369b81d9-kube-api-access-29z4r\") pod \"da07cfa9-9a8f-4a60-8aa5-ceba369b81d9\" (UID: \"da07cfa9-9a8f-4a60-8aa5-ceba369b81d9\") " Mar 10 19:09:37 crc kubenswrapper[4861]: I0310 19:09:37.701260 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/45966ce2-f555-418c-bcdd-005388bb0938-var-run" (OuterVolumeSpecName: "var-run") pod "45966ce2-f555-418c-bcdd-005388bb0938" (UID: "45966ce2-f555-418c-bcdd-005388bb0938"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 10 19:09:37 crc kubenswrapper[4861]: I0310 19:09:37.701436 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/da07cfa9-9a8f-4a60-8aa5-ceba369b81d9-operator-scripts\") pod \"da07cfa9-9a8f-4a60-8aa5-ceba369b81d9\" (UID: \"da07cfa9-9a8f-4a60-8aa5-ceba369b81d9\") " Mar 10 19:09:37 crc kubenswrapper[4861]: I0310 19:09:37.702192 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/45966ce2-f555-418c-bcdd-005388bb0938-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "45966ce2-f555-418c-bcdd-005388bb0938" (UID: "45966ce2-f555-418c-bcdd-005388bb0938"). InnerVolumeSpecName "additional-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 10 19:09:37 crc kubenswrapper[4861]: I0310 19:09:37.702369 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/da07cfa9-9a8f-4a60-8aa5-ceba369b81d9-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "da07cfa9-9a8f-4a60-8aa5-ceba369b81d9" (UID: "da07cfa9-9a8f-4a60-8aa5-ceba369b81d9"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 10 19:09:37 crc kubenswrapper[4861]: I0310 19:09:37.702510 4861 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/da07cfa9-9a8f-4a60-8aa5-ceba369b81d9-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 10 19:09:37 crc kubenswrapper[4861]: I0310 19:09:37.702582 4861 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/45966ce2-f555-418c-bcdd-005388bb0938-additional-scripts\") on node \"crc\" DevicePath \"\"" Mar 10 19:09:37 crc kubenswrapper[4861]: I0310 19:09:37.702612 4861 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/45966ce2-f555-418c-bcdd-005388bb0938-var-run\") on node \"crc\" DevicePath \"\"" Mar 10 19:09:37 crc kubenswrapper[4861]: I0310 19:09:37.702632 4861 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/45966ce2-f555-418c-bcdd-005388bb0938-var-run-ovn\") on node \"crc\" DevicePath \"\"" Mar 10 19:09:37 crc kubenswrapper[4861]: I0310 19:09:37.702651 4861 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/45966ce2-f555-418c-bcdd-005388bb0938-var-log-ovn\") on node \"crc\" DevicePath \"\"" Mar 10 19:09:37 crc kubenswrapper[4861]: I0310 19:09:37.702885 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/45966ce2-f555-418c-bcdd-005388bb0938-scripts" (OuterVolumeSpecName: "scripts") pod "45966ce2-f555-418c-bcdd-005388bb0938" (UID: "45966ce2-f555-418c-bcdd-005388bb0938"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 10 19:09:37 crc kubenswrapper[4861]: I0310 19:09:37.718942 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/45966ce2-f555-418c-bcdd-005388bb0938-kube-api-access-wxjkf" (OuterVolumeSpecName: "kube-api-access-wxjkf") pod "45966ce2-f555-418c-bcdd-005388bb0938" (UID: "45966ce2-f555-418c-bcdd-005388bb0938"). InnerVolumeSpecName "kube-api-access-wxjkf". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 10 19:09:37 crc kubenswrapper[4861]: I0310 19:09:37.739043 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/da07cfa9-9a8f-4a60-8aa5-ceba369b81d9-kube-api-access-29z4r" (OuterVolumeSpecName: "kube-api-access-29z4r") pod "da07cfa9-9a8f-4a60-8aa5-ceba369b81d9" (UID: "da07cfa9-9a8f-4a60-8aa5-ceba369b81d9"). InnerVolumeSpecName "kube-api-access-29z4r". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 10 19:09:37 crc kubenswrapper[4861]: I0310 19:09:37.812715 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxjkf\" (UniqueName: \"kubernetes.io/projected/45966ce2-f555-418c-bcdd-005388bb0938-kube-api-access-wxjkf\") on node \"crc\" DevicePath \"\"" Mar 10 19:09:37 crc kubenswrapper[4861]: I0310 19:09:37.812762 4861 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/45966ce2-f555-418c-bcdd-005388bb0938-scripts\") on node \"crc\" DevicePath \"\"" Mar 10 19:09:37 crc kubenswrapper[4861]: I0310 19:09:37.812775 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-29z4r\" (UniqueName: \"kubernetes.io/projected/da07cfa9-9a8f-4a60-8aa5-ceba369b81d9-kube-api-access-29z4r\") on node \"crc\" DevicePath \"\"" Mar 10 19:09:38 crc kubenswrapper[4861]: I0310 19:09:38.307290 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-zvlgw-config-qpqwl" Mar 10 19:09:38 crc kubenswrapper[4861]: I0310 19:09:38.312189 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-zvlgw-config-qpqwl" event={"ID":"45966ce2-f555-418c-bcdd-005388bb0938","Type":"ContainerDied","Data":"f19d0a5a4656175a49f333326bc8ed13ea849b52395ac6a138cb3aa9e799e366"} Mar 10 19:09:38 crc kubenswrapper[4861]: I0310 19:09:38.312243 4861 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f19d0a5a4656175a49f333326bc8ed13ea849b52395ac6a138cb3aa9e799e366" Mar 10 19:09:38 crc kubenswrapper[4861]: I0310 19:09:38.315453 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-m956k" event={"ID":"da07cfa9-9a8f-4a60-8aa5-ceba369b81d9","Type":"ContainerDied","Data":"98ab702c33bca93bac281e65d6c35d43ab169e0ce6a0152a785416c9cefe3b61"} Mar 10 19:09:38 crc kubenswrapper[4861]: I0310 19:09:38.315482 4861 
pod_container_deletor.go:80] "Container not found in pod's containers" containerID="98ab702c33bca93bac281e65d6c35d43ab169e0ce6a0152a785416c9cefe3b61" Mar 10 19:09:38 crc kubenswrapper[4861]: I0310 19:09:38.315575 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-m956k" Mar 10 19:09:38 crc kubenswrapper[4861]: I0310 19:09:38.787925 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-zvlgw-config-qpqwl"] Mar 10 19:09:38 crc kubenswrapper[4861]: I0310 19:09:38.793366 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-zvlgw-config-qpqwl"] Mar 10 19:09:38 crc kubenswrapper[4861]: I0310 19:09:38.971275 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="45966ce2-f555-418c-bcdd-005388bb0938" path="/var/lib/kubelet/pods/45966ce2-f555-418c-bcdd-005388bb0938/volumes" Mar 10 19:09:39 crc kubenswrapper[4861]: I0310 19:09:39.326570 4861 generic.go:334] "Generic (PLEG): container finished" podID="0ba95f55-3cea-4f0b-8f09-c6b4027789f8" containerID="6d99d0616ecc46c18b64d18128318acd7044610e3f4e749ae04d9ed4478404a9" exitCode=0 Mar 10 19:09:39 crc kubenswrapper[4861]: I0310 19:09:39.326633 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"0ba95f55-3cea-4f0b-8f09-c6b4027789f8","Type":"ContainerDied","Data":"6d99d0616ecc46c18b64d18128318acd7044610e3f4e749ae04d9ed4478404a9"} Mar 10 19:09:40 crc kubenswrapper[4861]: I0310 19:09:40.336787 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"0ba95f55-3cea-4f0b-8f09-c6b4027789f8","Type":"ContainerStarted","Data":"32235165193d6a0485898ee070ca8f7382c523ae5f2c706e831442bec3b9e031"} Mar 10 19:09:40 crc kubenswrapper[4861]: I0310 19:09:40.337578 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Mar 10 19:09:40 crc kubenswrapper[4861]: 
I0310 19:09:40.338894 4861 generic.go:334] "Generic (PLEG): container finished" podID="9fa4a97d-682a-40eb-93e0-5f5167ddb0a0" containerID="95f08c47746d5695816808d7eca3f02837aedc052d446257d2df3f9e5efa4899" exitCode=0 Mar 10 19:09:40 crc kubenswrapper[4861]: I0310 19:09:40.338954 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"9fa4a97d-682a-40eb-93e0-5f5167ddb0a0","Type":"ContainerDied","Data":"95f08c47746d5695816808d7eca3f02837aedc052d446257d2df3f9e5efa4899"} Mar 10 19:09:40 crc kubenswrapper[4861]: I0310 19:09:40.354380 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"04bbfc10-7f55-45a5-8a53-70e994a09bc9","Type":"ContainerStarted","Data":"73f6971b51316dd9e12239ff145daa5bc5b88b45caa9a90726d3eeb744b9fcb6"} Mar 10 19:09:40 crc kubenswrapper[4861]: I0310 19:09:40.354429 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"04bbfc10-7f55-45a5-8a53-70e994a09bc9","Type":"ContainerStarted","Data":"bef7cce63ec465afa0014e811d86aceb3dd6ecfcc6a1a0b0c73a257f27213042"} Mar 10 19:09:40 crc kubenswrapper[4861]: I0310 19:09:40.354445 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"04bbfc10-7f55-45a5-8a53-70e994a09bc9","Type":"ContainerStarted","Data":"66a2a9c7ab44445d4eeb595e1526f88c8cdbc26a0ff3fdd9ff021d8d32a4a982"} Mar 10 19:09:40 crc kubenswrapper[4861]: I0310 19:09:40.354456 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"04bbfc10-7f55-45a5-8a53-70e994a09bc9","Type":"ContainerStarted","Data":"f9e781e035bf250fe3ee13abaf159d7c55149ddf165cabef696cdc1f4ec625ad"} Mar 10 19:09:40 crc kubenswrapper[4861]: I0310 19:09:40.354466 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" 
event={"ID":"04bbfc10-7f55-45a5-8a53-70e994a09bc9","Type":"ContainerStarted","Data":"5da060e6ca296ed684fc7e74fb2eb9b5f8999f47393abd75eb45df33aa0e5f1d"} Mar 10 19:09:40 crc kubenswrapper[4861]: I0310 19:09:40.354478 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"04bbfc10-7f55-45a5-8a53-70e994a09bc9","Type":"ContainerStarted","Data":"43df18a6de05237d4ac2005dc34fa3948f76b3a87364c4a8cea9cbd0359fd444"} Mar 10 19:09:40 crc kubenswrapper[4861]: I0310 19:09:40.371034 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=37.577019611 podStartE2EDuration="1m13.371016148s" podCreationTimestamp="2026-03-10 19:08:27 +0000 UTC" firstStartedPulling="2026-03-10 19:08:29.663368368 +0000 UTC m=+1253.426804328" lastFinishedPulling="2026-03-10 19:09:05.457364895 +0000 UTC m=+1289.220800865" observedRunningTime="2026-03-10 19:09:40.363301664 +0000 UTC m=+1324.126737634" watchObservedRunningTime="2026-03-10 19:09:40.371016148 +0000 UTC m=+1324.134452108" Mar 10 19:09:40 crc kubenswrapper[4861]: E0310 19:09:40.449255 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-server\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"account-replicator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-swift-account@sha256:6b75226d63980ff4a0dd49f490031ca563324b792940a9e453c9e3bd34456645\\\"\", failed to \"StartContainer\" for \"account-auditor\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-swift-account@sha256:6b75226d63980ff4a0dd49f490031ca563324b792940a9e453c9e3bd34456645\\\"\", failed to \"StartContainer\" for \"account-reaper\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/podified-antelope-centos9/openstack-swift-account@sha256:6b75226d63980ff4a0dd49f490031ca563324b792940a9e453c9e3bd34456645\\\"\"]" pod="openstack/swift-storage-0" podUID="04bbfc10-7f55-45a5-8a53-70e994a09bc9" Mar 10 19:09:41 crc kubenswrapper[4861]: I0310 19:09:41.364670 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"04bbfc10-7f55-45a5-8a53-70e994a09bc9","Type":"ContainerStarted","Data":"2bd4ad2d926f4ab721bec00675970f1a24d0671a8e5f1570ec65b6917857aedb"} Mar 10 19:09:41 crc kubenswrapper[4861]: I0310 19:09:41.367497 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"9fa4a97d-682a-40eb-93e0-5f5167ddb0a0","Type":"ContainerStarted","Data":"66a047f0fd7c331f85d8ce9716247a29e27e73b178644e553271a5aa131cdbac"} Mar 10 19:09:41 crc kubenswrapper[4861]: I0310 19:09:41.367833 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Mar 10 19:09:41 crc kubenswrapper[4861]: E0310 19:09:41.368132 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-swift-account@sha256:6b75226d63980ff4a0dd49f490031ca563324b792940a9e453c9e3bd34456645\\\"\", failed to \"StartContainer\" for \"account-replicator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-swift-account@sha256:6b75226d63980ff4a0dd49f490031ca563324b792940a9e453c9e3bd34456645\\\"\", failed to \"StartContainer\" for \"account-auditor\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-swift-account@sha256:6b75226d63980ff4a0dd49f490031ca563324b792940a9e453c9e3bd34456645\\\"\", failed to \"StartContainer\" for \"account-reaper\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/podified-antelope-centos9/openstack-swift-account@sha256:6b75226d63980ff4a0dd49f490031ca563324b792940a9e453c9e3bd34456645\\\"\"]" pod="openstack/swift-storage-0" podUID="04bbfc10-7f55-45a5-8a53-70e994a09bc9" Mar 10 19:09:41 crc kubenswrapper[4861]: I0310 19:09:41.463105 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=38.341667926 podStartE2EDuration="1m14.463086144s" podCreationTimestamp="2026-03-10 19:08:27 +0000 UTC" firstStartedPulling="2026-03-10 19:08:29.36342354 +0000 UTC m=+1253.126859500" lastFinishedPulling="2026-03-10 19:09:05.484841748 +0000 UTC m=+1289.248277718" observedRunningTime="2026-03-10 19:09:41.458869468 +0000 UTC m=+1325.222305548" watchObservedRunningTime="2026-03-10 19:09:41.463086144 +0000 UTC m=+1325.226522114" Mar 10 19:09:42 crc kubenswrapper[4861]: E0310 19:09:42.384852 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-swift-account@sha256:6b75226d63980ff4a0dd49f490031ca563324b792940a9e453c9e3bd34456645\\\"\", failed to \"StartContainer\" for \"account-replicator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-swift-account@sha256:6b75226d63980ff4a0dd49f490031ca563324b792940a9e453c9e3bd34456645\\\"\", failed to \"StartContainer\" for \"account-auditor\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-swift-account@sha256:6b75226d63980ff4a0dd49f490031ca563324b792940a9e453c9e3bd34456645\\\"\", failed to \"StartContainer\" for \"account-reaper\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-swift-account@sha256:6b75226d63980ff4a0dd49f490031ca563324b792940a9e453c9e3bd34456645\\\"\"]" pod="openstack/swift-storage-0" 
podUID="04bbfc10-7f55-45a5-8a53-70e994a09bc9" Mar 10 19:09:44 crc kubenswrapper[4861]: I0310 19:09:44.398862 4861 generic.go:334] "Generic (PLEG): container finished" podID="f3d9436f-f09c-45b4-945a-4b5f372724d6" containerID="8489d2057b4b67b210b16c9a8105ade751f1d08e6e38f44ca2aeeea6b87c0618" exitCode=0 Mar 10 19:09:44 crc kubenswrapper[4861]: I0310 19:09:44.398929 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-qw5zr" event={"ID":"f3d9436f-f09c-45b4-945a-4b5f372724d6","Type":"ContainerDied","Data":"8489d2057b4b67b210b16c9a8105ade751f1d08e6e38f44ca2aeeea6b87c0618"} Mar 10 19:09:45 crc kubenswrapper[4861]: I0310 19:09:45.863851 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-qw5zr" Mar 10 19:09:45 crc kubenswrapper[4861]: I0310 19:09:45.956551 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f3d9436f-f09c-45b4-945a-4b5f372724d6-combined-ca-bundle\") pod \"f3d9436f-f09c-45b4-945a-4b5f372724d6\" (UID: \"f3d9436f-f09c-45b4-945a-4b5f372724d6\") " Mar 10 19:09:45 crc kubenswrapper[4861]: I0310 19:09:45.957645 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f3d9436f-f09c-45b4-945a-4b5f372724d6-config-data\") pod \"f3d9436f-f09c-45b4-945a-4b5f372724d6\" (UID: \"f3d9436f-f09c-45b4-945a-4b5f372724d6\") " Mar 10 19:09:45 crc kubenswrapper[4861]: I0310 19:09:45.957797 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l4ck8\" (UniqueName: \"kubernetes.io/projected/f3d9436f-f09c-45b4-945a-4b5f372724d6-kube-api-access-l4ck8\") pod \"f3d9436f-f09c-45b4-945a-4b5f372724d6\" (UID: \"f3d9436f-f09c-45b4-945a-4b5f372724d6\") " Mar 10 19:09:45 crc kubenswrapper[4861]: I0310 19:09:45.957912 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/f3d9436f-f09c-45b4-945a-4b5f372724d6-db-sync-config-data\") pod \"f3d9436f-f09c-45b4-945a-4b5f372724d6\" (UID: \"f3d9436f-f09c-45b4-945a-4b5f372724d6\") " Mar 10 19:09:45 crc kubenswrapper[4861]: I0310 19:09:45.962909 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f3d9436f-f09c-45b4-945a-4b5f372724d6-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "f3d9436f-f09c-45b4-945a-4b5f372724d6" (UID: "f3d9436f-f09c-45b4-945a-4b5f372724d6"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 10 19:09:45 crc kubenswrapper[4861]: I0310 19:09:45.972943 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f3d9436f-f09c-45b4-945a-4b5f372724d6-kube-api-access-l4ck8" (OuterVolumeSpecName: "kube-api-access-l4ck8") pod "f3d9436f-f09c-45b4-945a-4b5f372724d6" (UID: "f3d9436f-f09c-45b4-945a-4b5f372724d6"). InnerVolumeSpecName "kube-api-access-l4ck8". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 10 19:09:45 crc kubenswrapper[4861]: I0310 19:09:45.981371 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f3d9436f-f09c-45b4-945a-4b5f372724d6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f3d9436f-f09c-45b4-945a-4b5f372724d6" (UID: "f3d9436f-f09c-45b4-945a-4b5f372724d6"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 10 19:09:45 crc kubenswrapper[4861]: I0310 19:09:45.997240 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f3d9436f-f09c-45b4-945a-4b5f372724d6-config-data" (OuterVolumeSpecName: "config-data") pod "f3d9436f-f09c-45b4-945a-4b5f372724d6" (UID: "f3d9436f-f09c-45b4-945a-4b5f372724d6"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 10 19:09:46 crc kubenswrapper[4861]: I0310 19:09:46.059551 4861 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/f3d9436f-f09c-45b4-945a-4b5f372724d6-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Mar 10 19:09:46 crc kubenswrapper[4861]: I0310 19:09:46.059583 4861 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f3d9436f-f09c-45b4-945a-4b5f372724d6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 10 19:09:46 crc kubenswrapper[4861]: I0310 19:09:46.059592 4861 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f3d9436f-f09c-45b4-945a-4b5f372724d6-config-data\") on node \"crc\" DevicePath \"\"" Mar 10 19:09:46 crc kubenswrapper[4861]: I0310 19:09:46.059600 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l4ck8\" (UniqueName: \"kubernetes.io/projected/f3d9436f-f09c-45b4-945a-4b5f372724d6-kube-api-access-l4ck8\") on node \"crc\" DevicePath \"\"" Mar 10 19:09:46 crc kubenswrapper[4861]: I0310 19:09:46.413791 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-qw5zr" event={"ID":"f3d9436f-f09c-45b4-945a-4b5f372724d6","Type":"ContainerDied","Data":"2a07d0a9092d8cc5f27010e43735111b8d21b4408ad88f4053292628ef656ecc"} Mar 10 19:09:46 crc kubenswrapper[4861]: I0310 19:09:46.413831 4861 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2a07d0a9092d8cc5f27010e43735111b8d21b4408ad88f4053292628ef656ecc" Mar 10 19:09:46 crc kubenswrapper[4861]: I0310 19:09:46.413853 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-qw5zr" Mar 10 19:09:46 crc kubenswrapper[4861]: I0310 19:09:46.811559 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7f58d6bb6f-xlfwm"] Mar 10 19:09:46 crc kubenswrapper[4861]: E0310 19:09:46.812251 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="da07cfa9-9a8f-4a60-8aa5-ceba369b81d9" containerName="mariadb-account-create-update" Mar 10 19:09:46 crc kubenswrapper[4861]: I0310 19:09:46.812272 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="da07cfa9-9a8f-4a60-8aa5-ceba369b81d9" containerName="mariadb-account-create-update" Mar 10 19:09:46 crc kubenswrapper[4861]: E0310 19:09:46.812297 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f3d9436f-f09c-45b4-945a-4b5f372724d6" containerName="glance-db-sync" Mar 10 19:09:46 crc kubenswrapper[4861]: I0310 19:09:46.812304 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="f3d9436f-f09c-45b4-945a-4b5f372724d6" containerName="glance-db-sync" Mar 10 19:09:46 crc kubenswrapper[4861]: E0310 19:09:46.812314 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="45966ce2-f555-418c-bcdd-005388bb0938" containerName="ovn-config" Mar 10 19:09:46 crc kubenswrapper[4861]: I0310 19:09:46.812320 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="45966ce2-f555-418c-bcdd-005388bb0938" containerName="ovn-config" Mar 10 19:09:46 crc kubenswrapper[4861]: I0310 19:09:46.812459 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="45966ce2-f555-418c-bcdd-005388bb0938" containerName="ovn-config" Mar 10 19:09:46 crc kubenswrapper[4861]: I0310 19:09:46.812472 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="f3d9436f-f09c-45b4-945a-4b5f372724d6" containerName="glance-db-sync" Mar 10 19:09:46 crc kubenswrapper[4861]: I0310 19:09:46.812485 4861 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="da07cfa9-9a8f-4a60-8aa5-ceba369b81d9" containerName="mariadb-account-create-update" Mar 10 19:09:46 crc kubenswrapper[4861]: I0310 19:09:46.820098 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7f58d6bb6f-xlfwm" Mar 10 19:09:46 crc kubenswrapper[4861]: I0310 19:09:46.829475 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7f58d6bb6f-xlfwm"] Mar 10 19:09:46 crc kubenswrapper[4861]: I0310 19:09:46.874577 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-726f8\" (UniqueName: \"kubernetes.io/projected/4d90cb21-0892-4201-90f0-b08526ab3490-kube-api-access-726f8\") pod \"dnsmasq-dns-7f58d6bb6f-xlfwm\" (UID: \"4d90cb21-0892-4201-90f0-b08526ab3490\") " pod="openstack/dnsmasq-dns-7f58d6bb6f-xlfwm" Mar 10 19:09:46 crc kubenswrapper[4861]: I0310 19:09:46.874643 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4d90cb21-0892-4201-90f0-b08526ab3490-config\") pod \"dnsmasq-dns-7f58d6bb6f-xlfwm\" (UID: \"4d90cb21-0892-4201-90f0-b08526ab3490\") " pod="openstack/dnsmasq-dns-7f58d6bb6f-xlfwm" Mar 10 19:09:46 crc kubenswrapper[4861]: I0310 19:09:46.874672 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4d90cb21-0892-4201-90f0-b08526ab3490-ovsdbserver-sb\") pod \"dnsmasq-dns-7f58d6bb6f-xlfwm\" (UID: \"4d90cb21-0892-4201-90f0-b08526ab3490\") " pod="openstack/dnsmasq-dns-7f58d6bb6f-xlfwm" Mar 10 19:09:46 crc kubenswrapper[4861]: I0310 19:09:46.874724 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4d90cb21-0892-4201-90f0-b08526ab3490-ovsdbserver-nb\") pod \"dnsmasq-dns-7f58d6bb6f-xlfwm\" (UID: 
\"4d90cb21-0892-4201-90f0-b08526ab3490\") " pod="openstack/dnsmasq-dns-7f58d6bb6f-xlfwm" Mar 10 19:09:46 crc kubenswrapper[4861]: I0310 19:09:46.874765 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4d90cb21-0892-4201-90f0-b08526ab3490-dns-svc\") pod \"dnsmasq-dns-7f58d6bb6f-xlfwm\" (UID: \"4d90cb21-0892-4201-90f0-b08526ab3490\") " pod="openstack/dnsmasq-dns-7f58d6bb6f-xlfwm" Mar 10 19:09:46 crc kubenswrapper[4861]: I0310 19:09:46.975864 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4d90cb21-0892-4201-90f0-b08526ab3490-dns-svc\") pod \"dnsmasq-dns-7f58d6bb6f-xlfwm\" (UID: \"4d90cb21-0892-4201-90f0-b08526ab3490\") " pod="openstack/dnsmasq-dns-7f58d6bb6f-xlfwm" Mar 10 19:09:46 crc kubenswrapper[4861]: I0310 19:09:46.975963 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-726f8\" (UniqueName: \"kubernetes.io/projected/4d90cb21-0892-4201-90f0-b08526ab3490-kube-api-access-726f8\") pod \"dnsmasq-dns-7f58d6bb6f-xlfwm\" (UID: \"4d90cb21-0892-4201-90f0-b08526ab3490\") " pod="openstack/dnsmasq-dns-7f58d6bb6f-xlfwm" Mar 10 19:09:46 crc kubenswrapper[4861]: I0310 19:09:46.976008 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4d90cb21-0892-4201-90f0-b08526ab3490-config\") pod \"dnsmasq-dns-7f58d6bb6f-xlfwm\" (UID: \"4d90cb21-0892-4201-90f0-b08526ab3490\") " pod="openstack/dnsmasq-dns-7f58d6bb6f-xlfwm" Mar 10 19:09:46 crc kubenswrapper[4861]: I0310 19:09:46.976036 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4d90cb21-0892-4201-90f0-b08526ab3490-ovsdbserver-sb\") pod \"dnsmasq-dns-7f58d6bb6f-xlfwm\" (UID: \"4d90cb21-0892-4201-90f0-b08526ab3490\") " 
pod="openstack/dnsmasq-dns-7f58d6bb6f-xlfwm" Mar 10 19:09:46 crc kubenswrapper[4861]: I0310 19:09:46.976070 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4d90cb21-0892-4201-90f0-b08526ab3490-ovsdbserver-nb\") pod \"dnsmasq-dns-7f58d6bb6f-xlfwm\" (UID: \"4d90cb21-0892-4201-90f0-b08526ab3490\") " pod="openstack/dnsmasq-dns-7f58d6bb6f-xlfwm" Mar 10 19:09:46 crc kubenswrapper[4861]: I0310 19:09:46.977035 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4d90cb21-0892-4201-90f0-b08526ab3490-ovsdbserver-nb\") pod \"dnsmasq-dns-7f58d6bb6f-xlfwm\" (UID: \"4d90cb21-0892-4201-90f0-b08526ab3490\") " pod="openstack/dnsmasq-dns-7f58d6bb6f-xlfwm" Mar 10 19:09:46 crc kubenswrapper[4861]: I0310 19:09:46.977057 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4d90cb21-0892-4201-90f0-b08526ab3490-config\") pod \"dnsmasq-dns-7f58d6bb6f-xlfwm\" (UID: \"4d90cb21-0892-4201-90f0-b08526ab3490\") " pod="openstack/dnsmasq-dns-7f58d6bb6f-xlfwm" Mar 10 19:09:46 crc kubenswrapper[4861]: I0310 19:09:46.977284 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4d90cb21-0892-4201-90f0-b08526ab3490-ovsdbserver-sb\") pod \"dnsmasq-dns-7f58d6bb6f-xlfwm\" (UID: \"4d90cb21-0892-4201-90f0-b08526ab3490\") " pod="openstack/dnsmasq-dns-7f58d6bb6f-xlfwm" Mar 10 19:09:46 crc kubenswrapper[4861]: I0310 19:09:46.977256 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4d90cb21-0892-4201-90f0-b08526ab3490-dns-svc\") pod \"dnsmasq-dns-7f58d6bb6f-xlfwm\" (UID: \"4d90cb21-0892-4201-90f0-b08526ab3490\") " pod="openstack/dnsmasq-dns-7f58d6bb6f-xlfwm" Mar 10 19:09:46 crc kubenswrapper[4861]: I0310 19:09:46.996801 4861 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-726f8\" (UniqueName: \"kubernetes.io/projected/4d90cb21-0892-4201-90f0-b08526ab3490-kube-api-access-726f8\") pod \"dnsmasq-dns-7f58d6bb6f-xlfwm\" (UID: \"4d90cb21-0892-4201-90f0-b08526ab3490\") " pod="openstack/dnsmasq-dns-7f58d6bb6f-xlfwm" Mar 10 19:09:47 crc kubenswrapper[4861]: I0310 19:09:47.136762 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7f58d6bb6f-xlfwm" Mar 10 19:09:47 crc kubenswrapper[4861]: I0310 19:09:47.589484 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7f58d6bb6f-xlfwm"] Mar 10 19:09:48 crc kubenswrapper[4861]: I0310 19:09:48.429543 4861 generic.go:334] "Generic (PLEG): container finished" podID="4d90cb21-0892-4201-90f0-b08526ab3490" containerID="9efe7bf2eefef11607cf925eb301d523946da8f9620682a3bfe35f6d01e97bad" exitCode=0 Mar 10 19:09:48 crc kubenswrapper[4861]: I0310 19:09:48.429637 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7f58d6bb6f-xlfwm" event={"ID":"4d90cb21-0892-4201-90f0-b08526ab3490","Type":"ContainerDied","Data":"9efe7bf2eefef11607cf925eb301d523946da8f9620682a3bfe35f6d01e97bad"} Mar 10 19:09:48 crc kubenswrapper[4861]: I0310 19:09:48.429855 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7f58d6bb6f-xlfwm" event={"ID":"4d90cb21-0892-4201-90f0-b08526ab3490","Type":"ContainerStarted","Data":"e1a524f1248ecd8ccc6e71339c05668b16281024c02b7f09382a3306fb94ccf0"} Mar 10 19:09:49 crc kubenswrapper[4861]: I0310 19:09:49.131906 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Mar 10 19:09:49 crc kubenswrapper[4861]: I0310 19:09:49.437912 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7f58d6bb6f-xlfwm" 
event={"ID":"4d90cb21-0892-4201-90f0-b08526ab3490","Type":"ContainerStarted","Data":"f23c7134bfff69cea46626592bad9921be58f8d8dc546f2edf3d484519286feb"} Mar 10 19:09:49 crc kubenswrapper[4861]: I0310 19:09:49.438071 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7f58d6bb6f-xlfwm" Mar 10 19:09:49 crc kubenswrapper[4861]: I0310 19:09:49.442151 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-4zsw2"] Mar 10 19:09:49 crc kubenswrapper[4861]: I0310 19:09:49.443121 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-4zsw2" Mar 10 19:09:49 crc kubenswrapper[4861]: I0310 19:09:49.454374 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-4zsw2"] Mar 10 19:09:49 crc kubenswrapper[4861]: I0310 19:09:49.524287 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4b2a596e-34ef-40d8-abd9-a8145086a6b0-operator-scripts\") pod \"cinder-db-create-4zsw2\" (UID: \"4b2a596e-34ef-40d8-abd9-a8145086a6b0\") " pod="openstack/cinder-db-create-4zsw2" Mar 10 19:09:49 crc kubenswrapper[4861]: I0310 19:09:49.524401 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nv88x\" (UniqueName: \"kubernetes.io/projected/4b2a596e-34ef-40d8-abd9-a8145086a6b0-kube-api-access-nv88x\") pod \"cinder-db-create-4zsw2\" (UID: \"4b2a596e-34ef-40d8-abd9-a8145086a6b0\") " pod="openstack/cinder-db-create-4zsw2" Mar 10 19:09:49 crc kubenswrapper[4861]: I0310 19:09:49.544401 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7f58d6bb6f-xlfwm" podStartSLOduration=3.544384956 podStartE2EDuration="3.544384956s" podCreationTimestamp="2026-03-10 19:09:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-10 19:09:49.537797303 +0000 UTC m=+1333.301233263" watchObservedRunningTime="2026-03-10 19:09:49.544384956 +0000 UTC m=+1333.307820906" Mar 10 19:09:49 crc kubenswrapper[4861]: I0310 19:09:49.625917 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4b2a596e-34ef-40d8-abd9-a8145086a6b0-operator-scripts\") pod \"cinder-db-create-4zsw2\" (UID: \"4b2a596e-34ef-40d8-abd9-a8145086a6b0\") " pod="openstack/cinder-db-create-4zsw2" Mar 10 19:09:49 crc kubenswrapper[4861]: I0310 19:09:49.626018 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nv88x\" (UniqueName: \"kubernetes.io/projected/4b2a596e-34ef-40d8-abd9-a8145086a6b0-kube-api-access-nv88x\") pod \"cinder-db-create-4zsw2\" (UID: \"4b2a596e-34ef-40d8-abd9-a8145086a6b0\") " pod="openstack/cinder-db-create-4zsw2" Mar 10 19:09:49 crc kubenswrapper[4861]: I0310 19:09:49.626598 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4b2a596e-34ef-40d8-abd9-a8145086a6b0-operator-scripts\") pod \"cinder-db-create-4zsw2\" (UID: \"4b2a596e-34ef-40d8-abd9-a8145086a6b0\") " pod="openstack/cinder-db-create-4zsw2" Mar 10 19:09:49 crc kubenswrapper[4861]: I0310 19:09:49.646279 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nv88x\" (UniqueName: \"kubernetes.io/projected/4b2a596e-34ef-40d8-abd9-a8145086a6b0-kube-api-access-nv88x\") pod \"cinder-db-create-4zsw2\" (UID: \"4b2a596e-34ef-40d8-abd9-a8145086a6b0\") " pod="openstack/cinder-db-create-4zsw2" Mar 10 19:09:49 crc kubenswrapper[4861]: I0310 19:09:49.653053 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-f9e6-account-create-update-mh4cn"] Mar 10 19:09:49 crc kubenswrapper[4861]: I0310 19:09:49.654139 4861 util.go:30] 
"No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-f9e6-account-create-update-mh4cn" Mar 10 19:09:49 crc kubenswrapper[4861]: I0310 19:09:49.660930 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-create-tgzx6"] Mar 10 19:09:49 crc kubenswrapper[4861]: I0310 19:09:49.662219 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-tgzx6" Mar 10 19:09:49 crc kubenswrapper[4861]: I0310 19:09:49.671584 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret" Mar 10 19:09:49 crc kubenswrapper[4861]: I0310 19:09:49.707434 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-f9e6-account-create-update-mh4cn"] Mar 10 19:09:49 crc kubenswrapper[4861]: I0310 19:09:49.719991 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-tgzx6"] Mar 10 19:09:49 crc kubenswrapper[4861]: I0310 19:09:49.726993 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cdee8a29-d4d1-461b-9eeb-5672c3a6e396-operator-scripts\") pod \"cinder-f9e6-account-create-update-mh4cn\" (UID: \"cdee8a29-d4d1-461b-9eeb-5672c3a6e396\") " pod="openstack/cinder-f9e6-account-create-update-mh4cn" Mar 10 19:09:49 crc kubenswrapper[4861]: I0310 19:09:49.727034 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5rf52\" (UniqueName: \"kubernetes.io/projected/cdee8a29-d4d1-461b-9eeb-5672c3a6e396-kube-api-access-5rf52\") pod \"cinder-f9e6-account-create-update-mh4cn\" (UID: \"cdee8a29-d4d1-461b-9eeb-5672c3a6e396\") " pod="openstack/cinder-f9e6-account-create-update-mh4cn" Mar 10 19:09:49 crc kubenswrapper[4861]: I0310 19:09:49.727183 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-p6krb\" (UniqueName: \"kubernetes.io/projected/e735d2c0-7ac3-4706-b9cb-a53ada8a364b-kube-api-access-p6krb\") pod \"barbican-db-create-tgzx6\" (UID: \"e735d2c0-7ac3-4706-b9cb-a53ada8a364b\") " pod="openstack/barbican-db-create-tgzx6" Mar 10 19:09:49 crc kubenswrapper[4861]: I0310 19:09:49.727489 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e735d2c0-7ac3-4706-b9cb-a53ada8a364b-operator-scripts\") pod \"barbican-db-create-tgzx6\" (UID: \"e735d2c0-7ac3-4706-b9cb-a53ada8a364b\") " pod="openstack/barbican-db-create-tgzx6" Mar 10 19:09:49 crc kubenswrapper[4861]: I0310 19:09:49.757198 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-4zsw2" Mar 10 19:09:49 crc kubenswrapper[4861]: I0310 19:09:49.828693 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e735d2c0-7ac3-4706-b9cb-a53ada8a364b-operator-scripts\") pod \"barbican-db-create-tgzx6\" (UID: \"e735d2c0-7ac3-4706-b9cb-a53ada8a364b\") " pod="openstack/barbican-db-create-tgzx6" Mar 10 19:09:49 crc kubenswrapper[4861]: I0310 19:09:49.828767 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cdee8a29-d4d1-461b-9eeb-5672c3a6e396-operator-scripts\") pod \"cinder-f9e6-account-create-update-mh4cn\" (UID: \"cdee8a29-d4d1-461b-9eeb-5672c3a6e396\") " pod="openstack/cinder-f9e6-account-create-update-mh4cn" Mar 10 19:09:49 crc kubenswrapper[4861]: I0310 19:09:49.828787 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5rf52\" (UniqueName: \"kubernetes.io/projected/cdee8a29-d4d1-461b-9eeb-5672c3a6e396-kube-api-access-5rf52\") pod \"cinder-f9e6-account-create-update-mh4cn\" (UID: \"cdee8a29-d4d1-461b-9eeb-5672c3a6e396\") 
" pod="openstack/cinder-f9e6-account-create-update-mh4cn" Mar 10 19:09:49 crc kubenswrapper[4861]: I0310 19:09:49.828835 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p6krb\" (UniqueName: \"kubernetes.io/projected/e735d2c0-7ac3-4706-b9cb-a53ada8a364b-kube-api-access-p6krb\") pod \"barbican-db-create-tgzx6\" (UID: \"e735d2c0-7ac3-4706-b9cb-a53ada8a364b\") " pod="openstack/barbican-db-create-tgzx6" Mar 10 19:09:49 crc kubenswrapper[4861]: I0310 19:09:49.830014 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cdee8a29-d4d1-461b-9eeb-5672c3a6e396-operator-scripts\") pod \"cinder-f9e6-account-create-update-mh4cn\" (UID: \"cdee8a29-d4d1-461b-9eeb-5672c3a6e396\") " pod="openstack/cinder-f9e6-account-create-update-mh4cn" Mar 10 19:09:49 crc kubenswrapper[4861]: I0310 19:09:49.832904 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e735d2c0-7ac3-4706-b9cb-a53ada8a364b-operator-scripts\") pod \"barbican-db-create-tgzx6\" (UID: \"e735d2c0-7ac3-4706-b9cb-a53ada8a364b\") " pod="openstack/barbican-db-create-tgzx6" Mar 10 19:09:49 crc kubenswrapper[4861]: I0310 19:09:49.849676 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p6krb\" (UniqueName: \"kubernetes.io/projected/e735d2c0-7ac3-4706-b9cb-a53ada8a364b-kube-api-access-p6krb\") pod \"barbican-db-create-tgzx6\" (UID: \"e735d2c0-7ac3-4706-b9cb-a53ada8a364b\") " pod="openstack/barbican-db-create-tgzx6" Mar 10 19:09:49 crc kubenswrapper[4861]: I0310 19:09:49.863421 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5rf52\" (UniqueName: \"kubernetes.io/projected/cdee8a29-d4d1-461b-9eeb-5672c3a6e396-kube-api-access-5rf52\") pod \"cinder-f9e6-account-create-update-mh4cn\" (UID: \"cdee8a29-d4d1-461b-9eeb-5672c3a6e396\") " 
pod="openstack/cinder-f9e6-account-create-update-mh4cn" Mar 10 19:09:49 crc kubenswrapper[4861]: I0310 19:09:49.956644 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-7ddxj"] Mar 10 19:09:49 crc kubenswrapper[4861]: I0310 19:09:49.958122 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-7ddxj" Mar 10 19:09:49 crc kubenswrapper[4861]: I0310 19:09:49.972793 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-e6bf-account-create-update-4flvn"] Mar 10 19:09:49 crc kubenswrapper[4861]: I0310 19:09:49.974097 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-e6bf-account-create-update-4flvn" Mar 10 19:09:49 crc kubenswrapper[4861]: I0310 19:09:49.980942 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret" Mar 10 19:09:49 crc kubenswrapper[4861]: I0310 19:09:49.994275 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-7ddxj"] Mar 10 19:09:49 crc kubenswrapper[4861]: I0310 19:09:49.999798 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-f9e6-account-create-update-mh4cn" Mar 10 19:09:50 crc kubenswrapper[4861]: I0310 19:09:50.000521 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-2tvlz"] Mar 10 19:09:50 crc kubenswrapper[4861]: I0310 19:09:50.001435 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-2tvlz" Mar 10 19:09:50 crc kubenswrapper[4861]: I0310 19:09:50.007616 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-tgzx6" Mar 10 19:09:50 crc kubenswrapper[4861]: I0310 19:09:50.009328 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Mar 10 19:09:50 crc kubenswrapper[4861]: I0310 19:09:50.009477 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Mar 10 19:09:50 crc kubenswrapper[4861]: I0310 19:09:50.009508 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Mar 10 19:09:50 crc kubenswrapper[4861]: I0310 19:09:50.009539 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-xllsk" Mar 10 19:09:50 crc kubenswrapper[4861]: I0310 19:09:50.028484 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-e6bf-account-create-update-4flvn"] Mar 10 19:09:50 crc kubenswrapper[4861]: I0310 19:09:50.031600 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-746h5\" (UniqueName: \"kubernetes.io/projected/6982db5b-c829-4309-897e-27fd0b3f2d6f-kube-api-access-746h5\") pod \"barbican-e6bf-account-create-update-4flvn\" (UID: \"6982db5b-c829-4309-897e-27fd0b3f2d6f\") " pod="openstack/barbican-e6bf-account-create-update-4flvn" Mar 10 19:09:50 crc kubenswrapper[4861]: I0310 19:09:50.031644 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6982db5b-c829-4309-897e-27fd0b3f2d6f-operator-scripts\") pod \"barbican-e6bf-account-create-update-4flvn\" (UID: \"6982db5b-c829-4309-897e-27fd0b3f2d6f\") " pod="openstack/barbican-e6bf-account-create-update-4flvn" Mar 10 19:09:50 crc kubenswrapper[4861]: I0310 19:09:50.031661 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-crpsp\" (UniqueName: 
\"kubernetes.io/projected/7dd3c011-e602-441e-9682-213853dbb095-kube-api-access-crpsp\") pod \"keystone-db-sync-2tvlz\" (UID: \"7dd3c011-e602-441e-9682-213853dbb095\") " pod="openstack/keystone-db-sync-2tvlz" Mar 10 19:09:50 crc kubenswrapper[4861]: I0310 19:09:50.031682 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7dd3c011-e602-441e-9682-213853dbb095-config-data\") pod \"keystone-db-sync-2tvlz\" (UID: \"7dd3c011-e602-441e-9682-213853dbb095\") " pod="openstack/keystone-db-sync-2tvlz" Mar 10 19:09:50 crc kubenswrapper[4861]: I0310 19:09:50.031734 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/488ab37a-5f47-4ea7-a5d4-7ed0780f7d4b-operator-scripts\") pod \"neutron-db-create-7ddxj\" (UID: \"488ab37a-5f47-4ea7-a5d4-7ed0780f7d4b\") " pod="openstack/neutron-db-create-7ddxj" Mar 10 19:09:50 crc kubenswrapper[4861]: I0310 19:09:50.031763 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7dd3c011-e602-441e-9682-213853dbb095-combined-ca-bundle\") pod \"keystone-db-sync-2tvlz\" (UID: \"7dd3c011-e602-441e-9682-213853dbb095\") " pod="openstack/keystone-db-sync-2tvlz" Mar 10 19:09:50 crc kubenswrapper[4861]: I0310 19:09:50.031847 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fbltq\" (UniqueName: \"kubernetes.io/projected/488ab37a-5f47-4ea7-a5d4-7ed0780f7d4b-kube-api-access-fbltq\") pod \"neutron-db-create-7ddxj\" (UID: \"488ab37a-5f47-4ea7-a5d4-7ed0780f7d4b\") " pod="openstack/neutron-db-create-7ddxj" Mar 10 19:09:50 crc kubenswrapper[4861]: I0310 19:09:50.045859 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-2tvlz"] Mar 10 19:09:50 crc 
kubenswrapper[4861]: I0310 19:09:50.133990 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/488ab37a-5f47-4ea7-a5d4-7ed0780f7d4b-operator-scripts\") pod \"neutron-db-create-7ddxj\" (UID: \"488ab37a-5f47-4ea7-a5d4-7ed0780f7d4b\") " pod="openstack/neutron-db-create-7ddxj" Mar 10 19:09:50 crc kubenswrapper[4861]: I0310 19:09:50.134057 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7dd3c011-e602-441e-9682-213853dbb095-combined-ca-bundle\") pod \"keystone-db-sync-2tvlz\" (UID: \"7dd3c011-e602-441e-9682-213853dbb095\") " pod="openstack/keystone-db-sync-2tvlz" Mar 10 19:09:50 crc kubenswrapper[4861]: I0310 19:09:50.134166 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fbltq\" (UniqueName: \"kubernetes.io/projected/488ab37a-5f47-4ea7-a5d4-7ed0780f7d4b-kube-api-access-fbltq\") pod \"neutron-db-create-7ddxj\" (UID: \"488ab37a-5f47-4ea7-a5d4-7ed0780f7d4b\") " pod="openstack/neutron-db-create-7ddxj" Mar 10 19:09:50 crc kubenswrapper[4861]: I0310 19:09:50.134209 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-746h5\" (UniqueName: \"kubernetes.io/projected/6982db5b-c829-4309-897e-27fd0b3f2d6f-kube-api-access-746h5\") pod \"barbican-e6bf-account-create-update-4flvn\" (UID: \"6982db5b-c829-4309-897e-27fd0b3f2d6f\") " pod="openstack/barbican-e6bf-account-create-update-4flvn" Mar 10 19:09:50 crc kubenswrapper[4861]: I0310 19:09:50.134228 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6982db5b-c829-4309-897e-27fd0b3f2d6f-operator-scripts\") pod \"barbican-e6bf-account-create-update-4flvn\" (UID: \"6982db5b-c829-4309-897e-27fd0b3f2d6f\") " pod="openstack/barbican-e6bf-account-create-update-4flvn" Mar 10 
19:09:50 crc kubenswrapper[4861]: I0310 19:09:50.134246 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-crpsp\" (UniqueName: \"kubernetes.io/projected/7dd3c011-e602-441e-9682-213853dbb095-kube-api-access-crpsp\") pod \"keystone-db-sync-2tvlz\" (UID: \"7dd3c011-e602-441e-9682-213853dbb095\") " pod="openstack/keystone-db-sync-2tvlz" Mar 10 19:09:50 crc kubenswrapper[4861]: I0310 19:09:50.134282 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7dd3c011-e602-441e-9682-213853dbb095-config-data\") pod \"keystone-db-sync-2tvlz\" (UID: \"7dd3c011-e602-441e-9682-213853dbb095\") " pod="openstack/keystone-db-sync-2tvlz" Mar 10 19:09:50 crc kubenswrapper[4861]: I0310 19:09:50.134788 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/488ab37a-5f47-4ea7-a5d4-7ed0780f7d4b-operator-scripts\") pod \"neutron-db-create-7ddxj\" (UID: \"488ab37a-5f47-4ea7-a5d4-7ed0780f7d4b\") " pod="openstack/neutron-db-create-7ddxj" Mar 10 19:09:50 crc kubenswrapper[4861]: I0310 19:09:50.135774 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6982db5b-c829-4309-897e-27fd0b3f2d6f-operator-scripts\") pod \"barbican-e6bf-account-create-update-4flvn\" (UID: \"6982db5b-c829-4309-897e-27fd0b3f2d6f\") " pod="openstack/barbican-e6bf-account-create-update-4flvn" Mar 10 19:09:50 crc kubenswrapper[4861]: I0310 19:09:50.138495 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7dd3c011-e602-441e-9682-213853dbb095-combined-ca-bundle\") pod \"keystone-db-sync-2tvlz\" (UID: \"7dd3c011-e602-441e-9682-213853dbb095\") " pod="openstack/keystone-db-sync-2tvlz" Mar 10 19:09:50 crc kubenswrapper[4861]: I0310 19:09:50.141223 4861 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7dd3c011-e602-441e-9682-213853dbb095-config-data\") pod \"keystone-db-sync-2tvlz\" (UID: \"7dd3c011-e602-441e-9682-213853dbb095\") " pod="openstack/keystone-db-sync-2tvlz" Mar 10 19:09:50 crc kubenswrapper[4861]: I0310 19:09:50.155765 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-746h5\" (UniqueName: \"kubernetes.io/projected/6982db5b-c829-4309-897e-27fd0b3f2d6f-kube-api-access-746h5\") pod \"barbican-e6bf-account-create-update-4flvn\" (UID: \"6982db5b-c829-4309-897e-27fd0b3f2d6f\") " pod="openstack/barbican-e6bf-account-create-update-4flvn" Mar 10 19:09:50 crc kubenswrapper[4861]: I0310 19:09:50.156886 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-crpsp\" (UniqueName: \"kubernetes.io/projected/7dd3c011-e602-441e-9682-213853dbb095-kube-api-access-crpsp\") pod \"keystone-db-sync-2tvlz\" (UID: \"7dd3c011-e602-441e-9682-213853dbb095\") " pod="openstack/keystone-db-sync-2tvlz" Mar 10 19:09:50 crc kubenswrapper[4861]: I0310 19:09:50.166660 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fbltq\" (UniqueName: \"kubernetes.io/projected/488ab37a-5f47-4ea7-a5d4-7ed0780f7d4b-kube-api-access-fbltq\") pod \"neutron-db-create-7ddxj\" (UID: \"488ab37a-5f47-4ea7-a5d4-7ed0780f7d4b\") " pod="openstack/neutron-db-create-7ddxj" Mar 10 19:09:50 crc kubenswrapper[4861]: I0310 19:09:50.168833 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-7890-account-create-update-2c6b8"] Mar 10 19:09:50 crc kubenswrapper[4861]: I0310 19:09:50.169821 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-7890-account-create-update-2c6b8" Mar 10 19:09:50 crc kubenswrapper[4861]: I0310 19:09:50.172871 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" Mar 10 19:09:50 crc kubenswrapper[4861]: I0310 19:09:50.175422 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-7890-account-create-update-2c6b8"] Mar 10 19:09:50 crc kubenswrapper[4861]: I0310 19:09:50.235804 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9e051309-5b38-4162-915b-7591f96ccddf-operator-scripts\") pod \"neutron-7890-account-create-update-2c6b8\" (UID: \"9e051309-5b38-4162-915b-7591f96ccddf\") " pod="openstack/neutron-7890-account-create-update-2c6b8" Mar 10 19:09:50 crc kubenswrapper[4861]: I0310 19:09:50.235884 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j4sqw\" (UniqueName: \"kubernetes.io/projected/9e051309-5b38-4162-915b-7591f96ccddf-kube-api-access-j4sqw\") pod \"neutron-7890-account-create-update-2c6b8\" (UID: \"9e051309-5b38-4162-915b-7591f96ccddf\") " pod="openstack/neutron-7890-account-create-update-2c6b8" Mar 10 19:09:50 crc kubenswrapper[4861]: I0310 19:09:50.286073 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-7ddxj" Mar 10 19:09:50 crc kubenswrapper[4861]: I0310 19:09:50.294759 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-e6bf-account-create-update-4flvn" Mar 10 19:09:50 crc kubenswrapper[4861]: I0310 19:09:50.325605 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-2tvlz" Mar 10 19:09:50 crc kubenswrapper[4861]: I0310 19:09:50.337590 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j4sqw\" (UniqueName: \"kubernetes.io/projected/9e051309-5b38-4162-915b-7591f96ccddf-kube-api-access-j4sqw\") pod \"neutron-7890-account-create-update-2c6b8\" (UID: \"9e051309-5b38-4162-915b-7591f96ccddf\") " pod="openstack/neutron-7890-account-create-update-2c6b8" Mar 10 19:09:50 crc kubenswrapper[4861]: I0310 19:09:50.338789 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9e051309-5b38-4162-915b-7591f96ccddf-operator-scripts\") pod \"neutron-7890-account-create-update-2c6b8\" (UID: \"9e051309-5b38-4162-915b-7591f96ccddf\") " pod="openstack/neutron-7890-account-create-update-2c6b8" Mar 10 19:09:50 crc kubenswrapper[4861]: I0310 19:09:50.338852 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9e051309-5b38-4162-915b-7591f96ccddf-operator-scripts\") pod \"neutron-7890-account-create-update-2c6b8\" (UID: \"9e051309-5b38-4162-915b-7591f96ccddf\") " pod="openstack/neutron-7890-account-create-update-2c6b8" Mar 10 19:09:50 crc kubenswrapper[4861]: I0310 19:09:50.344310 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-4zsw2"] Mar 10 19:09:50 crc kubenswrapper[4861]: I0310 19:09:50.354493 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j4sqw\" (UniqueName: \"kubernetes.io/projected/9e051309-5b38-4162-915b-7591f96ccddf-kube-api-access-j4sqw\") pod \"neutron-7890-account-create-update-2c6b8\" (UID: \"9e051309-5b38-4162-915b-7591f96ccddf\") " pod="openstack/neutron-7890-account-create-update-2c6b8" Mar 10 19:09:50 crc kubenswrapper[4861]: W0310 19:09:50.370281 4861 manager.go:1169] Failed to process 
watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4b2a596e_34ef_40d8_abd9_a8145086a6b0.slice/crio-63cf4bacfccd76f5ee8171319e14ab4217e64fe4b9d626f12414aa59ad3c7264 WatchSource:0}: Error finding container 63cf4bacfccd76f5ee8171319e14ab4217e64fe4b9d626f12414aa59ad3c7264: Status 404 returned error can't find the container with id 63cf4bacfccd76f5ee8171319e14ab4217e64fe4b9d626f12414aa59ad3c7264 Mar 10 19:09:50 crc kubenswrapper[4861]: I0310 19:09:50.459808 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-4zsw2" event={"ID":"4b2a596e-34ef-40d8-abd9-a8145086a6b0","Type":"ContainerStarted","Data":"63cf4bacfccd76f5ee8171319e14ab4217e64fe4b9d626f12414aa59ad3c7264"} Mar 10 19:09:50 crc kubenswrapper[4861]: I0310 19:09:50.483400 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-7890-account-create-update-2c6b8" Mar 10 19:09:50 crc kubenswrapper[4861]: I0310 19:09:50.493753 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-f9e6-account-create-update-mh4cn"] Mar 10 19:09:50 crc kubenswrapper[4861]: I0310 19:09:50.536254 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-tgzx6"] Mar 10 19:09:50 crc kubenswrapper[4861]: I0310 19:09:50.756657 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-e6bf-account-create-update-4flvn"] Mar 10 19:09:50 crc kubenswrapper[4861]: W0310 19:09:50.778407 4861 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6982db5b_c829_4309_897e_27fd0b3f2d6f.slice/crio-74c80322f11497fe268f24aa8c4faba227f3f1ae89ff8274c54480ce9a78ea07 WatchSource:0}: Error finding container 74c80322f11497fe268f24aa8c4faba227f3f1ae89ff8274c54480ce9a78ea07: Status 404 returned error can't find the container with id 74c80322f11497fe268f24aa8c4faba227f3f1ae89ff8274c54480ce9a78ea07 Mar 
10 19:09:50 crc kubenswrapper[4861]: I0310 19:09:50.878218 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-7ddxj"] Mar 10 19:09:50 crc kubenswrapper[4861]: W0310 19:09:50.887374 4861 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod488ab37a_5f47_4ea7_a5d4_7ed0780f7d4b.slice/crio-ba7139ff70e51dcfd09055b31930bfb5378ba4f474c12b5226d3b2c3b313397a WatchSource:0}: Error finding container ba7139ff70e51dcfd09055b31930bfb5378ba4f474c12b5226d3b2c3b313397a: Status 404 returned error can't find the container with id ba7139ff70e51dcfd09055b31930bfb5378ba4f474c12b5226d3b2c3b313397a Mar 10 19:09:50 crc kubenswrapper[4861]: I0310 19:09:50.973276 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-2tvlz"] Mar 10 19:09:51 crc kubenswrapper[4861]: I0310 19:09:51.056194 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-7890-account-create-update-2c6b8"] Mar 10 19:09:51 crc kubenswrapper[4861]: W0310 19:09:51.103349 4861 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9e051309_5b38_4162_915b_7591f96ccddf.slice/crio-32df14f658c853b7d921fd033da554ef6f306723695badfce4a033544eebd695 WatchSource:0}: Error finding container 32df14f658c853b7d921fd033da554ef6f306723695badfce4a033544eebd695: Status 404 returned error can't find the container with id 32df14f658c853b7d921fd033da554ef6f306723695badfce4a033544eebd695 Mar 10 19:09:51 crc kubenswrapper[4861]: I0310 19:09:51.467571 4861 generic.go:334] "Generic (PLEG): container finished" podID="488ab37a-5f47-4ea7-a5d4-7ed0780f7d4b" containerID="16ca45cd16cc1584ad8cef9a1b1b8da9c99f78ab005875f8da178f12e538625d" exitCode=0 Mar 10 19:09:51 crc kubenswrapper[4861]: I0310 19:09:51.467669 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-7ddxj" 
event={"ID":"488ab37a-5f47-4ea7-a5d4-7ed0780f7d4b","Type":"ContainerDied","Data":"16ca45cd16cc1584ad8cef9a1b1b8da9c99f78ab005875f8da178f12e538625d"} Mar 10 19:09:51 crc kubenswrapper[4861]: I0310 19:09:51.467701 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-7ddxj" event={"ID":"488ab37a-5f47-4ea7-a5d4-7ed0780f7d4b","Type":"ContainerStarted","Data":"ba7139ff70e51dcfd09055b31930bfb5378ba4f474c12b5226d3b2c3b313397a"} Mar 10 19:09:51 crc kubenswrapper[4861]: I0310 19:09:51.469442 4861 generic.go:334] "Generic (PLEG): container finished" podID="4b2a596e-34ef-40d8-abd9-a8145086a6b0" containerID="960d469098c306d03a004474f14e0fa70ddd0b37ac373fac70d6eddda456c35f" exitCode=0 Mar 10 19:09:51 crc kubenswrapper[4861]: I0310 19:09:51.469500 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-4zsw2" event={"ID":"4b2a596e-34ef-40d8-abd9-a8145086a6b0","Type":"ContainerDied","Data":"960d469098c306d03a004474f14e0fa70ddd0b37ac373fac70d6eddda456c35f"} Mar 10 19:09:51 crc kubenswrapper[4861]: I0310 19:09:51.471186 4861 generic.go:334] "Generic (PLEG): container finished" podID="6982db5b-c829-4309-897e-27fd0b3f2d6f" containerID="69bafd7dad1f5a2b95d84e1f5051045a4ea3a3973fecf9fd65590141943352de" exitCode=0 Mar 10 19:09:51 crc kubenswrapper[4861]: I0310 19:09:51.471236 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-e6bf-account-create-update-4flvn" event={"ID":"6982db5b-c829-4309-897e-27fd0b3f2d6f","Type":"ContainerDied","Data":"69bafd7dad1f5a2b95d84e1f5051045a4ea3a3973fecf9fd65590141943352de"} Mar 10 19:09:51 crc kubenswrapper[4861]: I0310 19:09:51.471253 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-e6bf-account-create-update-4flvn" event={"ID":"6982db5b-c829-4309-897e-27fd0b3f2d6f","Type":"ContainerStarted","Data":"74c80322f11497fe268f24aa8c4faba227f3f1ae89ff8274c54480ce9a78ea07"} Mar 10 19:09:51 crc kubenswrapper[4861]: I0310 19:09:51.472624 4861 
generic.go:334] "Generic (PLEG): container finished" podID="cdee8a29-d4d1-461b-9eeb-5672c3a6e396" containerID="048bd7dd923085ffacce2a66e4d63d31afb2850a9c661487d041fc5c2b7edd59" exitCode=0 Mar 10 19:09:51 crc kubenswrapper[4861]: I0310 19:09:51.472686 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-f9e6-account-create-update-mh4cn" event={"ID":"cdee8a29-d4d1-461b-9eeb-5672c3a6e396","Type":"ContainerDied","Data":"048bd7dd923085ffacce2a66e4d63d31afb2850a9c661487d041fc5c2b7edd59"} Mar 10 19:09:51 crc kubenswrapper[4861]: I0310 19:09:51.472772 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-f9e6-account-create-update-mh4cn" event={"ID":"cdee8a29-d4d1-461b-9eeb-5672c3a6e396","Type":"ContainerStarted","Data":"b506bc9467258099fafa2e7bc44c4afef3465cec0e8c4f3ef5e28c4ccc54d54c"} Mar 10 19:09:51 crc kubenswrapper[4861]: I0310 19:09:51.473856 4861 generic.go:334] "Generic (PLEG): container finished" podID="e735d2c0-7ac3-4706-b9cb-a53ada8a364b" containerID="abf5b871d4a2a457c99ae258708caf4c2792ff3f5f74cbe1f6603b5e8ac728b8" exitCode=0 Mar 10 19:09:51 crc kubenswrapper[4861]: I0310 19:09:51.473916 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-tgzx6" event={"ID":"e735d2c0-7ac3-4706-b9cb-a53ada8a364b","Type":"ContainerDied","Data":"abf5b871d4a2a457c99ae258708caf4c2792ff3f5f74cbe1f6603b5e8ac728b8"} Mar 10 19:09:51 crc kubenswrapper[4861]: I0310 19:09:51.473936 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-tgzx6" event={"ID":"e735d2c0-7ac3-4706-b9cb-a53ada8a364b","Type":"ContainerStarted","Data":"3e24b52ab7f614f7260b7c38799f88b58b2433016fcc851636e96731f622f2b9"} Mar 10 19:09:51 crc kubenswrapper[4861]: I0310 19:09:51.475226 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7890-account-create-update-2c6b8" 
event={"ID":"9e051309-5b38-4162-915b-7591f96ccddf","Type":"ContainerStarted","Data":"7803ce28300aa191ada57c67092ad96a7b0054358e22b987a4af6e101f44b55e"} Mar 10 19:09:51 crc kubenswrapper[4861]: I0310 19:09:51.475290 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7890-account-create-update-2c6b8" event={"ID":"9e051309-5b38-4162-915b-7591f96ccddf","Type":"ContainerStarted","Data":"32df14f658c853b7d921fd033da554ef6f306723695badfce4a033544eebd695"} Mar 10 19:09:51 crc kubenswrapper[4861]: I0310 19:09:51.476134 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-2tvlz" event={"ID":"7dd3c011-e602-441e-9682-213853dbb095","Type":"ContainerStarted","Data":"d8925b590c165bcab12d3d0eaab723b4a92b531e564af5d5d4741bb1847d7419"} Mar 10 19:09:51 crc kubenswrapper[4861]: I0310 19:09:51.544897 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-7890-account-create-update-2c6b8" podStartSLOduration=1.5448672449999998 podStartE2EDuration="1.544867245s" podCreationTimestamp="2026-03-10 19:09:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-10 19:09:51.544288769 +0000 UTC m=+1335.307724779" watchObservedRunningTime="2026-03-10 19:09:51.544867245 +0000 UTC m=+1335.308303195" Mar 10 19:09:51 crc kubenswrapper[4861]: I0310 19:09:51.993657 4861 patch_prober.go:28] interesting pod/machine-config-daemon-qttbr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 10 19:09:51 crc kubenswrapper[4861]: I0310 19:09:51.993722 4861 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qttbr" podUID="771189c2-452d-4204-a0b7-abfe9ba62bd0" 
containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 10 19:09:52 crc kubenswrapper[4861]: I0310 19:09:52.491688 4861 generic.go:334] "Generic (PLEG): container finished" podID="9e051309-5b38-4162-915b-7591f96ccddf" containerID="7803ce28300aa191ada57c67092ad96a7b0054358e22b987a4af6e101f44b55e" exitCode=0 Mar 10 19:09:52 crc kubenswrapper[4861]: I0310 19:09:52.492043 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7890-account-create-update-2c6b8" event={"ID":"9e051309-5b38-4162-915b-7591f96ccddf","Type":"ContainerDied","Data":"7803ce28300aa191ada57c67092ad96a7b0054358e22b987a4af6e101f44b55e"} Mar 10 19:09:55 crc kubenswrapper[4861]: I0310 19:09:55.524631 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-e6bf-account-create-update-4flvn" event={"ID":"6982db5b-c829-4309-897e-27fd0b3f2d6f","Type":"ContainerDied","Data":"74c80322f11497fe268f24aa8c4faba227f3f1ae89ff8274c54480ce9a78ea07"} Mar 10 19:09:55 crc kubenswrapper[4861]: I0310 19:09:55.525159 4861 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="74c80322f11497fe268f24aa8c4faba227f3f1ae89ff8274c54480ce9a78ea07" Mar 10 19:09:55 crc kubenswrapper[4861]: I0310 19:09:55.528536 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-f9e6-account-create-update-mh4cn" event={"ID":"cdee8a29-d4d1-461b-9eeb-5672c3a6e396","Type":"ContainerDied","Data":"b506bc9467258099fafa2e7bc44c4afef3465cec0e8c4f3ef5e28c4ccc54d54c"} Mar 10 19:09:55 crc kubenswrapper[4861]: I0310 19:09:55.528598 4861 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b506bc9467258099fafa2e7bc44c4afef3465cec0e8c4f3ef5e28c4ccc54d54c" Mar 10 19:09:55 crc kubenswrapper[4861]: I0310 19:09:55.530715 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-tgzx6" 
event={"ID":"e735d2c0-7ac3-4706-b9cb-a53ada8a364b","Type":"ContainerDied","Data":"3e24b52ab7f614f7260b7c38799f88b58b2433016fcc851636e96731f622f2b9"} Mar 10 19:09:55 crc kubenswrapper[4861]: I0310 19:09:55.530735 4861 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3e24b52ab7f614f7260b7c38799f88b58b2433016fcc851636e96731f622f2b9" Mar 10 19:09:55 crc kubenswrapper[4861]: I0310 19:09:55.533975 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7890-account-create-update-2c6b8" event={"ID":"9e051309-5b38-4162-915b-7591f96ccddf","Type":"ContainerDied","Data":"32df14f658c853b7d921fd033da554ef6f306723695badfce4a033544eebd695"} Mar 10 19:09:55 crc kubenswrapper[4861]: I0310 19:09:55.534296 4861 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="32df14f658c853b7d921fd033da554ef6f306723695badfce4a033544eebd695" Mar 10 19:09:55 crc kubenswrapper[4861]: I0310 19:09:55.536232 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-7ddxj" event={"ID":"488ab37a-5f47-4ea7-a5d4-7ed0780f7d4b","Type":"ContainerDied","Data":"ba7139ff70e51dcfd09055b31930bfb5378ba4f474c12b5226d3b2c3b313397a"} Mar 10 19:09:55 crc kubenswrapper[4861]: I0310 19:09:55.536263 4861 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ba7139ff70e51dcfd09055b31930bfb5378ba4f474c12b5226d3b2c3b313397a" Mar 10 19:09:55 crc kubenswrapper[4861]: I0310 19:09:55.542731 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-4zsw2" event={"ID":"4b2a596e-34ef-40d8-abd9-a8145086a6b0","Type":"ContainerDied","Data":"63cf4bacfccd76f5ee8171319e14ab4217e64fe4b9d626f12414aa59ad3c7264"} Mar 10 19:09:55 crc kubenswrapper[4861]: I0310 19:09:55.542752 4861 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="63cf4bacfccd76f5ee8171319e14ab4217e64fe4b9d626f12414aa59ad3c7264" Mar 10 19:09:55 crc kubenswrapper[4861]: 
I0310 19:09:55.703351 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-tgzx6" Mar 10 19:09:55 crc kubenswrapper[4861]: I0310 19:09:55.718828 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-f9e6-account-create-update-mh4cn" Mar 10 19:09:55 crc kubenswrapper[4861]: I0310 19:09:55.751233 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e735d2c0-7ac3-4706-b9cb-a53ada8a364b-operator-scripts\") pod \"e735d2c0-7ac3-4706-b9cb-a53ada8a364b\" (UID: \"e735d2c0-7ac3-4706-b9cb-a53ada8a364b\") " Mar 10 19:09:55 crc kubenswrapper[4861]: I0310 19:09:55.751410 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p6krb\" (UniqueName: \"kubernetes.io/projected/e735d2c0-7ac3-4706-b9cb-a53ada8a364b-kube-api-access-p6krb\") pod \"e735d2c0-7ac3-4706-b9cb-a53ada8a364b\" (UID: \"e735d2c0-7ac3-4706-b9cb-a53ada8a364b\") " Mar 10 19:09:55 crc kubenswrapper[4861]: I0310 19:09:55.751764 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e735d2c0-7ac3-4706-b9cb-a53ada8a364b-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "e735d2c0-7ac3-4706-b9cb-a53ada8a364b" (UID: "e735d2c0-7ac3-4706-b9cb-a53ada8a364b"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 10 19:09:55 crc kubenswrapper[4861]: I0310 19:09:55.751857 4861 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e735d2c0-7ac3-4706-b9cb-a53ada8a364b-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 10 19:09:55 crc kubenswrapper[4861]: I0310 19:09:55.761136 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e735d2c0-7ac3-4706-b9cb-a53ada8a364b-kube-api-access-p6krb" (OuterVolumeSpecName: "kube-api-access-p6krb") pod "e735d2c0-7ac3-4706-b9cb-a53ada8a364b" (UID: "e735d2c0-7ac3-4706-b9cb-a53ada8a364b"). InnerVolumeSpecName "kube-api-access-p6krb". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 10 19:09:55 crc kubenswrapper[4861]: I0310 19:09:55.773437 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-7890-account-create-update-2c6b8" Mar 10 19:09:55 crc kubenswrapper[4861]: I0310 19:09:55.797301 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-e6bf-account-create-update-4flvn" Mar 10 19:09:55 crc kubenswrapper[4861]: I0310 19:09:55.804451 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-4zsw2" Mar 10 19:09:55 crc kubenswrapper[4861]: I0310 19:09:55.817876 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-7ddxj" Mar 10 19:09:55 crc kubenswrapper[4861]: I0310 19:09:55.853554 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fbltq\" (UniqueName: \"kubernetes.io/projected/488ab37a-5f47-4ea7-a5d4-7ed0780f7d4b-kube-api-access-fbltq\") pod \"488ab37a-5f47-4ea7-a5d4-7ed0780f7d4b\" (UID: \"488ab37a-5f47-4ea7-a5d4-7ed0780f7d4b\") " Mar 10 19:09:55 crc kubenswrapper[4861]: I0310 19:09:55.853653 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nv88x\" (UniqueName: \"kubernetes.io/projected/4b2a596e-34ef-40d8-abd9-a8145086a6b0-kube-api-access-nv88x\") pod \"4b2a596e-34ef-40d8-abd9-a8145086a6b0\" (UID: \"4b2a596e-34ef-40d8-abd9-a8145086a6b0\") " Mar 10 19:09:55 crc kubenswrapper[4861]: I0310 19:09:55.853738 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-746h5\" (UniqueName: \"kubernetes.io/projected/6982db5b-c829-4309-897e-27fd0b3f2d6f-kube-api-access-746h5\") pod \"6982db5b-c829-4309-897e-27fd0b3f2d6f\" (UID: \"6982db5b-c829-4309-897e-27fd0b3f2d6f\") " Mar 10 19:09:55 crc kubenswrapper[4861]: I0310 19:09:55.853782 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9e051309-5b38-4162-915b-7591f96ccddf-operator-scripts\") pod \"9e051309-5b38-4162-915b-7591f96ccddf\" (UID: \"9e051309-5b38-4162-915b-7591f96ccddf\") " Mar 10 19:09:55 crc kubenswrapper[4861]: I0310 19:09:55.853824 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4b2a596e-34ef-40d8-abd9-a8145086a6b0-operator-scripts\") pod \"4b2a596e-34ef-40d8-abd9-a8145086a6b0\" (UID: \"4b2a596e-34ef-40d8-abd9-a8145086a6b0\") " Mar 10 19:09:55 crc kubenswrapper[4861]: I0310 19:09:55.853891 4861 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-5rf52\" (UniqueName: \"kubernetes.io/projected/cdee8a29-d4d1-461b-9eeb-5672c3a6e396-kube-api-access-5rf52\") pod \"cdee8a29-d4d1-461b-9eeb-5672c3a6e396\" (UID: \"cdee8a29-d4d1-461b-9eeb-5672c3a6e396\") " Mar 10 19:09:55 crc kubenswrapper[4861]: I0310 19:09:55.853930 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6982db5b-c829-4309-897e-27fd0b3f2d6f-operator-scripts\") pod \"6982db5b-c829-4309-897e-27fd0b3f2d6f\" (UID: \"6982db5b-c829-4309-897e-27fd0b3f2d6f\") " Mar 10 19:09:55 crc kubenswrapper[4861]: I0310 19:09:55.854006 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j4sqw\" (UniqueName: \"kubernetes.io/projected/9e051309-5b38-4162-915b-7591f96ccddf-kube-api-access-j4sqw\") pod \"9e051309-5b38-4162-915b-7591f96ccddf\" (UID: \"9e051309-5b38-4162-915b-7591f96ccddf\") " Mar 10 19:09:55 crc kubenswrapper[4861]: I0310 19:09:55.854044 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cdee8a29-d4d1-461b-9eeb-5672c3a6e396-operator-scripts\") pod \"cdee8a29-d4d1-461b-9eeb-5672c3a6e396\" (UID: \"cdee8a29-d4d1-461b-9eeb-5672c3a6e396\") " Mar 10 19:09:55 crc kubenswrapper[4861]: I0310 19:09:55.854120 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/488ab37a-5f47-4ea7-a5d4-7ed0780f7d4b-operator-scripts\") pod \"488ab37a-5f47-4ea7-a5d4-7ed0780f7d4b\" (UID: \"488ab37a-5f47-4ea7-a5d4-7ed0780f7d4b\") " Mar 10 19:09:55 crc kubenswrapper[4861]: I0310 19:09:55.854340 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9e051309-5b38-4162-915b-7591f96ccddf-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod 
"9e051309-5b38-4162-915b-7591f96ccddf" (UID: "9e051309-5b38-4162-915b-7591f96ccddf"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 10 19:09:55 crc kubenswrapper[4861]: I0310 19:09:55.854449 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4b2a596e-34ef-40d8-abd9-a8145086a6b0-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "4b2a596e-34ef-40d8-abd9-a8145086a6b0" (UID: "4b2a596e-34ef-40d8-abd9-a8145086a6b0"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 10 19:09:55 crc kubenswrapper[4861]: I0310 19:09:55.854771 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6982db5b-c829-4309-897e-27fd0b3f2d6f-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "6982db5b-c829-4309-897e-27fd0b3f2d6f" (UID: "6982db5b-c829-4309-897e-27fd0b3f2d6f"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 10 19:09:55 crc kubenswrapper[4861]: I0310 19:09:55.854821 4861 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9e051309-5b38-4162-915b-7591f96ccddf-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 10 19:09:55 crc kubenswrapper[4861]: I0310 19:09:55.854837 4861 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4b2a596e-34ef-40d8-abd9-a8145086a6b0-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 10 19:09:55 crc kubenswrapper[4861]: I0310 19:09:55.854848 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p6krb\" (UniqueName: \"kubernetes.io/projected/e735d2c0-7ac3-4706-b9cb-a53ada8a364b-kube-api-access-p6krb\") on node \"crc\" DevicePath \"\"" Mar 10 19:09:55 crc kubenswrapper[4861]: I0310 19:09:55.854918 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/488ab37a-5f47-4ea7-a5d4-7ed0780f7d4b-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "488ab37a-5f47-4ea7-a5d4-7ed0780f7d4b" (UID: "488ab37a-5f47-4ea7-a5d4-7ed0780f7d4b"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 10 19:09:55 crc kubenswrapper[4861]: I0310 19:09:55.855802 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cdee8a29-d4d1-461b-9eeb-5672c3a6e396-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "cdee8a29-d4d1-461b-9eeb-5672c3a6e396" (UID: "cdee8a29-d4d1-461b-9eeb-5672c3a6e396"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 10 19:09:55 crc kubenswrapper[4861]: I0310 19:09:55.862422 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6982db5b-c829-4309-897e-27fd0b3f2d6f-kube-api-access-746h5" (OuterVolumeSpecName: "kube-api-access-746h5") pod "6982db5b-c829-4309-897e-27fd0b3f2d6f" (UID: "6982db5b-c829-4309-897e-27fd0b3f2d6f"). InnerVolumeSpecName "kube-api-access-746h5". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 10 19:09:55 crc kubenswrapper[4861]: I0310 19:09:55.863280 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cdee8a29-d4d1-461b-9eeb-5672c3a6e396-kube-api-access-5rf52" (OuterVolumeSpecName: "kube-api-access-5rf52") pod "cdee8a29-d4d1-461b-9eeb-5672c3a6e396" (UID: "cdee8a29-d4d1-461b-9eeb-5672c3a6e396"). InnerVolumeSpecName "kube-api-access-5rf52". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 10 19:09:55 crc kubenswrapper[4861]: I0310 19:09:55.865870 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e051309-5b38-4162-915b-7591f96ccddf-kube-api-access-j4sqw" (OuterVolumeSpecName: "kube-api-access-j4sqw") pod "9e051309-5b38-4162-915b-7591f96ccddf" (UID: "9e051309-5b38-4162-915b-7591f96ccddf"). InnerVolumeSpecName "kube-api-access-j4sqw". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 10 19:09:55 crc kubenswrapper[4861]: I0310 19:09:55.866006 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/488ab37a-5f47-4ea7-a5d4-7ed0780f7d4b-kube-api-access-fbltq" (OuterVolumeSpecName: "kube-api-access-fbltq") pod "488ab37a-5f47-4ea7-a5d4-7ed0780f7d4b" (UID: "488ab37a-5f47-4ea7-a5d4-7ed0780f7d4b"). InnerVolumeSpecName "kube-api-access-fbltq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 10 19:09:55 crc kubenswrapper[4861]: I0310 19:09:55.866432 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4b2a596e-34ef-40d8-abd9-a8145086a6b0-kube-api-access-nv88x" (OuterVolumeSpecName: "kube-api-access-nv88x") pod "4b2a596e-34ef-40d8-abd9-a8145086a6b0" (UID: "4b2a596e-34ef-40d8-abd9-a8145086a6b0"). InnerVolumeSpecName "kube-api-access-nv88x". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 10 19:09:55 crc kubenswrapper[4861]: I0310 19:09:55.955784 4861 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/488ab37a-5f47-4ea7-a5d4-7ed0780f7d4b-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 10 19:09:55 crc kubenswrapper[4861]: I0310 19:09:55.955817 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fbltq\" (UniqueName: \"kubernetes.io/projected/488ab37a-5f47-4ea7-a5d4-7ed0780f7d4b-kube-api-access-fbltq\") on node \"crc\" DevicePath \"\"" Mar 10 19:09:55 crc kubenswrapper[4861]: I0310 19:09:55.955830 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nv88x\" (UniqueName: \"kubernetes.io/projected/4b2a596e-34ef-40d8-abd9-a8145086a6b0-kube-api-access-nv88x\") on node \"crc\" DevicePath \"\"" Mar 10 19:09:55 crc kubenswrapper[4861]: I0310 19:09:55.955839 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-746h5\" (UniqueName: \"kubernetes.io/projected/6982db5b-c829-4309-897e-27fd0b3f2d6f-kube-api-access-746h5\") on node \"crc\" DevicePath \"\"" Mar 10 19:09:55 crc kubenswrapper[4861]: I0310 19:09:55.955849 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5rf52\" (UniqueName: \"kubernetes.io/projected/cdee8a29-d4d1-461b-9eeb-5672c3a6e396-kube-api-access-5rf52\") on node \"crc\" DevicePath \"\"" Mar 10 19:09:55 crc kubenswrapper[4861]: I0310 19:09:55.955858 4861 
reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6982db5b-c829-4309-897e-27fd0b3f2d6f-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 10 19:09:55 crc kubenswrapper[4861]: I0310 19:09:55.955866 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j4sqw\" (UniqueName: \"kubernetes.io/projected/9e051309-5b38-4162-915b-7591f96ccddf-kube-api-access-j4sqw\") on node \"crc\" DevicePath \"\"" Mar 10 19:09:55 crc kubenswrapper[4861]: I0310 19:09:55.955875 4861 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cdee8a29-d4d1-461b-9eeb-5672c3a6e396-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 10 19:09:56 crc kubenswrapper[4861]: I0310 19:09:56.553432 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-2tvlz" event={"ID":"7dd3c011-e602-441e-9682-213853dbb095","Type":"ContainerStarted","Data":"ec276b3d208faaad4a1ba96c2620fedc6e5b7e5ca9dca86369c390bbfb6ddfdf"} Mar 10 19:09:56 crc kubenswrapper[4861]: I0310 19:09:56.563316 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-7ddxj" Mar 10 19:09:56 crc kubenswrapper[4861]: I0310 19:09:56.563422 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-e6bf-account-create-update-4flvn" Mar 10 19:09:56 crc kubenswrapper[4861]: I0310 19:09:56.563746 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-4zsw2" Mar 10 19:09:56 crc kubenswrapper[4861]: I0310 19:09:56.563787 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-tgzx6" Mar 10 19:09:56 crc kubenswrapper[4861]: I0310 19:09:56.563817 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"04bbfc10-7f55-45a5-8a53-70e994a09bc9","Type":"ContainerStarted","Data":"11a99a6a7e0db5891a44034177a04d1985fa20e0f2d3290918a14ddd4959f420"} Mar 10 19:09:56 crc kubenswrapper[4861]: I0310 19:09:56.563869 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"04bbfc10-7f55-45a5-8a53-70e994a09bc9","Type":"ContainerStarted","Data":"55fdaa1d03e36a25dbe23991f3c3f1f316d93446c624fccb0f36484e914ef862"} Mar 10 19:09:56 crc kubenswrapper[4861]: I0310 19:09:56.563889 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"04bbfc10-7f55-45a5-8a53-70e994a09bc9","Type":"ContainerStarted","Data":"2454ccf10166b5528edb6a3e86ffa92bbe7f052583e5e86477cc4d4a7bbd47cb"} Mar 10 19:09:56 crc kubenswrapper[4861]: I0310 19:09:56.563906 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"04bbfc10-7f55-45a5-8a53-70e994a09bc9","Type":"ContainerStarted","Data":"63a9bdbdc82026e9332fbae1efef4878f557997e81cdaf15c41418eb885ff288"} Mar 10 19:09:56 crc kubenswrapper[4861]: I0310 19:09:56.564240 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-7890-account-create-update-2c6b8" Mar 10 19:09:56 crc kubenswrapper[4861]: I0310 19:09:56.564341 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-f9e6-account-create-update-mh4cn" Mar 10 19:09:56 crc kubenswrapper[4861]: I0310 19:09:56.576162 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-sync-2tvlz" podStartSLOduration=3.012511134 podStartE2EDuration="7.576139522s" podCreationTimestamp="2026-03-10 19:09:49 +0000 UTC" firstStartedPulling="2026-03-10 19:09:50.974014477 +0000 UTC m=+1334.737450427" lastFinishedPulling="2026-03-10 19:09:55.537642855 +0000 UTC m=+1339.301078815" observedRunningTime="2026-03-10 19:09:56.57137793 +0000 UTC m=+1340.334813900" watchObservedRunningTime="2026-03-10 19:09:56.576139522 +0000 UTC m=+1340.339575522" Mar 10 19:09:56 crc kubenswrapper[4861]: I0310 19:09:56.637804 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-storage-0" podStartSLOduration=19.510504437 podStartE2EDuration="52.637786655s" podCreationTimestamp="2026-03-10 19:09:04 +0000 UTC" firstStartedPulling="2026-03-10 19:09:22.393337463 +0000 UTC m=+1306.156773423" lastFinishedPulling="2026-03-10 19:09:55.520619671 +0000 UTC m=+1339.284055641" observedRunningTime="2026-03-10 19:09:56.634911645 +0000 UTC m=+1340.398347635" watchObservedRunningTime="2026-03-10 19:09:56.637786655 +0000 UTC m=+1340.401222605" Mar 10 19:09:56 crc kubenswrapper[4861]: I0310 19:09:56.940681 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7f58d6bb6f-xlfwm"] Mar 10 19:09:56 crc kubenswrapper[4861]: I0310 19:09:56.941081 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7f58d6bb6f-xlfwm" podUID="4d90cb21-0892-4201-90f0-b08526ab3490" containerName="dnsmasq-dns" containerID="cri-o://f23c7134bfff69cea46626592bad9921be58f8d8dc546f2edf3d484519286feb" gracePeriod=10 Mar 10 19:09:56 crc kubenswrapper[4861]: I0310 19:09:56.945943 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack/dnsmasq-dns-7f58d6bb6f-xlfwm" Mar 10 19:09:56 crc kubenswrapper[4861]: I0310 19:09:56.975435 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-75c886f8b5-v4qlh"] Mar 10 19:09:56 crc kubenswrapper[4861]: E0310 19:09:56.975812 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6982db5b-c829-4309-897e-27fd0b3f2d6f" containerName="mariadb-account-create-update" Mar 10 19:09:56 crc kubenswrapper[4861]: I0310 19:09:56.975835 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="6982db5b-c829-4309-897e-27fd0b3f2d6f" containerName="mariadb-account-create-update" Mar 10 19:09:56 crc kubenswrapper[4861]: E0310 19:09:56.975857 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4b2a596e-34ef-40d8-abd9-a8145086a6b0" containerName="mariadb-database-create" Mar 10 19:09:56 crc kubenswrapper[4861]: I0310 19:09:56.975868 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="4b2a596e-34ef-40d8-abd9-a8145086a6b0" containerName="mariadb-database-create" Mar 10 19:09:56 crc kubenswrapper[4861]: E0310 19:09:56.975881 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9e051309-5b38-4162-915b-7591f96ccddf" containerName="mariadb-account-create-update" Mar 10 19:09:56 crc kubenswrapper[4861]: I0310 19:09:56.975890 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e051309-5b38-4162-915b-7591f96ccddf" containerName="mariadb-account-create-update" Mar 10 19:09:56 crc kubenswrapper[4861]: E0310 19:09:56.975904 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="488ab37a-5f47-4ea7-a5d4-7ed0780f7d4b" containerName="mariadb-database-create" Mar 10 19:09:56 crc kubenswrapper[4861]: I0310 19:09:56.975912 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="488ab37a-5f47-4ea7-a5d4-7ed0780f7d4b" containerName="mariadb-database-create" Mar 10 19:09:56 crc kubenswrapper[4861]: E0310 19:09:56.975946 4861 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="e735d2c0-7ac3-4706-b9cb-a53ada8a364b" containerName="mariadb-database-create" Mar 10 19:09:56 crc kubenswrapper[4861]: I0310 19:09:56.975955 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="e735d2c0-7ac3-4706-b9cb-a53ada8a364b" containerName="mariadb-database-create" Mar 10 19:09:56 crc kubenswrapper[4861]: E0310 19:09:56.975971 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cdee8a29-d4d1-461b-9eeb-5672c3a6e396" containerName="mariadb-account-create-update" Mar 10 19:09:56 crc kubenswrapper[4861]: I0310 19:09:56.975978 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="cdee8a29-d4d1-461b-9eeb-5672c3a6e396" containerName="mariadb-account-create-update" Mar 10 19:09:56 crc kubenswrapper[4861]: I0310 19:09:56.976177 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="9e051309-5b38-4162-915b-7591f96ccddf" containerName="mariadb-account-create-update" Mar 10 19:09:56 crc kubenswrapper[4861]: I0310 19:09:56.976195 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="cdee8a29-d4d1-461b-9eeb-5672c3a6e396" containerName="mariadb-account-create-update" Mar 10 19:09:56 crc kubenswrapper[4861]: I0310 19:09:56.976215 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="6982db5b-c829-4309-897e-27fd0b3f2d6f" containerName="mariadb-account-create-update" Mar 10 19:09:56 crc kubenswrapper[4861]: I0310 19:09:56.976227 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="4b2a596e-34ef-40d8-abd9-a8145086a6b0" containerName="mariadb-database-create" Mar 10 19:09:56 crc kubenswrapper[4861]: I0310 19:09:56.976240 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="e735d2c0-7ac3-4706-b9cb-a53ada8a364b" containerName="mariadb-database-create" Mar 10 19:09:56 crc kubenswrapper[4861]: I0310 19:09:56.976252 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="488ab37a-5f47-4ea7-a5d4-7ed0780f7d4b" 
containerName="mariadb-database-create" Mar 10 19:09:56 crc kubenswrapper[4861]: I0310 19:09:56.977363 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-75c886f8b5-v4qlh" Mar 10 19:09:56 crc kubenswrapper[4861]: I0310 19:09:56.979426 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-swift-storage-0" Mar 10 19:09:56 crc kubenswrapper[4861]: I0310 19:09:56.992844 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-75c886f8b5-v4qlh"] Mar 10 19:09:57 crc kubenswrapper[4861]: I0310 19:09:57.086510 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/dca977fd-c0e9-4ef9-89ab-f2cd7a9505a9-dns-svc\") pod \"dnsmasq-dns-75c886f8b5-v4qlh\" (UID: \"dca977fd-c0e9-4ef9-89ab-f2cd7a9505a9\") " pod="openstack/dnsmasq-dns-75c886f8b5-v4qlh" Mar 10 19:09:57 crc kubenswrapper[4861]: I0310 19:09:57.086580 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/dca977fd-c0e9-4ef9-89ab-f2cd7a9505a9-dns-swift-storage-0\") pod \"dnsmasq-dns-75c886f8b5-v4qlh\" (UID: \"dca977fd-c0e9-4ef9-89ab-f2cd7a9505a9\") " pod="openstack/dnsmasq-dns-75c886f8b5-v4qlh" Mar 10 19:09:57 crc kubenswrapper[4861]: I0310 19:09:57.086616 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/dca977fd-c0e9-4ef9-89ab-f2cd7a9505a9-ovsdbserver-nb\") pod \"dnsmasq-dns-75c886f8b5-v4qlh\" (UID: \"dca977fd-c0e9-4ef9-89ab-f2cd7a9505a9\") " pod="openstack/dnsmasq-dns-75c886f8b5-v4qlh" Mar 10 19:09:57 crc kubenswrapper[4861]: I0310 19:09:57.086917 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/dca977fd-c0e9-4ef9-89ab-f2cd7a9505a9-ovsdbserver-sb\") pod \"dnsmasq-dns-75c886f8b5-v4qlh\" (UID: \"dca977fd-c0e9-4ef9-89ab-f2cd7a9505a9\") " pod="openstack/dnsmasq-dns-75c886f8b5-v4qlh" Mar 10 19:09:57 crc kubenswrapper[4861]: I0310 19:09:57.086999 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cbhnv\" (UniqueName: \"kubernetes.io/projected/dca977fd-c0e9-4ef9-89ab-f2cd7a9505a9-kube-api-access-cbhnv\") pod \"dnsmasq-dns-75c886f8b5-v4qlh\" (UID: \"dca977fd-c0e9-4ef9-89ab-f2cd7a9505a9\") " pod="openstack/dnsmasq-dns-75c886f8b5-v4qlh" Mar 10 19:09:57 crc kubenswrapper[4861]: I0310 19:09:57.087117 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dca977fd-c0e9-4ef9-89ab-f2cd7a9505a9-config\") pod \"dnsmasq-dns-75c886f8b5-v4qlh\" (UID: \"dca977fd-c0e9-4ef9-89ab-f2cd7a9505a9\") " pod="openstack/dnsmasq-dns-75c886f8b5-v4qlh" Mar 10 19:09:57 crc kubenswrapper[4861]: I0310 19:09:57.137491 4861 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-7f58d6bb6f-xlfwm" podUID="4d90cb21-0892-4201-90f0-b08526ab3490" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.132:5353: connect: connection refused" Mar 10 19:09:57 crc kubenswrapper[4861]: I0310 19:09:57.188555 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/dca977fd-c0e9-4ef9-89ab-f2cd7a9505a9-dns-svc\") pod \"dnsmasq-dns-75c886f8b5-v4qlh\" (UID: \"dca977fd-c0e9-4ef9-89ab-f2cd7a9505a9\") " pod="openstack/dnsmasq-dns-75c886f8b5-v4qlh" Mar 10 19:09:57 crc kubenswrapper[4861]: I0310 19:09:57.188637 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/dca977fd-c0e9-4ef9-89ab-f2cd7a9505a9-dns-swift-storage-0\") pod 
\"dnsmasq-dns-75c886f8b5-v4qlh\" (UID: \"dca977fd-c0e9-4ef9-89ab-f2cd7a9505a9\") " pod="openstack/dnsmasq-dns-75c886f8b5-v4qlh" Mar 10 19:09:57 crc kubenswrapper[4861]: I0310 19:09:57.189903 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/dca977fd-c0e9-4ef9-89ab-f2cd7a9505a9-ovsdbserver-nb\") pod \"dnsmasq-dns-75c886f8b5-v4qlh\" (UID: \"dca977fd-c0e9-4ef9-89ab-f2cd7a9505a9\") " pod="openstack/dnsmasq-dns-75c886f8b5-v4qlh" Mar 10 19:09:57 crc kubenswrapper[4861]: I0310 19:09:57.190264 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/dca977fd-c0e9-4ef9-89ab-f2cd7a9505a9-ovsdbserver-sb\") pod \"dnsmasq-dns-75c886f8b5-v4qlh\" (UID: \"dca977fd-c0e9-4ef9-89ab-f2cd7a9505a9\") " pod="openstack/dnsmasq-dns-75c886f8b5-v4qlh" Mar 10 19:09:57 crc kubenswrapper[4861]: I0310 19:09:57.190368 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cbhnv\" (UniqueName: \"kubernetes.io/projected/dca977fd-c0e9-4ef9-89ab-f2cd7a9505a9-kube-api-access-cbhnv\") pod \"dnsmasq-dns-75c886f8b5-v4qlh\" (UID: \"dca977fd-c0e9-4ef9-89ab-f2cd7a9505a9\") " pod="openstack/dnsmasq-dns-75c886f8b5-v4qlh" Mar 10 19:09:57 crc kubenswrapper[4861]: I0310 19:09:57.190432 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dca977fd-c0e9-4ef9-89ab-f2cd7a9505a9-config\") pod \"dnsmasq-dns-75c886f8b5-v4qlh\" (UID: \"dca977fd-c0e9-4ef9-89ab-f2cd7a9505a9\") " pod="openstack/dnsmasq-dns-75c886f8b5-v4qlh" Mar 10 19:09:57 crc kubenswrapper[4861]: I0310 19:09:57.190503 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/dca977fd-c0e9-4ef9-89ab-f2cd7a9505a9-dns-swift-storage-0\") pod \"dnsmasq-dns-75c886f8b5-v4qlh\" (UID: 
\"dca977fd-c0e9-4ef9-89ab-f2cd7a9505a9\") " pod="openstack/dnsmasq-dns-75c886f8b5-v4qlh" Mar 10 19:09:57 crc kubenswrapper[4861]: I0310 19:09:57.190518 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/dca977fd-c0e9-4ef9-89ab-f2cd7a9505a9-dns-svc\") pod \"dnsmasq-dns-75c886f8b5-v4qlh\" (UID: \"dca977fd-c0e9-4ef9-89ab-f2cd7a9505a9\") " pod="openstack/dnsmasq-dns-75c886f8b5-v4qlh" Mar 10 19:09:57 crc kubenswrapper[4861]: I0310 19:09:57.191990 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/dca977fd-c0e9-4ef9-89ab-f2cd7a9505a9-ovsdbserver-sb\") pod \"dnsmasq-dns-75c886f8b5-v4qlh\" (UID: \"dca977fd-c0e9-4ef9-89ab-f2cd7a9505a9\") " pod="openstack/dnsmasq-dns-75c886f8b5-v4qlh" Mar 10 19:09:57 crc kubenswrapper[4861]: I0310 19:09:57.192057 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dca977fd-c0e9-4ef9-89ab-f2cd7a9505a9-config\") pod \"dnsmasq-dns-75c886f8b5-v4qlh\" (UID: \"dca977fd-c0e9-4ef9-89ab-f2cd7a9505a9\") " pod="openstack/dnsmasq-dns-75c886f8b5-v4qlh" Mar 10 19:09:57 crc kubenswrapper[4861]: I0310 19:09:57.192903 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/dca977fd-c0e9-4ef9-89ab-f2cd7a9505a9-ovsdbserver-nb\") pod \"dnsmasq-dns-75c886f8b5-v4qlh\" (UID: \"dca977fd-c0e9-4ef9-89ab-f2cd7a9505a9\") " pod="openstack/dnsmasq-dns-75c886f8b5-v4qlh" Mar 10 19:09:57 crc kubenswrapper[4861]: I0310 19:09:57.224868 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cbhnv\" (UniqueName: \"kubernetes.io/projected/dca977fd-c0e9-4ef9-89ab-f2cd7a9505a9-kube-api-access-cbhnv\") pod \"dnsmasq-dns-75c886f8b5-v4qlh\" (UID: \"dca977fd-c0e9-4ef9-89ab-f2cd7a9505a9\") " pod="openstack/dnsmasq-dns-75c886f8b5-v4qlh" Mar 10 19:09:57 crc 
kubenswrapper[4861]: I0310 19:09:57.341689 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-75c886f8b5-v4qlh" Mar 10 19:09:57 crc kubenswrapper[4861]: I0310 19:09:57.604148 4861 generic.go:334] "Generic (PLEG): container finished" podID="4d90cb21-0892-4201-90f0-b08526ab3490" containerID="f23c7134bfff69cea46626592bad9921be58f8d8dc546f2edf3d484519286feb" exitCode=0 Mar 10 19:09:57 crc kubenswrapper[4861]: I0310 19:09:57.604197 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7f58d6bb6f-xlfwm" event={"ID":"4d90cb21-0892-4201-90f0-b08526ab3490","Type":"ContainerDied","Data":"f23c7134bfff69cea46626592bad9921be58f8d8dc546f2edf3d484519286feb"} Mar 10 19:09:57 crc kubenswrapper[4861]: I0310 19:09:57.682297 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-75c886f8b5-v4qlh"] Mar 10 19:09:57 crc kubenswrapper[4861]: I0310 19:09:57.861853 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7f58d6bb6f-xlfwm" Mar 10 19:09:57 crc kubenswrapper[4861]: I0310 19:09:57.901834 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4d90cb21-0892-4201-90f0-b08526ab3490-config\") pod \"4d90cb21-0892-4201-90f0-b08526ab3490\" (UID: \"4d90cb21-0892-4201-90f0-b08526ab3490\") " Mar 10 19:09:57 crc kubenswrapper[4861]: I0310 19:09:57.901969 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4d90cb21-0892-4201-90f0-b08526ab3490-ovsdbserver-sb\") pod \"4d90cb21-0892-4201-90f0-b08526ab3490\" (UID: \"4d90cb21-0892-4201-90f0-b08526ab3490\") " Mar 10 19:09:57 crc kubenswrapper[4861]: I0310 19:09:57.902031 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4d90cb21-0892-4201-90f0-b08526ab3490-ovsdbserver-nb\") pod \"4d90cb21-0892-4201-90f0-b08526ab3490\" (UID: \"4d90cb21-0892-4201-90f0-b08526ab3490\") " Mar 10 19:09:57 crc kubenswrapper[4861]: I0310 19:09:57.902055 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4d90cb21-0892-4201-90f0-b08526ab3490-dns-svc\") pod \"4d90cb21-0892-4201-90f0-b08526ab3490\" (UID: \"4d90cb21-0892-4201-90f0-b08526ab3490\") " Mar 10 19:09:57 crc kubenswrapper[4861]: I0310 19:09:57.902101 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-726f8\" (UniqueName: \"kubernetes.io/projected/4d90cb21-0892-4201-90f0-b08526ab3490-kube-api-access-726f8\") pod \"4d90cb21-0892-4201-90f0-b08526ab3490\" (UID: \"4d90cb21-0892-4201-90f0-b08526ab3490\") " Mar 10 19:09:57 crc kubenswrapper[4861]: I0310 19:09:57.911829 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/4d90cb21-0892-4201-90f0-b08526ab3490-kube-api-access-726f8" (OuterVolumeSpecName: "kube-api-access-726f8") pod "4d90cb21-0892-4201-90f0-b08526ab3490" (UID: "4d90cb21-0892-4201-90f0-b08526ab3490"). InnerVolumeSpecName "kube-api-access-726f8". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 10 19:09:57 crc kubenswrapper[4861]: I0310 19:09:57.978528 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4d90cb21-0892-4201-90f0-b08526ab3490-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "4d90cb21-0892-4201-90f0-b08526ab3490" (UID: "4d90cb21-0892-4201-90f0-b08526ab3490"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 10 19:09:57 crc kubenswrapper[4861]: I0310 19:09:57.993342 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4d90cb21-0892-4201-90f0-b08526ab3490-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "4d90cb21-0892-4201-90f0-b08526ab3490" (UID: "4d90cb21-0892-4201-90f0-b08526ab3490"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 10 19:09:58 crc kubenswrapper[4861]: I0310 19:09:58.005774 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-726f8\" (UniqueName: \"kubernetes.io/projected/4d90cb21-0892-4201-90f0-b08526ab3490-kube-api-access-726f8\") on node \"crc\" DevicePath \"\"" Mar 10 19:09:58 crc kubenswrapper[4861]: I0310 19:09:58.005807 4861 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4d90cb21-0892-4201-90f0-b08526ab3490-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Mar 10 19:09:58 crc kubenswrapper[4861]: I0310 19:09:58.005816 4861 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4d90cb21-0892-4201-90f0-b08526ab3490-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Mar 10 19:09:58 crc kubenswrapper[4861]: I0310 19:09:58.018235 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4d90cb21-0892-4201-90f0-b08526ab3490-config" (OuterVolumeSpecName: "config") pod "4d90cb21-0892-4201-90f0-b08526ab3490" (UID: "4d90cb21-0892-4201-90f0-b08526ab3490"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 10 19:09:58 crc kubenswrapper[4861]: I0310 19:09:58.029974 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4d90cb21-0892-4201-90f0-b08526ab3490-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "4d90cb21-0892-4201-90f0-b08526ab3490" (UID: "4d90cb21-0892-4201-90f0-b08526ab3490"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 10 19:09:58 crc kubenswrapper[4861]: I0310 19:09:58.107587 4861 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4d90cb21-0892-4201-90f0-b08526ab3490-config\") on node \"crc\" DevicePath \"\"" Mar 10 19:09:58 crc kubenswrapper[4861]: I0310 19:09:58.107622 4861 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4d90cb21-0892-4201-90f0-b08526ab3490-dns-svc\") on node \"crc\" DevicePath \"\"" Mar 10 19:09:58 crc kubenswrapper[4861]: I0310 19:09:58.613028 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7f58d6bb6f-xlfwm" Mar 10 19:09:58 crc kubenswrapper[4861]: I0310 19:09:58.613031 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7f58d6bb6f-xlfwm" event={"ID":"4d90cb21-0892-4201-90f0-b08526ab3490","Type":"ContainerDied","Data":"e1a524f1248ecd8ccc6e71339c05668b16281024c02b7f09382a3306fb94ccf0"} Mar 10 19:09:58 crc kubenswrapper[4861]: I0310 19:09:58.613086 4861 scope.go:117] "RemoveContainer" containerID="f23c7134bfff69cea46626592bad9921be58f8d8dc546f2edf3d484519286feb" Mar 10 19:09:58 crc kubenswrapper[4861]: I0310 19:09:58.615554 4861 generic.go:334] "Generic (PLEG): container finished" podID="dca977fd-c0e9-4ef9-89ab-f2cd7a9505a9" containerID="48b25e23776ff4d9e34f3bdc68c1cd16379a049d6d13bef32e35403bf088e484" exitCode=0 Mar 10 19:09:58 crc kubenswrapper[4861]: I0310 19:09:58.615619 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-75c886f8b5-v4qlh" event={"ID":"dca977fd-c0e9-4ef9-89ab-f2cd7a9505a9","Type":"ContainerDied","Data":"48b25e23776ff4d9e34f3bdc68c1cd16379a049d6d13bef32e35403bf088e484"} Mar 10 19:09:58 crc kubenswrapper[4861]: I0310 19:09:58.615924 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-75c886f8b5-v4qlh" 
event={"ID":"dca977fd-c0e9-4ef9-89ab-f2cd7a9505a9","Type":"ContainerStarted","Data":"b4f9bdca3cd506651f665d3f6f1452350dcb624f268cba2a53db829a553e5906"} Mar 10 19:09:58 crc kubenswrapper[4861]: I0310 19:09:58.630181 4861 scope.go:117] "RemoveContainer" containerID="9efe7bf2eefef11607cf925eb301d523946da8f9620682a3bfe35f6d01e97bad" Mar 10 19:09:58 crc kubenswrapper[4861]: I0310 19:09:58.657350 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7f58d6bb6f-xlfwm"] Mar 10 19:09:58 crc kubenswrapper[4861]: I0310 19:09:58.663197 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7f58d6bb6f-xlfwm"] Mar 10 19:09:58 crc kubenswrapper[4861]: I0310 19:09:58.789926 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Mar 10 19:09:58 crc kubenswrapper[4861]: I0310 19:09:58.967654 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4d90cb21-0892-4201-90f0-b08526ab3490" path="/var/lib/kubelet/pods/4d90cb21-0892-4201-90f0-b08526ab3490/volumes" Mar 10 19:09:59 crc kubenswrapper[4861]: I0310 19:09:59.669159 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-75c886f8b5-v4qlh" event={"ID":"dca977fd-c0e9-4ef9-89ab-f2cd7a9505a9","Type":"ContainerStarted","Data":"43f123c0581adc24eaa058b8a1d394322a8ea88fb61d0307abdf6ca7abcab134"} Mar 10 19:09:59 crc kubenswrapper[4861]: I0310 19:09:59.669433 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-75c886f8b5-v4qlh" Mar 10 19:09:59 crc kubenswrapper[4861]: I0310 19:09:59.671243 4861 generic.go:334] "Generic (PLEG): container finished" podID="7dd3c011-e602-441e-9682-213853dbb095" containerID="ec276b3d208faaad4a1ba96c2620fedc6e5b7e5ca9dca86369c390bbfb6ddfdf" exitCode=0 Mar 10 19:09:59 crc kubenswrapper[4861]: I0310 19:09:59.671308 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-2tvlz" 
event={"ID":"7dd3c011-e602-441e-9682-213853dbb095","Type":"ContainerDied","Data":"ec276b3d208faaad4a1ba96c2620fedc6e5b7e5ca9dca86369c390bbfb6ddfdf"} Mar 10 19:09:59 crc kubenswrapper[4861]: I0310 19:09:59.694569 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-75c886f8b5-v4qlh" podStartSLOduration=3.694550646 podStartE2EDuration="3.694550646s" podCreationTimestamp="2026-03-10 19:09:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-10 19:09:59.687340526 +0000 UTC m=+1343.450776496" watchObservedRunningTime="2026-03-10 19:09:59.694550646 +0000 UTC m=+1343.457986616" Mar 10 19:10:00 crc kubenswrapper[4861]: I0310 19:10:00.132916 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29552830-7hhcb"] Mar 10 19:10:00 crc kubenswrapper[4861]: E0310 19:10:00.133527 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4d90cb21-0892-4201-90f0-b08526ab3490" containerName="dnsmasq-dns" Mar 10 19:10:00 crc kubenswrapper[4861]: I0310 19:10:00.133593 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="4d90cb21-0892-4201-90f0-b08526ab3490" containerName="dnsmasq-dns" Mar 10 19:10:00 crc kubenswrapper[4861]: E0310 19:10:00.133650 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4d90cb21-0892-4201-90f0-b08526ab3490" containerName="init" Mar 10 19:10:00 crc kubenswrapper[4861]: I0310 19:10:00.133699 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="4d90cb21-0892-4201-90f0-b08526ab3490" containerName="init" Mar 10 19:10:00 crc kubenswrapper[4861]: I0310 19:10:00.133933 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="4d90cb21-0892-4201-90f0-b08526ab3490" containerName="dnsmasq-dns" Mar 10 19:10:00 crc kubenswrapper[4861]: I0310 19:10:00.134530 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29552830-7hhcb" Mar 10 19:10:00 crc kubenswrapper[4861]: I0310 19:10:00.137901 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-gfbj2" Mar 10 19:10:00 crc kubenswrapper[4861]: I0310 19:10:00.138110 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 10 19:10:00 crc kubenswrapper[4861]: I0310 19:10:00.138124 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 10 19:10:00 crc kubenswrapper[4861]: I0310 19:10:00.143186 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29552830-7hhcb"] Mar 10 19:10:00 crc kubenswrapper[4861]: I0310 19:10:00.276862 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tzfm4\" (UniqueName: \"kubernetes.io/projected/4d9397cb-c907-4d9f-ae4c-3f05de941f27-kube-api-access-tzfm4\") pod \"auto-csr-approver-29552830-7hhcb\" (UID: \"4d9397cb-c907-4d9f-ae4c-3f05de941f27\") " pod="openshift-infra/auto-csr-approver-29552830-7hhcb" Mar 10 19:10:00 crc kubenswrapper[4861]: I0310 19:10:00.377904 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tzfm4\" (UniqueName: \"kubernetes.io/projected/4d9397cb-c907-4d9f-ae4c-3f05de941f27-kube-api-access-tzfm4\") pod \"auto-csr-approver-29552830-7hhcb\" (UID: \"4d9397cb-c907-4d9f-ae4c-3f05de941f27\") " pod="openshift-infra/auto-csr-approver-29552830-7hhcb" Mar 10 19:10:00 crc kubenswrapper[4861]: I0310 19:10:00.407988 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tzfm4\" (UniqueName: \"kubernetes.io/projected/4d9397cb-c907-4d9f-ae4c-3f05de941f27-kube-api-access-tzfm4\") pod \"auto-csr-approver-29552830-7hhcb\" (UID: \"4d9397cb-c907-4d9f-ae4c-3f05de941f27\") " 
pod="openshift-infra/auto-csr-approver-29552830-7hhcb" Mar 10 19:10:00 crc kubenswrapper[4861]: I0310 19:10:00.451121 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29552830-7hhcb" Mar 10 19:10:00 crc kubenswrapper[4861]: I0310 19:10:00.809478 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29552830-7hhcb"] Mar 10 19:10:01 crc kubenswrapper[4861]: I0310 19:10:01.102567 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-2tvlz" Mar 10 19:10:01 crc kubenswrapper[4861]: I0310 19:10:01.300631 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-crpsp\" (UniqueName: \"kubernetes.io/projected/7dd3c011-e602-441e-9682-213853dbb095-kube-api-access-crpsp\") pod \"7dd3c011-e602-441e-9682-213853dbb095\" (UID: \"7dd3c011-e602-441e-9682-213853dbb095\") " Mar 10 19:10:01 crc kubenswrapper[4861]: I0310 19:10:01.300700 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7dd3c011-e602-441e-9682-213853dbb095-combined-ca-bundle\") pod \"7dd3c011-e602-441e-9682-213853dbb095\" (UID: \"7dd3c011-e602-441e-9682-213853dbb095\") " Mar 10 19:10:01 crc kubenswrapper[4861]: I0310 19:10:01.300808 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7dd3c011-e602-441e-9682-213853dbb095-config-data\") pod \"7dd3c011-e602-441e-9682-213853dbb095\" (UID: \"7dd3c011-e602-441e-9682-213853dbb095\") " Mar 10 19:10:01 crc kubenswrapper[4861]: I0310 19:10:01.308179 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7dd3c011-e602-441e-9682-213853dbb095-kube-api-access-crpsp" (OuterVolumeSpecName: "kube-api-access-crpsp") pod "7dd3c011-e602-441e-9682-213853dbb095" (UID: 
"7dd3c011-e602-441e-9682-213853dbb095"). InnerVolumeSpecName "kube-api-access-crpsp". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 10 19:10:01 crc kubenswrapper[4861]: I0310 19:10:01.325213 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7dd3c011-e602-441e-9682-213853dbb095-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7dd3c011-e602-441e-9682-213853dbb095" (UID: "7dd3c011-e602-441e-9682-213853dbb095"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 10 19:10:01 crc kubenswrapper[4861]: I0310 19:10:01.369760 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7dd3c011-e602-441e-9682-213853dbb095-config-data" (OuterVolumeSpecName: "config-data") pod "7dd3c011-e602-441e-9682-213853dbb095" (UID: "7dd3c011-e602-441e-9682-213853dbb095"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 10 19:10:01 crc kubenswrapper[4861]: I0310 19:10:01.402954 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-crpsp\" (UniqueName: \"kubernetes.io/projected/7dd3c011-e602-441e-9682-213853dbb095-kube-api-access-crpsp\") on node \"crc\" DevicePath \"\"" Mar 10 19:10:01 crc kubenswrapper[4861]: I0310 19:10:01.403007 4861 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7dd3c011-e602-441e-9682-213853dbb095-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 10 19:10:01 crc kubenswrapper[4861]: I0310 19:10:01.403025 4861 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7dd3c011-e602-441e-9682-213853dbb095-config-data\") on node \"crc\" DevicePath \"\"" Mar 10 19:10:01 crc kubenswrapper[4861]: I0310 19:10:01.693273 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-infra/auto-csr-approver-29552830-7hhcb" event={"ID":"4d9397cb-c907-4d9f-ae4c-3f05de941f27","Type":"ContainerStarted","Data":"b640dea0351e1e377e74a75bc306f443b4336ff3cfddf3b22fe9e66782f4286c"} Mar 10 19:10:01 crc kubenswrapper[4861]: I0310 19:10:01.696213 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-2tvlz" event={"ID":"7dd3c011-e602-441e-9682-213853dbb095","Type":"ContainerDied","Data":"d8925b590c165bcab12d3d0eaab723b4a92b531e564af5d5d4741bb1847d7419"} Mar 10 19:10:01 crc kubenswrapper[4861]: I0310 19:10:01.696251 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-2tvlz" Mar 10 19:10:01 crc kubenswrapper[4861]: I0310 19:10:01.696260 4861 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d8925b590c165bcab12d3d0eaab723b4a92b531e564af5d5d4741bb1847d7419" Mar 10 19:10:01 crc kubenswrapper[4861]: I0310 19:10:01.928438 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-75c886f8b5-v4qlh"] Mar 10 19:10:01 crc kubenswrapper[4861]: I0310 19:10:01.928790 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-75c886f8b5-v4qlh" podUID="dca977fd-c0e9-4ef9-89ab-f2cd7a9505a9" containerName="dnsmasq-dns" containerID="cri-o://43f123c0581adc24eaa058b8a1d394322a8ea88fb61d0307abdf6ca7abcab134" gracePeriod=10 Mar 10 19:10:01 crc kubenswrapper[4861]: I0310 19:10:01.953044 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-hf9nx"] Mar 10 19:10:01 crc kubenswrapper[4861]: E0310 19:10:01.953567 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7dd3c011-e602-441e-9682-213853dbb095" containerName="keystone-db-sync" Mar 10 19:10:01 crc kubenswrapper[4861]: I0310 19:10:01.953590 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="7dd3c011-e602-441e-9682-213853dbb095" containerName="keystone-db-sync" Mar 
10 19:10:01 crc kubenswrapper[4861]: I0310 19:10:01.953788 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="7dd3c011-e602-441e-9682-213853dbb095" containerName="keystone-db-sync" Mar 10 19:10:01 crc kubenswrapper[4861]: I0310 19:10:01.954449 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-hf9nx" Mar 10 19:10:01 crc kubenswrapper[4861]: I0310 19:10:01.962038 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Mar 10 19:10:01 crc kubenswrapper[4861]: I0310 19:10:01.962185 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Mar 10 19:10:01 crc kubenswrapper[4861]: I0310 19:10:01.962366 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-xllsk" Mar 10 19:10:01 crc kubenswrapper[4861]: I0310 19:10:01.962838 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Mar 10 19:10:01 crc kubenswrapper[4861]: I0310 19:10:01.962947 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Mar 10 19:10:01 crc kubenswrapper[4861]: I0310 19:10:01.971863 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-hf9nx"] Mar 10 19:10:01 crc kubenswrapper[4861]: I0310 19:10:01.985184 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5985c59c55-pnngv"] Mar 10 19:10:02 crc kubenswrapper[4861]: I0310 19:10:02.053068 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5985c59c55-pnngv"] Mar 10 19:10:02 crc kubenswrapper[4861]: I0310 19:10:02.054963 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5985c59c55-pnngv" Mar 10 19:10:02 crc kubenswrapper[4861]: I0310 19:10:02.142608 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5686eba7-5925-4ca9-9530-5f5a152c0eb0-scripts\") pod \"keystone-bootstrap-hf9nx\" (UID: \"5686eba7-5925-4ca9-9530-5f5a152c0eb0\") " pod="openstack/keystone-bootstrap-hf9nx" Mar 10 19:10:02 crc kubenswrapper[4861]: I0310 19:10:02.142663 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bc8118e6-fec9-49af-aa6f-a36aaddebdb9-ovsdbserver-nb\") pod \"dnsmasq-dns-5985c59c55-pnngv\" (UID: \"bc8118e6-fec9-49af-aa6f-a36aaddebdb9\") " pod="openstack/dnsmasq-dns-5985c59c55-pnngv" Mar 10 19:10:02 crc kubenswrapper[4861]: I0310 19:10:02.142700 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-whth2\" (UniqueName: \"kubernetes.io/projected/5686eba7-5925-4ca9-9530-5f5a152c0eb0-kube-api-access-whth2\") pod \"keystone-bootstrap-hf9nx\" (UID: \"5686eba7-5925-4ca9-9530-5f5a152c0eb0\") " pod="openstack/keystone-bootstrap-hf9nx" Mar 10 19:10:02 crc kubenswrapper[4861]: I0310 19:10:02.142749 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5686eba7-5925-4ca9-9530-5f5a152c0eb0-config-data\") pod \"keystone-bootstrap-hf9nx\" (UID: \"5686eba7-5925-4ca9-9530-5f5a152c0eb0\") " pod="openstack/keystone-bootstrap-hf9nx" Mar 10 19:10:02 crc kubenswrapper[4861]: I0310 19:10:02.142767 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/5686eba7-5925-4ca9-9530-5f5a152c0eb0-credential-keys\") pod \"keystone-bootstrap-hf9nx\" (UID: 
\"5686eba7-5925-4ca9-9530-5f5a152c0eb0\") " pod="openstack/keystone-bootstrap-hf9nx" Mar 10 19:10:02 crc kubenswrapper[4861]: I0310 19:10:02.142785 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/bc8118e6-fec9-49af-aa6f-a36aaddebdb9-dns-swift-storage-0\") pod \"dnsmasq-dns-5985c59c55-pnngv\" (UID: \"bc8118e6-fec9-49af-aa6f-a36aaddebdb9\") " pod="openstack/dnsmasq-dns-5985c59c55-pnngv" Mar 10 19:10:02 crc kubenswrapper[4861]: I0310 19:10:02.142801 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bc8118e6-fec9-49af-aa6f-a36aaddebdb9-config\") pod \"dnsmasq-dns-5985c59c55-pnngv\" (UID: \"bc8118e6-fec9-49af-aa6f-a36aaddebdb9\") " pod="openstack/dnsmasq-dns-5985c59c55-pnngv" Mar 10 19:10:02 crc kubenswrapper[4861]: I0310 19:10:02.142814 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mskj8\" (UniqueName: \"kubernetes.io/projected/bc8118e6-fec9-49af-aa6f-a36aaddebdb9-kube-api-access-mskj8\") pod \"dnsmasq-dns-5985c59c55-pnngv\" (UID: \"bc8118e6-fec9-49af-aa6f-a36aaddebdb9\") " pod="openstack/dnsmasq-dns-5985c59c55-pnngv" Mar 10 19:10:02 crc kubenswrapper[4861]: I0310 19:10:02.142852 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5686eba7-5925-4ca9-9530-5f5a152c0eb0-combined-ca-bundle\") pod \"keystone-bootstrap-hf9nx\" (UID: \"5686eba7-5925-4ca9-9530-5f5a152c0eb0\") " pod="openstack/keystone-bootstrap-hf9nx" Mar 10 19:10:02 crc kubenswrapper[4861]: I0310 19:10:02.142874 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bc8118e6-fec9-49af-aa6f-a36aaddebdb9-dns-svc\") pod 
\"dnsmasq-dns-5985c59c55-pnngv\" (UID: \"bc8118e6-fec9-49af-aa6f-a36aaddebdb9\") " pod="openstack/dnsmasq-dns-5985c59c55-pnngv" Mar 10 19:10:02 crc kubenswrapper[4861]: I0310 19:10:02.142888 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/5686eba7-5925-4ca9-9530-5f5a152c0eb0-fernet-keys\") pod \"keystone-bootstrap-hf9nx\" (UID: \"5686eba7-5925-4ca9-9530-5f5a152c0eb0\") " pod="openstack/keystone-bootstrap-hf9nx" Mar 10 19:10:02 crc kubenswrapper[4861]: I0310 19:10:02.142907 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bc8118e6-fec9-49af-aa6f-a36aaddebdb9-ovsdbserver-sb\") pod \"dnsmasq-dns-5985c59c55-pnngv\" (UID: \"bc8118e6-fec9-49af-aa6f-a36aaddebdb9\") " pod="openstack/dnsmasq-dns-5985c59c55-pnngv" Mar 10 19:10:02 crc kubenswrapper[4861]: I0310 19:10:02.177184 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-sync-2x6cv"] Mar 10 19:10:02 crc kubenswrapper[4861]: I0310 19:10:02.181759 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-2x6cv" Mar 10 19:10:02 crc kubenswrapper[4861]: I0310 19:10:02.184754 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-2x6cv"] Mar 10 19:10:02 crc kubenswrapper[4861]: I0310 19:10:02.188691 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Mar 10 19:10:02 crc kubenswrapper[4861]: I0310 19:10:02.188842 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-sd6lp" Mar 10 19:10:02 crc kubenswrapper[4861]: I0310 19:10:02.189228 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Mar 10 19:10:02 crc kubenswrapper[4861]: I0310 19:10:02.207597 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-snwft"] Mar 10 19:10:02 crc kubenswrapper[4861]: I0310 19:10:02.208789 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-snwft" Mar 10 19:10:02 crc kubenswrapper[4861]: I0310 19:10:02.215093 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-phrmf" Mar 10 19:10:02 crc kubenswrapper[4861]: I0310 19:10:02.215277 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Mar 10 19:10:02 crc kubenswrapper[4861]: I0310 19:10:02.215390 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Mar 10 19:10:02 crc kubenswrapper[4861]: I0310 19:10:02.215426 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-snwft"] Mar 10 19:10:02 crc kubenswrapper[4861]: I0310 19:10:02.243905 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5686eba7-5925-4ca9-9530-5f5a152c0eb0-config-data\") pod \"keystone-bootstrap-hf9nx\" (UID: 
\"5686eba7-5925-4ca9-9530-5f5a152c0eb0\") " pod="openstack/keystone-bootstrap-hf9nx" Mar 10 19:10:02 crc kubenswrapper[4861]: I0310 19:10:02.243950 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/5686eba7-5925-4ca9-9530-5f5a152c0eb0-credential-keys\") pod \"keystone-bootstrap-hf9nx\" (UID: \"5686eba7-5925-4ca9-9530-5f5a152c0eb0\") " pod="openstack/keystone-bootstrap-hf9nx" Mar 10 19:10:02 crc kubenswrapper[4861]: I0310 19:10:02.243973 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/bc8118e6-fec9-49af-aa6f-a36aaddebdb9-dns-swift-storage-0\") pod \"dnsmasq-dns-5985c59c55-pnngv\" (UID: \"bc8118e6-fec9-49af-aa6f-a36aaddebdb9\") " pod="openstack/dnsmasq-dns-5985c59c55-pnngv" Mar 10 19:10:02 crc kubenswrapper[4861]: I0310 19:10:02.243992 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bc8118e6-fec9-49af-aa6f-a36aaddebdb9-config\") pod \"dnsmasq-dns-5985c59c55-pnngv\" (UID: \"bc8118e6-fec9-49af-aa6f-a36aaddebdb9\") " pod="openstack/dnsmasq-dns-5985c59c55-pnngv" Mar 10 19:10:02 crc kubenswrapper[4861]: I0310 19:10:02.244008 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mskj8\" (UniqueName: \"kubernetes.io/projected/bc8118e6-fec9-49af-aa6f-a36aaddebdb9-kube-api-access-mskj8\") pod \"dnsmasq-dns-5985c59c55-pnngv\" (UID: \"bc8118e6-fec9-49af-aa6f-a36aaddebdb9\") " pod="openstack/dnsmasq-dns-5985c59c55-pnngv" Mar 10 19:10:02 crc kubenswrapper[4861]: I0310 19:10:02.244042 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5686eba7-5925-4ca9-9530-5f5a152c0eb0-combined-ca-bundle\") pod \"keystone-bootstrap-hf9nx\" (UID: \"5686eba7-5925-4ca9-9530-5f5a152c0eb0\") " 
pod="openstack/keystone-bootstrap-hf9nx" Mar 10 19:10:02 crc kubenswrapper[4861]: I0310 19:10:02.244067 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bc8118e6-fec9-49af-aa6f-a36aaddebdb9-dns-svc\") pod \"dnsmasq-dns-5985c59c55-pnngv\" (UID: \"bc8118e6-fec9-49af-aa6f-a36aaddebdb9\") " pod="openstack/dnsmasq-dns-5985c59c55-pnngv" Mar 10 19:10:02 crc kubenswrapper[4861]: I0310 19:10:02.244082 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/5686eba7-5925-4ca9-9530-5f5a152c0eb0-fernet-keys\") pod \"keystone-bootstrap-hf9nx\" (UID: \"5686eba7-5925-4ca9-9530-5f5a152c0eb0\") " pod="openstack/keystone-bootstrap-hf9nx" Mar 10 19:10:02 crc kubenswrapper[4861]: I0310 19:10:02.244099 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bc8118e6-fec9-49af-aa6f-a36aaddebdb9-ovsdbserver-sb\") pod \"dnsmasq-dns-5985c59c55-pnngv\" (UID: \"bc8118e6-fec9-49af-aa6f-a36aaddebdb9\") " pod="openstack/dnsmasq-dns-5985c59c55-pnngv" Mar 10 19:10:02 crc kubenswrapper[4861]: I0310 19:10:02.244152 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5686eba7-5925-4ca9-9530-5f5a152c0eb0-scripts\") pod \"keystone-bootstrap-hf9nx\" (UID: \"5686eba7-5925-4ca9-9530-5f5a152c0eb0\") " pod="openstack/keystone-bootstrap-hf9nx" Mar 10 19:10:02 crc kubenswrapper[4861]: I0310 19:10:02.244179 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bc8118e6-fec9-49af-aa6f-a36aaddebdb9-ovsdbserver-nb\") pod \"dnsmasq-dns-5985c59c55-pnngv\" (UID: \"bc8118e6-fec9-49af-aa6f-a36aaddebdb9\") " pod="openstack/dnsmasq-dns-5985c59c55-pnngv" Mar 10 19:10:02 crc kubenswrapper[4861]: I0310 19:10:02.244204 4861 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-whth2\" (UniqueName: \"kubernetes.io/projected/5686eba7-5925-4ca9-9530-5f5a152c0eb0-kube-api-access-whth2\") pod \"keystone-bootstrap-hf9nx\" (UID: \"5686eba7-5925-4ca9-9530-5f5a152c0eb0\") " pod="openstack/keystone-bootstrap-hf9nx" Mar 10 19:10:02 crc kubenswrapper[4861]: I0310 19:10:02.245137 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/bc8118e6-fec9-49af-aa6f-a36aaddebdb9-dns-swift-storage-0\") pod \"dnsmasq-dns-5985c59c55-pnngv\" (UID: \"bc8118e6-fec9-49af-aa6f-a36aaddebdb9\") " pod="openstack/dnsmasq-dns-5985c59c55-pnngv" Mar 10 19:10:02 crc kubenswrapper[4861]: I0310 19:10:02.245152 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bc8118e6-fec9-49af-aa6f-a36aaddebdb9-config\") pod \"dnsmasq-dns-5985c59c55-pnngv\" (UID: \"bc8118e6-fec9-49af-aa6f-a36aaddebdb9\") " pod="openstack/dnsmasq-dns-5985c59c55-pnngv" Mar 10 19:10:02 crc kubenswrapper[4861]: I0310 19:10:02.245796 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bc8118e6-fec9-49af-aa6f-a36aaddebdb9-dns-svc\") pod \"dnsmasq-dns-5985c59c55-pnngv\" (UID: \"bc8118e6-fec9-49af-aa6f-a36aaddebdb9\") " pod="openstack/dnsmasq-dns-5985c59c55-pnngv" Mar 10 19:10:02 crc kubenswrapper[4861]: I0310 19:10:02.246071 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bc8118e6-fec9-49af-aa6f-a36aaddebdb9-ovsdbserver-sb\") pod \"dnsmasq-dns-5985c59c55-pnngv\" (UID: \"bc8118e6-fec9-49af-aa6f-a36aaddebdb9\") " pod="openstack/dnsmasq-dns-5985c59c55-pnngv" Mar 10 19:10:02 crc kubenswrapper[4861]: I0310 19:10:02.246552 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/bc8118e6-fec9-49af-aa6f-a36aaddebdb9-ovsdbserver-nb\") pod \"dnsmasq-dns-5985c59c55-pnngv\" (UID: \"bc8118e6-fec9-49af-aa6f-a36aaddebdb9\") " pod="openstack/dnsmasq-dns-5985c59c55-pnngv" Mar 10 19:10:02 crc kubenswrapper[4861]: I0310 19:10:02.252189 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5686eba7-5925-4ca9-9530-5f5a152c0eb0-combined-ca-bundle\") pod \"keystone-bootstrap-hf9nx\" (UID: \"5686eba7-5925-4ca9-9530-5f5a152c0eb0\") " pod="openstack/keystone-bootstrap-hf9nx" Mar 10 19:10:02 crc kubenswrapper[4861]: I0310 19:10:02.264953 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5686eba7-5925-4ca9-9530-5f5a152c0eb0-config-data\") pod \"keystone-bootstrap-hf9nx\" (UID: \"5686eba7-5925-4ca9-9530-5f5a152c0eb0\") " pod="openstack/keystone-bootstrap-hf9nx" Mar 10 19:10:02 crc kubenswrapper[4861]: I0310 19:10:02.265401 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5686eba7-5925-4ca9-9530-5f5a152c0eb0-scripts\") pod \"keystone-bootstrap-hf9nx\" (UID: \"5686eba7-5925-4ca9-9530-5f5a152c0eb0\") " pod="openstack/keystone-bootstrap-hf9nx" Mar 10 19:10:02 crc kubenswrapper[4861]: I0310 19:10:02.266236 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/5686eba7-5925-4ca9-9530-5f5a152c0eb0-fernet-keys\") pod \"keystone-bootstrap-hf9nx\" (UID: \"5686eba7-5925-4ca9-9530-5f5a152c0eb0\") " pod="openstack/keystone-bootstrap-hf9nx" Mar 10 19:10:02 crc kubenswrapper[4861]: I0310 19:10:02.267920 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-whth2\" (UniqueName: \"kubernetes.io/projected/5686eba7-5925-4ca9-9530-5f5a152c0eb0-kube-api-access-whth2\") pod \"keystone-bootstrap-hf9nx\" (UID: 
\"5686eba7-5925-4ca9-9530-5f5a152c0eb0\") " pod="openstack/keystone-bootstrap-hf9nx" Mar 10 19:10:02 crc kubenswrapper[4861]: I0310 19:10:02.269487 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mskj8\" (UniqueName: \"kubernetes.io/projected/bc8118e6-fec9-49af-aa6f-a36aaddebdb9-kube-api-access-mskj8\") pod \"dnsmasq-dns-5985c59c55-pnngv\" (UID: \"bc8118e6-fec9-49af-aa6f-a36aaddebdb9\") " pod="openstack/dnsmasq-dns-5985c59c55-pnngv" Mar 10 19:10:02 crc kubenswrapper[4861]: I0310 19:10:02.276077 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/5686eba7-5925-4ca9-9530-5f5a152c0eb0-credential-keys\") pod \"keystone-bootstrap-hf9nx\" (UID: \"5686eba7-5925-4ca9-9530-5f5a152c0eb0\") " pod="openstack/keystone-bootstrap-hf9nx" Mar 10 19:10:02 crc kubenswrapper[4861]: I0310 19:10:02.302772 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-sync-j5zv5"] Mar 10 19:10:02 crc kubenswrapper[4861]: I0310 19:10:02.303837 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-j5zv5" Mar 10 19:10:02 crc kubenswrapper[4861]: I0310 19:10:02.307437 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Mar 10 19:10:02 crc kubenswrapper[4861]: I0310 19:10:02.307618 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-r8n4d" Mar 10 19:10:02 crc kubenswrapper[4861]: I0310 19:10:02.308015 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-hf9nx" Mar 10 19:10:02 crc kubenswrapper[4861]: I0310 19:10:02.329766 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-j5zv5"] Mar 10 19:10:02 crc kubenswrapper[4861]: I0310 19:10:02.337341 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5985c59c55-pnngv"] Mar 10 19:10:02 crc kubenswrapper[4861]: I0310 19:10:02.337974 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5985c59c55-pnngv" Mar 10 19:10:02 crc kubenswrapper[4861]: I0310 19:10:02.370658 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/33d24034-d5c6-486f-a637-23724e8b5225-config\") pod \"neutron-db-sync-snwft\" (UID: \"33d24034-d5c6-486f-a637-23724e8b5225\") " pod="openstack/neutron-db-sync-snwft" Mar 10 19:10:02 crc kubenswrapper[4861]: I0310 19:10:02.370721 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/6369ade3-a8af-44b7-94be-736523f99512-db-sync-config-data\") pod \"cinder-db-sync-2x6cv\" (UID: \"6369ade3-a8af-44b7-94be-736523f99512\") " pod="openstack/cinder-db-sync-2x6cv" Mar 10 19:10:02 crc kubenswrapper[4861]: I0310 19:10:02.370860 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nfm6h\" (UniqueName: \"kubernetes.io/projected/6369ade3-a8af-44b7-94be-736523f99512-kube-api-access-nfm6h\") pod \"cinder-db-sync-2x6cv\" (UID: \"6369ade3-a8af-44b7-94be-736523f99512\") " pod="openstack/cinder-db-sync-2x6cv" Mar 10 19:10:02 crc kubenswrapper[4861]: I0310 19:10:02.370886 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/6369ade3-a8af-44b7-94be-736523f99512-combined-ca-bundle\") pod \"cinder-db-sync-2x6cv\" (UID: \"6369ade3-a8af-44b7-94be-736523f99512\") " pod="openstack/cinder-db-sync-2x6cv" Mar 10 19:10:02 crc kubenswrapper[4861]: I0310 19:10:02.370912 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6369ade3-a8af-44b7-94be-736523f99512-config-data\") pod \"cinder-db-sync-2x6cv\" (UID: \"6369ade3-a8af-44b7-94be-736523f99512\") " pod="openstack/cinder-db-sync-2x6cv" Mar 10 19:10:02 crc kubenswrapper[4861]: I0310 19:10:02.370931 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/33d24034-d5c6-486f-a637-23724e8b5225-combined-ca-bundle\") pod \"neutron-db-sync-snwft\" (UID: \"33d24034-d5c6-486f-a637-23724e8b5225\") " pod="openstack/neutron-db-sync-snwft" Mar 10 19:10:02 crc kubenswrapper[4861]: I0310 19:10:02.371425 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-df7sw\" (UniqueName: \"kubernetes.io/projected/33d24034-d5c6-486f-a637-23724e8b5225-kube-api-access-df7sw\") pod \"neutron-db-sync-snwft\" (UID: \"33d24034-d5c6-486f-a637-23724e8b5225\") " pod="openstack/neutron-db-sync-snwft" Mar 10 19:10:02 crc kubenswrapper[4861]: I0310 19:10:02.371464 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/6369ade3-a8af-44b7-94be-736523f99512-etc-machine-id\") pod \"cinder-db-sync-2x6cv\" (UID: \"6369ade3-a8af-44b7-94be-736523f99512\") " pod="openstack/cinder-db-sync-2x6cv" Mar 10 19:10:02 crc kubenswrapper[4861]: I0310 19:10:02.371487 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/6369ade3-a8af-44b7-94be-736523f99512-scripts\") pod \"cinder-db-sync-2x6cv\" (UID: \"6369ade3-a8af-44b7-94be-736523f99512\") " pod="openstack/cinder-db-sync-2x6cv" Mar 10 19:10:02 crc kubenswrapper[4861]: I0310 19:10:02.401701 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-ccd7c9f8f-vd6rh"] Mar 10 19:10:02 crc kubenswrapper[4861]: I0310 19:10:02.409164 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-ccd7c9f8f-vd6rh" Mar 10 19:10:02 crc kubenswrapper[4861]: I0310 19:10:02.422747 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-ccd7c9f8f-vd6rh"] Mar 10 19:10:02 crc kubenswrapper[4861]: I0310 19:10:02.437456 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Mar 10 19:10:02 crc kubenswrapper[4861]: I0310 19:10:02.447117 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Mar 10 19:10:02 crc kubenswrapper[4861]: I0310 19:10:02.455659 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Mar 10 19:10:02 crc kubenswrapper[4861]: I0310 19:10:02.463063 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Mar 10 19:10:02 crc kubenswrapper[4861]: I0310 19:10:02.463317 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Mar 10 19:10:02 crc kubenswrapper[4861]: I0310 19:10:02.463911 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-7g95t"] Mar 10 19:10:02 crc kubenswrapper[4861]: I0310 19:10:02.464886 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-7g95t" Mar 10 19:10:02 crc kubenswrapper[4861]: I0310 19:10:02.468100 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Mar 10 19:10:02 crc kubenswrapper[4861]: I0310 19:10:02.468293 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-4kqnb" Mar 10 19:10:02 crc kubenswrapper[4861]: I0310 19:10:02.468491 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Mar 10 19:10:02 crc kubenswrapper[4861]: I0310 19:10:02.472640 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-df7sw\" (UniqueName: \"kubernetes.io/projected/33d24034-d5c6-486f-a637-23724e8b5225-kube-api-access-df7sw\") pod \"neutron-db-sync-snwft\" (UID: \"33d24034-d5c6-486f-a637-23724e8b5225\") " pod="openstack/neutron-db-sync-snwft" Mar 10 19:10:02 crc kubenswrapper[4861]: I0310 19:10:02.472675 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/6369ade3-a8af-44b7-94be-736523f99512-etc-machine-id\") pod \"cinder-db-sync-2x6cv\" (UID: \"6369ade3-a8af-44b7-94be-736523f99512\") " pod="openstack/cinder-db-sync-2x6cv" Mar 10 19:10:02 crc kubenswrapper[4861]: I0310 19:10:02.472699 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6369ade3-a8af-44b7-94be-736523f99512-scripts\") pod \"cinder-db-sync-2x6cv\" (UID: \"6369ade3-a8af-44b7-94be-736523f99512\") " pod="openstack/cinder-db-sync-2x6cv" Mar 10 19:10:02 crc kubenswrapper[4861]: I0310 19:10:02.472733 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7068adc7-5930-4999-99ec-eb8ced501cd2-combined-ca-bundle\") pod 
\"barbican-db-sync-j5zv5\" (UID: \"7068adc7-5930-4999-99ec-eb8ced501cd2\") " pod="openstack/barbican-db-sync-j5zv5" Mar 10 19:10:02 crc kubenswrapper[4861]: I0310 19:10:02.472756 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/7068adc7-5930-4999-99ec-eb8ced501cd2-db-sync-config-data\") pod \"barbican-db-sync-j5zv5\" (UID: \"7068adc7-5930-4999-99ec-eb8ced501cd2\") " pod="openstack/barbican-db-sync-j5zv5" Mar 10 19:10:02 crc kubenswrapper[4861]: I0310 19:10:02.472791 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/33d24034-d5c6-486f-a637-23724e8b5225-config\") pod \"neutron-db-sync-snwft\" (UID: \"33d24034-d5c6-486f-a637-23724e8b5225\") " pod="openstack/neutron-db-sync-snwft" Mar 10 19:10:02 crc kubenswrapper[4861]: I0310 19:10:02.472811 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/6369ade3-a8af-44b7-94be-736523f99512-db-sync-config-data\") pod \"cinder-db-sync-2x6cv\" (UID: \"6369ade3-a8af-44b7-94be-736523f99512\") " pod="openstack/cinder-db-sync-2x6cv" Mar 10 19:10:02 crc kubenswrapper[4861]: I0310 19:10:02.472838 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nwh2h\" (UniqueName: \"kubernetes.io/projected/7068adc7-5930-4999-99ec-eb8ced501cd2-kube-api-access-nwh2h\") pod \"barbican-db-sync-j5zv5\" (UID: \"7068adc7-5930-4999-99ec-eb8ced501cd2\") " pod="openstack/barbican-db-sync-j5zv5" Mar 10 19:10:02 crc kubenswrapper[4861]: I0310 19:10:02.472859 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nfm6h\" (UniqueName: \"kubernetes.io/projected/6369ade3-a8af-44b7-94be-736523f99512-kube-api-access-nfm6h\") pod \"cinder-db-sync-2x6cv\" (UID: 
\"6369ade3-a8af-44b7-94be-736523f99512\") " pod="openstack/cinder-db-sync-2x6cv" Mar 10 19:10:02 crc kubenswrapper[4861]: I0310 19:10:02.472877 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6369ade3-a8af-44b7-94be-736523f99512-combined-ca-bundle\") pod \"cinder-db-sync-2x6cv\" (UID: \"6369ade3-a8af-44b7-94be-736523f99512\") " pod="openstack/cinder-db-sync-2x6cv" Mar 10 19:10:02 crc kubenswrapper[4861]: I0310 19:10:02.472895 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6369ade3-a8af-44b7-94be-736523f99512-config-data\") pod \"cinder-db-sync-2x6cv\" (UID: \"6369ade3-a8af-44b7-94be-736523f99512\") " pod="openstack/cinder-db-sync-2x6cv" Mar 10 19:10:02 crc kubenswrapper[4861]: I0310 19:10:02.472914 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/33d24034-d5c6-486f-a637-23724e8b5225-combined-ca-bundle\") pod \"neutron-db-sync-snwft\" (UID: \"33d24034-d5c6-486f-a637-23724e8b5225\") " pod="openstack/neutron-db-sync-snwft" Mar 10 19:10:02 crc kubenswrapper[4861]: I0310 19:10:02.482166 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/6369ade3-a8af-44b7-94be-736523f99512-etc-machine-id\") pod \"cinder-db-sync-2x6cv\" (UID: \"6369ade3-a8af-44b7-94be-736523f99512\") " pod="openstack/cinder-db-sync-2x6cv" Mar 10 19:10:02 crc kubenswrapper[4861]: I0310 19:10:02.485491 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/33d24034-d5c6-486f-a637-23724e8b5225-combined-ca-bundle\") pod \"neutron-db-sync-snwft\" (UID: \"33d24034-d5c6-486f-a637-23724e8b5225\") " pod="openstack/neutron-db-sync-snwft" Mar 10 19:10:02 crc kubenswrapper[4861]: I0310 
19:10:02.485627 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/6369ade3-a8af-44b7-94be-736523f99512-db-sync-config-data\") pod \"cinder-db-sync-2x6cv\" (UID: \"6369ade3-a8af-44b7-94be-736523f99512\") " pod="openstack/cinder-db-sync-2x6cv" Mar 10 19:10:02 crc kubenswrapper[4861]: I0310 19:10:02.487968 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-7g95t"] Mar 10 19:10:02 crc kubenswrapper[4861]: I0310 19:10:02.494584 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6369ade3-a8af-44b7-94be-736523f99512-config-data\") pod \"cinder-db-sync-2x6cv\" (UID: \"6369ade3-a8af-44b7-94be-736523f99512\") " pod="openstack/cinder-db-sync-2x6cv" Mar 10 19:10:02 crc kubenswrapper[4861]: I0310 19:10:02.505142 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6369ade3-a8af-44b7-94be-736523f99512-scripts\") pod \"cinder-db-sync-2x6cv\" (UID: \"6369ade3-a8af-44b7-94be-736523f99512\") " pod="openstack/cinder-db-sync-2x6cv" Mar 10 19:10:02 crc kubenswrapper[4861]: I0310 19:10:02.505890 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6369ade3-a8af-44b7-94be-736523f99512-combined-ca-bundle\") pod \"cinder-db-sync-2x6cv\" (UID: \"6369ade3-a8af-44b7-94be-736523f99512\") " pod="openstack/cinder-db-sync-2x6cv" Mar 10 19:10:02 crc kubenswrapper[4861]: I0310 19:10:02.512167 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/33d24034-d5c6-486f-a637-23724e8b5225-config\") pod \"neutron-db-sync-snwft\" (UID: \"33d24034-d5c6-486f-a637-23724e8b5225\") " pod="openstack/neutron-db-sync-snwft" Mar 10 19:10:02 crc kubenswrapper[4861]: I0310 19:10:02.514262 4861 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-df7sw\" (UniqueName: \"kubernetes.io/projected/33d24034-d5c6-486f-a637-23724e8b5225-kube-api-access-df7sw\") pod \"neutron-db-sync-snwft\" (UID: \"33d24034-d5c6-486f-a637-23724e8b5225\") " pod="openstack/neutron-db-sync-snwft" Mar 10 19:10:02 crc kubenswrapper[4861]: I0310 19:10:02.531820 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nfm6h\" (UniqueName: \"kubernetes.io/projected/6369ade3-a8af-44b7-94be-736523f99512-kube-api-access-nfm6h\") pod \"cinder-db-sync-2x6cv\" (UID: \"6369ade3-a8af-44b7-94be-736523f99512\") " pod="openstack/cinder-db-sync-2x6cv" Mar 10 19:10:02 crc kubenswrapper[4861]: I0310 19:10:02.566499 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-snwft" Mar 10 19:10:02 crc kubenswrapper[4861]: I0310 19:10:02.574805 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6fhgw\" (UniqueName: \"kubernetes.io/projected/9fb93010-cc9e-4984-91db-e36b6605fa2b-kube-api-access-6fhgw\") pod \"dnsmasq-dns-ccd7c9f8f-vd6rh\" (UID: \"9fb93010-cc9e-4984-91db-e36b6605fa2b\") " pod="openstack/dnsmasq-dns-ccd7c9f8f-vd6rh" Mar 10 19:10:02 crc kubenswrapper[4861]: I0310 19:10:02.574891 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9fb93010-cc9e-4984-91db-e36b6605fa2b-ovsdbserver-nb\") pod \"dnsmasq-dns-ccd7c9f8f-vd6rh\" (UID: \"9fb93010-cc9e-4984-91db-e36b6605fa2b\") " pod="openstack/dnsmasq-dns-ccd7c9f8f-vd6rh" Mar 10 19:10:02 crc kubenswrapper[4861]: I0310 19:10:02.574921 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6aa1372d-7e37-4d43-a96c-08fce6a5eaa1-config-data\") pod \"placement-db-sync-7g95t\" (UID: 
\"6aa1372d-7e37-4d43-a96c-08fce6a5eaa1\") " pod="openstack/placement-db-sync-7g95t" Mar 10 19:10:02 crc kubenswrapper[4861]: I0310 19:10:02.574942 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6aa1372d-7e37-4d43-a96c-08fce6a5eaa1-combined-ca-bundle\") pod \"placement-db-sync-7g95t\" (UID: \"6aa1372d-7e37-4d43-a96c-08fce6a5eaa1\") " pod="openstack/placement-db-sync-7g95t" Mar 10 19:10:02 crc kubenswrapper[4861]: I0310 19:10:02.574976 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a8b38f86-cca2-4d33-bcea-b93c80da6490-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"a8b38f86-cca2-4d33-bcea-b93c80da6490\") " pod="openstack/ceilometer-0" Mar 10 19:10:02 crc kubenswrapper[4861]: I0310 19:10:02.575011 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9fb93010-cc9e-4984-91db-e36b6605fa2b-dns-swift-storage-0\") pod \"dnsmasq-dns-ccd7c9f8f-vd6rh\" (UID: \"9fb93010-cc9e-4984-91db-e36b6605fa2b\") " pod="openstack/dnsmasq-dns-ccd7c9f8f-vd6rh" Mar 10 19:10:02 crc kubenswrapper[4861]: I0310 19:10:02.575037 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9fb93010-cc9e-4984-91db-e36b6605fa2b-config\") pod \"dnsmasq-dns-ccd7c9f8f-vd6rh\" (UID: \"9fb93010-cc9e-4984-91db-e36b6605fa2b\") " pod="openstack/dnsmasq-dns-ccd7c9f8f-vd6rh" Mar 10 19:10:02 crc kubenswrapper[4861]: I0310 19:10:02.575062 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a8b38f86-cca2-4d33-bcea-b93c80da6490-config-data\") pod \"ceilometer-0\" (UID: 
\"a8b38f86-cca2-4d33-bcea-b93c80da6490\") " pod="openstack/ceilometer-0" Mar 10 19:10:02 crc kubenswrapper[4861]: I0310 19:10:02.575092 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a8b38f86-cca2-4d33-bcea-b93c80da6490-scripts\") pod \"ceilometer-0\" (UID: \"a8b38f86-cca2-4d33-bcea-b93c80da6490\") " pod="openstack/ceilometer-0" Mar 10 19:10:02 crc kubenswrapper[4861]: I0310 19:10:02.575122 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a8b38f86-cca2-4d33-bcea-b93c80da6490-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"a8b38f86-cca2-4d33-bcea-b93c80da6490\") " pod="openstack/ceilometer-0" Mar 10 19:10:02 crc kubenswrapper[4861]: I0310 19:10:02.575147 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-twtwc\" (UniqueName: \"kubernetes.io/projected/6aa1372d-7e37-4d43-a96c-08fce6a5eaa1-kube-api-access-twtwc\") pod \"placement-db-sync-7g95t\" (UID: \"6aa1372d-7e37-4d43-a96c-08fce6a5eaa1\") " pod="openstack/placement-db-sync-7g95t" Mar 10 19:10:02 crc kubenswrapper[4861]: I0310 19:10:02.575357 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a8b38f86-cca2-4d33-bcea-b93c80da6490-log-httpd\") pod \"ceilometer-0\" (UID: \"a8b38f86-cca2-4d33-bcea-b93c80da6490\") " pod="openstack/ceilometer-0" Mar 10 19:10:02 crc kubenswrapper[4861]: I0310 19:10:02.575406 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7068adc7-5930-4999-99ec-eb8ced501cd2-combined-ca-bundle\") pod \"barbican-db-sync-j5zv5\" (UID: \"7068adc7-5930-4999-99ec-eb8ced501cd2\") " pod="openstack/barbican-db-sync-j5zv5" Mar 10 19:10:02 crc 
kubenswrapper[4861]: I0310 19:10:02.575441 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/7068adc7-5930-4999-99ec-eb8ced501cd2-db-sync-config-data\") pod \"barbican-db-sync-j5zv5\" (UID: \"7068adc7-5930-4999-99ec-eb8ced501cd2\") " pod="openstack/barbican-db-sync-j5zv5" Mar 10 19:10:02 crc kubenswrapper[4861]: I0310 19:10:02.575459 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5bd7n\" (UniqueName: \"kubernetes.io/projected/a8b38f86-cca2-4d33-bcea-b93c80da6490-kube-api-access-5bd7n\") pod \"ceilometer-0\" (UID: \"a8b38f86-cca2-4d33-bcea-b93c80da6490\") " pod="openstack/ceilometer-0" Mar 10 19:10:02 crc kubenswrapper[4861]: I0310 19:10:02.575481 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9fb93010-cc9e-4984-91db-e36b6605fa2b-ovsdbserver-sb\") pod \"dnsmasq-dns-ccd7c9f8f-vd6rh\" (UID: \"9fb93010-cc9e-4984-91db-e36b6605fa2b\") " pod="openstack/dnsmasq-dns-ccd7c9f8f-vd6rh" Mar 10 19:10:02 crc kubenswrapper[4861]: I0310 19:10:02.575526 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9fb93010-cc9e-4984-91db-e36b6605fa2b-dns-svc\") pod \"dnsmasq-dns-ccd7c9f8f-vd6rh\" (UID: \"9fb93010-cc9e-4984-91db-e36b6605fa2b\") " pod="openstack/dnsmasq-dns-ccd7c9f8f-vd6rh" Mar 10 19:10:02 crc kubenswrapper[4861]: I0310 19:10:02.575588 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a8b38f86-cca2-4d33-bcea-b93c80da6490-run-httpd\") pod \"ceilometer-0\" (UID: \"a8b38f86-cca2-4d33-bcea-b93c80da6490\") " pod="openstack/ceilometer-0" Mar 10 19:10:02 crc kubenswrapper[4861]: I0310 19:10:02.575613 4861 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6aa1372d-7e37-4d43-a96c-08fce6a5eaa1-scripts\") pod \"placement-db-sync-7g95t\" (UID: \"6aa1372d-7e37-4d43-a96c-08fce6a5eaa1\") " pod="openstack/placement-db-sync-7g95t" Mar 10 19:10:02 crc kubenswrapper[4861]: I0310 19:10:02.575654 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nwh2h\" (UniqueName: \"kubernetes.io/projected/7068adc7-5930-4999-99ec-eb8ced501cd2-kube-api-access-nwh2h\") pod \"barbican-db-sync-j5zv5\" (UID: \"7068adc7-5930-4999-99ec-eb8ced501cd2\") " pod="openstack/barbican-db-sync-j5zv5" Mar 10 19:10:02 crc kubenswrapper[4861]: I0310 19:10:02.575673 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6aa1372d-7e37-4d43-a96c-08fce6a5eaa1-logs\") pod \"placement-db-sync-7g95t\" (UID: \"6aa1372d-7e37-4d43-a96c-08fce6a5eaa1\") " pod="openstack/placement-db-sync-7g95t" Mar 10 19:10:02 crc kubenswrapper[4861]: I0310 19:10:02.579153 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/7068adc7-5930-4999-99ec-eb8ced501cd2-db-sync-config-data\") pod \"barbican-db-sync-j5zv5\" (UID: \"7068adc7-5930-4999-99ec-eb8ced501cd2\") " pod="openstack/barbican-db-sync-j5zv5" Mar 10 19:10:02 crc kubenswrapper[4861]: I0310 19:10:02.580270 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7068adc7-5930-4999-99ec-eb8ced501cd2-combined-ca-bundle\") pod \"barbican-db-sync-j5zv5\" (UID: \"7068adc7-5930-4999-99ec-eb8ced501cd2\") " pod="openstack/barbican-db-sync-j5zv5" Mar 10 19:10:02 crc kubenswrapper[4861]: I0310 19:10:02.605818 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nwh2h\" 
(UniqueName: \"kubernetes.io/projected/7068adc7-5930-4999-99ec-eb8ced501cd2-kube-api-access-nwh2h\") pod \"barbican-db-sync-j5zv5\" (UID: \"7068adc7-5930-4999-99ec-eb8ced501cd2\") " pod="openstack/barbican-db-sync-j5zv5" Mar 10 19:10:02 crc kubenswrapper[4861]: I0310 19:10:02.677234 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6fhgw\" (UniqueName: \"kubernetes.io/projected/9fb93010-cc9e-4984-91db-e36b6605fa2b-kube-api-access-6fhgw\") pod \"dnsmasq-dns-ccd7c9f8f-vd6rh\" (UID: \"9fb93010-cc9e-4984-91db-e36b6605fa2b\") " pod="openstack/dnsmasq-dns-ccd7c9f8f-vd6rh" Mar 10 19:10:02 crc kubenswrapper[4861]: I0310 19:10:02.677523 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6aa1372d-7e37-4d43-a96c-08fce6a5eaa1-config-data\") pod \"placement-db-sync-7g95t\" (UID: \"6aa1372d-7e37-4d43-a96c-08fce6a5eaa1\") " pod="openstack/placement-db-sync-7g95t" Mar 10 19:10:02 crc kubenswrapper[4861]: I0310 19:10:02.677545 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9fb93010-cc9e-4984-91db-e36b6605fa2b-ovsdbserver-nb\") pod \"dnsmasq-dns-ccd7c9f8f-vd6rh\" (UID: \"9fb93010-cc9e-4984-91db-e36b6605fa2b\") " pod="openstack/dnsmasq-dns-ccd7c9f8f-vd6rh" Mar 10 19:10:02 crc kubenswrapper[4861]: I0310 19:10:02.677560 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6aa1372d-7e37-4d43-a96c-08fce6a5eaa1-combined-ca-bundle\") pod \"placement-db-sync-7g95t\" (UID: \"6aa1372d-7e37-4d43-a96c-08fce6a5eaa1\") " pod="openstack/placement-db-sync-7g95t" Mar 10 19:10:02 crc kubenswrapper[4861]: I0310 19:10:02.678221 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/a8b38f86-cca2-4d33-bcea-b93c80da6490-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"a8b38f86-cca2-4d33-bcea-b93c80da6490\") " pod="openstack/ceilometer-0" Mar 10 19:10:02 crc kubenswrapper[4861]: I0310 19:10:02.678268 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9fb93010-cc9e-4984-91db-e36b6605fa2b-dns-swift-storage-0\") pod \"dnsmasq-dns-ccd7c9f8f-vd6rh\" (UID: \"9fb93010-cc9e-4984-91db-e36b6605fa2b\") " pod="openstack/dnsmasq-dns-ccd7c9f8f-vd6rh" Mar 10 19:10:02 crc kubenswrapper[4861]: I0310 19:10:02.678309 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9fb93010-cc9e-4984-91db-e36b6605fa2b-ovsdbserver-nb\") pod \"dnsmasq-dns-ccd7c9f8f-vd6rh\" (UID: \"9fb93010-cc9e-4984-91db-e36b6605fa2b\") " pod="openstack/dnsmasq-dns-ccd7c9f8f-vd6rh" Mar 10 19:10:02 crc kubenswrapper[4861]: I0310 19:10:02.678652 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9fb93010-cc9e-4984-91db-e36b6605fa2b-config\") pod \"dnsmasq-dns-ccd7c9f8f-vd6rh\" (UID: \"9fb93010-cc9e-4984-91db-e36b6605fa2b\") " pod="openstack/dnsmasq-dns-ccd7c9f8f-vd6rh" Mar 10 19:10:02 crc kubenswrapper[4861]: I0310 19:10:02.678681 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a8b38f86-cca2-4d33-bcea-b93c80da6490-config-data\") pod \"ceilometer-0\" (UID: \"a8b38f86-cca2-4d33-bcea-b93c80da6490\") " pod="openstack/ceilometer-0" Mar 10 19:10:02 crc kubenswrapper[4861]: I0310 19:10:02.678720 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a8b38f86-cca2-4d33-bcea-b93c80da6490-scripts\") pod \"ceilometer-0\" (UID: \"a8b38f86-cca2-4d33-bcea-b93c80da6490\") " 
pod="openstack/ceilometer-0" Mar 10 19:10:02 crc kubenswrapper[4861]: I0310 19:10:02.678740 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a8b38f86-cca2-4d33-bcea-b93c80da6490-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"a8b38f86-cca2-4d33-bcea-b93c80da6490\") " pod="openstack/ceilometer-0" Mar 10 19:10:02 crc kubenswrapper[4861]: I0310 19:10:02.678758 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-twtwc\" (UniqueName: \"kubernetes.io/projected/6aa1372d-7e37-4d43-a96c-08fce6a5eaa1-kube-api-access-twtwc\") pod \"placement-db-sync-7g95t\" (UID: \"6aa1372d-7e37-4d43-a96c-08fce6a5eaa1\") " pod="openstack/placement-db-sync-7g95t" Mar 10 19:10:02 crc kubenswrapper[4861]: I0310 19:10:02.678790 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a8b38f86-cca2-4d33-bcea-b93c80da6490-log-httpd\") pod \"ceilometer-0\" (UID: \"a8b38f86-cca2-4d33-bcea-b93c80da6490\") " pod="openstack/ceilometer-0" Mar 10 19:10:02 crc kubenswrapper[4861]: I0310 19:10:02.678850 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5bd7n\" (UniqueName: \"kubernetes.io/projected/a8b38f86-cca2-4d33-bcea-b93c80da6490-kube-api-access-5bd7n\") pod \"ceilometer-0\" (UID: \"a8b38f86-cca2-4d33-bcea-b93c80da6490\") " pod="openstack/ceilometer-0" Mar 10 19:10:02 crc kubenswrapper[4861]: I0310 19:10:02.678867 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9fb93010-cc9e-4984-91db-e36b6605fa2b-ovsdbserver-sb\") pod \"dnsmasq-dns-ccd7c9f8f-vd6rh\" (UID: \"9fb93010-cc9e-4984-91db-e36b6605fa2b\") " pod="openstack/dnsmasq-dns-ccd7c9f8f-vd6rh" Mar 10 19:10:02 crc kubenswrapper[4861]: I0310 19:10:02.678879 4861 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9fb93010-cc9e-4984-91db-e36b6605fa2b-dns-swift-storage-0\") pod \"dnsmasq-dns-ccd7c9f8f-vd6rh\" (UID: \"9fb93010-cc9e-4984-91db-e36b6605fa2b\") " pod="openstack/dnsmasq-dns-ccd7c9f8f-vd6rh" Mar 10 19:10:02 crc kubenswrapper[4861]: I0310 19:10:02.678891 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9fb93010-cc9e-4984-91db-e36b6605fa2b-dns-svc\") pod \"dnsmasq-dns-ccd7c9f8f-vd6rh\" (UID: \"9fb93010-cc9e-4984-91db-e36b6605fa2b\") " pod="openstack/dnsmasq-dns-ccd7c9f8f-vd6rh" Mar 10 19:10:02 crc kubenswrapper[4861]: I0310 19:10:02.678941 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a8b38f86-cca2-4d33-bcea-b93c80da6490-run-httpd\") pod \"ceilometer-0\" (UID: \"a8b38f86-cca2-4d33-bcea-b93c80da6490\") " pod="openstack/ceilometer-0" Mar 10 19:10:02 crc kubenswrapper[4861]: I0310 19:10:02.679640 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a8b38f86-cca2-4d33-bcea-b93c80da6490-log-httpd\") pod \"ceilometer-0\" (UID: \"a8b38f86-cca2-4d33-bcea-b93c80da6490\") " pod="openstack/ceilometer-0" Mar 10 19:10:02 crc kubenswrapper[4861]: I0310 19:10:02.679742 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9fb93010-cc9e-4984-91db-e36b6605fa2b-config\") pod \"dnsmasq-dns-ccd7c9f8f-vd6rh\" (UID: \"9fb93010-cc9e-4984-91db-e36b6605fa2b\") " pod="openstack/dnsmasq-dns-ccd7c9f8f-vd6rh" Mar 10 19:10:02 crc kubenswrapper[4861]: I0310 19:10:02.680219 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a8b38f86-cca2-4d33-bcea-b93c80da6490-run-httpd\") pod \"ceilometer-0\" (UID: 
\"a8b38f86-cca2-4d33-bcea-b93c80da6490\") " pod="openstack/ceilometer-0" Mar 10 19:10:02 crc kubenswrapper[4861]: I0310 19:10:02.680507 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9fb93010-cc9e-4984-91db-e36b6605fa2b-dns-svc\") pod \"dnsmasq-dns-ccd7c9f8f-vd6rh\" (UID: \"9fb93010-cc9e-4984-91db-e36b6605fa2b\") " pod="openstack/dnsmasq-dns-ccd7c9f8f-vd6rh" Mar 10 19:10:02 crc kubenswrapper[4861]: I0310 19:10:02.681113 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9fb93010-cc9e-4984-91db-e36b6605fa2b-ovsdbserver-sb\") pod \"dnsmasq-dns-ccd7c9f8f-vd6rh\" (UID: \"9fb93010-cc9e-4984-91db-e36b6605fa2b\") " pod="openstack/dnsmasq-dns-ccd7c9f8f-vd6rh" Mar 10 19:10:02 crc kubenswrapper[4861]: I0310 19:10:02.679455 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6aa1372d-7e37-4d43-a96c-08fce6a5eaa1-scripts\") pod \"placement-db-sync-7g95t\" (UID: \"6aa1372d-7e37-4d43-a96c-08fce6a5eaa1\") " pod="openstack/placement-db-sync-7g95t" Mar 10 19:10:02 crc kubenswrapper[4861]: I0310 19:10:02.681237 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6aa1372d-7e37-4d43-a96c-08fce6a5eaa1-logs\") pod \"placement-db-sync-7g95t\" (UID: \"6aa1372d-7e37-4d43-a96c-08fce6a5eaa1\") " pod="openstack/placement-db-sync-7g95t" Mar 10 19:10:02 crc kubenswrapper[4861]: I0310 19:10:02.681699 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6aa1372d-7e37-4d43-a96c-08fce6a5eaa1-logs\") pod \"placement-db-sync-7g95t\" (UID: \"6aa1372d-7e37-4d43-a96c-08fce6a5eaa1\") " pod="openstack/placement-db-sync-7g95t" Mar 10 19:10:02 crc kubenswrapper[4861]: I0310 19:10:02.690095 4861 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a8b38f86-cca2-4d33-bcea-b93c80da6490-scripts\") pod \"ceilometer-0\" (UID: \"a8b38f86-cca2-4d33-bcea-b93c80da6490\") " pod="openstack/ceilometer-0" Mar 10 19:10:02 crc kubenswrapper[4861]: I0310 19:10:02.691339 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a8b38f86-cca2-4d33-bcea-b93c80da6490-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"a8b38f86-cca2-4d33-bcea-b93c80da6490\") " pod="openstack/ceilometer-0" Mar 10 19:10:02 crc kubenswrapper[4861]: I0310 19:10:02.691643 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6aa1372d-7e37-4d43-a96c-08fce6a5eaa1-config-data\") pod \"placement-db-sync-7g95t\" (UID: \"6aa1372d-7e37-4d43-a96c-08fce6a5eaa1\") " pod="openstack/placement-db-sync-7g95t" Mar 10 19:10:02 crc kubenswrapper[4861]: I0310 19:10:02.692798 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6aa1372d-7e37-4d43-a96c-08fce6a5eaa1-combined-ca-bundle\") pod \"placement-db-sync-7g95t\" (UID: \"6aa1372d-7e37-4d43-a96c-08fce6a5eaa1\") " pod="openstack/placement-db-sync-7g95t" Mar 10 19:10:02 crc kubenswrapper[4861]: I0310 19:10:02.693031 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a8b38f86-cca2-4d33-bcea-b93c80da6490-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"a8b38f86-cca2-4d33-bcea-b93c80da6490\") " pod="openstack/ceilometer-0" Mar 10 19:10:02 crc kubenswrapper[4861]: I0310 19:10:02.702381 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-twtwc\" (UniqueName: \"kubernetes.io/projected/6aa1372d-7e37-4d43-a96c-08fce6a5eaa1-kube-api-access-twtwc\") pod \"placement-db-sync-7g95t\" (UID: 
\"6aa1372d-7e37-4d43-a96c-08fce6a5eaa1\") " pod="openstack/placement-db-sync-7g95t" Mar 10 19:10:02 crc kubenswrapper[4861]: I0310 19:10:02.702438 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6aa1372d-7e37-4d43-a96c-08fce6a5eaa1-scripts\") pod \"placement-db-sync-7g95t\" (UID: \"6aa1372d-7e37-4d43-a96c-08fce6a5eaa1\") " pod="openstack/placement-db-sync-7g95t" Mar 10 19:10:02 crc kubenswrapper[4861]: I0310 19:10:02.702567 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6fhgw\" (UniqueName: \"kubernetes.io/projected/9fb93010-cc9e-4984-91db-e36b6605fa2b-kube-api-access-6fhgw\") pod \"dnsmasq-dns-ccd7c9f8f-vd6rh\" (UID: \"9fb93010-cc9e-4984-91db-e36b6605fa2b\") " pod="openstack/dnsmasq-dns-ccd7c9f8f-vd6rh" Mar 10 19:10:02 crc kubenswrapper[4861]: I0310 19:10:02.702904 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a8b38f86-cca2-4d33-bcea-b93c80da6490-config-data\") pod \"ceilometer-0\" (UID: \"a8b38f86-cca2-4d33-bcea-b93c80da6490\") " pod="openstack/ceilometer-0" Mar 10 19:10:02 crc kubenswrapper[4861]: I0310 19:10:02.703000 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5bd7n\" (UniqueName: \"kubernetes.io/projected/a8b38f86-cca2-4d33-bcea-b93c80da6490-kube-api-access-5bd7n\") pod \"ceilometer-0\" (UID: \"a8b38f86-cca2-4d33-bcea-b93c80da6490\") " pod="openstack/ceilometer-0" Mar 10 19:10:02 crc kubenswrapper[4861]: I0310 19:10:02.715933 4861 generic.go:334] "Generic (PLEG): container finished" podID="dca977fd-c0e9-4ef9-89ab-f2cd7a9505a9" containerID="43f123c0581adc24eaa058b8a1d394322a8ea88fb61d0307abdf6ca7abcab134" exitCode=0 Mar 10 19:10:02 crc kubenswrapper[4861]: I0310 19:10:02.715976 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-75c886f8b5-v4qlh" 
event={"ID":"dca977fd-c0e9-4ef9-89ab-f2cd7a9505a9","Type":"ContainerDied","Data":"43f123c0581adc24eaa058b8a1d394322a8ea88fb61d0307abdf6ca7abcab134"} Mar 10 19:10:02 crc kubenswrapper[4861]: I0310 19:10:02.716002 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-75c886f8b5-v4qlh" event={"ID":"dca977fd-c0e9-4ef9-89ab-f2cd7a9505a9","Type":"ContainerDied","Data":"b4f9bdca3cd506651f665d3f6f1452350dcb624f268cba2a53db829a553e5906"} Mar 10 19:10:02 crc kubenswrapper[4861]: I0310 19:10:02.716012 4861 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b4f9bdca3cd506651f665d3f6f1452350dcb624f268cba2a53db829a553e5906" Mar 10 19:10:02 crc kubenswrapper[4861]: I0310 19:10:02.716576 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-7g95t" Mar 10 19:10:02 crc kubenswrapper[4861]: I0310 19:10:02.721288 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-75c886f8b5-v4qlh" Mar 10 19:10:02 crc kubenswrapper[4861]: I0310 19:10:02.828262 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-2x6cv" Mar 10 19:10:02 crc kubenswrapper[4861]: I0310 19:10:02.885329 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/dca977fd-c0e9-4ef9-89ab-f2cd7a9505a9-dns-swift-storage-0\") pod \"dca977fd-c0e9-4ef9-89ab-f2cd7a9505a9\" (UID: \"dca977fd-c0e9-4ef9-89ab-f2cd7a9505a9\") " Mar 10 19:10:02 crc kubenswrapper[4861]: I0310 19:10:02.885449 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cbhnv\" (UniqueName: \"kubernetes.io/projected/dca977fd-c0e9-4ef9-89ab-f2cd7a9505a9-kube-api-access-cbhnv\") pod \"dca977fd-c0e9-4ef9-89ab-f2cd7a9505a9\" (UID: \"dca977fd-c0e9-4ef9-89ab-f2cd7a9505a9\") " Mar 10 19:10:02 crc kubenswrapper[4861]: I0310 19:10:02.885504 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dca977fd-c0e9-4ef9-89ab-f2cd7a9505a9-config\") pod \"dca977fd-c0e9-4ef9-89ab-f2cd7a9505a9\" (UID: \"dca977fd-c0e9-4ef9-89ab-f2cd7a9505a9\") " Mar 10 19:10:02 crc kubenswrapper[4861]: I0310 19:10:02.885540 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/dca977fd-c0e9-4ef9-89ab-f2cd7a9505a9-dns-svc\") pod \"dca977fd-c0e9-4ef9-89ab-f2cd7a9505a9\" (UID: \"dca977fd-c0e9-4ef9-89ab-f2cd7a9505a9\") " Mar 10 19:10:02 crc kubenswrapper[4861]: I0310 19:10:02.885610 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/dca977fd-c0e9-4ef9-89ab-f2cd7a9505a9-ovsdbserver-nb\") pod \"dca977fd-c0e9-4ef9-89ab-f2cd7a9505a9\" (UID: \"dca977fd-c0e9-4ef9-89ab-f2cd7a9505a9\") " Mar 10 19:10:02 crc kubenswrapper[4861]: I0310 19:10:02.885651 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" 
(UniqueName: \"kubernetes.io/configmap/dca977fd-c0e9-4ef9-89ab-f2cd7a9505a9-ovsdbserver-sb\") pod \"dca977fd-c0e9-4ef9-89ab-f2cd7a9505a9\" (UID: \"dca977fd-c0e9-4ef9-89ab-f2cd7a9505a9\") " Mar 10 19:10:02 crc kubenswrapper[4861]: I0310 19:10:02.890122 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dca977fd-c0e9-4ef9-89ab-f2cd7a9505a9-kube-api-access-cbhnv" (OuterVolumeSpecName: "kube-api-access-cbhnv") pod "dca977fd-c0e9-4ef9-89ab-f2cd7a9505a9" (UID: "dca977fd-c0e9-4ef9-89ab-f2cd7a9505a9"). InnerVolumeSpecName "kube-api-access-cbhnv". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 10 19:10:02 crc kubenswrapper[4861]: I0310 19:10:02.908437 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-j5zv5" Mar 10 19:10:02 crc kubenswrapper[4861]: I0310 19:10:02.915127 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-ccd7c9f8f-vd6rh" Mar 10 19:10:02 crc kubenswrapper[4861]: I0310 19:10:02.924346 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Mar 10 19:10:02 crc kubenswrapper[4861]: I0310 19:10:02.947857 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dca977fd-c0e9-4ef9-89ab-f2cd7a9505a9-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "dca977fd-c0e9-4ef9-89ab-f2cd7a9505a9" (UID: "dca977fd-c0e9-4ef9-89ab-f2cd7a9505a9"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 10 19:10:02 crc kubenswrapper[4861]: I0310 19:10:02.952826 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-hf9nx"] Mar 10 19:10:02 crc kubenswrapper[4861]: I0310 19:10:02.962003 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dca977fd-c0e9-4ef9-89ab-f2cd7a9505a9-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "dca977fd-c0e9-4ef9-89ab-f2cd7a9505a9" (UID: "dca977fd-c0e9-4ef9-89ab-f2cd7a9505a9"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 10 19:10:02 crc kubenswrapper[4861]: I0310 19:10:02.989412 4861 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/dca977fd-c0e9-4ef9-89ab-f2cd7a9505a9-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Mar 10 19:10:02 crc kubenswrapper[4861]: I0310 19:10:02.989637 4861 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/dca977fd-c0e9-4ef9-89ab-f2cd7a9505a9-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Mar 10 19:10:02 crc kubenswrapper[4861]: I0310 19:10:02.989725 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cbhnv\" (UniqueName: \"kubernetes.io/projected/dca977fd-c0e9-4ef9-89ab-f2cd7a9505a9-kube-api-access-cbhnv\") on node \"crc\" DevicePath \"\"" Mar 10 19:10:02 crc kubenswrapper[4861]: I0310 19:10:02.994378 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5985c59c55-pnngv"] Mar 10 19:10:03 crc kubenswrapper[4861]: I0310 19:10:02.998691 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dca977fd-c0e9-4ef9-89ab-f2cd7a9505a9-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "dca977fd-c0e9-4ef9-89ab-f2cd7a9505a9" (UID: 
"dca977fd-c0e9-4ef9-89ab-f2cd7a9505a9"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 10 19:10:03 crc kubenswrapper[4861]: I0310 19:10:03.002365 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dca977fd-c0e9-4ef9-89ab-f2cd7a9505a9-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "dca977fd-c0e9-4ef9-89ab-f2cd7a9505a9" (UID: "dca977fd-c0e9-4ef9-89ab-f2cd7a9505a9"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 10 19:10:03 crc kubenswrapper[4861]: I0310 19:10:03.011131 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dca977fd-c0e9-4ef9-89ab-f2cd7a9505a9-config" (OuterVolumeSpecName: "config") pod "dca977fd-c0e9-4ef9-89ab-f2cd7a9505a9" (UID: "dca977fd-c0e9-4ef9-89ab-f2cd7a9505a9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 10 19:10:03 crc kubenswrapper[4861]: W0310 19:10:03.016884 4861 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbc8118e6_fec9_49af_aa6f_a36aaddebdb9.slice/crio-54a9f513dc7fcc1df264f3bc4931112f1d1abfa18ad47c14e885dd77fc92642a WatchSource:0}: Error finding container 54a9f513dc7fcc1df264f3bc4931112f1d1abfa18ad47c14e885dd77fc92642a: Status 404 returned error can't find the container with id 54a9f513dc7fcc1df264f3bc4931112f1d1abfa18ad47c14e885dd77fc92642a Mar 10 19:10:03 crc kubenswrapper[4861]: I0310 19:10:03.081125 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Mar 10 19:10:03 crc kubenswrapper[4861]: E0310 19:10:03.081566 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dca977fd-c0e9-4ef9-89ab-f2cd7a9505a9" containerName="init" Mar 10 19:10:03 crc kubenswrapper[4861]: I0310 19:10:03.081594 4861 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="dca977fd-c0e9-4ef9-89ab-f2cd7a9505a9" containerName="init" Mar 10 19:10:03 crc kubenswrapper[4861]: E0310 19:10:03.081609 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dca977fd-c0e9-4ef9-89ab-f2cd7a9505a9" containerName="dnsmasq-dns" Mar 10 19:10:03 crc kubenswrapper[4861]: I0310 19:10:03.081615 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="dca977fd-c0e9-4ef9-89ab-f2cd7a9505a9" containerName="dnsmasq-dns" Mar 10 19:10:03 crc kubenswrapper[4861]: I0310 19:10:03.091848 4861 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dca977fd-c0e9-4ef9-89ab-f2cd7a9505a9-config\") on node \"crc\" DevicePath \"\"" Mar 10 19:10:03 crc kubenswrapper[4861]: I0310 19:10:03.091878 4861 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/dca977fd-c0e9-4ef9-89ab-f2cd7a9505a9-dns-svc\") on node \"crc\" DevicePath \"\"" Mar 10 19:10:03 crc kubenswrapper[4861]: I0310 19:10:03.091889 4861 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/dca977fd-c0e9-4ef9-89ab-f2cd7a9505a9-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Mar 10 19:10:03 crc kubenswrapper[4861]: I0310 19:10:03.114681 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="dca977fd-c0e9-4ef9-89ab-f2cd7a9505a9" containerName="dnsmasq-dns" Mar 10 19:10:03 crc kubenswrapper[4861]: I0310 19:10:03.136528 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Mar 10 19:10:03 crc kubenswrapper[4861]: I0310 19:10:03.145610 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-6lrgm" Mar 10 19:10:03 crc kubenswrapper[4861]: I0310 19:10:03.148185 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-snwft"] Mar 10 19:10:03 crc kubenswrapper[4861]: I0310 19:10:03.150776 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Mar 10 19:10:03 crc kubenswrapper[4861]: I0310 19:10:03.150966 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Mar 10 19:10:03 crc kubenswrapper[4861]: I0310 19:10:03.151107 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Mar 10 19:10:03 crc kubenswrapper[4861]: I0310 19:10:03.185960 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Mar 10 19:10:03 crc kubenswrapper[4861]: I0310 19:10:03.233870 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Mar 10 19:10:03 crc kubenswrapper[4861]: I0310 19:10:03.235458 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Mar 10 19:10:03 crc kubenswrapper[4861]: I0310 19:10:03.240555 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Mar 10 19:10:03 crc kubenswrapper[4861]: I0310 19:10:03.246767 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Mar 10 19:10:03 crc kubenswrapper[4861]: I0310 19:10:03.295579 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-external-api-0\" (UID: \"5d0d0c3c-4090-4dcf-81cb-06d4d0a2cb64\") " pod="openstack/glance-default-external-api-0" Mar 10 19:10:03 crc kubenswrapper[4861]: I0310 19:10:03.295641 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/5d0d0c3c-4090-4dcf-81cb-06d4d0a2cb64-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"5d0d0c3c-4090-4dcf-81cb-06d4d0a2cb64\") " pod="openstack/glance-default-external-api-0" Mar 10 19:10:03 crc kubenswrapper[4861]: I0310 19:10:03.295686 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7vmm7\" (UniqueName: \"kubernetes.io/projected/5d0d0c3c-4090-4dcf-81cb-06d4d0a2cb64-kube-api-access-7vmm7\") pod \"glance-default-external-api-0\" (UID: \"5d0d0c3c-4090-4dcf-81cb-06d4d0a2cb64\") " pod="openstack/glance-default-external-api-0" Mar 10 19:10:03 crc kubenswrapper[4861]: I0310 19:10:03.295753 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5d0d0c3c-4090-4dcf-81cb-06d4d0a2cb64-config-data\") pod \"glance-default-external-api-0\" (UID: \"5d0d0c3c-4090-4dcf-81cb-06d4d0a2cb64\") " 
pod="openstack/glance-default-external-api-0" Mar 10 19:10:03 crc kubenswrapper[4861]: I0310 19:10:03.295778 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5d0d0c3c-4090-4dcf-81cb-06d4d0a2cb64-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"5d0d0c3c-4090-4dcf-81cb-06d4d0a2cb64\") " pod="openstack/glance-default-external-api-0" Mar 10 19:10:03 crc kubenswrapper[4861]: I0310 19:10:03.295827 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5d0d0c3c-4090-4dcf-81cb-06d4d0a2cb64-scripts\") pod \"glance-default-external-api-0\" (UID: \"5d0d0c3c-4090-4dcf-81cb-06d4d0a2cb64\") " pod="openstack/glance-default-external-api-0" Mar 10 19:10:03 crc kubenswrapper[4861]: I0310 19:10:03.295845 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5d0d0c3c-4090-4dcf-81cb-06d4d0a2cb64-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"5d0d0c3c-4090-4dcf-81cb-06d4d0a2cb64\") " pod="openstack/glance-default-external-api-0" Mar 10 19:10:03 crc kubenswrapper[4861]: I0310 19:10:03.295880 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5d0d0c3c-4090-4dcf-81cb-06d4d0a2cb64-logs\") pod \"glance-default-external-api-0\" (UID: \"5d0d0c3c-4090-4dcf-81cb-06d4d0a2cb64\") " pod="openstack/glance-default-external-api-0" Mar 10 19:10:03 crc kubenswrapper[4861]: I0310 19:10:03.320784 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Mar 10 19:10:03 crc kubenswrapper[4861]: I0310 19:10:03.364917 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-7g95t"] Mar 10 19:10:03 crc 
kubenswrapper[4861]: I0310 19:10:03.398987 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-external-api-0\" (UID: \"5d0d0c3c-4090-4dcf-81cb-06d4d0a2cb64\") " pod="openstack/glance-default-external-api-0" Mar 10 19:10:03 crc kubenswrapper[4861]: I0310 19:10:03.399065 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v7nrz\" (UniqueName: \"kubernetes.io/projected/549d0570-3b33-4115-965c-155129434e7b-kube-api-access-v7nrz\") pod \"glance-default-internal-api-0\" (UID: \"549d0570-3b33-4115-965c-155129434e7b\") " pod="openstack/glance-default-internal-api-0" Mar 10 19:10:03 crc kubenswrapper[4861]: I0310 19:10:03.399107 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/5d0d0c3c-4090-4dcf-81cb-06d4d0a2cb64-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"5d0d0c3c-4090-4dcf-81cb-06d4d0a2cb64\") " pod="openstack/glance-default-external-api-0" Mar 10 19:10:03 crc kubenswrapper[4861]: I0310 19:10:03.399134 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/549d0570-3b33-4115-965c-155129434e7b-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"549d0570-3b33-4115-965c-155129434e7b\") " pod="openstack/glance-default-internal-api-0" Mar 10 19:10:03 crc kubenswrapper[4861]: I0310 19:10:03.399177 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7vmm7\" (UniqueName: \"kubernetes.io/projected/5d0d0c3c-4090-4dcf-81cb-06d4d0a2cb64-kube-api-access-7vmm7\") pod \"glance-default-external-api-0\" (UID: \"5d0d0c3c-4090-4dcf-81cb-06d4d0a2cb64\") " pod="openstack/glance-default-external-api-0" Mar 10 19:10:03 crc 
kubenswrapper[4861]: I0310 19:10:03.399198 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5d0d0c3c-4090-4dcf-81cb-06d4d0a2cb64-config-data\") pod \"glance-default-external-api-0\" (UID: \"5d0d0c3c-4090-4dcf-81cb-06d4d0a2cb64\") " pod="openstack/glance-default-external-api-0" Mar 10 19:10:03 crc kubenswrapper[4861]: I0310 19:10:03.399219 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/549d0570-3b33-4115-965c-155129434e7b-config-data\") pod \"glance-default-internal-api-0\" (UID: \"549d0570-3b33-4115-965c-155129434e7b\") " pod="openstack/glance-default-internal-api-0" Mar 10 19:10:03 crc kubenswrapper[4861]: I0310 19:10:03.399252 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5d0d0c3c-4090-4dcf-81cb-06d4d0a2cb64-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"5d0d0c3c-4090-4dcf-81cb-06d4d0a2cb64\") " pod="openstack/glance-default-external-api-0" Mar 10 19:10:03 crc kubenswrapper[4861]: I0310 19:10:03.399289 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/549d0570-3b33-4115-965c-155129434e7b-logs\") pod \"glance-default-internal-api-0\" (UID: \"549d0570-3b33-4115-965c-155129434e7b\") " pod="openstack/glance-default-internal-api-0" Mar 10 19:10:03 crc kubenswrapper[4861]: I0310 19:10:03.399323 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/549d0570-3b33-4115-965c-155129434e7b-scripts\") pod \"glance-default-internal-api-0\" (UID: \"549d0570-3b33-4115-965c-155129434e7b\") " pod="openstack/glance-default-internal-api-0" Mar 10 19:10:03 crc kubenswrapper[4861]: I0310 
19:10:03.399343 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5d0d0c3c-4090-4dcf-81cb-06d4d0a2cb64-scripts\") pod \"glance-default-external-api-0\" (UID: \"5d0d0c3c-4090-4dcf-81cb-06d4d0a2cb64\") " pod="openstack/glance-default-external-api-0" Mar 10 19:10:03 crc kubenswrapper[4861]: I0310 19:10:03.399360 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5d0d0c3c-4090-4dcf-81cb-06d4d0a2cb64-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"5d0d0c3c-4090-4dcf-81cb-06d4d0a2cb64\") " pod="openstack/glance-default-external-api-0" Mar 10 19:10:03 crc kubenswrapper[4861]: I0310 19:10:03.399376 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/549d0570-3b33-4115-965c-155129434e7b-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"549d0570-3b33-4115-965c-155129434e7b\") " pod="openstack/glance-default-internal-api-0" Mar 10 19:10:03 crc kubenswrapper[4861]: I0310 19:10:03.399414 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/549d0570-3b33-4115-965c-155129434e7b-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"549d0570-3b33-4115-965c-155129434e7b\") " pod="openstack/glance-default-internal-api-0" Mar 10 19:10:03 crc kubenswrapper[4861]: I0310 19:10:03.399434 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-internal-api-0\" (UID: \"549d0570-3b33-4115-965c-155129434e7b\") " pod="openstack/glance-default-internal-api-0" Mar 10 19:10:03 crc kubenswrapper[4861]: I0310 19:10:03.399461 4861 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5d0d0c3c-4090-4dcf-81cb-06d4d0a2cb64-logs\") pod \"glance-default-external-api-0\" (UID: \"5d0d0c3c-4090-4dcf-81cb-06d4d0a2cb64\") " pod="openstack/glance-default-external-api-0" Mar 10 19:10:03 crc kubenswrapper[4861]: I0310 19:10:03.400018 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5d0d0c3c-4090-4dcf-81cb-06d4d0a2cb64-logs\") pod \"glance-default-external-api-0\" (UID: \"5d0d0c3c-4090-4dcf-81cb-06d4d0a2cb64\") " pod="openstack/glance-default-external-api-0" Mar 10 19:10:03 crc kubenswrapper[4861]: I0310 19:10:03.400266 4861 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-external-api-0\" (UID: \"5d0d0c3c-4090-4dcf-81cb-06d4d0a2cb64\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/glance-default-external-api-0" Mar 10 19:10:03 crc kubenswrapper[4861]: I0310 19:10:03.402471 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/5d0d0c3c-4090-4dcf-81cb-06d4d0a2cb64-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"5d0d0c3c-4090-4dcf-81cb-06d4d0a2cb64\") " pod="openstack/glance-default-external-api-0" Mar 10 19:10:03 crc kubenswrapper[4861]: I0310 19:10:03.426444 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5d0d0c3c-4090-4dcf-81cb-06d4d0a2cb64-scripts\") pod \"glance-default-external-api-0\" (UID: \"5d0d0c3c-4090-4dcf-81cb-06d4d0a2cb64\") " pod="openstack/glance-default-external-api-0" Mar 10 19:10:03 crc kubenswrapper[4861]: I0310 19:10:03.427627 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/5d0d0c3c-4090-4dcf-81cb-06d4d0a2cb64-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"5d0d0c3c-4090-4dcf-81cb-06d4d0a2cb64\") " pod="openstack/glance-default-external-api-0" Mar 10 19:10:03 crc kubenswrapper[4861]: I0310 19:10:03.431003 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5d0d0c3c-4090-4dcf-81cb-06d4d0a2cb64-config-data\") pod \"glance-default-external-api-0\" (UID: \"5d0d0c3c-4090-4dcf-81cb-06d4d0a2cb64\") " pod="openstack/glance-default-external-api-0" Mar 10 19:10:03 crc kubenswrapper[4861]: I0310 19:10:03.433375 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5d0d0c3c-4090-4dcf-81cb-06d4d0a2cb64-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"5d0d0c3c-4090-4dcf-81cb-06d4d0a2cb64\") " pod="openstack/glance-default-external-api-0" Mar 10 19:10:03 crc kubenswrapper[4861]: I0310 19:10:03.441519 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7vmm7\" (UniqueName: \"kubernetes.io/projected/5d0d0c3c-4090-4dcf-81cb-06d4d0a2cb64-kube-api-access-7vmm7\") pod \"glance-default-external-api-0\" (UID: \"5d0d0c3c-4090-4dcf-81cb-06d4d0a2cb64\") " pod="openstack/glance-default-external-api-0" Mar 10 19:10:03 crc kubenswrapper[4861]: I0310 19:10:03.460000 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-2x6cv"] Mar 10 19:10:03 crc kubenswrapper[4861]: I0310 19:10:03.500659 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/549d0570-3b33-4115-965c-155129434e7b-logs\") pod \"glance-default-internal-api-0\" (UID: \"549d0570-3b33-4115-965c-155129434e7b\") " pod="openstack/glance-default-internal-api-0" Mar 10 19:10:03 crc kubenswrapper[4861]: I0310 19:10:03.500701 4861 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/549d0570-3b33-4115-965c-155129434e7b-scripts\") pod \"glance-default-internal-api-0\" (UID: \"549d0570-3b33-4115-965c-155129434e7b\") " pod="openstack/glance-default-internal-api-0" Mar 10 19:10:03 crc kubenswrapper[4861]: I0310 19:10:03.500740 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/549d0570-3b33-4115-965c-155129434e7b-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"549d0570-3b33-4115-965c-155129434e7b\") " pod="openstack/glance-default-internal-api-0" Mar 10 19:10:03 crc kubenswrapper[4861]: I0310 19:10:03.500764 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/549d0570-3b33-4115-965c-155129434e7b-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"549d0570-3b33-4115-965c-155129434e7b\") " pod="openstack/glance-default-internal-api-0" Mar 10 19:10:03 crc kubenswrapper[4861]: I0310 19:10:03.500784 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-internal-api-0\" (UID: \"549d0570-3b33-4115-965c-155129434e7b\") " pod="openstack/glance-default-internal-api-0" Mar 10 19:10:03 crc kubenswrapper[4861]: I0310 19:10:03.500863 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v7nrz\" (UniqueName: \"kubernetes.io/projected/549d0570-3b33-4115-965c-155129434e7b-kube-api-access-v7nrz\") pod \"glance-default-internal-api-0\" (UID: \"549d0570-3b33-4115-965c-155129434e7b\") " pod="openstack/glance-default-internal-api-0" Mar 10 19:10:03 crc kubenswrapper[4861]: I0310 19:10:03.500889 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/549d0570-3b33-4115-965c-155129434e7b-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"549d0570-3b33-4115-965c-155129434e7b\") " pod="openstack/glance-default-internal-api-0" Mar 10 19:10:03 crc kubenswrapper[4861]: I0310 19:10:03.500920 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/549d0570-3b33-4115-965c-155129434e7b-config-data\") pod \"glance-default-internal-api-0\" (UID: \"549d0570-3b33-4115-965c-155129434e7b\") " pod="openstack/glance-default-internal-api-0" Mar 10 19:10:03 crc kubenswrapper[4861]: I0310 19:10:03.502897 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/549d0570-3b33-4115-965c-155129434e7b-logs\") pod \"glance-default-internal-api-0\" (UID: \"549d0570-3b33-4115-965c-155129434e7b\") " pod="openstack/glance-default-internal-api-0" Mar 10 19:10:03 crc kubenswrapper[4861]: I0310 19:10:03.504076 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/549d0570-3b33-4115-965c-155129434e7b-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"549d0570-3b33-4115-965c-155129434e7b\") " pod="openstack/glance-default-internal-api-0" Mar 10 19:10:03 crc kubenswrapper[4861]: I0310 19:10:03.504552 4861 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-internal-api-0\" (UID: \"549d0570-3b33-4115-965c-155129434e7b\") device mount path \"/mnt/openstack/pv10\"" pod="openstack/glance-default-internal-api-0" Mar 10 19:10:03 crc kubenswrapper[4861]: I0310 19:10:03.513886 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/549d0570-3b33-4115-965c-155129434e7b-scripts\") pod 
\"glance-default-internal-api-0\" (UID: \"549d0570-3b33-4115-965c-155129434e7b\") " pod="openstack/glance-default-internal-api-0" Mar 10 19:10:03 crc kubenswrapper[4861]: I0310 19:10:03.516297 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/549d0570-3b33-4115-965c-155129434e7b-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"549d0570-3b33-4115-965c-155129434e7b\") " pod="openstack/glance-default-internal-api-0" Mar 10 19:10:03 crc kubenswrapper[4861]: I0310 19:10:03.518800 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/549d0570-3b33-4115-965c-155129434e7b-config-data\") pod \"glance-default-internal-api-0\" (UID: \"549d0570-3b33-4115-965c-155129434e7b\") " pod="openstack/glance-default-internal-api-0" Mar 10 19:10:03 crc kubenswrapper[4861]: I0310 19:10:03.520151 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v7nrz\" (UniqueName: \"kubernetes.io/projected/549d0570-3b33-4115-965c-155129434e7b-kube-api-access-v7nrz\") pod \"glance-default-internal-api-0\" (UID: \"549d0570-3b33-4115-965c-155129434e7b\") " pod="openstack/glance-default-internal-api-0" Mar 10 19:10:03 crc kubenswrapper[4861]: I0310 19:10:03.520881 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/549d0570-3b33-4115-965c-155129434e7b-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"549d0570-3b33-4115-965c-155129434e7b\") " pod="openstack/glance-default-internal-api-0" Mar 10 19:10:03 crc kubenswrapper[4861]: W0310 19:10:03.526878 4861 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6369ade3_a8af_44b7_94be_736523f99512.slice/crio-cf9e6865dd7d0daa84ec837f32ef1fa23a5ef916ee9e8a100f0f617ecef92a12 
WatchSource:0}: Error finding container cf9e6865dd7d0daa84ec837f32ef1fa23a5ef916ee9e8a100f0f617ecef92a12: Status 404 returned error can't find the container with id cf9e6865dd7d0daa84ec837f32ef1fa23a5ef916ee9e8a100f0f617ecef92a12 Mar 10 19:10:03 crc kubenswrapper[4861]: I0310 19:10:03.530096 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-external-api-0\" (UID: \"5d0d0c3c-4090-4dcf-81cb-06d4d0a2cb64\") " pod="openstack/glance-default-external-api-0" Mar 10 19:10:03 crc kubenswrapper[4861]: I0310 19:10:03.548102 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-internal-api-0\" (UID: \"549d0570-3b33-4115-965c-155129434e7b\") " pod="openstack/glance-default-internal-api-0" Mar 10 19:10:03 crc kubenswrapper[4861]: I0310 19:10:03.593330 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Mar 10 19:10:03 crc kubenswrapper[4861]: I0310 19:10:03.685233 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-ccd7c9f8f-vd6rh"] Mar 10 19:10:03 crc kubenswrapper[4861]: W0310 19:10:03.712908 4861 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7068adc7_5930_4999_99ec_eb8ced501cd2.slice/crio-33e5dbddd567946a0259b77e2d1f7b110a2f6e79bfaccca43db21d44e6ba2ef7 WatchSource:0}: Error finding container 33e5dbddd567946a0259b77e2d1f7b110a2f6e79bfaccca43db21d44e6ba2ef7: Status 404 returned error can't find the container with id 33e5dbddd567946a0259b77e2d1f7b110a2f6e79bfaccca43db21d44e6ba2ef7 Mar 10 19:10:03 crc kubenswrapper[4861]: I0310 19:10:03.717767 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-j5zv5"] Mar 10 19:10:03 crc kubenswrapper[4861]: I0310 19:10:03.751986 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-j5zv5" event={"ID":"7068adc7-5930-4999-99ec-eb8ced501cd2","Type":"ContainerStarted","Data":"33e5dbddd567946a0259b77e2d1f7b110a2f6e79bfaccca43db21d44e6ba2ef7"} Mar 10 19:10:03 crc kubenswrapper[4861]: I0310 19:10:03.755800 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-2x6cv" event={"ID":"6369ade3-a8af-44b7-94be-736523f99512","Type":"ContainerStarted","Data":"cf9e6865dd7d0daa84ec837f32ef1fa23a5ef916ee9e8a100f0f617ecef92a12"} Mar 10 19:10:03 crc kubenswrapper[4861]: I0310 19:10:03.757970 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-hf9nx" event={"ID":"5686eba7-5925-4ca9-9530-5f5a152c0eb0","Type":"ContainerStarted","Data":"06d0f6cf0f2d406fc184a0557e435f68af1044052fead949e95e45b22d9abb5e"} Mar 10 19:10:03 crc kubenswrapper[4861]: I0310 19:10:03.758016 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/keystone-bootstrap-hf9nx" event={"ID":"5686eba7-5925-4ca9-9530-5f5a152c0eb0","Type":"ContainerStarted","Data":"6caf801fea4f2b363c097797ba8c640dac9015213c155667798dd848f0871188"} Mar 10 19:10:03 crc kubenswrapper[4861]: I0310 19:10:03.772857 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Mar 10 19:10:03 crc kubenswrapper[4861]: I0310 19:10:03.782541 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-snwft" event={"ID":"33d24034-d5c6-486f-a637-23724e8b5225","Type":"ContainerStarted","Data":"bfe94186227b0b9e8bc7a1cfe95e79e6ca9f193df9adf958d4c7dce4f1d7a079"} Mar 10 19:10:03 crc kubenswrapper[4861]: I0310 19:10:03.788522 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5985c59c55-pnngv" event={"ID":"bc8118e6-fec9-49af-aa6f-a36aaddebdb9","Type":"ContainerStarted","Data":"54a9f513dc7fcc1df264f3bc4931112f1d1abfa18ad47c14e885dd77fc92642a"} Mar 10 19:10:03 crc kubenswrapper[4861]: I0310 19:10:03.796902 4861 generic.go:334] "Generic (PLEG): container finished" podID="4d9397cb-c907-4d9f-ae4c-3f05de941f27" containerID="cb2314e50a6b4f63e3306cb1d3eefcb1635b36a5b5a0d73ed50ada3f1b695028" exitCode=0 Mar 10 19:10:03 crc kubenswrapper[4861]: I0310 19:10:03.797081 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29552830-7hhcb" event={"ID":"4d9397cb-c907-4d9f-ae4c-3f05de941f27","Type":"ContainerDied","Data":"cb2314e50a6b4f63e3306cb1d3eefcb1635b36a5b5a0d73ed50ada3f1b695028"} Mar 10 19:10:03 crc kubenswrapper[4861]: I0310 19:10:03.797098 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-hf9nx" podStartSLOduration=2.797073026 podStartE2EDuration="2.797073026s" podCreationTimestamp="2026-03-10 19:10:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-03-10 19:10:03.782898542 +0000 UTC m=+1347.546334502" watchObservedRunningTime="2026-03-10 19:10:03.797073026 +0000 UTC m=+1347.560508986" Mar 10 19:10:03 crc kubenswrapper[4861]: I0310 19:10:03.802634 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-7g95t" event={"ID":"6aa1372d-7e37-4d43-a96c-08fce6a5eaa1","Type":"ContainerStarted","Data":"f74d1b927d23a1c7bb74acb07899fc2d918a8bbdd8cb562982fe8632e0193636"} Mar 10 19:10:03 crc kubenswrapper[4861]: I0310 19:10:03.804173 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-75c886f8b5-v4qlh" Mar 10 19:10:03 crc kubenswrapper[4861]: I0310 19:10:03.804380 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-ccd7c9f8f-vd6rh" event={"ID":"9fb93010-cc9e-4984-91db-e36b6605fa2b","Type":"ContainerStarted","Data":"6a1c322b259c6039585921b7d5d4af02af93029c4416ccd693bfa3a7a49ed180"} Mar 10 19:10:03 crc kubenswrapper[4861]: I0310 19:10:03.864556 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-75c886f8b5-v4qlh"] Mar 10 19:10:03 crc kubenswrapper[4861]: I0310 19:10:03.868512 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-75c886f8b5-v4qlh"] Mar 10 19:10:03 crc kubenswrapper[4861]: I0310 19:10:03.920277 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Mar 10 19:10:04 crc kubenswrapper[4861]: I0310 19:10:04.189487 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Mar 10 19:10:04 crc kubenswrapper[4861]: I0310 19:10:04.784612 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Mar 10 19:10:04 crc kubenswrapper[4861]: I0310 19:10:04.813848 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" 
event={"ID":"549d0570-3b33-4115-965c-155129434e7b","Type":"ContainerStarted","Data":"a102b62364d264e275abf34cfeb0ad59ac4586f6d4a2a9cf6078842b3fe34dac"} Mar 10 19:10:04 crc kubenswrapper[4861]: I0310 19:10:04.813914 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"549d0570-3b33-4115-965c-155129434e7b","Type":"ContainerStarted","Data":"e68692424a58eb57128483db63e5326a4f5ce6653fd335e8f593e0dc8b44ef21"} Mar 10 19:10:04 crc kubenswrapper[4861]: I0310 19:10:04.819236 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-snwft" event={"ID":"33d24034-d5c6-486f-a637-23724e8b5225","Type":"ContainerStarted","Data":"eed081b92c9466dcc219501d89f20fbe1648f53ae2509508a4d353b88a2bc628"} Mar 10 19:10:04 crc kubenswrapper[4861]: I0310 19:10:04.822494 4861 generic.go:334] "Generic (PLEG): container finished" podID="bc8118e6-fec9-49af-aa6f-a36aaddebdb9" containerID="1b60ab3b4cc70f877bdfbdcf1ca53cb8e2aa4fbd5628a3d0b85b785c45ea9887" exitCode=0 Mar 10 19:10:04 crc kubenswrapper[4861]: I0310 19:10:04.823755 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5985c59c55-pnngv" event={"ID":"bc8118e6-fec9-49af-aa6f-a36aaddebdb9","Type":"ContainerDied","Data":"1b60ab3b4cc70f877bdfbdcf1ca53cb8e2aa4fbd5628a3d0b85b785c45ea9887"} Mar 10 19:10:04 crc kubenswrapper[4861]: I0310 19:10:04.848436 4861 generic.go:334] "Generic (PLEG): container finished" podID="9fb93010-cc9e-4984-91db-e36b6605fa2b" containerID="1df142d4a38b388c743082489dffaf4704ad052333a1c4a6da906b53a3f86541" exitCode=0 Mar 10 19:10:04 crc kubenswrapper[4861]: I0310 19:10:04.848696 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-ccd7c9f8f-vd6rh" event={"ID":"9fb93010-cc9e-4984-91db-e36b6605fa2b","Type":"ContainerDied","Data":"1df142d4a38b388c743082489dffaf4704ad052333a1c4a6da906b53a3f86541"} Mar 10 19:10:04 crc kubenswrapper[4861]: I0310 19:10:04.850485 4861 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a8b38f86-cca2-4d33-bcea-b93c80da6490","Type":"ContainerStarted","Data":"aac59d8ed9f3f456c8fc0f9a7f4f667d8c750e3c0ab744c4db54c987603dc388"} Mar 10 19:10:04 crc kubenswrapper[4861]: I0310 19:10:04.870784 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-sync-snwft" podStartSLOduration=2.870765592 podStartE2EDuration="2.870765592s" podCreationTimestamp="2026-03-10 19:10:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-10 19:10:04.84191263 +0000 UTC m=+1348.605348610" watchObservedRunningTime="2026-03-10 19:10:04.870765592 +0000 UTC m=+1348.634201542" Mar 10 19:10:05 crc kubenswrapper[4861]: I0310 19:10:05.102122 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dca977fd-c0e9-4ef9-89ab-f2cd7a9505a9" path="/var/lib/kubelet/pods/dca977fd-c0e9-4ef9-89ab-f2cd7a9505a9/volumes" Mar 10 19:10:05 crc kubenswrapper[4861]: I0310 19:10:05.103603 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Mar 10 19:10:05 crc kubenswrapper[4861]: I0310 19:10:05.136598 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Mar 10 19:10:05 crc kubenswrapper[4861]: I0310 19:10:05.250674 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Mar 10 19:10:05 crc kubenswrapper[4861]: I0310 19:10:05.315440 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5985c59c55-pnngv" Mar 10 19:10:05 crc kubenswrapper[4861]: I0310 19:10:05.478067 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bc8118e6-fec9-49af-aa6f-a36aaddebdb9-dns-svc\") pod \"bc8118e6-fec9-49af-aa6f-a36aaddebdb9\" (UID: \"bc8118e6-fec9-49af-aa6f-a36aaddebdb9\") " Mar 10 19:10:05 crc kubenswrapper[4861]: I0310 19:10:05.478531 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bc8118e6-fec9-49af-aa6f-a36aaddebdb9-ovsdbserver-sb\") pod \"bc8118e6-fec9-49af-aa6f-a36aaddebdb9\" (UID: \"bc8118e6-fec9-49af-aa6f-a36aaddebdb9\") " Mar 10 19:10:05 crc kubenswrapper[4861]: I0310 19:10:05.478836 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/bc8118e6-fec9-49af-aa6f-a36aaddebdb9-dns-swift-storage-0\") pod \"bc8118e6-fec9-49af-aa6f-a36aaddebdb9\" (UID: \"bc8118e6-fec9-49af-aa6f-a36aaddebdb9\") " Mar 10 19:10:05 crc kubenswrapper[4861]: I0310 19:10:05.478897 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bc8118e6-fec9-49af-aa6f-a36aaddebdb9-config\") pod \"bc8118e6-fec9-49af-aa6f-a36aaddebdb9\" (UID: \"bc8118e6-fec9-49af-aa6f-a36aaddebdb9\") " Mar 10 19:10:05 crc kubenswrapper[4861]: I0310 19:10:05.478931 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mskj8\" (UniqueName: \"kubernetes.io/projected/bc8118e6-fec9-49af-aa6f-a36aaddebdb9-kube-api-access-mskj8\") pod \"bc8118e6-fec9-49af-aa6f-a36aaddebdb9\" (UID: \"bc8118e6-fec9-49af-aa6f-a36aaddebdb9\") " Mar 10 19:10:05 crc kubenswrapper[4861]: I0310 19:10:05.478959 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" 
(UniqueName: \"kubernetes.io/configmap/bc8118e6-fec9-49af-aa6f-a36aaddebdb9-ovsdbserver-nb\") pod \"bc8118e6-fec9-49af-aa6f-a36aaddebdb9\" (UID: \"bc8118e6-fec9-49af-aa6f-a36aaddebdb9\") " Mar 10 19:10:05 crc kubenswrapper[4861]: I0310 19:10:05.495862 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc8118e6-fec9-49af-aa6f-a36aaddebdb9-kube-api-access-mskj8" (OuterVolumeSpecName: "kube-api-access-mskj8") pod "bc8118e6-fec9-49af-aa6f-a36aaddebdb9" (UID: "bc8118e6-fec9-49af-aa6f-a36aaddebdb9"). InnerVolumeSpecName "kube-api-access-mskj8". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 10 19:10:05 crc kubenswrapper[4861]: I0310 19:10:05.508438 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bc8118e6-fec9-49af-aa6f-a36aaddebdb9-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "bc8118e6-fec9-49af-aa6f-a36aaddebdb9" (UID: "bc8118e6-fec9-49af-aa6f-a36aaddebdb9"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 10 19:10:05 crc kubenswrapper[4861]: I0310 19:10:05.519558 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bc8118e6-fec9-49af-aa6f-a36aaddebdb9-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "bc8118e6-fec9-49af-aa6f-a36aaddebdb9" (UID: "bc8118e6-fec9-49af-aa6f-a36aaddebdb9"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 10 19:10:05 crc kubenswrapper[4861]: I0310 19:10:05.529232 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bc8118e6-fec9-49af-aa6f-a36aaddebdb9-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "bc8118e6-fec9-49af-aa6f-a36aaddebdb9" (UID: "bc8118e6-fec9-49af-aa6f-a36aaddebdb9"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 10 19:10:05 crc kubenswrapper[4861]: I0310 19:10:05.548488 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bc8118e6-fec9-49af-aa6f-a36aaddebdb9-config" (OuterVolumeSpecName: "config") pod "bc8118e6-fec9-49af-aa6f-a36aaddebdb9" (UID: "bc8118e6-fec9-49af-aa6f-a36aaddebdb9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 10 19:10:05 crc kubenswrapper[4861]: I0310 19:10:05.549819 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bc8118e6-fec9-49af-aa6f-a36aaddebdb9-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "bc8118e6-fec9-49af-aa6f-a36aaddebdb9" (UID: "bc8118e6-fec9-49af-aa6f-a36aaddebdb9"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 10 19:10:05 crc kubenswrapper[4861]: I0310 19:10:05.588237 4861 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bc8118e6-fec9-49af-aa6f-a36aaddebdb9-dns-svc\") on node \"crc\" DevicePath \"\"" Mar 10 19:10:05 crc kubenswrapper[4861]: I0310 19:10:05.588267 4861 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bc8118e6-fec9-49af-aa6f-a36aaddebdb9-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Mar 10 19:10:05 crc kubenswrapper[4861]: I0310 19:10:05.588276 4861 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/bc8118e6-fec9-49af-aa6f-a36aaddebdb9-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Mar 10 19:10:05 crc kubenswrapper[4861]: I0310 19:10:05.588284 4861 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bc8118e6-fec9-49af-aa6f-a36aaddebdb9-config\") on node \"crc\" DevicePath \"\"" Mar 10 19:10:05 crc 
kubenswrapper[4861]: I0310 19:10:05.588293 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mskj8\" (UniqueName: \"kubernetes.io/projected/bc8118e6-fec9-49af-aa6f-a36aaddebdb9-kube-api-access-mskj8\") on node \"crc\" DevicePath \"\"" Mar 10 19:10:05 crc kubenswrapper[4861]: I0310 19:10:05.588300 4861 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bc8118e6-fec9-49af-aa6f-a36aaddebdb9-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Mar 10 19:10:05 crc kubenswrapper[4861]: I0310 19:10:05.754541 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29552830-7hhcb" Mar 10 19:10:05 crc kubenswrapper[4861]: I0310 19:10:05.886388 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-ccd7c9f8f-vd6rh" event={"ID":"9fb93010-cc9e-4984-91db-e36b6605fa2b","Type":"ContainerStarted","Data":"6f81dc8462e77e0bd4879eefc7d34241267a5b86c818be39c806cf6f982fa143"} Mar 10 19:10:05 crc kubenswrapper[4861]: I0310 19:10:05.887785 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-ccd7c9f8f-vd6rh" Mar 10 19:10:05 crc kubenswrapper[4861]: I0310 19:10:05.889980 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"5d0d0c3c-4090-4dcf-81cb-06d4d0a2cb64","Type":"ContainerStarted","Data":"6dee7d4d067dfe3c4d0bd3285d106d49d570a948243235aa3beb96022861c9a1"} Mar 10 19:10:05 crc kubenswrapper[4861]: I0310 19:10:05.891869 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tzfm4\" (UniqueName: \"kubernetes.io/projected/4d9397cb-c907-4d9f-ae4c-3f05de941f27-kube-api-access-tzfm4\") pod \"4d9397cb-c907-4d9f-ae4c-3f05de941f27\" (UID: \"4d9397cb-c907-4d9f-ae4c-3f05de941f27\") " Mar 10 19:10:05 crc kubenswrapper[4861]: I0310 19:10:05.895722 4861 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/dnsmasq-dns-5985c59c55-pnngv" event={"ID":"bc8118e6-fec9-49af-aa6f-a36aaddebdb9","Type":"ContainerDied","Data":"54a9f513dc7fcc1df264f3bc4931112f1d1abfa18ad47c14e885dd77fc92642a"} Mar 10 19:10:05 crc kubenswrapper[4861]: I0310 19:10:05.895753 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5985c59c55-pnngv" Mar 10 19:10:05 crc kubenswrapper[4861]: I0310 19:10:05.895766 4861 scope.go:117] "RemoveContainer" containerID="1b60ab3b4cc70f877bdfbdcf1ca53cb8e2aa4fbd5628a3d0b85b785c45ea9887" Mar 10 19:10:05 crc kubenswrapper[4861]: I0310 19:10:05.898167 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4d9397cb-c907-4d9f-ae4c-3f05de941f27-kube-api-access-tzfm4" (OuterVolumeSpecName: "kube-api-access-tzfm4") pod "4d9397cb-c907-4d9f-ae4c-3f05de941f27" (UID: "4d9397cb-c907-4d9f-ae4c-3f05de941f27"). InnerVolumeSpecName "kube-api-access-tzfm4". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 10 19:10:05 crc kubenswrapper[4861]: I0310 19:10:05.899061 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29552830-7hhcb" Mar 10 19:10:05 crc kubenswrapper[4861]: I0310 19:10:05.899242 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29552830-7hhcb" event={"ID":"4d9397cb-c907-4d9f-ae4c-3f05de941f27","Type":"ContainerDied","Data":"b640dea0351e1e377e74a75bc306f443b4336ff3cfddf3b22fe9e66782f4286c"} Mar 10 19:10:05 crc kubenswrapper[4861]: I0310 19:10:05.899264 4861 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b640dea0351e1e377e74a75bc306f443b4336ff3cfddf3b22fe9e66782f4286c" Mar 10 19:10:05 crc kubenswrapper[4861]: I0310 19:10:05.913309 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-ccd7c9f8f-vd6rh" podStartSLOduration=3.913291911 podStartE2EDuration="3.913291911s" podCreationTimestamp="2026-03-10 19:10:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-10 19:10:05.906005259 +0000 UTC m=+1349.669441219" watchObservedRunningTime="2026-03-10 19:10:05.913291911 +0000 UTC m=+1349.676727871" Mar 10 19:10:05 crc kubenswrapper[4861]: I0310 19:10:05.994642 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tzfm4\" (UniqueName: \"kubernetes.io/projected/4d9397cb-c907-4d9f-ae4c-3f05de941f27-kube-api-access-tzfm4\") on node \"crc\" DevicePath \"\"" Mar 10 19:10:06 crc kubenswrapper[4861]: I0310 19:10:06.011151 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5985c59c55-pnngv"] Mar 10 19:10:06 crc kubenswrapper[4861]: I0310 19:10:06.043602 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5985c59c55-pnngv"] Mar 10 19:10:06 crc kubenswrapper[4861]: I0310 19:10:06.825800 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29552824-ftlrt"] Mar 10 19:10:06 crc 
kubenswrapper[4861]: I0310 19:10:06.834801 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29552824-ftlrt"] Mar 10 19:10:06 crc kubenswrapper[4861]: I0310 19:10:06.910721 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"549d0570-3b33-4115-965c-155129434e7b","Type":"ContainerStarted","Data":"4c0efee6b4d2a500b0e035e5bb28af5a6adf89f45b2b67f736a7d63b1ccbdb20"} Mar 10 19:10:06 crc kubenswrapper[4861]: I0310 19:10:06.910777 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="549d0570-3b33-4115-965c-155129434e7b" containerName="glance-log" containerID="cri-o://a102b62364d264e275abf34cfeb0ad59ac4586f6d4a2a9cf6078842b3fe34dac" gracePeriod=30 Mar 10 19:10:06 crc kubenswrapper[4861]: I0310 19:10:06.911092 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="549d0570-3b33-4115-965c-155129434e7b" containerName="glance-httpd" containerID="cri-o://4c0efee6b4d2a500b0e035e5bb28af5a6adf89f45b2b67f736a7d63b1ccbdb20" gracePeriod=30 Mar 10 19:10:06 crc kubenswrapper[4861]: I0310 19:10:06.914258 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"5d0d0c3c-4090-4dcf-81cb-06d4d0a2cb64","Type":"ContainerStarted","Data":"0e2ac1c24ac49b6216bf3b991f7b14a2adb0dc552c1e50444d805d506c0951ac"} Mar 10 19:10:06 crc kubenswrapper[4861]: I0310 19:10:06.940038 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=4.940019692 podStartE2EDuration="4.940019692s" podCreationTimestamp="2026-03-10 19:10:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-10 19:10:06.933453639 +0000 UTC m=+1350.696889609" 
watchObservedRunningTime="2026-03-10 19:10:06.940019692 +0000 UTC m=+1350.703455652" Mar 10 19:10:06 crc kubenswrapper[4861]: I0310 19:10:06.992829 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc8118e6-fec9-49af-aa6f-a36aaddebdb9" path="/var/lib/kubelet/pods/bc8118e6-fec9-49af-aa6f-a36aaddebdb9/volumes" Mar 10 19:10:06 crc kubenswrapper[4861]: I0310 19:10:06.993311 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e3ad2728-5796-4d31-ab8d-2a18a41b5687" path="/var/lib/kubelet/pods/e3ad2728-5796-4d31-ab8d-2a18a41b5687/volumes" Mar 10 19:10:07 crc kubenswrapper[4861]: I0310 19:10:07.956293 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"5d0d0c3c-4090-4dcf-81cb-06d4d0a2cb64","Type":"ContainerStarted","Data":"afbae3dfa5e9fe5b9a6c440b9faa889a8f21c33d29a38bdb3f3701c6c0809668"} Mar 10 19:10:07 crc kubenswrapper[4861]: I0310 19:10:07.956427 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="5d0d0c3c-4090-4dcf-81cb-06d4d0a2cb64" containerName="glance-httpd" containerID="cri-o://afbae3dfa5e9fe5b9a6c440b9faa889a8f21c33d29a38bdb3f3701c6c0809668" gracePeriod=30 Mar 10 19:10:07 crc kubenswrapper[4861]: I0310 19:10:07.957359 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="5d0d0c3c-4090-4dcf-81cb-06d4d0a2cb64" containerName="glance-log" containerID="cri-o://0e2ac1c24ac49b6216bf3b991f7b14a2adb0dc552c1e50444d805d506c0951ac" gracePeriod=30 Mar 10 19:10:07 crc kubenswrapper[4861]: I0310 19:10:07.961396 4861 generic.go:334] "Generic (PLEG): container finished" podID="5686eba7-5925-4ca9-9530-5f5a152c0eb0" containerID="06d0f6cf0f2d406fc184a0557e435f68af1044052fead949e95e45b22d9abb5e" exitCode=0 Mar 10 19:10:07 crc kubenswrapper[4861]: I0310 19:10:07.961445 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/keystone-bootstrap-hf9nx" event={"ID":"5686eba7-5925-4ca9-9530-5f5a152c0eb0","Type":"ContainerDied","Data":"06d0f6cf0f2d406fc184a0557e435f68af1044052fead949e95e45b22d9abb5e"} Mar 10 19:10:07 crc kubenswrapper[4861]: I0310 19:10:07.964538 4861 generic.go:334] "Generic (PLEG): container finished" podID="549d0570-3b33-4115-965c-155129434e7b" containerID="4c0efee6b4d2a500b0e035e5bb28af5a6adf89f45b2b67f736a7d63b1ccbdb20" exitCode=0 Mar 10 19:10:07 crc kubenswrapper[4861]: I0310 19:10:07.964560 4861 generic.go:334] "Generic (PLEG): container finished" podID="549d0570-3b33-4115-965c-155129434e7b" containerID="a102b62364d264e275abf34cfeb0ad59ac4586f6d4a2a9cf6078842b3fe34dac" exitCode=143 Mar 10 19:10:07 crc kubenswrapper[4861]: I0310 19:10:07.965238 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"549d0570-3b33-4115-965c-155129434e7b","Type":"ContainerDied","Data":"4c0efee6b4d2a500b0e035e5bb28af5a6adf89f45b2b67f736a7d63b1ccbdb20"} Mar 10 19:10:07 crc kubenswrapper[4861]: I0310 19:10:07.965263 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"549d0570-3b33-4115-965c-155129434e7b","Type":"ContainerDied","Data":"a102b62364d264e275abf34cfeb0ad59ac4586f6d4a2a9cf6078842b3fe34dac"} Mar 10 19:10:07 crc kubenswrapper[4861]: I0310 19:10:07.986680 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=5.986661936 podStartE2EDuration="5.986661936s" podCreationTimestamp="2026-03-10 19:10:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-10 19:10:07.97995862 +0000 UTC m=+1351.743394590" watchObservedRunningTime="2026-03-10 19:10:07.986661936 +0000 UTC m=+1351.750097896" Mar 10 19:10:08 crc kubenswrapper[4861]: I0310 19:10:08.984727 4861 generic.go:334] "Generic 
(PLEG): container finished" podID="5d0d0c3c-4090-4dcf-81cb-06d4d0a2cb64" containerID="afbae3dfa5e9fe5b9a6c440b9faa889a8f21c33d29a38bdb3f3701c6c0809668" exitCode=0 Mar 10 19:10:08 crc kubenswrapper[4861]: I0310 19:10:08.985047 4861 generic.go:334] "Generic (PLEG): container finished" podID="5d0d0c3c-4090-4dcf-81cb-06d4d0a2cb64" containerID="0e2ac1c24ac49b6216bf3b991f7b14a2adb0dc552c1e50444d805d506c0951ac" exitCode=143 Mar 10 19:10:08 crc kubenswrapper[4861]: I0310 19:10:08.984789 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"5d0d0c3c-4090-4dcf-81cb-06d4d0a2cb64","Type":"ContainerDied","Data":"afbae3dfa5e9fe5b9a6c440b9faa889a8f21c33d29a38bdb3f3701c6c0809668"} Mar 10 19:10:08 crc kubenswrapper[4861]: I0310 19:10:08.985115 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"5d0d0c3c-4090-4dcf-81cb-06d4d0a2cb64","Type":"ContainerDied","Data":"0e2ac1c24ac49b6216bf3b991f7b14a2adb0dc552c1e50444d805d506c0951ac"} Mar 10 19:10:09 crc kubenswrapper[4861]: I0310 19:10:09.552895 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Mar 10 19:10:09 crc kubenswrapper[4861]: I0310 19:10:09.562798 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-hf9nx" Mar 10 19:10:09 crc kubenswrapper[4861]: I0310 19:10:09.687676 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5686eba7-5925-4ca9-9530-5f5a152c0eb0-combined-ca-bundle\") pod \"5686eba7-5925-4ca9-9530-5f5a152c0eb0\" (UID: \"5686eba7-5925-4ca9-9530-5f5a152c0eb0\") " Mar 10 19:10:09 crc kubenswrapper[4861]: I0310 19:10:09.687790 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/549d0570-3b33-4115-965c-155129434e7b-internal-tls-certs\") pod \"549d0570-3b33-4115-965c-155129434e7b\" (UID: \"549d0570-3b33-4115-965c-155129434e7b\") " Mar 10 19:10:09 crc kubenswrapper[4861]: I0310 19:10:09.687839 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v7nrz\" (UniqueName: \"kubernetes.io/projected/549d0570-3b33-4115-965c-155129434e7b-kube-api-access-v7nrz\") pod \"549d0570-3b33-4115-965c-155129434e7b\" (UID: \"549d0570-3b33-4115-965c-155129434e7b\") " Mar 10 19:10:09 crc kubenswrapper[4861]: I0310 19:10:09.687888 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/549d0570-3b33-4115-965c-155129434e7b-combined-ca-bundle\") pod \"549d0570-3b33-4115-965c-155129434e7b\" (UID: \"549d0570-3b33-4115-965c-155129434e7b\") " Mar 10 19:10:09 crc kubenswrapper[4861]: I0310 19:10:09.687940 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/5686eba7-5925-4ca9-9530-5f5a152c0eb0-credential-keys\") pod \"5686eba7-5925-4ca9-9530-5f5a152c0eb0\" (UID: \"5686eba7-5925-4ca9-9530-5f5a152c0eb0\") " Mar 10 19:10:09 crc kubenswrapper[4861]: I0310 19:10:09.687979 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started 
for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/549d0570-3b33-4115-965c-155129434e7b-logs\") pod \"549d0570-3b33-4115-965c-155129434e7b\" (UID: \"549d0570-3b33-4115-965c-155129434e7b\") " Mar 10 19:10:09 crc kubenswrapper[4861]: I0310 19:10:09.688018 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/549d0570-3b33-4115-965c-155129434e7b-httpd-run\") pod \"549d0570-3b33-4115-965c-155129434e7b\" (UID: \"549d0570-3b33-4115-965c-155129434e7b\") " Mar 10 19:10:09 crc kubenswrapper[4861]: I0310 19:10:09.688054 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"549d0570-3b33-4115-965c-155129434e7b\" (UID: \"549d0570-3b33-4115-965c-155129434e7b\") " Mar 10 19:10:09 crc kubenswrapper[4861]: I0310 19:10:09.688077 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5686eba7-5925-4ca9-9530-5f5a152c0eb0-scripts\") pod \"5686eba7-5925-4ca9-9530-5f5a152c0eb0\" (UID: \"5686eba7-5925-4ca9-9530-5f5a152c0eb0\") " Mar 10 19:10:09 crc kubenswrapper[4861]: I0310 19:10:09.689163 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/549d0570-3b33-4115-965c-155129434e7b-scripts\") pod \"549d0570-3b33-4115-965c-155129434e7b\" (UID: \"549d0570-3b33-4115-965c-155129434e7b\") " Mar 10 19:10:09 crc kubenswrapper[4861]: I0310 19:10:09.689222 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-whth2\" (UniqueName: \"kubernetes.io/projected/5686eba7-5925-4ca9-9530-5f5a152c0eb0-kube-api-access-whth2\") pod \"5686eba7-5925-4ca9-9530-5f5a152c0eb0\" (UID: \"5686eba7-5925-4ca9-9530-5f5a152c0eb0\") " Mar 10 19:10:09 crc kubenswrapper[4861]: I0310 19:10:09.689268 4861 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/5686eba7-5925-4ca9-9530-5f5a152c0eb0-fernet-keys\") pod \"5686eba7-5925-4ca9-9530-5f5a152c0eb0\" (UID: \"5686eba7-5925-4ca9-9530-5f5a152c0eb0\") " Mar 10 19:10:09 crc kubenswrapper[4861]: I0310 19:10:09.689316 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/549d0570-3b33-4115-965c-155129434e7b-config-data\") pod \"549d0570-3b33-4115-965c-155129434e7b\" (UID: \"549d0570-3b33-4115-965c-155129434e7b\") " Mar 10 19:10:09 crc kubenswrapper[4861]: I0310 19:10:09.689354 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/549d0570-3b33-4115-965c-155129434e7b-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "549d0570-3b33-4115-965c-155129434e7b" (UID: "549d0570-3b33-4115-965c-155129434e7b"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 10 19:10:09 crc kubenswrapper[4861]: I0310 19:10:09.689384 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5686eba7-5925-4ca9-9530-5f5a152c0eb0-config-data\") pod \"5686eba7-5925-4ca9-9530-5f5a152c0eb0\" (UID: \"5686eba7-5925-4ca9-9530-5f5a152c0eb0\") " Mar 10 19:10:09 crc kubenswrapper[4861]: I0310 19:10:09.690661 4861 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/549d0570-3b33-4115-965c-155129434e7b-httpd-run\") on node \"crc\" DevicePath \"\"" Mar 10 19:10:09 crc kubenswrapper[4861]: I0310 19:10:09.694076 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5686eba7-5925-4ca9-9530-5f5a152c0eb0-scripts" (OuterVolumeSpecName: "scripts") pod "5686eba7-5925-4ca9-9530-5f5a152c0eb0" (UID: "5686eba7-5925-4ca9-9530-5f5a152c0eb0"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 10 19:10:09 crc kubenswrapper[4861]: I0310 19:10:09.694643 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5686eba7-5925-4ca9-9530-5f5a152c0eb0-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "5686eba7-5925-4ca9-9530-5f5a152c0eb0" (UID: "5686eba7-5925-4ca9-9530-5f5a152c0eb0"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 10 19:10:09 crc kubenswrapper[4861]: I0310 19:10:09.690628 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/549d0570-3b33-4115-965c-155129434e7b-logs" (OuterVolumeSpecName: "logs") pod "549d0570-3b33-4115-965c-155129434e7b" (UID: "549d0570-3b33-4115-965c-155129434e7b"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 10 19:10:09 crc kubenswrapper[4861]: I0310 19:10:09.696130 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5686eba7-5925-4ca9-9530-5f5a152c0eb0-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "5686eba7-5925-4ca9-9530-5f5a152c0eb0" (UID: "5686eba7-5925-4ca9-9530-5f5a152c0eb0"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 10 19:10:09 crc kubenswrapper[4861]: I0310 19:10:09.696157 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage10-crc" (OuterVolumeSpecName: "glance") pod "549d0570-3b33-4115-965c-155129434e7b" (UID: "549d0570-3b33-4115-965c-155129434e7b"). InnerVolumeSpecName "local-storage10-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Mar 10 19:10:09 crc kubenswrapper[4861]: I0310 19:10:09.696401 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/549d0570-3b33-4115-965c-155129434e7b-kube-api-access-v7nrz" (OuterVolumeSpecName: "kube-api-access-v7nrz") pod "549d0570-3b33-4115-965c-155129434e7b" (UID: "549d0570-3b33-4115-965c-155129434e7b"). InnerVolumeSpecName "kube-api-access-v7nrz". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 10 19:10:09 crc kubenswrapper[4861]: I0310 19:10:09.699918 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/549d0570-3b33-4115-965c-155129434e7b-scripts" (OuterVolumeSpecName: "scripts") pod "549d0570-3b33-4115-965c-155129434e7b" (UID: "549d0570-3b33-4115-965c-155129434e7b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 10 19:10:09 crc kubenswrapper[4861]: I0310 19:10:09.705869 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5686eba7-5925-4ca9-9530-5f5a152c0eb0-kube-api-access-whth2" (OuterVolumeSpecName: "kube-api-access-whth2") pod "5686eba7-5925-4ca9-9530-5f5a152c0eb0" (UID: "5686eba7-5925-4ca9-9530-5f5a152c0eb0"). InnerVolumeSpecName "kube-api-access-whth2". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 10 19:10:09 crc kubenswrapper[4861]: I0310 19:10:09.732458 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/549d0570-3b33-4115-965c-155129434e7b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "549d0570-3b33-4115-965c-155129434e7b" (UID: "549d0570-3b33-4115-965c-155129434e7b"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 10 19:10:09 crc kubenswrapper[4861]: I0310 19:10:09.744044 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5686eba7-5925-4ca9-9530-5f5a152c0eb0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5686eba7-5925-4ca9-9530-5f5a152c0eb0" (UID: "5686eba7-5925-4ca9-9530-5f5a152c0eb0"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 10 19:10:09 crc kubenswrapper[4861]: I0310 19:10:09.746425 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5686eba7-5925-4ca9-9530-5f5a152c0eb0-config-data" (OuterVolumeSpecName: "config-data") pod "5686eba7-5925-4ca9-9530-5f5a152c0eb0" (UID: "5686eba7-5925-4ca9-9530-5f5a152c0eb0"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 10 19:10:09 crc kubenswrapper[4861]: I0310 19:10:09.761563 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/549d0570-3b33-4115-965c-155129434e7b-config-data" (OuterVolumeSpecName: "config-data") pod "549d0570-3b33-4115-965c-155129434e7b" (UID: "549d0570-3b33-4115-965c-155129434e7b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 10 19:10:09 crc kubenswrapper[4861]: I0310 19:10:09.784798 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/549d0570-3b33-4115-965c-155129434e7b-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "549d0570-3b33-4115-965c-155129434e7b" (UID: "549d0570-3b33-4115-965c-155129434e7b"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 10 19:10:09 crc kubenswrapper[4861]: I0310 19:10:09.791846 4861 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5686eba7-5925-4ca9-9530-5f5a152c0eb0-config-data\") on node \"crc\" DevicePath \"\"" Mar 10 19:10:09 crc kubenswrapper[4861]: I0310 19:10:09.791871 4861 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5686eba7-5925-4ca9-9530-5f5a152c0eb0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 10 19:10:09 crc kubenswrapper[4861]: I0310 19:10:09.791880 4861 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/549d0570-3b33-4115-965c-155129434e7b-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 10 19:10:09 crc kubenswrapper[4861]: I0310 19:10:09.791890 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v7nrz\" (UniqueName: \"kubernetes.io/projected/549d0570-3b33-4115-965c-155129434e7b-kube-api-access-v7nrz\") on node \"crc\" DevicePath \"\"" Mar 10 19:10:09 crc kubenswrapper[4861]: I0310 19:10:09.791899 4861 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/549d0570-3b33-4115-965c-155129434e7b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 10 19:10:09 crc kubenswrapper[4861]: I0310 19:10:09.791907 4861 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/5686eba7-5925-4ca9-9530-5f5a152c0eb0-credential-keys\") on node \"crc\" DevicePath \"\"" Mar 10 19:10:09 crc kubenswrapper[4861]: I0310 19:10:09.791916 4861 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/549d0570-3b33-4115-965c-155129434e7b-logs\") on node \"crc\" DevicePath \"\"" Mar 10 19:10:09 crc kubenswrapper[4861]: I0310 19:10:09.791939 4861 
reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" " Mar 10 19:10:09 crc kubenswrapper[4861]: I0310 19:10:09.791947 4861 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5686eba7-5925-4ca9-9530-5f5a152c0eb0-scripts\") on node \"crc\" DevicePath \"\"" Mar 10 19:10:09 crc kubenswrapper[4861]: I0310 19:10:09.791955 4861 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/549d0570-3b33-4115-965c-155129434e7b-scripts\") on node \"crc\" DevicePath \"\"" Mar 10 19:10:09 crc kubenswrapper[4861]: I0310 19:10:09.791963 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-whth2\" (UniqueName: \"kubernetes.io/projected/5686eba7-5925-4ca9-9530-5f5a152c0eb0-kube-api-access-whth2\") on node \"crc\" DevicePath \"\"" Mar 10 19:10:09 crc kubenswrapper[4861]: I0310 19:10:09.791971 4861 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/5686eba7-5925-4ca9-9530-5f5a152c0eb0-fernet-keys\") on node \"crc\" DevicePath \"\"" Mar 10 19:10:09 crc kubenswrapper[4861]: I0310 19:10:09.791980 4861 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/549d0570-3b33-4115-965c-155129434e7b-config-data\") on node \"crc\" DevicePath \"\"" Mar 10 19:10:09 crc kubenswrapper[4861]: I0310 19:10:09.822453 4861 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage10-crc" (UniqueName: "kubernetes.io/local-volume/local-storage10-crc") on node "crc" Mar 10 19:10:09 crc kubenswrapper[4861]: I0310 19:10:09.893962 4861 reconciler_common.go:293] "Volume detached for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" DevicePath \"\"" Mar 10 19:10:10 crc 
kubenswrapper[4861]: I0310 19:10:10.012580 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-hf9nx" event={"ID":"5686eba7-5925-4ca9-9530-5f5a152c0eb0","Type":"ContainerDied","Data":"6caf801fea4f2b363c097797ba8c640dac9015213c155667798dd848f0871188"} Mar 10 19:10:10 crc kubenswrapper[4861]: I0310 19:10:10.012766 4861 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6caf801fea4f2b363c097797ba8c640dac9015213c155667798dd848f0871188" Mar 10 19:10:10 crc kubenswrapper[4861]: I0310 19:10:10.012989 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-hf9nx" Mar 10 19:10:10 crc kubenswrapper[4861]: I0310 19:10:10.015553 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"549d0570-3b33-4115-965c-155129434e7b","Type":"ContainerDied","Data":"e68692424a58eb57128483db63e5326a4f5ce6653fd335e8f593e0dc8b44ef21"} Mar 10 19:10:10 crc kubenswrapper[4861]: I0310 19:10:10.015748 4861 scope.go:117] "RemoveContainer" containerID="4c0efee6b4d2a500b0e035e5bb28af5a6adf89f45b2b67f736a7d63b1ccbdb20" Mar 10 19:10:10 crc kubenswrapper[4861]: I0310 19:10:10.015759 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Mar 10 19:10:10 crc kubenswrapper[4861]: I0310 19:10:10.060564 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-hf9nx"] Mar 10 19:10:10 crc kubenswrapper[4861]: I0310 19:10:10.067178 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-hf9nx"] Mar 10 19:10:10 crc kubenswrapper[4861]: I0310 19:10:10.078559 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Mar 10 19:10:10 crc kubenswrapper[4861]: I0310 19:10:10.088200 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Mar 10 19:10:10 crc kubenswrapper[4861]: I0310 19:10:10.095495 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Mar 10 19:10:10 crc kubenswrapper[4861]: E0310 19:10:10.097729 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="549d0570-3b33-4115-965c-155129434e7b" containerName="glance-httpd" Mar 10 19:10:10 crc kubenswrapper[4861]: I0310 19:10:10.097745 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="549d0570-3b33-4115-965c-155129434e7b" containerName="glance-httpd" Mar 10 19:10:10 crc kubenswrapper[4861]: E0310 19:10:10.097759 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4d9397cb-c907-4d9f-ae4c-3f05de941f27" containerName="oc" Mar 10 19:10:10 crc kubenswrapper[4861]: I0310 19:10:10.097765 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="4d9397cb-c907-4d9f-ae4c-3f05de941f27" containerName="oc" Mar 10 19:10:10 crc kubenswrapper[4861]: E0310 19:10:10.097776 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5686eba7-5925-4ca9-9530-5f5a152c0eb0" containerName="keystone-bootstrap" Mar 10 19:10:10 crc kubenswrapper[4861]: I0310 19:10:10.097782 4861 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="5686eba7-5925-4ca9-9530-5f5a152c0eb0" containerName="keystone-bootstrap" Mar 10 19:10:10 crc kubenswrapper[4861]: E0310 19:10:10.097794 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bc8118e6-fec9-49af-aa6f-a36aaddebdb9" containerName="init" Mar 10 19:10:10 crc kubenswrapper[4861]: I0310 19:10:10.097799 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="bc8118e6-fec9-49af-aa6f-a36aaddebdb9" containerName="init" Mar 10 19:10:10 crc kubenswrapper[4861]: E0310 19:10:10.097817 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="549d0570-3b33-4115-965c-155129434e7b" containerName="glance-log" Mar 10 19:10:10 crc kubenswrapper[4861]: I0310 19:10:10.097823 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="549d0570-3b33-4115-965c-155129434e7b" containerName="glance-log" Mar 10 19:10:10 crc kubenswrapper[4861]: I0310 19:10:10.097975 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="549d0570-3b33-4115-965c-155129434e7b" containerName="glance-log" Mar 10 19:10:10 crc kubenswrapper[4861]: I0310 19:10:10.097987 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="4d9397cb-c907-4d9f-ae4c-3f05de941f27" containerName="oc" Mar 10 19:10:10 crc kubenswrapper[4861]: I0310 19:10:10.097996 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="5686eba7-5925-4ca9-9530-5f5a152c0eb0" containerName="keystone-bootstrap" Mar 10 19:10:10 crc kubenswrapper[4861]: I0310 19:10:10.098007 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="bc8118e6-fec9-49af-aa6f-a36aaddebdb9" containerName="init" Mar 10 19:10:10 crc kubenswrapper[4861]: I0310 19:10:10.098015 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="549d0570-3b33-4115-965c-155129434e7b" containerName="glance-httpd" Mar 10 19:10:10 crc kubenswrapper[4861]: I0310 19:10:10.098957 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Mar 10 19:10:10 crc kubenswrapper[4861]: I0310 19:10:10.102186 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Mar 10 19:10:10 crc kubenswrapper[4861]: I0310 19:10:10.102250 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Mar 10 19:10:10 crc kubenswrapper[4861]: I0310 19:10:10.110283 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Mar 10 19:10:10 crc kubenswrapper[4861]: I0310 19:10:10.198402 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-6tt6m"] Mar 10 19:10:10 crc kubenswrapper[4861]: I0310 19:10:10.200716 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-6tt6m" Mar 10 19:10:10 crc kubenswrapper[4861]: I0310 19:10:10.204101 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/256b5814-23a7-4f27-8c86-544ec5290a5d-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"256b5814-23a7-4f27-8c86-544ec5290a5d\") " pod="openstack/glance-default-internal-api-0" Mar 10 19:10:10 crc kubenswrapper[4861]: I0310 19:10:10.204220 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/256b5814-23a7-4f27-8c86-544ec5290a5d-logs\") pod \"glance-default-internal-api-0\" (UID: \"256b5814-23a7-4f27-8c86-544ec5290a5d\") " pod="openstack/glance-default-internal-api-0" Mar 10 19:10:10 crc kubenswrapper[4861]: I0310 19:10:10.204268 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod 
\"glance-default-internal-api-0\" (UID: \"256b5814-23a7-4f27-8c86-544ec5290a5d\") " pod="openstack/glance-default-internal-api-0" Mar 10 19:10:10 crc kubenswrapper[4861]: I0310 19:10:10.204400 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/256b5814-23a7-4f27-8c86-544ec5290a5d-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"256b5814-23a7-4f27-8c86-544ec5290a5d\") " pod="openstack/glance-default-internal-api-0" Mar 10 19:10:10 crc kubenswrapper[4861]: I0310 19:10:10.204437 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/256b5814-23a7-4f27-8c86-544ec5290a5d-scripts\") pod \"glance-default-internal-api-0\" (UID: \"256b5814-23a7-4f27-8c86-544ec5290a5d\") " pod="openstack/glance-default-internal-api-0" Mar 10 19:10:10 crc kubenswrapper[4861]: I0310 19:10:10.204566 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/256b5814-23a7-4f27-8c86-544ec5290a5d-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"256b5814-23a7-4f27-8c86-544ec5290a5d\") " pod="openstack/glance-default-internal-api-0" Mar 10 19:10:10 crc kubenswrapper[4861]: I0310 19:10:10.204624 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/256b5814-23a7-4f27-8c86-544ec5290a5d-config-data\") pod \"glance-default-internal-api-0\" (UID: \"256b5814-23a7-4f27-8c86-544ec5290a5d\") " pod="openstack/glance-default-internal-api-0" Mar 10 19:10:10 crc kubenswrapper[4861]: I0310 19:10:10.204646 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nsh9k\" (UniqueName: 
\"kubernetes.io/projected/256b5814-23a7-4f27-8c86-544ec5290a5d-kube-api-access-nsh9k\") pod \"glance-default-internal-api-0\" (UID: \"256b5814-23a7-4f27-8c86-544ec5290a5d\") " pod="openstack/glance-default-internal-api-0" Mar 10 19:10:10 crc kubenswrapper[4861]: I0310 19:10:10.205255 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Mar 10 19:10:10 crc kubenswrapper[4861]: I0310 19:10:10.205374 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-xllsk" Mar 10 19:10:10 crc kubenswrapper[4861]: I0310 19:10:10.205254 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Mar 10 19:10:10 crc kubenswrapper[4861]: I0310 19:10:10.205534 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Mar 10 19:10:10 crc kubenswrapper[4861]: I0310 19:10:10.205543 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Mar 10 19:10:10 crc kubenswrapper[4861]: I0310 19:10:10.214129 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-6tt6m"] Mar 10 19:10:10 crc kubenswrapper[4861]: I0310 19:10:10.305398 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ef81bd1a-d4db-4b18-a755-5fc31d09e4dd-scripts\") pod \"keystone-bootstrap-6tt6m\" (UID: \"ef81bd1a-d4db-4b18-a755-5fc31d09e4dd\") " pod="openstack/keystone-bootstrap-6tt6m" Mar 10 19:10:10 crc kubenswrapper[4861]: I0310 19:10:10.305444 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/256b5814-23a7-4f27-8c86-544ec5290a5d-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"256b5814-23a7-4f27-8c86-544ec5290a5d\") " pod="openstack/glance-default-internal-api-0" Mar 10 19:10:10 crc 
kubenswrapper[4861]: I0310 19:10:10.305467 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/256b5814-23a7-4f27-8c86-544ec5290a5d-scripts\") pod \"glance-default-internal-api-0\" (UID: \"256b5814-23a7-4f27-8c86-544ec5290a5d\") " pod="openstack/glance-default-internal-api-0" Mar 10 19:10:10 crc kubenswrapper[4861]: I0310 19:10:10.305509 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/ef81bd1a-d4db-4b18-a755-5fc31d09e4dd-fernet-keys\") pod \"keystone-bootstrap-6tt6m\" (UID: \"ef81bd1a-d4db-4b18-a755-5fc31d09e4dd\") " pod="openstack/keystone-bootstrap-6tt6m" Mar 10 19:10:10 crc kubenswrapper[4861]: I0310 19:10:10.305528 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/256b5814-23a7-4f27-8c86-544ec5290a5d-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"256b5814-23a7-4f27-8c86-544ec5290a5d\") " pod="openstack/glance-default-internal-api-0" Mar 10 19:10:10 crc kubenswrapper[4861]: I0310 19:10:10.305548 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xhcxr\" (UniqueName: \"kubernetes.io/projected/ef81bd1a-d4db-4b18-a755-5fc31d09e4dd-kube-api-access-xhcxr\") pod \"keystone-bootstrap-6tt6m\" (UID: \"ef81bd1a-d4db-4b18-a755-5fc31d09e4dd\") " pod="openstack/keystone-bootstrap-6tt6m" Mar 10 19:10:10 crc kubenswrapper[4861]: I0310 19:10:10.305571 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/256b5814-23a7-4f27-8c86-544ec5290a5d-config-data\") pod \"glance-default-internal-api-0\" (UID: \"256b5814-23a7-4f27-8c86-544ec5290a5d\") " pod="openstack/glance-default-internal-api-0" Mar 10 19:10:10 crc kubenswrapper[4861]: I0310 19:10:10.305588 4861 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nsh9k\" (UniqueName: \"kubernetes.io/projected/256b5814-23a7-4f27-8c86-544ec5290a5d-kube-api-access-nsh9k\") pod \"glance-default-internal-api-0\" (UID: \"256b5814-23a7-4f27-8c86-544ec5290a5d\") " pod="openstack/glance-default-internal-api-0" Mar 10 19:10:10 crc kubenswrapper[4861]: I0310 19:10:10.305626 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/256b5814-23a7-4f27-8c86-544ec5290a5d-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"256b5814-23a7-4f27-8c86-544ec5290a5d\") " pod="openstack/glance-default-internal-api-0" Mar 10 19:10:10 crc kubenswrapper[4861]: I0310 19:10:10.305649 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/256b5814-23a7-4f27-8c86-544ec5290a5d-logs\") pod \"glance-default-internal-api-0\" (UID: \"256b5814-23a7-4f27-8c86-544ec5290a5d\") " pod="openstack/glance-default-internal-api-0" Mar 10 19:10:10 crc kubenswrapper[4861]: I0310 19:10:10.305671 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-internal-api-0\" (UID: \"256b5814-23a7-4f27-8c86-544ec5290a5d\") " pod="openstack/glance-default-internal-api-0" Mar 10 19:10:10 crc kubenswrapper[4861]: I0310 19:10:10.305688 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ef81bd1a-d4db-4b18-a755-5fc31d09e4dd-combined-ca-bundle\") pod \"keystone-bootstrap-6tt6m\" (UID: \"ef81bd1a-d4db-4b18-a755-5fc31d09e4dd\") " pod="openstack/keystone-bootstrap-6tt6m" Mar 10 19:10:10 crc kubenswrapper[4861]: I0310 19:10:10.305718 4861 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/ef81bd1a-d4db-4b18-a755-5fc31d09e4dd-credential-keys\") pod \"keystone-bootstrap-6tt6m\" (UID: \"ef81bd1a-d4db-4b18-a755-5fc31d09e4dd\") " pod="openstack/keystone-bootstrap-6tt6m" Mar 10 19:10:10 crc kubenswrapper[4861]: I0310 19:10:10.305740 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ef81bd1a-d4db-4b18-a755-5fc31d09e4dd-config-data\") pod \"keystone-bootstrap-6tt6m\" (UID: \"ef81bd1a-d4db-4b18-a755-5fc31d09e4dd\") " pod="openstack/keystone-bootstrap-6tt6m" Mar 10 19:10:10 crc kubenswrapper[4861]: I0310 19:10:10.307365 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/256b5814-23a7-4f27-8c86-544ec5290a5d-logs\") pod \"glance-default-internal-api-0\" (UID: \"256b5814-23a7-4f27-8c86-544ec5290a5d\") " pod="openstack/glance-default-internal-api-0" Mar 10 19:10:10 crc kubenswrapper[4861]: I0310 19:10:10.307615 4861 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-internal-api-0\" (UID: \"256b5814-23a7-4f27-8c86-544ec5290a5d\") device mount path \"/mnt/openstack/pv10\"" pod="openstack/glance-default-internal-api-0" Mar 10 19:10:10 crc kubenswrapper[4861]: I0310 19:10:10.309014 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/256b5814-23a7-4f27-8c86-544ec5290a5d-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"256b5814-23a7-4f27-8c86-544ec5290a5d\") " pod="openstack/glance-default-internal-api-0" Mar 10 19:10:10 crc kubenswrapper[4861]: I0310 19:10:10.310877 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/256b5814-23a7-4f27-8c86-544ec5290a5d-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"256b5814-23a7-4f27-8c86-544ec5290a5d\") " pod="openstack/glance-default-internal-api-0" Mar 10 19:10:10 crc kubenswrapper[4861]: I0310 19:10:10.311144 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/256b5814-23a7-4f27-8c86-544ec5290a5d-config-data\") pod \"glance-default-internal-api-0\" (UID: \"256b5814-23a7-4f27-8c86-544ec5290a5d\") " pod="openstack/glance-default-internal-api-0" Mar 10 19:10:10 crc kubenswrapper[4861]: I0310 19:10:10.313118 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/256b5814-23a7-4f27-8c86-544ec5290a5d-scripts\") pod \"glance-default-internal-api-0\" (UID: \"256b5814-23a7-4f27-8c86-544ec5290a5d\") " pod="openstack/glance-default-internal-api-0" Mar 10 19:10:10 crc kubenswrapper[4861]: I0310 19:10:10.313635 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/256b5814-23a7-4f27-8c86-544ec5290a5d-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"256b5814-23a7-4f27-8c86-544ec5290a5d\") " pod="openstack/glance-default-internal-api-0" Mar 10 19:10:10 crc kubenswrapper[4861]: I0310 19:10:10.325266 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nsh9k\" (UniqueName: \"kubernetes.io/projected/256b5814-23a7-4f27-8c86-544ec5290a5d-kube-api-access-nsh9k\") pod \"glance-default-internal-api-0\" (UID: \"256b5814-23a7-4f27-8c86-544ec5290a5d\") " pod="openstack/glance-default-internal-api-0" Mar 10 19:10:10 crc kubenswrapper[4861]: I0310 19:10:10.330633 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod 
\"glance-default-internal-api-0\" (UID: \"256b5814-23a7-4f27-8c86-544ec5290a5d\") " pod="openstack/glance-default-internal-api-0" Mar 10 19:10:10 crc kubenswrapper[4861]: I0310 19:10:10.407740 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ef81bd1a-d4db-4b18-a755-5fc31d09e4dd-config-data\") pod \"keystone-bootstrap-6tt6m\" (UID: \"ef81bd1a-d4db-4b18-a755-5fc31d09e4dd\") " pod="openstack/keystone-bootstrap-6tt6m" Mar 10 19:10:10 crc kubenswrapper[4861]: I0310 19:10:10.407838 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ef81bd1a-d4db-4b18-a755-5fc31d09e4dd-scripts\") pod \"keystone-bootstrap-6tt6m\" (UID: \"ef81bd1a-d4db-4b18-a755-5fc31d09e4dd\") " pod="openstack/keystone-bootstrap-6tt6m" Mar 10 19:10:10 crc kubenswrapper[4861]: I0310 19:10:10.407925 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/ef81bd1a-d4db-4b18-a755-5fc31d09e4dd-fernet-keys\") pod \"keystone-bootstrap-6tt6m\" (UID: \"ef81bd1a-d4db-4b18-a755-5fc31d09e4dd\") " pod="openstack/keystone-bootstrap-6tt6m" Mar 10 19:10:10 crc kubenswrapper[4861]: I0310 19:10:10.408397 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xhcxr\" (UniqueName: \"kubernetes.io/projected/ef81bd1a-d4db-4b18-a755-5fc31d09e4dd-kube-api-access-xhcxr\") pod \"keystone-bootstrap-6tt6m\" (UID: \"ef81bd1a-d4db-4b18-a755-5fc31d09e4dd\") " pod="openstack/keystone-bootstrap-6tt6m" Mar 10 19:10:10 crc kubenswrapper[4861]: I0310 19:10:10.408509 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ef81bd1a-d4db-4b18-a755-5fc31d09e4dd-combined-ca-bundle\") pod \"keystone-bootstrap-6tt6m\" (UID: \"ef81bd1a-d4db-4b18-a755-5fc31d09e4dd\") " 
pod="openstack/keystone-bootstrap-6tt6m" Mar 10 19:10:10 crc kubenswrapper[4861]: I0310 19:10:10.408559 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/ef81bd1a-d4db-4b18-a755-5fc31d09e4dd-credential-keys\") pod \"keystone-bootstrap-6tt6m\" (UID: \"ef81bd1a-d4db-4b18-a755-5fc31d09e4dd\") " pod="openstack/keystone-bootstrap-6tt6m" Mar 10 19:10:10 crc kubenswrapper[4861]: I0310 19:10:10.411284 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ef81bd1a-d4db-4b18-a755-5fc31d09e4dd-scripts\") pod \"keystone-bootstrap-6tt6m\" (UID: \"ef81bd1a-d4db-4b18-a755-5fc31d09e4dd\") " pod="openstack/keystone-bootstrap-6tt6m" Mar 10 19:10:10 crc kubenswrapper[4861]: I0310 19:10:10.411592 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/ef81bd1a-d4db-4b18-a755-5fc31d09e4dd-credential-keys\") pod \"keystone-bootstrap-6tt6m\" (UID: \"ef81bd1a-d4db-4b18-a755-5fc31d09e4dd\") " pod="openstack/keystone-bootstrap-6tt6m" Mar 10 19:10:10 crc kubenswrapper[4861]: I0310 19:10:10.412018 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/ef81bd1a-d4db-4b18-a755-5fc31d09e4dd-fernet-keys\") pod \"keystone-bootstrap-6tt6m\" (UID: \"ef81bd1a-d4db-4b18-a755-5fc31d09e4dd\") " pod="openstack/keystone-bootstrap-6tt6m" Mar 10 19:10:10 crc kubenswrapper[4861]: I0310 19:10:10.412274 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ef81bd1a-d4db-4b18-a755-5fc31d09e4dd-config-data\") pod \"keystone-bootstrap-6tt6m\" (UID: \"ef81bd1a-d4db-4b18-a755-5fc31d09e4dd\") " pod="openstack/keystone-bootstrap-6tt6m" Mar 10 19:10:10 crc kubenswrapper[4861]: I0310 19:10:10.414324 4861 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ef81bd1a-d4db-4b18-a755-5fc31d09e4dd-combined-ca-bundle\") pod \"keystone-bootstrap-6tt6m\" (UID: \"ef81bd1a-d4db-4b18-a755-5fc31d09e4dd\") " pod="openstack/keystone-bootstrap-6tt6m" Mar 10 19:10:10 crc kubenswrapper[4861]: I0310 19:10:10.423076 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xhcxr\" (UniqueName: \"kubernetes.io/projected/ef81bd1a-d4db-4b18-a755-5fc31d09e4dd-kube-api-access-xhcxr\") pod \"keystone-bootstrap-6tt6m\" (UID: \"ef81bd1a-d4db-4b18-a755-5fc31d09e4dd\") " pod="openstack/keystone-bootstrap-6tt6m" Mar 10 19:10:10 crc kubenswrapper[4861]: I0310 19:10:10.480887 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Mar 10 19:10:10 crc kubenswrapper[4861]: I0310 19:10:10.523353 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-6tt6m" Mar 10 19:10:10 crc kubenswrapper[4861]: I0310 19:10:10.969882 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="549d0570-3b33-4115-965c-155129434e7b" path="/var/lib/kubelet/pods/549d0570-3b33-4115-965c-155129434e7b/volumes" Mar 10 19:10:10 crc kubenswrapper[4861]: I0310 19:10:10.971100 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5686eba7-5925-4ca9-9530-5f5a152c0eb0" path="/var/lib/kubelet/pods/5686eba7-5925-4ca9-9530-5f5a152c0eb0/volumes" Mar 10 19:10:12 crc kubenswrapper[4861]: I0310 19:10:12.916887 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-ccd7c9f8f-vd6rh" Mar 10 19:10:13 crc kubenswrapper[4861]: I0310 19:10:13.030835 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f7dd995-bdz4g"] Mar 10 19:10:13 crc kubenswrapper[4861]: I0310 19:10:13.031645 4861 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openstack/dnsmasq-dns-675f7dd995-bdz4g" podUID="f3cf759d-7dce-473b-b790-ae9e344c2245" containerName="dnsmasq-dns" containerID="cri-o://ba81fb9f78de7159f39585f455b8f2a47d296bdae53d5967f444e268d0eb57e4" gracePeriod=10 Mar 10 19:10:14 crc kubenswrapper[4861]: I0310 19:10:14.056700 4861 generic.go:334] "Generic (PLEG): container finished" podID="f3cf759d-7dce-473b-b790-ae9e344c2245" containerID="ba81fb9f78de7159f39585f455b8f2a47d296bdae53d5967f444e268d0eb57e4" exitCode=0 Mar 10 19:10:14 crc kubenswrapper[4861]: I0310 19:10:14.056750 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f7dd995-bdz4g" event={"ID":"f3cf759d-7dce-473b-b790-ae9e344c2245","Type":"ContainerDied","Data":"ba81fb9f78de7159f39585f455b8f2a47d296bdae53d5967f444e268d0eb57e4"} Mar 10 19:10:14 crc kubenswrapper[4861]: I0310 19:10:14.745971 4861 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-675f7dd995-bdz4g" podUID="f3cf759d-7dce-473b-b790-ae9e344c2245" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.118:5353: connect: connection refused" Mar 10 19:10:15 crc kubenswrapper[4861]: I0310 19:10:15.973644 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Mar 10 19:10:16 crc kubenswrapper[4861]: I0310 19:10:16.056407 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5d0d0c3c-4090-4dcf-81cb-06d4d0a2cb64-logs\") pod \"5d0d0c3c-4090-4dcf-81cb-06d4d0a2cb64\" (UID: \"5d0d0c3c-4090-4dcf-81cb-06d4d0a2cb64\") " Mar 10 19:10:16 crc kubenswrapper[4861]: I0310 19:10:16.056490 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/5d0d0c3c-4090-4dcf-81cb-06d4d0a2cb64-httpd-run\") pod \"5d0d0c3c-4090-4dcf-81cb-06d4d0a2cb64\" (UID: \"5d0d0c3c-4090-4dcf-81cb-06d4d0a2cb64\") " Mar 10 19:10:16 crc kubenswrapper[4861]: I0310 19:10:16.056539 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5d0d0c3c-4090-4dcf-81cb-06d4d0a2cb64-scripts\") pod \"5d0d0c3c-4090-4dcf-81cb-06d4d0a2cb64\" (UID: \"5d0d0c3c-4090-4dcf-81cb-06d4d0a2cb64\") " Mar 10 19:10:16 crc kubenswrapper[4861]: I0310 19:10:16.056580 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5d0d0c3c-4090-4dcf-81cb-06d4d0a2cb64-config-data\") pod \"5d0d0c3c-4090-4dcf-81cb-06d4d0a2cb64\" (UID: \"5d0d0c3c-4090-4dcf-81cb-06d4d0a2cb64\") " Mar 10 19:10:16 crc kubenswrapper[4861]: I0310 19:10:16.056677 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"5d0d0c3c-4090-4dcf-81cb-06d4d0a2cb64\" (UID: \"5d0d0c3c-4090-4dcf-81cb-06d4d0a2cb64\") " Mar 10 19:10:16 crc kubenswrapper[4861]: I0310 19:10:16.056786 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7vmm7\" (UniqueName: 
\"kubernetes.io/projected/5d0d0c3c-4090-4dcf-81cb-06d4d0a2cb64-kube-api-access-7vmm7\") pod \"5d0d0c3c-4090-4dcf-81cb-06d4d0a2cb64\" (UID: \"5d0d0c3c-4090-4dcf-81cb-06d4d0a2cb64\") " Mar 10 19:10:16 crc kubenswrapper[4861]: I0310 19:10:16.056844 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5d0d0c3c-4090-4dcf-81cb-06d4d0a2cb64-combined-ca-bundle\") pod \"5d0d0c3c-4090-4dcf-81cb-06d4d0a2cb64\" (UID: \"5d0d0c3c-4090-4dcf-81cb-06d4d0a2cb64\") " Mar 10 19:10:16 crc kubenswrapper[4861]: I0310 19:10:16.056900 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5d0d0c3c-4090-4dcf-81cb-06d4d0a2cb64-public-tls-certs\") pod \"5d0d0c3c-4090-4dcf-81cb-06d4d0a2cb64\" (UID: \"5d0d0c3c-4090-4dcf-81cb-06d4d0a2cb64\") " Mar 10 19:10:16 crc kubenswrapper[4861]: I0310 19:10:16.059549 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5d0d0c3c-4090-4dcf-81cb-06d4d0a2cb64-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "5d0d0c3c-4090-4dcf-81cb-06d4d0a2cb64" (UID: "5d0d0c3c-4090-4dcf-81cb-06d4d0a2cb64"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 10 19:10:16 crc kubenswrapper[4861]: I0310 19:10:16.059823 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5d0d0c3c-4090-4dcf-81cb-06d4d0a2cb64-logs" (OuterVolumeSpecName: "logs") pod "5d0d0c3c-4090-4dcf-81cb-06d4d0a2cb64" (UID: "5d0d0c3c-4090-4dcf-81cb-06d4d0a2cb64"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 10 19:10:16 crc kubenswrapper[4861]: I0310 19:10:16.066293 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5d0d0c3c-4090-4dcf-81cb-06d4d0a2cb64-kube-api-access-7vmm7" (OuterVolumeSpecName: "kube-api-access-7vmm7") pod "5d0d0c3c-4090-4dcf-81cb-06d4d0a2cb64" (UID: "5d0d0c3c-4090-4dcf-81cb-06d4d0a2cb64"). InnerVolumeSpecName "kube-api-access-7vmm7". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 10 19:10:16 crc kubenswrapper[4861]: I0310 19:10:16.068832 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage09-crc" (OuterVolumeSpecName: "glance") pod "5d0d0c3c-4090-4dcf-81cb-06d4d0a2cb64" (UID: "5d0d0c3c-4090-4dcf-81cb-06d4d0a2cb64"). InnerVolumeSpecName "local-storage09-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Mar 10 19:10:16 crc kubenswrapper[4861]: I0310 19:10:16.069701 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5d0d0c3c-4090-4dcf-81cb-06d4d0a2cb64-scripts" (OuterVolumeSpecName: "scripts") pod "5d0d0c3c-4090-4dcf-81cb-06d4d0a2cb64" (UID: "5d0d0c3c-4090-4dcf-81cb-06d4d0a2cb64"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 10 19:10:16 crc kubenswrapper[4861]: I0310 19:10:16.084475 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"5d0d0c3c-4090-4dcf-81cb-06d4d0a2cb64","Type":"ContainerDied","Data":"6dee7d4d067dfe3c4d0bd3285d106d49d570a948243235aa3beb96022861c9a1"} Mar 10 19:10:16 crc kubenswrapper[4861]: I0310 19:10:16.084573 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Mar 10 19:10:16 crc kubenswrapper[4861]: I0310 19:10:16.092909 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5d0d0c3c-4090-4dcf-81cb-06d4d0a2cb64-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5d0d0c3c-4090-4dcf-81cb-06d4d0a2cb64" (UID: "5d0d0c3c-4090-4dcf-81cb-06d4d0a2cb64"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 10 19:10:16 crc kubenswrapper[4861]: I0310 19:10:16.129474 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5d0d0c3c-4090-4dcf-81cb-06d4d0a2cb64-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "5d0d0c3c-4090-4dcf-81cb-06d4d0a2cb64" (UID: "5d0d0c3c-4090-4dcf-81cb-06d4d0a2cb64"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 10 19:10:16 crc kubenswrapper[4861]: I0310 19:10:16.134408 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5d0d0c3c-4090-4dcf-81cb-06d4d0a2cb64-config-data" (OuterVolumeSpecName: "config-data") pod "5d0d0c3c-4090-4dcf-81cb-06d4d0a2cb64" (UID: "5d0d0c3c-4090-4dcf-81cb-06d4d0a2cb64"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 10 19:10:16 crc kubenswrapper[4861]: I0310 19:10:16.160736 4861 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/5d0d0c3c-4090-4dcf-81cb-06d4d0a2cb64-httpd-run\") on node \"crc\" DevicePath \"\"" Mar 10 19:10:16 crc kubenswrapper[4861]: I0310 19:10:16.160765 4861 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5d0d0c3c-4090-4dcf-81cb-06d4d0a2cb64-scripts\") on node \"crc\" DevicePath \"\"" Mar 10 19:10:16 crc kubenswrapper[4861]: I0310 19:10:16.160775 4861 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5d0d0c3c-4090-4dcf-81cb-06d4d0a2cb64-config-data\") on node \"crc\" DevicePath \"\"" Mar 10 19:10:16 crc kubenswrapper[4861]: I0310 19:10:16.160805 4861 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" " Mar 10 19:10:16 crc kubenswrapper[4861]: I0310 19:10:16.160816 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7vmm7\" (UniqueName: \"kubernetes.io/projected/5d0d0c3c-4090-4dcf-81cb-06d4d0a2cb64-kube-api-access-7vmm7\") on node \"crc\" DevicePath \"\"" Mar 10 19:10:16 crc kubenswrapper[4861]: I0310 19:10:16.160825 4861 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5d0d0c3c-4090-4dcf-81cb-06d4d0a2cb64-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 10 19:10:16 crc kubenswrapper[4861]: I0310 19:10:16.160833 4861 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5d0d0c3c-4090-4dcf-81cb-06d4d0a2cb64-public-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 10 19:10:16 crc kubenswrapper[4861]: I0310 19:10:16.160841 4861 reconciler_common.go:293] "Volume 
detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5d0d0c3c-4090-4dcf-81cb-06d4d0a2cb64-logs\") on node \"crc\" DevicePath \"\"" Mar 10 19:10:16 crc kubenswrapper[4861]: I0310 19:10:16.178672 4861 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage09-crc" (UniqueName: "kubernetes.io/local-volume/local-storage09-crc") on node "crc" Mar 10 19:10:16 crc kubenswrapper[4861]: I0310 19:10:16.262270 4861 reconciler_common.go:293] "Volume detached for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" DevicePath \"\"" Mar 10 19:10:16 crc kubenswrapper[4861]: I0310 19:10:16.433519 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Mar 10 19:10:16 crc kubenswrapper[4861]: I0310 19:10:16.443661 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Mar 10 19:10:16 crc kubenswrapper[4861]: I0310 19:10:16.456111 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Mar 10 19:10:16 crc kubenswrapper[4861]: E0310 19:10:16.456415 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5d0d0c3c-4090-4dcf-81cb-06d4d0a2cb64" containerName="glance-httpd" Mar 10 19:10:16 crc kubenswrapper[4861]: I0310 19:10:16.456433 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="5d0d0c3c-4090-4dcf-81cb-06d4d0a2cb64" containerName="glance-httpd" Mar 10 19:10:16 crc kubenswrapper[4861]: E0310 19:10:16.456445 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5d0d0c3c-4090-4dcf-81cb-06d4d0a2cb64" containerName="glance-log" Mar 10 19:10:16 crc kubenswrapper[4861]: I0310 19:10:16.456452 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="5d0d0c3c-4090-4dcf-81cb-06d4d0a2cb64" containerName="glance-log" Mar 10 19:10:16 crc kubenswrapper[4861]: I0310 19:10:16.456612 4861 memory_manager.go:354] "RemoveStaleState 
removing state" podUID="5d0d0c3c-4090-4dcf-81cb-06d4d0a2cb64" containerName="glance-log" Mar 10 19:10:16 crc kubenswrapper[4861]: I0310 19:10:16.456641 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="5d0d0c3c-4090-4dcf-81cb-06d4d0a2cb64" containerName="glance-httpd" Mar 10 19:10:16 crc kubenswrapper[4861]: I0310 19:10:16.457424 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Mar 10 19:10:16 crc kubenswrapper[4861]: I0310 19:10:16.463983 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Mar 10 19:10:16 crc kubenswrapper[4861]: I0310 19:10:16.464174 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Mar 10 19:10:16 crc kubenswrapper[4861]: I0310 19:10:16.468228 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Mar 10 19:10:16 crc kubenswrapper[4861]: I0310 19:10:16.566633 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c16ced7d-2645-42db-abc8-266267b6de4c-scripts\") pod \"glance-default-external-api-0\" (UID: \"c16ced7d-2645-42db-abc8-266267b6de4c\") " pod="openstack/glance-default-external-api-0" Mar 10 19:10:16 crc kubenswrapper[4861]: I0310 19:10:16.566951 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2gfp2\" (UniqueName: \"kubernetes.io/projected/c16ced7d-2645-42db-abc8-266267b6de4c-kube-api-access-2gfp2\") pod \"glance-default-external-api-0\" (UID: \"c16ced7d-2645-42db-abc8-266267b6de4c\") " pod="openstack/glance-default-external-api-0" Mar 10 19:10:16 crc kubenswrapper[4861]: I0310 19:10:16.566991 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" 
(UniqueName: \"kubernetes.io/secret/c16ced7d-2645-42db-abc8-266267b6de4c-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"c16ced7d-2645-42db-abc8-266267b6de4c\") " pod="openstack/glance-default-external-api-0" Mar 10 19:10:16 crc kubenswrapper[4861]: I0310 19:10:16.567034 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-external-api-0\" (UID: \"c16ced7d-2645-42db-abc8-266267b6de4c\") " pod="openstack/glance-default-external-api-0" Mar 10 19:10:16 crc kubenswrapper[4861]: I0310 19:10:16.567071 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c16ced7d-2645-42db-abc8-266267b6de4c-config-data\") pod \"glance-default-external-api-0\" (UID: \"c16ced7d-2645-42db-abc8-266267b6de4c\") " pod="openstack/glance-default-external-api-0" Mar 10 19:10:16 crc kubenswrapper[4861]: I0310 19:10:16.567111 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c16ced7d-2645-42db-abc8-266267b6de4c-logs\") pod \"glance-default-external-api-0\" (UID: \"c16ced7d-2645-42db-abc8-266267b6de4c\") " pod="openstack/glance-default-external-api-0" Mar 10 19:10:16 crc kubenswrapper[4861]: I0310 19:10:16.567173 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c16ced7d-2645-42db-abc8-266267b6de4c-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"c16ced7d-2645-42db-abc8-266267b6de4c\") " pod="openstack/glance-default-external-api-0" Mar 10 19:10:16 crc kubenswrapper[4861]: I0310 19:10:16.567199 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/c16ced7d-2645-42db-abc8-266267b6de4c-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"c16ced7d-2645-42db-abc8-266267b6de4c\") " pod="openstack/glance-default-external-api-0" Mar 10 19:10:16 crc kubenswrapper[4861]: E0310 19:10:16.592200 4861 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5d0d0c3c_4090_4dcf_81cb_06d4d0a2cb64.slice/crio-6dee7d4d067dfe3c4d0bd3285d106d49d570a948243235aa3beb96022861c9a1\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5d0d0c3c_4090_4dcf_81cb_06d4d0a2cb64.slice\": RecentStats: unable to find data in memory cache]" Mar 10 19:10:16 crc kubenswrapper[4861]: I0310 19:10:16.668441 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c16ced7d-2645-42db-abc8-266267b6de4c-scripts\") pod \"glance-default-external-api-0\" (UID: \"c16ced7d-2645-42db-abc8-266267b6de4c\") " pod="openstack/glance-default-external-api-0" Mar 10 19:10:16 crc kubenswrapper[4861]: I0310 19:10:16.668500 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2gfp2\" (UniqueName: \"kubernetes.io/projected/c16ced7d-2645-42db-abc8-266267b6de4c-kube-api-access-2gfp2\") pod \"glance-default-external-api-0\" (UID: \"c16ced7d-2645-42db-abc8-266267b6de4c\") " pod="openstack/glance-default-external-api-0" Mar 10 19:10:16 crc kubenswrapper[4861]: I0310 19:10:16.668559 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c16ced7d-2645-42db-abc8-266267b6de4c-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"c16ced7d-2645-42db-abc8-266267b6de4c\") " pod="openstack/glance-default-external-api-0" Mar 10 19:10:16 crc 
kubenswrapper[4861]: I0310 19:10:16.668957 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-external-api-0\" (UID: \"c16ced7d-2645-42db-abc8-266267b6de4c\") " pod="openstack/glance-default-external-api-0" Mar 10 19:10:16 crc kubenswrapper[4861]: I0310 19:10:16.669147 4861 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-external-api-0\" (UID: \"c16ced7d-2645-42db-abc8-266267b6de4c\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/glance-default-external-api-0" Mar 10 19:10:16 crc kubenswrapper[4861]: I0310 19:10:16.669329 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c16ced7d-2645-42db-abc8-266267b6de4c-config-data\") pod \"glance-default-external-api-0\" (UID: \"c16ced7d-2645-42db-abc8-266267b6de4c\") " pod="openstack/glance-default-external-api-0" Mar 10 19:10:16 crc kubenswrapper[4861]: I0310 19:10:16.669381 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c16ced7d-2645-42db-abc8-266267b6de4c-logs\") pod \"glance-default-external-api-0\" (UID: \"c16ced7d-2645-42db-abc8-266267b6de4c\") " pod="openstack/glance-default-external-api-0" Mar 10 19:10:16 crc kubenswrapper[4861]: I0310 19:10:16.669450 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c16ced7d-2645-42db-abc8-266267b6de4c-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"c16ced7d-2645-42db-abc8-266267b6de4c\") " pod="openstack/glance-default-external-api-0" Mar 10 19:10:16 crc kubenswrapper[4861]: I0310 19:10:16.669479 4861 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c16ced7d-2645-42db-abc8-266267b6de4c-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"c16ced7d-2645-42db-abc8-266267b6de4c\") " pod="openstack/glance-default-external-api-0" Mar 10 19:10:16 crc kubenswrapper[4861]: I0310 19:10:16.670288 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c16ced7d-2645-42db-abc8-266267b6de4c-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"c16ced7d-2645-42db-abc8-266267b6de4c\") " pod="openstack/glance-default-external-api-0" Mar 10 19:10:16 crc kubenswrapper[4861]: I0310 19:10:16.670458 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c16ced7d-2645-42db-abc8-266267b6de4c-logs\") pod \"glance-default-external-api-0\" (UID: \"c16ced7d-2645-42db-abc8-266267b6de4c\") " pod="openstack/glance-default-external-api-0" Mar 10 19:10:16 crc kubenswrapper[4861]: I0310 19:10:16.683331 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c16ced7d-2645-42db-abc8-266267b6de4c-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"c16ced7d-2645-42db-abc8-266267b6de4c\") " pod="openstack/glance-default-external-api-0" Mar 10 19:10:16 crc kubenswrapper[4861]: I0310 19:10:16.683365 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c16ced7d-2645-42db-abc8-266267b6de4c-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"c16ced7d-2645-42db-abc8-266267b6de4c\") " pod="openstack/glance-default-external-api-0" Mar 10 19:10:16 crc kubenswrapper[4861]: I0310 19:10:16.683732 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/c16ced7d-2645-42db-abc8-266267b6de4c-config-data\") pod \"glance-default-external-api-0\" (UID: \"c16ced7d-2645-42db-abc8-266267b6de4c\") " pod="openstack/glance-default-external-api-0" Mar 10 19:10:16 crc kubenswrapper[4861]: I0310 19:10:16.684084 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c16ced7d-2645-42db-abc8-266267b6de4c-scripts\") pod \"glance-default-external-api-0\" (UID: \"c16ced7d-2645-42db-abc8-266267b6de4c\") " pod="openstack/glance-default-external-api-0" Mar 10 19:10:16 crc kubenswrapper[4861]: I0310 19:10:16.686026 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2gfp2\" (UniqueName: \"kubernetes.io/projected/c16ced7d-2645-42db-abc8-266267b6de4c-kube-api-access-2gfp2\") pod \"glance-default-external-api-0\" (UID: \"c16ced7d-2645-42db-abc8-266267b6de4c\") " pod="openstack/glance-default-external-api-0" Mar 10 19:10:16 crc kubenswrapper[4861]: I0310 19:10:16.697642 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-external-api-0\" (UID: \"c16ced7d-2645-42db-abc8-266267b6de4c\") " pod="openstack/glance-default-external-api-0" Mar 10 19:10:16 crc kubenswrapper[4861]: I0310 19:10:16.772723 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Mar 10 19:10:16 crc kubenswrapper[4861]: I0310 19:10:16.981539 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5d0d0c3c-4090-4dcf-81cb-06d4d0a2cb64" path="/var/lib/kubelet/pods/5d0d0c3c-4090-4dcf-81cb-06d4d0a2cb64/volumes" Mar 10 19:10:21 crc kubenswrapper[4861]: I0310 19:10:21.991776 4861 patch_prober.go:28] interesting pod/machine-config-daemon-qttbr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 10 19:10:21 crc kubenswrapper[4861]: I0310 19:10:21.992466 4861 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qttbr" podUID="771189c2-452d-4204-a0b7-abfe9ba62bd0" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 10 19:10:21 crc kubenswrapper[4861]: I0310 19:10:21.992523 4861 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-qttbr" Mar 10 19:10:21 crc kubenswrapper[4861]: I0310 19:10:21.993423 4861 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"c5cf53ff0c1076e7b20b64dca8f896382ec5b206e350d4b3aabaf2ac26200351"} pod="openshift-machine-config-operator/machine-config-daemon-qttbr" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 10 19:10:21 crc kubenswrapper[4861]: I0310 19:10:21.993503 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-qttbr" podUID="771189c2-452d-4204-a0b7-abfe9ba62bd0" containerName="machine-config-daemon" 
containerID="cri-o://c5cf53ff0c1076e7b20b64dca8f896382ec5b206e350d4b3aabaf2ac26200351" gracePeriod=600 Mar 10 19:10:22 crc kubenswrapper[4861]: I0310 19:10:22.148206 4861 generic.go:334] "Generic (PLEG): container finished" podID="771189c2-452d-4204-a0b7-abfe9ba62bd0" containerID="c5cf53ff0c1076e7b20b64dca8f896382ec5b206e350d4b3aabaf2ac26200351" exitCode=0 Mar 10 19:10:22 crc kubenswrapper[4861]: I0310 19:10:22.148288 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qttbr" event={"ID":"771189c2-452d-4204-a0b7-abfe9ba62bd0","Type":"ContainerDied","Data":"c5cf53ff0c1076e7b20b64dca8f896382ec5b206e350d4b3aabaf2ac26200351"} Mar 10 19:10:24 crc kubenswrapper[4861]: E0310 19:10:24.676635 4861 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-ceilometer-central@sha256:57dfeeb1cb430ed73e6db471592cfb1a5f25d3d5c083f82d4a676f936978be81" Mar 10 19:10:24 crc kubenswrapper[4861]: E0310 19:10:24.677070 4861 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.io/podified-antelope-centos9/openstack-ceilometer-central@sha256:57dfeeb1cb430ed73e6db471592cfb1a5f25d3d5c083f82d4a676f936978be81,Command:[/bin/bash],Args:[-c 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n587hf4h8h9ch97h657h5fh656hb9h5ffh654h679h5bh57ch584h7ch67h596h74hc8h65fhcdh598h668h55ch98h567h589h549h666hf6h677q,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5bd7n,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 
/var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(a8b38f86-cca2-4d33-bcea-b93c80da6490): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Mar 10 19:10:24 crc kubenswrapper[4861]: I0310 19:10:24.685970 4861 scope.go:117] "RemoveContainer" containerID="a102b62364d264e275abf34cfeb0ad59ac4586f6d4a2a9cf6078842b3fe34dac" Mar 10 19:10:24 crc kubenswrapper[4861]: I0310 19:10:24.746241 4861 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-675f7dd995-bdz4g" podUID="f3cf759d-7dce-473b-b790-ae9e344c2245" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.118:5353: i/o timeout" Mar 10 19:10:24 crc kubenswrapper[4861]: I0310 19:10:24.812113 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-675f7dd995-bdz4g" Mar 10 19:10:24 crc kubenswrapper[4861]: I0310 19:10:24.930988 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f3cf759d-7dce-473b-b790-ae9e344c2245-ovsdbserver-sb\") pod \"f3cf759d-7dce-473b-b790-ae9e344c2245\" (UID: \"f3cf759d-7dce-473b-b790-ae9e344c2245\") " Mar 10 19:10:24 crc kubenswrapper[4861]: I0310 19:10:24.931099 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f3cf759d-7dce-473b-b790-ae9e344c2245-dns-svc\") pod \"f3cf759d-7dce-473b-b790-ae9e344c2245\" (UID: \"f3cf759d-7dce-473b-b790-ae9e344c2245\") " Mar 10 19:10:24 crc kubenswrapper[4861]: I0310 19:10:24.931137 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f3cf759d-7dce-473b-b790-ae9e344c2245-config\") pod \"f3cf759d-7dce-473b-b790-ae9e344c2245\" (UID: \"f3cf759d-7dce-473b-b790-ae9e344c2245\") " Mar 10 19:10:24 crc kubenswrapper[4861]: I0310 19:10:24.931294 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cgtkm\" (UniqueName: \"kubernetes.io/projected/f3cf759d-7dce-473b-b790-ae9e344c2245-kube-api-access-cgtkm\") pod \"f3cf759d-7dce-473b-b790-ae9e344c2245\" (UID: \"f3cf759d-7dce-473b-b790-ae9e344c2245\") " Mar 10 19:10:24 crc kubenswrapper[4861]: I0310 19:10:24.931334 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f3cf759d-7dce-473b-b790-ae9e344c2245-ovsdbserver-nb\") pod \"f3cf759d-7dce-473b-b790-ae9e344c2245\" (UID: \"f3cf759d-7dce-473b-b790-ae9e344c2245\") " Mar 10 19:10:24 crc kubenswrapper[4861]: I0310 19:10:24.945882 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/f3cf759d-7dce-473b-b790-ae9e344c2245-kube-api-access-cgtkm" (OuterVolumeSpecName: "kube-api-access-cgtkm") pod "f3cf759d-7dce-473b-b790-ae9e344c2245" (UID: "f3cf759d-7dce-473b-b790-ae9e344c2245"). InnerVolumeSpecName "kube-api-access-cgtkm". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 10 19:10:24 crc kubenswrapper[4861]: I0310 19:10:24.983962 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f3cf759d-7dce-473b-b790-ae9e344c2245-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "f3cf759d-7dce-473b-b790-ae9e344c2245" (UID: "f3cf759d-7dce-473b-b790-ae9e344c2245"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 10 19:10:24 crc kubenswrapper[4861]: I0310 19:10:24.986231 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f3cf759d-7dce-473b-b790-ae9e344c2245-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "f3cf759d-7dce-473b-b790-ae9e344c2245" (UID: "f3cf759d-7dce-473b-b790-ae9e344c2245"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 10 19:10:24 crc kubenswrapper[4861]: I0310 19:10:24.994169 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f3cf759d-7dce-473b-b790-ae9e344c2245-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "f3cf759d-7dce-473b-b790-ae9e344c2245" (UID: "f3cf759d-7dce-473b-b790-ae9e344c2245"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 10 19:10:24 crc kubenswrapper[4861]: I0310 19:10:24.998007 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f3cf759d-7dce-473b-b790-ae9e344c2245-config" (OuterVolumeSpecName: "config") pod "f3cf759d-7dce-473b-b790-ae9e344c2245" (UID: "f3cf759d-7dce-473b-b790-ae9e344c2245"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 10 19:10:25 crc kubenswrapper[4861]: I0310 19:10:25.034118 4861 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f3cf759d-7dce-473b-b790-ae9e344c2245-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Mar 10 19:10:25 crc kubenswrapper[4861]: I0310 19:10:25.034143 4861 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f3cf759d-7dce-473b-b790-ae9e344c2245-dns-svc\") on node \"crc\" DevicePath \"\"" Mar 10 19:10:25 crc kubenswrapper[4861]: I0310 19:10:25.034153 4861 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f3cf759d-7dce-473b-b790-ae9e344c2245-config\") on node \"crc\" DevicePath \"\"" Mar 10 19:10:25 crc kubenswrapper[4861]: I0310 19:10:25.034162 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cgtkm\" (UniqueName: \"kubernetes.io/projected/f3cf759d-7dce-473b-b790-ae9e344c2245-kube-api-access-cgtkm\") on node \"crc\" DevicePath \"\"" Mar 10 19:10:25 crc kubenswrapper[4861]: I0310 19:10:25.034172 4861 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f3cf759d-7dce-473b-b790-ae9e344c2245-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Mar 10 19:10:25 crc kubenswrapper[4861]: I0310 19:10:25.193619 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f7dd995-bdz4g" event={"ID":"f3cf759d-7dce-473b-b790-ae9e344c2245","Type":"ContainerDied","Data":"e131c30fec362d60a9a318de68170014a5e1b763eaf17decb87081cd0e783c25"} Mar 10 19:10:25 crc kubenswrapper[4861]: I0310 19:10:25.193731 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-675f7dd995-bdz4g" Mar 10 19:10:25 crc kubenswrapper[4861]: I0310 19:10:25.233369 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f7dd995-bdz4g"] Mar 10 19:10:25 crc kubenswrapper[4861]: I0310 19:10:25.241322 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-675f7dd995-bdz4g"] Mar 10 19:10:26 crc kubenswrapper[4861]: E0310 19:10:26.040080 4861 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-cinder-api@sha256:44ed1ca84e17bd0f004cfbdc3c0827d767daba52abb8e83e076bfd0e6c02f838" Mar 10 19:10:26 crc kubenswrapper[4861]: E0310 19:10:26.041049 4861 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cinder-db-sync,Image:quay.io/podified-antelope-centos9/openstack-cinder-api@sha256:44ed1ca84e17bd0f004cfbdc3c0827d767daba52abb8e83e076bfd0e6c02f838,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_set_configs && 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-machine-id,ReadOnly:true,MountPath:/etc/machine-id,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/merged,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/cinder/cinder.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nfm6h,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin
:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-db-sync-2x6cv_openstack(6369ade3-a8af-44b7-94be-736523f99512): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Mar 10 19:10:26 crc kubenswrapper[4861]: I0310 19:10:26.042136 4861 scope.go:117] "RemoveContainer" containerID="afbae3dfa5e9fe5b9a6c440b9faa889a8f21c33d29a38bdb3f3701c6c0809668" Mar 10 19:10:26 crc kubenswrapper[4861]: E0310 19:10:26.042312 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/cinder-db-sync-2x6cv" podUID="6369ade3-a8af-44b7-94be-736523f99512" Mar 10 19:10:26 crc kubenswrapper[4861]: I0310 19:10:26.180733 4861 scope.go:117] "RemoveContainer" containerID="0e2ac1c24ac49b6216bf3b991f7b14a2adb0dc552c1e50444d805d506c0951ac" Mar 10 19:10:26 crc kubenswrapper[4861]: I0310 19:10:26.212611 4861 scope.go:117] "RemoveContainer" containerID="ad21d8f400c7b7525b1502f8f827dc33dea8661c3e4157be937c7ca43fd01014" Mar 10 19:10:26 crc kubenswrapper[4861]: I0310 19:10:26.219825 4861 generic.go:334] "Generic (PLEG): container finished" podID="33d24034-d5c6-486f-a637-23724e8b5225" containerID="eed081b92c9466dcc219501d89f20fbe1648f53ae2509508a4d353b88a2bc628" exitCode=0 Mar 10 19:10:26 crc kubenswrapper[4861]: I0310 19:10:26.219891 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-snwft" event={"ID":"33d24034-d5c6-486f-a637-23724e8b5225","Type":"ContainerDied","Data":"eed081b92c9466dcc219501d89f20fbe1648f53ae2509508a4d353b88a2bc628"} Mar 10 19:10:26 crc kubenswrapper[4861]: E0310 19:10:26.239248 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"cinder-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-cinder-api@sha256:44ed1ca84e17bd0f004cfbdc3c0827d767daba52abb8e83e076bfd0e6c02f838\\\"\"" pod="openstack/cinder-db-sync-2x6cv" podUID="6369ade3-a8af-44b7-94be-736523f99512" Mar 10 19:10:26 crc kubenswrapper[4861]: I0310 19:10:26.272545 4861 scope.go:117] "RemoveContainer" containerID="ba81fb9f78de7159f39585f455b8f2a47d296bdae53d5967f444e268d0eb57e4" Mar 10 19:10:26 crc kubenswrapper[4861]: I0310 19:10:26.292454 4861 scope.go:117] "RemoveContainer" containerID="5503e1e395d471251b6899856a4db0244076b1b7ddb519e38ef971737aa6ee17" Mar 10 19:10:26 crc kubenswrapper[4861]: I0310 19:10:26.613275 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Mar 10 19:10:26 crc kubenswrapper[4861]: I0310 19:10:26.620154 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-6tt6m"] Mar 10 19:10:26 crc kubenswrapper[4861]: W0310 19:10:26.699609 4861 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod256b5814_23a7_4f27_8c86_544ec5290a5d.slice/crio-85b9d7e9853620343a5daef3f5bf4d32ae84808e9e27f0e886e858e57a7a352d WatchSource:0}: Error finding container 85b9d7e9853620343a5daef3f5bf4d32ae84808e9e27f0e886e858e57a7a352d: Status 404 returned error can't find the container with id 85b9d7e9853620343a5daef3f5bf4d32ae84808e9e27f0e886e858e57a7a352d Mar 10 19:10:26 crc kubenswrapper[4861]: I0310 19:10:26.969068 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f3cf759d-7dce-473b-b790-ae9e344c2245" path="/var/lib/kubelet/pods/f3cf759d-7dce-473b-b790-ae9e344c2245/volumes" Mar 10 19:10:27 crc kubenswrapper[4861]: I0310 19:10:27.244625 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" 
event={"ID":"256b5814-23a7-4f27-8c86-544ec5290a5d","Type":"ContainerStarted","Data":"85b9d7e9853620343a5daef3f5bf4d32ae84808e9e27f0e886e858e57a7a352d"} Mar 10 19:10:27 crc kubenswrapper[4861]: I0310 19:10:27.247832 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a8b38f86-cca2-4d33-bcea-b93c80da6490","Type":"ContainerStarted","Data":"c1cfd8c2d037d897aa7c7b0d467efae470cd2b432b0be43688aae6e50fc8242f"} Mar 10 19:10:27 crc kubenswrapper[4861]: I0310 19:10:27.256467 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-6tt6m" event={"ID":"ef81bd1a-d4db-4b18-a755-5fc31d09e4dd","Type":"ContainerStarted","Data":"f7918ba51ca439a75a3ced40d27c3fb45f087302e7b2a88425b3296924759c81"} Mar 10 19:10:27 crc kubenswrapper[4861]: I0310 19:10:27.256623 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-6tt6m" event={"ID":"ef81bd1a-d4db-4b18-a755-5fc31d09e4dd","Type":"ContainerStarted","Data":"59969aed6a53c63c8c2c8d54072dce5542aba953f5468ac66d3eeb3423694dd8"} Mar 10 19:10:27 crc kubenswrapper[4861]: I0310 19:10:27.272700 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qttbr" event={"ID":"771189c2-452d-4204-a0b7-abfe9ba62bd0","Type":"ContainerStarted","Data":"af91ea5d3fbd1ec239d0d9d5246031cccff13cb031bdcbb0edc5d3cf4aa77e7d"} Mar 10 19:10:27 crc kubenswrapper[4861]: I0310 19:10:27.279696 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-7g95t" event={"ID":"6aa1372d-7e37-4d43-a96c-08fce6a5eaa1","Type":"ContainerStarted","Data":"058c0eed05761c6e3e065155019887bcb381e6a66a552252397daedb1770da87"} Mar 10 19:10:27 crc kubenswrapper[4861]: I0310 19:10:27.282622 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-6tt6m" podStartSLOduration=17.282607827 podStartE2EDuration="17.282607827s" podCreationTimestamp="2026-03-10 
19:10:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-10 19:10:27.279694936 +0000 UTC m=+1371.043130886" watchObservedRunningTime="2026-03-10 19:10:27.282607827 +0000 UTC m=+1371.046043787" Mar 10 19:10:27 crc kubenswrapper[4861]: I0310 19:10:27.287578 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-j5zv5" event={"ID":"7068adc7-5930-4999-99ec-eb8ced501cd2","Type":"ContainerStarted","Data":"9b2c6159ab007b38b614fd4c4579cf1344df9f222609b84392777278da63caa0"} Mar 10 19:10:27 crc kubenswrapper[4861]: I0310 19:10:27.318956 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-sync-j5zv5" podStartSLOduration=2.988767784 podStartE2EDuration="25.318922746s" podCreationTimestamp="2026-03-10 19:10:02 +0000 UTC" firstStartedPulling="2026-03-10 19:10:03.726953757 +0000 UTC m=+1347.490389727" lastFinishedPulling="2026-03-10 19:10:26.057108719 +0000 UTC m=+1369.820544689" observedRunningTime="2026-03-10 19:10:27.317232059 +0000 UTC m=+1371.080668029" watchObservedRunningTime="2026-03-10 19:10:27.318922746 +0000 UTC m=+1371.082358726" Mar 10 19:10:27 crc kubenswrapper[4861]: I0310 19:10:27.342127 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-sync-7g95t" podStartSLOduration=4.01779234 podStartE2EDuration="25.342106711s" podCreationTimestamp="2026-03-10 19:10:02 +0000 UTC" firstStartedPulling="2026-03-10 19:10:03.372614637 +0000 UTC m=+1347.136050597" lastFinishedPulling="2026-03-10 19:10:24.696928998 +0000 UTC m=+1368.460364968" observedRunningTime="2026-03-10 19:10:27.337753779 +0000 UTC m=+1371.101189739" watchObservedRunningTime="2026-03-10 19:10:27.342106711 +0000 UTC m=+1371.105542691" Mar 10 19:10:27 crc kubenswrapper[4861]: I0310 19:10:27.453065 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/glance-default-external-api-0"] Mar 10 19:10:27 crc kubenswrapper[4861]: I0310 19:10:27.548469 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-snwft" Mar 10 19:10:27 crc kubenswrapper[4861]: I0310 19:10:27.600261 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-df7sw\" (UniqueName: \"kubernetes.io/projected/33d24034-d5c6-486f-a637-23724e8b5225-kube-api-access-df7sw\") pod \"33d24034-d5c6-486f-a637-23724e8b5225\" (UID: \"33d24034-d5c6-486f-a637-23724e8b5225\") " Mar 10 19:10:27 crc kubenswrapper[4861]: I0310 19:10:27.600555 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/33d24034-d5c6-486f-a637-23724e8b5225-config\") pod \"33d24034-d5c6-486f-a637-23724e8b5225\" (UID: \"33d24034-d5c6-486f-a637-23724e8b5225\") " Mar 10 19:10:27 crc kubenswrapper[4861]: I0310 19:10:27.600779 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/33d24034-d5c6-486f-a637-23724e8b5225-combined-ca-bundle\") pod \"33d24034-d5c6-486f-a637-23724e8b5225\" (UID: \"33d24034-d5c6-486f-a637-23724e8b5225\") " Mar 10 19:10:27 crc kubenswrapper[4861]: I0310 19:10:27.605947 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/33d24034-d5c6-486f-a637-23724e8b5225-kube-api-access-df7sw" (OuterVolumeSpecName: "kube-api-access-df7sw") pod "33d24034-d5c6-486f-a637-23724e8b5225" (UID: "33d24034-d5c6-486f-a637-23724e8b5225"). InnerVolumeSpecName "kube-api-access-df7sw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 10 19:10:27 crc kubenswrapper[4861]: I0310 19:10:27.630978 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/33d24034-d5c6-486f-a637-23724e8b5225-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "33d24034-d5c6-486f-a637-23724e8b5225" (UID: "33d24034-d5c6-486f-a637-23724e8b5225"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 10 19:10:27 crc kubenswrapper[4861]: I0310 19:10:27.634799 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/33d24034-d5c6-486f-a637-23724e8b5225-config" (OuterVolumeSpecName: "config") pod "33d24034-d5c6-486f-a637-23724e8b5225" (UID: "33d24034-d5c6-486f-a637-23724e8b5225"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 10 19:10:27 crc kubenswrapper[4861]: I0310 19:10:27.703558 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-df7sw\" (UniqueName: \"kubernetes.io/projected/33d24034-d5c6-486f-a637-23724e8b5225-kube-api-access-df7sw\") on node \"crc\" DevicePath \"\"" Mar 10 19:10:27 crc kubenswrapper[4861]: I0310 19:10:27.703594 4861 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/33d24034-d5c6-486f-a637-23724e8b5225-config\") on node \"crc\" DevicePath \"\"" Mar 10 19:10:27 crc kubenswrapper[4861]: I0310 19:10:27.703605 4861 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/33d24034-d5c6-486f-a637-23724e8b5225-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 10 19:10:28 crc kubenswrapper[4861]: I0310 19:10:28.300797 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" 
event={"ID":"256b5814-23a7-4f27-8c86-544ec5290a5d","Type":"ContainerStarted","Data":"b77e6c457c419260599ed14561da4047f805348e89359001743a2899ad1e13a0"} Mar 10 19:10:28 crc kubenswrapper[4861]: I0310 19:10:28.301205 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"256b5814-23a7-4f27-8c86-544ec5290a5d","Type":"ContainerStarted","Data":"050c2706b3b070851fe6943bb2b0af35e18d175f88f9f0b0cad25c40580529e2"} Mar 10 19:10:28 crc kubenswrapper[4861]: I0310 19:10:28.307593 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"c16ced7d-2645-42db-abc8-266267b6de4c","Type":"ContainerStarted","Data":"eba307c015ebf3b6494ca54b629a1ceb28eb5186989480dc651d2c2ed68bfb9e"} Mar 10 19:10:28 crc kubenswrapper[4861]: I0310 19:10:28.307651 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"c16ced7d-2645-42db-abc8-266267b6de4c","Type":"ContainerStarted","Data":"f21bfabba3e1f521f8d6b5e0d369f438b7bcd0afd970a0a2b7d05bc1438f0a38"} Mar 10 19:10:28 crc kubenswrapper[4861]: I0310 19:10:28.311358 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-snwft" event={"ID":"33d24034-d5c6-486f-a637-23724e8b5225","Type":"ContainerDied","Data":"bfe94186227b0b9e8bc7a1cfe95e79e6ca9f193df9adf958d4c7dce4f1d7a079"} Mar 10 19:10:28 crc kubenswrapper[4861]: I0310 19:10:28.311398 4861 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bfe94186227b0b9e8bc7a1cfe95e79e6ca9f193df9adf958d4c7dce4f1d7a079" Mar 10 19:10:28 crc kubenswrapper[4861]: I0310 19:10:28.311474 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-snwft" Mar 10 19:10:28 crc kubenswrapper[4861]: I0310 19:10:28.325044 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=18.325022909 podStartE2EDuration="18.325022909s" podCreationTimestamp="2026-03-10 19:10:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-10 19:10:28.324087703 +0000 UTC m=+1372.087523673" watchObservedRunningTime="2026-03-10 19:10:28.325022909 +0000 UTC m=+1372.088458869" Mar 10 19:10:28 crc kubenswrapper[4861]: I0310 19:10:28.496090 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7859c7799c-v4js4"] Mar 10 19:10:28 crc kubenswrapper[4861]: E0310 19:10:28.497022 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f3cf759d-7dce-473b-b790-ae9e344c2245" containerName="init" Mar 10 19:10:28 crc kubenswrapper[4861]: I0310 19:10:28.497041 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="f3cf759d-7dce-473b-b790-ae9e344c2245" containerName="init" Mar 10 19:10:28 crc kubenswrapper[4861]: E0310 19:10:28.497052 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="33d24034-d5c6-486f-a637-23724e8b5225" containerName="neutron-db-sync" Mar 10 19:10:28 crc kubenswrapper[4861]: I0310 19:10:28.497059 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="33d24034-d5c6-486f-a637-23724e8b5225" containerName="neutron-db-sync" Mar 10 19:10:28 crc kubenswrapper[4861]: E0310 19:10:28.497072 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f3cf759d-7dce-473b-b790-ae9e344c2245" containerName="dnsmasq-dns" Mar 10 19:10:28 crc kubenswrapper[4861]: I0310 19:10:28.497079 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="f3cf759d-7dce-473b-b790-ae9e344c2245" containerName="dnsmasq-dns" Mar 10 19:10:28 crc kubenswrapper[4861]: 
I0310 19:10:28.497235 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="f3cf759d-7dce-473b-b790-ae9e344c2245" containerName="dnsmasq-dns" Mar 10 19:10:28 crc kubenswrapper[4861]: I0310 19:10:28.497251 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="33d24034-d5c6-486f-a637-23724e8b5225" containerName="neutron-db-sync" Mar 10 19:10:28 crc kubenswrapper[4861]: I0310 19:10:28.498146 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7859c7799c-v4js4" Mar 10 19:10:28 crc kubenswrapper[4861]: I0310 19:10:28.518208 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7859c7799c-v4js4"] Mar 10 19:10:28 crc kubenswrapper[4861]: I0310 19:10:28.582556 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-7b4d7bd5c6-xts5s"] Mar 10 19:10:28 crc kubenswrapper[4861]: I0310 19:10:28.583990 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-7b4d7bd5c6-xts5s" Mar 10 19:10:28 crc kubenswrapper[4861]: I0310 19:10:28.586482 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Mar 10 19:10:28 crc kubenswrapper[4861]: I0310 19:10:28.590258 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-phrmf" Mar 10 19:10:28 crc kubenswrapper[4861]: I0310 19:10:28.590299 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Mar 10 19:10:28 crc kubenswrapper[4861]: I0310 19:10:28.590457 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-ovndbs" Mar 10 19:10:28 crc kubenswrapper[4861]: I0310 19:10:28.594989 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-7b4d7bd5c6-xts5s"] Mar 10 19:10:28 crc kubenswrapper[4861]: I0310 19:10:28.638665 4861 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/42290f46-99bb-4386-a400-44483968dc69-ovndb-tls-certs\") pod \"neutron-7b4d7bd5c6-xts5s\" (UID: \"42290f46-99bb-4386-a400-44483968dc69\") " pod="openstack/neutron-7b4d7bd5c6-xts5s" Mar 10 19:10:28 crc kubenswrapper[4861]: I0310 19:10:28.638759 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/42290f46-99bb-4386-a400-44483968dc69-config\") pod \"neutron-7b4d7bd5c6-xts5s\" (UID: \"42290f46-99bb-4386-a400-44483968dc69\") " pod="openstack/neutron-7b4d7bd5c6-xts5s" Mar 10 19:10:28 crc kubenswrapper[4861]: I0310 19:10:28.638781 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ddb62402-c4f6-49ff-b0cc-f669a86f906d-dns-svc\") pod \"dnsmasq-dns-7859c7799c-v4js4\" (UID: \"ddb62402-c4f6-49ff-b0cc-f669a86f906d\") " pod="openstack/dnsmasq-dns-7859c7799c-v4js4" Mar 10 19:10:28 crc kubenswrapper[4861]: I0310 19:10:28.638800 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r56cr\" (UniqueName: \"kubernetes.io/projected/ddb62402-c4f6-49ff-b0cc-f669a86f906d-kube-api-access-r56cr\") pod \"dnsmasq-dns-7859c7799c-v4js4\" (UID: \"ddb62402-c4f6-49ff-b0cc-f669a86f906d\") " pod="openstack/dnsmasq-dns-7859c7799c-v4js4" Mar 10 19:10:28 crc kubenswrapper[4861]: I0310 19:10:28.638853 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nvtwn\" (UniqueName: \"kubernetes.io/projected/42290f46-99bb-4386-a400-44483968dc69-kube-api-access-nvtwn\") pod \"neutron-7b4d7bd5c6-xts5s\" (UID: \"42290f46-99bb-4386-a400-44483968dc69\") " pod="openstack/neutron-7b4d7bd5c6-xts5s" Mar 10 19:10:28 crc kubenswrapper[4861]: I0310 19:10:28.638897 4861 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ddb62402-c4f6-49ff-b0cc-f669a86f906d-ovsdbserver-sb\") pod \"dnsmasq-dns-7859c7799c-v4js4\" (UID: \"ddb62402-c4f6-49ff-b0cc-f669a86f906d\") " pod="openstack/dnsmasq-dns-7859c7799c-v4js4" Mar 10 19:10:28 crc kubenswrapper[4861]: I0310 19:10:28.638921 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ddb62402-c4f6-49ff-b0cc-f669a86f906d-dns-swift-storage-0\") pod \"dnsmasq-dns-7859c7799c-v4js4\" (UID: \"ddb62402-c4f6-49ff-b0cc-f669a86f906d\") " pod="openstack/dnsmasq-dns-7859c7799c-v4js4" Mar 10 19:10:28 crc kubenswrapper[4861]: I0310 19:10:28.638946 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ddb62402-c4f6-49ff-b0cc-f669a86f906d-ovsdbserver-nb\") pod \"dnsmasq-dns-7859c7799c-v4js4\" (UID: \"ddb62402-c4f6-49ff-b0cc-f669a86f906d\") " pod="openstack/dnsmasq-dns-7859c7799c-v4js4" Mar 10 19:10:28 crc kubenswrapper[4861]: I0310 19:10:28.638986 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/42290f46-99bb-4386-a400-44483968dc69-httpd-config\") pod \"neutron-7b4d7bd5c6-xts5s\" (UID: \"42290f46-99bb-4386-a400-44483968dc69\") " pod="openstack/neutron-7b4d7bd5c6-xts5s" Mar 10 19:10:28 crc kubenswrapper[4861]: I0310 19:10:28.639006 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ddb62402-c4f6-49ff-b0cc-f669a86f906d-config\") pod \"dnsmasq-dns-7859c7799c-v4js4\" (UID: \"ddb62402-c4f6-49ff-b0cc-f669a86f906d\") " pod="openstack/dnsmasq-dns-7859c7799c-v4js4" Mar 10 19:10:28 crc kubenswrapper[4861]: I0310 
19:10:28.639071 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/42290f46-99bb-4386-a400-44483968dc69-combined-ca-bundle\") pod \"neutron-7b4d7bd5c6-xts5s\" (UID: \"42290f46-99bb-4386-a400-44483968dc69\") " pod="openstack/neutron-7b4d7bd5c6-xts5s" Mar 10 19:10:28 crc kubenswrapper[4861]: I0310 19:10:28.740525 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/42290f46-99bb-4386-a400-44483968dc69-config\") pod \"neutron-7b4d7bd5c6-xts5s\" (UID: \"42290f46-99bb-4386-a400-44483968dc69\") " pod="openstack/neutron-7b4d7bd5c6-xts5s" Mar 10 19:10:28 crc kubenswrapper[4861]: I0310 19:10:28.740570 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ddb62402-c4f6-49ff-b0cc-f669a86f906d-dns-svc\") pod \"dnsmasq-dns-7859c7799c-v4js4\" (UID: \"ddb62402-c4f6-49ff-b0cc-f669a86f906d\") " pod="openstack/dnsmasq-dns-7859c7799c-v4js4" Mar 10 19:10:28 crc kubenswrapper[4861]: I0310 19:10:28.740594 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r56cr\" (UniqueName: \"kubernetes.io/projected/ddb62402-c4f6-49ff-b0cc-f669a86f906d-kube-api-access-r56cr\") pod \"dnsmasq-dns-7859c7799c-v4js4\" (UID: \"ddb62402-c4f6-49ff-b0cc-f669a86f906d\") " pod="openstack/dnsmasq-dns-7859c7799c-v4js4" Mar 10 19:10:28 crc kubenswrapper[4861]: I0310 19:10:28.740627 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nvtwn\" (UniqueName: \"kubernetes.io/projected/42290f46-99bb-4386-a400-44483968dc69-kube-api-access-nvtwn\") pod \"neutron-7b4d7bd5c6-xts5s\" (UID: \"42290f46-99bb-4386-a400-44483968dc69\") " pod="openstack/neutron-7b4d7bd5c6-xts5s" Mar 10 19:10:28 crc kubenswrapper[4861]: I0310 19:10:28.740659 4861 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ddb62402-c4f6-49ff-b0cc-f669a86f906d-ovsdbserver-sb\") pod \"dnsmasq-dns-7859c7799c-v4js4\" (UID: \"ddb62402-c4f6-49ff-b0cc-f669a86f906d\") " pod="openstack/dnsmasq-dns-7859c7799c-v4js4" Mar 10 19:10:28 crc kubenswrapper[4861]: I0310 19:10:28.740679 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ddb62402-c4f6-49ff-b0cc-f669a86f906d-dns-swift-storage-0\") pod \"dnsmasq-dns-7859c7799c-v4js4\" (UID: \"ddb62402-c4f6-49ff-b0cc-f669a86f906d\") " pod="openstack/dnsmasq-dns-7859c7799c-v4js4" Mar 10 19:10:28 crc kubenswrapper[4861]: I0310 19:10:28.740720 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ddb62402-c4f6-49ff-b0cc-f669a86f906d-ovsdbserver-nb\") pod \"dnsmasq-dns-7859c7799c-v4js4\" (UID: \"ddb62402-c4f6-49ff-b0cc-f669a86f906d\") " pod="openstack/dnsmasq-dns-7859c7799c-v4js4" Mar 10 19:10:28 crc kubenswrapper[4861]: I0310 19:10:28.740746 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/42290f46-99bb-4386-a400-44483968dc69-httpd-config\") pod \"neutron-7b4d7bd5c6-xts5s\" (UID: \"42290f46-99bb-4386-a400-44483968dc69\") " pod="openstack/neutron-7b4d7bd5c6-xts5s" Mar 10 19:10:28 crc kubenswrapper[4861]: I0310 19:10:28.740765 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ddb62402-c4f6-49ff-b0cc-f669a86f906d-config\") pod \"dnsmasq-dns-7859c7799c-v4js4\" (UID: \"ddb62402-c4f6-49ff-b0cc-f669a86f906d\") " pod="openstack/dnsmasq-dns-7859c7799c-v4js4" Mar 10 19:10:28 crc kubenswrapper[4861]: I0310 19:10:28.740810 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/42290f46-99bb-4386-a400-44483968dc69-combined-ca-bundle\") pod \"neutron-7b4d7bd5c6-xts5s\" (UID: \"42290f46-99bb-4386-a400-44483968dc69\") " pod="openstack/neutron-7b4d7bd5c6-xts5s" Mar 10 19:10:28 crc kubenswrapper[4861]: I0310 19:10:28.740835 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/42290f46-99bb-4386-a400-44483968dc69-ovndb-tls-certs\") pod \"neutron-7b4d7bd5c6-xts5s\" (UID: \"42290f46-99bb-4386-a400-44483968dc69\") " pod="openstack/neutron-7b4d7bd5c6-xts5s" Mar 10 19:10:28 crc kubenswrapper[4861]: I0310 19:10:28.741811 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ddb62402-c4f6-49ff-b0cc-f669a86f906d-ovsdbserver-sb\") pod \"dnsmasq-dns-7859c7799c-v4js4\" (UID: \"ddb62402-c4f6-49ff-b0cc-f669a86f906d\") " pod="openstack/dnsmasq-dns-7859c7799c-v4js4" Mar 10 19:10:28 crc kubenswrapper[4861]: I0310 19:10:28.742102 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ddb62402-c4f6-49ff-b0cc-f669a86f906d-config\") pod \"dnsmasq-dns-7859c7799c-v4js4\" (UID: \"ddb62402-c4f6-49ff-b0cc-f669a86f906d\") " pod="openstack/dnsmasq-dns-7859c7799c-v4js4" Mar 10 19:10:28 crc kubenswrapper[4861]: I0310 19:10:28.742127 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ddb62402-c4f6-49ff-b0cc-f669a86f906d-ovsdbserver-nb\") pod \"dnsmasq-dns-7859c7799c-v4js4\" (UID: \"ddb62402-c4f6-49ff-b0cc-f669a86f906d\") " pod="openstack/dnsmasq-dns-7859c7799c-v4js4" Mar 10 19:10:28 crc kubenswrapper[4861]: I0310 19:10:28.742723 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ddb62402-c4f6-49ff-b0cc-f669a86f906d-dns-swift-storage-0\") pod 
\"dnsmasq-dns-7859c7799c-v4js4\" (UID: \"ddb62402-c4f6-49ff-b0cc-f669a86f906d\") " pod="openstack/dnsmasq-dns-7859c7799c-v4js4" Mar 10 19:10:28 crc kubenswrapper[4861]: I0310 19:10:28.742780 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ddb62402-c4f6-49ff-b0cc-f669a86f906d-dns-svc\") pod \"dnsmasq-dns-7859c7799c-v4js4\" (UID: \"ddb62402-c4f6-49ff-b0cc-f669a86f906d\") " pod="openstack/dnsmasq-dns-7859c7799c-v4js4" Mar 10 19:10:28 crc kubenswrapper[4861]: I0310 19:10:28.746526 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/42290f46-99bb-4386-a400-44483968dc69-config\") pod \"neutron-7b4d7bd5c6-xts5s\" (UID: \"42290f46-99bb-4386-a400-44483968dc69\") " pod="openstack/neutron-7b4d7bd5c6-xts5s" Mar 10 19:10:28 crc kubenswrapper[4861]: I0310 19:10:28.746596 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/42290f46-99bb-4386-a400-44483968dc69-ovndb-tls-certs\") pod \"neutron-7b4d7bd5c6-xts5s\" (UID: \"42290f46-99bb-4386-a400-44483968dc69\") " pod="openstack/neutron-7b4d7bd5c6-xts5s" Mar 10 19:10:28 crc kubenswrapper[4861]: I0310 19:10:28.748292 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/42290f46-99bb-4386-a400-44483968dc69-combined-ca-bundle\") pod \"neutron-7b4d7bd5c6-xts5s\" (UID: \"42290f46-99bb-4386-a400-44483968dc69\") " pod="openstack/neutron-7b4d7bd5c6-xts5s" Mar 10 19:10:28 crc kubenswrapper[4861]: I0310 19:10:28.757908 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nvtwn\" (UniqueName: \"kubernetes.io/projected/42290f46-99bb-4386-a400-44483968dc69-kube-api-access-nvtwn\") pod \"neutron-7b4d7bd5c6-xts5s\" (UID: \"42290f46-99bb-4386-a400-44483968dc69\") " pod="openstack/neutron-7b4d7bd5c6-xts5s" Mar 10 
19:10:28 crc kubenswrapper[4861]: I0310 19:10:28.759589 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r56cr\" (UniqueName: \"kubernetes.io/projected/ddb62402-c4f6-49ff-b0cc-f669a86f906d-kube-api-access-r56cr\") pod \"dnsmasq-dns-7859c7799c-v4js4\" (UID: \"ddb62402-c4f6-49ff-b0cc-f669a86f906d\") " pod="openstack/dnsmasq-dns-7859c7799c-v4js4" Mar 10 19:10:28 crc kubenswrapper[4861]: I0310 19:10:28.760083 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/42290f46-99bb-4386-a400-44483968dc69-httpd-config\") pod \"neutron-7b4d7bd5c6-xts5s\" (UID: \"42290f46-99bb-4386-a400-44483968dc69\") " pod="openstack/neutron-7b4d7bd5c6-xts5s" Mar 10 19:10:28 crc kubenswrapper[4861]: I0310 19:10:28.849920 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7859c7799c-v4js4" Mar 10 19:10:28 crc kubenswrapper[4861]: I0310 19:10:28.924145 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-7b4d7bd5c6-xts5s" Mar 10 19:10:29 crc kubenswrapper[4861]: W0310 19:10:29.352653 4861 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podddb62402_c4f6_49ff_b0cc_f669a86f906d.slice/crio-db5b628d95a1cdc3f02c89a2be00608e47394ac32301da96400960720606d6eb WatchSource:0}: Error finding container db5b628d95a1cdc3f02c89a2be00608e47394ac32301da96400960720606d6eb: Status 404 returned error can't find the container with id db5b628d95a1cdc3f02c89a2be00608e47394ac32301da96400960720606d6eb Mar 10 19:10:29 crc kubenswrapper[4861]: I0310 19:10:29.353427 4861 generic.go:334] "Generic (PLEG): container finished" podID="6aa1372d-7e37-4d43-a96c-08fce6a5eaa1" containerID="058c0eed05761c6e3e065155019887bcb381e6a66a552252397daedb1770da87" exitCode=0 Mar 10 19:10:29 crc kubenswrapper[4861]: I0310 19:10:29.353487 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-7g95t" event={"ID":"6aa1372d-7e37-4d43-a96c-08fce6a5eaa1","Type":"ContainerDied","Data":"058c0eed05761c6e3e065155019887bcb381e6a66a552252397daedb1770da87"} Mar 10 19:10:29 crc kubenswrapper[4861]: I0310 19:10:29.358335 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7859c7799c-v4js4"] Mar 10 19:10:29 crc kubenswrapper[4861]: I0310 19:10:29.359156 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"c16ced7d-2645-42db-abc8-266267b6de4c","Type":"ContainerStarted","Data":"8724e9f95caf76f2229af962d390f418057ba1ded4a3ff3dce212acea3927f0b"} Mar 10 19:10:29 crc kubenswrapper[4861]: I0310 19:10:29.400698 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=13.400663284 podStartE2EDuration="13.400663284s" podCreationTimestamp="2026-03-10 19:10:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 
+0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-10 19:10:29.38505674 +0000 UTC m=+1373.148492710" watchObservedRunningTime="2026-03-10 19:10:29.400663284 +0000 UTC m=+1373.164099244" Mar 10 19:10:29 crc kubenswrapper[4861]: I0310 19:10:29.670691 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-7b4d7bd5c6-xts5s"] Mar 10 19:10:29 crc kubenswrapper[4861]: I0310 19:10:29.747758 4861 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-675f7dd995-bdz4g" podUID="f3cf759d-7dce-473b-b790-ae9e344c2245" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.118:5353: i/o timeout" Mar 10 19:10:30 crc kubenswrapper[4861]: I0310 19:10:30.367832 4861 generic.go:334] "Generic (PLEG): container finished" podID="ef81bd1a-d4db-4b18-a755-5fc31d09e4dd" containerID="f7918ba51ca439a75a3ced40d27c3fb45f087302e7b2a88425b3296924759c81" exitCode=0 Mar 10 19:10:30 crc kubenswrapper[4861]: I0310 19:10:30.367927 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-6tt6m" event={"ID":"ef81bd1a-d4db-4b18-a755-5fc31d09e4dd","Type":"ContainerDied","Data":"f7918ba51ca439a75a3ced40d27c3fb45f087302e7b2a88425b3296924759c81"} Mar 10 19:10:30 crc kubenswrapper[4861]: I0310 19:10:30.374575 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7b4d7bd5c6-xts5s" event={"ID":"42290f46-99bb-4386-a400-44483968dc69","Type":"ContainerStarted","Data":"3849223cfd7426542d297e2e79b208d945c05f157603004e0f0eeccb1471b9d5"} Mar 10 19:10:30 crc kubenswrapper[4861]: I0310 19:10:30.374607 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7b4d7bd5c6-xts5s" event={"ID":"42290f46-99bb-4386-a400-44483968dc69","Type":"ContainerStarted","Data":"f323d09ee12080395149a91a2306c31acfef39ef7680d3a9e0baf22599acdd4d"} Mar 10 19:10:30 crc kubenswrapper[4861]: I0310 19:10:30.374616 4861 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openstack/neutron-7b4d7bd5c6-xts5s" event={"ID":"42290f46-99bb-4386-a400-44483968dc69","Type":"ContainerStarted","Data":"bd944e9749e0b31b1a6b3ec341d4edc3efd740775779c4803788aa2ba919701f"} Mar 10 19:10:30 crc kubenswrapper[4861]: I0310 19:10:30.375151 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-7b4d7bd5c6-xts5s" Mar 10 19:10:30 crc kubenswrapper[4861]: I0310 19:10:30.379257 4861 generic.go:334] "Generic (PLEG): container finished" podID="ddb62402-c4f6-49ff-b0cc-f669a86f906d" containerID="689a41d4dd9d7db44a63648789b7d3c9db11e43686405b1ad5b3dd8cb4459553" exitCode=0 Mar 10 19:10:30 crc kubenswrapper[4861]: I0310 19:10:30.379327 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7859c7799c-v4js4" event={"ID":"ddb62402-c4f6-49ff-b0cc-f669a86f906d","Type":"ContainerDied","Data":"689a41d4dd9d7db44a63648789b7d3c9db11e43686405b1ad5b3dd8cb4459553"} Mar 10 19:10:30 crc kubenswrapper[4861]: I0310 19:10:30.379357 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7859c7799c-v4js4" event={"ID":"ddb62402-c4f6-49ff-b0cc-f669a86f906d","Type":"ContainerStarted","Data":"db5b628d95a1cdc3f02c89a2be00608e47394ac32301da96400960720606d6eb"} Mar 10 19:10:30 crc kubenswrapper[4861]: I0310 19:10:30.401366 4861 generic.go:334] "Generic (PLEG): container finished" podID="7068adc7-5930-4999-99ec-eb8ced501cd2" containerID="9b2c6159ab007b38b614fd4c4579cf1344df9f222609b84392777278da63caa0" exitCode=0 Mar 10 19:10:30 crc kubenswrapper[4861]: I0310 19:10:30.401544 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-j5zv5" event={"ID":"7068adc7-5930-4999-99ec-eb8ced501cd2","Type":"ContainerDied","Data":"9b2c6159ab007b38b614fd4c4579cf1344df9f222609b84392777278da63caa0"} Mar 10 19:10:30 crc kubenswrapper[4861]: I0310 19:10:30.443572 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-7b4d7bd5c6-xts5s" 
podStartSLOduration=2.443547478 podStartE2EDuration="2.443547478s" podCreationTimestamp="2026-03-10 19:10:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-10 19:10:30.436857502 +0000 UTC m=+1374.200293462" watchObservedRunningTime="2026-03-10 19:10:30.443547478 +0000 UTC m=+1374.206983438" Mar 10 19:10:30 crc kubenswrapper[4861]: I0310 19:10:30.482280 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Mar 10 19:10:30 crc kubenswrapper[4861]: I0310 19:10:30.482321 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Mar 10 19:10:30 crc kubenswrapper[4861]: I0310 19:10:30.585650 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Mar 10 19:10:30 crc kubenswrapper[4861]: I0310 19:10:30.592694 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Mar 10 19:10:31 crc kubenswrapper[4861]: I0310 19:10:31.421348 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Mar 10 19:10:31 crc kubenswrapper[4861]: I0310 19:10:31.421581 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Mar 10 19:10:31 crc kubenswrapper[4861]: I0310 19:10:31.647656 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-84bb44bb99-8g2vb"] Mar 10 19:10:31 crc kubenswrapper[4861]: I0310 19:10:31.659933 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-84bb44bb99-8g2vb" Mar 10 19:10:31 crc kubenswrapper[4861]: I0310 19:10:31.662132 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-84bb44bb99-8g2vb"] Mar 10 19:10:31 crc kubenswrapper[4861]: I0310 19:10:31.669947 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-internal-svc" Mar 10 19:10:31 crc kubenswrapper[4861]: I0310 19:10:31.670212 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-public-svc" Mar 10 19:10:31 crc kubenswrapper[4861]: I0310 19:10:31.721018 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/06936988-eb27-45c1-882e-890cca4cddfe-public-tls-certs\") pod \"neutron-84bb44bb99-8g2vb\" (UID: \"06936988-eb27-45c1-882e-890cca4cddfe\") " pod="openstack/neutron-84bb44bb99-8g2vb" Mar 10 19:10:31 crc kubenswrapper[4861]: I0310 19:10:31.721081 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/06936988-eb27-45c1-882e-890cca4cddfe-internal-tls-certs\") pod \"neutron-84bb44bb99-8g2vb\" (UID: \"06936988-eb27-45c1-882e-890cca4cddfe\") " pod="openstack/neutron-84bb44bb99-8g2vb" Mar 10 19:10:31 crc kubenswrapper[4861]: I0310 19:10:31.721123 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wqpgv\" (UniqueName: \"kubernetes.io/projected/06936988-eb27-45c1-882e-890cca4cddfe-kube-api-access-wqpgv\") pod \"neutron-84bb44bb99-8g2vb\" (UID: \"06936988-eb27-45c1-882e-890cca4cddfe\") " pod="openstack/neutron-84bb44bb99-8g2vb" Mar 10 19:10:31 crc kubenswrapper[4861]: I0310 19:10:31.721264 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/06936988-eb27-45c1-882e-890cca4cddfe-ovndb-tls-certs\") pod \"neutron-84bb44bb99-8g2vb\" (UID: \"06936988-eb27-45c1-882e-890cca4cddfe\") " pod="openstack/neutron-84bb44bb99-8g2vb" Mar 10 19:10:31 crc kubenswrapper[4861]: I0310 19:10:31.721332 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/06936988-eb27-45c1-882e-890cca4cddfe-httpd-config\") pod \"neutron-84bb44bb99-8g2vb\" (UID: \"06936988-eb27-45c1-882e-890cca4cddfe\") " pod="openstack/neutron-84bb44bb99-8g2vb" Mar 10 19:10:31 crc kubenswrapper[4861]: I0310 19:10:31.721512 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/06936988-eb27-45c1-882e-890cca4cddfe-combined-ca-bundle\") pod \"neutron-84bb44bb99-8g2vb\" (UID: \"06936988-eb27-45c1-882e-890cca4cddfe\") " pod="openstack/neutron-84bb44bb99-8g2vb" Mar 10 19:10:31 crc kubenswrapper[4861]: I0310 19:10:31.721598 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/06936988-eb27-45c1-882e-890cca4cddfe-config\") pod \"neutron-84bb44bb99-8g2vb\" (UID: \"06936988-eb27-45c1-882e-890cca4cddfe\") " pod="openstack/neutron-84bb44bb99-8g2vb" Mar 10 19:10:31 crc kubenswrapper[4861]: I0310 19:10:31.822929 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/06936988-eb27-45c1-882e-890cca4cddfe-config\") pod \"neutron-84bb44bb99-8g2vb\" (UID: \"06936988-eb27-45c1-882e-890cca4cddfe\") " pod="openstack/neutron-84bb44bb99-8g2vb" Mar 10 19:10:31 crc kubenswrapper[4861]: I0310 19:10:31.823239 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/06936988-eb27-45c1-882e-890cca4cddfe-public-tls-certs\") pod \"neutron-84bb44bb99-8g2vb\" (UID: \"06936988-eb27-45c1-882e-890cca4cddfe\") " pod="openstack/neutron-84bb44bb99-8g2vb" Mar 10 19:10:31 crc kubenswrapper[4861]: I0310 19:10:31.823278 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/06936988-eb27-45c1-882e-890cca4cddfe-internal-tls-certs\") pod \"neutron-84bb44bb99-8g2vb\" (UID: \"06936988-eb27-45c1-882e-890cca4cddfe\") " pod="openstack/neutron-84bb44bb99-8g2vb" Mar 10 19:10:31 crc kubenswrapper[4861]: I0310 19:10:31.823316 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wqpgv\" (UniqueName: \"kubernetes.io/projected/06936988-eb27-45c1-882e-890cca4cddfe-kube-api-access-wqpgv\") pod \"neutron-84bb44bb99-8g2vb\" (UID: \"06936988-eb27-45c1-882e-890cca4cddfe\") " pod="openstack/neutron-84bb44bb99-8g2vb" Mar 10 19:10:31 crc kubenswrapper[4861]: I0310 19:10:31.823339 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/06936988-eb27-45c1-882e-890cca4cddfe-ovndb-tls-certs\") pod \"neutron-84bb44bb99-8g2vb\" (UID: \"06936988-eb27-45c1-882e-890cca4cddfe\") " pod="openstack/neutron-84bb44bb99-8g2vb" Mar 10 19:10:31 crc kubenswrapper[4861]: I0310 19:10:31.823353 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/06936988-eb27-45c1-882e-890cca4cddfe-httpd-config\") pod \"neutron-84bb44bb99-8g2vb\" (UID: \"06936988-eb27-45c1-882e-890cca4cddfe\") " pod="openstack/neutron-84bb44bb99-8g2vb" Mar 10 19:10:31 crc kubenswrapper[4861]: I0310 19:10:31.823385 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/06936988-eb27-45c1-882e-890cca4cddfe-combined-ca-bundle\") pod \"neutron-84bb44bb99-8g2vb\" (UID: \"06936988-eb27-45c1-882e-890cca4cddfe\") " pod="openstack/neutron-84bb44bb99-8g2vb" Mar 10 19:10:31 crc kubenswrapper[4861]: I0310 19:10:31.829694 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/06936988-eb27-45c1-882e-890cca4cddfe-internal-tls-certs\") pod \"neutron-84bb44bb99-8g2vb\" (UID: \"06936988-eb27-45c1-882e-890cca4cddfe\") " pod="openstack/neutron-84bb44bb99-8g2vb" Mar 10 19:10:31 crc kubenswrapper[4861]: I0310 19:10:31.829919 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/06936988-eb27-45c1-882e-890cca4cddfe-ovndb-tls-certs\") pod \"neutron-84bb44bb99-8g2vb\" (UID: \"06936988-eb27-45c1-882e-890cca4cddfe\") " pod="openstack/neutron-84bb44bb99-8g2vb" Mar 10 19:10:31 crc kubenswrapper[4861]: I0310 19:10:31.830512 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/06936988-eb27-45c1-882e-890cca4cddfe-public-tls-certs\") pod \"neutron-84bb44bb99-8g2vb\" (UID: \"06936988-eb27-45c1-882e-890cca4cddfe\") " pod="openstack/neutron-84bb44bb99-8g2vb" Mar 10 19:10:31 crc kubenswrapper[4861]: I0310 19:10:31.831156 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/06936988-eb27-45c1-882e-890cca4cddfe-httpd-config\") pod \"neutron-84bb44bb99-8g2vb\" (UID: \"06936988-eb27-45c1-882e-890cca4cddfe\") " pod="openstack/neutron-84bb44bb99-8g2vb" Mar 10 19:10:31 crc kubenswrapper[4861]: I0310 19:10:31.836698 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/06936988-eb27-45c1-882e-890cca4cddfe-combined-ca-bundle\") pod \"neutron-84bb44bb99-8g2vb\" (UID: 
\"06936988-eb27-45c1-882e-890cca4cddfe\") " pod="openstack/neutron-84bb44bb99-8g2vb" Mar 10 19:10:31 crc kubenswrapper[4861]: I0310 19:10:31.839210 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/06936988-eb27-45c1-882e-890cca4cddfe-config\") pod \"neutron-84bb44bb99-8g2vb\" (UID: \"06936988-eb27-45c1-882e-890cca4cddfe\") " pod="openstack/neutron-84bb44bb99-8g2vb" Mar 10 19:10:31 crc kubenswrapper[4861]: I0310 19:10:31.839345 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wqpgv\" (UniqueName: \"kubernetes.io/projected/06936988-eb27-45c1-882e-890cca4cddfe-kube-api-access-wqpgv\") pod \"neutron-84bb44bb99-8g2vb\" (UID: \"06936988-eb27-45c1-882e-890cca4cddfe\") " pod="openstack/neutron-84bb44bb99-8g2vb" Mar 10 19:10:31 crc kubenswrapper[4861]: I0310 19:10:31.988524 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-84bb44bb99-8g2vb" Mar 10 19:10:34 crc kubenswrapper[4861]: I0310 19:10:34.195599 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Mar 10 19:10:35 crc kubenswrapper[4861]: I0310 19:10:35.888160 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-7g95t" Mar 10 19:10:35 crc kubenswrapper[4861]: I0310 19:10:35.913658 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-j5zv5" Mar 10 19:10:35 crc kubenswrapper[4861]: I0310 19:10:35.939663 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-6tt6m" Mar 10 19:10:35 crc kubenswrapper[4861]: I0310 19:10:35.968048 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Mar 10 19:10:36 crc kubenswrapper[4861]: I0310 19:10:36.115420 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6aa1372d-7e37-4d43-a96c-08fce6a5eaa1-config-data\") pod \"6aa1372d-7e37-4d43-a96c-08fce6a5eaa1\" (UID: \"6aa1372d-7e37-4d43-a96c-08fce6a5eaa1\") " Mar 10 19:10:36 crc kubenswrapper[4861]: I0310 19:10:36.115490 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ef81bd1a-d4db-4b18-a755-5fc31d09e4dd-config-data\") pod \"ef81bd1a-d4db-4b18-a755-5fc31d09e4dd\" (UID: \"ef81bd1a-d4db-4b18-a755-5fc31d09e4dd\") " Mar 10 19:10:36 crc kubenswrapper[4861]: I0310 19:10:36.115584 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xhcxr\" (UniqueName: \"kubernetes.io/projected/ef81bd1a-d4db-4b18-a755-5fc31d09e4dd-kube-api-access-xhcxr\") pod \"ef81bd1a-d4db-4b18-a755-5fc31d09e4dd\" (UID: \"ef81bd1a-d4db-4b18-a755-5fc31d09e4dd\") " Mar 10 19:10:36 crc kubenswrapper[4861]: I0310 19:10:36.115624 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7068adc7-5930-4999-99ec-eb8ced501cd2-combined-ca-bundle\") pod \"7068adc7-5930-4999-99ec-eb8ced501cd2\" (UID: \"7068adc7-5930-4999-99ec-eb8ced501cd2\") " Mar 10 19:10:36 crc kubenswrapper[4861]: I0310 19:10:36.115730 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/ef81bd1a-d4db-4b18-a755-5fc31d09e4dd-fernet-keys\") pod \"ef81bd1a-d4db-4b18-a755-5fc31d09e4dd\" (UID: 
\"ef81bd1a-d4db-4b18-a755-5fc31d09e4dd\") " Mar 10 19:10:36 crc kubenswrapper[4861]: I0310 19:10:36.115797 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/7068adc7-5930-4999-99ec-eb8ced501cd2-db-sync-config-data\") pod \"7068adc7-5930-4999-99ec-eb8ced501cd2\" (UID: \"7068adc7-5930-4999-99ec-eb8ced501cd2\") " Mar 10 19:10:36 crc kubenswrapper[4861]: I0310 19:10:36.115872 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nwh2h\" (UniqueName: \"kubernetes.io/projected/7068adc7-5930-4999-99ec-eb8ced501cd2-kube-api-access-nwh2h\") pod \"7068adc7-5930-4999-99ec-eb8ced501cd2\" (UID: \"7068adc7-5930-4999-99ec-eb8ced501cd2\") " Mar 10 19:10:36 crc kubenswrapper[4861]: I0310 19:10:36.115913 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6aa1372d-7e37-4d43-a96c-08fce6a5eaa1-scripts\") pod \"6aa1372d-7e37-4d43-a96c-08fce6a5eaa1\" (UID: \"6aa1372d-7e37-4d43-a96c-08fce6a5eaa1\") " Mar 10 19:10:36 crc kubenswrapper[4861]: I0310 19:10:36.115975 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-twtwc\" (UniqueName: \"kubernetes.io/projected/6aa1372d-7e37-4d43-a96c-08fce6a5eaa1-kube-api-access-twtwc\") pod \"6aa1372d-7e37-4d43-a96c-08fce6a5eaa1\" (UID: \"6aa1372d-7e37-4d43-a96c-08fce6a5eaa1\") " Mar 10 19:10:36 crc kubenswrapper[4861]: I0310 19:10:36.116010 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6aa1372d-7e37-4d43-a96c-08fce6a5eaa1-combined-ca-bundle\") pod \"6aa1372d-7e37-4d43-a96c-08fce6a5eaa1\" (UID: \"6aa1372d-7e37-4d43-a96c-08fce6a5eaa1\") " Mar 10 19:10:36 crc kubenswrapper[4861]: I0310 19:10:36.116051 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" 
(UniqueName: \"kubernetes.io/secret/ef81bd1a-d4db-4b18-a755-5fc31d09e4dd-scripts\") pod \"ef81bd1a-d4db-4b18-a755-5fc31d09e4dd\" (UID: \"ef81bd1a-d4db-4b18-a755-5fc31d09e4dd\") " Mar 10 19:10:36 crc kubenswrapper[4861]: I0310 19:10:36.116098 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ef81bd1a-d4db-4b18-a755-5fc31d09e4dd-combined-ca-bundle\") pod \"ef81bd1a-d4db-4b18-a755-5fc31d09e4dd\" (UID: \"ef81bd1a-d4db-4b18-a755-5fc31d09e4dd\") " Mar 10 19:10:36 crc kubenswrapper[4861]: I0310 19:10:36.116124 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/ef81bd1a-d4db-4b18-a755-5fc31d09e4dd-credential-keys\") pod \"ef81bd1a-d4db-4b18-a755-5fc31d09e4dd\" (UID: \"ef81bd1a-d4db-4b18-a755-5fc31d09e4dd\") " Mar 10 19:10:36 crc kubenswrapper[4861]: I0310 19:10:36.116161 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6aa1372d-7e37-4d43-a96c-08fce6a5eaa1-logs\") pod \"6aa1372d-7e37-4d43-a96c-08fce6a5eaa1\" (UID: \"6aa1372d-7e37-4d43-a96c-08fce6a5eaa1\") " Mar 10 19:10:36 crc kubenswrapper[4861]: I0310 19:10:36.117001 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6aa1372d-7e37-4d43-a96c-08fce6a5eaa1-logs" (OuterVolumeSpecName: "logs") pod "6aa1372d-7e37-4d43-a96c-08fce6a5eaa1" (UID: "6aa1372d-7e37-4d43-a96c-08fce6a5eaa1"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 10 19:10:36 crc kubenswrapper[4861]: I0310 19:10:36.127147 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6aa1372d-7e37-4d43-a96c-08fce6a5eaa1-kube-api-access-twtwc" (OuterVolumeSpecName: "kube-api-access-twtwc") pod "6aa1372d-7e37-4d43-a96c-08fce6a5eaa1" (UID: "6aa1372d-7e37-4d43-a96c-08fce6a5eaa1"). 
InnerVolumeSpecName "kube-api-access-twtwc". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 10 19:10:36 crc kubenswrapper[4861]: I0310 19:10:36.127237 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7068adc7-5930-4999-99ec-eb8ced501cd2-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "7068adc7-5930-4999-99ec-eb8ced501cd2" (UID: "7068adc7-5930-4999-99ec-eb8ced501cd2"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 10 19:10:36 crc kubenswrapper[4861]: I0310 19:10:36.128135 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ef81bd1a-d4db-4b18-a755-5fc31d09e4dd-kube-api-access-xhcxr" (OuterVolumeSpecName: "kube-api-access-xhcxr") pod "ef81bd1a-d4db-4b18-a755-5fc31d09e4dd" (UID: "ef81bd1a-d4db-4b18-a755-5fc31d09e4dd"). InnerVolumeSpecName "kube-api-access-xhcxr". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 10 19:10:36 crc kubenswrapper[4861]: I0310 19:10:36.132237 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ef81bd1a-d4db-4b18-a755-5fc31d09e4dd-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "ef81bd1a-d4db-4b18-a755-5fc31d09e4dd" (UID: "ef81bd1a-d4db-4b18-a755-5fc31d09e4dd"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 10 19:10:36 crc kubenswrapper[4861]: I0310 19:10:36.140431 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ef81bd1a-d4db-4b18-a755-5fc31d09e4dd-scripts" (OuterVolumeSpecName: "scripts") pod "ef81bd1a-d4db-4b18-a755-5fc31d09e4dd" (UID: "ef81bd1a-d4db-4b18-a755-5fc31d09e4dd"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 10 19:10:36 crc kubenswrapper[4861]: I0310 19:10:36.140466 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7068adc7-5930-4999-99ec-eb8ced501cd2-kube-api-access-nwh2h" (OuterVolumeSpecName: "kube-api-access-nwh2h") pod "7068adc7-5930-4999-99ec-eb8ced501cd2" (UID: "7068adc7-5930-4999-99ec-eb8ced501cd2"). InnerVolumeSpecName "kube-api-access-nwh2h". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 10 19:10:36 crc kubenswrapper[4861]: I0310 19:10:36.140553 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ef81bd1a-d4db-4b18-a755-5fc31d09e4dd-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "ef81bd1a-d4db-4b18-a755-5fc31d09e4dd" (UID: "ef81bd1a-d4db-4b18-a755-5fc31d09e4dd"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 10 19:10:36 crc kubenswrapper[4861]: I0310 19:10:36.140606 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6aa1372d-7e37-4d43-a96c-08fce6a5eaa1-scripts" (OuterVolumeSpecName: "scripts") pod "6aa1372d-7e37-4d43-a96c-08fce6a5eaa1" (UID: "6aa1372d-7e37-4d43-a96c-08fce6a5eaa1"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 10 19:10:36 crc kubenswrapper[4861]: I0310 19:10:36.152050 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ef81bd1a-d4db-4b18-a755-5fc31d09e4dd-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ef81bd1a-d4db-4b18-a755-5fc31d09e4dd" (UID: "ef81bd1a-d4db-4b18-a755-5fc31d09e4dd"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 10 19:10:36 crc kubenswrapper[4861]: I0310 19:10:36.155061 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ef81bd1a-d4db-4b18-a755-5fc31d09e4dd-config-data" (OuterVolumeSpecName: "config-data") pod "ef81bd1a-d4db-4b18-a755-5fc31d09e4dd" (UID: "ef81bd1a-d4db-4b18-a755-5fc31d09e4dd"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 10 19:10:36 crc kubenswrapper[4861]: I0310 19:10:36.165741 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6aa1372d-7e37-4d43-a96c-08fce6a5eaa1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6aa1372d-7e37-4d43-a96c-08fce6a5eaa1" (UID: "6aa1372d-7e37-4d43-a96c-08fce6a5eaa1"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 10 19:10:36 crc kubenswrapper[4861]: I0310 19:10:36.166220 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6aa1372d-7e37-4d43-a96c-08fce6a5eaa1-config-data" (OuterVolumeSpecName: "config-data") pod "6aa1372d-7e37-4d43-a96c-08fce6a5eaa1" (UID: "6aa1372d-7e37-4d43-a96c-08fce6a5eaa1"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 10 19:10:36 crc kubenswrapper[4861]: I0310 19:10:36.170899 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7068adc7-5930-4999-99ec-eb8ced501cd2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7068adc7-5930-4999-99ec-eb8ced501cd2" (UID: "7068adc7-5930-4999-99ec-eb8ced501cd2"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 10 19:10:36 crc kubenswrapper[4861]: I0310 19:10:36.218299 4861 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7068adc7-5930-4999-99ec-eb8ced501cd2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 10 19:10:36 crc kubenswrapper[4861]: I0310 19:10:36.218339 4861 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/ef81bd1a-d4db-4b18-a755-5fc31d09e4dd-fernet-keys\") on node \"crc\" DevicePath \"\"" Mar 10 19:10:36 crc kubenswrapper[4861]: I0310 19:10:36.218347 4861 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/7068adc7-5930-4999-99ec-eb8ced501cd2-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Mar 10 19:10:36 crc kubenswrapper[4861]: I0310 19:10:36.218356 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nwh2h\" (UniqueName: \"kubernetes.io/projected/7068adc7-5930-4999-99ec-eb8ced501cd2-kube-api-access-nwh2h\") on node \"crc\" DevicePath \"\"" Mar 10 19:10:36 crc kubenswrapper[4861]: I0310 19:10:36.218366 4861 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6aa1372d-7e37-4d43-a96c-08fce6a5eaa1-scripts\") on node \"crc\" DevicePath \"\"" Mar 10 19:10:36 crc kubenswrapper[4861]: I0310 19:10:36.218375 4861 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6aa1372d-7e37-4d43-a96c-08fce6a5eaa1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 10 19:10:36 crc kubenswrapper[4861]: I0310 19:10:36.218383 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-twtwc\" (UniqueName: \"kubernetes.io/projected/6aa1372d-7e37-4d43-a96c-08fce6a5eaa1-kube-api-access-twtwc\") on node \"crc\" DevicePath \"\"" Mar 10 19:10:36 crc kubenswrapper[4861]: I0310 
19:10:36.218391 4861 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ef81bd1a-d4db-4b18-a755-5fc31d09e4dd-scripts\") on node \"crc\" DevicePath \"\"" Mar 10 19:10:36 crc kubenswrapper[4861]: I0310 19:10:36.218398 4861 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ef81bd1a-d4db-4b18-a755-5fc31d09e4dd-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 10 19:10:36 crc kubenswrapper[4861]: I0310 19:10:36.218431 4861 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/ef81bd1a-d4db-4b18-a755-5fc31d09e4dd-credential-keys\") on node \"crc\" DevicePath \"\"" Mar 10 19:10:36 crc kubenswrapper[4861]: I0310 19:10:36.218440 4861 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6aa1372d-7e37-4d43-a96c-08fce6a5eaa1-logs\") on node \"crc\" DevicePath \"\"" Mar 10 19:10:36 crc kubenswrapper[4861]: I0310 19:10:36.218449 4861 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6aa1372d-7e37-4d43-a96c-08fce6a5eaa1-config-data\") on node \"crc\" DevicePath \"\"" Mar 10 19:10:36 crc kubenswrapper[4861]: I0310 19:10:36.218457 4861 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ef81bd1a-d4db-4b18-a755-5fc31d09e4dd-config-data\") on node \"crc\" DevicePath \"\"" Mar 10 19:10:36 crc kubenswrapper[4861]: I0310 19:10:36.218467 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xhcxr\" (UniqueName: \"kubernetes.io/projected/ef81bd1a-d4db-4b18-a755-5fc31d09e4dd-kube-api-access-xhcxr\") on node \"crc\" DevicePath \"\"" Mar 10 19:10:36 crc kubenswrapper[4861]: I0310 19:10:36.283950 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-84bb44bb99-8g2vb"] Mar 10 19:10:36 crc kubenswrapper[4861]: I0310 
19:10:36.469844 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-j5zv5" event={"ID":"7068adc7-5930-4999-99ec-eb8ced501cd2","Type":"ContainerDied","Data":"33e5dbddd567946a0259b77e2d1f7b110a2f6e79bfaccca43db21d44e6ba2ef7"} Mar 10 19:10:36 crc kubenswrapper[4861]: I0310 19:10:36.470080 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-j5zv5" Mar 10 19:10:36 crc kubenswrapper[4861]: I0310 19:10:36.470088 4861 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="33e5dbddd567946a0259b77e2d1f7b110a2f6e79bfaccca43db21d44e6ba2ef7" Mar 10 19:10:36 crc kubenswrapper[4861]: I0310 19:10:36.472056 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a8b38f86-cca2-4d33-bcea-b93c80da6490","Type":"ContainerStarted","Data":"c3583d364195930c668851631310fcf2624c721aad140d433c315bc623ae2787"} Mar 10 19:10:36 crc kubenswrapper[4861]: I0310 19:10:36.476906 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-6tt6m" Mar 10 19:10:36 crc kubenswrapper[4861]: I0310 19:10:36.477148 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-6tt6m" event={"ID":"ef81bd1a-d4db-4b18-a755-5fc31d09e4dd","Type":"ContainerDied","Data":"59969aed6a53c63c8c2c8d54072dce5542aba953f5468ac66d3eeb3423694dd8"} Mar 10 19:10:36 crc kubenswrapper[4861]: I0310 19:10:36.477198 4861 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="59969aed6a53c63c8c2c8d54072dce5542aba953f5468ac66d3eeb3423694dd8" Mar 10 19:10:36 crc kubenswrapper[4861]: I0310 19:10:36.479924 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-84bb44bb99-8g2vb" event={"ID":"06936988-eb27-45c1-882e-890cca4cddfe","Type":"ContainerStarted","Data":"c1c8d15feff5e5cdb5fefa2d7b2105539922ecb219a45be426bb5a8c8841edb8"} Mar 10 19:10:36 crc kubenswrapper[4861]: I0310 19:10:36.479985 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-84bb44bb99-8g2vb" event={"ID":"06936988-eb27-45c1-882e-890cca4cddfe","Type":"ContainerStarted","Data":"cbdf0efdeae49fcc08a801b3cb1fdf7cd3226d9dc24a8de68fbdfe890799cbc4"} Mar 10 19:10:36 crc kubenswrapper[4861]: I0310 19:10:36.484877 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-7g95t" event={"ID":"6aa1372d-7e37-4d43-a96c-08fce6a5eaa1","Type":"ContainerDied","Data":"f74d1b927d23a1c7bb74acb07899fc2d918a8bbdd8cb562982fe8632e0193636"} Mar 10 19:10:36 crc kubenswrapper[4861]: I0310 19:10:36.484917 4861 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f74d1b927d23a1c7bb74acb07899fc2d918a8bbdd8cb562982fe8632e0193636" Mar 10 19:10:36 crc kubenswrapper[4861]: I0310 19:10:36.485047 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-7g95t" Mar 10 19:10:36 crc kubenswrapper[4861]: I0310 19:10:36.488886 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7859c7799c-v4js4" event={"ID":"ddb62402-c4f6-49ff-b0cc-f669a86f906d","Type":"ContainerStarted","Data":"e3bbd076ef00f24b93b5605c9aa6033542494900db16c64fb3aee9827cabdacc"} Mar 10 19:10:36 crc kubenswrapper[4861]: I0310 19:10:36.489555 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7859c7799c-v4js4" Mar 10 19:10:36 crc kubenswrapper[4861]: I0310 19:10:36.528222 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7859c7799c-v4js4" podStartSLOduration=8.528200959 podStartE2EDuration="8.528200959s" podCreationTimestamp="2026-03-10 19:10:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-10 19:10:36.523242571 +0000 UTC m=+1380.286678561" watchObservedRunningTime="2026-03-10 19:10:36.528200959 +0000 UTC m=+1380.291636929" Mar 10 19:10:36 crc kubenswrapper[4861]: I0310 19:10:36.773186 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Mar 10 19:10:36 crc kubenswrapper[4861]: I0310 19:10:36.773233 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Mar 10 19:10:36 crc kubenswrapper[4861]: I0310 19:10:36.803182 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Mar 10 19:10:36 crc kubenswrapper[4861]: I0310 19:10:36.826258 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Mar 10 19:10:37 crc kubenswrapper[4861]: I0310 19:10:37.077625 4861 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/keystone-75ff4ff987-k4jks"] Mar 10 19:10:37 crc kubenswrapper[4861]: E0310 19:10:37.078301 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6aa1372d-7e37-4d43-a96c-08fce6a5eaa1" containerName="placement-db-sync" Mar 10 19:10:37 crc kubenswrapper[4861]: I0310 19:10:37.078317 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="6aa1372d-7e37-4d43-a96c-08fce6a5eaa1" containerName="placement-db-sync" Mar 10 19:10:37 crc kubenswrapper[4861]: E0310 19:10:37.078332 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ef81bd1a-d4db-4b18-a755-5fc31d09e4dd" containerName="keystone-bootstrap" Mar 10 19:10:37 crc kubenswrapper[4861]: I0310 19:10:37.078339 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="ef81bd1a-d4db-4b18-a755-5fc31d09e4dd" containerName="keystone-bootstrap" Mar 10 19:10:37 crc kubenswrapper[4861]: E0310 19:10:37.078369 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7068adc7-5930-4999-99ec-eb8ced501cd2" containerName="barbican-db-sync" Mar 10 19:10:37 crc kubenswrapper[4861]: I0310 19:10:37.078377 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="7068adc7-5930-4999-99ec-eb8ced501cd2" containerName="barbican-db-sync" Mar 10 19:10:37 crc kubenswrapper[4861]: I0310 19:10:37.078562 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="ef81bd1a-d4db-4b18-a755-5fc31d09e4dd" containerName="keystone-bootstrap" Mar 10 19:10:37 crc kubenswrapper[4861]: I0310 19:10:37.078607 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="6aa1372d-7e37-4d43-a96c-08fce6a5eaa1" containerName="placement-db-sync" Mar 10 19:10:37 crc kubenswrapper[4861]: I0310 19:10:37.078625 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="7068adc7-5930-4999-99ec-eb8ced501cd2" containerName="barbican-db-sync" Mar 10 19:10:37 crc kubenswrapper[4861]: I0310 19:10:37.079369 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-75ff4ff987-k4jks" Mar 10 19:10:37 crc kubenswrapper[4861]: I0310 19:10:37.082969 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-xllsk" Mar 10 19:10:37 crc kubenswrapper[4861]: I0310 19:10:37.083118 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Mar 10 19:10:37 crc kubenswrapper[4861]: I0310 19:10:37.083459 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Mar 10 19:10:37 crc kubenswrapper[4861]: I0310 19:10:37.084975 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-7f85b8cc7d-lblq8"] Mar 10 19:10:37 crc kubenswrapper[4861]: I0310 19:10:37.086286 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-7f85b8cc7d-lblq8" Mar 10 19:10:37 crc kubenswrapper[4861]: I0310 19:10:37.088276 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc" Mar 10 19:10:37 crc kubenswrapper[4861]: I0310 19:10:37.088874 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc" Mar 10 19:10:37 crc kubenswrapper[4861]: I0310 19:10:37.088979 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Mar 10 19:10:37 crc kubenswrapper[4861]: I0310 19:10:37.090037 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Mar 10 19:10:37 crc kubenswrapper[4861]: I0310 19:10:37.090270 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-internal-svc" Mar 10 19:10:37 crc kubenswrapper[4861]: I0310 19:10:37.090408 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-public-svc" Mar 10 19:10:37 crc kubenswrapper[4861]: I0310 19:10:37.090568 4861 reflector.go:368] 
Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Mar 10 19:10:37 crc kubenswrapper[4861]: I0310 19:10:37.090605 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-4kqnb" Mar 10 19:10:37 crc kubenswrapper[4861]: I0310 19:10:37.113513 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-75ff4ff987-k4jks"] Mar 10 19:10:37 crc kubenswrapper[4861]: I0310 19:10:37.171948 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-7f85b8cc7d-lblq8"] Mar 10 19:10:37 crc kubenswrapper[4861]: I0310 19:10:37.245463 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/fb082653-4ce1-4696-b6fb-e6af12109812-public-tls-certs\") pod \"keystone-75ff4ff987-k4jks\" (UID: \"fb082653-4ce1-4696-b6fb-e6af12109812\") " pod="openstack/keystone-75ff4ff987-k4jks" Mar 10 19:10:37 crc kubenswrapper[4861]: I0310 19:10:37.245523 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/fb082653-4ce1-4696-b6fb-e6af12109812-fernet-keys\") pod \"keystone-75ff4ff987-k4jks\" (UID: \"fb082653-4ce1-4696-b6fb-e6af12109812\") " pod="openstack/keystone-75ff4ff987-k4jks" Mar 10 19:10:37 crc kubenswrapper[4861]: I0310 19:10:37.245557 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/fb082653-4ce1-4696-b6fb-e6af12109812-internal-tls-certs\") pod \"keystone-75ff4ff987-k4jks\" (UID: \"fb082653-4ce1-4696-b6fb-e6af12109812\") " pod="openstack/keystone-75ff4ff987-k4jks" Mar 10 19:10:37 crc kubenswrapper[4861]: I0310 19:10:37.245574 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: 
\"kubernetes.io/secret/fb082653-4ce1-4696-b6fb-e6af12109812-credential-keys\") pod \"keystone-75ff4ff987-k4jks\" (UID: \"fb082653-4ce1-4696-b6fb-e6af12109812\") " pod="openstack/keystone-75ff4ff987-k4jks" Mar 10 19:10:37 crc kubenswrapper[4861]: I0310 19:10:37.245601 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ba01933c-1abf-473e-b55a-8e9ec135d938-scripts\") pod \"placement-7f85b8cc7d-lblq8\" (UID: \"ba01933c-1abf-473e-b55a-8e9ec135d938\") " pod="openstack/placement-7f85b8cc7d-lblq8" Mar 10 19:10:37 crc kubenswrapper[4861]: I0310 19:10:37.245628 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ba01933c-1abf-473e-b55a-8e9ec135d938-combined-ca-bundle\") pod \"placement-7f85b8cc7d-lblq8\" (UID: \"ba01933c-1abf-473e-b55a-8e9ec135d938\") " pod="openstack/placement-7f85b8cc7d-lblq8" Mar 10 19:10:37 crc kubenswrapper[4861]: I0310 19:10:37.245680 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fb082653-4ce1-4696-b6fb-e6af12109812-combined-ca-bundle\") pod \"keystone-75ff4ff987-k4jks\" (UID: \"fb082653-4ce1-4696-b6fb-e6af12109812\") " pod="openstack/keystone-75ff4ff987-k4jks" Mar 10 19:10:37 crc kubenswrapper[4861]: I0310 19:10:37.245701 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ba01933c-1abf-473e-b55a-8e9ec135d938-public-tls-certs\") pod \"placement-7f85b8cc7d-lblq8\" (UID: \"ba01933c-1abf-473e-b55a-8e9ec135d938\") " pod="openstack/placement-7f85b8cc7d-lblq8" Mar 10 19:10:37 crc kubenswrapper[4861]: I0310 19:10:37.245752 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/fb082653-4ce1-4696-b6fb-e6af12109812-config-data\") pod \"keystone-75ff4ff987-k4jks\" (UID: \"fb082653-4ce1-4696-b6fb-e6af12109812\") " pod="openstack/keystone-75ff4ff987-k4jks" Mar 10 19:10:37 crc kubenswrapper[4861]: I0310 19:10:37.245838 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ba01933c-1abf-473e-b55a-8e9ec135d938-internal-tls-certs\") pod \"placement-7f85b8cc7d-lblq8\" (UID: \"ba01933c-1abf-473e-b55a-8e9ec135d938\") " pod="openstack/placement-7f85b8cc7d-lblq8" Mar 10 19:10:37 crc kubenswrapper[4861]: I0310 19:10:37.245878 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gzgl6\" (UniqueName: \"kubernetes.io/projected/fb082653-4ce1-4696-b6fb-e6af12109812-kube-api-access-gzgl6\") pod \"keystone-75ff4ff987-k4jks\" (UID: \"fb082653-4ce1-4696-b6fb-e6af12109812\") " pod="openstack/keystone-75ff4ff987-k4jks" Mar 10 19:10:37 crc kubenswrapper[4861]: I0310 19:10:37.245910 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fb082653-4ce1-4696-b6fb-e6af12109812-scripts\") pod \"keystone-75ff4ff987-k4jks\" (UID: \"fb082653-4ce1-4696-b6fb-e6af12109812\") " pod="openstack/keystone-75ff4ff987-k4jks" Mar 10 19:10:37 crc kubenswrapper[4861]: I0310 19:10:37.245987 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ba01933c-1abf-473e-b55a-8e9ec135d938-config-data\") pod \"placement-7f85b8cc7d-lblq8\" (UID: \"ba01933c-1abf-473e-b55a-8e9ec135d938\") " pod="openstack/placement-7f85b8cc7d-lblq8" Mar 10 19:10:37 crc kubenswrapper[4861]: I0310 19:10:37.246008 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-jw7zn\" (UniqueName: \"kubernetes.io/projected/ba01933c-1abf-473e-b55a-8e9ec135d938-kube-api-access-jw7zn\") pod \"placement-7f85b8cc7d-lblq8\" (UID: \"ba01933c-1abf-473e-b55a-8e9ec135d938\") " pod="openstack/placement-7f85b8cc7d-lblq8" Mar 10 19:10:37 crc kubenswrapper[4861]: I0310 19:10:37.246032 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ba01933c-1abf-473e-b55a-8e9ec135d938-logs\") pod \"placement-7f85b8cc7d-lblq8\" (UID: \"ba01933c-1abf-473e-b55a-8e9ec135d938\") " pod="openstack/placement-7f85b8cc7d-lblq8" Mar 10 19:10:37 crc kubenswrapper[4861]: I0310 19:10:37.261031 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-745b4575bf-n9gzs"] Mar 10 19:10:37 crc kubenswrapper[4861]: I0310 19:10:37.262642 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-745b4575bf-n9gzs" Mar 10 19:10:37 crc kubenswrapper[4861]: I0310 19:10:37.274782 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-worker-config-data" Mar 10 19:10:37 crc kubenswrapper[4861]: I0310 19:10:37.275030 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-r8n4d" Mar 10 19:10:37 crc kubenswrapper[4861]: I0310 19:10:37.276507 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Mar 10 19:10:37 crc kubenswrapper[4861]: I0310 19:10:37.286829 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-78dd995c5d-2ppbp"] Mar 10 19:10:37 crc kubenswrapper[4861]: I0310 19:10:37.288283 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-keystone-listener-78dd995c5d-2ppbp" Mar 10 19:10:37 crc kubenswrapper[4861]: I0310 19:10:37.298397 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-keystone-listener-config-data" Mar 10 19:10:37 crc kubenswrapper[4861]: I0310 19:10:37.318413 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-78dd995c5d-2ppbp"] Mar 10 19:10:37 crc kubenswrapper[4861]: I0310 19:10:37.344208 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-745b4575bf-n9gzs"] Mar 10 19:10:37 crc kubenswrapper[4861]: I0310 19:10:37.347726 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ba01933c-1abf-473e-b55a-8e9ec135d938-config-data\") pod \"placement-7f85b8cc7d-lblq8\" (UID: \"ba01933c-1abf-473e-b55a-8e9ec135d938\") " pod="openstack/placement-7f85b8cc7d-lblq8" Mar 10 19:10:37 crc kubenswrapper[4861]: I0310 19:10:37.347774 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jw7zn\" (UniqueName: \"kubernetes.io/projected/ba01933c-1abf-473e-b55a-8e9ec135d938-kube-api-access-jw7zn\") pod \"placement-7f85b8cc7d-lblq8\" (UID: \"ba01933c-1abf-473e-b55a-8e9ec135d938\") " pod="openstack/placement-7f85b8cc7d-lblq8" Mar 10 19:10:37 crc kubenswrapper[4861]: I0310 19:10:37.347796 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ba01933c-1abf-473e-b55a-8e9ec135d938-logs\") pod \"placement-7f85b8cc7d-lblq8\" (UID: \"ba01933c-1abf-473e-b55a-8e9ec135d938\") " pod="openstack/placement-7f85b8cc7d-lblq8" Mar 10 19:10:37 crc kubenswrapper[4861]: I0310 19:10:37.347831 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/d1d0fe2f-b350-4bcd-8f3b-309092093033-combined-ca-bundle\") pod \"barbican-worker-745b4575bf-n9gzs\" (UID: \"d1d0fe2f-b350-4bcd-8f3b-309092093033\") " pod="openstack/barbican-worker-745b4575bf-n9gzs" Mar 10 19:10:37 crc kubenswrapper[4861]: I0310 19:10:37.347851 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/fb082653-4ce1-4696-b6fb-e6af12109812-public-tls-certs\") pod \"keystone-75ff4ff987-k4jks\" (UID: \"fb082653-4ce1-4696-b6fb-e6af12109812\") " pod="openstack/keystone-75ff4ff987-k4jks" Mar 10 19:10:37 crc kubenswrapper[4861]: I0310 19:10:37.347871 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/fb082653-4ce1-4696-b6fb-e6af12109812-fernet-keys\") pod \"keystone-75ff4ff987-k4jks\" (UID: \"fb082653-4ce1-4696-b6fb-e6af12109812\") " pod="openstack/keystone-75ff4ff987-k4jks" Mar 10 19:10:37 crc kubenswrapper[4861]: I0310 19:10:37.347892 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/fb082653-4ce1-4696-b6fb-e6af12109812-credential-keys\") pod \"keystone-75ff4ff987-k4jks\" (UID: \"fb082653-4ce1-4696-b6fb-e6af12109812\") " pod="openstack/keystone-75ff4ff987-k4jks" Mar 10 19:10:37 crc kubenswrapper[4861]: I0310 19:10:37.347905 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/fb082653-4ce1-4696-b6fb-e6af12109812-internal-tls-certs\") pod \"keystone-75ff4ff987-k4jks\" (UID: \"fb082653-4ce1-4696-b6fb-e6af12109812\") " pod="openstack/keystone-75ff4ff987-k4jks" Mar 10 19:10:37 crc kubenswrapper[4861]: I0310 19:10:37.347925 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ba01933c-1abf-473e-b55a-8e9ec135d938-scripts\") pod 
\"placement-7f85b8cc7d-lblq8\" (UID: \"ba01933c-1abf-473e-b55a-8e9ec135d938\") " pod="openstack/placement-7f85b8cc7d-lblq8" Mar 10 19:10:37 crc kubenswrapper[4861]: I0310 19:10:37.347940 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ba01933c-1abf-473e-b55a-8e9ec135d938-combined-ca-bundle\") pod \"placement-7f85b8cc7d-lblq8\" (UID: \"ba01933c-1abf-473e-b55a-8e9ec135d938\") " pod="openstack/placement-7f85b8cc7d-lblq8" Mar 10 19:10:37 crc kubenswrapper[4861]: I0310 19:10:37.347972 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fb082653-4ce1-4696-b6fb-e6af12109812-combined-ca-bundle\") pod \"keystone-75ff4ff987-k4jks\" (UID: \"fb082653-4ce1-4696-b6fb-e6af12109812\") " pod="openstack/keystone-75ff4ff987-k4jks" Mar 10 19:10:37 crc kubenswrapper[4861]: I0310 19:10:37.347991 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ba01933c-1abf-473e-b55a-8e9ec135d938-public-tls-certs\") pod \"placement-7f85b8cc7d-lblq8\" (UID: \"ba01933c-1abf-473e-b55a-8e9ec135d938\") " pod="openstack/placement-7f85b8cc7d-lblq8" Mar 10 19:10:37 crc kubenswrapper[4861]: I0310 19:10:37.348011 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d1d0fe2f-b350-4bcd-8f3b-309092093033-config-data\") pod \"barbican-worker-745b4575bf-n9gzs\" (UID: \"d1d0fe2f-b350-4bcd-8f3b-309092093033\") " pod="openstack/barbican-worker-745b4575bf-n9gzs" Mar 10 19:10:37 crc kubenswrapper[4861]: I0310 19:10:37.348032 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fb082653-4ce1-4696-b6fb-e6af12109812-config-data\") pod \"keystone-75ff4ff987-k4jks\" (UID: 
\"fb082653-4ce1-4696-b6fb-e6af12109812\") " pod="openstack/keystone-75ff4ff987-k4jks" Mar 10 19:10:37 crc kubenswrapper[4861]: I0310 19:10:37.348054 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d1d0fe2f-b350-4bcd-8f3b-309092093033-logs\") pod \"barbican-worker-745b4575bf-n9gzs\" (UID: \"d1d0fe2f-b350-4bcd-8f3b-309092093033\") " pod="openstack/barbican-worker-745b4575bf-n9gzs" Mar 10 19:10:37 crc kubenswrapper[4861]: I0310 19:10:37.348073 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ba01933c-1abf-473e-b55a-8e9ec135d938-internal-tls-certs\") pod \"placement-7f85b8cc7d-lblq8\" (UID: \"ba01933c-1abf-473e-b55a-8e9ec135d938\") " pod="openstack/placement-7f85b8cc7d-lblq8" Mar 10 19:10:37 crc kubenswrapper[4861]: I0310 19:10:37.348091 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d1d0fe2f-b350-4bcd-8f3b-309092093033-config-data-custom\") pod \"barbican-worker-745b4575bf-n9gzs\" (UID: \"d1d0fe2f-b350-4bcd-8f3b-309092093033\") " pod="openstack/barbican-worker-745b4575bf-n9gzs" Mar 10 19:10:37 crc kubenswrapper[4861]: I0310 19:10:37.348113 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gzgl6\" (UniqueName: \"kubernetes.io/projected/fb082653-4ce1-4696-b6fb-e6af12109812-kube-api-access-gzgl6\") pod \"keystone-75ff4ff987-k4jks\" (UID: \"fb082653-4ce1-4696-b6fb-e6af12109812\") " pod="openstack/keystone-75ff4ff987-k4jks" Mar 10 19:10:37 crc kubenswrapper[4861]: I0310 19:10:37.348129 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fb082653-4ce1-4696-b6fb-e6af12109812-scripts\") pod \"keystone-75ff4ff987-k4jks\" (UID: 
\"fb082653-4ce1-4696-b6fb-e6af12109812\") " pod="openstack/keystone-75ff4ff987-k4jks" Mar 10 19:10:37 crc kubenswrapper[4861]: I0310 19:10:37.348165 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xmbqn\" (UniqueName: \"kubernetes.io/projected/d1d0fe2f-b350-4bcd-8f3b-309092093033-kube-api-access-xmbqn\") pod \"barbican-worker-745b4575bf-n9gzs\" (UID: \"d1d0fe2f-b350-4bcd-8f3b-309092093033\") " pod="openstack/barbican-worker-745b4575bf-n9gzs" Mar 10 19:10:37 crc kubenswrapper[4861]: I0310 19:10:37.359670 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/fb082653-4ce1-4696-b6fb-e6af12109812-public-tls-certs\") pod \"keystone-75ff4ff987-k4jks\" (UID: \"fb082653-4ce1-4696-b6fb-e6af12109812\") " pod="openstack/keystone-75ff4ff987-k4jks" Mar 10 19:10:37 crc kubenswrapper[4861]: I0310 19:10:37.361327 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ba01933c-1abf-473e-b55a-8e9ec135d938-public-tls-certs\") pod \"placement-7f85b8cc7d-lblq8\" (UID: \"ba01933c-1abf-473e-b55a-8e9ec135d938\") " pod="openstack/placement-7f85b8cc7d-lblq8" Mar 10 19:10:37 crc kubenswrapper[4861]: I0310 19:10:37.363278 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ba01933c-1abf-473e-b55a-8e9ec135d938-logs\") pod \"placement-7f85b8cc7d-lblq8\" (UID: \"ba01933c-1abf-473e-b55a-8e9ec135d938\") " pod="openstack/placement-7f85b8cc7d-lblq8" Mar 10 19:10:37 crc kubenswrapper[4861]: I0310 19:10:37.366910 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ba01933c-1abf-473e-b55a-8e9ec135d938-internal-tls-certs\") pod \"placement-7f85b8cc7d-lblq8\" (UID: \"ba01933c-1abf-473e-b55a-8e9ec135d938\") " 
pod="openstack/placement-7f85b8cc7d-lblq8" Mar 10 19:10:37 crc kubenswrapper[4861]: I0310 19:10:37.369405 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fb082653-4ce1-4696-b6fb-e6af12109812-combined-ca-bundle\") pod \"keystone-75ff4ff987-k4jks\" (UID: \"fb082653-4ce1-4696-b6fb-e6af12109812\") " pod="openstack/keystone-75ff4ff987-k4jks" Mar 10 19:10:37 crc kubenswrapper[4861]: I0310 19:10:37.374951 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fb082653-4ce1-4696-b6fb-e6af12109812-config-data\") pod \"keystone-75ff4ff987-k4jks\" (UID: \"fb082653-4ce1-4696-b6fb-e6af12109812\") " pod="openstack/keystone-75ff4ff987-k4jks" Mar 10 19:10:37 crc kubenswrapper[4861]: I0310 19:10:37.379074 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ba01933c-1abf-473e-b55a-8e9ec135d938-scripts\") pod \"placement-7f85b8cc7d-lblq8\" (UID: \"ba01933c-1abf-473e-b55a-8e9ec135d938\") " pod="openstack/placement-7f85b8cc7d-lblq8" Mar 10 19:10:37 crc kubenswrapper[4861]: I0310 19:10:37.381017 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/fb082653-4ce1-4696-b6fb-e6af12109812-credential-keys\") pod \"keystone-75ff4ff987-k4jks\" (UID: \"fb082653-4ce1-4696-b6fb-e6af12109812\") " pod="openstack/keystone-75ff4ff987-k4jks" Mar 10 19:10:37 crc kubenswrapper[4861]: I0310 19:10:37.382435 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ba01933c-1abf-473e-b55a-8e9ec135d938-combined-ca-bundle\") pod \"placement-7f85b8cc7d-lblq8\" (UID: \"ba01933c-1abf-473e-b55a-8e9ec135d938\") " pod="openstack/placement-7f85b8cc7d-lblq8" Mar 10 19:10:37 crc kubenswrapper[4861]: I0310 19:10:37.384502 4861 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ba01933c-1abf-473e-b55a-8e9ec135d938-config-data\") pod \"placement-7f85b8cc7d-lblq8\" (UID: \"ba01933c-1abf-473e-b55a-8e9ec135d938\") " pod="openstack/placement-7f85b8cc7d-lblq8" Mar 10 19:10:37 crc kubenswrapper[4861]: I0310 19:10:37.393229 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/fb082653-4ce1-4696-b6fb-e6af12109812-internal-tls-certs\") pod \"keystone-75ff4ff987-k4jks\" (UID: \"fb082653-4ce1-4696-b6fb-e6af12109812\") " pod="openstack/keystone-75ff4ff987-k4jks" Mar 10 19:10:37 crc kubenswrapper[4861]: I0310 19:10:37.393488 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fb082653-4ce1-4696-b6fb-e6af12109812-scripts\") pod \"keystone-75ff4ff987-k4jks\" (UID: \"fb082653-4ce1-4696-b6fb-e6af12109812\") " pod="openstack/keystone-75ff4ff987-k4jks" Mar 10 19:10:37 crc kubenswrapper[4861]: I0310 19:10:37.406079 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/fb082653-4ce1-4696-b6fb-e6af12109812-fernet-keys\") pod \"keystone-75ff4ff987-k4jks\" (UID: \"fb082653-4ce1-4696-b6fb-e6af12109812\") " pod="openstack/keystone-75ff4ff987-k4jks" Mar 10 19:10:37 crc kubenswrapper[4861]: I0310 19:10:37.407314 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jw7zn\" (UniqueName: \"kubernetes.io/projected/ba01933c-1abf-473e-b55a-8e9ec135d938-kube-api-access-jw7zn\") pod \"placement-7f85b8cc7d-lblq8\" (UID: \"ba01933c-1abf-473e-b55a-8e9ec135d938\") " pod="openstack/placement-7f85b8cc7d-lblq8" Mar 10 19:10:37 crc kubenswrapper[4861]: I0310 19:10:37.407787 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gzgl6\" (UniqueName: 
\"kubernetes.io/projected/fb082653-4ce1-4696-b6fb-e6af12109812-kube-api-access-gzgl6\") pod \"keystone-75ff4ff987-k4jks\" (UID: \"fb082653-4ce1-4696-b6fb-e6af12109812\") " pod="openstack/keystone-75ff4ff987-k4jks" Mar 10 19:10:37 crc kubenswrapper[4861]: I0310 19:10:37.413002 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-75ff4ff987-k4jks" Mar 10 19:10:37 crc kubenswrapper[4861]: I0310 19:10:37.424769 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7859c7799c-v4js4"] Mar 10 19:10:37 crc kubenswrapper[4861]: I0310 19:10:37.438469 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-7f85b8cc7d-lblq8" Mar 10 19:10:37 crc kubenswrapper[4861]: I0310 19:10:37.447528 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-8449d68f4f-wp6ch"] Mar 10 19:10:37 crc kubenswrapper[4861]: I0310 19:10:37.449135 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-8449d68f4f-wp6ch" Mar 10 19:10:37 crc kubenswrapper[4861]: I0310 19:10:37.450298 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/821473ca-1fe9-4299-b5ff-d2a202fee1cc-config-data-custom\") pod \"barbican-keystone-listener-78dd995c5d-2ppbp\" (UID: \"821473ca-1fe9-4299-b5ff-d2a202fee1cc\") " pod="openstack/barbican-keystone-listener-78dd995c5d-2ppbp" Mar 10 19:10:37 crc kubenswrapper[4861]: I0310 19:10:37.450338 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d1d0fe2f-b350-4bcd-8f3b-309092093033-combined-ca-bundle\") pod \"barbican-worker-745b4575bf-n9gzs\" (UID: \"d1d0fe2f-b350-4bcd-8f3b-309092093033\") " pod="openstack/barbican-worker-745b4575bf-n9gzs" Mar 10 19:10:37 crc kubenswrapper[4861]: I0310 19:10:37.450373 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/821473ca-1fe9-4299-b5ff-d2a202fee1cc-logs\") pod \"barbican-keystone-listener-78dd995c5d-2ppbp\" (UID: \"821473ca-1fe9-4299-b5ff-d2a202fee1cc\") " pod="openstack/barbican-keystone-listener-78dd995c5d-2ppbp" Mar 10 19:10:37 crc kubenswrapper[4861]: I0310 19:10:37.450414 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d1d0fe2f-b350-4bcd-8f3b-309092093033-config-data\") pod \"barbican-worker-745b4575bf-n9gzs\" (UID: \"d1d0fe2f-b350-4bcd-8f3b-309092093033\") " pod="openstack/barbican-worker-745b4575bf-n9gzs" Mar 10 19:10:37 crc kubenswrapper[4861]: I0310 19:10:37.450452 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d1d0fe2f-b350-4bcd-8f3b-309092093033-logs\") pod \"barbican-worker-745b4575bf-n9gzs\" 
(UID: \"d1d0fe2f-b350-4bcd-8f3b-309092093033\") " pod="openstack/barbican-worker-745b4575bf-n9gzs"
Mar 10 19:10:37 crc kubenswrapper[4861]: I0310 19:10:37.450473 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/821473ca-1fe9-4299-b5ff-d2a202fee1cc-config-data\") pod \"barbican-keystone-listener-78dd995c5d-2ppbp\" (UID: \"821473ca-1fe9-4299-b5ff-d2a202fee1cc\") " pod="openstack/barbican-keystone-listener-78dd995c5d-2ppbp"
Mar 10 19:10:37 crc kubenswrapper[4861]: I0310 19:10:37.450493 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d1d0fe2f-b350-4bcd-8f3b-309092093033-config-data-custom\") pod \"barbican-worker-745b4575bf-n9gzs\" (UID: \"d1d0fe2f-b350-4bcd-8f3b-309092093033\") " pod="openstack/barbican-worker-745b4575bf-n9gzs"
Mar 10 19:10:37 crc kubenswrapper[4861]: I0310 19:10:37.450522 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dbhl5\" (UniqueName: \"kubernetes.io/projected/821473ca-1fe9-4299-b5ff-d2a202fee1cc-kube-api-access-dbhl5\") pod \"barbican-keystone-listener-78dd995c5d-2ppbp\" (UID: \"821473ca-1fe9-4299-b5ff-d2a202fee1cc\") " pod="openstack/barbican-keystone-listener-78dd995c5d-2ppbp"
Mar 10 19:10:37 crc kubenswrapper[4861]: I0310 19:10:37.450573 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/821473ca-1fe9-4299-b5ff-d2a202fee1cc-combined-ca-bundle\") pod \"barbican-keystone-listener-78dd995c5d-2ppbp\" (UID: \"821473ca-1fe9-4299-b5ff-d2a202fee1cc\") " pod="openstack/barbican-keystone-listener-78dd995c5d-2ppbp"
Mar 10 19:10:37 crc kubenswrapper[4861]: I0310 19:10:37.450592 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xmbqn\" (UniqueName: \"kubernetes.io/projected/d1d0fe2f-b350-4bcd-8f3b-309092093033-kube-api-access-xmbqn\") pod \"barbican-worker-745b4575bf-n9gzs\" (UID: \"d1d0fe2f-b350-4bcd-8f3b-309092093033\") " pod="openstack/barbican-worker-745b4575bf-n9gzs"
Mar 10 19:10:37 crc kubenswrapper[4861]: I0310 19:10:37.451239 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d1d0fe2f-b350-4bcd-8f3b-309092093033-logs\") pod \"barbican-worker-745b4575bf-n9gzs\" (UID: \"d1d0fe2f-b350-4bcd-8f3b-309092093033\") " pod="openstack/barbican-worker-745b4575bf-n9gzs"
Mar 10 19:10:37 crc kubenswrapper[4861]: I0310 19:10:37.462495 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d1d0fe2f-b350-4bcd-8f3b-309092093033-combined-ca-bundle\") pod \"barbican-worker-745b4575bf-n9gzs\" (UID: \"d1d0fe2f-b350-4bcd-8f3b-309092093033\") " pod="openstack/barbican-worker-745b4575bf-n9gzs"
Mar 10 19:10:37 crc kubenswrapper[4861]: I0310 19:10:37.484813 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d1d0fe2f-b350-4bcd-8f3b-309092093033-config-data\") pod \"barbican-worker-745b4575bf-n9gzs\" (UID: \"d1d0fe2f-b350-4bcd-8f3b-309092093033\") " pod="openstack/barbican-worker-745b4575bf-n9gzs"
Mar 10 19:10:37 crc kubenswrapper[4861]: I0310 19:10:37.486693 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d1d0fe2f-b350-4bcd-8f3b-309092093033-config-data-custom\") pod \"barbican-worker-745b4575bf-n9gzs\" (UID: \"d1d0fe2f-b350-4bcd-8f3b-309092093033\") " pod="openstack/barbican-worker-745b4575bf-n9gzs"
Mar 10 19:10:37 crc kubenswrapper[4861]: I0310 19:10:37.490038 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8449d68f4f-wp6ch"]
Mar 10 19:10:37 crc kubenswrapper[4861]: I0310 19:10:37.511315 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xmbqn\" (UniqueName: \"kubernetes.io/projected/d1d0fe2f-b350-4bcd-8f3b-309092093033-kube-api-access-xmbqn\") pod \"barbican-worker-745b4575bf-n9gzs\" (UID: \"d1d0fe2f-b350-4bcd-8f3b-309092093033\") " pod="openstack/barbican-worker-745b4575bf-n9gzs"
Mar 10 19:10:37 crc kubenswrapper[4861]: I0310 19:10:37.544565 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-6859597b94-bpqx5"]
Mar 10 19:10:37 crc kubenswrapper[4861]: I0310 19:10:37.551792 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/821473ca-1fe9-4299-b5ff-d2a202fee1cc-config-data\") pod \"barbican-keystone-listener-78dd995c5d-2ppbp\" (UID: \"821473ca-1fe9-4299-b5ff-d2a202fee1cc\") " pod="openstack/barbican-keystone-listener-78dd995c5d-2ppbp"
Mar 10 19:10:37 crc kubenswrapper[4861]: I0310 19:10:37.551840 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f47f4213-5802-4231-a1ac-f826c52e6434-dns-swift-storage-0\") pod \"dnsmasq-dns-8449d68f4f-wp6ch\" (UID: \"f47f4213-5802-4231-a1ac-f826c52e6434\") " pod="openstack/dnsmasq-dns-8449d68f4f-wp6ch"
Mar 10 19:10:37 crc kubenswrapper[4861]: I0310 19:10:37.551880 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f47f4213-5802-4231-a1ac-f826c52e6434-ovsdbserver-sb\") pod \"dnsmasq-dns-8449d68f4f-wp6ch\" (UID: \"f47f4213-5802-4231-a1ac-f826c52e6434\") " pod="openstack/dnsmasq-dns-8449d68f4f-wp6ch"
Mar 10 19:10:37 crc kubenswrapper[4861]: I0310 19:10:37.551899 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dbhl5\" (UniqueName: \"kubernetes.io/projected/821473ca-1fe9-4299-b5ff-d2a202fee1cc-kube-api-access-dbhl5\") pod \"barbican-keystone-listener-78dd995c5d-2ppbp\" (UID: \"821473ca-1fe9-4299-b5ff-d2a202fee1cc\") " pod="openstack/barbican-keystone-listener-78dd995c5d-2ppbp"
Mar 10 19:10:37 crc kubenswrapper[4861]: I0310 19:10:37.551932 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f47f4213-5802-4231-a1ac-f826c52e6434-config\") pod \"dnsmasq-dns-8449d68f4f-wp6ch\" (UID: \"f47f4213-5802-4231-a1ac-f826c52e6434\") " pod="openstack/dnsmasq-dns-8449d68f4f-wp6ch"
Mar 10 19:10:37 crc kubenswrapper[4861]: I0310 19:10:37.551959 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/821473ca-1fe9-4299-b5ff-d2a202fee1cc-combined-ca-bundle\") pod \"barbican-keystone-listener-78dd995c5d-2ppbp\" (UID: \"821473ca-1fe9-4299-b5ff-d2a202fee1cc\") " pod="openstack/barbican-keystone-listener-78dd995c5d-2ppbp"
Mar 10 19:10:37 crc kubenswrapper[4861]: I0310 19:10:37.551991 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f47f4213-5802-4231-a1ac-f826c52e6434-dns-svc\") pod \"dnsmasq-dns-8449d68f4f-wp6ch\" (UID: \"f47f4213-5802-4231-a1ac-f826c52e6434\") " pod="openstack/dnsmasq-dns-8449d68f4f-wp6ch"
Mar 10 19:10:37 crc kubenswrapper[4861]: I0310 19:10:37.552007 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/821473ca-1fe9-4299-b5ff-d2a202fee1cc-config-data-custom\") pod \"barbican-keystone-listener-78dd995c5d-2ppbp\" (UID: \"821473ca-1fe9-4299-b5ff-d2a202fee1cc\") " pod="openstack/barbican-keystone-listener-78dd995c5d-2ppbp"
Mar 10 19:10:37 crc kubenswrapper[4861]: I0310 19:10:37.552041 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f47f4213-5802-4231-a1ac-f826c52e6434-ovsdbserver-nb\") pod \"dnsmasq-dns-8449d68f4f-wp6ch\" (UID: \"f47f4213-5802-4231-a1ac-f826c52e6434\") " pod="openstack/dnsmasq-dns-8449d68f4f-wp6ch"
Mar 10 19:10:37 crc kubenswrapper[4861]: I0310 19:10:37.552075 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/821473ca-1fe9-4299-b5ff-d2a202fee1cc-logs\") pod \"barbican-keystone-listener-78dd995c5d-2ppbp\" (UID: \"821473ca-1fe9-4299-b5ff-d2a202fee1cc\") " pod="openstack/barbican-keystone-listener-78dd995c5d-2ppbp"
Mar 10 19:10:37 crc kubenswrapper[4861]: I0310 19:10:37.552109 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tsndl\" (UniqueName: \"kubernetes.io/projected/f47f4213-5802-4231-a1ac-f826c52e6434-kube-api-access-tsndl\") pod \"dnsmasq-dns-8449d68f4f-wp6ch\" (UID: \"f47f4213-5802-4231-a1ac-f826c52e6434\") " pod="openstack/dnsmasq-dns-8449d68f4f-wp6ch"
Mar 10 19:10:37 crc kubenswrapper[4861]: I0310 19:10:37.552157 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0"
Mar 10 19:10:37 crc kubenswrapper[4861]: I0310 19:10:37.552195 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-84bb44bb99-8g2vb" event={"ID":"06936988-eb27-45c1-882e-890cca4cddfe","Type":"ContainerStarted","Data":"a3b089250c795f2618f460b04f1cfb948fb4c25d02b633b7c39764754740cb3c"}
Mar 10 19:10:37 crc kubenswrapper[4861]: I0310 19:10:37.552216 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-84bb44bb99-8g2vb"
Mar 10 19:10:37 crc kubenswrapper[4861]: I0310 19:10:37.552227 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0"
Mar 10 19:10:37 crc kubenswrapper[4861]: I0310 19:10:37.552315 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-6859597b94-bpqx5"
Mar 10 19:10:37 crc kubenswrapper[4861]: I0310 19:10:37.554090 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/821473ca-1fe9-4299-b5ff-d2a202fee1cc-logs\") pod \"barbican-keystone-listener-78dd995c5d-2ppbp\" (UID: \"821473ca-1fe9-4299-b5ff-d2a202fee1cc\") " pod="openstack/barbican-keystone-listener-78dd995c5d-2ppbp"
Mar 10 19:10:37 crc kubenswrapper[4861]: I0310 19:10:37.554781 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-api-config-data"
Mar 10 19:10:37 crc kubenswrapper[4861]: I0310 19:10:37.573944 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/821473ca-1fe9-4299-b5ff-d2a202fee1cc-combined-ca-bundle\") pod \"barbican-keystone-listener-78dd995c5d-2ppbp\" (UID: \"821473ca-1fe9-4299-b5ff-d2a202fee1cc\") " pod="openstack/barbican-keystone-listener-78dd995c5d-2ppbp"
Mar 10 19:10:37 crc kubenswrapper[4861]: I0310 19:10:37.574293 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/821473ca-1fe9-4299-b5ff-d2a202fee1cc-config-data\") pod \"barbican-keystone-listener-78dd995c5d-2ppbp\" (UID: \"821473ca-1fe9-4299-b5ff-d2a202fee1cc\") " pod="openstack/barbican-keystone-listener-78dd995c5d-2ppbp"
Mar 10 19:10:37 crc kubenswrapper[4861]: I0310 19:10:37.589907 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dbhl5\" (UniqueName: \"kubernetes.io/projected/821473ca-1fe9-4299-b5ff-d2a202fee1cc-kube-api-access-dbhl5\") pod \"barbican-keystone-listener-78dd995c5d-2ppbp\" (UID: \"821473ca-1fe9-4299-b5ff-d2a202fee1cc\") " pod="openstack/barbican-keystone-listener-78dd995c5d-2ppbp"
Mar 10 19:10:37 crc kubenswrapper[4861]: I0310 19:10:37.596729 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-745b4575bf-n9gzs"
Mar 10 19:10:37 crc kubenswrapper[4861]: I0310 19:10:37.600341 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/821473ca-1fe9-4299-b5ff-d2a202fee1cc-config-data-custom\") pod \"barbican-keystone-listener-78dd995c5d-2ppbp\" (UID: \"821473ca-1fe9-4299-b5ff-d2a202fee1cc\") " pod="openstack/barbican-keystone-listener-78dd995c5d-2ppbp"
Mar 10 19:10:37 crc kubenswrapper[4861]: I0310 19:10:37.615239 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-6859597b94-bpqx5"]
Mar 10 19:10:37 crc kubenswrapper[4861]: I0310 19:10:37.616067 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-78dd995c5d-2ppbp"
Mar 10 19:10:37 crc kubenswrapper[4861]: I0310 19:10:37.639765 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-7d66d9f78-7w6cc"]
Mar 10 19:10:37 crc kubenswrapper[4861]: I0310 19:10:37.641205 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-7d66d9f78-7w6cc"
Mar 10 19:10:37 crc kubenswrapper[4861]: I0310 19:10:37.653186 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/28b3646a-3fb7-489b-b91f-fe3a0260c5c7-config-data\") pod \"barbican-api-6859597b94-bpqx5\" (UID: \"28b3646a-3fb7-489b-b91f-fe3a0260c5c7\") " pod="openstack/barbican-api-6859597b94-bpqx5"
Mar 10 19:10:37 crc kubenswrapper[4861]: I0310 19:10:37.653235 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f47f4213-5802-4231-a1ac-f826c52e6434-config\") pod \"dnsmasq-dns-8449d68f4f-wp6ch\" (UID: \"f47f4213-5802-4231-a1ac-f826c52e6434\") " pod="openstack/dnsmasq-dns-8449d68f4f-wp6ch"
Mar 10 19:10:37 crc kubenswrapper[4861]: I0310 19:10:37.653317 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f47f4213-5802-4231-a1ac-f826c52e6434-dns-svc\") pod \"dnsmasq-dns-8449d68f4f-wp6ch\" (UID: \"f47f4213-5802-4231-a1ac-f826c52e6434\") " pod="openstack/dnsmasq-dns-8449d68f4f-wp6ch"
Mar 10 19:10:37 crc kubenswrapper[4861]: I0310 19:10:37.653346 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v742p\" (UniqueName: \"kubernetes.io/projected/28b3646a-3fb7-489b-b91f-fe3a0260c5c7-kube-api-access-v742p\") pod \"barbican-api-6859597b94-bpqx5\" (UID: \"28b3646a-3fb7-489b-b91f-fe3a0260c5c7\") " pod="openstack/barbican-api-6859597b94-bpqx5"
Mar 10 19:10:37 crc kubenswrapper[4861]: I0310 19:10:37.653392 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f47f4213-5802-4231-a1ac-f826c52e6434-ovsdbserver-nb\") pod \"dnsmasq-dns-8449d68f4f-wp6ch\" (UID: \"f47f4213-5802-4231-a1ac-f826c52e6434\") " pod="openstack/dnsmasq-dns-8449d68f4f-wp6ch"
Mar 10 19:10:37 crc kubenswrapper[4861]: I0310 19:10:37.654139 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/28b3646a-3fb7-489b-b91f-fe3a0260c5c7-combined-ca-bundle\") pod \"barbican-api-6859597b94-bpqx5\" (UID: \"28b3646a-3fb7-489b-b91f-fe3a0260c5c7\") " pod="openstack/barbican-api-6859597b94-bpqx5"
Mar 10 19:10:37 crc kubenswrapper[4861]: I0310 19:10:37.654168 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tsndl\" (UniqueName: \"kubernetes.io/projected/f47f4213-5802-4231-a1ac-f826c52e6434-kube-api-access-tsndl\") pod \"dnsmasq-dns-8449d68f4f-wp6ch\" (UID: \"f47f4213-5802-4231-a1ac-f826c52e6434\") " pod="openstack/dnsmasq-dns-8449d68f4f-wp6ch"
Mar 10 19:10:37 crc kubenswrapper[4861]: I0310 19:10:37.654194 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/28b3646a-3fb7-489b-b91f-fe3a0260c5c7-logs\") pod \"barbican-api-6859597b94-bpqx5\" (UID: \"28b3646a-3fb7-489b-b91f-fe3a0260c5c7\") " pod="openstack/barbican-api-6859597b94-bpqx5"
Mar 10 19:10:37 crc kubenswrapper[4861]: I0310 19:10:37.654264 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f47f4213-5802-4231-a1ac-f826c52e6434-dns-swift-storage-0\") pod \"dnsmasq-dns-8449d68f4f-wp6ch\" (UID: \"f47f4213-5802-4231-a1ac-f826c52e6434\") " pod="openstack/dnsmasq-dns-8449d68f4f-wp6ch"
Mar 10 19:10:37 crc kubenswrapper[4861]: I0310 19:10:37.654288 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/28b3646a-3fb7-489b-b91f-fe3a0260c5c7-config-data-custom\") pod \"barbican-api-6859597b94-bpqx5\" (UID: \"28b3646a-3fb7-489b-b91f-fe3a0260c5c7\") " pod="openstack/barbican-api-6859597b94-bpqx5"
Mar 10 19:10:37 crc kubenswrapper[4861]: I0310 19:10:37.654324 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f47f4213-5802-4231-a1ac-f826c52e6434-ovsdbserver-sb\") pod \"dnsmasq-dns-8449d68f4f-wp6ch\" (UID: \"f47f4213-5802-4231-a1ac-f826c52e6434\") " pod="openstack/dnsmasq-dns-8449d68f4f-wp6ch"
Mar 10 19:10:37 crc kubenswrapper[4861]: I0310 19:10:37.655622 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f47f4213-5802-4231-a1ac-f826c52e6434-config\") pod \"dnsmasq-dns-8449d68f4f-wp6ch\" (UID: \"f47f4213-5802-4231-a1ac-f826c52e6434\") " pod="openstack/dnsmasq-dns-8449d68f4f-wp6ch"
Mar 10 19:10:37 crc kubenswrapper[4861]: I0310 19:10:37.668824 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-7d66d9f78-7w6cc"]
Mar 10 19:10:37 crc kubenswrapper[4861]: I0310 19:10:37.669666 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f47f4213-5802-4231-a1ac-f826c52e6434-ovsdbserver-sb\") pod \"dnsmasq-dns-8449d68f4f-wp6ch\" (UID: \"f47f4213-5802-4231-a1ac-f826c52e6434\") " pod="openstack/dnsmasq-dns-8449d68f4f-wp6ch"
Mar 10 19:10:37 crc kubenswrapper[4861]: I0310 19:10:37.669923 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f47f4213-5802-4231-a1ac-f826c52e6434-dns-swift-storage-0\") pod \"dnsmasq-dns-8449d68f4f-wp6ch\" (UID: \"f47f4213-5802-4231-a1ac-f826c52e6434\") " pod="openstack/dnsmasq-dns-8449d68f4f-wp6ch"
Mar 10 19:10:37 crc kubenswrapper[4861]: I0310 19:10:37.670293 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f47f4213-5802-4231-a1ac-f826c52e6434-dns-svc\") pod \"dnsmasq-dns-8449d68f4f-wp6ch\" (UID: \"f47f4213-5802-4231-a1ac-f826c52e6434\") " pod="openstack/dnsmasq-dns-8449d68f4f-wp6ch"
Mar 10 19:10:37 crc kubenswrapper[4861]: I0310 19:10:37.670965 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f47f4213-5802-4231-a1ac-f826c52e6434-ovsdbserver-nb\") pod \"dnsmasq-dns-8449d68f4f-wp6ch\" (UID: \"f47f4213-5802-4231-a1ac-f826c52e6434\") " pod="openstack/dnsmasq-dns-8449d68f4f-wp6ch"
Mar 10 19:10:37 crc kubenswrapper[4861]: I0310 19:10:37.708755 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-7d79dbd48c-d74zl"]
Mar 10 19:10:37 crc kubenswrapper[4861]: I0310 19:10:37.710365 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-7d79dbd48c-d74zl"
Mar 10 19:10:37 crc kubenswrapper[4861]: I0310 19:10:37.729750 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tsndl\" (UniqueName: \"kubernetes.io/projected/f47f4213-5802-4231-a1ac-f826c52e6434-kube-api-access-tsndl\") pod \"dnsmasq-dns-8449d68f4f-wp6ch\" (UID: \"f47f4213-5802-4231-a1ac-f826c52e6434\") " pod="openstack/dnsmasq-dns-8449d68f4f-wp6ch"
Mar 10 19:10:37 crc kubenswrapper[4861]: I0310 19:10:37.765631 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v742p\" (UniqueName: \"kubernetes.io/projected/28b3646a-3fb7-489b-b91f-fe3a0260c5c7-kube-api-access-v742p\") pod \"barbican-api-6859597b94-bpqx5\" (UID: \"28b3646a-3fb7-489b-b91f-fe3a0260c5c7\") " pod="openstack/barbican-api-6859597b94-bpqx5"
Mar 10 19:10:37 crc kubenswrapper[4861]: I0310 19:10:37.765687 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1dff60e3-9ca8-461b-8d7e-018b626677e8-config-data-custom\") pod \"barbican-worker-7d79dbd48c-d74zl\" (UID: \"1dff60e3-9ca8-461b-8d7e-018b626677e8\") " pod="openstack/barbican-worker-7d79dbd48c-d74zl"
Mar 10 19:10:37 crc kubenswrapper[4861]: I0310 19:10:37.765735 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fpltx\" (UniqueName: \"kubernetes.io/projected/1dff60e3-9ca8-461b-8d7e-018b626677e8-kube-api-access-fpltx\") pod \"barbican-worker-7d79dbd48c-d74zl\" (UID: \"1dff60e3-9ca8-461b-8d7e-018b626677e8\") " pod="openstack/barbican-worker-7d79dbd48c-d74zl"
Mar 10 19:10:37 crc kubenswrapper[4861]: I0310 19:10:37.765760 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0fd5b5c7-5272-4d6a-b6ab-94ae5b644a3b-logs\") pod \"barbican-keystone-listener-7d66d9f78-7w6cc\" (UID: \"0fd5b5c7-5272-4d6a-b6ab-94ae5b644a3b\") " pod="openstack/barbican-keystone-listener-7d66d9f78-7w6cc"
Mar 10 19:10:37 crc kubenswrapper[4861]: I0310 19:10:37.765781 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1dff60e3-9ca8-461b-8d7e-018b626677e8-config-data\") pod \"barbican-worker-7d79dbd48c-d74zl\" (UID: \"1dff60e3-9ca8-461b-8d7e-018b626677e8\") " pod="openstack/barbican-worker-7d79dbd48c-d74zl"
Mar 10 19:10:37 crc kubenswrapper[4861]: I0310 19:10:37.765805 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n6j4d\" (UniqueName: \"kubernetes.io/projected/0fd5b5c7-5272-4d6a-b6ab-94ae5b644a3b-kube-api-access-n6j4d\") pod \"barbican-keystone-listener-7d66d9f78-7w6cc\" (UID: \"0fd5b5c7-5272-4d6a-b6ab-94ae5b644a3b\") " pod="openstack/barbican-keystone-listener-7d66d9f78-7w6cc"
Mar 10 19:10:37 crc kubenswrapper[4861]: I0310 19:10:37.765822 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1dff60e3-9ca8-461b-8d7e-018b626677e8-combined-ca-bundle\") pod \"barbican-worker-7d79dbd48c-d74zl\" (UID: \"1dff60e3-9ca8-461b-8d7e-018b626677e8\") " pod="openstack/barbican-worker-7d79dbd48c-d74zl"
Mar 10 19:10:37 crc kubenswrapper[4861]: I0310 19:10:37.765840 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0fd5b5c7-5272-4d6a-b6ab-94ae5b644a3b-config-data-custom\") pod \"barbican-keystone-listener-7d66d9f78-7w6cc\" (UID: \"0fd5b5c7-5272-4d6a-b6ab-94ae5b644a3b\") " pod="openstack/barbican-keystone-listener-7d66d9f78-7w6cc"
Mar 10 19:10:37 crc kubenswrapper[4861]: I0310 19:10:37.765859 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1dff60e3-9ca8-461b-8d7e-018b626677e8-logs\") pod \"barbican-worker-7d79dbd48c-d74zl\" (UID: \"1dff60e3-9ca8-461b-8d7e-018b626677e8\") " pod="openstack/barbican-worker-7d79dbd48c-d74zl"
Mar 10 19:10:37 crc kubenswrapper[4861]: I0310 19:10:37.765876 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/28b3646a-3fb7-489b-b91f-fe3a0260c5c7-combined-ca-bundle\") pod \"barbican-api-6859597b94-bpqx5\" (UID: \"28b3646a-3fb7-489b-b91f-fe3a0260c5c7\") " pod="openstack/barbican-api-6859597b94-bpqx5"
Mar 10 19:10:37 crc kubenswrapper[4861]: I0310 19:10:37.765895 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/28b3646a-3fb7-489b-b91f-fe3a0260c5c7-logs\") pod \"barbican-api-6859597b94-bpqx5\" (UID: \"28b3646a-3fb7-489b-b91f-fe3a0260c5c7\") " pod="openstack/barbican-api-6859597b94-bpqx5"
Mar 10 19:10:37 crc kubenswrapper[4861]: I0310 19:10:37.765926 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0fd5b5c7-5272-4d6a-b6ab-94ae5b644a3b-combined-ca-bundle\") pod \"barbican-keystone-listener-7d66d9f78-7w6cc\" (UID: \"0fd5b5c7-5272-4d6a-b6ab-94ae5b644a3b\") " pod="openstack/barbican-keystone-listener-7d66d9f78-7w6cc"
Mar 10 19:10:37 crc kubenswrapper[4861]: I0310 19:10:37.765955 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/28b3646a-3fb7-489b-b91f-fe3a0260c5c7-config-data-custom\") pod \"barbican-api-6859597b94-bpqx5\" (UID: \"28b3646a-3fb7-489b-b91f-fe3a0260c5c7\") " pod="openstack/barbican-api-6859597b94-bpqx5"
Mar 10 19:10:37 crc kubenswrapper[4861]: I0310 19:10:37.765987 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0fd5b5c7-5272-4d6a-b6ab-94ae5b644a3b-config-data\") pod \"barbican-keystone-listener-7d66d9f78-7w6cc\" (UID: \"0fd5b5c7-5272-4d6a-b6ab-94ae5b644a3b\") " pod="openstack/barbican-keystone-listener-7d66d9f78-7w6cc"
Mar 10 19:10:37 crc kubenswrapper[4861]: I0310 19:10:37.766020 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/28b3646a-3fb7-489b-b91f-fe3a0260c5c7-config-data\") pod \"barbican-api-6859597b94-bpqx5\" (UID: \"28b3646a-3fb7-489b-b91f-fe3a0260c5c7\") " pod="openstack/barbican-api-6859597b94-bpqx5"
Mar 10 19:10:37 crc kubenswrapper[4861]: I0310 19:10:37.766815 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-7d79dbd48c-d74zl"]
Mar 10 19:10:37 crc kubenswrapper[4861]: I0310 19:10:37.802106 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/28b3646a-3fb7-489b-b91f-fe3a0260c5c7-logs\") pod \"barbican-api-6859597b94-bpqx5\" (UID: \"28b3646a-3fb7-489b-b91f-fe3a0260c5c7\") " pod="openstack/barbican-api-6859597b94-bpqx5"
Mar 10 19:10:37 crc kubenswrapper[4861]: I0310 19:10:37.847105 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-84bb44bb99-8g2vb" podStartSLOduration=6.847079182 podStartE2EDuration="6.847079182s" podCreationTimestamp="2026-03-10 19:10:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-10 19:10:37.57022674 +0000 UTC m=+1381.333662710" watchObservedRunningTime="2026-03-10 19:10:37.847079182 +0000 UTC m=+1381.610515142"
Mar 10 19:10:37 crc kubenswrapper[4861]: I0310 19:10:37.864338 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/28b3646a-3fb7-489b-b91f-fe3a0260c5c7-config-data\") pod \"barbican-api-6859597b94-bpqx5\" (UID: \"28b3646a-3fb7-489b-b91f-fe3a0260c5c7\") " pod="openstack/barbican-api-6859597b94-bpqx5"
Mar 10 19:10:37 crc kubenswrapper[4861]: I0310 19:10:37.865721 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/28b3646a-3fb7-489b-b91f-fe3a0260c5c7-config-data-custom\") pod \"barbican-api-6859597b94-bpqx5\" (UID: \"28b3646a-3fb7-489b-b91f-fe3a0260c5c7\") " pod="openstack/barbican-api-6859597b94-bpqx5"
Mar 10 19:10:37 crc kubenswrapper[4861]: I0310 19:10:37.870684 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/28b3646a-3fb7-489b-b91f-fe3a0260c5c7-combined-ca-bundle\") pod \"barbican-api-6859597b94-bpqx5\" (UID: \"28b3646a-3fb7-489b-b91f-fe3a0260c5c7\") " pod="openstack/barbican-api-6859597b94-bpqx5"
Mar 10 19:10:37 crc kubenswrapper[4861]: I0310 19:10:37.872292 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0fd5b5c7-5272-4d6a-b6ab-94ae5b644a3b-combined-ca-bundle\") pod \"barbican-keystone-listener-7d66d9f78-7w6cc\" (UID: \"0fd5b5c7-5272-4d6a-b6ab-94ae5b644a3b\") " pod="openstack/barbican-keystone-listener-7d66d9f78-7w6cc"
Mar 10 19:10:37 crc kubenswrapper[4861]: I0310 19:10:37.872375 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0fd5b5c7-5272-4d6a-b6ab-94ae5b644a3b-config-data\") pod \"barbican-keystone-listener-7d66d9f78-7w6cc\" (UID: \"0fd5b5c7-5272-4d6a-b6ab-94ae5b644a3b\") " pod="openstack/barbican-keystone-listener-7d66d9f78-7w6cc"
Mar 10 19:10:37 crc kubenswrapper[4861]: I0310 19:10:37.872449 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1dff60e3-9ca8-461b-8d7e-018b626677e8-config-data-custom\") pod \"barbican-worker-7d79dbd48c-d74zl\" (UID: \"1dff60e3-9ca8-461b-8d7e-018b626677e8\") " pod="openstack/barbican-worker-7d79dbd48c-d74zl"
Mar 10 19:10:37 crc kubenswrapper[4861]: I0310 19:10:37.872483 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fpltx\" (UniqueName: \"kubernetes.io/projected/1dff60e3-9ca8-461b-8d7e-018b626677e8-kube-api-access-fpltx\") pod \"barbican-worker-7d79dbd48c-d74zl\" (UID: \"1dff60e3-9ca8-461b-8d7e-018b626677e8\") " pod="openstack/barbican-worker-7d79dbd48c-d74zl"
Mar 10 19:10:37 crc kubenswrapper[4861]: I0310 19:10:37.872510 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0fd5b5c7-5272-4d6a-b6ab-94ae5b644a3b-logs\") pod \"barbican-keystone-listener-7d66d9f78-7w6cc\" (UID: \"0fd5b5c7-5272-4d6a-b6ab-94ae5b644a3b\") " pod="openstack/barbican-keystone-listener-7d66d9f78-7w6cc"
Mar 10 19:10:37 crc kubenswrapper[4861]: I0310 19:10:37.872530 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1dff60e3-9ca8-461b-8d7e-018b626677e8-config-data\") pod \"barbican-worker-7d79dbd48c-d74zl\" (UID: \"1dff60e3-9ca8-461b-8d7e-018b626677e8\") " pod="openstack/barbican-worker-7d79dbd48c-d74zl"
Mar 10 19:10:37 crc kubenswrapper[4861]: I0310 19:10:37.872554 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n6j4d\" (UniqueName: \"kubernetes.io/projected/0fd5b5c7-5272-4d6a-b6ab-94ae5b644a3b-kube-api-access-n6j4d\") pod \"barbican-keystone-listener-7d66d9f78-7w6cc\" (UID: \"0fd5b5c7-5272-4d6a-b6ab-94ae5b644a3b\") " pod="openstack/barbican-keystone-listener-7d66d9f78-7w6cc"
Mar 10 19:10:37 crc kubenswrapper[4861]: I0310 19:10:37.872569 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1dff60e3-9ca8-461b-8d7e-018b626677e8-combined-ca-bundle\") pod \"barbican-worker-7d79dbd48c-d74zl\" (UID: \"1dff60e3-9ca8-461b-8d7e-018b626677e8\") " pod="openstack/barbican-worker-7d79dbd48c-d74zl"
Mar 10 19:10:37 crc kubenswrapper[4861]: I0310 19:10:37.872585 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0fd5b5c7-5272-4d6a-b6ab-94ae5b644a3b-config-data-custom\") pod \"barbican-keystone-listener-7d66d9f78-7w6cc\" (UID: \"0fd5b5c7-5272-4d6a-b6ab-94ae5b644a3b\") " pod="openstack/barbican-keystone-listener-7d66d9f78-7w6cc"
Mar 10 19:10:37 crc kubenswrapper[4861]: I0310 19:10:37.872605 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1dff60e3-9ca8-461b-8d7e-018b626677e8-logs\") pod \"barbican-worker-7d79dbd48c-d74zl\" (UID: \"1dff60e3-9ca8-461b-8d7e-018b626677e8\") " pod="openstack/barbican-worker-7d79dbd48c-d74zl"
Mar 10 19:10:37 crc kubenswrapper[4861]: I0310 19:10:37.873121 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1dff60e3-9ca8-461b-8d7e-018b626677e8-logs\") pod \"barbican-worker-7d79dbd48c-d74zl\" (UID: \"1dff60e3-9ca8-461b-8d7e-018b626677e8\") " pod="openstack/barbican-worker-7d79dbd48c-d74zl"
Mar 10 19:10:37 crc kubenswrapper[4861]: I0310 19:10:37.873836 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0fd5b5c7-5272-4d6a-b6ab-94ae5b644a3b-logs\") pod \"barbican-keystone-listener-7d66d9f78-7w6cc\" (UID: \"0fd5b5c7-5272-4d6a-b6ab-94ae5b644a3b\") " pod="openstack/barbican-keystone-listener-7d66d9f78-7w6cc"
Mar 10 19:10:37 crc kubenswrapper[4861]: I0310 19:10:37.874485 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v742p\" (UniqueName: \"kubernetes.io/projected/28b3646a-3fb7-489b-b91f-fe3a0260c5c7-kube-api-access-v742p\") pod \"barbican-api-6859597b94-bpqx5\" (UID: \"28b3646a-3fb7-489b-b91f-fe3a0260c5c7\") " pod="openstack/barbican-api-6859597b94-bpqx5"
Mar 10 19:10:37 crc kubenswrapper[4861]: I0310 19:10:37.883475 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-64c5bb5d74-nmtmm"]
Mar 10 19:10:37 crc kubenswrapper[4861]: I0310 19:10:37.884677 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1dff60e3-9ca8-461b-8d7e-018b626677e8-config-data\") pod \"barbican-worker-7d79dbd48c-d74zl\" (UID: \"1dff60e3-9ca8-461b-8d7e-018b626677e8\") " pod="openstack/barbican-worker-7d79dbd48c-d74zl"
Mar 10 19:10:37 crc kubenswrapper[4861]: I0310 19:10:37.885692 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-64c5bb5d74-nmtmm"
Mar 10 19:10:37 crc kubenswrapper[4861]: I0310 19:10:37.886494 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0fd5b5c7-5272-4d6a-b6ab-94ae5b644a3b-config-data\") pod \"barbican-keystone-listener-7d66d9f78-7w6cc\" (UID: \"0fd5b5c7-5272-4d6a-b6ab-94ae5b644a3b\") " pod="openstack/barbican-keystone-listener-7d66d9f78-7w6cc"
Mar 10 19:10:37 crc kubenswrapper[4861]: I0310 19:10:37.901319 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1dff60e3-9ca8-461b-8d7e-018b626677e8-config-data-custom\") pod \"barbican-worker-7d79dbd48c-d74zl\" (UID: \"1dff60e3-9ca8-461b-8d7e-018b626677e8\") " pod="openstack/barbican-worker-7d79dbd48c-d74zl"
Mar 10 19:10:37 crc kubenswrapper[4861]: I0310 19:10:37.920826 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-64c5bb5d74-nmtmm"]
Mar 10 19:10:37 crc kubenswrapper[4861]: I0310 19:10:37.927992 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1dff60e3-9ca8-461b-8d7e-018b626677e8-combined-ca-bundle\") pod \"barbican-worker-7d79dbd48c-d74zl\" (UID: \"1dff60e3-9ca8-461b-8d7e-018b626677e8\") " pod="openstack/barbican-worker-7d79dbd48c-d74zl"
Mar 10 19:10:37 crc kubenswrapper[4861]: I0310 19:10:37.928812 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0fd5b5c7-5272-4d6a-b6ab-94ae5b644a3b-config-data-custom\") pod \"barbican-keystone-listener-7d66d9f78-7w6cc\" (UID: \"0fd5b5c7-5272-4d6a-b6ab-94ae5b644a3b\") " pod="openstack/barbican-keystone-listener-7d66d9f78-7w6cc"
Mar 10 19:10:37 crc kubenswrapper[4861]: I0310 19:10:37.929210 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0fd5b5c7-5272-4d6a-b6ab-94ae5b644a3b-combined-ca-bundle\") pod \"barbican-keystone-listener-7d66d9f78-7w6cc\" (UID: \"0fd5b5c7-5272-4d6a-b6ab-94ae5b644a3b\") " pod="openstack/barbican-keystone-listener-7d66d9f78-7w6cc"
Mar 10 19:10:37 crc kubenswrapper[4861]: I0310 19:10:37.940265 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n6j4d\" (UniqueName: \"kubernetes.io/projected/0fd5b5c7-5272-4d6a-b6ab-94ae5b644a3b-kube-api-access-n6j4d\") pod \"barbican-keystone-listener-7d66d9f78-7w6cc\" (UID: \"0fd5b5c7-5272-4d6a-b6ab-94ae5b644a3b\") " pod="openstack/barbican-keystone-listener-7d66d9f78-7w6cc"
Mar 10 19:10:37 crc kubenswrapper[4861]: I0310 19:10:37.941325 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fpltx\" (UniqueName: \"kubernetes.io/projected/1dff60e3-9ca8-461b-8d7e-018b626677e8-kube-api-access-fpltx\") pod \"barbican-worker-7d79dbd48c-d74zl\" (UID: \"1dff60e3-9ca8-461b-8d7e-018b626677e8\") " pod="openstack/barbican-worker-7d79dbd48c-d74zl"
Mar 10 19:10:37 crc kubenswrapper[4861]: I0310 19:10:37.957834 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-7d74d76988-ck77x"]
Mar 10 19:10:37 crc kubenswrapper[4861]: I0310 19:10:37.966013 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-7d74d76988-ck77x"
Mar 10 19:10:37 crc kubenswrapper[4861]: I0310 19:10:37.970094 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-7d74d76988-ck77x"]
Mar 10 19:10:37 crc kubenswrapper[4861]: I0310 19:10:37.974224 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9140f7c5-893a-4128-85aa-2db96537b483-combined-ca-bundle\") pod \"placement-64c5bb5d74-nmtmm\" (UID: \"9140f7c5-893a-4128-85aa-2db96537b483\") " pod="openstack/placement-64c5bb5d74-nmtmm"
Mar 10 19:10:37 crc kubenswrapper[4861]: I0310 19:10:37.974277 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9140f7c5-893a-4128-85aa-2db96537b483-public-tls-certs\") pod \"placement-64c5bb5d74-nmtmm\" (UID: \"9140f7c5-893a-4128-85aa-2db96537b483\") " pod="openstack/placement-64c5bb5d74-nmtmm"
Mar 10 19:10:37 crc kubenswrapper[4861]: I0310 19:10:37.974305 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xwfv6\" (UniqueName: \"kubernetes.io/projected/9140f7c5-893a-4128-85aa-2db96537b483-kube-api-access-xwfv6\") pod \"placement-64c5bb5d74-nmtmm\" (UID: \"9140f7c5-893a-4128-85aa-2db96537b483\") " pod="openstack/placement-64c5bb5d74-nmtmm"
Mar 10 19:10:37 crc kubenswrapper[4861]: I0310 19:10:37.974327 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9140f7c5-893a-4128-85aa-2db96537b483-internal-tls-certs\") pod \"placement-64c5bb5d74-nmtmm\" (UID: \"9140f7c5-893a-4128-85aa-2db96537b483\") " pod="openstack/placement-64c5bb5d74-nmtmm"
Mar 10 19:10:37 crc kubenswrapper[4861]: I0310 19:10:37.974361 4861 reconciler_common.go:245]
"operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9140f7c5-893a-4128-85aa-2db96537b483-scripts\") pod \"placement-64c5bb5d74-nmtmm\" (UID: \"9140f7c5-893a-4128-85aa-2db96537b483\") " pod="openstack/placement-64c5bb5d74-nmtmm" Mar 10 19:10:37 crc kubenswrapper[4861]: I0310 19:10:37.974401 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9140f7c5-893a-4128-85aa-2db96537b483-config-data\") pod \"placement-64c5bb5d74-nmtmm\" (UID: \"9140f7c5-893a-4128-85aa-2db96537b483\") " pod="openstack/placement-64c5bb5d74-nmtmm" Mar 10 19:10:37 crc kubenswrapper[4861]: I0310 19:10:37.974436 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9140f7c5-893a-4128-85aa-2db96537b483-logs\") pod \"placement-64c5bb5d74-nmtmm\" (UID: \"9140f7c5-893a-4128-85aa-2db96537b483\") " pod="openstack/placement-64c5bb5d74-nmtmm" Mar 10 19:10:38 crc kubenswrapper[4861]: I0310 19:10:38.013874 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8449d68f4f-wp6ch" Mar 10 19:10:38 crc kubenswrapper[4861]: I0310 19:10:38.059148 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-6859597b94-bpqx5" Mar 10 19:10:38 crc kubenswrapper[4861]: I0310 19:10:38.079875 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9140f7c5-893a-4128-85aa-2db96537b483-logs\") pod \"placement-64c5bb5d74-nmtmm\" (UID: \"9140f7c5-893a-4128-85aa-2db96537b483\") " pod="openstack/placement-64c5bb5d74-nmtmm" Mar 10 19:10:38 crc kubenswrapper[4861]: I0310 19:10:38.081057 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2ea426e3-70e7-4b8a-a04f-e899f032bca5-logs\") pod \"barbican-api-7d74d76988-ck77x\" (UID: \"2ea426e3-70e7-4b8a-a04f-e899f032bca5\") " pod="openstack/barbican-api-7d74d76988-ck77x" Mar 10 19:10:38 crc kubenswrapper[4861]: I0310 19:10:38.081160 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2ea426e3-70e7-4b8a-a04f-e899f032bca5-config-data-custom\") pod \"barbican-api-7d74d76988-ck77x\" (UID: \"2ea426e3-70e7-4b8a-a04f-e899f032bca5\") " pod="openstack/barbican-api-7d74d76988-ck77x" Mar 10 19:10:38 crc kubenswrapper[4861]: I0310 19:10:38.081230 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9140f7c5-893a-4128-85aa-2db96537b483-combined-ca-bundle\") pod \"placement-64c5bb5d74-nmtmm\" (UID: \"9140f7c5-893a-4128-85aa-2db96537b483\") " pod="openstack/placement-64c5bb5d74-nmtmm" Mar 10 19:10:38 crc kubenswrapper[4861]: I0310 19:10:38.081334 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9140f7c5-893a-4128-85aa-2db96537b483-public-tls-certs\") pod \"placement-64c5bb5d74-nmtmm\" (UID: \"9140f7c5-893a-4128-85aa-2db96537b483\") " 
pod="openstack/placement-64c5bb5d74-nmtmm" Mar 10 19:10:38 crc kubenswrapper[4861]: I0310 19:10:38.081429 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xwfv6\" (UniqueName: \"kubernetes.io/projected/9140f7c5-893a-4128-85aa-2db96537b483-kube-api-access-xwfv6\") pod \"placement-64c5bb5d74-nmtmm\" (UID: \"9140f7c5-893a-4128-85aa-2db96537b483\") " pod="openstack/placement-64c5bb5d74-nmtmm" Mar 10 19:10:38 crc kubenswrapper[4861]: I0310 19:10:38.081517 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9140f7c5-893a-4128-85aa-2db96537b483-internal-tls-certs\") pod \"placement-64c5bb5d74-nmtmm\" (UID: \"9140f7c5-893a-4128-85aa-2db96537b483\") " pod="openstack/placement-64c5bb5d74-nmtmm" Mar 10 19:10:38 crc kubenswrapper[4861]: I0310 19:10:38.081617 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9140f7c5-893a-4128-85aa-2db96537b483-scripts\") pod \"placement-64c5bb5d74-nmtmm\" (UID: \"9140f7c5-893a-4128-85aa-2db96537b483\") " pod="openstack/placement-64c5bb5d74-nmtmm" Mar 10 19:10:38 crc kubenswrapper[4861]: I0310 19:10:38.081750 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9140f7c5-893a-4128-85aa-2db96537b483-config-data\") pod \"placement-64c5bb5d74-nmtmm\" (UID: \"9140f7c5-893a-4128-85aa-2db96537b483\") " pod="openstack/placement-64c5bb5d74-nmtmm" Mar 10 19:10:38 crc kubenswrapper[4861]: I0310 19:10:38.081823 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ea426e3-70e7-4b8a-a04f-e899f032bca5-combined-ca-bundle\") pod \"barbican-api-7d74d76988-ck77x\" (UID: \"2ea426e3-70e7-4b8a-a04f-e899f032bca5\") " pod="openstack/barbican-api-7d74d76988-ck77x" Mar 10 
19:10:38 crc kubenswrapper[4861]: I0310 19:10:38.081908 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ffwzz\" (UniqueName: \"kubernetes.io/projected/2ea426e3-70e7-4b8a-a04f-e899f032bca5-kube-api-access-ffwzz\") pod \"barbican-api-7d74d76988-ck77x\" (UID: \"2ea426e3-70e7-4b8a-a04f-e899f032bca5\") " pod="openstack/barbican-api-7d74d76988-ck77x" Mar 10 19:10:38 crc kubenswrapper[4861]: I0310 19:10:38.082001 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2ea426e3-70e7-4b8a-a04f-e899f032bca5-config-data\") pod \"barbican-api-7d74d76988-ck77x\" (UID: \"2ea426e3-70e7-4b8a-a04f-e899f032bca5\") " pod="openstack/barbican-api-7d74d76988-ck77x" Mar 10 19:10:38 crc kubenswrapper[4861]: I0310 19:10:38.082422 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9140f7c5-893a-4128-85aa-2db96537b483-logs\") pod \"placement-64c5bb5d74-nmtmm\" (UID: \"9140f7c5-893a-4128-85aa-2db96537b483\") " pod="openstack/placement-64c5bb5d74-nmtmm" Mar 10 19:10:38 crc kubenswrapper[4861]: I0310 19:10:38.083461 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-keystone-listener-7d66d9f78-7w6cc" Mar 10 19:10:38 crc kubenswrapper[4861]: I0310 19:10:38.086364 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9140f7c5-893a-4128-85aa-2db96537b483-combined-ca-bundle\") pod \"placement-64c5bb5d74-nmtmm\" (UID: \"9140f7c5-893a-4128-85aa-2db96537b483\") " pod="openstack/placement-64c5bb5d74-nmtmm" Mar 10 19:10:38 crc kubenswrapper[4861]: I0310 19:10:38.092242 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9140f7c5-893a-4128-85aa-2db96537b483-public-tls-certs\") pod \"placement-64c5bb5d74-nmtmm\" (UID: \"9140f7c5-893a-4128-85aa-2db96537b483\") " pod="openstack/placement-64c5bb5d74-nmtmm" Mar 10 19:10:38 crc kubenswrapper[4861]: I0310 19:10:38.100140 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9140f7c5-893a-4128-85aa-2db96537b483-internal-tls-certs\") pod \"placement-64c5bb5d74-nmtmm\" (UID: \"9140f7c5-893a-4128-85aa-2db96537b483\") " pod="openstack/placement-64c5bb5d74-nmtmm" Mar 10 19:10:38 crc kubenswrapper[4861]: I0310 19:10:38.112139 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9140f7c5-893a-4128-85aa-2db96537b483-scripts\") pod \"placement-64c5bb5d74-nmtmm\" (UID: \"9140f7c5-893a-4128-85aa-2db96537b483\") " pod="openstack/placement-64c5bb5d74-nmtmm" Mar 10 19:10:38 crc kubenswrapper[4861]: I0310 19:10:38.112392 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9140f7c5-893a-4128-85aa-2db96537b483-config-data\") pod \"placement-64c5bb5d74-nmtmm\" (UID: \"9140f7c5-893a-4128-85aa-2db96537b483\") " pod="openstack/placement-64c5bb5d74-nmtmm" Mar 10 19:10:38 crc kubenswrapper[4861]: 
I0310 19:10:38.128470 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xwfv6\" (UniqueName: \"kubernetes.io/projected/9140f7c5-893a-4128-85aa-2db96537b483-kube-api-access-xwfv6\") pod \"placement-64c5bb5d74-nmtmm\" (UID: \"9140f7c5-893a-4128-85aa-2db96537b483\") " pod="openstack/placement-64c5bb5d74-nmtmm" Mar 10 19:10:38 crc kubenswrapper[4861]: I0310 19:10:38.129116 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-7d79dbd48c-d74zl" Mar 10 19:10:38 crc kubenswrapper[4861]: I0310 19:10:38.164096 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-64c5bb5d74-nmtmm" Mar 10 19:10:38 crc kubenswrapper[4861]: I0310 19:10:38.188873 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2ea426e3-70e7-4b8a-a04f-e899f032bca5-logs\") pod \"barbican-api-7d74d76988-ck77x\" (UID: \"2ea426e3-70e7-4b8a-a04f-e899f032bca5\") " pod="openstack/barbican-api-7d74d76988-ck77x" Mar 10 19:10:38 crc kubenswrapper[4861]: I0310 19:10:38.188952 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2ea426e3-70e7-4b8a-a04f-e899f032bca5-config-data-custom\") pod \"barbican-api-7d74d76988-ck77x\" (UID: \"2ea426e3-70e7-4b8a-a04f-e899f032bca5\") " pod="openstack/barbican-api-7d74d76988-ck77x" Mar 10 19:10:38 crc kubenswrapper[4861]: I0310 19:10:38.189091 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ea426e3-70e7-4b8a-a04f-e899f032bca5-combined-ca-bundle\") pod \"barbican-api-7d74d76988-ck77x\" (UID: \"2ea426e3-70e7-4b8a-a04f-e899f032bca5\") " pod="openstack/barbican-api-7d74d76988-ck77x" Mar 10 19:10:38 crc kubenswrapper[4861]: I0310 19:10:38.189111 4861 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-ffwzz\" (UniqueName: \"kubernetes.io/projected/2ea426e3-70e7-4b8a-a04f-e899f032bca5-kube-api-access-ffwzz\") pod \"barbican-api-7d74d76988-ck77x\" (UID: \"2ea426e3-70e7-4b8a-a04f-e899f032bca5\") " pod="openstack/barbican-api-7d74d76988-ck77x" Mar 10 19:10:38 crc kubenswrapper[4861]: I0310 19:10:38.189147 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2ea426e3-70e7-4b8a-a04f-e899f032bca5-config-data\") pod \"barbican-api-7d74d76988-ck77x\" (UID: \"2ea426e3-70e7-4b8a-a04f-e899f032bca5\") " pod="openstack/barbican-api-7d74d76988-ck77x" Mar 10 19:10:38 crc kubenswrapper[4861]: I0310 19:10:38.190903 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2ea426e3-70e7-4b8a-a04f-e899f032bca5-logs\") pod \"barbican-api-7d74d76988-ck77x\" (UID: \"2ea426e3-70e7-4b8a-a04f-e899f032bca5\") " pod="openstack/barbican-api-7d74d76988-ck77x" Mar 10 19:10:38 crc kubenswrapper[4861]: I0310 19:10:38.198280 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ea426e3-70e7-4b8a-a04f-e899f032bca5-combined-ca-bundle\") pod \"barbican-api-7d74d76988-ck77x\" (UID: \"2ea426e3-70e7-4b8a-a04f-e899f032bca5\") " pod="openstack/barbican-api-7d74d76988-ck77x" Mar 10 19:10:38 crc kubenswrapper[4861]: I0310 19:10:38.199291 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2ea426e3-70e7-4b8a-a04f-e899f032bca5-config-data-custom\") pod \"barbican-api-7d74d76988-ck77x\" (UID: \"2ea426e3-70e7-4b8a-a04f-e899f032bca5\") " pod="openstack/barbican-api-7d74d76988-ck77x" Mar 10 19:10:38 crc kubenswrapper[4861]: I0310 19:10:38.201933 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/2ea426e3-70e7-4b8a-a04f-e899f032bca5-config-data\") pod \"barbican-api-7d74d76988-ck77x\" (UID: \"2ea426e3-70e7-4b8a-a04f-e899f032bca5\") " pod="openstack/barbican-api-7d74d76988-ck77x" Mar 10 19:10:38 crc kubenswrapper[4861]: I0310 19:10:38.268737 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ffwzz\" (UniqueName: \"kubernetes.io/projected/2ea426e3-70e7-4b8a-a04f-e899f032bca5-kube-api-access-ffwzz\") pod \"barbican-api-7d74d76988-ck77x\" (UID: \"2ea426e3-70e7-4b8a-a04f-e899f032bca5\") " pod="openstack/barbican-api-7d74d76988-ck77x" Mar 10 19:10:38 crc kubenswrapper[4861]: I0310 19:10:38.479701 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-7d74d76988-ck77x" Mar 10 19:10:38 crc kubenswrapper[4861]: I0310 19:10:38.479996 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-75ff4ff987-k4jks"] Mar 10 19:10:38 crc kubenswrapper[4861]: I0310 19:10:38.571894 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-75ff4ff987-k4jks" event={"ID":"fb082653-4ce1-4696-b6fb-e6af12109812","Type":"ContainerStarted","Data":"5d3daf0ab59775482559c39acb798c5253548a77d6f5e862ff7ae676a64f9ff5"} Mar 10 19:10:38 crc kubenswrapper[4861]: I0310 19:10:38.572001 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7859c7799c-v4js4" podUID="ddb62402-c4f6-49ff-b0cc-f669a86f906d" containerName="dnsmasq-dns" containerID="cri-o://e3bbd076ef00f24b93b5605c9aa6033542494900db16c64fb3aee9827cabdacc" gracePeriod=10 Mar 10 19:10:38 crc kubenswrapper[4861]: I0310 19:10:38.676296 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-7f85b8cc7d-lblq8"] Mar 10 19:10:38 crc kubenswrapper[4861]: W0310 19:10:38.975030 4861 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf47f4213_5802_4231_a1ac_f826c52e6434.slice/crio-2e0437b931c3ce3974e4ebe88dea2b1df05c9cd1bcff6d608deb84f78dd274d3 WatchSource:0}: Error finding container 2e0437b931c3ce3974e4ebe88dea2b1df05c9cd1bcff6d608deb84f78dd274d3: Status 404 returned error can't find the container with id 2e0437b931c3ce3974e4ebe88dea2b1df05c9cd1bcff6d608deb84f78dd274d3 Mar 10 19:10:38 crc kubenswrapper[4861]: I0310 19:10:38.988924 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8449d68f4f-wp6ch"] Mar 10 19:10:39 crc kubenswrapper[4861]: I0310 19:10:39.005897 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-78dd995c5d-2ppbp"] Mar 10 19:10:39 crc kubenswrapper[4861]: I0310 19:10:39.021641 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-745b4575bf-n9gzs"] Mar 10 19:10:39 crc kubenswrapper[4861]: W0310 19:10:39.025124 4861 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod821473ca_1fe9_4299_b5ff_d2a202fee1cc.slice/crio-095f51f3c4fc85b315bb7b6b2c24ec49e65b5beb4f2b9c72b64f48d6286887b4 WatchSource:0}: Error finding container 095f51f3c4fc85b315bb7b6b2c24ec49e65b5beb4f2b9c72b64f48d6286887b4: Status 404 returned error can't find the container with id 095f51f3c4fc85b315bb7b6b2c24ec49e65b5beb4f2b9c72b64f48d6286887b4 Mar 10 19:10:39 crc kubenswrapper[4861]: I0310 19:10:39.224309 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-7d74d76988-ck77x"] Mar 10 19:10:39 crc kubenswrapper[4861]: I0310 19:10:39.256671 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-7d79dbd48c-d74zl"] Mar 10 19:10:39 crc kubenswrapper[4861]: I0310 19:10:39.279357 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-7d66d9f78-7w6cc"] Mar 10 19:10:39 crc 
kubenswrapper[4861]: I0310 19:10:39.285434 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-6859597b94-bpqx5"] Mar 10 19:10:39 crc kubenswrapper[4861]: I0310 19:10:39.298721 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-64c5bb5d74-nmtmm"] Mar 10 19:10:39 crc kubenswrapper[4861]: I0310 19:10:39.349785 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7859c7799c-v4js4" Mar 10 19:10:39 crc kubenswrapper[4861]: I0310 19:10:39.441362 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ddb62402-c4f6-49ff-b0cc-f669a86f906d-ovsdbserver-sb\") pod \"ddb62402-c4f6-49ff-b0cc-f669a86f906d\" (UID: \"ddb62402-c4f6-49ff-b0cc-f669a86f906d\") " Mar 10 19:10:39 crc kubenswrapper[4861]: I0310 19:10:39.441448 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ddb62402-c4f6-49ff-b0cc-f669a86f906d-dns-swift-storage-0\") pod \"ddb62402-c4f6-49ff-b0cc-f669a86f906d\" (UID: \"ddb62402-c4f6-49ff-b0cc-f669a86f906d\") " Mar 10 19:10:39 crc kubenswrapper[4861]: I0310 19:10:39.441477 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ddb62402-c4f6-49ff-b0cc-f669a86f906d-ovsdbserver-nb\") pod \"ddb62402-c4f6-49ff-b0cc-f669a86f906d\" (UID: \"ddb62402-c4f6-49ff-b0cc-f669a86f906d\") " Mar 10 19:10:39 crc kubenswrapper[4861]: I0310 19:10:39.441540 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ddb62402-c4f6-49ff-b0cc-f669a86f906d-config\") pod \"ddb62402-c4f6-49ff-b0cc-f669a86f906d\" (UID: \"ddb62402-c4f6-49ff-b0cc-f669a86f906d\") " Mar 10 19:10:39 crc kubenswrapper[4861]: I0310 19:10:39.441563 4861 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r56cr\" (UniqueName: \"kubernetes.io/projected/ddb62402-c4f6-49ff-b0cc-f669a86f906d-kube-api-access-r56cr\") pod \"ddb62402-c4f6-49ff-b0cc-f669a86f906d\" (UID: \"ddb62402-c4f6-49ff-b0cc-f669a86f906d\") " Mar 10 19:10:39 crc kubenswrapper[4861]: I0310 19:10:39.441592 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ddb62402-c4f6-49ff-b0cc-f669a86f906d-dns-svc\") pod \"ddb62402-c4f6-49ff-b0cc-f669a86f906d\" (UID: \"ddb62402-c4f6-49ff-b0cc-f669a86f906d\") " Mar 10 19:10:39 crc kubenswrapper[4861]: I0310 19:10:39.479871 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ddb62402-c4f6-49ff-b0cc-f669a86f906d-kube-api-access-r56cr" (OuterVolumeSpecName: "kube-api-access-r56cr") pod "ddb62402-c4f6-49ff-b0cc-f669a86f906d" (UID: "ddb62402-c4f6-49ff-b0cc-f669a86f906d"). InnerVolumeSpecName "kube-api-access-r56cr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 10 19:10:39 crc kubenswrapper[4861]: I0310 19:10:39.543946 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r56cr\" (UniqueName: \"kubernetes.io/projected/ddb62402-c4f6-49ff-b0cc-f669a86f906d-kube-api-access-r56cr\") on node \"crc\" DevicePath \"\"" Mar 10 19:10:39 crc kubenswrapper[4861]: I0310 19:10:39.602038 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-745b4575bf-n9gzs" event={"ID":"d1d0fe2f-b350-4bcd-8f3b-309092093033","Type":"ContainerStarted","Data":"22ce4f92df7673fda01181cdd99827f4e1ad03ead9567a795df8eba2ced61d63"} Mar 10 19:10:39 crc kubenswrapper[4861]: I0310 19:10:39.604327 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-7f85b8cc7d-lblq8" event={"ID":"ba01933c-1abf-473e-b55a-8e9ec135d938","Type":"ContainerStarted","Data":"deccce97179e323d6fd0eada256f42bb3519a8c6edd6ba2f5ea1ad56fc05ee2e"} Mar 10 19:10:39 crc kubenswrapper[4861]: I0310 19:10:39.604378 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-7f85b8cc7d-lblq8" event={"ID":"ba01933c-1abf-473e-b55a-8e9ec135d938","Type":"ContainerStarted","Data":"331245e2e3c0b3e4cb3850eea7daaf47e042acfbd4e39a58f2a3945658ae6834"} Mar 10 19:10:39 crc kubenswrapper[4861]: I0310 19:10:39.606770 4861 generic.go:334] "Generic (PLEG): container finished" podID="f47f4213-5802-4231-a1ac-f826c52e6434" containerID="65acecbb995bd5b647616fd126a73925751e736b52a02d020201d7d8827988b4" exitCode=0 Mar 10 19:10:39 crc kubenswrapper[4861]: I0310 19:10:39.606823 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8449d68f4f-wp6ch" event={"ID":"f47f4213-5802-4231-a1ac-f826c52e6434","Type":"ContainerDied","Data":"65acecbb995bd5b647616fd126a73925751e736b52a02d020201d7d8827988b4"} Mar 10 19:10:39 crc kubenswrapper[4861]: I0310 19:10:39.606839 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/dnsmasq-dns-8449d68f4f-wp6ch" event={"ID":"f47f4213-5802-4231-a1ac-f826c52e6434","Type":"ContainerStarted","Data":"2e0437b931c3ce3974e4ebe88dea2b1df05c9cd1bcff6d608deb84f78dd274d3"} Mar 10 19:10:39 crc kubenswrapper[4861]: I0310 19:10:39.609596 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6859597b94-bpqx5" event={"ID":"28b3646a-3fb7-489b-b91f-fe3a0260c5c7","Type":"ContainerStarted","Data":"6a0ceeb788c3ac7151a6e3e4f6fbf645e0415de00f195b310021115abde69495"} Mar 10 19:10:39 crc kubenswrapper[4861]: I0310 19:10:39.613096 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-64c5bb5d74-nmtmm" event={"ID":"9140f7c5-893a-4128-85aa-2db96537b483","Type":"ContainerStarted","Data":"1c412a58ce00ee085bd86380d453656709c52d05ec9b8481474604be4d509245"} Mar 10 19:10:39 crc kubenswrapper[4861]: I0310 19:10:39.614816 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-75ff4ff987-k4jks" event={"ID":"fb082653-4ce1-4696-b6fb-e6af12109812","Type":"ContainerStarted","Data":"f76d7260b09a019f1b1946fc0f28f65bac1ccd091168cb03b683df54b231ae29"} Mar 10 19:10:39 crc kubenswrapper[4861]: I0310 19:10:39.614910 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-75ff4ff987-k4jks" Mar 10 19:10:39 crc kubenswrapper[4861]: I0310 19:10:39.617017 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-78dd995c5d-2ppbp" event={"ID":"821473ca-1fe9-4299-b5ff-d2a202fee1cc","Type":"ContainerStarted","Data":"095f51f3c4fc85b315bb7b6b2c24ec49e65b5beb4f2b9c72b64f48d6286887b4"} Mar 10 19:10:39 crc kubenswrapper[4861]: I0310 19:10:39.619570 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7d74d76988-ck77x" event={"ID":"2ea426e3-70e7-4b8a-a04f-e899f032bca5","Type":"ContainerStarted","Data":"e1f2923967f53ea2adfa0fc7c05053bee8c0e896bd71cbe994c4052a146c4e68"} Mar 10 19:10:39 crc kubenswrapper[4861]: 
I0310 19:10:39.621546 4861 generic.go:334] "Generic (PLEG): container finished" podID="ddb62402-c4f6-49ff-b0cc-f669a86f906d" containerID="e3bbd076ef00f24b93b5605c9aa6033542494900db16c64fb3aee9827cabdacc" exitCode=0 Mar 10 19:10:39 crc kubenswrapper[4861]: I0310 19:10:39.621607 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7859c7799c-v4js4" event={"ID":"ddb62402-c4f6-49ff-b0cc-f669a86f906d","Type":"ContainerDied","Data":"e3bbd076ef00f24b93b5605c9aa6033542494900db16c64fb3aee9827cabdacc"} Mar 10 19:10:39 crc kubenswrapper[4861]: I0310 19:10:39.621624 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7859c7799c-v4js4" event={"ID":"ddb62402-c4f6-49ff-b0cc-f669a86f906d","Type":"ContainerDied","Data":"db5b628d95a1cdc3f02c89a2be00608e47394ac32301da96400960720606d6eb"} Mar 10 19:10:39 crc kubenswrapper[4861]: I0310 19:10:39.621639 4861 scope.go:117] "RemoveContainer" containerID="e3bbd076ef00f24b93b5605c9aa6033542494900db16c64fb3aee9827cabdacc" Mar 10 19:10:39 crc kubenswrapper[4861]: I0310 19:10:39.621832 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7859c7799c-v4js4" Mar 10 19:10:39 crc kubenswrapper[4861]: I0310 19:10:39.632509 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-7d79dbd48c-d74zl" event={"ID":"1dff60e3-9ca8-461b-8d7e-018b626677e8","Type":"ContainerStarted","Data":"3c4835bc7fa99120f1d9a9c1c3388848a1d6b09cfcf3007463ca41c1efd036af"} Mar 10 19:10:39 crc kubenswrapper[4861]: I0310 19:10:39.649317 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ddb62402-c4f6-49ff-b0cc-f669a86f906d-config" (OuterVolumeSpecName: "config") pod "ddb62402-c4f6-49ff-b0cc-f669a86f906d" (UID: "ddb62402-c4f6-49ff-b0cc-f669a86f906d"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 10 19:10:39 crc kubenswrapper[4861]: I0310 19:10:39.667918 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ddb62402-c4f6-49ff-b0cc-f669a86f906d-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "ddb62402-c4f6-49ff-b0cc-f669a86f906d" (UID: "ddb62402-c4f6-49ff-b0cc-f669a86f906d"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 10 19:10:39 crc kubenswrapper[4861]: I0310 19:10:39.675304 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ddb62402-c4f6-49ff-b0cc-f669a86f906d-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "ddb62402-c4f6-49ff-b0cc-f669a86f906d" (UID: "ddb62402-c4f6-49ff-b0cc-f669a86f906d"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 10 19:10:39 crc kubenswrapper[4861]: I0310 19:10:39.679406 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ddb62402-c4f6-49ff-b0cc-f669a86f906d-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "ddb62402-c4f6-49ff-b0cc-f669a86f906d" (UID: "ddb62402-c4f6-49ff-b0cc-f669a86f906d"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 10 19:10:39 crc kubenswrapper[4861]: I0310 19:10:39.681194 4861 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 10 19:10:39 crc kubenswrapper[4861]: I0310 19:10:39.681218 4861 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 10 19:10:39 crc kubenswrapper[4861]: I0310 19:10:39.681975 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-7d66d9f78-7w6cc" event={"ID":"0fd5b5c7-5272-4d6a-b6ab-94ae5b644a3b","Type":"ContainerStarted","Data":"64e9ce47a0560f1ca8e8e249135ed7ec90f15608963f6de1fa05e5a5119e978e"} Mar 10 19:10:39 crc kubenswrapper[4861]: I0310 19:10:39.687676 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-75ff4ff987-k4jks" podStartSLOduration=2.687655969 podStartE2EDuration="2.687655969s" podCreationTimestamp="2026-03-10 19:10:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-10 19:10:39.670804871 +0000 UTC m=+1383.434240841" watchObservedRunningTime="2026-03-10 19:10:39.687655969 +0000 UTC m=+1383.451091929" Mar 10 19:10:39 crc kubenswrapper[4861]: I0310 19:10:39.712685 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ddb62402-c4f6-49ff-b0cc-f669a86f906d-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "ddb62402-c4f6-49ff-b0cc-f669a86f906d" (UID: "ddb62402-c4f6-49ff-b0cc-f669a86f906d"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 10 19:10:39 crc kubenswrapper[4861]: I0310 19:10:39.747961 4861 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ddb62402-c4f6-49ff-b0cc-f669a86f906d-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Mar 10 19:10:39 crc kubenswrapper[4861]: I0310 19:10:39.748000 4861 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ddb62402-c4f6-49ff-b0cc-f669a86f906d-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Mar 10 19:10:39 crc kubenswrapper[4861]: I0310 19:10:39.748331 4861 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ddb62402-c4f6-49ff-b0cc-f669a86f906d-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Mar 10 19:10:39 crc kubenswrapper[4861]: I0310 19:10:39.748343 4861 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ddb62402-c4f6-49ff-b0cc-f669a86f906d-config\") on node \"crc\" DevicePath \"\"" Mar 10 19:10:39 crc kubenswrapper[4861]: I0310 19:10:39.748352 4861 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ddb62402-c4f6-49ff-b0cc-f669a86f906d-dns-svc\") on node \"crc\" DevicePath \"\"" Mar 10 19:10:39 crc kubenswrapper[4861]: I0310 19:10:39.854457 4861 scope.go:117] "RemoveContainer" containerID="689a41d4dd9d7db44a63648789b7d3c9db11e43686405b1ad5b3dd8cb4459553" Mar 10 19:10:39 crc kubenswrapper[4861]: I0310 19:10:39.912661 4861 scope.go:117] "RemoveContainer" containerID="e3bbd076ef00f24b93b5605c9aa6033542494900db16c64fb3aee9827cabdacc" Mar 10 19:10:39 crc kubenswrapper[4861]: E0310 19:10:39.913152 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e3bbd076ef00f24b93b5605c9aa6033542494900db16c64fb3aee9827cabdacc\": 
container with ID starting with e3bbd076ef00f24b93b5605c9aa6033542494900db16c64fb3aee9827cabdacc not found: ID does not exist" containerID="e3bbd076ef00f24b93b5605c9aa6033542494900db16c64fb3aee9827cabdacc" Mar 10 19:10:39 crc kubenswrapper[4861]: I0310 19:10:39.913182 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e3bbd076ef00f24b93b5605c9aa6033542494900db16c64fb3aee9827cabdacc"} err="failed to get container status \"e3bbd076ef00f24b93b5605c9aa6033542494900db16c64fb3aee9827cabdacc\": rpc error: code = NotFound desc = could not find container \"e3bbd076ef00f24b93b5605c9aa6033542494900db16c64fb3aee9827cabdacc\": container with ID starting with e3bbd076ef00f24b93b5605c9aa6033542494900db16c64fb3aee9827cabdacc not found: ID does not exist" Mar 10 19:10:39 crc kubenswrapper[4861]: I0310 19:10:39.913200 4861 scope.go:117] "RemoveContainer" containerID="689a41d4dd9d7db44a63648789b7d3c9db11e43686405b1ad5b3dd8cb4459553" Mar 10 19:10:39 crc kubenswrapper[4861]: E0310 19:10:39.913369 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"689a41d4dd9d7db44a63648789b7d3c9db11e43686405b1ad5b3dd8cb4459553\": container with ID starting with 689a41d4dd9d7db44a63648789b7d3c9db11e43686405b1ad5b3dd8cb4459553 not found: ID does not exist" containerID="689a41d4dd9d7db44a63648789b7d3c9db11e43686405b1ad5b3dd8cb4459553" Mar 10 19:10:39 crc kubenswrapper[4861]: I0310 19:10:39.913387 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"689a41d4dd9d7db44a63648789b7d3c9db11e43686405b1ad5b3dd8cb4459553"} err="failed to get container status \"689a41d4dd9d7db44a63648789b7d3c9db11e43686405b1ad5b3dd8cb4459553\": rpc error: code = NotFound desc = could not find container \"689a41d4dd9d7db44a63648789b7d3c9db11e43686405b1ad5b3dd8cb4459553\": container with ID starting with 
689a41d4dd9d7db44a63648789b7d3c9db11e43686405b1ad5b3dd8cb4459553 not found: ID does not exist" Mar 10 19:10:40 crc kubenswrapper[4861]: I0310 19:10:40.016750 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7859c7799c-v4js4"] Mar 10 19:10:40 crc kubenswrapper[4861]: I0310 19:10:40.024066 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7859c7799c-v4js4"] Mar 10 19:10:40 crc kubenswrapper[4861]: I0310 19:10:40.374464 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Mar 10 19:10:40 crc kubenswrapper[4861]: I0310 19:10:40.550357 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Mar 10 19:10:40 crc kubenswrapper[4861]: I0310 19:10:40.710622 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-7f85b8cc7d-lblq8" event={"ID":"ba01933c-1abf-473e-b55a-8e9ec135d938","Type":"ContainerStarted","Data":"bde3d9c9aaee4e8258fb39f5c0a3e22f3839b11608811e64d4f8cb990ed089da"} Mar 10 19:10:40 crc kubenswrapper[4861]: I0310 19:10:40.710948 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-7f85b8cc7d-lblq8" Mar 10 19:10:40 crc kubenswrapper[4861]: I0310 19:10:40.710975 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-7f85b8cc7d-lblq8" Mar 10 19:10:40 crc kubenswrapper[4861]: I0310 19:10:40.715129 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7d74d76988-ck77x" event={"ID":"2ea426e3-70e7-4b8a-a04f-e899f032bca5","Type":"ContainerStarted","Data":"ad3ae82d4dbe854427fcf01d3a73ec902807e9cda4f1ff64c48711716b2cdf24"} Mar 10 19:10:40 crc kubenswrapper[4861]: I0310 19:10:40.715178 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7d74d76988-ck77x" 
event={"ID":"2ea426e3-70e7-4b8a-a04f-e899f032bca5","Type":"ContainerStarted","Data":"10a0614cb3d2e9f3cc63de42d9e0267ef3020f28f77569843470867d1ee93a7e"} Mar 10 19:10:40 crc kubenswrapper[4861]: I0310 19:10:40.715670 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-7d74d76988-ck77x" Mar 10 19:10:40 crc kubenswrapper[4861]: I0310 19:10:40.715726 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-7d74d76988-ck77x" Mar 10 19:10:40 crc kubenswrapper[4861]: I0310 19:10:40.733768 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-7f85b8cc7d-lblq8" podStartSLOduration=3.733749663 podStartE2EDuration="3.733749663s" podCreationTimestamp="2026-03-10 19:10:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-10 19:10:40.731063278 +0000 UTC m=+1384.494499248" watchObservedRunningTime="2026-03-10 19:10:40.733749663 +0000 UTC m=+1384.497185623" Mar 10 19:10:40 crc kubenswrapper[4861]: I0310 19:10:40.738638 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8449d68f4f-wp6ch" event={"ID":"f47f4213-5802-4231-a1ac-f826c52e6434","Type":"ContainerStarted","Data":"a212161bdbdd2fa85d5b54a024b19064fd22a0511dd519367758786efd0612a3"} Mar 10 19:10:40 crc kubenswrapper[4861]: I0310 19:10:40.738678 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-8449d68f4f-wp6ch" Mar 10 19:10:40 crc kubenswrapper[4861]: I0310 19:10:40.766896 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-7d74d76988-ck77x" podStartSLOduration=3.766877603 podStartE2EDuration="3.766877603s" podCreationTimestamp="2026-03-10 19:10:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-03-10 19:10:40.765906497 +0000 UTC m=+1384.529342457" watchObservedRunningTime="2026-03-10 19:10:40.766877603 +0000 UTC m=+1384.530313563" Mar 10 19:10:40 crc kubenswrapper[4861]: I0310 19:10:40.781262 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6859597b94-bpqx5" event={"ID":"28b3646a-3fb7-489b-b91f-fe3a0260c5c7","Type":"ContainerStarted","Data":"2c4a20a6926b2b45109714063a6825d1bc01f762161802e7e326bfbe8fdd61bd"} Mar 10 19:10:40 crc kubenswrapper[4861]: I0310 19:10:40.781305 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6859597b94-bpqx5" event={"ID":"28b3646a-3fb7-489b-b91f-fe3a0260c5c7","Type":"ContainerStarted","Data":"77889d8b335ed9abb93b14387aed804af4a196f608a103719669d146103818c8"} Mar 10 19:10:40 crc kubenswrapper[4861]: I0310 19:10:40.781342 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-6859597b94-bpqx5" Mar 10 19:10:40 crc kubenswrapper[4861]: I0310 19:10:40.781365 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-6859597b94-bpqx5" Mar 10 19:10:40 crc kubenswrapper[4861]: I0310 19:10:40.789697 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-8449d68f4f-wp6ch" podStartSLOduration=3.789678507 podStartE2EDuration="3.789678507s" podCreationTimestamp="2026-03-10 19:10:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-10 19:10:40.785926643 +0000 UTC m=+1384.549362623" watchObservedRunningTime="2026-03-10 19:10:40.789678507 +0000 UTC m=+1384.553114467" Mar 10 19:10:40 crc kubenswrapper[4861]: I0310 19:10:40.814136 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-64c5bb5d74-nmtmm" 
event={"ID":"9140f7c5-893a-4128-85aa-2db96537b483","Type":"ContainerStarted","Data":"e98ecd9f074cea32b652b4dc4bff3f04e0cf6f8c51cb075afe090dd79b3971a4"} Mar 10 19:10:40 crc kubenswrapper[4861]: I0310 19:10:40.814199 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-64c5bb5d74-nmtmm" event={"ID":"9140f7c5-893a-4128-85aa-2db96537b483","Type":"ContainerStarted","Data":"22089e443eba78df73bea89d1cfc14591cb3869b317c834f46d0e6f03d2fcf87"} Mar 10 19:10:40 crc kubenswrapper[4861]: I0310 19:10:40.814247 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-64c5bb5d74-nmtmm" Mar 10 19:10:40 crc kubenswrapper[4861]: I0310 19:10:40.814273 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-64c5bb5d74-nmtmm" Mar 10 19:10:40 crc kubenswrapper[4861]: I0310 19:10:40.821228 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-6859597b94-bpqx5" podStartSLOduration=3.821209102 podStartE2EDuration="3.821209102s" podCreationTimestamp="2026-03-10 19:10:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-10 19:10:40.803352277 +0000 UTC m=+1384.566788247" watchObservedRunningTime="2026-03-10 19:10:40.821209102 +0000 UTC m=+1384.584645062" Mar 10 19:10:40 crc kubenswrapper[4861]: I0310 19:10:40.836617 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-64c5bb5d74-nmtmm" podStartSLOduration=3.83659808 podStartE2EDuration="3.83659808s" podCreationTimestamp="2026-03-10 19:10:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-10 19:10:40.830522602 +0000 UTC m=+1384.593958562" watchObservedRunningTime="2026-03-10 19:10:40.83659808 +0000 UTC m=+1384.600034040" Mar 10 19:10:40 crc kubenswrapper[4861]: I0310 
19:10:40.972567 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ddb62402-c4f6-49ff-b0cc-f669a86f906d" path="/var/lib/kubelet/pods/ddb62402-c4f6-49ff-b0cc-f669a86f906d/volumes" Mar 10 19:10:41 crc kubenswrapper[4861]: I0310 19:10:41.273746 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-6859597b94-bpqx5"] Mar 10 19:10:41 crc kubenswrapper[4861]: I0310 19:10:41.313668 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-5d84cf8948-mg4jb"] Mar 10 19:10:41 crc kubenswrapper[4861]: E0310 19:10:41.314030 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ddb62402-c4f6-49ff-b0cc-f669a86f906d" containerName="dnsmasq-dns" Mar 10 19:10:41 crc kubenswrapper[4861]: I0310 19:10:41.314049 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="ddb62402-c4f6-49ff-b0cc-f669a86f906d" containerName="dnsmasq-dns" Mar 10 19:10:41 crc kubenswrapper[4861]: E0310 19:10:41.314062 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ddb62402-c4f6-49ff-b0cc-f669a86f906d" containerName="init" Mar 10 19:10:41 crc kubenswrapper[4861]: I0310 19:10:41.314068 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="ddb62402-c4f6-49ff-b0cc-f669a86f906d" containerName="init" Mar 10 19:10:41 crc kubenswrapper[4861]: I0310 19:10:41.314241 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="ddb62402-c4f6-49ff-b0cc-f669a86f906d" containerName="dnsmasq-dns" Mar 10 19:10:41 crc kubenswrapper[4861]: I0310 19:10:41.315107 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-5d84cf8948-mg4jb" Mar 10 19:10:41 crc kubenswrapper[4861]: I0310 19:10:41.321943 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-internal-svc" Mar 10 19:10:41 crc kubenswrapper[4861]: I0310 19:10:41.322157 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-public-svc" Mar 10 19:10:41 crc kubenswrapper[4861]: I0310 19:10:41.342282 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-5d84cf8948-mg4jb"] Mar 10 19:10:41 crc kubenswrapper[4861]: I0310 19:10:41.402665 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/79b1b96d-52e3-4a16-8fc1-d09188b5ebc1-combined-ca-bundle\") pod \"barbican-api-5d84cf8948-mg4jb\" (UID: \"79b1b96d-52e3-4a16-8fc1-d09188b5ebc1\") " pod="openstack/barbican-api-5d84cf8948-mg4jb" Mar 10 19:10:41 crc kubenswrapper[4861]: I0310 19:10:41.402762 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/79b1b96d-52e3-4a16-8fc1-d09188b5ebc1-config-data-custom\") pod \"barbican-api-5d84cf8948-mg4jb\" (UID: \"79b1b96d-52e3-4a16-8fc1-d09188b5ebc1\") " pod="openstack/barbican-api-5d84cf8948-mg4jb" Mar 10 19:10:41 crc kubenswrapper[4861]: I0310 19:10:41.402823 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/79b1b96d-52e3-4a16-8fc1-d09188b5ebc1-internal-tls-certs\") pod \"barbican-api-5d84cf8948-mg4jb\" (UID: \"79b1b96d-52e3-4a16-8fc1-d09188b5ebc1\") " pod="openstack/barbican-api-5d84cf8948-mg4jb" Mar 10 19:10:41 crc kubenswrapper[4861]: I0310 19:10:41.402847 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/79b1b96d-52e3-4a16-8fc1-d09188b5ebc1-public-tls-certs\") pod \"barbican-api-5d84cf8948-mg4jb\" (UID: \"79b1b96d-52e3-4a16-8fc1-d09188b5ebc1\") " pod="openstack/barbican-api-5d84cf8948-mg4jb" Mar 10 19:10:41 crc kubenswrapper[4861]: I0310 19:10:41.402866 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-72f42\" (UniqueName: \"kubernetes.io/projected/79b1b96d-52e3-4a16-8fc1-d09188b5ebc1-kube-api-access-72f42\") pod \"barbican-api-5d84cf8948-mg4jb\" (UID: \"79b1b96d-52e3-4a16-8fc1-d09188b5ebc1\") " pod="openstack/barbican-api-5d84cf8948-mg4jb" Mar 10 19:10:41 crc kubenswrapper[4861]: I0310 19:10:41.402956 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/79b1b96d-52e3-4a16-8fc1-d09188b5ebc1-logs\") pod \"barbican-api-5d84cf8948-mg4jb\" (UID: \"79b1b96d-52e3-4a16-8fc1-d09188b5ebc1\") " pod="openstack/barbican-api-5d84cf8948-mg4jb" Mar 10 19:10:41 crc kubenswrapper[4861]: I0310 19:10:41.403116 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/79b1b96d-52e3-4a16-8fc1-d09188b5ebc1-config-data\") pod \"barbican-api-5d84cf8948-mg4jb\" (UID: \"79b1b96d-52e3-4a16-8fc1-d09188b5ebc1\") " pod="openstack/barbican-api-5d84cf8948-mg4jb" Mar 10 19:10:41 crc kubenswrapper[4861]: I0310 19:10:41.506311 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/79b1b96d-52e3-4a16-8fc1-d09188b5ebc1-config-data-custom\") pod \"barbican-api-5d84cf8948-mg4jb\" (UID: \"79b1b96d-52e3-4a16-8fc1-d09188b5ebc1\") " pod="openstack/barbican-api-5d84cf8948-mg4jb" Mar 10 19:10:41 crc kubenswrapper[4861]: I0310 19:10:41.506880 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/79b1b96d-52e3-4a16-8fc1-d09188b5ebc1-internal-tls-certs\") pod \"barbican-api-5d84cf8948-mg4jb\" (UID: \"79b1b96d-52e3-4a16-8fc1-d09188b5ebc1\") " pod="openstack/barbican-api-5d84cf8948-mg4jb" Mar 10 19:10:41 crc kubenswrapper[4861]: I0310 19:10:41.506915 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/79b1b96d-52e3-4a16-8fc1-d09188b5ebc1-public-tls-certs\") pod \"barbican-api-5d84cf8948-mg4jb\" (UID: \"79b1b96d-52e3-4a16-8fc1-d09188b5ebc1\") " pod="openstack/barbican-api-5d84cf8948-mg4jb" Mar 10 19:10:41 crc kubenswrapper[4861]: I0310 19:10:41.506940 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-72f42\" (UniqueName: \"kubernetes.io/projected/79b1b96d-52e3-4a16-8fc1-d09188b5ebc1-kube-api-access-72f42\") pod \"barbican-api-5d84cf8948-mg4jb\" (UID: \"79b1b96d-52e3-4a16-8fc1-d09188b5ebc1\") " pod="openstack/barbican-api-5d84cf8948-mg4jb" Mar 10 19:10:41 crc kubenswrapper[4861]: I0310 19:10:41.506962 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/79b1b96d-52e3-4a16-8fc1-d09188b5ebc1-logs\") pod \"barbican-api-5d84cf8948-mg4jb\" (UID: \"79b1b96d-52e3-4a16-8fc1-d09188b5ebc1\") " pod="openstack/barbican-api-5d84cf8948-mg4jb" Mar 10 19:10:41 crc kubenswrapper[4861]: I0310 19:10:41.507004 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/79b1b96d-52e3-4a16-8fc1-d09188b5ebc1-config-data\") pod \"barbican-api-5d84cf8948-mg4jb\" (UID: \"79b1b96d-52e3-4a16-8fc1-d09188b5ebc1\") " pod="openstack/barbican-api-5d84cf8948-mg4jb" Mar 10 19:10:41 crc kubenswrapper[4861]: I0310 19:10:41.507074 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/79b1b96d-52e3-4a16-8fc1-d09188b5ebc1-combined-ca-bundle\") pod \"barbican-api-5d84cf8948-mg4jb\" (UID: \"79b1b96d-52e3-4a16-8fc1-d09188b5ebc1\") " pod="openstack/barbican-api-5d84cf8948-mg4jb" Mar 10 19:10:41 crc kubenswrapper[4861]: I0310 19:10:41.508034 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/79b1b96d-52e3-4a16-8fc1-d09188b5ebc1-logs\") pod \"barbican-api-5d84cf8948-mg4jb\" (UID: \"79b1b96d-52e3-4a16-8fc1-d09188b5ebc1\") " pod="openstack/barbican-api-5d84cf8948-mg4jb" Mar 10 19:10:41 crc kubenswrapper[4861]: I0310 19:10:41.512472 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/79b1b96d-52e3-4a16-8fc1-d09188b5ebc1-combined-ca-bundle\") pod \"barbican-api-5d84cf8948-mg4jb\" (UID: \"79b1b96d-52e3-4a16-8fc1-d09188b5ebc1\") " pod="openstack/barbican-api-5d84cf8948-mg4jb" Mar 10 19:10:41 crc kubenswrapper[4861]: I0310 19:10:41.512974 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/79b1b96d-52e3-4a16-8fc1-d09188b5ebc1-config-data\") pod \"barbican-api-5d84cf8948-mg4jb\" (UID: \"79b1b96d-52e3-4a16-8fc1-d09188b5ebc1\") " pod="openstack/barbican-api-5d84cf8948-mg4jb" Mar 10 19:10:41 crc kubenswrapper[4861]: I0310 19:10:41.513544 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/79b1b96d-52e3-4a16-8fc1-d09188b5ebc1-config-data-custom\") pod \"barbican-api-5d84cf8948-mg4jb\" (UID: \"79b1b96d-52e3-4a16-8fc1-d09188b5ebc1\") " pod="openstack/barbican-api-5d84cf8948-mg4jb" Mar 10 19:10:41 crc kubenswrapper[4861]: I0310 19:10:41.514362 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/79b1b96d-52e3-4a16-8fc1-d09188b5ebc1-public-tls-certs\") pod 
\"barbican-api-5d84cf8948-mg4jb\" (UID: \"79b1b96d-52e3-4a16-8fc1-d09188b5ebc1\") " pod="openstack/barbican-api-5d84cf8948-mg4jb" Mar 10 19:10:41 crc kubenswrapper[4861]: I0310 19:10:41.522099 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/79b1b96d-52e3-4a16-8fc1-d09188b5ebc1-internal-tls-certs\") pod \"barbican-api-5d84cf8948-mg4jb\" (UID: \"79b1b96d-52e3-4a16-8fc1-d09188b5ebc1\") " pod="openstack/barbican-api-5d84cf8948-mg4jb" Mar 10 19:10:41 crc kubenswrapper[4861]: I0310 19:10:41.524796 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-72f42\" (UniqueName: \"kubernetes.io/projected/79b1b96d-52e3-4a16-8fc1-d09188b5ebc1-kube-api-access-72f42\") pod \"barbican-api-5d84cf8948-mg4jb\" (UID: \"79b1b96d-52e3-4a16-8fc1-d09188b5ebc1\") " pod="openstack/barbican-api-5d84cf8948-mg4jb" Mar 10 19:10:41 crc kubenswrapper[4861]: I0310 19:10:41.659100 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-5d84cf8948-mg4jb" Mar 10 19:10:42 crc kubenswrapper[4861]: I0310 19:10:42.447633 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-5d84cf8948-mg4jb"] Mar 10 19:10:42 crc kubenswrapper[4861]: I0310 19:10:42.831112 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-745b4575bf-n9gzs" event={"ID":"d1d0fe2f-b350-4bcd-8f3b-309092093033","Type":"ContainerStarted","Data":"53f40f65dbf2b5342c07708fab0b8ba756e94c83e29d6a3d7dcbfe416c979714"} Mar 10 19:10:42 crc kubenswrapper[4861]: I0310 19:10:42.831383 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-745b4575bf-n9gzs" event={"ID":"d1d0fe2f-b350-4bcd-8f3b-309092093033","Type":"ContainerStarted","Data":"8655e221eb11bf63a70e86c66d6744578123fc680e32596fe2585f10c0dc6ce4"} Mar 10 19:10:42 crc kubenswrapper[4861]: I0310 19:10:42.836348 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-7d79dbd48c-d74zl" event={"ID":"1dff60e3-9ca8-461b-8d7e-018b626677e8","Type":"ContainerStarted","Data":"089b96e97216568cdb4fbb8c2c133afa10fecad9fb2b8caed329a4989628a54c"} Mar 10 19:10:42 crc kubenswrapper[4861]: I0310 19:10:42.836392 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-7d79dbd48c-d74zl" event={"ID":"1dff60e3-9ca8-461b-8d7e-018b626677e8","Type":"ContainerStarted","Data":"3b70421bd87d6bdf27fe679226a122192d55a1ff889b9eda172d225605e8cb57"} Mar 10 19:10:42 crc kubenswrapper[4861]: I0310 19:10:42.838345 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5d84cf8948-mg4jb" event={"ID":"79b1b96d-52e3-4a16-8fc1-d09188b5ebc1","Type":"ContainerStarted","Data":"93c0c7d20bdf5ba3484b74d8194ad89a2b4dad5778c0dc5d0876a0f04995cc86"} Mar 10 19:10:42 crc kubenswrapper[4861]: I0310 19:10:42.838368 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5d84cf8948-mg4jb" 
event={"ID":"79b1b96d-52e3-4a16-8fc1-d09188b5ebc1","Type":"ContainerStarted","Data":"4b1f746cd725920aeee99b23e4c08bf7d8c523580d3e57266ad014c8ed8e2ed0"} Mar 10 19:10:42 crc kubenswrapper[4861]: I0310 19:10:42.838380 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5d84cf8948-mg4jb" event={"ID":"79b1b96d-52e3-4a16-8fc1-d09188b5ebc1","Type":"ContainerStarted","Data":"61337d83a0befc3df80253c2d54f15f756dbf9bc38f9e6b7beddf7d2c9e3e4d6"} Mar 10 19:10:42 crc kubenswrapper[4861]: I0310 19:10:42.838809 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-5d84cf8948-mg4jb" Mar 10 19:10:42 crc kubenswrapper[4861]: I0310 19:10:42.838836 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-5d84cf8948-mg4jb" Mar 10 19:10:42 crc kubenswrapper[4861]: I0310 19:10:42.839874 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-7d66d9f78-7w6cc" event={"ID":"0fd5b5c7-5272-4d6a-b6ab-94ae5b644a3b","Type":"ContainerStarted","Data":"25d62dcf2f9ee02dabc824de069bb687d2e04cbd1aeea376d842ffcff19fc683"} Mar 10 19:10:42 crc kubenswrapper[4861]: I0310 19:10:42.839898 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-7d66d9f78-7w6cc" event={"ID":"0fd5b5c7-5272-4d6a-b6ab-94ae5b644a3b","Type":"ContainerStarted","Data":"e7ad0e828f858766d2e6ad1bdb4f2618535e12c3dbd09fac481532edeab03184"} Mar 10 19:10:42 crc kubenswrapper[4861]: I0310 19:10:42.840915 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-78dd995c5d-2ppbp" event={"ID":"821473ca-1fe9-4299-b5ff-d2a202fee1cc","Type":"ContainerStarted","Data":"a1c9bbb6ac6b18aa6ab64a362d786f1181aaa20d69c535d2fc669c95e5b0894a"} Mar 10 19:10:42 crc kubenswrapper[4861]: I0310 19:10:42.840938 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-78dd995c5d-2ppbp" 
event={"ID":"821473ca-1fe9-4299-b5ff-d2a202fee1cc","Type":"ContainerStarted","Data":"0bd467585ccc535438602e8f3515caae4b4a28b7a280309c9eb8b270c834e65d"} Mar 10 19:10:42 crc kubenswrapper[4861]: I0310 19:10:42.842541 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-6859597b94-bpqx5" podUID="28b3646a-3fb7-489b-b91f-fe3a0260c5c7" containerName="barbican-api-log" containerID="cri-o://77889d8b335ed9abb93b14387aed804af4a196f608a103719669d146103818c8" gracePeriod=30 Mar 10 19:10:42 crc kubenswrapper[4861]: I0310 19:10:42.842788 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-2x6cv" event={"ID":"6369ade3-a8af-44b7-94be-736523f99512","Type":"ContainerStarted","Data":"87fb93e566a663de0b618099d7590f0336c0350139f6d4b367758d4d661aeb71"} Mar 10 19:10:42 crc kubenswrapper[4861]: I0310 19:10:42.843572 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-6859597b94-bpqx5" podUID="28b3646a-3fb7-489b-b91f-fe3a0260c5c7" containerName="barbican-api" containerID="cri-o://2c4a20a6926b2b45109714063a6825d1bc01f762161802e7e326bfbe8fdd61bd" gracePeriod=30 Mar 10 19:10:42 crc kubenswrapper[4861]: I0310 19:10:42.869162 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-745b4575bf-n9gzs" podStartSLOduration=2.915243453 podStartE2EDuration="5.869144791s" podCreationTimestamp="2026-03-10 19:10:37 +0000 UTC" firstStartedPulling="2026-03-10 19:10:39.024658269 +0000 UTC m=+1382.788094229" lastFinishedPulling="2026-03-10 19:10:41.978559607 +0000 UTC m=+1385.741995567" observedRunningTime="2026-03-10 19:10:42.851059789 +0000 UTC m=+1386.614495749" watchObservedRunningTime="2026-03-10 19:10:42.869144791 +0000 UTC m=+1386.632580741" Mar 10 19:10:42 crc kubenswrapper[4861]: I0310 19:10:42.869251 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/barbican-keystone-listener-7d66d9f78-7w6cc" podStartSLOduration=3.237862396 podStartE2EDuration="5.869247503s" podCreationTimestamp="2026-03-10 19:10:37 +0000 UTC" firstStartedPulling="2026-03-10 19:10:39.346750338 +0000 UTC m=+1383.110186298" lastFinishedPulling="2026-03-10 19:10:41.978135445 +0000 UTC m=+1385.741571405" observedRunningTime="2026-03-10 19:10:42.866446646 +0000 UTC m=+1386.629882606" watchObservedRunningTime="2026-03-10 19:10:42.869247503 +0000 UTC m=+1386.632683463" Mar 10 19:10:42 crc kubenswrapper[4861]: I0310 19:10:42.884769 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-sync-2x6cv" podStartSLOduration=2.435798719 podStartE2EDuration="40.884753175s" podCreationTimestamp="2026-03-10 19:10:02 +0000 UTC" firstStartedPulling="2026-03-10 19:10:03.529645892 +0000 UTC m=+1347.293081852" lastFinishedPulling="2026-03-10 19:10:41.978600348 +0000 UTC m=+1385.742036308" observedRunningTime="2026-03-10 19:10:42.883276393 +0000 UTC m=+1386.646712353" watchObservedRunningTime="2026-03-10 19:10:42.884753175 +0000 UTC m=+1386.648189135" Mar 10 19:10:42 crc kubenswrapper[4861]: I0310 19:10:42.909164 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-5d84cf8948-mg4jb" podStartSLOduration=1.909147333 podStartE2EDuration="1.909147333s" podCreationTimestamp="2026-03-10 19:10:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-10 19:10:42.900182933 +0000 UTC m=+1386.663618893" watchObservedRunningTime="2026-03-10 19:10:42.909147333 +0000 UTC m=+1386.672583293" Mar 10 19:10:42 crc kubenswrapper[4861]: I0310 19:10:42.910756 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-keystone-listener-78dd995c5d-2ppbp"] Mar 10 19:10:42 crc kubenswrapper[4861]: I0310 19:10:42.919179 4861 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="openstack/barbican-keystone-listener-78dd995c5d-2ppbp" podStartSLOduration=2.967942045 podStartE2EDuration="5.91916609s" podCreationTimestamp="2026-03-10 19:10:37 +0000 UTC" firstStartedPulling="2026-03-10 19:10:39.027870288 +0000 UTC m=+1382.791306248" lastFinishedPulling="2026-03-10 19:10:41.979094333 +0000 UTC m=+1385.742530293" observedRunningTime="2026-03-10 19:10:42.917947537 +0000 UTC m=+1386.681383497" watchObservedRunningTime="2026-03-10 19:10:42.91916609 +0000 UTC m=+1386.682602050"
Mar 10 19:10:42 crc kubenswrapper[4861]: I0310 19:10:42.944117 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-7d79dbd48c-d74zl" podStartSLOduration=3.330722895 podStartE2EDuration="5.944100193s" podCreationTimestamp="2026-03-10 19:10:37 +0000 UTC" firstStartedPulling="2026-03-10 19:10:39.363176764 +0000 UTC m=+1383.126612724" lastFinishedPulling="2026-03-10 19:10:41.976554062 +0000 UTC m=+1385.739990022" observedRunningTime="2026-03-10 19:10:42.938278042 +0000 UTC m=+1386.701714022" watchObservedRunningTime="2026-03-10 19:10:42.944100193 +0000 UTC m=+1386.707536153"
Mar 10 19:10:42 crc kubenswrapper[4861]: I0310 19:10:42.989470 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-worker-745b4575bf-n9gzs"]
Mar 10 19:10:43 crc kubenswrapper[4861]: I0310 19:10:43.867374 4861 generic.go:334] "Generic (PLEG): container finished" podID="28b3646a-3fb7-489b-b91f-fe3a0260c5c7" containerID="2c4a20a6926b2b45109714063a6825d1bc01f762161802e7e326bfbe8fdd61bd" exitCode=0
Mar 10 19:10:43 crc kubenswrapper[4861]: I0310 19:10:43.867624 4861 generic.go:334] "Generic (PLEG): container finished" podID="28b3646a-3fb7-489b-b91f-fe3a0260c5c7" containerID="77889d8b335ed9abb93b14387aed804af4a196f608a103719669d146103818c8" exitCode=143
Mar 10 19:10:43 crc kubenswrapper[4861]: I0310 19:10:43.867418 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6859597b94-bpqx5" event={"ID":"28b3646a-3fb7-489b-b91f-fe3a0260c5c7","Type":"ContainerDied","Data":"2c4a20a6926b2b45109714063a6825d1bc01f762161802e7e326bfbe8fdd61bd"}
Mar 10 19:10:43 crc kubenswrapper[4861]: I0310 19:10:43.867808 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6859597b94-bpqx5" event={"ID":"28b3646a-3fb7-489b-b91f-fe3a0260c5c7","Type":"ContainerDied","Data":"77889d8b335ed9abb93b14387aed804af4a196f608a103719669d146103818c8"}
Mar 10 19:10:44 crc kubenswrapper[4861]: I0310 19:10:44.880049 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-worker-745b4575bf-n9gzs" podUID="d1d0fe2f-b350-4bcd-8f3b-309092093033" containerName="barbican-worker-log" containerID="cri-o://8655e221eb11bf63a70e86c66d6744578123fc680e32596fe2585f10c0dc6ce4" gracePeriod=30
Mar 10 19:10:44 crc kubenswrapper[4861]: I0310 19:10:44.880118 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-worker-745b4575bf-n9gzs" podUID="d1d0fe2f-b350-4bcd-8f3b-309092093033" containerName="barbican-worker" containerID="cri-o://53f40f65dbf2b5342c07708fab0b8ba756e94c83e29d6a3d7dcbfe416c979714" gracePeriod=30
Mar 10 19:10:44 crc kubenswrapper[4861]: I0310 19:10:44.880326 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-keystone-listener-78dd995c5d-2ppbp" podUID="821473ca-1fe9-4299-b5ff-d2a202fee1cc" containerName="barbican-keystone-listener-log" containerID="cri-o://0bd467585ccc535438602e8f3515caae4b4a28b7a280309c9eb8b270c834e65d" gracePeriod=30
Mar 10 19:10:44 crc kubenswrapper[4861]: I0310 19:10:44.880439 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-keystone-listener-78dd995c5d-2ppbp" podUID="821473ca-1fe9-4299-b5ff-d2a202fee1cc" containerName="barbican-keystone-listener" containerID="cri-o://a1c9bbb6ac6b18aa6ab64a362d786f1181aaa20d69c535d2fc669c95e5b0894a" gracePeriod=30
Mar 10 19:10:45 crc kubenswrapper[4861]: I0310 19:10:45.893727 4861 generic.go:334] "Generic (PLEG): container finished" podID="821473ca-1fe9-4299-b5ff-d2a202fee1cc" containerID="a1c9bbb6ac6b18aa6ab64a362d786f1181aaa20d69c535d2fc669c95e5b0894a" exitCode=0
Mar 10 19:10:45 crc kubenswrapper[4861]: I0310 19:10:45.894031 4861 generic.go:334] "Generic (PLEG): container finished" podID="821473ca-1fe9-4299-b5ff-d2a202fee1cc" containerID="0bd467585ccc535438602e8f3515caae4b4a28b7a280309c9eb8b270c834e65d" exitCode=143
Mar 10 19:10:45 crc kubenswrapper[4861]: I0310 19:10:45.893743 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-78dd995c5d-2ppbp" event={"ID":"821473ca-1fe9-4299-b5ff-d2a202fee1cc","Type":"ContainerDied","Data":"a1c9bbb6ac6b18aa6ab64a362d786f1181aaa20d69c535d2fc669c95e5b0894a"}
Mar 10 19:10:45 crc kubenswrapper[4861]: I0310 19:10:45.894110 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-78dd995c5d-2ppbp" event={"ID":"821473ca-1fe9-4299-b5ff-d2a202fee1cc","Type":"ContainerDied","Data":"0bd467585ccc535438602e8f3515caae4b4a28b7a280309c9eb8b270c834e65d"}
Mar 10 19:10:45 crc kubenswrapper[4861]: I0310 19:10:45.898410 4861 generic.go:334] "Generic (PLEG): container finished" podID="d1d0fe2f-b350-4bcd-8f3b-309092093033" containerID="53f40f65dbf2b5342c07708fab0b8ba756e94c83e29d6a3d7dcbfe416c979714" exitCode=0
Mar 10 19:10:45 crc kubenswrapper[4861]: I0310 19:10:45.898437 4861 generic.go:334] "Generic (PLEG): container finished" podID="d1d0fe2f-b350-4bcd-8f3b-309092093033" containerID="8655e221eb11bf63a70e86c66d6744578123fc680e32596fe2585f10c0dc6ce4" exitCode=143
Mar 10 19:10:45 crc kubenswrapper[4861]: I0310 19:10:45.898455 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-745b4575bf-n9gzs" event={"ID":"d1d0fe2f-b350-4bcd-8f3b-309092093033","Type":"ContainerDied","Data":"53f40f65dbf2b5342c07708fab0b8ba756e94c83e29d6a3d7dcbfe416c979714"}
Mar 10 19:10:45 crc kubenswrapper[4861]: I0310 19:10:45.898475 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-745b4575bf-n9gzs" event={"ID":"d1d0fe2f-b350-4bcd-8f3b-309092093033","Type":"ContainerDied","Data":"8655e221eb11bf63a70e86c66d6744578123fc680e32596fe2585f10c0dc6ce4"}
Mar 10 19:10:46 crc kubenswrapper[4861]: I0310 19:10:46.747880 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-78dd995c5d-2ppbp"
Mar 10 19:10:46 crc kubenswrapper[4861]: I0310 19:10:46.754149 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-745b4575bf-n9gzs"
Mar 10 19:10:46 crc kubenswrapper[4861]: I0310 19:10:46.820280 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d1d0fe2f-b350-4bcd-8f3b-309092093033-config-data\") pod \"d1d0fe2f-b350-4bcd-8f3b-309092093033\" (UID: \"d1d0fe2f-b350-4bcd-8f3b-309092093033\") "
Mar 10 19:10:46 crc kubenswrapper[4861]: I0310 19:10:46.820341 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/821473ca-1fe9-4299-b5ff-d2a202fee1cc-combined-ca-bundle\") pod \"821473ca-1fe9-4299-b5ff-d2a202fee1cc\" (UID: \"821473ca-1fe9-4299-b5ff-d2a202fee1cc\") "
Mar 10 19:10:46 crc kubenswrapper[4861]: I0310 19:10:46.820416 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d1d0fe2f-b350-4bcd-8f3b-309092093033-config-data-custom\") pod \"d1d0fe2f-b350-4bcd-8f3b-309092093033\" (UID: \"d1d0fe2f-b350-4bcd-8f3b-309092093033\") "
Mar 10 19:10:46 crc kubenswrapper[4861]: I0310 19:10:46.820444 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xmbqn\" (UniqueName: \"kubernetes.io/projected/d1d0fe2f-b350-4bcd-8f3b-309092093033-kube-api-access-xmbqn\") pod \"d1d0fe2f-b350-4bcd-8f3b-309092093033\" (UID: \"d1d0fe2f-b350-4bcd-8f3b-309092093033\") "
Mar 10 19:10:46 crc kubenswrapper[4861]: I0310 19:10:46.820504 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/821473ca-1fe9-4299-b5ff-d2a202fee1cc-logs\") pod \"821473ca-1fe9-4299-b5ff-d2a202fee1cc\" (UID: \"821473ca-1fe9-4299-b5ff-d2a202fee1cc\") "
Mar 10 19:10:46 crc kubenswrapper[4861]: I0310 19:10:46.820521 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d1d0fe2f-b350-4bcd-8f3b-309092093033-logs\") pod \"d1d0fe2f-b350-4bcd-8f3b-309092093033\" (UID: \"d1d0fe2f-b350-4bcd-8f3b-309092093033\") "
Mar 10 19:10:46 crc kubenswrapper[4861]: I0310 19:10:46.820544 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/821473ca-1fe9-4299-b5ff-d2a202fee1cc-config-data\") pod \"821473ca-1fe9-4299-b5ff-d2a202fee1cc\" (UID: \"821473ca-1fe9-4299-b5ff-d2a202fee1cc\") "
Mar 10 19:10:46 crc kubenswrapper[4861]: I0310 19:10:46.820626 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbhl5\" (UniqueName: \"kubernetes.io/projected/821473ca-1fe9-4299-b5ff-d2a202fee1cc-kube-api-access-dbhl5\") pod \"821473ca-1fe9-4299-b5ff-d2a202fee1cc\" (UID: \"821473ca-1fe9-4299-b5ff-d2a202fee1cc\") "
Mar 10 19:10:46 crc kubenswrapper[4861]: I0310 19:10:46.820674 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/821473ca-1fe9-4299-b5ff-d2a202fee1cc-config-data-custom\") pod \"821473ca-1fe9-4299-b5ff-d2a202fee1cc\" (UID: \"821473ca-1fe9-4299-b5ff-d2a202fee1cc\") "
Mar 10 19:10:46 crc kubenswrapper[4861]: I0310 19:10:46.820757 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d1d0fe2f-b350-4bcd-8f3b-309092093033-combined-ca-bundle\") pod \"d1d0fe2f-b350-4bcd-8f3b-309092093033\" (UID: \"d1d0fe2f-b350-4bcd-8f3b-309092093033\") "
Mar 10 19:10:46 crc kubenswrapper[4861]: I0310 19:10:46.822221 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d1d0fe2f-b350-4bcd-8f3b-309092093033-logs" (OuterVolumeSpecName: "logs") pod "d1d0fe2f-b350-4bcd-8f3b-309092093033" (UID: "d1d0fe2f-b350-4bcd-8f3b-309092093033"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 10 19:10:46 crc kubenswrapper[4861]: I0310 19:10:46.822490 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/821473ca-1fe9-4299-b5ff-d2a202fee1cc-logs" (OuterVolumeSpecName: "logs") pod "821473ca-1fe9-4299-b5ff-d2a202fee1cc" (UID: "821473ca-1fe9-4299-b5ff-d2a202fee1cc"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 10 19:10:46 crc kubenswrapper[4861]: I0310 19:10:46.853818 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/821473ca-1fe9-4299-b5ff-d2a202fee1cc-kube-api-access-dbhl5" (OuterVolumeSpecName: "kube-api-access-dbhl5") pod "821473ca-1fe9-4299-b5ff-d2a202fee1cc" (UID: "821473ca-1fe9-4299-b5ff-d2a202fee1cc"). InnerVolumeSpecName "kube-api-access-dbhl5". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 10 19:10:46 crc kubenswrapper[4861]: I0310 19:10:46.853880 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d1d0fe2f-b350-4bcd-8f3b-309092093033-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "d1d0fe2f-b350-4bcd-8f3b-309092093033" (UID: "d1d0fe2f-b350-4bcd-8f3b-309092093033"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 10 19:10:46 crc kubenswrapper[4861]: I0310 19:10:46.855781 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d1d0fe2f-b350-4bcd-8f3b-309092093033-kube-api-access-xmbqn" (OuterVolumeSpecName: "kube-api-access-xmbqn") pod "d1d0fe2f-b350-4bcd-8f3b-309092093033" (UID: "d1d0fe2f-b350-4bcd-8f3b-309092093033"). InnerVolumeSpecName "kube-api-access-xmbqn". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 10 19:10:46 crc kubenswrapper[4861]: I0310 19:10:46.857511 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/821473ca-1fe9-4299-b5ff-d2a202fee1cc-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "821473ca-1fe9-4299-b5ff-d2a202fee1cc" (UID: "821473ca-1fe9-4299-b5ff-d2a202fee1cc"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 10 19:10:46 crc kubenswrapper[4861]: I0310 19:10:46.858789 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d1d0fe2f-b350-4bcd-8f3b-309092093033-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d1d0fe2f-b350-4bcd-8f3b-309092093033" (UID: "d1d0fe2f-b350-4bcd-8f3b-309092093033"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 10 19:10:46 crc kubenswrapper[4861]: I0310 19:10:46.878400 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/821473ca-1fe9-4299-b5ff-d2a202fee1cc-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "821473ca-1fe9-4299-b5ff-d2a202fee1cc" (UID: "821473ca-1fe9-4299-b5ff-d2a202fee1cc"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 10 19:10:46 crc kubenswrapper[4861]: I0310 19:10:46.899805 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/821473ca-1fe9-4299-b5ff-d2a202fee1cc-config-data" (OuterVolumeSpecName: "config-data") pod "821473ca-1fe9-4299-b5ff-d2a202fee1cc" (UID: "821473ca-1fe9-4299-b5ff-d2a202fee1cc"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 10 19:10:46 crc kubenswrapper[4861]: I0310 19:10:46.900678 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d1d0fe2f-b350-4bcd-8f3b-309092093033-config-data" (OuterVolumeSpecName: "config-data") pod "d1d0fe2f-b350-4bcd-8f3b-309092093033" (UID: "d1d0fe2f-b350-4bcd-8f3b-309092093033"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 10 19:10:46 crc kubenswrapper[4861]: I0310 19:10:46.909875 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-78dd995c5d-2ppbp" event={"ID":"821473ca-1fe9-4299-b5ff-d2a202fee1cc","Type":"ContainerDied","Data":"095f51f3c4fc85b315bb7b6b2c24ec49e65b5beb4f2b9c72b64f48d6286887b4"}
Mar 10 19:10:46 crc kubenswrapper[4861]: I0310 19:10:46.909906 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-78dd995c5d-2ppbp"
Mar 10 19:10:46 crc kubenswrapper[4861]: I0310 19:10:46.909934 4861 scope.go:117] "RemoveContainer" containerID="a1c9bbb6ac6b18aa6ab64a362d786f1181aaa20d69c535d2fc669c95e5b0894a"
Mar 10 19:10:46 crc kubenswrapper[4861]: I0310 19:10:46.922869 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xmbqn\" (UniqueName: \"kubernetes.io/projected/d1d0fe2f-b350-4bcd-8f3b-309092093033-kube-api-access-xmbqn\") on node \"crc\" DevicePath \"\""
Mar 10 19:10:46 crc kubenswrapper[4861]: I0310 19:10:46.922903 4861 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/821473ca-1fe9-4299-b5ff-d2a202fee1cc-logs\") on node \"crc\" DevicePath \"\""
Mar 10 19:10:46 crc kubenswrapper[4861]: I0310 19:10:46.922916 4861 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d1d0fe2f-b350-4bcd-8f3b-309092093033-logs\") on node \"crc\" DevicePath \"\""
Mar 10 19:10:46 crc kubenswrapper[4861]: I0310 19:10:46.922927 4861 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/821473ca-1fe9-4299-b5ff-d2a202fee1cc-config-data\") on node \"crc\" DevicePath \"\""
Mar 10 19:10:46 crc kubenswrapper[4861]: I0310 19:10:46.922938 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbhl5\" (UniqueName: \"kubernetes.io/projected/821473ca-1fe9-4299-b5ff-d2a202fee1cc-kube-api-access-dbhl5\") on node \"crc\" DevicePath \"\""
Mar 10 19:10:46 crc kubenswrapper[4861]: I0310 19:10:46.922949 4861 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/821473ca-1fe9-4299-b5ff-d2a202fee1cc-config-data-custom\") on node \"crc\" DevicePath \"\""
Mar 10 19:10:46 crc kubenswrapper[4861]: I0310 19:10:46.922962 4861 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d1d0fe2f-b350-4bcd-8f3b-309092093033-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Mar 10 19:10:46 crc kubenswrapper[4861]: I0310 19:10:46.922972 4861 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d1d0fe2f-b350-4bcd-8f3b-309092093033-config-data\") on node \"crc\" DevicePath \"\""
Mar 10 19:10:46 crc kubenswrapper[4861]: I0310 19:10:46.922983 4861 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/821473ca-1fe9-4299-b5ff-d2a202fee1cc-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Mar 10 19:10:46 crc kubenswrapper[4861]: I0310 19:10:46.922993 4861 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d1d0fe2f-b350-4bcd-8f3b-309092093033-config-data-custom\") on node \"crc\" DevicePath \"\""
Mar 10 19:10:46 crc kubenswrapper[4861]: I0310 19:10:46.930109 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-745b4575bf-n9gzs" event={"ID":"d1d0fe2f-b350-4bcd-8f3b-309092093033","Type":"ContainerDied","Data":"22ce4f92df7673fda01181cdd99827f4e1ad03ead9567a795df8eba2ced61d63"}
Mar 10 19:10:46 crc kubenswrapper[4861]: I0310 19:10:46.930189 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-745b4575bf-n9gzs"
Mar 10 19:10:47 crc kubenswrapper[4861]: I0310 19:10:47.008038 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-keystone-listener-78dd995c5d-2ppbp"]
Mar 10 19:10:47 crc kubenswrapper[4861]: I0310 19:10:47.017661 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-keystone-listener-78dd995c5d-2ppbp"]
Mar 10 19:10:47 crc kubenswrapper[4861]: I0310 19:10:47.026583 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-worker-745b4575bf-n9gzs"]
Mar 10 19:10:47 crc kubenswrapper[4861]: I0310 19:10:47.033902 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-worker-745b4575bf-n9gzs"]
Mar 10 19:10:47 crc kubenswrapper[4861]: I0310 19:10:47.350414 4861 scope.go:117] "RemoveContainer" containerID="0bd467585ccc535438602e8f3515caae4b4a28b7a280309c9eb8b270c834e65d"
Mar 10 19:10:47 crc kubenswrapper[4861]: I0310 19:10:47.400243 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-6859597b94-bpqx5"
Mar 10 19:10:47 crc kubenswrapper[4861]: I0310 19:10:47.434044 4861 scope.go:117] "RemoveContainer" containerID="53f40f65dbf2b5342c07708fab0b8ba756e94c83e29d6a3d7dcbfe416c979714"
Mar 10 19:10:47 crc kubenswrapper[4861]: I0310 19:10:47.434327 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v742p\" (UniqueName: \"kubernetes.io/projected/28b3646a-3fb7-489b-b91f-fe3a0260c5c7-kube-api-access-v742p\") pod \"28b3646a-3fb7-489b-b91f-fe3a0260c5c7\" (UID: \"28b3646a-3fb7-489b-b91f-fe3a0260c5c7\") "
Mar 10 19:10:47 crc kubenswrapper[4861]: I0310 19:10:47.434403 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/28b3646a-3fb7-489b-b91f-fe3a0260c5c7-config-data\") pod \"28b3646a-3fb7-489b-b91f-fe3a0260c5c7\" (UID: \"28b3646a-3fb7-489b-b91f-fe3a0260c5c7\") "
Mar 10 19:10:47 crc kubenswrapper[4861]: I0310 19:10:47.434499 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/28b3646a-3fb7-489b-b91f-fe3a0260c5c7-combined-ca-bundle\") pod \"28b3646a-3fb7-489b-b91f-fe3a0260c5c7\" (UID: \"28b3646a-3fb7-489b-b91f-fe3a0260c5c7\") "
Mar 10 19:10:47 crc kubenswrapper[4861]: I0310 19:10:47.434567 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/28b3646a-3fb7-489b-b91f-fe3a0260c5c7-config-data-custom\") pod \"28b3646a-3fb7-489b-b91f-fe3a0260c5c7\" (UID: \"28b3646a-3fb7-489b-b91f-fe3a0260c5c7\") "
Mar 10 19:10:47 crc kubenswrapper[4861]: I0310 19:10:47.434749 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/28b3646a-3fb7-489b-b91f-fe3a0260c5c7-logs\") pod \"28b3646a-3fb7-489b-b91f-fe3a0260c5c7\" (UID: \"28b3646a-3fb7-489b-b91f-fe3a0260c5c7\") "
Mar 10 19:10:47 crc kubenswrapper[4861]: I0310 19:10:47.435501 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/28b3646a-3fb7-489b-b91f-fe3a0260c5c7-logs" (OuterVolumeSpecName: "logs") pod "28b3646a-3fb7-489b-b91f-fe3a0260c5c7" (UID: "28b3646a-3fb7-489b-b91f-fe3a0260c5c7"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 10 19:10:47 crc kubenswrapper[4861]: I0310 19:10:47.442016 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/28b3646a-3fb7-489b-b91f-fe3a0260c5c7-kube-api-access-v742p" (OuterVolumeSpecName: "kube-api-access-v742p") pod "28b3646a-3fb7-489b-b91f-fe3a0260c5c7" (UID: "28b3646a-3fb7-489b-b91f-fe3a0260c5c7"). InnerVolumeSpecName "kube-api-access-v742p". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 10 19:10:47 crc kubenswrapper[4861]: I0310 19:10:47.443774 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/28b3646a-3fb7-489b-b91f-fe3a0260c5c7-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "28b3646a-3fb7-489b-b91f-fe3a0260c5c7" (UID: "28b3646a-3fb7-489b-b91f-fe3a0260c5c7"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 10 19:10:47 crc kubenswrapper[4861]: I0310 19:10:47.471563 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/28b3646a-3fb7-489b-b91f-fe3a0260c5c7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "28b3646a-3fb7-489b-b91f-fe3a0260c5c7" (UID: "28b3646a-3fb7-489b-b91f-fe3a0260c5c7"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 10 19:10:47 crc kubenswrapper[4861]: I0310 19:10:47.522002 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/28b3646a-3fb7-489b-b91f-fe3a0260c5c7-config-data" (OuterVolumeSpecName: "config-data") pod "28b3646a-3fb7-489b-b91f-fe3a0260c5c7" (UID: "28b3646a-3fb7-489b-b91f-fe3a0260c5c7"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 10 19:10:47 crc kubenswrapper[4861]: I0310 19:10:47.536902 4861 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/28b3646a-3fb7-489b-b91f-fe3a0260c5c7-config-data-custom\") on node \"crc\" DevicePath \"\""
Mar 10 19:10:47 crc kubenswrapper[4861]: I0310 19:10:47.536925 4861 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/28b3646a-3fb7-489b-b91f-fe3a0260c5c7-logs\") on node \"crc\" DevicePath \"\""
Mar 10 19:10:47 crc kubenswrapper[4861]: I0310 19:10:47.536937 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v742p\" (UniqueName: \"kubernetes.io/projected/28b3646a-3fb7-489b-b91f-fe3a0260c5c7-kube-api-access-v742p\") on node \"crc\" DevicePath \"\""
Mar 10 19:10:47 crc kubenswrapper[4861]: I0310 19:10:47.536948 4861 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/28b3646a-3fb7-489b-b91f-fe3a0260c5c7-config-data\") on node \"crc\" DevicePath \"\""
Mar 10 19:10:47 crc kubenswrapper[4861]: I0310 19:10:47.536956 4861 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/28b3646a-3fb7-489b-b91f-fe3a0260c5c7-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Mar 10 19:10:47 crc kubenswrapper[4861]: I0310 19:10:47.546161 4861 scope.go:117] "RemoveContainer" containerID="8655e221eb11bf63a70e86c66d6744578123fc680e32596fe2585f10c0dc6ce4"
Mar 10 19:10:47 crc kubenswrapper[4861]: E0310 19:10:47.649387 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/ceilometer-0" podUID="a8b38f86-cca2-4d33-bcea-b93c80da6490"
Mar 10 19:10:47 crc kubenswrapper[4861]: I0310 19:10:47.953872 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-6859597b94-bpqx5"
Mar 10 19:10:47 crc kubenswrapper[4861]: I0310 19:10:47.954038 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6859597b94-bpqx5" event={"ID":"28b3646a-3fb7-489b-b91f-fe3a0260c5c7","Type":"ContainerDied","Data":"6a0ceeb788c3ac7151a6e3e4f6fbf645e0415de00f195b310021115abde69495"}
Mar 10 19:10:47 crc kubenswrapper[4861]: I0310 19:10:47.954093 4861 scope.go:117] "RemoveContainer" containerID="2c4a20a6926b2b45109714063a6825d1bc01f762161802e7e326bfbe8fdd61bd"
Mar 10 19:10:47 crc kubenswrapper[4861]: I0310 19:10:47.969028 4861 generic.go:334] "Generic (PLEG): container finished" podID="6369ade3-a8af-44b7-94be-736523f99512" containerID="87fb93e566a663de0b618099d7590f0336c0350139f6d4b367758d4d661aeb71" exitCode=0
Mar 10 19:10:47 crc kubenswrapper[4861]: I0310 19:10:47.969117 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-2x6cv" event={"ID":"6369ade3-a8af-44b7-94be-736523f99512","Type":"ContainerDied","Data":"87fb93e566a663de0b618099d7590f0336c0350139f6d4b367758d4d661aeb71"}
Mar 10 19:10:47 crc kubenswrapper[4861]: I0310 19:10:47.982075 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a8b38f86-cca2-4d33-bcea-b93c80da6490","Type":"ContainerStarted","Data":"e5be897b7b37e0d6f93877bceb33195156cb97e3d8645a5f759e001943de17a3"}
Mar 10 19:10:47 crc kubenswrapper[4861]: I0310 19:10:47.982317 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="a8b38f86-cca2-4d33-bcea-b93c80da6490" containerName="ceilometer-notification-agent" containerID="cri-o://c1cfd8c2d037d897aa7c7b0d467efae470cd2b432b0be43688aae6e50fc8242f" gracePeriod=30
Mar 10 19:10:47 crc kubenswrapper[4861]: I0310 19:10:47.982413 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0"
Mar 10 19:10:47 crc kubenswrapper[4861]: I0310 19:10:47.982439 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="a8b38f86-cca2-4d33-bcea-b93c80da6490" containerName="sg-core" containerID="cri-o://c3583d364195930c668851631310fcf2624c721aad140d433c315bc623ae2787" gracePeriod=30
Mar 10 19:10:47 crc kubenswrapper[4861]: I0310 19:10:47.982544 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="a8b38f86-cca2-4d33-bcea-b93c80da6490" containerName="proxy-httpd" containerID="cri-o://e5be897b7b37e0d6f93877bceb33195156cb97e3d8645a5f759e001943de17a3" gracePeriod=30
Mar 10 19:10:48 crc kubenswrapper[4861]: I0310 19:10:48.015295 4861 scope.go:117] "RemoveContainer" containerID="77889d8b335ed9abb93b14387aed804af4a196f608a103719669d146103818c8"
Mar 10 19:10:48 crc kubenswrapper[4861]: I0310 19:10:48.018066 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-8449d68f4f-wp6ch"
Mar 10 19:10:48 crc kubenswrapper[4861]: I0310 19:10:48.047892 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-6859597b94-bpqx5"]
Mar 10 19:10:48 crc kubenswrapper[4861]: I0310 19:10:48.061772 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-6859597b94-bpqx5"]
Mar 10 19:10:48 crc kubenswrapper[4861]: I0310 19:10:48.147136 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-ccd7c9f8f-vd6rh"]
Mar 10 19:10:48 crc kubenswrapper[4861]: I0310 19:10:48.147437 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-ccd7c9f8f-vd6rh" podUID="9fb93010-cc9e-4984-91db-e36b6605fa2b" containerName="dnsmasq-dns" containerID="cri-o://6f81dc8462e77e0bd4879eefc7d34241267a5b86c818be39c806cf6f982fa143" gracePeriod=10
Mar 10 19:10:48 crc kubenswrapper[4861]: I0310 19:10:48.579907 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-ccd7c9f8f-vd6rh"
Mar 10 19:10:48 crc kubenswrapper[4861]: I0310 19:10:48.654259 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9fb93010-cc9e-4984-91db-e36b6605fa2b-dns-svc\") pod \"9fb93010-cc9e-4984-91db-e36b6605fa2b\" (UID: \"9fb93010-cc9e-4984-91db-e36b6605fa2b\") "
Mar 10 19:10:48 crc kubenswrapper[4861]: I0310 19:10:48.654395 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9fb93010-cc9e-4984-91db-e36b6605fa2b-ovsdbserver-sb\") pod \"9fb93010-cc9e-4984-91db-e36b6605fa2b\" (UID: \"9fb93010-cc9e-4984-91db-e36b6605fa2b\") "
Mar 10 19:10:48 crc kubenswrapper[4861]: I0310 19:10:48.654432 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9fb93010-cc9e-4984-91db-e36b6605fa2b-ovsdbserver-nb\") pod \"9fb93010-cc9e-4984-91db-e36b6605fa2b\" (UID: \"9fb93010-cc9e-4984-91db-e36b6605fa2b\") "
Mar 10 19:10:48 crc kubenswrapper[4861]: I0310 19:10:48.654451 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9fb93010-cc9e-4984-91db-e36b6605fa2b-config\") pod \"9fb93010-cc9e-4984-91db-e36b6605fa2b\" (UID: \"9fb93010-cc9e-4984-91db-e36b6605fa2b\") "
Mar 10 19:10:48 crc kubenswrapper[4861]: I0310 19:10:48.654556 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9fb93010-cc9e-4984-91db-e36b6605fa2b-dns-swift-storage-0\") pod \"9fb93010-cc9e-4984-91db-e36b6605fa2b\" (UID: \"9fb93010-cc9e-4984-91db-e36b6605fa2b\") "
Mar 10 19:10:48 crc kubenswrapper[4861]: I0310 19:10:48.654577 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6fhgw\" (UniqueName: \"kubernetes.io/projected/9fb93010-cc9e-4984-91db-e36b6605fa2b-kube-api-access-6fhgw\") pod \"9fb93010-cc9e-4984-91db-e36b6605fa2b\" (UID: \"9fb93010-cc9e-4984-91db-e36b6605fa2b\") "
Mar 10 19:10:48 crc kubenswrapper[4861]: I0310 19:10:48.661069 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9fb93010-cc9e-4984-91db-e36b6605fa2b-kube-api-access-6fhgw" (OuterVolumeSpecName: "kube-api-access-6fhgw") pod "9fb93010-cc9e-4984-91db-e36b6605fa2b" (UID: "9fb93010-cc9e-4984-91db-e36b6605fa2b"). InnerVolumeSpecName "kube-api-access-6fhgw". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 10 19:10:48 crc kubenswrapper[4861]: I0310 19:10:48.697170 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9fb93010-cc9e-4984-91db-e36b6605fa2b-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "9fb93010-cc9e-4984-91db-e36b6605fa2b" (UID: "9fb93010-cc9e-4984-91db-e36b6605fa2b"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 10 19:10:48 crc kubenswrapper[4861]: I0310 19:10:48.721128 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9fb93010-cc9e-4984-91db-e36b6605fa2b-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "9fb93010-cc9e-4984-91db-e36b6605fa2b" (UID: "9fb93010-cc9e-4984-91db-e36b6605fa2b"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 10 19:10:48 crc kubenswrapper[4861]: I0310 19:10:48.731812 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9fb93010-cc9e-4984-91db-e36b6605fa2b-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "9fb93010-cc9e-4984-91db-e36b6605fa2b" (UID: "9fb93010-cc9e-4984-91db-e36b6605fa2b"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 10 19:10:48 crc kubenswrapper[4861]: I0310 19:10:48.733115 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9fb93010-cc9e-4984-91db-e36b6605fa2b-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "9fb93010-cc9e-4984-91db-e36b6605fa2b" (UID: "9fb93010-cc9e-4984-91db-e36b6605fa2b"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 10 19:10:48 crc kubenswrapper[4861]: I0310 19:10:48.737078 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9fb93010-cc9e-4984-91db-e36b6605fa2b-config" (OuterVolumeSpecName: "config") pod "9fb93010-cc9e-4984-91db-e36b6605fa2b" (UID: "9fb93010-cc9e-4984-91db-e36b6605fa2b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 10 19:10:48 crc kubenswrapper[4861]: I0310 19:10:48.756896 4861 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9fb93010-cc9e-4984-91db-e36b6605fa2b-dns-svc\") on node \"crc\" DevicePath \"\""
Mar 10 19:10:48 crc kubenswrapper[4861]: I0310 19:10:48.756928 4861 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9fb93010-cc9e-4984-91db-e36b6605fa2b-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Mar 10 19:10:48 crc kubenswrapper[4861]: I0310 19:10:48.756939 4861 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9fb93010-cc9e-4984-91db-e36b6605fa2b-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Mar 10 19:10:48 crc kubenswrapper[4861]: I0310 19:10:48.756947 4861 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9fb93010-cc9e-4984-91db-e36b6605fa2b-config\") on node \"crc\" DevicePath \"\""
Mar 10 19:10:48 crc kubenswrapper[4861]: I0310 19:10:48.756956 4861 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9fb93010-cc9e-4984-91db-e36b6605fa2b-dns-swift-storage-0\") on node \"crc\" DevicePath \"\""
Mar 10 19:10:48 crc kubenswrapper[4861]: I0310 19:10:48.756966 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6fhgw\" (UniqueName: \"kubernetes.io/projected/9fb93010-cc9e-4984-91db-e36b6605fa2b-kube-api-access-6fhgw\") on node \"crc\" DevicePath \"\""
Mar 10 19:10:48 crc kubenswrapper[4861]: I0310 19:10:48.867662 4861 scope.go:117] "RemoveContainer" containerID="4eb3c3c03c4cfc98f63a24b33a56fccb9c3dc632a0be220ab913b7c8c5d3a577"
Mar 10 19:10:48 crc kubenswrapper[4861]: I0310 19:10:48.969001 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="28b3646a-3fb7-489b-b91f-fe3a0260c5c7" path="/var/lib/kubelet/pods/28b3646a-3fb7-489b-b91f-fe3a0260c5c7/volumes"
Mar 10 19:10:48 crc kubenswrapper[4861]: I0310 19:10:48.969614 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="821473ca-1fe9-4299-b5ff-d2a202fee1cc" path="/var/lib/kubelet/pods/821473ca-1fe9-4299-b5ff-d2a202fee1cc/volumes"
Mar 10 19:10:48 crc kubenswrapper[4861]: I0310 19:10:48.970214 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d1d0fe2f-b350-4bcd-8f3b-309092093033" path="/var/lib/kubelet/pods/d1d0fe2f-b350-4bcd-8f3b-309092093033/volumes"
Mar 10 19:10:49 crc kubenswrapper[4861]: I0310 19:10:49.004270 4861 generic.go:334] "Generic (PLEG): container finished" podID="a8b38f86-cca2-4d33-bcea-b93c80da6490" containerID="e5be897b7b37e0d6f93877bceb33195156cb97e3d8645a5f759e001943de17a3" exitCode=0
Mar 10 19:10:49 crc kubenswrapper[4861]: I0310 19:10:49.004373 4861 generic.go:334] "Generic (PLEG): container finished" podID="a8b38f86-cca2-4d33-bcea-b93c80da6490" containerID="c3583d364195930c668851631310fcf2624c721aad140d433c315bc623ae2787" exitCode=2
Mar 10 19:10:49 crc kubenswrapper[4861]: I0310 19:10:49.004460 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a8b38f86-cca2-4d33-bcea-b93c80da6490","Type":"ContainerDied","Data":"e5be897b7b37e0d6f93877bceb33195156cb97e3d8645a5f759e001943de17a3"}
Mar 10 19:10:49 crc kubenswrapper[4861]: I0310 19:10:49.004491 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a8b38f86-cca2-4d33-bcea-b93c80da6490","Type":"ContainerDied","Data":"c3583d364195930c668851631310fcf2624c721aad140d433c315bc623ae2787"}
Mar 10 19:10:49 crc kubenswrapper[4861]: I0310 19:10:49.008375 4861 generic.go:334] "Generic (PLEG): container finished" podID="9fb93010-cc9e-4984-91db-e36b6605fa2b" containerID="6f81dc8462e77e0bd4879eefc7d34241267a5b86c818be39c806cf6f982fa143" exitCode=0
Mar 10 19:10:49 crc
kubenswrapper[4861]: I0310 19:10:49.008442 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-ccd7c9f8f-vd6rh" Mar 10 19:10:49 crc kubenswrapper[4861]: I0310 19:10:49.008448 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-ccd7c9f8f-vd6rh" event={"ID":"9fb93010-cc9e-4984-91db-e36b6605fa2b","Type":"ContainerDied","Data":"6f81dc8462e77e0bd4879eefc7d34241267a5b86c818be39c806cf6f982fa143"} Mar 10 19:10:49 crc kubenswrapper[4861]: I0310 19:10:49.008611 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-ccd7c9f8f-vd6rh" event={"ID":"9fb93010-cc9e-4984-91db-e36b6605fa2b","Type":"ContainerDied","Data":"6a1c322b259c6039585921b7d5d4af02af93029c4416ccd693bfa3a7a49ed180"} Mar 10 19:10:49 crc kubenswrapper[4861]: I0310 19:10:49.008656 4861 scope.go:117] "RemoveContainer" containerID="6f81dc8462e77e0bd4879eefc7d34241267a5b86c818be39c806cf6f982fa143" Mar 10 19:10:49 crc kubenswrapper[4861]: I0310 19:10:49.045810 4861 scope.go:117] "RemoveContainer" containerID="1df142d4a38b388c743082489dffaf4704ad052333a1c4a6da906b53a3f86541" Mar 10 19:10:49 crc kubenswrapper[4861]: I0310 19:10:49.051972 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-ccd7c9f8f-vd6rh"] Mar 10 19:10:49 crc kubenswrapper[4861]: I0310 19:10:49.063312 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-ccd7c9f8f-vd6rh"] Mar 10 19:10:49 crc kubenswrapper[4861]: I0310 19:10:49.136868 4861 scope.go:117] "RemoveContainer" containerID="6f81dc8462e77e0bd4879eefc7d34241267a5b86c818be39c806cf6f982fa143" Mar 10 19:10:49 crc kubenswrapper[4861]: E0310 19:10:49.152974 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6f81dc8462e77e0bd4879eefc7d34241267a5b86c818be39c806cf6f982fa143\": container with ID starting with 
6f81dc8462e77e0bd4879eefc7d34241267a5b86c818be39c806cf6f982fa143 not found: ID does not exist" containerID="6f81dc8462e77e0bd4879eefc7d34241267a5b86c818be39c806cf6f982fa143" Mar 10 19:10:49 crc kubenswrapper[4861]: I0310 19:10:49.153016 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6f81dc8462e77e0bd4879eefc7d34241267a5b86c818be39c806cf6f982fa143"} err="failed to get container status \"6f81dc8462e77e0bd4879eefc7d34241267a5b86c818be39c806cf6f982fa143\": rpc error: code = NotFound desc = could not find container \"6f81dc8462e77e0bd4879eefc7d34241267a5b86c818be39c806cf6f982fa143\": container with ID starting with 6f81dc8462e77e0bd4879eefc7d34241267a5b86c818be39c806cf6f982fa143 not found: ID does not exist" Mar 10 19:10:49 crc kubenswrapper[4861]: I0310 19:10:49.153043 4861 scope.go:117] "RemoveContainer" containerID="1df142d4a38b388c743082489dffaf4704ad052333a1c4a6da906b53a3f86541" Mar 10 19:10:49 crc kubenswrapper[4861]: E0310 19:10:49.158781 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1df142d4a38b388c743082489dffaf4704ad052333a1c4a6da906b53a3f86541\": container with ID starting with 1df142d4a38b388c743082489dffaf4704ad052333a1c4a6da906b53a3f86541 not found: ID does not exist" containerID="1df142d4a38b388c743082489dffaf4704ad052333a1c4a6da906b53a3f86541" Mar 10 19:10:49 crc kubenswrapper[4861]: I0310 19:10:49.158807 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1df142d4a38b388c743082489dffaf4704ad052333a1c4a6da906b53a3f86541"} err="failed to get container status \"1df142d4a38b388c743082489dffaf4704ad052333a1c4a6da906b53a3f86541\": rpc error: code = NotFound desc = could not find container \"1df142d4a38b388c743082489dffaf4704ad052333a1c4a6da906b53a3f86541\": container with ID starting with 1df142d4a38b388c743082489dffaf4704ad052333a1c4a6da906b53a3f86541 not found: ID does not 
exist" Mar 10 19:10:49 crc kubenswrapper[4861]: I0310 19:10:49.478403 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-2x6cv" Mar 10 19:10:49 crc kubenswrapper[4861]: I0310 19:10:49.572665 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6369ade3-a8af-44b7-94be-736523f99512-combined-ca-bundle\") pod \"6369ade3-a8af-44b7-94be-736523f99512\" (UID: \"6369ade3-a8af-44b7-94be-736523f99512\") " Mar 10 19:10:49 crc kubenswrapper[4861]: I0310 19:10:49.572743 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6369ade3-a8af-44b7-94be-736523f99512-scripts\") pod \"6369ade3-a8af-44b7-94be-736523f99512\" (UID: \"6369ade3-a8af-44b7-94be-736523f99512\") " Mar 10 19:10:49 crc kubenswrapper[4861]: I0310 19:10:49.572788 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/6369ade3-a8af-44b7-94be-736523f99512-etc-machine-id\") pod \"6369ade3-a8af-44b7-94be-736523f99512\" (UID: \"6369ade3-a8af-44b7-94be-736523f99512\") " Mar 10 19:10:49 crc kubenswrapper[4861]: I0310 19:10:49.572851 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6369ade3-a8af-44b7-94be-736523f99512-config-data\") pod \"6369ade3-a8af-44b7-94be-736523f99512\" (UID: \"6369ade3-a8af-44b7-94be-736523f99512\") " Mar 10 19:10:49 crc kubenswrapper[4861]: I0310 19:10:49.572936 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nfm6h\" (UniqueName: \"kubernetes.io/projected/6369ade3-a8af-44b7-94be-736523f99512-kube-api-access-nfm6h\") pod \"6369ade3-a8af-44b7-94be-736523f99512\" (UID: \"6369ade3-a8af-44b7-94be-736523f99512\") " Mar 10 19:10:49 crc kubenswrapper[4861]: I0310 
19:10:49.572971 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/6369ade3-a8af-44b7-94be-736523f99512-db-sync-config-data\") pod \"6369ade3-a8af-44b7-94be-736523f99512\" (UID: \"6369ade3-a8af-44b7-94be-736523f99512\") " Mar 10 19:10:49 crc kubenswrapper[4861]: I0310 19:10:49.573257 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6369ade3-a8af-44b7-94be-736523f99512-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "6369ade3-a8af-44b7-94be-736523f99512" (UID: "6369ade3-a8af-44b7-94be-736523f99512"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 10 19:10:49 crc kubenswrapper[4861]: I0310 19:10:49.573522 4861 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/6369ade3-a8af-44b7-94be-736523f99512-etc-machine-id\") on node \"crc\" DevicePath \"\"" Mar 10 19:10:49 crc kubenswrapper[4861]: I0310 19:10:49.577809 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6369ade3-a8af-44b7-94be-736523f99512-kube-api-access-nfm6h" (OuterVolumeSpecName: "kube-api-access-nfm6h") pod "6369ade3-a8af-44b7-94be-736523f99512" (UID: "6369ade3-a8af-44b7-94be-736523f99512"). InnerVolumeSpecName "kube-api-access-nfm6h". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 10 19:10:49 crc kubenswrapper[4861]: I0310 19:10:49.578772 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6369ade3-a8af-44b7-94be-736523f99512-scripts" (OuterVolumeSpecName: "scripts") pod "6369ade3-a8af-44b7-94be-736523f99512" (UID: "6369ade3-a8af-44b7-94be-736523f99512"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 10 19:10:49 crc kubenswrapper[4861]: I0310 19:10:49.578865 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6369ade3-a8af-44b7-94be-736523f99512-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "6369ade3-a8af-44b7-94be-736523f99512" (UID: "6369ade3-a8af-44b7-94be-736523f99512"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 10 19:10:49 crc kubenswrapper[4861]: I0310 19:10:49.600146 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6369ade3-a8af-44b7-94be-736523f99512-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6369ade3-a8af-44b7-94be-736523f99512" (UID: "6369ade3-a8af-44b7-94be-736523f99512"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 10 19:10:49 crc kubenswrapper[4861]: I0310 19:10:49.621917 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6369ade3-a8af-44b7-94be-736523f99512-config-data" (OuterVolumeSpecName: "config-data") pod "6369ade3-a8af-44b7-94be-736523f99512" (UID: "6369ade3-a8af-44b7-94be-736523f99512"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 10 19:10:49 crc kubenswrapper[4861]: I0310 19:10:49.674556 4861 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/6369ade3-a8af-44b7-94be-736523f99512-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Mar 10 19:10:49 crc kubenswrapper[4861]: I0310 19:10:49.674590 4861 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6369ade3-a8af-44b7-94be-736523f99512-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 10 19:10:49 crc kubenswrapper[4861]: I0310 19:10:49.674600 4861 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6369ade3-a8af-44b7-94be-736523f99512-scripts\") on node \"crc\" DevicePath \"\"" Mar 10 19:10:49 crc kubenswrapper[4861]: I0310 19:10:49.674613 4861 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6369ade3-a8af-44b7-94be-736523f99512-config-data\") on node \"crc\" DevicePath \"\"" Mar 10 19:10:49 crc kubenswrapper[4861]: I0310 19:10:49.674647 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nfm6h\" (UniqueName: \"kubernetes.io/projected/6369ade3-a8af-44b7-94be-736523f99512-kube-api-access-nfm6h\") on node \"crc\" DevicePath \"\"" Mar 10 19:10:49 crc kubenswrapper[4861]: I0310 19:10:49.807265 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-7d74d76988-ck77x" Mar 10 19:10:50 crc kubenswrapper[4861]: I0310 19:10:50.030409 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-2x6cv" event={"ID":"6369ade3-a8af-44b7-94be-736523f99512","Type":"ContainerDied","Data":"cf9e6865dd7d0daa84ec837f32ef1fa23a5ef916ee9e8a100f0f617ecef92a12"} Mar 10 19:10:50 crc kubenswrapper[4861]: I0310 19:10:50.030505 4861 pod_container_deletor.go:80] "Container not found 
in pod's containers" containerID="cf9e6865dd7d0daa84ec837f32ef1fa23a5ef916ee9e8a100f0f617ecef92a12" Mar 10 19:10:50 crc kubenswrapper[4861]: I0310 19:10:50.030606 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-2x6cv" Mar 10 19:10:50 crc kubenswrapper[4861]: I0310 19:10:50.162297 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-7d74d76988-ck77x" Mar 10 19:10:50 crc kubenswrapper[4861]: I0310 19:10:50.311773 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Mar 10 19:10:50 crc kubenswrapper[4861]: E0310 19:10:50.312206 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d1d0fe2f-b350-4bcd-8f3b-309092093033" containerName="barbican-worker" Mar 10 19:10:50 crc kubenswrapper[4861]: I0310 19:10:50.312220 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="d1d0fe2f-b350-4bcd-8f3b-309092093033" containerName="barbican-worker" Mar 10 19:10:50 crc kubenswrapper[4861]: E0310 19:10:50.312230 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6369ade3-a8af-44b7-94be-736523f99512" containerName="cinder-db-sync" Mar 10 19:10:50 crc kubenswrapper[4861]: I0310 19:10:50.312235 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="6369ade3-a8af-44b7-94be-736523f99512" containerName="cinder-db-sync" Mar 10 19:10:50 crc kubenswrapper[4861]: E0310 19:10:50.312242 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9fb93010-cc9e-4984-91db-e36b6605fa2b" containerName="dnsmasq-dns" Mar 10 19:10:50 crc kubenswrapper[4861]: I0310 19:10:50.312248 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="9fb93010-cc9e-4984-91db-e36b6605fa2b" containerName="dnsmasq-dns" Mar 10 19:10:50 crc kubenswrapper[4861]: E0310 19:10:50.312272 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="821473ca-1fe9-4299-b5ff-d2a202fee1cc" 
containerName="barbican-keystone-listener" Mar 10 19:10:50 crc kubenswrapper[4861]: I0310 19:10:50.312278 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="821473ca-1fe9-4299-b5ff-d2a202fee1cc" containerName="barbican-keystone-listener" Mar 10 19:10:50 crc kubenswrapper[4861]: E0310 19:10:50.312287 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="28b3646a-3fb7-489b-b91f-fe3a0260c5c7" containerName="barbican-api-log" Mar 10 19:10:50 crc kubenswrapper[4861]: I0310 19:10:50.312292 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="28b3646a-3fb7-489b-b91f-fe3a0260c5c7" containerName="barbican-api-log" Mar 10 19:10:50 crc kubenswrapper[4861]: E0310 19:10:50.312301 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9fb93010-cc9e-4984-91db-e36b6605fa2b" containerName="init" Mar 10 19:10:50 crc kubenswrapper[4861]: I0310 19:10:50.312306 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="9fb93010-cc9e-4984-91db-e36b6605fa2b" containerName="init" Mar 10 19:10:50 crc kubenswrapper[4861]: E0310 19:10:50.312318 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="821473ca-1fe9-4299-b5ff-d2a202fee1cc" containerName="barbican-keystone-listener-log" Mar 10 19:10:50 crc kubenswrapper[4861]: I0310 19:10:50.312324 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="821473ca-1fe9-4299-b5ff-d2a202fee1cc" containerName="barbican-keystone-listener-log" Mar 10 19:10:50 crc kubenswrapper[4861]: E0310 19:10:50.312342 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="28b3646a-3fb7-489b-b91f-fe3a0260c5c7" containerName="barbican-api" Mar 10 19:10:50 crc kubenswrapper[4861]: I0310 19:10:50.312347 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="28b3646a-3fb7-489b-b91f-fe3a0260c5c7" containerName="barbican-api" Mar 10 19:10:50 crc kubenswrapper[4861]: E0310 19:10:50.312361 4861 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="d1d0fe2f-b350-4bcd-8f3b-309092093033" containerName="barbican-worker-log" Mar 10 19:10:50 crc kubenswrapper[4861]: I0310 19:10:50.312366 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="d1d0fe2f-b350-4bcd-8f3b-309092093033" containerName="barbican-worker-log" Mar 10 19:10:50 crc kubenswrapper[4861]: I0310 19:10:50.312505 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="28b3646a-3fb7-489b-b91f-fe3a0260c5c7" containerName="barbican-api" Mar 10 19:10:50 crc kubenswrapper[4861]: I0310 19:10:50.312519 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="9fb93010-cc9e-4984-91db-e36b6605fa2b" containerName="dnsmasq-dns" Mar 10 19:10:50 crc kubenswrapper[4861]: I0310 19:10:50.312530 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="821473ca-1fe9-4299-b5ff-d2a202fee1cc" containerName="barbican-keystone-listener-log" Mar 10 19:10:50 crc kubenswrapper[4861]: I0310 19:10:50.312540 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="821473ca-1fe9-4299-b5ff-d2a202fee1cc" containerName="barbican-keystone-listener" Mar 10 19:10:50 crc kubenswrapper[4861]: I0310 19:10:50.312549 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="6369ade3-a8af-44b7-94be-736523f99512" containerName="cinder-db-sync" Mar 10 19:10:50 crc kubenswrapper[4861]: I0310 19:10:50.312564 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="d1d0fe2f-b350-4bcd-8f3b-309092093033" containerName="barbican-worker-log" Mar 10 19:10:50 crc kubenswrapper[4861]: I0310 19:10:50.312570 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="28b3646a-3fb7-489b-b91f-fe3a0260c5c7" containerName="barbican-api-log" Mar 10 19:10:50 crc kubenswrapper[4861]: I0310 19:10:50.312580 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="d1d0fe2f-b350-4bcd-8f3b-309092093033" containerName="barbican-worker" Mar 10 19:10:50 crc kubenswrapper[4861]: I0310 19:10:50.313506 4861 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Mar 10 19:10:50 crc kubenswrapper[4861]: I0310 19:10:50.321150 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Mar 10 19:10:50 crc kubenswrapper[4861]: I0310 19:10:50.321233 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-sd6lp" Mar 10 19:10:50 crc kubenswrapper[4861]: I0310 19:10:50.321339 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Mar 10 19:10:50 crc kubenswrapper[4861]: I0310 19:10:50.321462 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Mar 10 19:10:50 crc kubenswrapper[4861]: I0310 19:10:50.347759 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Mar 10 19:10:50 crc kubenswrapper[4861]: I0310 19:10:50.363304 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7b8fcc65cc-66b8w"] Mar 10 19:10:50 crc kubenswrapper[4861]: I0310 19:10:50.365036 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7b8fcc65cc-66b8w" Mar 10 19:10:50 crc kubenswrapper[4861]: I0310 19:10:50.395336 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1839f77d-af3b-46f9-87f9-3fb81e3daa90-dns-svc\") pod \"dnsmasq-dns-7b8fcc65cc-66b8w\" (UID: \"1839f77d-af3b-46f9-87f9-3fb81e3daa90\") " pod="openstack/dnsmasq-dns-7b8fcc65cc-66b8w" Mar 10 19:10:50 crc kubenswrapper[4861]: I0310 19:10:50.395604 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fb49c733-a125-45b0-9e14-d3620e66970c-scripts\") pod \"cinder-scheduler-0\" (UID: \"fb49c733-a125-45b0-9e14-d3620e66970c\") " pod="openstack/cinder-scheduler-0" Mar 10 19:10:50 crc kubenswrapper[4861]: I0310 19:10:50.395637 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1839f77d-af3b-46f9-87f9-3fb81e3daa90-ovsdbserver-sb\") pod \"dnsmasq-dns-7b8fcc65cc-66b8w\" (UID: \"1839f77d-af3b-46f9-87f9-3fb81e3daa90\") " pod="openstack/dnsmasq-dns-7b8fcc65cc-66b8w" Mar 10 19:10:50 crc kubenswrapper[4861]: I0310 19:10:50.395659 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5jq9r\" (UniqueName: \"kubernetes.io/projected/1839f77d-af3b-46f9-87f9-3fb81e3daa90-kube-api-access-5jq9r\") pod \"dnsmasq-dns-7b8fcc65cc-66b8w\" (UID: \"1839f77d-af3b-46f9-87f9-3fb81e3daa90\") " pod="openstack/dnsmasq-dns-7b8fcc65cc-66b8w" Mar 10 19:10:50 crc kubenswrapper[4861]: I0310 19:10:50.395716 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fb49c733-a125-45b0-9e14-d3620e66970c-config-data\") pod \"cinder-scheduler-0\" (UID: 
\"fb49c733-a125-45b0-9e14-d3620e66970c\") " pod="openstack/cinder-scheduler-0" Mar 10 19:10:50 crc kubenswrapper[4861]: I0310 19:10:50.395731 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1839f77d-af3b-46f9-87f9-3fb81e3daa90-config\") pod \"dnsmasq-dns-7b8fcc65cc-66b8w\" (UID: \"1839f77d-af3b-46f9-87f9-3fb81e3daa90\") " pod="openstack/dnsmasq-dns-7b8fcc65cc-66b8w" Mar 10 19:10:50 crc kubenswrapper[4861]: I0310 19:10:50.395749 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/fb49c733-a125-45b0-9e14-d3620e66970c-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"fb49c733-a125-45b0-9e14-d3620e66970c\") " pod="openstack/cinder-scheduler-0" Mar 10 19:10:50 crc kubenswrapper[4861]: I0310 19:10:50.395799 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1839f77d-af3b-46f9-87f9-3fb81e3daa90-dns-swift-storage-0\") pod \"dnsmasq-dns-7b8fcc65cc-66b8w\" (UID: \"1839f77d-af3b-46f9-87f9-3fb81e3daa90\") " pod="openstack/dnsmasq-dns-7b8fcc65cc-66b8w" Mar 10 19:10:50 crc kubenswrapper[4861]: I0310 19:10:50.395815 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1839f77d-af3b-46f9-87f9-3fb81e3daa90-ovsdbserver-nb\") pod \"dnsmasq-dns-7b8fcc65cc-66b8w\" (UID: \"1839f77d-af3b-46f9-87f9-3fb81e3daa90\") " pod="openstack/dnsmasq-dns-7b8fcc65cc-66b8w" Mar 10 19:10:50 crc kubenswrapper[4861]: I0310 19:10:50.395854 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fb49c733-a125-45b0-9e14-d3620e66970c-config-data-custom\") pod \"cinder-scheduler-0\" (UID: 
\"fb49c733-a125-45b0-9e14-d3620e66970c\") " pod="openstack/cinder-scheduler-0" Mar 10 19:10:50 crc kubenswrapper[4861]: I0310 19:10:50.395878 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kztkp\" (UniqueName: \"kubernetes.io/projected/fb49c733-a125-45b0-9e14-d3620e66970c-kube-api-access-kztkp\") pod \"cinder-scheduler-0\" (UID: \"fb49c733-a125-45b0-9e14-d3620e66970c\") " pod="openstack/cinder-scheduler-0" Mar 10 19:10:50 crc kubenswrapper[4861]: I0310 19:10:50.395897 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fb49c733-a125-45b0-9e14-d3620e66970c-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"fb49c733-a125-45b0-9e14-d3620e66970c\") " pod="openstack/cinder-scheduler-0" Mar 10 19:10:50 crc kubenswrapper[4861]: I0310 19:10:50.395990 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7b8fcc65cc-66b8w"] Mar 10 19:10:50 crc kubenswrapper[4861]: I0310 19:10:50.469688 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Mar 10 19:10:50 crc kubenswrapper[4861]: I0310 19:10:50.471396 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Mar 10 19:10:50 crc kubenswrapper[4861]: I0310 19:10:50.474027 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Mar 10 19:10:50 crc kubenswrapper[4861]: I0310 19:10:50.495629 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Mar 10 19:10:50 crc kubenswrapper[4861]: I0310 19:10:50.496822 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/7f628fa1-5ccd-4dc9-a402-631e290b5ace-etc-machine-id\") pod \"cinder-api-0\" (UID: \"7f628fa1-5ccd-4dc9-a402-631e290b5ace\") " pod="openstack/cinder-api-0" Mar 10 19:10:50 crc kubenswrapper[4861]: I0310 19:10:50.496874 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fb49c733-a125-45b0-9e14-d3620e66970c-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"fb49c733-a125-45b0-9e14-d3620e66970c\") " pod="openstack/cinder-scheduler-0" Mar 10 19:10:50 crc kubenswrapper[4861]: I0310 19:10:50.496904 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kztkp\" (UniqueName: \"kubernetes.io/projected/fb49c733-a125-45b0-9e14-d3620e66970c-kube-api-access-kztkp\") pod \"cinder-scheduler-0\" (UID: \"fb49c733-a125-45b0-9e14-d3620e66970c\") " pod="openstack/cinder-scheduler-0" Mar 10 19:10:50 crc kubenswrapper[4861]: I0310 19:10:50.496922 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fb49c733-a125-45b0-9e14-d3620e66970c-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"fb49c733-a125-45b0-9e14-d3620e66970c\") " pod="openstack/cinder-scheduler-0" Mar 10 19:10:50 crc kubenswrapper[4861]: I0310 19:10:50.496936 4861 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7f628fa1-5ccd-4dc9-a402-631e290b5ace-scripts\") pod \"cinder-api-0\" (UID: \"7f628fa1-5ccd-4dc9-a402-631e290b5ace\") " pod="openstack/cinder-api-0" Mar 10 19:10:50 crc kubenswrapper[4861]: I0310 19:10:50.496955 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1839f77d-af3b-46f9-87f9-3fb81e3daa90-dns-svc\") pod \"dnsmasq-dns-7b8fcc65cc-66b8w\" (UID: \"1839f77d-af3b-46f9-87f9-3fb81e3daa90\") " pod="openstack/dnsmasq-dns-7b8fcc65cc-66b8w" Mar 10 19:10:50 crc kubenswrapper[4861]: I0310 19:10:50.496977 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fb49c733-a125-45b0-9e14-d3620e66970c-scripts\") pod \"cinder-scheduler-0\" (UID: \"fb49c733-a125-45b0-9e14-d3620e66970c\") " pod="openstack/cinder-scheduler-0" Mar 10 19:10:50 crc kubenswrapper[4861]: I0310 19:10:50.497005 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1839f77d-af3b-46f9-87f9-3fb81e3daa90-ovsdbserver-sb\") pod \"dnsmasq-dns-7b8fcc65cc-66b8w\" (UID: \"1839f77d-af3b-46f9-87f9-3fb81e3daa90\") " pod="openstack/dnsmasq-dns-7b8fcc65cc-66b8w" Mar 10 19:10:50 crc kubenswrapper[4861]: I0310 19:10:50.497021 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7f628fa1-5ccd-4dc9-a402-631e290b5ace-config-data\") pod \"cinder-api-0\" (UID: \"7f628fa1-5ccd-4dc9-a402-631e290b5ace\") " pod="openstack/cinder-api-0" Mar 10 19:10:50 crc kubenswrapper[4861]: I0310 19:10:50.497039 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5jq9r\" (UniqueName: 
\"kubernetes.io/projected/1839f77d-af3b-46f9-87f9-3fb81e3daa90-kube-api-access-5jq9r\") pod \"dnsmasq-dns-7b8fcc65cc-66b8w\" (UID: \"1839f77d-af3b-46f9-87f9-3fb81e3daa90\") " pod="openstack/dnsmasq-dns-7b8fcc65cc-66b8w"
Mar 10 19:10:50 crc kubenswrapper[4861]: I0310 19:10:50.497062 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7f628fa1-5ccd-4dc9-a402-631e290b5ace-config-data-custom\") pod \"cinder-api-0\" (UID: \"7f628fa1-5ccd-4dc9-a402-631e290b5ace\") " pod="openstack/cinder-api-0"
Mar 10 19:10:50 crc kubenswrapper[4861]: I0310 19:10:50.497110 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fb49c733-a125-45b0-9e14-d3620e66970c-config-data\") pod \"cinder-scheduler-0\" (UID: \"fb49c733-a125-45b0-9e14-d3620e66970c\") " pod="openstack/cinder-scheduler-0"
Mar 10 19:10:50 crc kubenswrapper[4861]: I0310 19:10:50.497127 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1839f77d-af3b-46f9-87f9-3fb81e3daa90-config\") pod \"dnsmasq-dns-7b8fcc65cc-66b8w\" (UID: \"1839f77d-af3b-46f9-87f9-3fb81e3daa90\") " pod="openstack/dnsmasq-dns-7b8fcc65cc-66b8w"
Mar 10 19:10:50 crc kubenswrapper[4861]: I0310 19:10:50.497150 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/fb49c733-a125-45b0-9e14-d3620e66970c-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"fb49c733-a125-45b0-9e14-d3620e66970c\") " pod="openstack/cinder-scheduler-0"
Mar 10 19:10:50 crc kubenswrapper[4861]: I0310 19:10:50.497174 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7f628fa1-5ccd-4dc9-a402-631e290b5ace-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"7f628fa1-5ccd-4dc9-a402-631e290b5ace\") " pod="openstack/cinder-api-0"
Mar 10 19:10:50 crc kubenswrapper[4861]: I0310 19:10:50.497201 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ggxjz\" (UniqueName: \"kubernetes.io/projected/7f628fa1-5ccd-4dc9-a402-631e290b5ace-kube-api-access-ggxjz\") pod \"cinder-api-0\" (UID: \"7f628fa1-5ccd-4dc9-a402-631e290b5ace\") " pod="openstack/cinder-api-0"
Mar 10 19:10:50 crc kubenswrapper[4861]: I0310 19:10:50.497232 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7f628fa1-5ccd-4dc9-a402-631e290b5ace-logs\") pod \"cinder-api-0\" (UID: \"7f628fa1-5ccd-4dc9-a402-631e290b5ace\") " pod="openstack/cinder-api-0"
Mar 10 19:10:50 crc kubenswrapper[4861]: I0310 19:10:50.497252 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1839f77d-af3b-46f9-87f9-3fb81e3daa90-dns-swift-storage-0\") pod \"dnsmasq-dns-7b8fcc65cc-66b8w\" (UID: \"1839f77d-af3b-46f9-87f9-3fb81e3daa90\") " pod="openstack/dnsmasq-dns-7b8fcc65cc-66b8w"
Mar 10 19:10:50 crc kubenswrapper[4861]: I0310 19:10:50.497269 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1839f77d-af3b-46f9-87f9-3fb81e3daa90-ovsdbserver-nb\") pod \"dnsmasq-dns-7b8fcc65cc-66b8w\" (UID: \"1839f77d-af3b-46f9-87f9-3fb81e3daa90\") " pod="openstack/dnsmasq-dns-7b8fcc65cc-66b8w"
Mar 10 19:10:50 crc kubenswrapper[4861]: I0310 19:10:50.498109 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1839f77d-af3b-46f9-87f9-3fb81e3daa90-ovsdbserver-nb\") pod \"dnsmasq-dns-7b8fcc65cc-66b8w\" (UID: \"1839f77d-af3b-46f9-87f9-3fb81e3daa90\") " pod="openstack/dnsmasq-dns-7b8fcc65cc-66b8w"
Mar 10 19:10:50 crc kubenswrapper[4861]: I0310 19:10:50.499202 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1839f77d-af3b-46f9-87f9-3fb81e3daa90-config\") pod \"dnsmasq-dns-7b8fcc65cc-66b8w\" (UID: \"1839f77d-af3b-46f9-87f9-3fb81e3daa90\") " pod="openstack/dnsmasq-dns-7b8fcc65cc-66b8w"
Mar 10 19:10:50 crc kubenswrapper[4861]: I0310 19:10:50.499241 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/fb49c733-a125-45b0-9e14-d3620e66970c-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"fb49c733-a125-45b0-9e14-d3620e66970c\") " pod="openstack/cinder-scheduler-0"
Mar 10 19:10:50 crc kubenswrapper[4861]: I0310 19:10:50.499777 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1839f77d-af3b-46f9-87f9-3fb81e3daa90-dns-swift-storage-0\") pod \"dnsmasq-dns-7b8fcc65cc-66b8w\" (UID: \"1839f77d-af3b-46f9-87f9-3fb81e3daa90\") " pod="openstack/dnsmasq-dns-7b8fcc65cc-66b8w"
Mar 10 19:10:50 crc kubenswrapper[4861]: I0310 19:10:50.500947 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1839f77d-af3b-46f9-87f9-3fb81e3daa90-ovsdbserver-sb\") pod \"dnsmasq-dns-7b8fcc65cc-66b8w\" (UID: \"1839f77d-af3b-46f9-87f9-3fb81e3daa90\") " pod="openstack/dnsmasq-dns-7b8fcc65cc-66b8w"
Mar 10 19:10:50 crc kubenswrapper[4861]: I0310 19:10:50.501053 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1839f77d-af3b-46f9-87f9-3fb81e3daa90-dns-svc\") pod \"dnsmasq-dns-7b8fcc65cc-66b8w\" (UID: \"1839f77d-af3b-46f9-87f9-3fb81e3daa90\") " pod="openstack/dnsmasq-dns-7b8fcc65cc-66b8w"
Mar 10 19:10:50 crc kubenswrapper[4861]: I0310 19:10:50.514022 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fb49c733-a125-45b0-9e14-d3620e66970c-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"fb49c733-a125-45b0-9e14-d3620e66970c\") " pod="openstack/cinder-scheduler-0"
Mar 10 19:10:50 crc kubenswrapper[4861]: I0310 19:10:50.519442 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fb49c733-a125-45b0-9e14-d3620e66970c-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"fb49c733-a125-45b0-9e14-d3620e66970c\") " pod="openstack/cinder-scheduler-0"
Mar 10 19:10:50 crc kubenswrapper[4861]: I0310 19:10:50.524498 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fb49c733-a125-45b0-9e14-d3620e66970c-config-data\") pod \"cinder-scheduler-0\" (UID: \"fb49c733-a125-45b0-9e14-d3620e66970c\") " pod="openstack/cinder-scheduler-0"
Mar 10 19:10:50 crc kubenswrapper[4861]: I0310 19:10:50.524909 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fb49c733-a125-45b0-9e14-d3620e66970c-scripts\") pod \"cinder-scheduler-0\" (UID: \"fb49c733-a125-45b0-9e14-d3620e66970c\") " pod="openstack/cinder-scheduler-0"
Mar 10 19:10:50 crc kubenswrapper[4861]: I0310 19:10:50.526267 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5jq9r\" (UniqueName: \"kubernetes.io/projected/1839f77d-af3b-46f9-87f9-3fb81e3daa90-kube-api-access-5jq9r\") pod \"dnsmasq-dns-7b8fcc65cc-66b8w\" (UID: \"1839f77d-af3b-46f9-87f9-3fb81e3daa90\") " pod="openstack/dnsmasq-dns-7b8fcc65cc-66b8w"
Mar 10 19:10:50 crc kubenswrapper[4861]: I0310 19:10:50.572044 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kztkp\" (UniqueName: \"kubernetes.io/projected/fb49c733-a125-45b0-9e14-d3620e66970c-kube-api-access-kztkp\") pod \"cinder-scheduler-0\" (UID: \"fb49c733-a125-45b0-9e14-d3620e66970c\") " pod="openstack/cinder-scheduler-0"
Mar 10 19:10:50 crc kubenswrapper[4861]: I0310 19:10:50.600697 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7f628fa1-5ccd-4dc9-a402-631e290b5ace-config-data\") pod \"cinder-api-0\" (UID: \"7f628fa1-5ccd-4dc9-a402-631e290b5ace\") " pod="openstack/cinder-api-0"
Mar 10 19:10:50 crc kubenswrapper[4861]: I0310 19:10:50.600764 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7f628fa1-5ccd-4dc9-a402-631e290b5ace-config-data-custom\") pod \"cinder-api-0\" (UID: \"7f628fa1-5ccd-4dc9-a402-631e290b5ace\") " pod="openstack/cinder-api-0"
Mar 10 19:10:50 crc kubenswrapper[4861]: I0310 19:10:50.600814 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7f628fa1-5ccd-4dc9-a402-631e290b5ace-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"7f628fa1-5ccd-4dc9-a402-631e290b5ace\") " pod="openstack/cinder-api-0"
Mar 10 19:10:50 crc kubenswrapper[4861]: I0310 19:10:50.600840 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ggxjz\" (UniqueName: \"kubernetes.io/projected/7f628fa1-5ccd-4dc9-a402-631e290b5ace-kube-api-access-ggxjz\") pod \"cinder-api-0\" (UID: \"7f628fa1-5ccd-4dc9-a402-631e290b5ace\") " pod="openstack/cinder-api-0"
Mar 10 19:10:50 crc kubenswrapper[4861]: I0310 19:10:50.600871 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7f628fa1-5ccd-4dc9-a402-631e290b5ace-logs\") pod \"cinder-api-0\" (UID: \"7f628fa1-5ccd-4dc9-a402-631e290b5ace\") " pod="openstack/cinder-api-0"
Mar 10 19:10:50 crc kubenswrapper[4861]: I0310 19:10:50.600914 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/7f628fa1-5ccd-4dc9-a402-631e290b5ace-etc-machine-id\") pod \"cinder-api-0\" (UID: \"7f628fa1-5ccd-4dc9-a402-631e290b5ace\") " pod="openstack/cinder-api-0"
Mar 10 19:10:50 crc kubenswrapper[4861]: I0310 19:10:50.600942 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7f628fa1-5ccd-4dc9-a402-631e290b5ace-scripts\") pod \"cinder-api-0\" (UID: \"7f628fa1-5ccd-4dc9-a402-631e290b5ace\") " pod="openstack/cinder-api-0"
Mar 10 19:10:50 crc kubenswrapper[4861]: I0310 19:10:50.603083 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7f628fa1-5ccd-4dc9-a402-631e290b5ace-logs\") pod \"cinder-api-0\" (UID: \"7f628fa1-5ccd-4dc9-a402-631e290b5ace\") " pod="openstack/cinder-api-0"
Mar 10 19:10:50 crc kubenswrapper[4861]: I0310 19:10:50.603857 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/7f628fa1-5ccd-4dc9-a402-631e290b5ace-etc-machine-id\") pod \"cinder-api-0\" (UID: \"7f628fa1-5ccd-4dc9-a402-631e290b5ace\") " pod="openstack/cinder-api-0"
Mar 10 19:10:50 crc kubenswrapper[4861]: I0310 19:10:50.604943 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7f628fa1-5ccd-4dc9-a402-631e290b5ace-config-data-custom\") pod \"cinder-api-0\" (UID: \"7f628fa1-5ccd-4dc9-a402-631e290b5ace\") " pod="openstack/cinder-api-0"
Mar 10 19:10:50 crc kubenswrapper[4861]: I0310 19:10:50.608229 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7f628fa1-5ccd-4dc9-a402-631e290b5ace-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"7f628fa1-5ccd-4dc9-a402-631e290b5ace\") " pod="openstack/cinder-api-0"
Mar 10 19:10:50 crc kubenswrapper[4861]: I0310 19:10:50.608662 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7f628fa1-5ccd-4dc9-a402-631e290b5ace-scripts\") pod \"cinder-api-0\" (UID: \"7f628fa1-5ccd-4dc9-a402-631e290b5ace\") " pod="openstack/cinder-api-0"
Mar 10 19:10:50 crc kubenswrapper[4861]: I0310 19:10:50.612547 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7f628fa1-5ccd-4dc9-a402-631e290b5ace-config-data\") pod \"cinder-api-0\" (UID: \"7f628fa1-5ccd-4dc9-a402-631e290b5ace\") " pod="openstack/cinder-api-0"
Mar 10 19:10:50 crc kubenswrapper[4861]: I0310 19:10:50.615644 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ggxjz\" (UniqueName: \"kubernetes.io/projected/7f628fa1-5ccd-4dc9-a402-631e290b5ace-kube-api-access-ggxjz\") pod \"cinder-api-0\" (UID: \"7f628fa1-5ccd-4dc9-a402-631e290b5ace\") " pod="openstack/cinder-api-0"
Mar 10 19:10:50 crc kubenswrapper[4861]: I0310 19:10:50.641274 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0"
Mar 10 19:10:50 crc kubenswrapper[4861]: I0310 19:10:50.669974 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0"
Mar 10 19:10:50 crc kubenswrapper[4861]: I0310 19:10:50.672111 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Mar 10 19:10:50 crc kubenswrapper[4861]: I0310 19:10:50.681036 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7b8fcc65cc-66b8w"
Mar 10 19:10:50 crc kubenswrapper[4861]: I0310 19:10:50.708739 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5bd7n\" (UniqueName: \"kubernetes.io/projected/a8b38f86-cca2-4d33-bcea-b93c80da6490-kube-api-access-5bd7n\") pod \"a8b38f86-cca2-4d33-bcea-b93c80da6490\" (UID: \"a8b38f86-cca2-4d33-bcea-b93c80da6490\") "
Mar 10 19:10:50 crc kubenswrapper[4861]: I0310 19:10:50.708827 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a8b38f86-cca2-4d33-bcea-b93c80da6490-combined-ca-bundle\") pod \"a8b38f86-cca2-4d33-bcea-b93c80da6490\" (UID: \"a8b38f86-cca2-4d33-bcea-b93c80da6490\") "
Mar 10 19:10:50 crc kubenswrapper[4861]: I0310 19:10:50.708958 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a8b38f86-cca2-4d33-bcea-b93c80da6490-sg-core-conf-yaml\") pod \"a8b38f86-cca2-4d33-bcea-b93c80da6490\" (UID: \"a8b38f86-cca2-4d33-bcea-b93c80da6490\") "
Mar 10 19:10:50 crc kubenswrapper[4861]: I0310 19:10:50.708981 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a8b38f86-cca2-4d33-bcea-b93c80da6490-scripts\") pod \"a8b38f86-cca2-4d33-bcea-b93c80da6490\" (UID: \"a8b38f86-cca2-4d33-bcea-b93c80da6490\") "
Mar 10 19:10:50 crc kubenswrapper[4861]: I0310 19:10:50.709031 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a8b38f86-cca2-4d33-bcea-b93c80da6490-log-httpd\") pod \"a8b38f86-cca2-4d33-bcea-b93c80da6490\" (UID: \"a8b38f86-cca2-4d33-bcea-b93c80da6490\") "
Mar 10 19:10:50 crc kubenswrapper[4861]: I0310 19:10:50.709156 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a8b38f86-cca2-4d33-bcea-b93c80da6490-config-data\") pod \"a8b38f86-cca2-4d33-bcea-b93c80da6490\" (UID: \"a8b38f86-cca2-4d33-bcea-b93c80da6490\") "
Mar 10 19:10:50 crc kubenswrapper[4861]: I0310 19:10:50.709183 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a8b38f86-cca2-4d33-bcea-b93c80da6490-run-httpd\") pod \"a8b38f86-cca2-4d33-bcea-b93c80da6490\" (UID: \"a8b38f86-cca2-4d33-bcea-b93c80da6490\") "
Mar 10 19:10:50 crc kubenswrapper[4861]: I0310 19:10:50.709532 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a8b38f86-cca2-4d33-bcea-b93c80da6490-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "a8b38f86-cca2-4d33-bcea-b93c80da6490" (UID: "a8b38f86-cca2-4d33-bcea-b93c80da6490"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 10 19:10:50 crc kubenswrapper[4861]: I0310 19:10:50.709606 4861 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a8b38f86-cca2-4d33-bcea-b93c80da6490-log-httpd\") on node \"crc\" DevicePath \"\""
Mar 10 19:10:50 crc kubenswrapper[4861]: I0310 19:10:50.710197 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a8b38f86-cca2-4d33-bcea-b93c80da6490-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "a8b38f86-cca2-4d33-bcea-b93c80da6490" (UID: "a8b38f86-cca2-4d33-bcea-b93c80da6490"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 10 19:10:50 crc kubenswrapper[4861]: I0310 19:10:50.712833 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a8b38f86-cca2-4d33-bcea-b93c80da6490-scripts" (OuterVolumeSpecName: "scripts") pod "a8b38f86-cca2-4d33-bcea-b93c80da6490" (UID: "a8b38f86-cca2-4d33-bcea-b93c80da6490"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 10 19:10:50 crc kubenswrapper[4861]: I0310 19:10:50.714481 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a8b38f86-cca2-4d33-bcea-b93c80da6490-kube-api-access-5bd7n" (OuterVolumeSpecName: "kube-api-access-5bd7n") pod "a8b38f86-cca2-4d33-bcea-b93c80da6490" (UID: "a8b38f86-cca2-4d33-bcea-b93c80da6490"). InnerVolumeSpecName "kube-api-access-5bd7n". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 10 19:10:50 crc kubenswrapper[4861]: I0310 19:10:50.783817 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a8b38f86-cca2-4d33-bcea-b93c80da6490-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "a8b38f86-cca2-4d33-bcea-b93c80da6490" (UID: "a8b38f86-cca2-4d33-bcea-b93c80da6490"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 10 19:10:50 crc kubenswrapper[4861]: I0310 19:10:50.816616 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a8b38f86-cca2-4d33-bcea-b93c80da6490-config-data" (OuterVolumeSpecName: "config-data") pod "a8b38f86-cca2-4d33-bcea-b93c80da6490" (UID: "a8b38f86-cca2-4d33-bcea-b93c80da6490"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 10 19:10:50 crc kubenswrapper[4861]: I0310 19:10:50.825156 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a8b38f86-cca2-4d33-bcea-b93c80da6490-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a8b38f86-cca2-4d33-bcea-b93c80da6490" (UID: "a8b38f86-cca2-4d33-bcea-b93c80da6490"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 10 19:10:50 crc kubenswrapper[4861]: I0310 19:10:50.826532 4861 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a8b38f86-cca2-4d33-bcea-b93c80da6490-config-data\") on node \"crc\" DevicePath \"\""
Mar 10 19:10:50 crc kubenswrapper[4861]: I0310 19:10:50.826561 4861 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a8b38f86-cca2-4d33-bcea-b93c80da6490-run-httpd\") on node \"crc\" DevicePath \"\""
Mar 10 19:10:50 crc kubenswrapper[4861]: I0310 19:10:50.826575 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5bd7n\" (UniqueName: \"kubernetes.io/projected/a8b38f86-cca2-4d33-bcea-b93c80da6490-kube-api-access-5bd7n\") on node \"crc\" DevicePath \"\""
Mar 10 19:10:50 crc kubenswrapper[4861]: I0310 19:10:50.826591 4861 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a8b38f86-cca2-4d33-bcea-b93c80da6490-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Mar 10 19:10:50 crc kubenswrapper[4861]: I0310 19:10:50.826602 4861 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a8b38f86-cca2-4d33-bcea-b93c80da6490-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\""
Mar 10 19:10:50 crc kubenswrapper[4861]: I0310 19:10:50.826612 4861 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a8b38f86-cca2-4d33-bcea-b93c80da6490-scripts\") on node \"crc\" DevicePath \"\""
Mar 10 19:10:50 crc kubenswrapper[4861]: I0310 19:10:50.987496 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9fb93010-cc9e-4984-91db-e36b6605fa2b" path="/var/lib/kubelet/pods/9fb93010-cc9e-4984-91db-e36b6605fa2b/volumes"
Mar 10 19:10:51 crc kubenswrapper[4861]: I0310 19:10:51.048565 4861 generic.go:334] "Generic (PLEG): container finished" podID="a8b38f86-cca2-4d33-bcea-b93c80da6490" containerID="c1cfd8c2d037d897aa7c7b0d467efae470cd2b432b0be43688aae6e50fc8242f" exitCode=0
Mar 10 19:10:51 crc kubenswrapper[4861]: I0310 19:10:51.048621 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a8b38f86-cca2-4d33-bcea-b93c80da6490","Type":"ContainerDied","Data":"c1cfd8c2d037d897aa7c7b0d467efae470cd2b432b0be43688aae6e50fc8242f"}
Mar 10 19:10:51 crc kubenswrapper[4861]: I0310 19:10:51.048655 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a8b38f86-cca2-4d33-bcea-b93c80da6490","Type":"ContainerDied","Data":"aac59d8ed9f3f456c8fc0f9a7f4f667d8c750e3c0ab744c4db54c987603dc388"}
Mar 10 19:10:51 crc kubenswrapper[4861]: I0310 19:10:51.048681 4861 scope.go:117] "RemoveContainer" containerID="e5be897b7b37e0d6f93877bceb33195156cb97e3d8645a5f759e001943de17a3"
Mar 10 19:10:51 crc kubenswrapper[4861]: I0310 19:10:51.048938 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Mar 10 19:10:51 crc kubenswrapper[4861]: I0310 19:10:51.091132 4861 scope.go:117] "RemoveContainer" containerID="c3583d364195930c668851631310fcf2624c721aad140d433c315bc623ae2787"
Mar 10 19:10:51 crc kubenswrapper[4861]: I0310 19:10:51.096766 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Mar 10 19:10:51 crc kubenswrapper[4861]: I0310 19:10:51.106441 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"]
Mar 10 19:10:51 crc kubenswrapper[4861]: I0310 19:10:51.114019 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"]
Mar 10 19:10:51 crc kubenswrapper[4861]: E0310 19:10:51.114366 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a8b38f86-cca2-4d33-bcea-b93c80da6490" containerName="sg-core"
Mar 10 19:10:51 crc kubenswrapper[4861]: I0310 19:10:51.114381 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="a8b38f86-cca2-4d33-bcea-b93c80da6490" containerName="sg-core"
Mar 10 19:10:51 crc kubenswrapper[4861]: E0310 19:10:51.114393 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a8b38f86-cca2-4d33-bcea-b93c80da6490" containerName="ceilometer-notification-agent"
Mar 10 19:10:51 crc kubenswrapper[4861]: I0310 19:10:51.114399 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="a8b38f86-cca2-4d33-bcea-b93c80da6490" containerName="ceilometer-notification-agent"
Mar 10 19:10:51 crc kubenswrapper[4861]: E0310 19:10:51.114431 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a8b38f86-cca2-4d33-bcea-b93c80da6490" containerName="proxy-httpd"
Mar 10 19:10:51 crc kubenswrapper[4861]: I0310 19:10:51.114454 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="a8b38f86-cca2-4d33-bcea-b93c80da6490" containerName="proxy-httpd"
Mar 10 19:10:51 crc kubenswrapper[4861]: I0310 19:10:51.114612 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="a8b38f86-cca2-4d33-bcea-b93c80da6490" containerName="proxy-httpd"
Mar 10 19:10:51 crc kubenswrapper[4861]: I0310 19:10:51.114622 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="a8b38f86-cca2-4d33-bcea-b93c80da6490" containerName="sg-core"
Mar 10 19:10:51 crc kubenswrapper[4861]: I0310 19:10:51.114635 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="a8b38f86-cca2-4d33-bcea-b93c80da6490" containerName="ceilometer-notification-agent"
Mar 10 19:10:51 crc kubenswrapper[4861]: I0310 19:10:51.116099 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Mar 10 19:10:51 crc kubenswrapper[4861]: I0310 19:10:51.119349 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts"
Mar 10 19:10:51 crc kubenswrapper[4861]: I0310 19:10:51.119509 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data"
Mar 10 19:10:51 crc kubenswrapper[4861]: I0310 19:10:51.138656 4861 scope.go:117] "RemoveContainer" containerID="c1cfd8c2d037d897aa7c7b0d467efae470cd2b432b0be43688aae6e50fc8242f"
Mar 10 19:10:51 crc kubenswrapper[4861]: I0310 19:10:51.143439 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Mar 10 19:10:51 crc kubenswrapper[4861]: I0310 19:10:51.173545 4861 scope.go:117] "RemoveContainer" containerID="e5be897b7b37e0d6f93877bceb33195156cb97e3d8645a5f759e001943de17a3"
Mar 10 19:10:51 crc kubenswrapper[4861]: E0310 19:10:51.174028 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e5be897b7b37e0d6f93877bceb33195156cb97e3d8645a5f759e001943de17a3\": container with ID starting with e5be897b7b37e0d6f93877bceb33195156cb97e3d8645a5f759e001943de17a3 not found: ID does not exist" containerID="e5be897b7b37e0d6f93877bceb33195156cb97e3d8645a5f759e001943de17a3"
Mar 10 19:10:51 crc kubenswrapper[4861]: I0310 19:10:51.174094 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e5be897b7b37e0d6f93877bceb33195156cb97e3d8645a5f759e001943de17a3"} err="failed to get container status \"e5be897b7b37e0d6f93877bceb33195156cb97e3d8645a5f759e001943de17a3\": rpc error: code = NotFound desc = could not find container \"e5be897b7b37e0d6f93877bceb33195156cb97e3d8645a5f759e001943de17a3\": container with ID starting with e5be897b7b37e0d6f93877bceb33195156cb97e3d8645a5f759e001943de17a3 not found: ID does not exist"
Mar 10 19:10:51 crc kubenswrapper[4861]: I0310 19:10:51.174123 4861 scope.go:117] "RemoveContainer" containerID="c3583d364195930c668851631310fcf2624c721aad140d433c315bc623ae2787"
Mar 10 19:10:51 crc kubenswrapper[4861]: E0310 19:10:51.177953 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c3583d364195930c668851631310fcf2624c721aad140d433c315bc623ae2787\": container with ID starting with c3583d364195930c668851631310fcf2624c721aad140d433c315bc623ae2787 not found: ID does not exist" containerID="c3583d364195930c668851631310fcf2624c721aad140d433c315bc623ae2787"
Mar 10 19:10:51 crc kubenswrapper[4861]: I0310 19:10:51.177981 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c3583d364195930c668851631310fcf2624c721aad140d433c315bc623ae2787"} err="failed to get container status \"c3583d364195930c668851631310fcf2624c721aad140d433c315bc623ae2787\": rpc error: code = NotFound desc = could not find container \"c3583d364195930c668851631310fcf2624c721aad140d433c315bc623ae2787\": container with ID starting with c3583d364195930c668851631310fcf2624c721aad140d433c315bc623ae2787 not found: ID does not exist"
Mar 10 19:10:51 crc kubenswrapper[4861]: I0310 19:10:51.177996 4861 scope.go:117] "RemoveContainer" containerID="c1cfd8c2d037d897aa7c7b0d467efae470cd2b432b0be43688aae6e50fc8242f"
Mar 10 19:10:51 crc kubenswrapper[4861]: E0310 19:10:51.182701 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c1cfd8c2d037d897aa7c7b0d467efae470cd2b432b0be43688aae6e50fc8242f\": container with ID starting with c1cfd8c2d037d897aa7c7b0d467efae470cd2b432b0be43688aae6e50fc8242f not found: ID does not exist" containerID="c1cfd8c2d037d897aa7c7b0d467efae470cd2b432b0be43688aae6e50fc8242f"
Mar 10 19:10:51 crc kubenswrapper[4861]: I0310 19:10:51.182748 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c1cfd8c2d037d897aa7c7b0d467efae470cd2b432b0be43688aae6e50fc8242f"} err="failed to get container status \"c1cfd8c2d037d897aa7c7b0d467efae470cd2b432b0be43688aae6e50fc8242f\": rpc error: code = NotFound desc = could not find container \"c1cfd8c2d037d897aa7c7b0d467efae470cd2b432b0be43688aae6e50fc8242f\": container with ID starting with c1cfd8c2d037d897aa7c7b0d467efae470cd2b432b0be43688aae6e50fc8242f not found: ID does not exist"
Mar 10 19:10:51 crc kubenswrapper[4861]: I0310 19:10:51.214660 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"]
Mar 10 19:10:51 crc kubenswrapper[4861]: I0310 19:10:51.235650 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5c22ba49-46f8-4ad7-9f90-b45e34943385-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"5c22ba49-46f8-4ad7-9f90-b45e34943385\") " pod="openstack/ceilometer-0"
Mar 10 19:10:51 crc kubenswrapper[4861]: I0310 19:10:51.235926 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5phdd\" (UniqueName: \"kubernetes.io/projected/5c22ba49-46f8-4ad7-9f90-b45e34943385-kube-api-access-5phdd\") pod \"ceilometer-0\" (UID: \"5c22ba49-46f8-4ad7-9f90-b45e34943385\") " pod="openstack/ceilometer-0"
Mar 10 19:10:51 crc kubenswrapper[4861]: I0310 19:10:51.235963 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5c22ba49-46f8-4ad7-9f90-b45e34943385-log-httpd\") pod \"ceilometer-0\" (UID: \"5c22ba49-46f8-4ad7-9f90-b45e34943385\") " pod="openstack/ceilometer-0"
Mar 10 19:10:51 crc kubenswrapper[4861]: I0310 19:10:51.236040 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5c22ba49-46f8-4ad7-9f90-b45e34943385-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"5c22ba49-46f8-4ad7-9f90-b45e34943385\") " pod="openstack/ceilometer-0"
Mar 10 19:10:51 crc kubenswrapper[4861]: I0310 19:10:51.236077 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5c22ba49-46f8-4ad7-9f90-b45e34943385-run-httpd\") pod \"ceilometer-0\" (UID: \"5c22ba49-46f8-4ad7-9f90-b45e34943385\") " pod="openstack/ceilometer-0"
Mar 10 19:10:51 crc kubenswrapper[4861]: I0310 19:10:51.236127 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5c22ba49-46f8-4ad7-9f90-b45e34943385-config-data\") pod \"ceilometer-0\" (UID: \"5c22ba49-46f8-4ad7-9f90-b45e34943385\") " pod="openstack/ceilometer-0"
Mar 10 19:10:51 crc kubenswrapper[4861]: I0310 19:10:51.236146 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5c22ba49-46f8-4ad7-9f90-b45e34943385-scripts\") pod \"ceilometer-0\" (UID: \"5c22ba49-46f8-4ad7-9f90-b45e34943385\") " pod="openstack/ceilometer-0"
Mar 10 19:10:51 crc kubenswrapper[4861]: I0310 19:10:51.321413 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7b8fcc65cc-66b8w"]
Mar 10 19:10:51 crc kubenswrapper[4861]: I0310 19:10:51.336283 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"]
Mar 10 19:10:51 crc kubenswrapper[4861]: W0310 19:10:51.336930 4861 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7f628fa1_5ccd_4dc9_a402_631e290b5ace.slice/crio-34fed35e50063ff922d5b4b7d58def64c2e4172a860433af4e40ddbdcf5d33a0 WatchSource:0}: Error finding container 34fed35e50063ff922d5b4b7d58def64c2e4172a860433af4e40ddbdcf5d33a0: Status 404 returned error can't find the container with id 34fed35e50063ff922d5b4b7d58def64c2e4172a860433af4e40ddbdcf5d33a0
Mar 10 19:10:51 crc kubenswrapper[4861]: I0310 19:10:51.337567 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5c22ba49-46f8-4ad7-9f90-b45e34943385-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"5c22ba49-46f8-4ad7-9f90-b45e34943385\") " pod="openstack/ceilometer-0"
Mar 10 19:10:51 crc kubenswrapper[4861]: I0310 19:10:51.337654 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5phdd\" (UniqueName: \"kubernetes.io/projected/5c22ba49-46f8-4ad7-9f90-b45e34943385-kube-api-access-5phdd\") pod \"ceilometer-0\" (UID: \"5c22ba49-46f8-4ad7-9f90-b45e34943385\") " pod="openstack/ceilometer-0"
Mar 10 19:10:51 crc kubenswrapper[4861]: I0310 19:10:51.337731 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5c22ba49-46f8-4ad7-9f90-b45e34943385-log-httpd\") pod \"ceilometer-0\" (UID: \"5c22ba49-46f8-4ad7-9f90-b45e34943385\") " pod="openstack/ceilometer-0"
Mar 10 19:10:51 crc kubenswrapper[4861]: I0310 19:10:51.337811 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5c22ba49-46f8-4ad7-9f90-b45e34943385-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"5c22ba49-46f8-4ad7-9f90-b45e34943385\") " pod="openstack/ceilometer-0"
Mar 10 19:10:51 crc kubenswrapper[4861]: I0310 19:10:51.337887 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5c22ba49-46f8-4ad7-9f90-b45e34943385-run-httpd\") pod \"ceilometer-0\" (UID: \"5c22ba49-46f8-4ad7-9f90-b45e34943385\") " pod="openstack/ceilometer-0"
Mar 10 19:10:51 crc kubenswrapper[4861]: I0310 19:10:51.337908 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5c22ba49-46f8-4ad7-9f90-b45e34943385-config-data\") pod \"ceilometer-0\" (UID: \"5c22ba49-46f8-4ad7-9f90-b45e34943385\") " pod="openstack/ceilometer-0"
Mar 10 19:10:51 crc kubenswrapper[4861]: I0310 19:10:51.337945 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5c22ba49-46f8-4ad7-9f90-b45e34943385-scripts\") pod \"ceilometer-0\" (UID: \"5c22ba49-46f8-4ad7-9f90-b45e34943385\") " pod="openstack/ceilometer-0"
Mar 10 19:10:51 crc kubenswrapper[4861]: I0310 19:10:51.339064 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5c22ba49-46f8-4ad7-9f90-b45e34943385-log-httpd\") pod \"ceilometer-0\" (UID: \"5c22ba49-46f8-4ad7-9f90-b45e34943385\") " pod="openstack/ceilometer-0"
Mar 10 19:10:51 crc kubenswrapper[4861]: I0310 19:10:51.339295 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5c22ba49-46f8-4ad7-9f90-b45e34943385-run-httpd\") pod \"ceilometer-0\" (UID: \"5c22ba49-46f8-4ad7-9f90-b45e34943385\") " pod="openstack/ceilometer-0"
Mar 10 19:10:51 crc kubenswrapper[4861]: I0310 19:10:51.345881 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5c22ba49-46f8-4ad7-9f90-b45e34943385-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"5c22ba49-46f8-4ad7-9f90-b45e34943385\") " pod="openstack/ceilometer-0"
Mar 10 19:10:51 crc kubenswrapper[4861]: I0310 19:10:51.346578 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5c22ba49-46f8-4ad7-9f90-b45e34943385-scripts\") pod \"ceilometer-0\" (UID: \"5c22ba49-46f8-4ad7-9f90-b45e34943385\") " pod="openstack/ceilometer-0"
Mar 10 19:10:51 crc kubenswrapper[4861]: I0310 19:10:51.347091 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5c22ba49-46f8-4ad7-9f90-b45e34943385-config-data\") pod \"ceilometer-0\" (UID: \"5c22ba49-46f8-4ad7-9f90-b45e34943385\") " pod="openstack/ceilometer-0"
Mar 10 19:10:51 crc kubenswrapper[4861]: I0310 19:10:51.355311 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5c22ba49-46f8-4ad7-9f90-b45e34943385-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"5c22ba49-46f8-4ad7-9f90-b45e34943385\") " pod="openstack/ceilometer-0"
Mar 10 19:10:51 crc kubenswrapper[4861]: I0310 19:10:51.356064 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5phdd\" (UniqueName: \"kubernetes.io/projected/5c22ba49-46f8-4ad7-9f90-b45e34943385-kube-api-access-5phdd\") pod \"ceilometer-0\" (UID: \"5c22ba49-46f8-4ad7-9f90-b45e34943385\") " pod="openstack/ceilometer-0"
Mar 10 19:10:51 crc kubenswrapper[4861]: I0310 19:10:51.445139 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Mar 10 19:10:51 crc kubenswrapper[4861]: I0310 19:10:51.878456 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Mar 10 19:10:52 crc kubenswrapper[4861]: I0310 19:10:52.059807 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5c22ba49-46f8-4ad7-9f90-b45e34943385","Type":"ContainerStarted","Data":"72b3e0082e8933f7d616c91569e3308593d3fe224a50421d87344c88c56424d8"}
Mar 10 19:10:52 crc kubenswrapper[4861]: I0310 19:10:52.063284 4861 generic.go:334] "Generic (PLEG): container finished" podID="1839f77d-af3b-46f9-87f9-3fb81e3daa90" containerID="23a622a31ff26325c70d4dd50250f161ca20a59ae12c3c4588e78f463eace263" exitCode=0
Mar 10 19:10:52 crc kubenswrapper[4861]: I0310 19:10:52.063393 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7b8fcc65cc-66b8w" event={"ID":"1839f77d-af3b-46f9-87f9-3fb81e3daa90","Type":"ContainerDied","Data":"23a622a31ff26325c70d4dd50250f161ca20a59ae12c3c4588e78f463eace263"}
Mar 10 19:10:52 crc kubenswrapper[4861]: I0310 19:10:52.063453 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7b8fcc65cc-66b8w" event={"ID":"1839f77d-af3b-46f9-87f9-3fb81e3daa90","Type":"ContainerStarted","Data":"a02e8a5252df2dd6178d8c517abf7cb8bdba078aa67279a1143b2748a4f97882"}
Mar 10 19:10:52 crc kubenswrapper[4861]: I0310 19:10:52.071427 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"fb49c733-a125-45b0-9e14-d3620e66970c","Type":"ContainerStarted","Data":"33f45d010cb9654f79952431900eca1d601fb130df4b60b0a7f4f14e1455d7f1"}
Mar 10 19:10:52 crc kubenswrapper[4861]: I0310 19:10:52.074982 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"7f628fa1-5ccd-4dc9-a402-631e290b5ace","Type":"ContainerStarted","Data":"34fed35e50063ff922d5b4b7d58def64c2e4172a860433af4e40ddbdcf5d33a0"}
Mar 10
19:10:52 crc kubenswrapper[4861]: I0310 19:10:52.728344 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Mar 10 19:10:52 crc kubenswrapper[4861]: I0310 19:10:52.972143 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a8b38f86-cca2-4d33-bcea-b93c80da6490" path="/var/lib/kubelet/pods/a8b38f86-cca2-4d33-bcea-b93c80da6490/volumes" Mar 10 19:10:53 crc kubenswrapper[4861]: I0310 19:10:53.086476 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"fb49c733-a125-45b0-9e14-d3620e66970c","Type":"ContainerStarted","Data":"3f2a8510143998b73be24a1438769ee92d406757027672985e8a3acb36c3c724"} Mar 10 19:10:53 crc kubenswrapper[4861]: I0310 19:10:53.089112 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"7f628fa1-5ccd-4dc9-a402-631e290b5ace","Type":"ContainerStarted","Data":"749ad4fe539cfc02ac20c8755d4686df858895fd3110c77b7e8db0e356f2569d"} Mar 10 19:10:53 crc kubenswrapper[4861]: I0310 19:10:53.089151 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"7f628fa1-5ccd-4dc9-a402-631e290b5ace","Type":"ContainerStarted","Data":"47d0a947b04f6a7a9261e2682f5da8632fb86a197b77c62251401f449214de79"} Mar 10 19:10:53 crc kubenswrapper[4861]: I0310 19:10:53.090303 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5c22ba49-46f8-4ad7-9f90-b45e34943385","Type":"ContainerStarted","Data":"f30d7f7a82df6bcce5711088c0cf1dd81e35b7439df6ba3a975a006e084aebc9"} Mar 10 19:10:53 crc kubenswrapper[4861]: I0310 19:10:53.091608 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7b8fcc65cc-66b8w" event={"ID":"1839f77d-af3b-46f9-87f9-3fb81e3daa90","Type":"ContainerStarted","Data":"859b5cde1af48351f056dde64d7287f25a5a5969897f2387b86f15568becd4b9"} Mar 10 19:10:53 crc kubenswrapper[4861]: I0310 19:10:53.091836 4861 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7b8fcc65cc-66b8w" Mar 10 19:10:53 crc kubenswrapper[4861]: I0310 19:10:53.114826 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7b8fcc65cc-66b8w" podStartSLOduration=3.114805858 podStartE2EDuration="3.114805858s" podCreationTimestamp="2026-03-10 19:10:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-10 19:10:53.10659647 +0000 UTC m=+1396.870032440" watchObservedRunningTime="2026-03-10 19:10:53.114805858 +0000 UTC m=+1396.878241818" Mar 10 19:10:53 crc kubenswrapper[4861]: I0310 19:10:53.284532 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-5d84cf8948-mg4jb" Mar 10 19:10:53 crc kubenswrapper[4861]: I0310 19:10:53.382477 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-5d84cf8948-mg4jb" Mar 10 19:10:53 crc kubenswrapper[4861]: I0310 19:10:53.487165 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-7d74d76988-ck77x"] Mar 10 19:10:53 crc kubenswrapper[4861]: I0310 19:10:53.487369 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-7d74d76988-ck77x" podUID="2ea426e3-70e7-4b8a-a04f-e899f032bca5" containerName="barbican-api-log" containerID="cri-o://10a0614cb3d2e9f3cc63de42d9e0267ef3020f28f77569843470867d1ee93a7e" gracePeriod=30 Mar 10 19:10:53 crc kubenswrapper[4861]: I0310 19:10:53.488006 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-7d74d76988-ck77x" podUID="2ea426e3-70e7-4b8a-a04f-e899f032bca5" containerName="barbican-api" containerID="cri-o://ad3ae82d4dbe854427fcf01d3a73ec902807e9cda4f1ff64c48711716b2cdf24" gracePeriod=30 Mar 10 19:10:53 crc kubenswrapper[4861]: I0310 19:10:53.506118 4861 
prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/barbican-api-7d74d76988-ck77x" podUID="2ea426e3-70e7-4b8a-a04f-e899f032bca5" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.167:9311/healthcheck\": EOF" Mar 10 19:10:54 crc kubenswrapper[4861]: I0310 19:10:54.101275 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5c22ba49-46f8-4ad7-9f90-b45e34943385","Type":"ContainerStarted","Data":"bcfd7a9775cb626dec759b4d8f9bb0679b6f6ee7c04d6de2cf109b38e5bfb271"} Mar 10 19:10:54 crc kubenswrapper[4861]: I0310 19:10:54.101540 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5c22ba49-46f8-4ad7-9f90-b45e34943385","Type":"ContainerStarted","Data":"b08c35add2fcde02a1b109d63548d7ca5cdcd7e1ad29bd50b3f3f1b4d347d7b8"} Mar 10 19:10:54 crc kubenswrapper[4861]: I0310 19:10:54.118839 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"fb49c733-a125-45b0-9e14-d3620e66970c","Type":"ContainerStarted","Data":"d74754d78321eddc9a49cd074303b4bd842d69d279bdcc85f05bac288ecab0e1"} Mar 10 19:10:54 crc kubenswrapper[4861]: I0310 19:10:54.127872 4861 generic.go:334] "Generic (PLEG): container finished" podID="2ea426e3-70e7-4b8a-a04f-e899f032bca5" containerID="10a0614cb3d2e9f3cc63de42d9e0267ef3020f28f77569843470867d1ee93a7e" exitCode=143 Mar 10 19:10:54 crc kubenswrapper[4861]: I0310 19:10:54.127981 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7d74d76988-ck77x" event={"ID":"2ea426e3-70e7-4b8a-a04f-e899f032bca5","Type":"ContainerDied","Data":"10a0614cb3d2e9f3cc63de42d9e0267ef3020f28f77569843470867d1ee93a7e"} Mar 10 19:10:54 crc kubenswrapper[4861]: I0310 19:10:54.128364 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="7f628fa1-5ccd-4dc9-a402-631e290b5ace" containerName="cinder-api-log" 
containerID="cri-o://47d0a947b04f6a7a9261e2682f5da8632fb86a197b77c62251401f449214de79" gracePeriod=30 Mar 10 19:10:54 crc kubenswrapper[4861]: I0310 19:10:54.128381 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="7f628fa1-5ccd-4dc9-a402-631e290b5ace" containerName="cinder-api" containerID="cri-o://749ad4fe539cfc02ac20c8755d4686df858895fd3110c77b7e8db0e356f2569d" gracePeriod=30 Mar 10 19:10:54 crc kubenswrapper[4861]: I0310 19:10:54.139740 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=3.178310563 podStartE2EDuration="4.139723334s" podCreationTimestamp="2026-03-10 19:10:50 +0000 UTC" firstStartedPulling="2026-03-10 19:10:51.24072942 +0000 UTC m=+1395.004165380" lastFinishedPulling="2026-03-10 19:10:52.202142191 +0000 UTC m=+1395.965578151" observedRunningTime="2026-03-10 19:10:54.135591759 +0000 UTC m=+1397.899027719" watchObservedRunningTime="2026-03-10 19:10:54.139723334 +0000 UTC m=+1397.903159294" Mar 10 19:10:54 crc kubenswrapper[4861]: I0310 19:10:54.156669 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=4.156653144 podStartE2EDuration="4.156653144s" podCreationTimestamp="2026-03-10 19:10:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-10 19:10:54.153183987 +0000 UTC m=+1397.916619957" watchObservedRunningTime="2026-03-10 19:10:54.156653144 +0000 UTC m=+1397.920089104" Mar 10 19:10:54 crc kubenswrapper[4861]: I0310 19:10:54.718437 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Mar 10 19:10:54 crc kubenswrapper[4861]: I0310 19:10:54.827494 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7f628fa1-5ccd-4dc9-a402-631e290b5ace-config-data-custom\") pod \"7f628fa1-5ccd-4dc9-a402-631e290b5ace\" (UID: \"7f628fa1-5ccd-4dc9-a402-631e290b5ace\") " Mar 10 19:10:54 crc kubenswrapper[4861]: I0310 19:10:54.827546 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ggxjz\" (UniqueName: \"kubernetes.io/projected/7f628fa1-5ccd-4dc9-a402-631e290b5ace-kube-api-access-ggxjz\") pod \"7f628fa1-5ccd-4dc9-a402-631e290b5ace\" (UID: \"7f628fa1-5ccd-4dc9-a402-631e290b5ace\") " Mar 10 19:10:54 crc kubenswrapper[4861]: I0310 19:10:54.827584 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/7f628fa1-5ccd-4dc9-a402-631e290b5ace-etc-machine-id\") pod \"7f628fa1-5ccd-4dc9-a402-631e290b5ace\" (UID: \"7f628fa1-5ccd-4dc9-a402-631e290b5ace\") " Mar 10 19:10:54 crc kubenswrapper[4861]: I0310 19:10:54.827660 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7f628fa1-5ccd-4dc9-a402-631e290b5ace-config-data\") pod \"7f628fa1-5ccd-4dc9-a402-631e290b5ace\" (UID: \"7f628fa1-5ccd-4dc9-a402-631e290b5ace\") " Mar 10 19:10:54 crc kubenswrapper[4861]: I0310 19:10:54.827679 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7f628fa1-5ccd-4dc9-a402-631e290b5ace-logs\") pod \"7f628fa1-5ccd-4dc9-a402-631e290b5ace\" (UID: \"7f628fa1-5ccd-4dc9-a402-631e290b5ace\") " Mar 10 19:10:54 crc kubenswrapper[4861]: I0310 19:10:54.827674 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/host-path/7f628fa1-5ccd-4dc9-a402-631e290b5ace-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "7f628fa1-5ccd-4dc9-a402-631e290b5ace" (UID: "7f628fa1-5ccd-4dc9-a402-631e290b5ace"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 10 19:10:54 crc kubenswrapper[4861]: I0310 19:10:54.827722 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7f628fa1-5ccd-4dc9-a402-631e290b5ace-combined-ca-bundle\") pod \"7f628fa1-5ccd-4dc9-a402-631e290b5ace\" (UID: \"7f628fa1-5ccd-4dc9-a402-631e290b5ace\") " Mar 10 19:10:54 crc kubenswrapper[4861]: I0310 19:10:54.827882 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7f628fa1-5ccd-4dc9-a402-631e290b5ace-scripts\") pod \"7f628fa1-5ccd-4dc9-a402-631e290b5ace\" (UID: \"7f628fa1-5ccd-4dc9-a402-631e290b5ace\") " Mar 10 19:10:54 crc kubenswrapper[4861]: I0310 19:10:54.828044 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7f628fa1-5ccd-4dc9-a402-631e290b5ace-logs" (OuterVolumeSpecName: "logs") pod "7f628fa1-5ccd-4dc9-a402-631e290b5ace" (UID: "7f628fa1-5ccd-4dc9-a402-631e290b5ace"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 10 19:10:54 crc kubenswrapper[4861]: I0310 19:10:54.828733 4861 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/7f628fa1-5ccd-4dc9-a402-631e290b5ace-etc-machine-id\") on node \"crc\" DevicePath \"\"" Mar 10 19:10:54 crc kubenswrapper[4861]: I0310 19:10:54.828751 4861 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7f628fa1-5ccd-4dc9-a402-631e290b5ace-logs\") on node \"crc\" DevicePath \"\"" Mar 10 19:10:54 crc kubenswrapper[4861]: I0310 19:10:54.834883 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7f628fa1-5ccd-4dc9-a402-631e290b5ace-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "7f628fa1-5ccd-4dc9-a402-631e290b5ace" (UID: "7f628fa1-5ccd-4dc9-a402-631e290b5ace"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 10 19:10:54 crc kubenswrapper[4861]: I0310 19:10:54.835106 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7f628fa1-5ccd-4dc9-a402-631e290b5ace-scripts" (OuterVolumeSpecName: "scripts") pod "7f628fa1-5ccd-4dc9-a402-631e290b5ace" (UID: "7f628fa1-5ccd-4dc9-a402-631e290b5ace"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 10 19:10:54 crc kubenswrapper[4861]: I0310 19:10:54.836816 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7f628fa1-5ccd-4dc9-a402-631e290b5ace-kube-api-access-ggxjz" (OuterVolumeSpecName: "kube-api-access-ggxjz") pod "7f628fa1-5ccd-4dc9-a402-631e290b5ace" (UID: "7f628fa1-5ccd-4dc9-a402-631e290b5ace"). InnerVolumeSpecName "kube-api-access-ggxjz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 10 19:10:54 crc kubenswrapper[4861]: I0310 19:10:54.856845 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7f628fa1-5ccd-4dc9-a402-631e290b5ace-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7f628fa1-5ccd-4dc9-a402-631e290b5ace" (UID: "7f628fa1-5ccd-4dc9-a402-631e290b5ace"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 10 19:10:54 crc kubenswrapper[4861]: I0310 19:10:54.894771 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7f628fa1-5ccd-4dc9-a402-631e290b5ace-config-data" (OuterVolumeSpecName: "config-data") pod "7f628fa1-5ccd-4dc9-a402-631e290b5ace" (UID: "7f628fa1-5ccd-4dc9-a402-631e290b5ace"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 10 19:10:54 crc kubenswrapper[4861]: I0310 19:10:54.931230 4861 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7f628fa1-5ccd-4dc9-a402-631e290b5ace-config-data-custom\") on node \"crc\" DevicePath \"\"" Mar 10 19:10:54 crc kubenswrapper[4861]: I0310 19:10:54.931285 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ggxjz\" (UniqueName: \"kubernetes.io/projected/7f628fa1-5ccd-4dc9-a402-631e290b5ace-kube-api-access-ggxjz\") on node \"crc\" DevicePath \"\"" Mar 10 19:10:54 crc kubenswrapper[4861]: I0310 19:10:54.931306 4861 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7f628fa1-5ccd-4dc9-a402-631e290b5ace-config-data\") on node \"crc\" DevicePath \"\"" Mar 10 19:10:54 crc kubenswrapper[4861]: I0310 19:10:54.931323 4861 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7f628fa1-5ccd-4dc9-a402-631e290b5ace-combined-ca-bundle\") on node 
\"crc\" DevicePath \"\"" Mar 10 19:10:54 crc kubenswrapper[4861]: I0310 19:10:54.931341 4861 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7f628fa1-5ccd-4dc9-a402-631e290b5ace-scripts\") on node \"crc\" DevicePath \"\"" Mar 10 19:10:55 crc kubenswrapper[4861]: I0310 19:10:55.141694 4861 generic.go:334] "Generic (PLEG): container finished" podID="7f628fa1-5ccd-4dc9-a402-631e290b5ace" containerID="749ad4fe539cfc02ac20c8755d4686df858895fd3110c77b7e8db0e356f2569d" exitCode=0 Mar 10 19:10:55 crc kubenswrapper[4861]: I0310 19:10:55.141982 4861 generic.go:334] "Generic (PLEG): container finished" podID="7f628fa1-5ccd-4dc9-a402-631e290b5ace" containerID="47d0a947b04f6a7a9261e2682f5da8632fb86a197b77c62251401f449214de79" exitCode=143 Mar 10 19:10:55 crc kubenswrapper[4861]: I0310 19:10:55.141799 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Mar 10 19:10:55 crc kubenswrapper[4861]: I0310 19:10:55.141852 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"7f628fa1-5ccd-4dc9-a402-631e290b5ace","Type":"ContainerDied","Data":"749ad4fe539cfc02ac20c8755d4686df858895fd3110c77b7e8db0e356f2569d"} Mar 10 19:10:55 crc kubenswrapper[4861]: I0310 19:10:55.142169 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"7f628fa1-5ccd-4dc9-a402-631e290b5ace","Type":"ContainerDied","Data":"47d0a947b04f6a7a9261e2682f5da8632fb86a197b77c62251401f449214de79"} Mar 10 19:10:55 crc kubenswrapper[4861]: I0310 19:10:55.142210 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"7f628fa1-5ccd-4dc9-a402-631e290b5ace","Type":"ContainerDied","Data":"34fed35e50063ff922d5b4b7d58def64c2e4172a860433af4e40ddbdcf5d33a0"} Mar 10 19:10:55 crc kubenswrapper[4861]: I0310 19:10:55.142245 4861 scope.go:117] "RemoveContainer" 
containerID="749ad4fe539cfc02ac20c8755d4686df858895fd3110c77b7e8db0e356f2569d" Mar 10 19:10:55 crc kubenswrapper[4861]: I0310 19:10:55.175040 4861 scope.go:117] "RemoveContainer" containerID="47d0a947b04f6a7a9261e2682f5da8632fb86a197b77c62251401f449214de79" Mar 10 19:10:55 crc kubenswrapper[4861]: I0310 19:10:55.183872 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Mar 10 19:10:55 crc kubenswrapper[4861]: I0310 19:10:55.201607 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Mar 10 19:10:55 crc kubenswrapper[4861]: I0310 19:10:55.210673 4861 scope.go:117] "RemoveContainer" containerID="749ad4fe539cfc02ac20c8755d4686df858895fd3110c77b7e8db0e356f2569d" Mar 10 19:10:55 crc kubenswrapper[4861]: E0310 19:10:55.211210 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"749ad4fe539cfc02ac20c8755d4686df858895fd3110c77b7e8db0e356f2569d\": container with ID starting with 749ad4fe539cfc02ac20c8755d4686df858895fd3110c77b7e8db0e356f2569d not found: ID does not exist" containerID="749ad4fe539cfc02ac20c8755d4686df858895fd3110c77b7e8db0e356f2569d" Mar 10 19:10:55 crc kubenswrapper[4861]: I0310 19:10:55.211244 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"749ad4fe539cfc02ac20c8755d4686df858895fd3110c77b7e8db0e356f2569d"} err="failed to get container status \"749ad4fe539cfc02ac20c8755d4686df858895fd3110c77b7e8db0e356f2569d\": rpc error: code = NotFound desc = could not find container \"749ad4fe539cfc02ac20c8755d4686df858895fd3110c77b7e8db0e356f2569d\": container with ID starting with 749ad4fe539cfc02ac20c8755d4686df858895fd3110c77b7e8db0e356f2569d not found: ID does not exist" Mar 10 19:10:55 crc kubenswrapper[4861]: I0310 19:10:55.211265 4861 scope.go:117] "RemoveContainer" containerID="47d0a947b04f6a7a9261e2682f5da8632fb86a197b77c62251401f449214de79" Mar 10 19:10:55 crc 
kubenswrapper[4861]: I0310 19:10:55.212427 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Mar 10 19:10:55 crc kubenswrapper[4861]: E0310 19:10:55.213621 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"47d0a947b04f6a7a9261e2682f5da8632fb86a197b77c62251401f449214de79\": container with ID starting with 47d0a947b04f6a7a9261e2682f5da8632fb86a197b77c62251401f449214de79 not found: ID does not exist" containerID="47d0a947b04f6a7a9261e2682f5da8632fb86a197b77c62251401f449214de79" Mar 10 19:10:55 crc kubenswrapper[4861]: I0310 19:10:55.213862 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"47d0a947b04f6a7a9261e2682f5da8632fb86a197b77c62251401f449214de79"} err="failed to get container status \"47d0a947b04f6a7a9261e2682f5da8632fb86a197b77c62251401f449214de79\": rpc error: code = NotFound desc = could not find container \"47d0a947b04f6a7a9261e2682f5da8632fb86a197b77c62251401f449214de79\": container with ID starting with 47d0a947b04f6a7a9261e2682f5da8632fb86a197b77c62251401f449214de79 not found: ID does not exist" Mar 10 19:10:55 crc kubenswrapper[4861]: I0310 19:10:55.213935 4861 scope.go:117] "RemoveContainer" containerID="749ad4fe539cfc02ac20c8755d4686df858895fd3110c77b7e8db0e356f2569d" Mar 10 19:10:55 crc kubenswrapper[4861]: I0310 19:10:55.215044 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"749ad4fe539cfc02ac20c8755d4686df858895fd3110c77b7e8db0e356f2569d"} err="failed to get container status \"749ad4fe539cfc02ac20c8755d4686df858895fd3110c77b7e8db0e356f2569d\": rpc error: code = NotFound desc = could not find container \"749ad4fe539cfc02ac20c8755d4686df858895fd3110c77b7e8db0e356f2569d\": container with ID starting with 749ad4fe539cfc02ac20c8755d4686df858895fd3110c77b7e8db0e356f2569d not found: ID does not exist" Mar 10 19:10:55 crc kubenswrapper[4861]: 
I0310 19:10:55.215090 4861 scope.go:117] "RemoveContainer" containerID="47d0a947b04f6a7a9261e2682f5da8632fb86a197b77c62251401f449214de79" Mar 10 19:10:55 crc kubenswrapper[4861]: I0310 19:10:55.215436 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"47d0a947b04f6a7a9261e2682f5da8632fb86a197b77c62251401f449214de79"} err="failed to get container status \"47d0a947b04f6a7a9261e2682f5da8632fb86a197b77c62251401f449214de79\": rpc error: code = NotFound desc = could not find container \"47d0a947b04f6a7a9261e2682f5da8632fb86a197b77c62251401f449214de79\": container with ID starting with 47d0a947b04f6a7a9261e2682f5da8632fb86a197b77c62251401f449214de79 not found: ID does not exist" Mar 10 19:10:55 crc kubenswrapper[4861]: E0310 19:10:55.216500 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7f628fa1-5ccd-4dc9-a402-631e290b5ace" containerName="cinder-api" Mar 10 19:10:55 crc kubenswrapper[4861]: I0310 19:10:55.216535 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="7f628fa1-5ccd-4dc9-a402-631e290b5ace" containerName="cinder-api" Mar 10 19:10:55 crc kubenswrapper[4861]: E0310 19:10:55.216622 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7f628fa1-5ccd-4dc9-a402-631e290b5ace" containerName="cinder-api-log" Mar 10 19:10:55 crc kubenswrapper[4861]: I0310 19:10:55.216677 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="7f628fa1-5ccd-4dc9-a402-631e290b5ace" containerName="cinder-api-log" Mar 10 19:10:55 crc kubenswrapper[4861]: I0310 19:10:55.217085 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="7f628fa1-5ccd-4dc9-a402-631e290b5ace" containerName="cinder-api" Mar 10 19:10:55 crc kubenswrapper[4861]: I0310 19:10:55.217136 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="7f628fa1-5ccd-4dc9-a402-631e290b5ace" containerName="cinder-api-log" Mar 10 19:10:55 crc kubenswrapper[4861]: I0310 19:10:55.218956 4861 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Mar 10 19:10:55 crc kubenswrapper[4861]: I0310 19:10:55.221658 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Mar 10 19:10:55 crc kubenswrapper[4861]: I0310 19:10:55.222095 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc" Mar 10 19:10:55 crc kubenswrapper[4861]: I0310 19:10:55.222644 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc" Mar 10 19:10:55 crc kubenswrapper[4861]: I0310 19:10:55.225069 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Mar 10 19:10:55 crc kubenswrapper[4861]: I0310 19:10:55.343632 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8d86917a-2e89-4e29-a1f2-673b0afbf27a-public-tls-certs\") pod \"cinder-api-0\" (UID: \"8d86917a-2e89-4e29-a1f2-673b0afbf27a\") " pod="openstack/cinder-api-0" Mar 10 19:10:55 crc kubenswrapper[4861]: I0310 19:10:55.343698 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6gdgf\" (UniqueName: \"kubernetes.io/projected/8d86917a-2e89-4e29-a1f2-673b0afbf27a-kube-api-access-6gdgf\") pod \"cinder-api-0\" (UID: \"8d86917a-2e89-4e29-a1f2-673b0afbf27a\") " pod="openstack/cinder-api-0" Mar 10 19:10:55 crc kubenswrapper[4861]: I0310 19:10:55.343738 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8d86917a-2e89-4e29-a1f2-673b0afbf27a-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"8d86917a-2e89-4e29-a1f2-673b0afbf27a\") " pod="openstack/cinder-api-0" Mar 10 19:10:55 crc kubenswrapper[4861]: I0310 19:10:55.343904 4861 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/8d86917a-2e89-4e29-a1f2-673b0afbf27a-etc-machine-id\") pod \"cinder-api-0\" (UID: \"8d86917a-2e89-4e29-a1f2-673b0afbf27a\") " pod="openstack/cinder-api-0" Mar 10 19:10:55 crc kubenswrapper[4861]: I0310 19:10:55.343935 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8d86917a-2e89-4e29-a1f2-673b0afbf27a-logs\") pod \"cinder-api-0\" (UID: \"8d86917a-2e89-4e29-a1f2-673b0afbf27a\") " pod="openstack/cinder-api-0" Mar 10 19:10:55 crc kubenswrapper[4861]: I0310 19:10:55.344015 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8d86917a-2e89-4e29-a1f2-673b0afbf27a-scripts\") pod \"cinder-api-0\" (UID: \"8d86917a-2e89-4e29-a1f2-673b0afbf27a\") " pod="openstack/cinder-api-0" Mar 10 19:10:55 crc kubenswrapper[4861]: I0310 19:10:55.344189 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8d86917a-2e89-4e29-a1f2-673b0afbf27a-config-data\") pod \"cinder-api-0\" (UID: \"8d86917a-2e89-4e29-a1f2-673b0afbf27a\") " pod="openstack/cinder-api-0" Mar 10 19:10:55 crc kubenswrapper[4861]: I0310 19:10:55.344270 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8d86917a-2e89-4e29-a1f2-673b0afbf27a-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"8d86917a-2e89-4e29-a1f2-673b0afbf27a\") " pod="openstack/cinder-api-0" Mar 10 19:10:55 crc kubenswrapper[4861]: I0310 19:10:55.344465 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/8d86917a-2e89-4e29-a1f2-673b0afbf27a-config-data-custom\") pod \"cinder-api-0\" (UID: \"8d86917a-2e89-4e29-a1f2-673b0afbf27a\") " pod="openstack/cinder-api-0" Mar 10 19:10:55 crc kubenswrapper[4861]: I0310 19:10:55.446095 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8d86917a-2e89-4e29-a1f2-673b0afbf27a-scripts\") pod \"cinder-api-0\" (UID: \"8d86917a-2e89-4e29-a1f2-673b0afbf27a\") " pod="openstack/cinder-api-0" Mar 10 19:10:55 crc kubenswrapper[4861]: I0310 19:10:55.446224 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8d86917a-2e89-4e29-a1f2-673b0afbf27a-config-data\") pod \"cinder-api-0\" (UID: \"8d86917a-2e89-4e29-a1f2-673b0afbf27a\") " pod="openstack/cinder-api-0" Mar 10 19:10:55 crc kubenswrapper[4861]: I0310 19:10:55.446293 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8d86917a-2e89-4e29-a1f2-673b0afbf27a-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"8d86917a-2e89-4e29-a1f2-673b0afbf27a\") " pod="openstack/cinder-api-0" Mar 10 19:10:55 crc kubenswrapper[4861]: I0310 19:10:55.446381 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8d86917a-2e89-4e29-a1f2-673b0afbf27a-config-data-custom\") pod \"cinder-api-0\" (UID: \"8d86917a-2e89-4e29-a1f2-673b0afbf27a\") " pod="openstack/cinder-api-0" Mar 10 19:10:55 crc kubenswrapper[4861]: I0310 19:10:55.446482 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8d86917a-2e89-4e29-a1f2-673b0afbf27a-public-tls-certs\") pod \"cinder-api-0\" (UID: \"8d86917a-2e89-4e29-a1f2-673b0afbf27a\") " pod="openstack/cinder-api-0" Mar 10 19:10:55 crc kubenswrapper[4861]: I0310 
19:10:55.446521 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6gdgf\" (UniqueName: \"kubernetes.io/projected/8d86917a-2e89-4e29-a1f2-673b0afbf27a-kube-api-access-6gdgf\") pod \"cinder-api-0\" (UID: \"8d86917a-2e89-4e29-a1f2-673b0afbf27a\") " pod="openstack/cinder-api-0" Mar 10 19:10:55 crc kubenswrapper[4861]: I0310 19:10:55.446557 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8d86917a-2e89-4e29-a1f2-673b0afbf27a-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"8d86917a-2e89-4e29-a1f2-673b0afbf27a\") " pod="openstack/cinder-api-0" Mar 10 19:10:55 crc kubenswrapper[4861]: I0310 19:10:55.446629 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/8d86917a-2e89-4e29-a1f2-673b0afbf27a-etc-machine-id\") pod \"cinder-api-0\" (UID: \"8d86917a-2e89-4e29-a1f2-673b0afbf27a\") " pod="openstack/cinder-api-0" Mar 10 19:10:55 crc kubenswrapper[4861]: I0310 19:10:55.446658 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8d86917a-2e89-4e29-a1f2-673b0afbf27a-logs\") pod \"cinder-api-0\" (UID: \"8d86917a-2e89-4e29-a1f2-673b0afbf27a\") " pod="openstack/cinder-api-0" Mar 10 19:10:55 crc kubenswrapper[4861]: I0310 19:10:55.447841 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8d86917a-2e89-4e29-a1f2-673b0afbf27a-logs\") pod \"cinder-api-0\" (UID: \"8d86917a-2e89-4e29-a1f2-673b0afbf27a\") " pod="openstack/cinder-api-0" Mar 10 19:10:55 crc kubenswrapper[4861]: I0310 19:10:55.448021 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/8d86917a-2e89-4e29-a1f2-673b0afbf27a-etc-machine-id\") pod \"cinder-api-0\" (UID: 
\"8d86917a-2e89-4e29-a1f2-673b0afbf27a\") " pod="openstack/cinder-api-0" Mar 10 19:10:55 crc kubenswrapper[4861]: I0310 19:10:55.451357 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8d86917a-2e89-4e29-a1f2-673b0afbf27a-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"8d86917a-2e89-4e29-a1f2-673b0afbf27a\") " pod="openstack/cinder-api-0" Mar 10 19:10:55 crc kubenswrapper[4861]: I0310 19:10:55.464461 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8d86917a-2e89-4e29-a1f2-673b0afbf27a-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"8d86917a-2e89-4e29-a1f2-673b0afbf27a\") " pod="openstack/cinder-api-0" Mar 10 19:10:55 crc kubenswrapper[4861]: I0310 19:10:55.464586 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8d86917a-2e89-4e29-a1f2-673b0afbf27a-public-tls-certs\") pod \"cinder-api-0\" (UID: \"8d86917a-2e89-4e29-a1f2-673b0afbf27a\") " pod="openstack/cinder-api-0" Mar 10 19:10:55 crc kubenswrapper[4861]: I0310 19:10:55.464668 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8d86917a-2e89-4e29-a1f2-673b0afbf27a-scripts\") pod \"cinder-api-0\" (UID: \"8d86917a-2e89-4e29-a1f2-673b0afbf27a\") " pod="openstack/cinder-api-0" Mar 10 19:10:55 crc kubenswrapper[4861]: I0310 19:10:55.465115 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8d86917a-2e89-4e29-a1f2-673b0afbf27a-config-data-custom\") pod \"cinder-api-0\" (UID: \"8d86917a-2e89-4e29-a1f2-673b0afbf27a\") " pod="openstack/cinder-api-0" Mar 10 19:10:55 crc kubenswrapper[4861]: I0310 19:10:55.465853 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/8d86917a-2e89-4e29-a1f2-673b0afbf27a-config-data\") pod \"cinder-api-0\" (UID: \"8d86917a-2e89-4e29-a1f2-673b0afbf27a\") " pod="openstack/cinder-api-0" Mar 10 19:10:55 crc kubenswrapper[4861]: I0310 19:10:55.472775 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6gdgf\" (UniqueName: \"kubernetes.io/projected/8d86917a-2e89-4e29-a1f2-673b0afbf27a-kube-api-access-6gdgf\") pod \"cinder-api-0\" (UID: \"8d86917a-2e89-4e29-a1f2-673b0afbf27a\") " pod="openstack/cinder-api-0" Mar 10 19:10:55 crc kubenswrapper[4861]: I0310 19:10:55.536826 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Mar 10 19:10:55 crc kubenswrapper[4861]: I0310 19:10:55.643475 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Mar 10 19:10:56 crc kubenswrapper[4861]: I0310 19:10:56.086576 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Mar 10 19:10:56 crc kubenswrapper[4861]: I0310 19:10:56.154601 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"8d86917a-2e89-4e29-a1f2-673b0afbf27a","Type":"ContainerStarted","Data":"d881139b2fd2ac6b219cce8335c0620d09b01dbc221ba8d72cb33b9d343c1ec7"} Mar 10 19:10:56 crc kubenswrapper[4861]: I0310 19:10:56.982928 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7f628fa1-5ccd-4dc9-a402-631e290b5ace" path="/var/lib/kubelet/pods/7f628fa1-5ccd-4dc9-a402-631e290b5ace/volumes" Mar 10 19:10:57 crc kubenswrapper[4861]: I0310 19:10:57.187090 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5c22ba49-46f8-4ad7-9f90-b45e34943385","Type":"ContainerStarted","Data":"0d8af9732d717f3b2f2740ecff2b4055c69ff220983492e83e754791ee8719cb"} Mar 10 19:10:57 crc kubenswrapper[4861]: I0310 19:10:57.187185 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="" pod="openstack/ceilometer-0" Mar 10 19:10:57 crc kubenswrapper[4861]: I0310 19:10:57.190034 4861 generic.go:334] "Generic (PLEG): container finished" podID="2ea426e3-70e7-4b8a-a04f-e899f032bca5" containerID="ad3ae82d4dbe854427fcf01d3a73ec902807e9cda4f1ff64c48711716b2cdf24" exitCode=0 Mar 10 19:10:57 crc kubenswrapper[4861]: I0310 19:10:57.190086 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7d74d76988-ck77x" event={"ID":"2ea426e3-70e7-4b8a-a04f-e899f032bca5","Type":"ContainerDied","Data":"ad3ae82d4dbe854427fcf01d3a73ec902807e9cda4f1ff64c48711716b2cdf24"} Mar 10 19:10:57 crc kubenswrapper[4861]: I0310 19:10:57.193919 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"8d86917a-2e89-4e29-a1f2-673b0afbf27a","Type":"ContainerStarted","Data":"0d0a47976272498f87196b55501f579fb4bfa276b2027267e2ac56b2aa3530a8"} Mar 10 19:10:57 crc kubenswrapper[4861]: I0310 19:10:57.222345 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.233306571 podStartE2EDuration="6.222316238s" podCreationTimestamp="2026-03-10 19:10:51 +0000 UTC" firstStartedPulling="2026-03-10 19:10:51.870873078 +0000 UTC m=+1395.634309038" lastFinishedPulling="2026-03-10 19:10:55.859882745 +0000 UTC m=+1399.623318705" observedRunningTime="2026-03-10 19:10:57.205061579 +0000 UTC m=+1400.968497569" watchObservedRunningTime="2026-03-10 19:10:57.222316238 +0000 UTC m=+1400.985752238" Mar 10 19:10:57 crc kubenswrapper[4861]: I0310 19:10:57.246773 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-7d74d76988-ck77x" Mar 10 19:10:57 crc kubenswrapper[4861]: I0310 19:10:57.382025 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2ea426e3-70e7-4b8a-a04f-e899f032bca5-logs\") pod \"2ea426e3-70e7-4b8a-a04f-e899f032bca5\" (UID: \"2ea426e3-70e7-4b8a-a04f-e899f032bca5\") " Mar 10 19:10:57 crc kubenswrapper[4861]: I0310 19:10:57.382334 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2ea426e3-70e7-4b8a-a04f-e899f032bca5-config-data-custom\") pod \"2ea426e3-70e7-4b8a-a04f-e899f032bca5\" (UID: \"2ea426e3-70e7-4b8a-a04f-e899f032bca5\") " Mar 10 19:10:57 crc kubenswrapper[4861]: I0310 19:10:57.382374 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ffwzz\" (UniqueName: \"kubernetes.io/projected/2ea426e3-70e7-4b8a-a04f-e899f032bca5-kube-api-access-ffwzz\") pod \"2ea426e3-70e7-4b8a-a04f-e899f032bca5\" (UID: \"2ea426e3-70e7-4b8a-a04f-e899f032bca5\") " Mar 10 19:10:57 crc kubenswrapper[4861]: I0310 19:10:57.382375 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2ea426e3-70e7-4b8a-a04f-e899f032bca5-logs" (OuterVolumeSpecName: "logs") pod "2ea426e3-70e7-4b8a-a04f-e899f032bca5" (UID: "2ea426e3-70e7-4b8a-a04f-e899f032bca5"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 10 19:10:57 crc kubenswrapper[4861]: I0310 19:10:57.382410 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ea426e3-70e7-4b8a-a04f-e899f032bca5-combined-ca-bundle\") pod \"2ea426e3-70e7-4b8a-a04f-e899f032bca5\" (UID: \"2ea426e3-70e7-4b8a-a04f-e899f032bca5\") " Mar 10 19:10:57 crc kubenswrapper[4861]: I0310 19:10:57.382486 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2ea426e3-70e7-4b8a-a04f-e899f032bca5-config-data\") pod \"2ea426e3-70e7-4b8a-a04f-e899f032bca5\" (UID: \"2ea426e3-70e7-4b8a-a04f-e899f032bca5\") " Mar 10 19:10:57 crc kubenswrapper[4861]: I0310 19:10:57.383078 4861 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2ea426e3-70e7-4b8a-a04f-e899f032bca5-logs\") on node \"crc\" DevicePath \"\"" Mar 10 19:10:57 crc kubenswrapper[4861]: I0310 19:10:57.387978 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2ea426e3-70e7-4b8a-a04f-e899f032bca5-kube-api-access-ffwzz" (OuterVolumeSpecName: "kube-api-access-ffwzz") pod "2ea426e3-70e7-4b8a-a04f-e899f032bca5" (UID: "2ea426e3-70e7-4b8a-a04f-e899f032bca5"). InnerVolumeSpecName "kube-api-access-ffwzz". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 10 19:10:57 crc kubenswrapper[4861]: I0310 19:10:57.390309 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2ea426e3-70e7-4b8a-a04f-e899f032bca5-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "2ea426e3-70e7-4b8a-a04f-e899f032bca5" (UID: "2ea426e3-70e7-4b8a-a04f-e899f032bca5"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 10 19:10:57 crc kubenswrapper[4861]: I0310 19:10:57.424791 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2ea426e3-70e7-4b8a-a04f-e899f032bca5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2ea426e3-70e7-4b8a-a04f-e899f032bca5" (UID: "2ea426e3-70e7-4b8a-a04f-e899f032bca5"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 10 19:10:57 crc kubenswrapper[4861]: I0310 19:10:57.466618 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2ea426e3-70e7-4b8a-a04f-e899f032bca5-config-data" (OuterVolumeSpecName: "config-data") pod "2ea426e3-70e7-4b8a-a04f-e899f032bca5" (UID: "2ea426e3-70e7-4b8a-a04f-e899f032bca5"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 10 19:10:57 crc kubenswrapper[4861]: I0310 19:10:57.485373 4861 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2ea426e3-70e7-4b8a-a04f-e899f032bca5-config-data-custom\") on node \"crc\" DevicePath \"\"" Mar 10 19:10:57 crc kubenswrapper[4861]: I0310 19:10:57.485411 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ffwzz\" (UniqueName: \"kubernetes.io/projected/2ea426e3-70e7-4b8a-a04f-e899f032bca5-kube-api-access-ffwzz\") on node \"crc\" DevicePath \"\"" Mar 10 19:10:57 crc kubenswrapper[4861]: I0310 19:10:57.485426 4861 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ea426e3-70e7-4b8a-a04f-e899f032bca5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 10 19:10:57 crc kubenswrapper[4861]: I0310 19:10:57.485439 4861 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2ea426e3-70e7-4b8a-a04f-e899f032bca5-config-data\") on node \"crc\" 
DevicePath \"\"" Mar 10 19:10:58 crc kubenswrapper[4861]: I0310 19:10:58.211118 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"8d86917a-2e89-4e29-a1f2-673b0afbf27a","Type":"ContainerStarted","Data":"74e14b5c097a70640577953cf9b0014e196f222e4e500727664862cc2d4d4729"} Mar 10 19:10:58 crc kubenswrapper[4861]: I0310 19:10:58.211655 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Mar 10 19:10:58 crc kubenswrapper[4861]: I0310 19:10:58.220968 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-7d74d76988-ck77x" Mar 10 19:10:58 crc kubenswrapper[4861]: I0310 19:10:58.221947 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7d74d76988-ck77x" event={"ID":"2ea426e3-70e7-4b8a-a04f-e899f032bca5","Type":"ContainerDied","Data":"e1f2923967f53ea2adfa0fc7c05053bee8c0e896bd71cbe994c4052a146c4e68"} Mar 10 19:10:58 crc kubenswrapper[4861]: I0310 19:10:58.222019 4861 scope.go:117] "RemoveContainer" containerID="ad3ae82d4dbe854427fcf01d3a73ec902807e9cda4f1ff64c48711716b2cdf24" Mar 10 19:10:58 crc kubenswrapper[4861]: I0310 19:10:58.233441 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=3.23342399 podStartE2EDuration="3.23342399s" podCreationTimestamp="2026-03-10 19:10:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-10 19:10:58.232520395 +0000 UTC m=+1401.995956415" watchObservedRunningTime="2026-03-10 19:10:58.23342399 +0000 UTC m=+1401.996859980" Mar 10 19:10:58 crc kubenswrapper[4861]: I0310 19:10:58.292229 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-7d74d76988-ck77x"] Mar 10 19:10:58 crc kubenswrapper[4861]: I0310 19:10:58.303409 4861 scope.go:117] "RemoveContainer" 
containerID="10a0614cb3d2e9f3cc63de42d9e0267ef3020f28f77569843470867d1ee93a7e" Mar 10 19:10:58 crc kubenswrapper[4861]: I0310 19:10:58.307250 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-7d74d76988-ck77x"] Mar 10 19:10:58 crc kubenswrapper[4861]: I0310 19:10:58.944874 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-7b4d7bd5c6-xts5s" Mar 10 19:10:58 crc kubenswrapper[4861]: I0310 19:10:58.985948 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2ea426e3-70e7-4b8a-a04f-e899f032bca5" path="/var/lib/kubelet/pods/2ea426e3-70e7-4b8a-a04f-e899f032bca5/volumes" Mar 10 19:10:59 crc kubenswrapper[4861]: I0310 19:10:59.253554 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-84bb44bb99-8g2vb"] Mar 10 19:10:59 crc kubenswrapper[4861]: I0310 19:10:59.254308 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-84bb44bb99-8g2vb" podUID="06936988-eb27-45c1-882e-890cca4cddfe" containerName="neutron-api" containerID="cri-o://c1c8d15feff5e5cdb5fefa2d7b2105539922ecb219a45be426bb5a8c8841edb8" gracePeriod=30 Mar 10 19:10:59 crc kubenswrapper[4861]: I0310 19:10:59.254397 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-84bb44bb99-8g2vb" podUID="06936988-eb27-45c1-882e-890cca4cddfe" containerName="neutron-httpd" containerID="cri-o://a3b089250c795f2618f460b04f1cfb948fb4c25d02b633b7c39764754740cb3c" gracePeriod=30 Mar 10 19:10:59 crc kubenswrapper[4861]: I0310 19:10:59.265967 4861 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/neutron-84bb44bb99-8g2vb" podUID="06936988-eb27-45c1-882e-890cca4cddfe" containerName="neutron-httpd" probeResult="failure" output="Get \"https://10.217.0.157:9696/\": EOF" Mar 10 19:10:59 crc kubenswrapper[4861]: I0310 19:10:59.290828 4861 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/neutron-69987f456f-bjbjk"] Mar 10 19:10:59 crc kubenswrapper[4861]: E0310 19:10:59.291228 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2ea426e3-70e7-4b8a-a04f-e899f032bca5" containerName="barbican-api" Mar 10 19:10:59 crc kubenswrapper[4861]: I0310 19:10:59.291239 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="2ea426e3-70e7-4b8a-a04f-e899f032bca5" containerName="barbican-api" Mar 10 19:10:59 crc kubenswrapper[4861]: E0310 19:10:59.291278 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2ea426e3-70e7-4b8a-a04f-e899f032bca5" containerName="barbican-api-log" Mar 10 19:10:59 crc kubenswrapper[4861]: I0310 19:10:59.291284 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="2ea426e3-70e7-4b8a-a04f-e899f032bca5" containerName="barbican-api-log" Mar 10 19:10:59 crc kubenswrapper[4861]: I0310 19:10:59.291444 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="2ea426e3-70e7-4b8a-a04f-e899f032bca5" containerName="barbican-api" Mar 10 19:10:59 crc kubenswrapper[4861]: I0310 19:10:59.291458 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="2ea426e3-70e7-4b8a-a04f-e899f032bca5" containerName="barbican-api-log" Mar 10 19:10:59 crc kubenswrapper[4861]: I0310 19:10:59.292381 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-69987f456f-bjbjk" Mar 10 19:10:59 crc kubenswrapper[4861]: I0310 19:10:59.300052 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-69987f456f-bjbjk"] Mar 10 19:10:59 crc kubenswrapper[4861]: I0310 19:10:59.432909 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/692615cd-3dd2-4970-9d35-63073e2403ba-ovndb-tls-certs\") pod \"neutron-69987f456f-bjbjk\" (UID: \"692615cd-3dd2-4970-9d35-63073e2403ba\") " pod="openstack/neutron-69987f456f-bjbjk" Mar 10 19:10:59 crc kubenswrapper[4861]: I0310 19:10:59.432994 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/692615cd-3dd2-4970-9d35-63073e2403ba-httpd-config\") pod \"neutron-69987f456f-bjbjk\" (UID: \"692615cd-3dd2-4970-9d35-63073e2403ba\") " pod="openstack/neutron-69987f456f-bjbjk" Mar 10 19:10:59 crc kubenswrapper[4861]: I0310 19:10:59.433047 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9648m\" (UniqueName: \"kubernetes.io/projected/692615cd-3dd2-4970-9d35-63073e2403ba-kube-api-access-9648m\") pod \"neutron-69987f456f-bjbjk\" (UID: \"692615cd-3dd2-4970-9d35-63073e2403ba\") " pod="openstack/neutron-69987f456f-bjbjk" Mar 10 19:10:59 crc kubenswrapper[4861]: I0310 19:10:59.433085 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/692615cd-3dd2-4970-9d35-63073e2403ba-combined-ca-bundle\") pod \"neutron-69987f456f-bjbjk\" (UID: \"692615cd-3dd2-4970-9d35-63073e2403ba\") " pod="openstack/neutron-69987f456f-bjbjk" Mar 10 19:10:59 crc kubenswrapper[4861]: I0310 19:10:59.433135 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"config\" (UniqueName: \"kubernetes.io/secret/692615cd-3dd2-4970-9d35-63073e2403ba-config\") pod \"neutron-69987f456f-bjbjk\" (UID: \"692615cd-3dd2-4970-9d35-63073e2403ba\") " pod="openstack/neutron-69987f456f-bjbjk" Mar 10 19:10:59 crc kubenswrapper[4861]: I0310 19:10:59.433216 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/692615cd-3dd2-4970-9d35-63073e2403ba-public-tls-certs\") pod \"neutron-69987f456f-bjbjk\" (UID: \"692615cd-3dd2-4970-9d35-63073e2403ba\") " pod="openstack/neutron-69987f456f-bjbjk" Mar 10 19:10:59 crc kubenswrapper[4861]: I0310 19:10:59.433320 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/692615cd-3dd2-4970-9d35-63073e2403ba-internal-tls-certs\") pod \"neutron-69987f456f-bjbjk\" (UID: \"692615cd-3dd2-4970-9d35-63073e2403ba\") " pod="openstack/neutron-69987f456f-bjbjk" Mar 10 19:10:59 crc kubenswrapper[4861]: I0310 19:10:59.535437 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/692615cd-3dd2-4970-9d35-63073e2403ba-ovndb-tls-certs\") pod \"neutron-69987f456f-bjbjk\" (UID: \"692615cd-3dd2-4970-9d35-63073e2403ba\") " pod="openstack/neutron-69987f456f-bjbjk" Mar 10 19:10:59 crc kubenswrapper[4861]: I0310 19:10:59.535514 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/692615cd-3dd2-4970-9d35-63073e2403ba-httpd-config\") pod \"neutron-69987f456f-bjbjk\" (UID: \"692615cd-3dd2-4970-9d35-63073e2403ba\") " pod="openstack/neutron-69987f456f-bjbjk" Mar 10 19:10:59 crc kubenswrapper[4861]: I0310 19:10:59.535547 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9648m\" (UniqueName: 
\"kubernetes.io/projected/692615cd-3dd2-4970-9d35-63073e2403ba-kube-api-access-9648m\") pod \"neutron-69987f456f-bjbjk\" (UID: \"692615cd-3dd2-4970-9d35-63073e2403ba\") " pod="openstack/neutron-69987f456f-bjbjk" Mar 10 19:10:59 crc kubenswrapper[4861]: I0310 19:10:59.535587 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/692615cd-3dd2-4970-9d35-63073e2403ba-combined-ca-bundle\") pod \"neutron-69987f456f-bjbjk\" (UID: \"692615cd-3dd2-4970-9d35-63073e2403ba\") " pod="openstack/neutron-69987f456f-bjbjk" Mar 10 19:10:59 crc kubenswrapper[4861]: I0310 19:10:59.535633 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/692615cd-3dd2-4970-9d35-63073e2403ba-config\") pod \"neutron-69987f456f-bjbjk\" (UID: \"692615cd-3dd2-4970-9d35-63073e2403ba\") " pod="openstack/neutron-69987f456f-bjbjk" Mar 10 19:10:59 crc kubenswrapper[4861]: I0310 19:10:59.535692 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/692615cd-3dd2-4970-9d35-63073e2403ba-public-tls-certs\") pod \"neutron-69987f456f-bjbjk\" (UID: \"692615cd-3dd2-4970-9d35-63073e2403ba\") " pod="openstack/neutron-69987f456f-bjbjk" Mar 10 19:10:59 crc kubenswrapper[4861]: I0310 19:10:59.535796 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/692615cd-3dd2-4970-9d35-63073e2403ba-internal-tls-certs\") pod \"neutron-69987f456f-bjbjk\" (UID: \"692615cd-3dd2-4970-9d35-63073e2403ba\") " pod="openstack/neutron-69987f456f-bjbjk" Mar 10 19:10:59 crc kubenswrapper[4861]: I0310 19:10:59.543335 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/692615cd-3dd2-4970-9d35-63073e2403ba-ovndb-tls-certs\") pod 
\"neutron-69987f456f-bjbjk\" (UID: \"692615cd-3dd2-4970-9d35-63073e2403ba\") " pod="openstack/neutron-69987f456f-bjbjk" Mar 10 19:10:59 crc kubenswrapper[4861]: I0310 19:10:59.543428 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/692615cd-3dd2-4970-9d35-63073e2403ba-config\") pod \"neutron-69987f456f-bjbjk\" (UID: \"692615cd-3dd2-4970-9d35-63073e2403ba\") " pod="openstack/neutron-69987f456f-bjbjk" Mar 10 19:10:59 crc kubenswrapper[4861]: I0310 19:10:59.543505 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/692615cd-3dd2-4970-9d35-63073e2403ba-httpd-config\") pod \"neutron-69987f456f-bjbjk\" (UID: \"692615cd-3dd2-4970-9d35-63073e2403ba\") " pod="openstack/neutron-69987f456f-bjbjk" Mar 10 19:10:59 crc kubenswrapper[4861]: I0310 19:10:59.548305 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/692615cd-3dd2-4970-9d35-63073e2403ba-combined-ca-bundle\") pod \"neutron-69987f456f-bjbjk\" (UID: \"692615cd-3dd2-4970-9d35-63073e2403ba\") " pod="openstack/neutron-69987f456f-bjbjk" Mar 10 19:10:59 crc kubenswrapper[4861]: I0310 19:10:59.554378 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/692615cd-3dd2-4970-9d35-63073e2403ba-internal-tls-certs\") pod \"neutron-69987f456f-bjbjk\" (UID: \"692615cd-3dd2-4970-9d35-63073e2403ba\") " pod="openstack/neutron-69987f456f-bjbjk" Mar 10 19:10:59 crc kubenswrapper[4861]: I0310 19:10:59.554540 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/692615cd-3dd2-4970-9d35-63073e2403ba-public-tls-certs\") pod \"neutron-69987f456f-bjbjk\" (UID: \"692615cd-3dd2-4970-9d35-63073e2403ba\") " pod="openstack/neutron-69987f456f-bjbjk" Mar 10 19:10:59 crc 
kubenswrapper[4861]: I0310 19:10:59.560329 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9648m\" (UniqueName: \"kubernetes.io/projected/692615cd-3dd2-4970-9d35-63073e2403ba-kube-api-access-9648m\") pod \"neutron-69987f456f-bjbjk\" (UID: \"692615cd-3dd2-4970-9d35-63073e2403ba\") " pod="openstack/neutron-69987f456f-bjbjk" Mar 10 19:10:59 crc kubenswrapper[4861]: I0310 19:10:59.659173 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-69987f456f-bjbjk" Mar 10 19:11:00 crc kubenswrapper[4861]: I0310 19:11:00.230247 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-69987f456f-bjbjk"] Mar 10 19:11:00 crc kubenswrapper[4861]: W0310 19:11:00.232333 4861 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod692615cd_3dd2_4970_9d35_63073e2403ba.slice/crio-6e062d912ba3bd3d7621ccdddf5f34b7f49c98dbafbe2c1e56e53fce35c8525d WatchSource:0}: Error finding container 6e062d912ba3bd3d7621ccdddf5f34b7f49c98dbafbe2c1e56e53fce35c8525d: Status 404 returned error can't find the container with id 6e062d912ba3bd3d7621ccdddf5f34b7f49c98dbafbe2c1e56e53fce35c8525d Mar 10 19:11:00 crc kubenswrapper[4861]: I0310 19:11:00.241022 4861 generic.go:334] "Generic (PLEG): container finished" podID="06936988-eb27-45c1-882e-890cca4cddfe" containerID="a3b089250c795f2618f460b04f1cfb948fb4c25d02b633b7c39764754740cb3c" exitCode=0 Mar 10 19:11:00 crc kubenswrapper[4861]: I0310 19:11:00.241066 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-84bb44bb99-8g2vb" event={"ID":"06936988-eb27-45c1-882e-890cca4cddfe","Type":"ContainerDied","Data":"a3b089250c795f2618f460b04f1cfb948fb4c25d02b633b7c39764754740cb3c"} Mar 10 19:11:00 crc kubenswrapper[4861]: I0310 19:11:00.683881 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7b8fcc65cc-66b8w" Mar 
10 19:11:00 crc kubenswrapper[4861]: I0310 19:11:00.754821 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8449d68f4f-wp6ch"] Mar 10 19:11:00 crc kubenswrapper[4861]: I0310 19:11:00.755093 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-8449d68f4f-wp6ch" podUID="f47f4213-5802-4231-a1ac-f826c52e6434" containerName="dnsmasq-dns" containerID="cri-o://a212161bdbdd2fa85d5b54a024b19064fd22a0511dd519367758786efd0612a3" gracePeriod=10 Mar 10 19:11:00 crc kubenswrapper[4861]: I0310 19:11:00.890002 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Mar 10 19:11:00 crc kubenswrapper[4861]: I0310 19:11:00.957583 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Mar 10 19:11:01 crc kubenswrapper[4861]: I0310 19:11:01.195065 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8449d68f4f-wp6ch" Mar 10 19:11:01 crc kubenswrapper[4861]: I0310 19:11:01.261544 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-69987f456f-bjbjk" event={"ID":"692615cd-3dd2-4970-9d35-63073e2403ba","Type":"ContainerStarted","Data":"8c0527061a420c2c42f4d25de783f4827f7ddac4b9a8e5486e1239c416673684"} Mar 10 19:11:01 crc kubenswrapper[4861]: I0310 19:11:01.261589 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-69987f456f-bjbjk" event={"ID":"692615cd-3dd2-4970-9d35-63073e2403ba","Type":"ContainerStarted","Data":"c33900d456694805e616c7853b9af4d757f2fbf528a6cc4897258f4c25f58c83"} Mar 10 19:11:01 crc kubenswrapper[4861]: I0310 19:11:01.261600 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-69987f456f-bjbjk" event={"ID":"692615cd-3dd2-4970-9d35-63073e2403ba","Type":"ContainerStarted","Data":"6e062d912ba3bd3d7621ccdddf5f34b7f49c98dbafbe2c1e56e53fce35c8525d"} Mar 10 19:11:01 crc 
kubenswrapper[4861]: I0310 19:11:01.262801 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-69987f456f-bjbjk" Mar 10 19:11:01 crc kubenswrapper[4861]: I0310 19:11:01.268252 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f47f4213-5802-4231-a1ac-f826c52e6434-config\") pod \"f47f4213-5802-4231-a1ac-f826c52e6434\" (UID: \"f47f4213-5802-4231-a1ac-f826c52e6434\") " Mar 10 19:11:01 crc kubenswrapper[4861]: I0310 19:11:01.268370 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f47f4213-5802-4231-a1ac-f826c52e6434-dns-swift-storage-0\") pod \"f47f4213-5802-4231-a1ac-f826c52e6434\" (UID: \"f47f4213-5802-4231-a1ac-f826c52e6434\") " Mar 10 19:11:01 crc kubenswrapper[4861]: I0310 19:11:01.268455 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f47f4213-5802-4231-a1ac-f826c52e6434-ovsdbserver-nb\") pod \"f47f4213-5802-4231-a1ac-f826c52e6434\" (UID: \"f47f4213-5802-4231-a1ac-f826c52e6434\") " Mar 10 19:11:01 crc kubenswrapper[4861]: I0310 19:11:01.268526 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f47f4213-5802-4231-a1ac-f826c52e6434-dns-svc\") pod \"f47f4213-5802-4231-a1ac-f826c52e6434\" (UID: \"f47f4213-5802-4231-a1ac-f826c52e6434\") " Mar 10 19:11:01 crc kubenswrapper[4861]: I0310 19:11:01.268561 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tsndl\" (UniqueName: \"kubernetes.io/projected/f47f4213-5802-4231-a1ac-f826c52e6434-kube-api-access-tsndl\") pod \"f47f4213-5802-4231-a1ac-f826c52e6434\" (UID: \"f47f4213-5802-4231-a1ac-f826c52e6434\") " Mar 10 19:11:01 crc kubenswrapper[4861]: I0310 19:11:01.268586 4861 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f47f4213-5802-4231-a1ac-f826c52e6434-ovsdbserver-sb\") pod \"f47f4213-5802-4231-a1ac-f826c52e6434\" (UID: \"f47f4213-5802-4231-a1ac-f826c52e6434\") " Mar 10 19:11:01 crc kubenswrapper[4861]: I0310 19:11:01.268769 4861 generic.go:334] "Generic (PLEG): container finished" podID="f47f4213-5802-4231-a1ac-f826c52e6434" containerID="a212161bdbdd2fa85d5b54a024b19064fd22a0511dd519367758786efd0612a3" exitCode=0 Mar 10 19:11:01 crc kubenswrapper[4861]: I0310 19:11:01.269041 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="fb49c733-a125-45b0-9e14-d3620e66970c" containerName="cinder-scheduler" containerID="cri-o://3f2a8510143998b73be24a1438769ee92d406757027672985e8a3acb36c3c724" gracePeriod=30 Mar 10 19:11:01 crc kubenswrapper[4861]: I0310 19:11:01.269594 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-8449d68f4f-wp6ch" Mar 10 19:11:01 crc kubenswrapper[4861]: I0310 19:11:01.270129 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8449d68f4f-wp6ch" event={"ID":"f47f4213-5802-4231-a1ac-f826c52e6434","Type":"ContainerDied","Data":"a212161bdbdd2fa85d5b54a024b19064fd22a0511dd519367758786efd0612a3"} Mar 10 19:11:01 crc kubenswrapper[4861]: I0310 19:11:01.270169 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8449d68f4f-wp6ch" event={"ID":"f47f4213-5802-4231-a1ac-f826c52e6434","Type":"ContainerDied","Data":"2e0437b931c3ce3974e4ebe88dea2b1df05c9cd1bcff6d608deb84f78dd274d3"} Mar 10 19:11:01 crc kubenswrapper[4861]: I0310 19:11:01.270191 4861 scope.go:117] "RemoveContainer" containerID="a212161bdbdd2fa85d5b54a024b19064fd22a0511dd519367758786efd0612a3" Mar 10 19:11:01 crc kubenswrapper[4861]: I0310 19:11:01.270361 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="fb49c733-a125-45b0-9e14-d3620e66970c" containerName="probe" containerID="cri-o://d74754d78321eddc9a49cd074303b4bd842d69d279bdcc85f05bac288ecab0e1" gracePeriod=30 Mar 10 19:11:01 crc kubenswrapper[4861]: I0310 19:11:01.275019 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f47f4213-5802-4231-a1ac-f826c52e6434-kube-api-access-tsndl" (OuterVolumeSpecName: "kube-api-access-tsndl") pod "f47f4213-5802-4231-a1ac-f826c52e6434" (UID: "f47f4213-5802-4231-a1ac-f826c52e6434"). InnerVolumeSpecName "kube-api-access-tsndl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 10 19:11:01 crc kubenswrapper[4861]: I0310 19:11:01.301002 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-69987f456f-bjbjk" podStartSLOduration=2.3009864970000002 podStartE2EDuration="2.300986497s" podCreationTimestamp="2026-03-10 19:10:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-10 19:11:01.285841045 +0000 UTC m=+1405.049277025" watchObservedRunningTime="2026-03-10 19:11:01.300986497 +0000 UTC m=+1405.064422447" Mar 10 19:11:01 crc kubenswrapper[4861]: I0310 19:11:01.312323 4861 scope.go:117] "RemoveContainer" containerID="65acecbb995bd5b647616fd126a73925751e736b52a02d020201d7d8827988b4" Mar 10 19:11:01 crc kubenswrapper[4861]: I0310 19:11:01.334070 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f47f4213-5802-4231-a1ac-f826c52e6434-config" (OuterVolumeSpecName: "config") pod "f47f4213-5802-4231-a1ac-f826c52e6434" (UID: "f47f4213-5802-4231-a1ac-f826c52e6434"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 10 19:11:01 crc kubenswrapper[4861]: I0310 19:11:01.334145 4861 scope.go:117] "RemoveContainer" containerID="a212161bdbdd2fa85d5b54a024b19064fd22a0511dd519367758786efd0612a3" Mar 10 19:11:01 crc kubenswrapper[4861]: E0310 19:11:01.334736 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a212161bdbdd2fa85d5b54a024b19064fd22a0511dd519367758786efd0612a3\": container with ID starting with a212161bdbdd2fa85d5b54a024b19064fd22a0511dd519367758786efd0612a3 not found: ID does not exist" containerID="a212161bdbdd2fa85d5b54a024b19064fd22a0511dd519367758786efd0612a3" Mar 10 19:11:01 crc kubenswrapper[4861]: I0310 19:11:01.334794 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a212161bdbdd2fa85d5b54a024b19064fd22a0511dd519367758786efd0612a3"} err="failed to get container status \"a212161bdbdd2fa85d5b54a024b19064fd22a0511dd519367758786efd0612a3\": rpc error: code = NotFound desc = could not find container \"a212161bdbdd2fa85d5b54a024b19064fd22a0511dd519367758786efd0612a3\": container with ID starting with a212161bdbdd2fa85d5b54a024b19064fd22a0511dd519367758786efd0612a3 not found: ID does not exist" Mar 10 19:11:01 crc kubenswrapper[4861]: I0310 19:11:01.334819 4861 scope.go:117] "RemoveContainer" containerID="65acecbb995bd5b647616fd126a73925751e736b52a02d020201d7d8827988b4" Mar 10 19:11:01 crc kubenswrapper[4861]: E0310 19:11:01.335170 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"65acecbb995bd5b647616fd126a73925751e736b52a02d020201d7d8827988b4\": container with ID starting with 65acecbb995bd5b647616fd126a73925751e736b52a02d020201d7d8827988b4 not found: ID does not exist" containerID="65acecbb995bd5b647616fd126a73925751e736b52a02d020201d7d8827988b4" Mar 10 19:11:01 crc kubenswrapper[4861]: I0310 19:11:01.335227 
4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"65acecbb995bd5b647616fd126a73925751e736b52a02d020201d7d8827988b4"} err="failed to get container status \"65acecbb995bd5b647616fd126a73925751e736b52a02d020201d7d8827988b4\": rpc error: code = NotFound desc = could not find container \"65acecbb995bd5b647616fd126a73925751e736b52a02d020201d7d8827988b4\": container with ID starting with 65acecbb995bd5b647616fd126a73925751e736b52a02d020201d7d8827988b4 not found: ID does not exist" Mar 10 19:11:01 crc kubenswrapper[4861]: I0310 19:11:01.335410 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f47f4213-5802-4231-a1ac-f826c52e6434-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "f47f4213-5802-4231-a1ac-f826c52e6434" (UID: "f47f4213-5802-4231-a1ac-f826c52e6434"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 10 19:11:01 crc kubenswrapper[4861]: I0310 19:11:01.347898 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f47f4213-5802-4231-a1ac-f826c52e6434-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "f47f4213-5802-4231-a1ac-f826c52e6434" (UID: "f47f4213-5802-4231-a1ac-f826c52e6434"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 10 19:11:01 crc kubenswrapper[4861]: I0310 19:11:01.349323 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f47f4213-5802-4231-a1ac-f826c52e6434-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "f47f4213-5802-4231-a1ac-f826c52e6434" (UID: "f47f4213-5802-4231-a1ac-f826c52e6434"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 10 19:11:01 crc kubenswrapper[4861]: I0310 19:11:01.350298 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f47f4213-5802-4231-a1ac-f826c52e6434-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "f47f4213-5802-4231-a1ac-f826c52e6434" (UID: "f47f4213-5802-4231-a1ac-f826c52e6434"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 10 19:11:01 crc kubenswrapper[4861]: I0310 19:11:01.379391 4861 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f47f4213-5802-4231-a1ac-f826c52e6434-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Mar 10 19:11:01 crc kubenswrapper[4861]: I0310 19:11:01.379441 4861 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f47f4213-5802-4231-a1ac-f826c52e6434-dns-svc\") on node \"crc\" DevicePath \"\"" Mar 10 19:11:01 crc kubenswrapper[4861]: I0310 19:11:01.379453 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tsndl\" (UniqueName: \"kubernetes.io/projected/f47f4213-5802-4231-a1ac-f826c52e6434-kube-api-access-tsndl\") on node \"crc\" DevicePath \"\"" Mar 10 19:11:01 crc kubenswrapper[4861]: I0310 19:11:01.379464 4861 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f47f4213-5802-4231-a1ac-f826c52e6434-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Mar 10 19:11:01 crc kubenswrapper[4861]: I0310 19:11:01.379473 4861 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f47f4213-5802-4231-a1ac-f826c52e6434-config\") on node \"crc\" DevicePath \"\"" Mar 10 19:11:01 crc kubenswrapper[4861]: I0310 19:11:01.379480 4861 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: 
\"kubernetes.io/configmap/f47f4213-5802-4231-a1ac-f826c52e6434-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Mar 10 19:11:01 crc kubenswrapper[4861]: I0310 19:11:01.613952 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8449d68f4f-wp6ch"] Mar 10 19:11:01 crc kubenswrapper[4861]: I0310 19:11:01.621779 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-8449d68f4f-wp6ch"] Mar 10 19:11:01 crc kubenswrapper[4861]: I0310 19:11:01.989451 4861 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/neutron-84bb44bb99-8g2vb" podUID="06936988-eb27-45c1-882e-890cca4cddfe" containerName="neutron-httpd" probeResult="failure" output="Get \"https://10.217.0.157:9696/\": dial tcp 10.217.0.157:9696: connect: connection refused" Mar 10 19:11:02 crc kubenswrapper[4861]: I0310 19:11:02.284015 4861 generic.go:334] "Generic (PLEG): container finished" podID="fb49c733-a125-45b0-9e14-d3620e66970c" containerID="d74754d78321eddc9a49cd074303b4bd842d69d279bdcc85f05bac288ecab0e1" exitCode=0 Mar 10 19:11:02 crc kubenswrapper[4861]: I0310 19:11:02.284137 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"fb49c733-a125-45b0-9e14-d3620e66970c","Type":"ContainerDied","Data":"d74754d78321eddc9a49cd074303b4bd842d69d279bdcc85f05bac288ecab0e1"} Mar 10 19:11:02 crc kubenswrapper[4861]: I0310 19:11:02.971438 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f47f4213-5802-4231-a1ac-f826c52e6434" path="/var/lib/kubelet/pods/f47f4213-5802-4231-a1ac-f826c52e6434/volumes" Mar 10 19:11:04 crc kubenswrapper[4861]: I0310 19:11:04.324423 4861 generic.go:334] "Generic (PLEG): container finished" podID="fb49c733-a125-45b0-9e14-d3620e66970c" containerID="3f2a8510143998b73be24a1438769ee92d406757027672985e8a3acb36c3c724" exitCode=0 Mar 10 19:11:04 crc kubenswrapper[4861]: I0310 19:11:04.324809 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/cinder-scheduler-0" event={"ID":"fb49c733-a125-45b0-9e14-d3620e66970c","Type":"ContainerDied","Data":"3f2a8510143998b73be24a1438769ee92d406757027672985e8a3acb36c3c724"} Mar 10 19:11:04 crc kubenswrapper[4861]: I0310 19:11:04.933403 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Mar 10 19:11:05 crc kubenswrapper[4861]: I0310 19:11:05.059410 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fb49c733-a125-45b0-9e14-d3620e66970c-scripts\") pod \"fb49c733-a125-45b0-9e14-d3620e66970c\" (UID: \"fb49c733-a125-45b0-9e14-d3620e66970c\") " Mar 10 19:11:05 crc kubenswrapper[4861]: I0310 19:11:05.059603 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/fb49c733-a125-45b0-9e14-d3620e66970c-etc-machine-id\") pod \"fb49c733-a125-45b0-9e14-d3620e66970c\" (UID: \"fb49c733-a125-45b0-9e14-d3620e66970c\") " Mar 10 19:11:05 crc kubenswrapper[4861]: I0310 19:11:05.059638 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fb49c733-a125-45b0-9e14-d3620e66970c-combined-ca-bundle\") pod \"fb49c733-a125-45b0-9e14-d3620e66970c\" (UID: \"fb49c733-a125-45b0-9e14-d3620e66970c\") " Mar 10 19:11:05 crc kubenswrapper[4861]: I0310 19:11:05.059696 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kztkp\" (UniqueName: \"kubernetes.io/projected/fb49c733-a125-45b0-9e14-d3620e66970c-kube-api-access-kztkp\") pod \"fb49c733-a125-45b0-9e14-d3620e66970c\" (UID: \"fb49c733-a125-45b0-9e14-d3620e66970c\") " Mar 10 19:11:05 crc kubenswrapper[4861]: I0310 19:11:05.059729 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fb49c733-a125-45b0-9e14-d3620e66970c-etc-machine-id" 
(OuterVolumeSpecName: "etc-machine-id") pod "fb49c733-a125-45b0-9e14-d3620e66970c" (UID: "fb49c733-a125-45b0-9e14-d3620e66970c"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 10 19:11:05 crc kubenswrapper[4861]: I0310 19:11:05.059764 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fb49c733-a125-45b0-9e14-d3620e66970c-config-data-custom\") pod \"fb49c733-a125-45b0-9e14-d3620e66970c\" (UID: \"fb49c733-a125-45b0-9e14-d3620e66970c\") " Mar 10 19:11:05 crc kubenswrapper[4861]: I0310 19:11:05.059808 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fb49c733-a125-45b0-9e14-d3620e66970c-config-data\") pod \"fb49c733-a125-45b0-9e14-d3620e66970c\" (UID: \"fb49c733-a125-45b0-9e14-d3620e66970c\") " Mar 10 19:11:05 crc kubenswrapper[4861]: I0310 19:11:05.060362 4861 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/fb49c733-a125-45b0-9e14-d3620e66970c-etc-machine-id\") on node \"crc\" DevicePath \"\"" Mar 10 19:11:05 crc kubenswrapper[4861]: I0310 19:11:05.068869 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fb49c733-a125-45b0-9e14-d3620e66970c-scripts" (OuterVolumeSpecName: "scripts") pod "fb49c733-a125-45b0-9e14-d3620e66970c" (UID: "fb49c733-a125-45b0-9e14-d3620e66970c"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 10 19:11:05 crc kubenswrapper[4861]: I0310 19:11:05.068879 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fb49c733-a125-45b0-9e14-d3620e66970c-kube-api-access-kztkp" (OuterVolumeSpecName: "kube-api-access-kztkp") pod "fb49c733-a125-45b0-9e14-d3620e66970c" (UID: "fb49c733-a125-45b0-9e14-d3620e66970c"). 
InnerVolumeSpecName "kube-api-access-kztkp". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 10 19:11:05 crc kubenswrapper[4861]: I0310 19:11:05.068949 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fb49c733-a125-45b0-9e14-d3620e66970c-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "fb49c733-a125-45b0-9e14-d3620e66970c" (UID: "fb49c733-a125-45b0-9e14-d3620e66970c"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 10 19:11:05 crc kubenswrapper[4861]: I0310 19:11:05.151484 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fb49c733-a125-45b0-9e14-d3620e66970c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fb49c733-a125-45b0-9e14-d3620e66970c" (UID: "fb49c733-a125-45b0-9e14-d3620e66970c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 10 19:11:05 crc kubenswrapper[4861]: I0310 19:11:05.162143 4861 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fb49c733-a125-45b0-9e14-d3620e66970c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 10 19:11:05 crc kubenswrapper[4861]: I0310 19:11:05.162197 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kztkp\" (UniqueName: \"kubernetes.io/projected/fb49c733-a125-45b0-9e14-d3620e66970c-kube-api-access-kztkp\") on node \"crc\" DevicePath \"\"" Mar 10 19:11:05 crc kubenswrapper[4861]: I0310 19:11:05.162211 4861 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fb49c733-a125-45b0-9e14-d3620e66970c-config-data-custom\") on node \"crc\" DevicePath \"\"" Mar 10 19:11:05 crc kubenswrapper[4861]: I0310 19:11:05.162222 4861 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/fb49c733-a125-45b0-9e14-d3620e66970c-scripts\") on node \"crc\" DevicePath \"\"" Mar 10 19:11:05 crc kubenswrapper[4861]: I0310 19:11:05.176927 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fb49c733-a125-45b0-9e14-d3620e66970c-config-data" (OuterVolumeSpecName: "config-data") pod "fb49c733-a125-45b0-9e14-d3620e66970c" (UID: "fb49c733-a125-45b0-9e14-d3620e66970c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 10 19:11:05 crc kubenswrapper[4861]: I0310 19:11:05.263441 4861 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fb49c733-a125-45b0-9e14-d3620e66970c-config-data\") on node \"crc\" DevicePath \"\"" Mar 10 19:11:05 crc kubenswrapper[4861]: I0310 19:11:05.333094 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"fb49c733-a125-45b0-9e14-d3620e66970c","Type":"ContainerDied","Data":"33f45d010cb9654f79952431900eca1d601fb130df4b60b0a7f4f14e1455d7f1"} Mar 10 19:11:05 crc kubenswrapper[4861]: I0310 19:11:05.333311 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Mar 10 19:11:05 crc kubenswrapper[4861]: I0310 19:11:05.333405 4861 scope.go:117] "RemoveContainer" containerID="d74754d78321eddc9a49cd074303b4bd842d69d279bdcc85f05bac288ecab0e1" Mar 10 19:11:05 crc kubenswrapper[4861]: I0310 19:11:05.357495 4861 scope.go:117] "RemoveContainer" containerID="3f2a8510143998b73be24a1438769ee92d406757027672985e8a3acb36c3c724" Mar 10 19:11:05 crc kubenswrapper[4861]: I0310 19:11:05.378151 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Mar 10 19:11:05 crc kubenswrapper[4861]: I0310 19:11:05.387642 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"] Mar 10 19:11:05 crc kubenswrapper[4861]: I0310 19:11:05.411882 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Mar 10 19:11:05 crc kubenswrapper[4861]: E0310 19:11:05.412216 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f47f4213-5802-4231-a1ac-f826c52e6434" containerName="init" Mar 10 19:11:05 crc kubenswrapper[4861]: I0310 19:11:05.412231 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="f47f4213-5802-4231-a1ac-f826c52e6434" containerName="init" Mar 10 19:11:05 crc kubenswrapper[4861]: E0310 19:11:05.412241 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f47f4213-5802-4231-a1ac-f826c52e6434" containerName="dnsmasq-dns" Mar 10 19:11:05 crc kubenswrapper[4861]: I0310 19:11:05.412248 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="f47f4213-5802-4231-a1ac-f826c52e6434" containerName="dnsmasq-dns" Mar 10 19:11:05 crc kubenswrapper[4861]: E0310 19:11:05.412258 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fb49c733-a125-45b0-9e14-d3620e66970c" containerName="cinder-scheduler" Mar 10 19:11:05 crc kubenswrapper[4861]: I0310 19:11:05.412264 4861 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="fb49c733-a125-45b0-9e14-d3620e66970c" containerName="cinder-scheduler" Mar 10 19:11:05 crc kubenswrapper[4861]: E0310 19:11:05.412293 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fb49c733-a125-45b0-9e14-d3620e66970c" containerName="probe" Mar 10 19:11:05 crc kubenswrapper[4861]: I0310 19:11:05.412298 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="fb49c733-a125-45b0-9e14-d3620e66970c" containerName="probe" Mar 10 19:11:05 crc kubenswrapper[4861]: I0310 19:11:05.412452 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="fb49c733-a125-45b0-9e14-d3620e66970c" containerName="cinder-scheduler" Mar 10 19:11:05 crc kubenswrapper[4861]: I0310 19:11:05.412466 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="f47f4213-5802-4231-a1ac-f826c52e6434" containerName="dnsmasq-dns" Mar 10 19:11:05 crc kubenswrapper[4861]: I0310 19:11:05.412480 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="fb49c733-a125-45b0-9e14-d3620e66970c" containerName="probe" Mar 10 19:11:05 crc kubenswrapper[4861]: I0310 19:11:05.413295 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Mar 10 19:11:05 crc kubenswrapper[4861]: I0310 19:11:05.417077 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Mar 10 19:11:05 crc kubenswrapper[4861]: I0310 19:11:05.437754 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Mar 10 19:11:05 crc kubenswrapper[4861]: I0310 19:11:05.466229 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ab499a55-1919-491f-8dc6-12344757201d-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"ab499a55-1919-491f-8dc6-12344757201d\") " pod="openstack/cinder-scheduler-0" Mar 10 19:11:05 crc kubenswrapper[4861]: I0310 19:11:05.466278 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5npx2\" (UniqueName: \"kubernetes.io/projected/ab499a55-1919-491f-8dc6-12344757201d-kube-api-access-5npx2\") pod \"cinder-scheduler-0\" (UID: \"ab499a55-1919-491f-8dc6-12344757201d\") " pod="openstack/cinder-scheduler-0" Mar 10 19:11:05 crc kubenswrapper[4861]: I0310 19:11:05.466322 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ab499a55-1919-491f-8dc6-12344757201d-scripts\") pod \"cinder-scheduler-0\" (UID: \"ab499a55-1919-491f-8dc6-12344757201d\") " pod="openstack/cinder-scheduler-0" Mar 10 19:11:05 crc kubenswrapper[4861]: I0310 19:11:05.466385 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ab499a55-1919-491f-8dc6-12344757201d-config-data\") pod \"cinder-scheduler-0\" (UID: \"ab499a55-1919-491f-8dc6-12344757201d\") " pod="openstack/cinder-scheduler-0" Mar 10 19:11:05 crc kubenswrapper[4861]: I0310 19:11:05.466423 4861 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab499a55-1919-491f-8dc6-12344757201d-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"ab499a55-1919-491f-8dc6-12344757201d\") " pod="openstack/cinder-scheduler-0" Mar 10 19:11:05 crc kubenswrapper[4861]: I0310 19:11:05.466442 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ab499a55-1919-491f-8dc6-12344757201d-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"ab499a55-1919-491f-8dc6-12344757201d\") " pod="openstack/cinder-scheduler-0" Mar 10 19:11:05 crc kubenswrapper[4861]: I0310 19:11:05.567735 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ab499a55-1919-491f-8dc6-12344757201d-config-data\") pod \"cinder-scheduler-0\" (UID: \"ab499a55-1919-491f-8dc6-12344757201d\") " pod="openstack/cinder-scheduler-0" Mar 10 19:11:05 crc kubenswrapper[4861]: I0310 19:11:05.567794 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab499a55-1919-491f-8dc6-12344757201d-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"ab499a55-1919-491f-8dc6-12344757201d\") " pod="openstack/cinder-scheduler-0" Mar 10 19:11:05 crc kubenswrapper[4861]: I0310 19:11:05.567816 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ab499a55-1919-491f-8dc6-12344757201d-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"ab499a55-1919-491f-8dc6-12344757201d\") " pod="openstack/cinder-scheduler-0" Mar 10 19:11:05 crc kubenswrapper[4861]: I0310 19:11:05.567865 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" 
(UniqueName: \"kubernetes.io/host-path/ab499a55-1919-491f-8dc6-12344757201d-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"ab499a55-1919-491f-8dc6-12344757201d\") " pod="openstack/cinder-scheduler-0" Mar 10 19:11:05 crc kubenswrapper[4861]: I0310 19:11:05.567889 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5npx2\" (UniqueName: \"kubernetes.io/projected/ab499a55-1919-491f-8dc6-12344757201d-kube-api-access-5npx2\") pod \"cinder-scheduler-0\" (UID: \"ab499a55-1919-491f-8dc6-12344757201d\") " pod="openstack/cinder-scheduler-0" Mar 10 19:11:05 crc kubenswrapper[4861]: I0310 19:11:05.567922 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ab499a55-1919-491f-8dc6-12344757201d-scripts\") pod \"cinder-scheduler-0\" (UID: \"ab499a55-1919-491f-8dc6-12344757201d\") " pod="openstack/cinder-scheduler-0" Mar 10 19:11:05 crc kubenswrapper[4861]: I0310 19:11:05.568924 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ab499a55-1919-491f-8dc6-12344757201d-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"ab499a55-1919-491f-8dc6-12344757201d\") " pod="openstack/cinder-scheduler-0" Mar 10 19:11:05 crc kubenswrapper[4861]: I0310 19:11:05.573217 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ab499a55-1919-491f-8dc6-12344757201d-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"ab499a55-1919-491f-8dc6-12344757201d\") " pod="openstack/cinder-scheduler-0" Mar 10 19:11:05 crc kubenswrapper[4861]: I0310 19:11:05.573251 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ab499a55-1919-491f-8dc6-12344757201d-scripts\") pod \"cinder-scheduler-0\" (UID: \"ab499a55-1919-491f-8dc6-12344757201d\") " 
pod="openstack/cinder-scheduler-0" Mar 10 19:11:05 crc kubenswrapper[4861]: I0310 19:11:05.573560 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ab499a55-1919-491f-8dc6-12344757201d-config-data\") pod \"cinder-scheduler-0\" (UID: \"ab499a55-1919-491f-8dc6-12344757201d\") " pod="openstack/cinder-scheduler-0" Mar 10 19:11:05 crc kubenswrapper[4861]: I0310 19:11:05.573863 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab499a55-1919-491f-8dc6-12344757201d-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"ab499a55-1919-491f-8dc6-12344757201d\") " pod="openstack/cinder-scheduler-0" Mar 10 19:11:05 crc kubenswrapper[4861]: I0310 19:11:05.584351 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5npx2\" (UniqueName: \"kubernetes.io/projected/ab499a55-1919-491f-8dc6-12344757201d-kube-api-access-5npx2\") pod \"cinder-scheduler-0\" (UID: \"ab499a55-1919-491f-8dc6-12344757201d\") " pod="openstack/cinder-scheduler-0" Mar 10 19:11:05 crc kubenswrapper[4861]: I0310 19:11:05.733959 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Mar 10 19:11:06 crc kubenswrapper[4861]: I0310 19:11:06.273065 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Mar 10 19:11:06 crc kubenswrapper[4861]: I0310 19:11:06.365901 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"ab499a55-1919-491f-8dc6-12344757201d","Type":"ContainerStarted","Data":"0dcdea6b1679c0252481963ecf23fffd0c90847e454359d0dea47ee1537c3968"} Mar 10 19:11:06 crc kubenswrapper[4861]: I0310 19:11:06.987560 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fb49c733-a125-45b0-9e14-d3620e66970c" path="/var/lib/kubelet/pods/fb49c733-a125-45b0-9e14-d3620e66970c/volumes" Mar 10 19:11:07 crc kubenswrapper[4861]: I0310 19:11:07.332153 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Mar 10 19:11:07 crc kubenswrapper[4861]: I0310 19:11:07.386056 4861 generic.go:334] "Generic (PLEG): container finished" podID="06936988-eb27-45c1-882e-890cca4cddfe" containerID="c1c8d15feff5e5cdb5fefa2d7b2105539922ecb219a45be426bb5a8c8841edb8" exitCode=0 Mar 10 19:11:07 crc kubenswrapper[4861]: I0310 19:11:07.386193 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-84bb44bb99-8g2vb" event={"ID":"06936988-eb27-45c1-882e-890cca4cddfe","Type":"ContainerDied","Data":"c1c8d15feff5e5cdb5fefa2d7b2105539922ecb219a45be426bb5a8c8841edb8"} Mar 10 19:11:07 crc kubenswrapper[4861]: I0310 19:11:07.406136 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"ab499a55-1919-491f-8dc6-12344757201d","Type":"ContainerStarted","Data":"001bd034047bd1fb56f256a9f728c8c20197f9cb2c630b0bf6f2b961fe0c48ec"} Mar 10 19:11:07 crc kubenswrapper[4861]: I0310 19:11:07.992035 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-84bb44bb99-8g2vb" Mar 10 19:11:08 crc kubenswrapper[4861]: I0310 19:11:08.137367 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/06936988-eb27-45c1-882e-890cca4cddfe-httpd-config\") pod \"06936988-eb27-45c1-882e-890cca4cddfe\" (UID: \"06936988-eb27-45c1-882e-890cca4cddfe\") " Mar 10 19:11:08 crc kubenswrapper[4861]: I0310 19:11:08.137500 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/06936988-eb27-45c1-882e-890cca4cddfe-ovndb-tls-certs\") pod \"06936988-eb27-45c1-882e-890cca4cddfe\" (UID: \"06936988-eb27-45c1-882e-890cca4cddfe\") " Mar 10 19:11:08 crc kubenswrapper[4861]: I0310 19:11:08.137521 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wqpgv\" (UniqueName: \"kubernetes.io/projected/06936988-eb27-45c1-882e-890cca4cddfe-kube-api-access-wqpgv\") pod \"06936988-eb27-45c1-882e-890cca4cddfe\" (UID: \"06936988-eb27-45c1-882e-890cca4cddfe\") " Mar 10 19:11:08 crc kubenswrapper[4861]: I0310 19:11:08.137549 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/06936988-eb27-45c1-882e-890cca4cddfe-combined-ca-bundle\") pod \"06936988-eb27-45c1-882e-890cca4cddfe\" (UID: \"06936988-eb27-45c1-882e-890cca4cddfe\") " Mar 10 19:11:08 crc kubenswrapper[4861]: I0310 19:11:08.137603 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/06936988-eb27-45c1-882e-890cca4cddfe-public-tls-certs\") pod \"06936988-eb27-45c1-882e-890cca4cddfe\" (UID: \"06936988-eb27-45c1-882e-890cca4cddfe\") " Mar 10 19:11:08 crc kubenswrapper[4861]: I0310 19:11:08.137658 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/06936988-eb27-45c1-882e-890cca4cddfe-internal-tls-certs\") pod \"06936988-eb27-45c1-882e-890cca4cddfe\" (UID: \"06936988-eb27-45c1-882e-890cca4cddfe\") " Mar 10 19:11:08 crc kubenswrapper[4861]: I0310 19:11:08.137699 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/06936988-eb27-45c1-882e-890cca4cddfe-config\") pod \"06936988-eb27-45c1-882e-890cca4cddfe\" (UID: \"06936988-eb27-45c1-882e-890cca4cddfe\") " Mar 10 19:11:08 crc kubenswrapper[4861]: I0310 19:11:08.158183 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/06936988-eb27-45c1-882e-890cca4cddfe-kube-api-access-wqpgv" (OuterVolumeSpecName: "kube-api-access-wqpgv") pod "06936988-eb27-45c1-882e-890cca4cddfe" (UID: "06936988-eb27-45c1-882e-890cca4cddfe"). InnerVolumeSpecName "kube-api-access-wqpgv". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 10 19:11:08 crc kubenswrapper[4861]: I0310 19:11:08.160966 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/06936988-eb27-45c1-882e-890cca4cddfe-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "06936988-eb27-45c1-882e-890cca4cddfe" (UID: "06936988-eb27-45c1-882e-890cca4cddfe"). InnerVolumeSpecName "httpd-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 10 19:11:08 crc kubenswrapper[4861]: I0310 19:11:08.239159 4861 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/06936988-eb27-45c1-882e-890cca4cddfe-httpd-config\") on node \"crc\" DevicePath \"\"" Mar 10 19:11:08 crc kubenswrapper[4861]: I0310 19:11:08.239200 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wqpgv\" (UniqueName: \"kubernetes.io/projected/06936988-eb27-45c1-882e-890cca4cddfe-kube-api-access-wqpgv\") on node \"crc\" DevicePath \"\"" Mar 10 19:11:08 crc kubenswrapper[4861]: I0310 19:11:08.252880 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/06936988-eb27-45c1-882e-890cca4cddfe-config" (OuterVolumeSpecName: "config") pod "06936988-eb27-45c1-882e-890cca4cddfe" (UID: "06936988-eb27-45c1-882e-890cca4cddfe"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 10 19:11:08 crc kubenswrapper[4861]: I0310 19:11:08.253121 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/06936988-eb27-45c1-882e-890cca4cddfe-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "06936988-eb27-45c1-882e-890cca4cddfe" (UID: "06936988-eb27-45c1-882e-890cca4cddfe"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 10 19:11:08 crc kubenswrapper[4861]: I0310 19:11:08.267806 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/06936988-eb27-45c1-882e-890cca4cddfe-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "06936988-eb27-45c1-882e-890cca4cddfe" (UID: "06936988-eb27-45c1-882e-890cca4cddfe"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 10 19:11:08 crc kubenswrapper[4861]: I0310 19:11:08.292803 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/06936988-eb27-45c1-882e-890cca4cddfe-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "06936988-eb27-45c1-882e-890cca4cddfe" (UID: "06936988-eb27-45c1-882e-890cca4cddfe"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 10 19:11:08 crc kubenswrapper[4861]: I0310 19:11:08.309249 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/06936988-eb27-45c1-882e-890cca4cddfe-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "06936988-eb27-45c1-882e-890cca4cddfe" (UID: "06936988-eb27-45c1-882e-890cca4cddfe"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 10 19:11:08 crc kubenswrapper[4861]: I0310 19:11:08.342364 4861 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/06936988-eb27-45c1-882e-890cca4cddfe-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 10 19:11:08 crc kubenswrapper[4861]: I0310 19:11:08.342398 4861 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/06936988-eb27-45c1-882e-890cca4cddfe-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 10 19:11:08 crc kubenswrapper[4861]: I0310 19:11:08.342409 4861 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/06936988-eb27-45c1-882e-890cca4cddfe-public-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 10 19:11:08 crc kubenswrapper[4861]: I0310 19:11:08.342417 4861 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/06936988-eb27-45c1-882e-890cca4cddfe-internal-tls-certs\") on node 
\"crc\" DevicePath \"\"" Mar 10 19:11:08 crc kubenswrapper[4861]: I0310 19:11:08.342425 4861 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/06936988-eb27-45c1-882e-890cca4cddfe-config\") on node \"crc\" DevicePath \"\"" Mar 10 19:11:08 crc kubenswrapper[4861]: I0310 19:11:08.435869 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-84bb44bb99-8g2vb" event={"ID":"06936988-eb27-45c1-882e-890cca4cddfe","Type":"ContainerDied","Data":"cbdf0efdeae49fcc08a801b3cb1fdf7cd3226d9dc24a8de68fbdfe890799cbc4"} Mar 10 19:11:08 crc kubenswrapper[4861]: I0310 19:11:08.435918 4861 scope.go:117] "RemoveContainer" containerID="a3b089250c795f2618f460b04f1cfb948fb4c25d02b633b7c39764754740cb3c" Mar 10 19:11:08 crc kubenswrapper[4861]: I0310 19:11:08.436049 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-84bb44bb99-8g2vb" Mar 10 19:11:08 crc kubenswrapper[4861]: I0310 19:11:08.472449 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"ab499a55-1919-491f-8dc6-12344757201d","Type":"ContainerStarted","Data":"a8146baaa87f96997feb8adc35ad2ceeace3b69ee66ea36c6996074e20d85886"} Mar 10 19:11:08 crc kubenswrapper[4861]: I0310 19:11:08.474136 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-84bb44bb99-8g2vb"] Mar 10 19:11:08 crc kubenswrapper[4861]: I0310 19:11:08.492023 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-84bb44bb99-8g2vb"] Mar 10 19:11:08 crc kubenswrapper[4861]: I0310 19:11:08.495861 4861 scope.go:117] "RemoveContainer" containerID="c1c8d15feff5e5cdb5fefa2d7b2105539922ecb219a45be426bb5a8c8841edb8" Mar 10 19:11:08 crc kubenswrapper[4861]: I0310 19:11:08.500129 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=3.500115942 podStartE2EDuration="3.500115942s" 
podCreationTimestamp="2026-03-10 19:11:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-10 19:11:08.49717992 +0000 UTC m=+1412.260615890" watchObservedRunningTime="2026-03-10 19:11:08.500115942 +0000 UTC m=+1412.263551892" Mar 10 19:11:08 crc kubenswrapper[4861]: I0310 19:11:08.543084 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-7f85b8cc7d-lblq8" Mar 10 19:11:08 crc kubenswrapper[4861]: I0310 19:11:08.566057 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-7f85b8cc7d-lblq8" Mar 10 19:11:08 crc kubenswrapper[4861]: I0310 19:11:08.971852 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="06936988-eb27-45c1-882e-890cca4cddfe" path="/var/lib/kubelet/pods/06936988-eb27-45c1-882e-890cca4cddfe/volumes" Mar 10 19:11:09 crc kubenswrapper[4861]: I0310 19:11:09.210157 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-75ff4ff987-k4jks" Mar 10 19:11:09 crc kubenswrapper[4861]: I0310 19:11:09.518461 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-64c5bb5d74-nmtmm" Mar 10 19:11:09 crc kubenswrapper[4861]: I0310 19:11:09.852333 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-64c5bb5d74-nmtmm" Mar 10 19:11:09 crc kubenswrapper[4861]: I0310 19:11:09.970218 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-7f85b8cc7d-lblq8"] Mar 10 19:11:10 crc kubenswrapper[4861]: I0310 19:11:10.492457 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/placement-7f85b8cc7d-lblq8" podUID="ba01933c-1abf-473e-b55a-8e9ec135d938" containerName="placement-log" containerID="cri-o://deccce97179e323d6fd0eada256f42bb3519a8c6edd6ba2f5ea1ad56fc05ee2e" gracePeriod=30 Mar 10 
19:11:10 crc kubenswrapper[4861]: I0310 19:11:10.492503 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/placement-7f85b8cc7d-lblq8" podUID="ba01933c-1abf-473e-b55a-8e9ec135d938" containerName="placement-api" containerID="cri-o://bde3d9c9aaee4e8258fb39f5c0a3e22f3839b11608811e64d4f8cb990ed089da" gracePeriod=30 Mar 10 19:11:10 crc kubenswrapper[4861]: I0310 19:11:10.734983 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Mar 10 19:11:11 crc kubenswrapper[4861]: I0310 19:11:11.502994 4861 generic.go:334] "Generic (PLEG): container finished" podID="ba01933c-1abf-473e-b55a-8e9ec135d938" containerID="deccce97179e323d6fd0eada256f42bb3519a8c6edd6ba2f5ea1ad56fc05ee2e" exitCode=143 Mar 10 19:11:11 crc kubenswrapper[4861]: I0310 19:11:11.503057 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-7f85b8cc7d-lblq8" event={"ID":"ba01933c-1abf-473e-b55a-8e9ec135d938","Type":"ContainerDied","Data":"deccce97179e323d6fd0eada256f42bb3519a8c6edd6ba2f5ea1ad56fc05ee2e"} Mar 10 19:11:13 crc kubenswrapper[4861]: I0310 19:11:13.426854 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Mar 10 19:11:13 crc kubenswrapper[4861]: E0310 19:11:13.427576 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="06936988-eb27-45c1-882e-890cca4cddfe" containerName="neutron-httpd" Mar 10 19:11:13 crc kubenswrapper[4861]: I0310 19:11:13.427592 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="06936988-eb27-45c1-882e-890cca4cddfe" containerName="neutron-httpd" Mar 10 19:11:13 crc kubenswrapper[4861]: E0310 19:11:13.427612 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="06936988-eb27-45c1-882e-890cca4cddfe" containerName="neutron-api" Mar 10 19:11:13 crc kubenswrapper[4861]: I0310 19:11:13.427620 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="06936988-eb27-45c1-882e-890cca4cddfe" 
containerName="neutron-api" Mar 10 19:11:13 crc kubenswrapper[4861]: I0310 19:11:13.427882 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="06936988-eb27-45c1-882e-890cca4cddfe" containerName="neutron-api" Mar 10 19:11:13 crc kubenswrapper[4861]: I0310 19:11:13.427906 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="06936988-eb27-45c1-882e-890cca4cddfe" containerName="neutron-httpd" Mar 10 19:11:13 crc kubenswrapper[4861]: I0310 19:11:13.428564 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Mar 10 19:11:13 crc kubenswrapper[4861]: I0310 19:11:13.433847 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-pkgmj" Mar 10 19:11:13 crc kubenswrapper[4861]: I0310 19:11:13.434205 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config" Mar 10 19:11:13 crc kubenswrapper[4861]: I0310 19:11:13.434288 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret" Mar 10 19:11:13 crc kubenswrapper[4861]: I0310 19:11:13.441053 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Mar 10 19:11:13 crc kubenswrapper[4861]: I0310 19:11:13.543410 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9007f85d-dd41-49ed-9a6f-c2b09b26fad2-combined-ca-bundle\") pod \"openstackclient\" (UID: \"9007f85d-dd41-49ed-9a6f-c2b09b26fad2\") " pod="openstack/openstackclient" Mar 10 19:11:13 crc kubenswrapper[4861]: I0310 19:11:13.543476 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/9007f85d-dd41-49ed-9a6f-c2b09b26fad2-openstack-config\") pod \"openstackclient\" (UID: 
\"9007f85d-dd41-49ed-9a6f-c2b09b26fad2\") " pod="openstack/openstackclient" Mar 10 19:11:13 crc kubenswrapper[4861]: I0310 19:11:13.543567 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kpxz8\" (UniqueName: \"kubernetes.io/projected/9007f85d-dd41-49ed-9a6f-c2b09b26fad2-kube-api-access-kpxz8\") pod \"openstackclient\" (UID: \"9007f85d-dd41-49ed-9a6f-c2b09b26fad2\") " pod="openstack/openstackclient" Mar 10 19:11:13 crc kubenswrapper[4861]: I0310 19:11:13.543586 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/9007f85d-dd41-49ed-9a6f-c2b09b26fad2-openstack-config-secret\") pod \"openstackclient\" (UID: \"9007f85d-dd41-49ed-9a6f-c2b09b26fad2\") " pod="openstack/openstackclient" Mar 10 19:11:13 crc kubenswrapper[4861]: I0310 19:11:13.645402 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9007f85d-dd41-49ed-9a6f-c2b09b26fad2-combined-ca-bundle\") pod \"openstackclient\" (UID: \"9007f85d-dd41-49ed-9a6f-c2b09b26fad2\") " pod="openstack/openstackclient" Mar 10 19:11:13 crc kubenswrapper[4861]: I0310 19:11:13.645548 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/9007f85d-dd41-49ed-9a6f-c2b09b26fad2-openstack-config\") pod \"openstackclient\" (UID: \"9007f85d-dd41-49ed-9a6f-c2b09b26fad2\") " pod="openstack/openstackclient" Mar 10 19:11:13 crc kubenswrapper[4861]: I0310 19:11:13.645747 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kpxz8\" (UniqueName: \"kubernetes.io/projected/9007f85d-dd41-49ed-9a6f-c2b09b26fad2-kube-api-access-kpxz8\") pod \"openstackclient\" (UID: \"9007f85d-dd41-49ed-9a6f-c2b09b26fad2\") " pod="openstack/openstackclient" Mar 10 19:11:13 
crc kubenswrapper[4861]: I0310 19:11:13.645776 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/9007f85d-dd41-49ed-9a6f-c2b09b26fad2-openstack-config-secret\") pod \"openstackclient\" (UID: \"9007f85d-dd41-49ed-9a6f-c2b09b26fad2\") " pod="openstack/openstackclient" Mar 10 19:11:13 crc kubenswrapper[4861]: I0310 19:11:13.648063 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/9007f85d-dd41-49ed-9a6f-c2b09b26fad2-openstack-config\") pod \"openstackclient\" (UID: \"9007f85d-dd41-49ed-9a6f-c2b09b26fad2\") " pod="openstack/openstackclient" Mar 10 19:11:13 crc kubenswrapper[4861]: I0310 19:11:13.653319 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/9007f85d-dd41-49ed-9a6f-c2b09b26fad2-openstack-config-secret\") pod \"openstackclient\" (UID: \"9007f85d-dd41-49ed-9a6f-c2b09b26fad2\") " pod="openstack/openstackclient" Mar 10 19:11:13 crc kubenswrapper[4861]: I0310 19:11:13.653472 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9007f85d-dd41-49ed-9a6f-c2b09b26fad2-combined-ca-bundle\") pod \"openstackclient\" (UID: \"9007f85d-dd41-49ed-9a6f-c2b09b26fad2\") " pod="openstack/openstackclient" Mar 10 19:11:13 crc kubenswrapper[4861]: I0310 19:11:13.661970 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kpxz8\" (UniqueName: \"kubernetes.io/projected/9007f85d-dd41-49ed-9a6f-c2b09b26fad2-kube-api-access-kpxz8\") pod \"openstackclient\" (UID: \"9007f85d-dd41-49ed-9a6f-c2b09b26fad2\") " pod="openstack/openstackclient" Mar 10 19:11:13 crc kubenswrapper[4861]: I0310 19:11:13.802621 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Mar 10 19:11:14 crc kubenswrapper[4861]: I0310 19:11:14.075375 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-7f85b8cc7d-lblq8" Mar 10 19:11:14 crc kubenswrapper[4861]: I0310 19:11:14.142935 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Mar 10 19:11:14 crc kubenswrapper[4861]: I0310 19:11:14.146927 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="5c22ba49-46f8-4ad7-9f90-b45e34943385" containerName="ceilometer-central-agent" containerID="cri-o://f30d7f7a82df6bcce5711088c0cf1dd81e35b7439df6ba3a975a006e084aebc9" gracePeriod=30 Mar 10 19:11:14 crc kubenswrapper[4861]: I0310 19:11:14.147388 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="5c22ba49-46f8-4ad7-9f90-b45e34943385" containerName="proxy-httpd" containerID="cri-o://0d8af9732d717f3b2f2740ecff2b4055c69ff220983492e83e754791ee8719cb" gracePeriod=30 Mar 10 19:11:14 crc kubenswrapper[4861]: I0310 19:11:14.147448 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="5c22ba49-46f8-4ad7-9f90-b45e34943385" containerName="sg-core" containerID="cri-o://bcfd7a9775cb626dec759b4d8f9bb0679b6f6ee7c04d6de2cf109b38e5bfb271" gracePeriod=30 Mar 10 19:11:14 crc kubenswrapper[4861]: I0310 19:11:14.147488 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="5c22ba49-46f8-4ad7-9f90-b45e34943385" containerName="ceilometer-notification-agent" containerID="cri-o://b08c35add2fcde02a1b109d63548d7ca5cdcd7e1ad29bd50b3f3f1b4d347d7b8" gracePeriod=30 Mar 10 19:11:14 crc kubenswrapper[4861]: I0310 19:11:14.159918 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/ba01933c-1abf-473e-b55a-8e9ec135d938-scripts\") pod \"ba01933c-1abf-473e-b55a-8e9ec135d938\" (UID: \"ba01933c-1abf-473e-b55a-8e9ec135d938\") " Mar 10 19:11:14 crc kubenswrapper[4861]: I0310 19:11:14.159991 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ba01933c-1abf-473e-b55a-8e9ec135d938-logs\") pod \"ba01933c-1abf-473e-b55a-8e9ec135d938\" (UID: \"ba01933c-1abf-473e-b55a-8e9ec135d938\") " Mar 10 19:11:14 crc kubenswrapper[4861]: I0310 19:11:14.160083 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ba01933c-1abf-473e-b55a-8e9ec135d938-combined-ca-bundle\") pod \"ba01933c-1abf-473e-b55a-8e9ec135d938\" (UID: \"ba01933c-1abf-473e-b55a-8e9ec135d938\") " Mar 10 19:11:14 crc kubenswrapper[4861]: I0310 19:11:14.160119 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jw7zn\" (UniqueName: \"kubernetes.io/projected/ba01933c-1abf-473e-b55a-8e9ec135d938-kube-api-access-jw7zn\") pod \"ba01933c-1abf-473e-b55a-8e9ec135d938\" (UID: \"ba01933c-1abf-473e-b55a-8e9ec135d938\") " Mar 10 19:11:14 crc kubenswrapper[4861]: I0310 19:11:14.160214 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ba01933c-1abf-473e-b55a-8e9ec135d938-internal-tls-certs\") pod \"ba01933c-1abf-473e-b55a-8e9ec135d938\" (UID: \"ba01933c-1abf-473e-b55a-8e9ec135d938\") " Mar 10 19:11:14 crc kubenswrapper[4861]: I0310 19:11:14.160316 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ba01933c-1abf-473e-b55a-8e9ec135d938-public-tls-certs\") pod \"ba01933c-1abf-473e-b55a-8e9ec135d938\" (UID: \"ba01933c-1abf-473e-b55a-8e9ec135d938\") " Mar 10 19:11:14 crc kubenswrapper[4861]: I0310 
19:11:14.160349 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ba01933c-1abf-473e-b55a-8e9ec135d938-config-data\") pod \"ba01933c-1abf-473e-b55a-8e9ec135d938\" (UID: \"ba01933c-1abf-473e-b55a-8e9ec135d938\") " Mar 10 19:11:14 crc kubenswrapper[4861]: I0310 19:11:14.166760 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ba01933c-1abf-473e-b55a-8e9ec135d938-kube-api-access-jw7zn" (OuterVolumeSpecName: "kube-api-access-jw7zn") pod "ba01933c-1abf-473e-b55a-8e9ec135d938" (UID: "ba01933c-1abf-473e-b55a-8e9ec135d938"). InnerVolumeSpecName "kube-api-access-jw7zn". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 10 19:11:14 crc kubenswrapper[4861]: I0310 19:11:14.167508 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ba01933c-1abf-473e-b55a-8e9ec135d938-scripts" (OuterVolumeSpecName: "scripts") pod "ba01933c-1abf-473e-b55a-8e9ec135d938" (UID: "ba01933c-1abf-473e-b55a-8e9ec135d938"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 10 19:11:14 crc kubenswrapper[4861]: I0310 19:11:14.174395 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ba01933c-1abf-473e-b55a-8e9ec135d938-logs" (OuterVolumeSpecName: "logs") pod "ba01933c-1abf-473e-b55a-8e9ec135d938" (UID: "ba01933c-1abf-473e-b55a-8e9ec135d938"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 10 19:11:14 crc kubenswrapper[4861]: I0310 19:11:14.215285 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ba01933c-1abf-473e-b55a-8e9ec135d938-config-data" (OuterVolumeSpecName: "config-data") pod "ba01933c-1abf-473e-b55a-8e9ec135d938" (UID: "ba01933c-1abf-473e-b55a-8e9ec135d938"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 10 19:11:14 crc kubenswrapper[4861]: I0310 19:11:14.221697 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ba01933c-1abf-473e-b55a-8e9ec135d938-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ba01933c-1abf-473e-b55a-8e9ec135d938" (UID: "ba01933c-1abf-473e-b55a-8e9ec135d938"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 10 19:11:14 crc kubenswrapper[4861]: I0310 19:11:14.253828 4861 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="5c22ba49-46f8-4ad7-9f90-b45e34943385" containerName="proxy-httpd" probeResult="failure" output="Get \"http://10.217.0.172:3000/\": read tcp 10.217.0.2:52520->10.217.0.172:3000: read: connection reset by peer" Mar 10 19:11:14 crc kubenswrapper[4861]: I0310 19:11:14.267620 4861 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ba01933c-1abf-473e-b55a-8e9ec135d938-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 10 19:11:14 crc kubenswrapper[4861]: I0310 19:11:14.267649 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jw7zn\" (UniqueName: \"kubernetes.io/projected/ba01933c-1abf-473e-b55a-8e9ec135d938-kube-api-access-jw7zn\") on node \"crc\" DevicePath \"\"" Mar 10 19:11:14 crc kubenswrapper[4861]: I0310 19:11:14.267677 4861 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ba01933c-1abf-473e-b55a-8e9ec135d938-config-data\") on node \"crc\" DevicePath \"\"" Mar 10 19:11:14 crc kubenswrapper[4861]: I0310 19:11:14.267687 4861 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ba01933c-1abf-473e-b55a-8e9ec135d938-scripts\") on node \"crc\" DevicePath \"\"" Mar 10 19:11:14 crc kubenswrapper[4861]: I0310 19:11:14.267696 4861 
reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ba01933c-1abf-473e-b55a-8e9ec135d938-logs\") on node \"crc\" DevicePath \"\"" Mar 10 19:11:14 crc kubenswrapper[4861]: I0310 19:11:14.270855 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-proxy-6d6c7bd6d5-klrmq"] Mar 10 19:11:14 crc kubenswrapper[4861]: E0310 19:11:14.271220 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ba01933c-1abf-473e-b55a-8e9ec135d938" containerName="placement-api" Mar 10 19:11:14 crc kubenswrapper[4861]: I0310 19:11:14.271233 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="ba01933c-1abf-473e-b55a-8e9ec135d938" containerName="placement-api" Mar 10 19:11:14 crc kubenswrapper[4861]: E0310 19:11:14.271252 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ba01933c-1abf-473e-b55a-8e9ec135d938" containerName="placement-log" Mar 10 19:11:14 crc kubenswrapper[4861]: I0310 19:11:14.271258 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="ba01933c-1abf-473e-b55a-8e9ec135d938" containerName="placement-log" Mar 10 19:11:14 crc kubenswrapper[4861]: I0310 19:11:14.272761 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="ba01933c-1abf-473e-b55a-8e9ec135d938" containerName="placement-log" Mar 10 19:11:14 crc kubenswrapper[4861]: I0310 19:11:14.272782 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="ba01933c-1abf-473e-b55a-8e9ec135d938" containerName="placement-api" Mar 10 19:11:14 crc kubenswrapper[4861]: I0310 19:11:14.273664 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-proxy-6d6c7bd6d5-klrmq" Mar 10 19:11:14 crc kubenswrapper[4861]: I0310 19:11:14.276433 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Mar 10 19:11:14 crc kubenswrapper[4861]: I0310 19:11:14.276891 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-internal-svc" Mar 10 19:11:14 crc kubenswrapper[4861]: I0310 19:11:14.277029 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-public-svc" Mar 10 19:11:14 crc kubenswrapper[4861]: I0310 19:11:14.286532 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-6d6c7bd6d5-klrmq"] Mar 10 19:11:14 crc kubenswrapper[4861]: I0310 19:11:14.332060 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ba01933c-1abf-473e-b55a-8e9ec135d938-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "ba01933c-1abf-473e-b55a-8e9ec135d938" (UID: "ba01933c-1abf-473e-b55a-8e9ec135d938"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 10 19:11:14 crc kubenswrapper[4861]: I0310 19:11:14.332144 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Mar 10 19:11:14 crc kubenswrapper[4861]: I0310 19:11:14.332881 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ba01933c-1abf-473e-b55a-8e9ec135d938-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "ba01933c-1abf-473e-b55a-8e9ec135d938" (UID: "ba01933c-1abf-473e-b55a-8e9ec135d938"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 10 19:11:14 crc kubenswrapper[4861]: I0310 19:11:14.371726 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/97db979f-75cb-4e7e-9dc6-0c65f39fef8e-etc-swift\") pod \"swift-proxy-6d6c7bd6d5-klrmq\" (UID: \"97db979f-75cb-4e7e-9dc6-0c65f39fef8e\") " pod="openstack/swift-proxy-6d6c7bd6d5-klrmq" Mar 10 19:11:14 crc kubenswrapper[4861]: I0310 19:11:14.371779 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/97db979f-75cb-4e7e-9dc6-0c65f39fef8e-config-data\") pod \"swift-proxy-6d6c7bd6d5-klrmq\" (UID: \"97db979f-75cb-4e7e-9dc6-0c65f39fef8e\") " pod="openstack/swift-proxy-6d6c7bd6d5-klrmq" Mar 10 19:11:14 crc kubenswrapper[4861]: I0310 19:11:14.371860 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/97db979f-75cb-4e7e-9dc6-0c65f39fef8e-combined-ca-bundle\") pod \"swift-proxy-6d6c7bd6d5-klrmq\" (UID: \"97db979f-75cb-4e7e-9dc6-0c65f39fef8e\") " pod="openstack/swift-proxy-6d6c7bd6d5-klrmq" Mar 10 19:11:14 crc kubenswrapper[4861]: I0310 19:11:14.371893 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/97db979f-75cb-4e7e-9dc6-0c65f39fef8e-public-tls-certs\") pod \"swift-proxy-6d6c7bd6d5-klrmq\" (UID: \"97db979f-75cb-4e7e-9dc6-0c65f39fef8e\") " pod="openstack/swift-proxy-6d6c7bd6d5-klrmq" Mar 10 19:11:14 crc kubenswrapper[4861]: I0310 19:11:14.371939 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/97db979f-75cb-4e7e-9dc6-0c65f39fef8e-log-httpd\") pod \"swift-proxy-6d6c7bd6d5-klrmq\" (UID: 
\"97db979f-75cb-4e7e-9dc6-0c65f39fef8e\") " pod="openstack/swift-proxy-6d6c7bd6d5-klrmq" Mar 10 19:11:14 crc kubenswrapper[4861]: I0310 19:11:14.372009 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mlzmq\" (UniqueName: \"kubernetes.io/projected/97db979f-75cb-4e7e-9dc6-0c65f39fef8e-kube-api-access-mlzmq\") pod \"swift-proxy-6d6c7bd6d5-klrmq\" (UID: \"97db979f-75cb-4e7e-9dc6-0c65f39fef8e\") " pod="openstack/swift-proxy-6d6c7bd6d5-klrmq" Mar 10 19:11:14 crc kubenswrapper[4861]: I0310 19:11:14.372051 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/97db979f-75cb-4e7e-9dc6-0c65f39fef8e-run-httpd\") pod \"swift-proxy-6d6c7bd6d5-klrmq\" (UID: \"97db979f-75cb-4e7e-9dc6-0c65f39fef8e\") " pod="openstack/swift-proxy-6d6c7bd6d5-klrmq" Mar 10 19:11:14 crc kubenswrapper[4861]: I0310 19:11:14.372072 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/97db979f-75cb-4e7e-9dc6-0c65f39fef8e-internal-tls-certs\") pod \"swift-proxy-6d6c7bd6d5-klrmq\" (UID: \"97db979f-75cb-4e7e-9dc6-0c65f39fef8e\") " pod="openstack/swift-proxy-6d6c7bd6d5-klrmq" Mar 10 19:11:14 crc kubenswrapper[4861]: I0310 19:11:14.372124 4861 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ba01933c-1abf-473e-b55a-8e9ec135d938-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 10 19:11:14 crc kubenswrapper[4861]: I0310 19:11:14.372137 4861 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ba01933c-1abf-473e-b55a-8e9ec135d938-public-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 10 19:11:14 crc kubenswrapper[4861]: I0310 19:11:14.473886 4861 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/97db979f-75cb-4e7e-9dc6-0c65f39fef8e-combined-ca-bundle\") pod \"swift-proxy-6d6c7bd6d5-klrmq\" (UID: \"97db979f-75cb-4e7e-9dc6-0c65f39fef8e\") " pod="openstack/swift-proxy-6d6c7bd6d5-klrmq" Mar 10 19:11:14 crc kubenswrapper[4861]: I0310 19:11:14.475156 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/97db979f-75cb-4e7e-9dc6-0c65f39fef8e-public-tls-certs\") pod \"swift-proxy-6d6c7bd6d5-klrmq\" (UID: \"97db979f-75cb-4e7e-9dc6-0c65f39fef8e\") " pod="openstack/swift-proxy-6d6c7bd6d5-klrmq" Mar 10 19:11:14 crc kubenswrapper[4861]: I0310 19:11:14.475266 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/97db979f-75cb-4e7e-9dc6-0c65f39fef8e-log-httpd\") pod \"swift-proxy-6d6c7bd6d5-klrmq\" (UID: \"97db979f-75cb-4e7e-9dc6-0c65f39fef8e\") " pod="openstack/swift-proxy-6d6c7bd6d5-klrmq" Mar 10 19:11:14 crc kubenswrapper[4861]: I0310 19:11:14.475421 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mlzmq\" (UniqueName: \"kubernetes.io/projected/97db979f-75cb-4e7e-9dc6-0c65f39fef8e-kube-api-access-mlzmq\") pod \"swift-proxy-6d6c7bd6d5-klrmq\" (UID: \"97db979f-75cb-4e7e-9dc6-0c65f39fef8e\") " pod="openstack/swift-proxy-6d6c7bd6d5-klrmq" Mar 10 19:11:14 crc kubenswrapper[4861]: I0310 19:11:14.475517 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/97db979f-75cb-4e7e-9dc6-0c65f39fef8e-run-httpd\") pod \"swift-proxy-6d6c7bd6d5-klrmq\" (UID: \"97db979f-75cb-4e7e-9dc6-0c65f39fef8e\") " pod="openstack/swift-proxy-6d6c7bd6d5-klrmq" Mar 10 19:11:14 crc kubenswrapper[4861]: I0310 19:11:14.475604 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/97db979f-75cb-4e7e-9dc6-0c65f39fef8e-internal-tls-certs\") pod \"swift-proxy-6d6c7bd6d5-klrmq\" (UID: \"97db979f-75cb-4e7e-9dc6-0c65f39fef8e\") " pod="openstack/swift-proxy-6d6c7bd6d5-klrmq" Mar 10 19:11:14 crc kubenswrapper[4861]: I0310 19:11:14.475687 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/97db979f-75cb-4e7e-9dc6-0c65f39fef8e-etc-swift\") pod \"swift-proxy-6d6c7bd6d5-klrmq\" (UID: \"97db979f-75cb-4e7e-9dc6-0c65f39fef8e\") " pod="openstack/swift-proxy-6d6c7bd6d5-klrmq" Mar 10 19:11:14 crc kubenswrapper[4861]: I0310 19:11:14.475893 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/97db979f-75cb-4e7e-9dc6-0c65f39fef8e-config-data\") pod \"swift-proxy-6d6c7bd6d5-klrmq\" (UID: \"97db979f-75cb-4e7e-9dc6-0c65f39fef8e\") " pod="openstack/swift-proxy-6d6c7bd6d5-klrmq" Mar 10 19:11:14 crc kubenswrapper[4861]: I0310 19:11:14.476442 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/97db979f-75cb-4e7e-9dc6-0c65f39fef8e-log-httpd\") pod \"swift-proxy-6d6c7bd6d5-klrmq\" (UID: \"97db979f-75cb-4e7e-9dc6-0c65f39fef8e\") " pod="openstack/swift-proxy-6d6c7bd6d5-klrmq" Mar 10 19:11:14 crc kubenswrapper[4861]: I0310 19:11:14.476954 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/97db979f-75cb-4e7e-9dc6-0c65f39fef8e-run-httpd\") pod \"swift-proxy-6d6c7bd6d5-klrmq\" (UID: \"97db979f-75cb-4e7e-9dc6-0c65f39fef8e\") " pod="openstack/swift-proxy-6d6c7bd6d5-klrmq" Mar 10 19:11:14 crc kubenswrapper[4861]: I0310 19:11:14.478103 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/97db979f-75cb-4e7e-9dc6-0c65f39fef8e-combined-ca-bundle\") pod 
\"swift-proxy-6d6c7bd6d5-klrmq\" (UID: \"97db979f-75cb-4e7e-9dc6-0c65f39fef8e\") " pod="openstack/swift-proxy-6d6c7bd6d5-klrmq" Mar 10 19:11:14 crc kubenswrapper[4861]: I0310 19:11:14.481504 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/97db979f-75cb-4e7e-9dc6-0c65f39fef8e-internal-tls-certs\") pod \"swift-proxy-6d6c7bd6d5-klrmq\" (UID: \"97db979f-75cb-4e7e-9dc6-0c65f39fef8e\") " pod="openstack/swift-proxy-6d6c7bd6d5-klrmq" Mar 10 19:11:14 crc kubenswrapper[4861]: I0310 19:11:14.481744 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/97db979f-75cb-4e7e-9dc6-0c65f39fef8e-public-tls-certs\") pod \"swift-proxy-6d6c7bd6d5-klrmq\" (UID: \"97db979f-75cb-4e7e-9dc6-0c65f39fef8e\") " pod="openstack/swift-proxy-6d6c7bd6d5-klrmq" Mar 10 19:11:14 crc kubenswrapper[4861]: I0310 19:11:14.483200 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/97db979f-75cb-4e7e-9dc6-0c65f39fef8e-config-data\") pod \"swift-proxy-6d6c7bd6d5-klrmq\" (UID: \"97db979f-75cb-4e7e-9dc6-0c65f39fef8e\") " pod="openstack/swift-proxy-6d6c7bd6d5-klrmq" Mar 10 19:11:14 crc kubenswrapper[4861]: I0310 19:11:14.487547 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/97db979f-75cb-4e7e-9dc6-0c65f39fef8e-etc-swift\") pod \"swift-proxy-6d6c7bd6d5-klrmq\" (UID: \"97db979f-75cb-4e7e-9dc6-0c65f39fef8e\") " pod="openstack/swift-proxy-6d6c7bd6d5-klrmq" Mar 10 19:11:14 crc kubenswrapper[4861]: I0310 19:11:14.493269 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mlzmq\" (UniqueName: \"kubernetes.io/projected/97db979f-75cb-4e7e-9dc6-0c65f39fef8e-kube-api-access-mlzmq\") pod \"swift-proxy-6d6c7bd6d5-klrmq\" (UID: \"97db979f-75cb-4e7e-9dc6-0c65f39fef8e\") " 
pod="openstack/swift-proxy-6d6c7bd6d5-klrmq" Mar 10 19:11:14 crc kubenswrapper[4861]: I0310 19:11:14.534421 4861 generic.go:334] "Generic (PLEG): container finished" podID="5c22ba49-46f8-4ad7-9f90-b45e34943385" containerID="0d8af9732d717f3b2f2740ecff2b4055c69ff220983492e83e754791ee8719cb" exitCode=0 Mar 10 19:11:14 crc kubenswrapper[4861]: I0310 19:11:14.534451 4861 generic.go:334] "Generic (PLEG): container finished" podID="5c22ba49-46f8-4ad7-9f90-b45e34943385" containerID="bcfd7a9775cb626dec759b4d8f9bb0679b6f6ee7c04d6de2cf109b38e5bfb271" exitCode=2 Mar 10 19:11:14 crc kubenswrapper[4861]: I0310 19:11:14.534459 4861 generic.go:334] "Generic (PLEG): container finished" podID="5c22ba49-46f8-4ad7-9f90-b45e34943385" containerID="b08c35add2fcde02a1b109d63548d7ca5cdcd7e1ad29bd50b3f3f1b4d347d7b8" exitCode=0 Mar 10 19:11:14 crc kubenswrapper[4861]: I0310 19:11:14.534461 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5c22ba49-46f8-4ad7-9f90-b45e34943385","Type":"ContainerDied","Data":"0d8af9732d717f3b2f2740ecff2b4055c69ff220983492e83e754791ee8719cb"} Mar 10 19:11:14 crc kubenswrapper[4861]: I0310 19:11:14.534549 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5c22ba49-46f8-4ad7-9f90-b45e34943385","Type":"ContainerDied","Data":"bcfd7a9775cb626dec759b4d8f9bb0679b6f6ee7c04d6de2cf109b38e5bfb271"} Mar 10 19:11:14 crc kubenswrapper[4861]: I0310 19:11:14.534564 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5c22ba49-46f8-4ad7-9f90-b45e34943385","Type":"ContainerDied","Data":"b08c35add2fcde02a1b109d63548d7ca5cdcd7e1ad29bd50b3f3f1b4d347d7b8"} Mar 10 19:11:14 crc kubenswrapper[4861]: I0310 19:11:14.536586 4861 generic.go:334] "Generic (PLEG): container finished" podID="ba01933c-1abf-473e-b55a-8e9ec135d938" containerID="bde3d9c9aaee4e8258fb39f5c0a3e22f3839b11608811e64d4f8cb990ed089da" exitCode=0 Mar 10 19:11:14 crc kubenswrapper[4861]: 
I0310 19:11:14.536621 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-7f85b8cc7d-lblq8" event={"ID":"ba01933c-1abf-473e-b55a-8e9ec135d938","Type":"ContainerDied","Data":"bde3d9c9aaee4e8258fb39f5c0a3e22f3839b11608811e64d4f8cb990ed089da"} Mar 10 19:11:14 crc kubenswrapper[4861]: I0310 19:11:14.536656 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-7f85b8cc7d-lblq8" event={"ID":"ba01933c-1abf-473e-b55a-8e9ec135d938","Type":"ContainerDied","Data":"331245e2e3c0b3e4cb3850eea7daaf47e042acfbd4e39a58f2a3945658ae6834"} Mar 10 19:11:14 crc kubenswrapper[4861]: I0310 19:11:14.536740 4861 scope.go:117] "RemoveContainer" containerID="bde3d9c9aaee4e8258fb39f5c0a3e22f3839b11608811e64d4f8cb990ed089da" Mar 10 19:11:14 crc kubenswrapper[4861]: I0310 19:11:14.536840 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-7f85b8cc7d-lblq8" Mar 10 19:11:14 crc kubenswrapper[4861]: I0310 19:11:14.537859 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"9007f85d-dd41-49ed-9a6f-c2b09b26fad2","Type":"ContainerStarted","Data":"0ea0894beb1ff46dd2d99501bdc0969dad7d4b269982cde02537f2d5b06c939d"} Mar 10 19:11:14 crc kubenswrapper[4861]: I0310 19:11:14.581741 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-7f85b8cc7d-lblq8"] Mar 10 19:11:14 crc kubenswrapper[4861]: I0310 19:11:14.590309 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-7f85b8cc7d-lblq8"] Mar 10 19:11:14 crc kubenswrapper[4861]: I0310 19:11:14.599615 4861 scope.go:117] "RemoveContainer" containerID="deccce97179e323d6fd0eada256f42bb3519a8c6edd6ba2f5ea1ad56fc05ee2e" Mar 10 19:11:14 crc kubenswrapper[4861]: I0310 19:11:14.616934 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-proxy-6d6c7bd6d5-klrmq" Mar 10 19:11:14 crc kubenswrapper[4861]: I0310 19:11:14.618322 4861 scope.go:117] "RemoveContainer" containerID="bde3d9c9aaee4e8258fb39f5c0a3e22f3839b11608811e64d4f8cb990ed089da" Mar 10 19:11:14 crc kubenswrapper[4861]: E0310 19:11:14.623318 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bde3d9c9aaee4e8258fb39f5c0a3e22f3839b11608811e64d4f8cb990ed089da\": container with ID starting with bde3d9c9aaee4e8258fb39f5c0a3e22f3839b11608811e64d4f8cb990ed089da not found: ID does not exist" containerID="bde3d9c9aaee4e8258fb39f5c0a3e22f3839b11608811e64d4f8cb990ed089da" Mar 10 19:11:14 crc kubenswrapper[4861]: I0310 19:11:14.623360 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bde3d9c9aaee4e8258fb39f5c0a3e22f3839b11608811e64d4f8cb990ed089da"} err="failed to get container status \"bde3d9c9aaee4e8258fb39f5c0a3e22f3839b11608811e64d4f8cb990ed089da\": rpc error: code = NotFound desc = could not find container \"bde3d9c9aaee4e8258fb39f5c0a3e22f3839b11608811e64d4f8cb990ed089da\": container with ID starting with bde3d9c9aaee4e8258fb39f5c0a3e22f3839b11608811e64d4f8cb990ed089da not found: ID does not exist" Mar 10 19:11:14 crc kubenswrapper[4861]: I0310 19:11:14.623386 4861 scope.go:117] "RemoveContainer" containerID="deccce97179e323d6fd0eada256f42bb3519a8c6edd6ba2f5ea1ad56fc05ee2e" Mar 10 19:11:14 crc kubenswrapper[4861]: E0310 19:11:14.623725 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"deccce97179e323d6fd0eada256f42bb3519a8c6edd6ba2f5ea1ad56fc05ee2e\": container with ID starting with deccce97179e323d6fd0eada256f42bb3519a8c6edd6ba2f5ea1ad56fc05ee2e not found: ID does not exist" containerID="deccce97179e323d6fd0eada256f42bb3519a8c6edd6ba2f5ea1ad56fc05ee2e" Mar 10 19:11:14 crc kubenswrapper[4861]: I0310 
19:11:14.623804 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"deccce97179e323d6fd0eada256f42bb3519a8c6edd6ba2f5ea1ad56fc05ee2e"} err="failed to get container status \"deccce97179e323d6fd0eada256f42bb3519a8c6edd6ba2f5ea1ad56fc05ee2e\": rpc error: code = NotFound desc = could not find container \"deccce97179e323d6fd0eada256f42bb3519a8c6edd6ba2f5ea1ad56fc05ee2e\": container with ID starting with deccce97179e323d6fd0eada256f42bb3519a8c6edd6ba2f5ea1ad56fc05ee2e not found: ID does not exist" Mar 10 19:11:14 crc kubenswrapper[4861]: I0310 19:11:14.971699 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ba01933c-1abf-473e-b55a-8e9ec135d938" path="/var/lib/kubelet/pods/ba01933c-1abf-473e-b55a-8e9ec135d938/volumes" Mar 10 19:11:15 crc kubenswrapper[4861]: I0310 19:11:15.074782 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Mar 10 19:11:15 crc kubenswrapper[4861]: I0310 19:11:15.191025 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5c22ba49-46f8-4ad7-9f90-b45e34943385-run-httpd\") pod \"5c22ba49-46f8-4ad7-9f90-b45e34943385\" (UID: \"5c22ba49-46f8-4ad7-9f90-b45e34943385\") " Mar 10 19:11:15 crc kubenswrapper[4861]: I0310 19:11:15.191076 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5c22ba49-46f8-4ad7-9f90-b45e34943385-config-data\") pod \"5c22ba49-46f8-4ad7-9f90-b45e34943385\" (UID: \"5c22ba49-46f8-4ad7-9f90-b45e34943385\") " Mar 10 19:11:15 crc kubenswrapper[4861]: I0310 19:11:15.191164 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5c22ba49-46f8-4ad7-9f90-b45e34943385-log-httpd\") pod \"5c22ba49-46f8-4ad7-9f90-b45e34943385\" (UID: 
\"5c22ba49-46f8-4ad7-9f90-b45e34943385\") " Mar 10 19:11:15 crc kubenswrapper[4861]: I0310 19:11:15.191202 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5c22ba49-46f8-4ad7-9f90-b45e34943385-combined-ca-bundle\") pod \"5c22ba49-46f8-4ad7-9f90-b45e34943385\" (UID: \"5c22ba49-46f8-4ad7-9f90-b45e34943385\") " Mar 10 19:11:15 crc kubenswrapper[4861]: I0310 19:11:15.191236 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5c22ba49-46f8-4ad7-9f90-b45e34943385-sg-core-conf-yaml\") pod \"5c22ba49-46f8-4ad7-9f90-b45e34943385\" (UID: \"5c22ba49-46f8-4ad7-9f90-b45e34943385\") " Mar 10 19:11:15 crc kubenswrapper[4861]: I0310 19:11:15.191333 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5phdd\" (UniqueName: \"kubernetes.io/projected/5c22ba49-46f8-4ad7-9f90-b45e34943385-kube-api-access-5phdd\") pod \"5c22ba49-46f8-4ad7-9f90-b45e34943385\" (UID: \"5c22ba49-46f8-4ad7-9f90-b45e34943385\") " Mar 10 19:11:15 crc kubenswrapper[4861]: I0310 19:11:15.191400 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5c22ba49-46f8-4ad7-9f90-b45e34943385-scripts\") pod \"5c22ba49-46f8-4ad7-9f90-b45e34943385\" (UID: \"5c22ba49-46f8-4ad7-9f90-b45e34943385\") " Mar 10 19:11:15 crc kubenswrapper[4861]: I0310 19:11:15.191516 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5c22ba49-46f8-4ad7-9f90-b45e34943385-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "5c22ba49-46f8-4ad7-9f90-b45e34943385" (UID: "5c22ba49-46f8-4ad7-9f90-b45e34943385"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 10 19:11:15 crc kubenswrapper[4861]: I0310 19:11:15.191768 4861 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5c22ba49-46f8-4ad7-9f90-b45e34943385-run-httpd\") on node \"crc\" DevicePath \"\"" Mar 10 19:11:15 crc kubenswrapper[4861]: I0310 19:11:15.191942 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5c22ba49-46f8-4ad7-9f90-b45e34943385-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "5c22ba49-46f8-4ad7-9f90-b45e34943385" (UID: "5c22ba49-46f8-4ad7-9f90-b45e34943385"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 10 19:11:15 crc kubenswrapper[4861]: I0310 19:11:15.196521 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5c22ba49-46f8-4ad7-9f90-b45e34943385-scripts" (OuterVolumeSpecName: "scripts") pod "5c22ba49-46f8-4ad7-9f90-b45e34943385" (UID: "5c22ba49-46f8-4ad7-9f90-b45e34943385"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 10 19:11:15 crc kubenswrapper[4861]: I0310 19:11:15.203965 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5c22ba49-46f8-4ad7-9f90-b45e34943385-kube-api-access-5phdd" (OuterVolumeSpecName: "kube-api-access-5phdd") pod "5c22ba49-46f8-4ad7-9f90-b45e34943385" (UID: "5c22ba49-46f8-4ad7-9f90-b45e34943385"). InnerVolumeSpecName "kube-api-access-5phdd". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 10 19:11:15 crc kubenswrapper[4861]: I0310 19:11:15.231484 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5c22ba49-46f8-4ad7-9f90-b45e34943385-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "5c22ba49-46f8-4ad7-9f90-b45e34943385" (UID: "5c22ba49-46f8-4ad7-9f90-b45e34943385"). 
InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 10 19:11:15 crc kubenswrapper[4861]: I0310 19:11:15.285840 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5c22ba49-46f8-4ad7-9f90-b45e34943385-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5c22ba49-46f8-4ad7-9f90-b45e34943385" (UID: "5c22ba49-46f8-4ad7-9f90-b45e34943385"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 10 19:11:15 crc kubenswrapper[4861]: I0310 19:11:15.292824 4861 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5c22ba49-46f8-4ad7-9f90-b45e34943385-log-httpd\") on node \"crc\" DevicePath \"\"" Mar 10 19:11:15 crc kubenswrapper[4861]: I0310 19:11:15.292853 4861 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5c22ba49-46f8-4ad7-9f90-b45e34943385-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 10 19:11:15 crc kubenswrapper[4861]: I0310 19:11:15.292863 4861 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5c22ba49-46f8-4ad7-9f90-b45e34943385-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Mar 10 19:11:15 crc kubenswrapper[4861]: I0310 19:11:15.292873 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5phdd\" (UniqueName: \"kubernetes.io/projected/5c22ba49-46f8-4ad7-9f90-b45e34943385-kube-api-access-5phdd\") on node \"crc\" DevicePath \"\"" Mar 10 19:11:15 crc kubenswrapper[4861]: I0310 19:11:15.292881 4861 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5c22ba49-46f8-4ad7-9f90-b45e34943385-scripts\") on node \"crc\" DevicePath \"\"" Mar 10 19:11:15 crc kubenswrapper[4861]: I0310 19:11:15.311973 4861 operation_generator.go:803] UnmountVolume.TearDown 
succeeded for volume "kubernetes.io/secret/5c22ba49-46f8-4ad7-9f90-b45e34943385-config-data" (OuterVolumeSpecName: "config-data") pod "5c22ba49-46f8-4ad7-9f90-b45e34943385" (UID: "5c22ba49-46f8-4ad7-9f90-b45e34943385"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 10 19:11:15 crc kubenswrapper[4861]: W0310 19:11:15.361577 4861 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod97db979f_75cb_4e7e_9dc6_0c65f39fef8e.slice/crio-18012cb40d9deff9ef05b17c1356be10d9edaff1e34afacc9b7fa1a0adf864d8 WatchSource:0}: Error finding container 18012cb40d9deff9ef05b17c1356be10d9edaff1e34afacc9b7fa1a0adf864d8: Status 404 returned error can't find the container with id 18012cb40d9deff9ef05b17c1356be10d9edaff1e34afacc9b7fa1a0adf864d8 Mar 10 19:11:15 crc kubenswrapper[4861]: I0310 19:11:15.366504 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-6d6c7bd6d5-klrmq"] Mar 10 19:11:15 crc kubenswrapper[4861]: I0310 19:11:15.394866 4861 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5c22ba49-46f8-4ad7-9f90-b45e34943385-config-data\") on node \"crc\" DevicePath \"\"" Mar 10 19:11:15 crc kubenswrapper[4861]: I0310 19:11:15.549149 4861 generic.go:334] "Generic (PLEG): container finished" podID="5c22ba49-46f8-4ad7-9f90-b45e34943385" containerID="f30d7f7a82df6bcce5711088c0cf1dd81e35b7439df6ba3a975a006e084aebc9" exitCode=0 Mar 10 19:11:15 crc kubenswrapper[4861]: I0310 19:11:15.549204 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Mar 10 19:11:15 crc kubenswrapper[4861]: I0310 19:11:15.549254 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5c22ba49-46f8-4ad7-9f90-b45e34943385","Type":"ContainerDied","Data":"f30d7f7a82df6bcce5711088c0cf1dd81e35b7439df6ba3a975a006e084aebc9"} Mar 10 19:11:15 crc kubenswrapper[4861]: I0310 19:11:15.549308 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5c22ba49-46f8-4ad7-9f90-b45e34943385","Type":"ContainerDied","Data":"72b3e0082e8933f7d616c91569e3308593d3fe224a50421d87344c88c56424d8"} Mar 10 19:11:15 crc kubenswrapper[4861]: I0310 19:11:15.549328 4861 scope.go:117] "RemoveContainer" containerID="0d8af9732d717f3b2f2740ecff2b4055c69ff220983492e83e754791ee8719cb" Mar 10 19:11:15 crc kubenswrapper[4861]: I0310 19:11:15.554917 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-6d6c7bd6d5-klrmq" event={"ID":"97db979f-75cb-4e7e-9dc6-0c65f39fef8e","Type":"ContainerStarted","Data":"18012cb40d9deff9ef05b17c1356be10d9edaff1e34afacc9b7fa1a0adf864d8"} Mar 10 19:11:15 crc kubenswrapper[4861]: I0310 19:11:15.586337 4861 scope.go:117] "RemoveContainer" containerID="bcfd7a9775cb626dec759b4d8f9bb0679b6f6ee7c04d6de2cf109b38e5bfb271" Mar 10 19:11:15 crc kubenswrapper[4861]: I0310 19:11:15.605689 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Mar 10 19:11:15 crc kubenswrapper[4861]: I0310 19:11:15.612150 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Mar 10 19:11:15 crc kubenswrapper[4861]: I0310 19:11:15.623859 4861 scope.go:117] "RemoveContainer" containerID="b08c35add2fcde02a1b109d63548d7ca5cdcd7e1ad29bd50b3f3f1b4d347d7b8" Mar 10 19:11:15 crc kubenswrapper[4861]: I0310 19:11:15.628676 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Mar 10 19:11:15 crc kubenswrapper[4861]: E0310 
19:11:15.629116 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5c22ba49-46f8-4ad7-9f90-b45e34943385" containerName="ceilometer-central-agent" Mar 10 19:11:15 crc kubenswrapper[4861]: I0310 19:11:15.629137 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="5c22ba49-46f8-4ad7-9f90-b45e34943385" containerName="ceilometer-central-agent" Mar 10 19:11:15 crc kubenswrapper[4861]: E0310 19:11:15.629150 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5c22ba49-46f8-4ad7-9f90-b45e34943385" containerName="ceilometer-notification-agent" Mar 10 19:11:15 crc kubenswrapper[4861]: I0310 19:11:15.629157 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="5c22ba49-46f8-4ad7-9f90-b45e34943385" containerName="ceilometer-notification-agent" Mar 10 19:11:15 crc kubenswrapper[4861]: E0310 19:11:15.629171 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5c22ba49-46f8-4ad7-9f90-b45e34943385" containerName="sg-core" Mar 10 19:11:15 crc kubenswrapper[4861]: I0310 19:11:15.629176 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="5c22ba49-46f8-4ad7-9f90-b45e34943385" containerName="sg-core" Mar 10 19:11:15 crc kubenswrapper[4861]: E0310 19:11:15.629209 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5c22ba49-46f8-4ad7-9f90-b45e34943385" containerName="proxy-httpd" Mar 10 19:11:15 crc kubenswrapper[4861]: I0310 19:11:15.629215 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="5c22ba49-46f8-4ad7-9f90-b45e34943385" containerName="proxy-httpd" Mar 10 19:11:15 crc kubenswrapper[4861]: I0310 19:11:15.629380 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="5c22ba49-46f8-4ad7-9f90-b45e34943385" containerName="sg-core" Mar 10 19:11:15 crc kubenswrapper[4861]: I0310 19:11:15.629395 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="5c22ba49-46f8-4ad7-9f90-b45e34943385" containerName="proxy-httpd" Mar 10 19:11:15 crc kubenswrapper[4861]: 
I0310 19:11:15.629408 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="5c22ba49-46f8-4ad7-9f90-b45e34943385" containerName="ceilometer-central-agent" Mar 10 19:11:15 crc kubenswrapper[4861]: I0310 19:11:15.629419 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="5c22ba49-46f8-4ad7-9f90-b45e34943385" containerName="ceilometer-notification-agent" Mar 10 19:11:15 crc kubenswrapper[4861]: I0310 19:11:15.630910 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Mar 10 19:11:15 crc kubenswrapper[4861]: I0310 19:11:15.637923 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Mar 10 19:11:15 crc kubenswrapper[4861]: I0310 19:11:15.639997 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Mar 10 19:11:15 crc kubenswrapper[4861]: I0310 19:11:15.645350 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Mar 10 19:11:15 crc kubenswrapper[4861]: I0310 19:11:15.679843 4861 scope.go:117] "RemoveContainer" containerID="f30d7f7a82df6bcce5711088c0cf1dd81e35b7439df6ba3a975a006e084aebc9" Mar 10 19:11:15 crc kubenswrapper[4861]: I0310 19:11:15.701042 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4a9bd9df-bdab-4b30-abb5-0efa09510371-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"4a9bd9df-bdab-4b30-abb5-0efa09510371\") " pod="openstack/ceilometer-0" Mar 10 19:11:15 crc kubenswrapper[4861]: I0310 19:11:15.701099 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4a9bd9df-bdab-4b30-abb5-0efa09510371-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"4a9bd9df-bdab-4b30-abb5-0efa09510371\") " pod="openstack/ceilometer-0" Mar 10 19:11:15 crc 
kubenswrapper[4861]: I0310 19:11:15.701287 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4a9bd9df-bdab-4b30-abb5-0efa09510371-run-httpd\") pod \"ceilometer-0\" (UID: \"4a9bd9df-bdab-4b30-abb5-0efa09510371\") " pod="openstack/ceilometer-0" Mar 10 19:11:15 crc kubenswrapper[4861]: I0310 19:11:15.701320 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4a9bd9df-bdab-4b30-abb5-0efa09510371-config-data\") pod \"ceilometer-0\" (UID: \"4a9bd9df-bdab-4b30-abb5-0efa09510371\") " pod="openstack/ceilometer-0" Mar 10 19:11:15 crc kubenswrapper[4861]: I0310 19:11:15.701405 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4a9bd9df-bdab-4b30-abb5-0efa09510371-log-httpd\") pod \"ceilometer-0\" (UID: \"4a9bd9df-bdab-4b30-abb5-0efa09510371\") " pod="openstack/ceilometer-0" Mar 10 19:11:15 crc kubenswrapper[4861]: I0310 19:11:15.701568 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zdq4c\" (UniqueName: \"kubernetes.io/projected/4a9bd9df-bdab-4b30-abb5-0efa09510371-kube-api-access-zdq4c\") pod \"ceilometer-0\" (UID: \"4a9bd9df-bdab-4b30-abb5-0efa09510371\") " pod="openstack/ceilometer-0" Mar 10 19:11:15 crc kubenswrapper[4861]: I0310 19:11:15.701601 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4a9bd9df-bdab-4b30-abb5-0efa09510371-scripts\") pod \"ceilometer-0\" (UID: \"4a9bd9df-bdab-4b30-abb5-0efa09510371\") " pod="openstack/ceilometer-0" Mar 10 19:11:15 crc kubenswrapper[4861]: I0310 19:11:15.797637 4861 scope.go:117] "RemoveContainer" 
containerID="0d8af9732d717f3b2f2740ecff2b4055c69ff220983492e83e754791ee8719cb" Mar 10 19:11:15 crc kubenswrapper[4861]: E0310 19:11:15.798603 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0d8af9732d717f3b2f2740ecff2b4055c69ff220983492e83e754791ee8719cb\": container with ID starting with 0d8af9732d717f3b2f2740ecff2b4055c69ff220983492e83e754791ee8719cb not found: ID does not exist" containerID="0d8af9732d717f3b2f2740ecff2b4055c69ff220983492e83e754791ee8719cb" Mar 10 19:11:15 crc kubenswrapper[4861]: I0310 19:11:15.798634 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0d8af9732d717f3b2f2740ecff2b4055c69ff220983492e83e754791ee8719cb"} err="failed to get container status \"0d8af9732d717f3b2f2740ecff2b4055c69ff220983492e83e754791ee8719cb\": rpc error: code = NotFound desc = could not find container \"0d8af9732d717f3b2f2740ecff2b4055c69ff220983492e83e754791ee8719cb\": container with ID starting with 0d8af9732d717f3b2f2740ecff2b4055c69ff220983492e83e754791ee8719cb not found: ID does not exist" Mar 10 19:11:15 crc kubenswrapper[4861]: I0310 19:11:15.798653 4861 scope.go:117] "RemoveContainer" containerID="bcfd7a9775cb626dec759b4d8f9bb0679b6f6ee7c04d6de2cf109b38e5bfb271" Mar 10 19:11:15 crc kubenswrapper[4861]: E0310 19:11:15.798950 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bcfd7a9775cb626dec759b4d8f9bb0679b6f6ee7c04d6de2cf109b38e5bfb271\": container with ID starting with bcfd7a9775cb626dec759b4d8f9bb0679b6f6ee7c04d6de2cf109b38e5bfb271 not found: ID does not exist" containerID="bcfd7a9775cb626dec759b4d8f9bb0679b6f6ee7c04d6de2cf109b38e5bfb271" Mar 10 19:11:15 crc kubenswrapper[4861]: I0310 19:11:15.798969 4861 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"bcfd7a9775cb626dec759b4d8f9bb0679b6f6ee7c04d6de2cf109b38e5bfb271"} err="failed to get container status \"bcfd7a9775cb626dec759b4d8f9bb0679b6f6ee7c04d6de2cf109b38e5bfb271\": rpc error: code = NotFound desc = could not find container \"bcfd7a9775cb626dec759b4d8f9bb0679b6f6ee7c04d6de2cf109b38e5bfb271\": container with ID starting with bcfd7a9775cb626dec759b4d8f9bb0679b6f6ee7c04d6de2cf109b38e5bfb271 not found: ID does not exist" Mar 10 19:11:15 crc kubenswrapper[4861]: I0310 19:11:15.798981 4861 scope.go:117] "RemoveContainer" containerID="b08c35add2fcde02a1b109d63548d7ca5cdcd7e1ad29bd50b3f3f1b4d347d7b8" Mar 10 19:11:15 crc kubenswrapper[4861]: E0310 19:11:15.799170 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b08c35add2fcde02a1b109d63548d7ca5cdcd7e1ad29bd50b3f3f1b4d347d7b8\": container with ID starting with b08c35add2fcde02a1b109d63548d7ca5cdcd7e1ad29bd50b3f3f1b4d347d7b8 not found: ID does not exist" containerID="b08c35add2fcde02a1b109d63548d7ca5cdcd7e1ad29bd50b3f3f1b4d347d7b8" Mar 10 19:11:15 crc kubenswrapper[4861]: I0310 19:11:15.799187 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b08c35add2fcde02a1b109d63548d7ca5cdcd7e1ad29bd50b3f3f1b4d347d7b8"} err="failed to get container status \"b08c35add2fcde02a1b109d63548d7ca5cdcd7e1ad29bd50b3f3f1b4d347d7b8\": rpc error: code = NotFound desc = could not find container \"b08c35add2fcde02a1b109d63548d7ca5cdcd7e1ad29bd50b3f3f1b4d347d7b8\": container with ID starting with b08c35add2fcde02a1b109d63548d7ca5cdcd7e1ad29bd50b3f3f1b4d347d7b8 not found: ID does not exist" Mar 10 19:11:15 crc kubenswrapper[4861]: I0310 19:11:15.799201 4861 scope.go:117] "RemoveContainer" containerID="f30d7f7a82df6bcce5711088c0cf1dd81e35b7439df6ba3a975a006e084aebc9" Mar 10 19:11:15 crc kubenswrapper[4861]: E0310 19:11:15.799563 4861 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"f30d7f7a82df6bcce5711088c0cf1dd81e35b7439df6ba3a975a006e084aebc9\": container with ID starting with f30d7f7a82df6bcce5711088c0cf1dd81e35b7439df6ba3a975a006e084aebc9 not found: ID does not exist" containerID="f30d7f7a82df6bcce5711088c0cf1dd81e35b7439df6ba3a975a006e084aebc9" Mar 10 19:11:15 crc kubenswrapper[4861]: I0310 19:11:15.799585 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f30d7f7a82df6bcce5711088c0cf1dd81e35b7439df6ba3a975a006e084aebc9"} err="failed to get container status \"f30d7f7a82df6bcce5711088c0cf1dd81e35b7439df6ba3a975a006e084aebc9\": rpc error: code = NotFound desc = could not find container \"f30d7f7a82df6bcce5711088c0cf1dd81e35b7439df6ba3a975a006e084aebc9\": container with ID starting with f30d7f7a82df6bcce5711088c0cf1dd81e35b7439df6ba3a975a006e084aebc9 not found: ID does not exist" Mar 10 19:11:15 crc kubenswrapper[4861]: I0310 19:11:15.802915 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4a9bd9df-bdab-4b30-abb5-0efa09510371-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"4a9bd9df-bdab-4b30-abb5-0efa09510371\") " pod="openstack/ceilometer-0" Mar 10 19:11:15 crc kubenswrapper[4861]: I0310 19:11:15.802968 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4a9bd9df-bdab-4b30-abb5-0efa09510371-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"4a9bd9df-bdab-4b30-abb5-0efa09510371\") " pod="openstack/ceilometer-0" Mar 10 19:11:15 crc kubenswrapper[4861]: I0310 19:11:15.803022 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4a9bd9df-bdab-4b30-abb5-0efa09510371-run-httpd\") pod \"ceilometer-0\" (UID: \"4a9bd9df-bdab-4b30-abb5-0efa09510371\") " 
pod="openstack/ceilometer-0" Mar 10 19:11:15 crc kubenswrapper[4861]: I0310 19:11:15.803038 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4a9bd9df-bdab-4b30-abb5-0efa09510371-config-data\") pod \"ceilometer-0\" (UID: \"4a9bd9df-bdab-4b30-abb5-0efa09510371\") " pod="openstack/ceilometer-0" Mar 10 19:11:15 crc kubenswrapper[4861]: I0310 19:11:15.803081 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4a9bd9df-bdab-4b30-abb5-0efa09510371-log-httpd\") pod \"ceilometer-0\" (UID: \"4a9bd9df-bdab-4b30-abb5-0efa09510371\") " pod="openstack/ceilometer-0" Mar 10 19:11:15 crc kubenswrapper[4861]: I0310 19:11:15.803132 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zdq4c\" (UniqueName: \"kubernetes.io/projected/4a9bd9df-bdab-4b30-abb5-0efa09510371-kube-api-access-zdq4c\") pod \"ceilometer-0\" (UID: \"4a9bd9df-bdab-4b30-abb5-0efa09510371\") " pod="openstack/ceilometer-0" Mar 10 19:11:15 crc kubenswrapper[4861]: I0310 19:11:15.803151 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4a9bd9df-bdab-4b30-abb5-0efa09510371-scripts\") pod \"ceilometer-0\" (UID: \"4a9bd9df-bdab-4b30-abb5-0efa09510371\") " pod="openstack/ceilometer-0" Mar 10 19:11:15 crc kubenswrapper[4861]: I0310 19:11:15.803684 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4a9bd9df-bdab-4b30-abb5-0efa09510371-run-httpd\") pod \"ceilometer-0\" (UID: \"4a9bd9df-bdab-4b30-abb5-0efa09510371\") " pod="openstack/ceilometer-0" Mar 10 19:11:15 crc kubenswrapper[4861]: I0310 19:11:15.804244 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/4a9bd9df-bdab-4b30-abb5-0efa09510371-log-httpd\") pod \"ceilometer-0\" (UID: \"4a9bd9df-bdab-4b30-abb5-0efa09510371\") " pod="openstack/ceilometer-0" Mar 10 19:11:15 crc kubenswrapper[4861]: I0310 19:11:15.808141 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4a9bd9df-bdab-4b30-abb5-0efa09510371-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"4a9bd9df-bdab-4b30-abb5-0efa09510371\") " pod="openstack/ceilometer-0" Mar 10 19:11:15 crc kubenswrapper[4861]: I0310 19:11:15.810357 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4a9bd9df-bdab-4b30-abb5-0efa09510371-scripts\") pod \"ceilometer-0\" (UID: \"4a9bd9df-bdab-4b30-abb5-0efa09510371\") " pod="openstack/ceilometer-0" Mar 10 19:11:15 crc kubenswrapper[4861]: I0310 19:11:15.810857 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4a9bd9df-bdab-4b30-abb5-0efa09510371-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"4a9bd9df-bdab-4b30-abb5-0efa09510371\") " pod="openstack/ceilometer-0" Mar 10 19:11:15 crc kubenswrapper[4861]: I0310 19:11:15.813068 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4a9bd9df-bdab-4b30-abb5-0efa09510371-config-data\") pod \"ceilometer-0\" (UID: \"4a9bd9df-bdab-4b30-abb5-0efa09510371\") " pod="openstack/ceilometer-0" Mar 10 19:11:15 crc kubenswrapper[4861]: I0310 19:11:15.818922 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zdq4c\" (UniqueName: \"kubernetes.io/projected/4a9bd9df-bdab-4b30-abb5-0efa09510371-kube-api-access-zdq4c\") pod \"ceilometer-0\" (UID: \"4a9bd9df-bdab-4b30-abb5-0efa09510371\") " pod="openstack/ceilometer-0" Mar 10 19:11:15 crc kubenswrapper[4861]: I0310 19:11:15.919974 4861 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Mar 10 19:11:15 crc kubenswrapper[4861]: I0310 19:11:15.969641 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Mar 10 19:11:16 crc kubenswrapper[4861]: I0310 19:11:16.427721 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Mar 10 19:11:16 crc kubenswrapper[4861]: W0310 19:11:16.433086 4861 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4a9bd9df_bdab_4b30_abb5_0efa09510371.slice/crio-98a64fd21fbb5432473951799492fbb4ca2f558337ce489be2ea48c31f3217c5 WatchSource:0}: Error finding container 98a64fd21fbb5432473951799492fbb4ca2f558337ce489be2ea48c31f3217c5: Status 404 returned error can't find the container with id 98a64fd21fbb5432473951799492fbb4ca2f558337ce489be2ea48c31f3217c5 Mar 10 19:11:16 crc kubenswrapper[4861]: I0310 19:11:16.572307 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-6d6c7bd6d5-klrmq" event={"ID":"97db979f-75cb-4e7e-9dc6-0c65f39fef8e","Type":"ContainerStarted","Data":"a06874ab7d0cf39ee7cdf29289ca00ec76f54115d05d5a197cc6e53230b06010"} Mar 10 19:11:16 crc kubenswrapper[4861]: I0310 19:11:16.572369 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-6d6c7bd6d5-klrmq" event={"ID":"97db979f-75cb-4e7e-9dc6-0c65f39fef8e","Type":"ContainerStarted","Data":"985fbcd6585b2dfed67bfbcca47fd62b176667adcf86aa98e78a20b0b5c2fedd"} Mar 10 19:11:16 crc kubenswrapper[4861]: I0310 19:11:16.573353 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-6d6c7bd6d5-klrmq" Mar 10 19:11:16 crc kubenswrapper[4861]: I0310 19:11:16.573414 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-6d6c7bd6d5-klrmq" Mar 10 19:11:16 crc kubenswrapper[4861]: I0310 
19:11:16.575499 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4a9bd9df-bdab-4b30-abb5-0efa09510371","Type":"ContainerStarted","Data":"98a64fd21fbb5432473951799492fbb4ca2f558337ce489be2ea48c31f3217c5"} Mar 10 19:11:16 crc kubenswrapper[4861]: I0310 19:11:16.595224 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-proxy-6d6c7bd6d5-klrmq" podStartSLOduration=2.5952036979999997 podStartE2EDuration="2.595203698s" podCreationTimestamp="2026-03-10 19:11:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-10 19:11:16.594471628 +0000 UTC m=+1420.357907588" watchObservedRunningTime="2026-03-10 19:11:16.595203698 +0000 UTC m=+1420.358639658" Mar 10 19:11:16 crc kubenswrapper[4861]: I0310 19:11:16.973049 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5c22ba49-46f8-4ad7-9f90-b45e34943385" path="/var/lib/kubelet/pods/5c22ba49-46f8-4ad7-9f90-b45e34943385/volumes" Mar 10 19:11:17 crc kubenswrapper[4861]: I0310 19:11:17.593976 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4a9bd9df-bdab-4b30-abb5-0efa09510371","Type":"ContainerStarted","Data":"a5bee14423141f58dbeb7401970c035f09dba86f4e58d43bb28f34828987c2ba"} Mar 10 19:11:18 crc kubenswrapper[4861]: I0310 19:11:18.620783 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4a9bd9df-bdab-4b30-abb5-0efa09510371","Type":"ContainerStarted","Data":"04097786f8dbaf5bcb11b9caa68e354e38f5b0fc8ff57b3dffd0700f3cae134e"} Mar 10 19:11:19 crc kubenswrapper[4861]: I0310 19:11:19.635378 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4a9bd9df-bdab-4b30-abb5-0efa09510371","Type":"ContainerStarted","Data":"a61691316bae696fb3a025761ac31d630e0ff82d7cfe8251ffb820110707032f"} Mar 10 19:11:23 crc 
kubenswrapper[4861]: I0310 19:11:23.039751 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Mar 10 19:11:24 crc kubenswrapper[4861]: I0310 19:11:24.624148 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-6d6c7bd6d5-klrmq" Mar 10 19:11:24 crc kubenswrapper[4861]: I0310 19:11:24.625394 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-6d6c7bd6d5-klrmq" Mar 10 19:11:25 crc kubenswrapper[4861]: I0310 19:11:25.709181 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"9007f85d-dd41-49ed-9a6f-c2b09b26fad2","Type":"ContainerStarted","Data":"94acf945686639ad254fe41f715e4c432f0b2ea1aeb6417fba3df413dac91d1b"} Mar 10 19:11:25 crc kubenswrapper[4861]: I0310 19:11:25.712082 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4a9bd9df-bdab-4b30-abb5-0efa09510371","Type":"ContainerStarted","Data":"22588b95fee5d4f7b28dd5b2599fdc301ba396471486ad317a80370d085544be"} Mar 10 19:11:25 crc kubenswrapper[4861]: I0310 19:11:25.712216 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="4a9bd9df-bdab-4b30-abb5-0efa09510371" containerName="ceilometer-central-agent" containerID="cri-o://a5bee14423141f58dbeb7401970c035f09dba86f4e58d43bb28f34828987c2ba" gracePeriod=30 Mar 10 19:11:25 crc kubenswrapper[4861]: I0310 19:11:25.712256 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Mar 10 19:11:25 crc kubenswrapper[4861]: I0310 19:11:25.712242 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="4a9bd9df-bdab-4b30-abb5-0efa09510371" containerName="sg-core" containerID="cri-o://a61691316bae696fb3a025761ac31d630e0ff82d7cfe8251ffb820110707032f" gracePeriod=30 Mar 10 19:11:25 crc kubenswrapper[4861]: I0310 
19:11:25.712285 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="4a9bd9df-bdab-4b30-abb5-0efa09510371" containerName="proxy-httpd" containerID="cri-o://22588b95fee5d4f7b28dd5b2599fdc301ba396471486ad317a80370d085544be" gracePeriod=30 Mar 10 19:11:25 crc kubenswrapper[4861]: I0310 19:11:25.712326 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="4a9bd9df-bdab-4b30-abb5-0efa09510371" containerName="ceilometer-notification-agent" containerID="cri-o://04097786f8dbaf5bcb11b9caa68e354e38f5b0fc8ff57b3dffd0700f3cae134e" gracePeriod=30 Mar 10 19:11:25 crc kubenswrapper[4861]: I0310 19:11:25.739628 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=2.278017373 podStartE2EDuration="12.739612s" podCreationTimestamp="2026-03-10 19:11:13 +0000 UTC" firstStartedPulling="2026-03-10 19:11:14.339550189 +0000 UTC m=+1418.102986139" lastFinishedPulling="2026-03-10 19:11:24.801144806 +0000 UTC m=+1428.564580766" observedRunningTime="2026-03-10 19:11:25.735792773 +0000 UTC m=+1429.499228743" watchObservedRunningTime="2026-03-10 19:11:25.739612 +0000 UTC m=+1429.503047960" Mar 10 19:11:25 crc kubenswrapper[4861]: I0310 19:11:25.786636 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.463570465 podStartE2EDuration="10.786608075s" podCreationTimestamp="2026-03-10 19:11:15 +0000 UTC" firstStartedPulling="2026-03-10 19:11:16.435514392 +0000 UTC m=+1420.198950352" lastFinishedPulling="2026-03-10 19:11:24.758552002 +0000 UTC m=+1428.521987962" observedRunningTime="2026-03-10 19:11:25.776913716 +0000 UTC m=+1429.540349676" watchObservedRunningTime="2026-03-10 19:11:25.786608075 +0000 UTC m=+1429.550044035" Mar 10 19:11:26 crc kubenswrapper[4861]: I0310 19:11:26.625128 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Mar 10 19:11:26 crc kubenswrapper[4861]: I0310 19:11:26.723514 4861 generic.go:334] "Generic (PLEG): container finished" podID="4a9bd9df-bdab-4b30-abb5-0efa09510371" containerID="22588b95fee5d4f7b28dd5b2599fdc301ba396471486ad317a80370d085544be" exitCode=0 Mar 10 19:11:26 crc kubenswrapper[4861]: I0310 19:11:26.724105 4861 generic.go:334] "Generic (PLEG): container finished" podID="4a9bd9df-bdab-4b30-abb5-0efa09510371" containerID="a61691316bae696fb3a025761ac31d630e0ff82d7cfe8251ffb820110707032f" exitCode=2 Mar 10 19:11:26 crc kubenswrapper[4861]: I0310 19:11:26.724166 4861 generic.go:334] "Generic (PLEG): container finished" podID="4a9bd9df-bdab-4b30-abb5-0efa09510371" containerID="04097786f8dbaf5bcb11b9caa68e354e38f5b0fc8ff57b3dffd0700f3cae134e" exitCode=0 Mar 10 19:11:26 crc kubenswrapper[4861]: I0310 19:11:26.724250 4861 generic.go:334] "Generic (PLEG): container finished" podID="4a9bd9df-bdab-4b30-abb5-0efa09510371" containerID="a5bee14423141f58dbeb7401970c035f09dba86f4e58d43bb28f34828987c2ba" exitCode=0 Mar 10 19:11:26 crc kubenswrapper[4861]: I0310 19:11:26.723577 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Mar 10 19:11:26 crc kubenswrapper[4861]: I0310 19:11:26.723562 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4a9bd9df-bdab-4b30-abb5-0efa09510371","Type":"ContainerDied","Data":"22588b95fee5d4f7b28dd5b2599fdc301ba396471486ad317a80370d085544be"} Mar 10 19:11:26 crc kubenswrapper[4861]: I0310 19:11:26.724876 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4a9bd9df-bdab-4b30-abb5-0efa09510371","Type":"ContainerDied","Data":"a61691316bae696fb3a025761ac31d630e0ff82d7cfe8251ffb820110707032f"} Mar 10 19:11:26 crc kubenswrapper[4861]: I0310 19:11:26.724892 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4a9bd9df-bdab-4b30-abb5-0efa09510371","Type":"ContainerDied","Data":"04097786f8dbaf5bcb11b9caa68e354e38f5b0fc8ff57b3dffd0700f3cae134e"} Mar 10 19:11:26 crc kubenswrapper[4861]: I0310 19:11:26.724902 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4a9bd9df-bdab-4b30-abb5-0efa09510371","Type":"ContainerDied","Data":"a5bee14423141f58dbeb7401970c035f09dba86f4e58d43bb28f34828987c2ba"} Mar 10 19:11:26 crc kubenswrapper[4861]: I0310 19:11:26.724913 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4a9bd9df-bdab-4b30-abb5-0efa09510371","Type":"ContainerDied","Data":"98a64fd21fbb5432473951799492fbb4ca2f558337ce489be2ea48c31f3217c5"} Mar 10 19:11:26 crc kubenswrapper[4861]: I0310 19:11:26.724929 4861 scope.go:117] "RemoveContainer" containerID="22588b95fee5d4f7b28dd5b2599fdc301ba396471486ad317a80370d085544be" Mar 10 19:11:26 crc kubenswrapper[4861]: I0310 19:11:26.746880 4861 scope.go:117] "RemoveContainer" containerID="a61691316bae696fb3a025761ac31d630e0ff82d7cfe8251ffb820110707032f" Mar 10 19:11:26 crc kubenswrapper[4861]: I0310 19:11:26.780017 4861 scope.go:117] "RemoveContainer" 
containerID="04097786f8dbaf5bcb11b9caa68e354e38f5b0fc8ff57b3dffd0700f3cae134e" Mar 10 19:11:26 crc kubenswrapper[4861]: I0310 19:11:26.802282 4861 scope.go:117] "RemoveContainer" containerID="a5bee14423141f58dbeb7401970c035f09dba86f4e58d43bb28f34828987c2ba" Mar 10 19:11:26 crc kubenswrapper[4861]: I0310 19:11:26.814865 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4a9bd9df-bdab-4b30-abb5-0efa09510371-config-data\") pod \"4a9bd9df-bdab-4b30-abb5-0efa09510371\" (UID: \"4a9bd9df-bdab-4b30-abb5-0efa09510371\") " Mar 10 19:11:26 crc kubenswrapper[4861]: I0310 19:11:26.815375 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4a9bd9df-bdab-4b30-abb5-0efa09510371-log-httpd\") pod \"4a9bd9df-bdab-4b30-abb5-0efa09510371\" (UID: \"4a9bd9df-bdab-4b30-abb5-0efa09510371\") " Mar 10 19:11:26 crc kubenswrapper[4861]: I0310 19:11:26.815544 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4a9bd9df-bdab-4b30-abb5-0efa09510371-sg-core-conf-yaml\") pod \"4a9bd9df-bdab-4b30-abb5-0efa09510371\" (UID: \"4a9bd9df-bdab-4b30-abb5-0efa09510371\") " Mar 10 19:11:26 crc kubenswrapper[4861]: I0310 19:11:26.815568 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4a9bd9df-bdab-4b30-abb5-0efa09510371-scripts\") pod \"4a9bd9df-bdab-4b30-abb5-0efa09510371\" (UID: \"4a9bd9df-bdab-4b30-abb5-0efa09510371\") " Mar 10 19:11:26 crc kubenswrapper[4861]: I0310 19:11:26.815621 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zdq4c\" (UniqueName: \"kubernetes.io/projected/4a9bd9df-bdab-4b30-abb5-0efa09510371-kube-api-access-zdq4c\") pod \"4a9bd9df-bdab-4b30-abb5-0efa09510371\" (UID: 
\"4a9bd9df-bdab-4b30-abb5-0efa09510371\") " Mar 10 19:11:26 crc kubenswrapper[4861]: I0310 19:11:26.815650 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4a9bd9df-bdab-4b30-abb5-0efa09510371-run-httpd\") pod \"4a9bd9df-bdab-4b30-abb5-0efa09510371\" (UID: \"4a9bd9df-bdab-4b30-abb5-0efa09510371\") " Mar 10 19:11:26 crc kubenswrapper[4861]: I0310 19:11:26.815683 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4a9bd9df-bdab-4b30-abb5-0efa09510371-combined-ca-bundle\") pod \"4a9bd9df-bdab-4b30-abb5-0efa09510371\" (UID: \"4a9bd9df-bdab-4b30-abb5-0efa09510371\") " Mar 10 19:11:26 crc kubenswrapper[4861]: I0310 19:11:26.816090 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4a9bd9df-bdab-4b30-abb5-0efa09510371-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "4a9bd9df-bdab-4b30-abb5-0efa09510371" (UID: "4a9bd9df-bdab-4b30-abb5-0efa09510371"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 10 19:11:26 crc kubenswrapper[4861]: I0310 19:11:26.816637 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4a9bd9df-bdab-4b30-abb5-0efa09510371-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "4a9bd9df-bdab-4b30-abb5-0efa09510371" (UID: "4a9bd9df-bdab-4b30-abb5-0efa09510371"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 10 19:11:26 crc kubenswrapper[4861]: I0310 19:11:26.816987 4861 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4a9bd9df-bdab-4b30-abb5-0efa09510371-run-httpd\") on node \"crc\" DevicePath \"\"" Mar 10 19:11:26 crc kubenswrapper[4861]: I0310 19:11:26.817017 4861 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4a9bd9df-bdab-4b30-abb5-0efa09510371-log-httpd\") on node \"crc\" DevicePath \"\"" Mar 10 19:11:26 crc kubenswrapper[4861]: I0310 19:11:26.821735 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4a9bd9df-bdab-4b30-abb5-0efa09510371-kube-api-access-zdq4c" (OuterVolumeSpecName: "kube-api-access-zdq4c") pod "4a9bd9df-bdab-4b30-abb5-0efa09510371" (UID: "4a9bd9df-bdab-4b30-abb5-0efa09510371"). InnerVolumeSpecName "kube-api-access-zdq4c". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 10 19:11:26 crc kubenswrapper[4861]: I0310 19:11:26.825356 4861 scope.go:117] "RemoveContainer" containerID="22588b95fee5d4f7b28dd5b2599fdc301ba396471486ad317a80370d085544be" Mar 10 19:11:26 crc kubenswrapper[4861]: I0310 19:11:26.827772 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4a9bd9df-bdab-4b30-abb5-0efa09510371-scripts" (OuterVolumeSpecName: "scripts") pod "4a9bd9df-bdab-4b30-abb5-0efa09510371" (UID: "4a9bd9df-bdab-4b30-abb5-0efa09510371"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 10 19:11:26 crc kubenswrapper[4861]: E0310 19:11:26.829939 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"22588b95fee5d4f7b28dd5b2599fdc301ba396471486ad317a80370d085544be\": container with ID starting with 22588b95fee5d4f7b28dd5b2599fdc301ba396471486ad317a80370d085544be not found: ID does not exist" containerID="22588b95fee5d4f7b28dd5b2599fdc301ba396471486ad317a80370d085544be" Mar 10 19:11:26 crc kubenswrapper[4861]: I0310 19:11:26.829988 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"22588b95fee5d4f7b28dd5b2599fdc301ba396471486ad317a80370d085544be"} err="failed to get container status \"22588b95fee5d4f7b28dd5b2599fdc301ba396471486ad317a80370d085544be\": rpc error: code = NotFound desc = could not find container \"22588b95fee5d4f7b28dd5b2599fdc301ba396471486ad317a80370d085544be\": container with ID starting with 22588b95fee5d4f7b28dd5b2599fdc301ba396471486ad317a80370d085544be not found: ID does not exist" Mar 10 19:11:26 crc kubenswrapper[4861]: I0310 19:11:26.830017 4861 scope.go:117] "RemoveContainer" containerID="a61691316bae696fb3a025761ac31d630e0ff82d7cfe8251ffb820110707032f" Mar 10 19:11:26 crc kubenswrapper[4861]: E0310 19:11:26.838081 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a61691316bae696fb3a025761ac31d630e0ff82d7cfe8251ffb820110707032f\": container with ID starting with a61691316bae696fb3a025761ac31d630e0ff82d7cfe8251ffb820110707032f not found: ID does not exist" containerID="a61691316bae696fb3a025761ac31d630e0ff82d7cfe8251ffb820110707032f" Mar 10 19:11:26 crc kubenswrapper[4861]: I0310 19:11:26.838132 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a61691316bae696fb3a025761ac31d630e0ff82d7cfe8251ffb820110707032f"} err="failed 
to get container status \"a61691316bae696fb3a025761ac31d630e0ff82d7cfe8251ffb820110707032f\": rpc error: code = NotFound desc = could not find container \"a61691316bae696fb3a025761ac31d630e0ff82d7cfe8251ffb820110707032f\": container with ID starting with a61691316bae696fb3a025761ac31d630e0ff82d7cfe8251ffb820110707032f not found: ID does not exist" Mar 10 19:11:26 crc kubenswrapper[4861]: I0310 19:11:26.838160 4861 scope.go:117] "RemoveContainer" containerID="04097786f8dbaf5bcb11b9caa68e354e38f5b0fc8ff57b3dffd0700f3cae134e" Mar 10 19:11:26 crc kubenswrapper[4861]: E0310 19:11:26.840059 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"04097786f8dbaf5bcb11b9caa68e354e38f5b0fc8ff57b3dffd0700f3cae134e\": container with ID starting with 04097786f8dbaf5bcb11b9caa68e354e38f5b0fc8ff57b3dffd0700f3cae134e not found: ID does not exist" containerID="04097786f8dbaf5bcb11b9caa68e354e38f5b0fc8ff57b3dffd0700f3cae134e" Mar 10 19:11:26 crc kubenswrapper[4861]: I0310 19:11:26.840095 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"04097786f8dbaf5bcb11b9caa68e354e38f5b0fc8ff57b3dffd0700f3cae134e"} err="failed to get container status \"04097786f8dbaf5bcb11b9caa68e354e38f5b0fc8ff57b3dffd0700f3cae134e\": rpc error: code = NotFound desc = could not find container \"04097786f8dbaf5bcb11b9caa68e354e38f5b0fc8ff57b3dffd0700f3cae134e\": container with ID starting with 04097786f8dbaf5bcb11b9caa68e354e38f5b0fc8ff57b3dffd0700f3cae134e not found: ID does not exist" Mar 10 19:11:26 crc kubenswrapper[4861]: I0310 19:11:26.840117 4861 scope.go:117] "RemoveContainer" containerID="a5bee14423141f58dbeb7401970c035f09dba86f4e58d43bb28f34828987c2ba" Mar 10 19:11:26 crc kubenswrapper[4861]: E0310 19:11:26.840689 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"a5bee14423141f58dbeb7401970c035f09dba86f4e58d43bb28f34828987c2ba\": container with ID starting with a5bee14423141f58dbeb7401970c035f09dba86f4e58d43bb28f34828987c2ba not found: ID does not exist" containerID="a5bee14423141f58dbeb7401970c035f09dba86f4e58d43bb28f34828987c2ba" Mar 10 19:11:26 crc kubenswrapper[4861]: I0310 19:11:26.840743 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a5bee14423141f58dbeb7401970c035f09dba86f4e58d43bb28f34828987c2ba"} err="failed to get container status \"a5bee14423141f58dbeb7401970c035f09dba86f4e58d43bb28f34828987c2ba\": rpc error: code = NotFound desc = could not find container \"a5bee14423141f58dbeb7401970c035f09dba86f4e58d43bb28f34828987c2ba\": container with ID starting with a5bee14423141f58dbeb7401970c035f09dba86f4e58d43bb28f34828987c2ba not found: ID does not exist" Mar 10 19:11:26 crc kubenswrapper[4861]: I0310 19:11:26.840765 4861 scope.go:117] "RemoveContainer" containerID="22588b95fee5d4f7b28dd5b2599fdc301ba396471486ad317a80370d085544be" Mar 10 19:11:26 crc kubenswrapper[4861]: I0310 19:11:26.841033 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"22588b95fee5d4f7b28dd5b2599fdc301ba396471486ad317a80370d085544be"} err="failed to get container status \"22588b95fee5d4f7b28dd5b2599fdc301ba396471486ad317a80370d085544be\": rpc error: code = NotFound desc = could not find container \"22588b95fee5d4f7b28dd5b2599fdc301ba396471486ad317a80370d085544be\": container with ID starting with 22588b95fee5d4f7b28dd5b2599fdc301ba396471486ad317a80370d085544be not found: ID does not exist" Mar 10 19:11:26 crc kubenswrapper[4861]: I0310 19:11:26.841075 4861 scope.go:117] "RemoveContainer" containerID="a61691316bae696fb3a025761ac31d630e0ff82d7cfe8251ffb820110707032f" Mar 10 19:11:26 crc kubenswrapper[4861]: I0310 19:11:26.844497 4861 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"a61691316bae696fb3a025761ac31d630e0ff82d7cfe8251ffb820110707032f"} err="failed to get container status \"a61691316bae696fb3a025761ac31d630e0ff82d7cfe8251ffb820110707032f\": rpc error: code = NotFound desc = could not find container \"a61691316bae696fb3a025761ac31d630e0ff82d7cfe8251ffb820110707032f\": container with ID starting with a61691316bae696fb3a025761ac31d630e0ff82d7cfe8251ffb820110707032f not found: ID does not exist" Mar 10 19:11:26 crc kubenswrapper[4861]: I0310 19:11:26.844665 4861 scope.go:117] "RemoveContainer" containerID="04097786f8dbaf5bcb11b9caa68e354e38f5b0fc8ff57b3dffd0700f3cae134e" Mar 10 19:11:26 crc kubenswrapper[4861]: I0310 19:11:26.845026 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"04097786f8dbaf5bcb11b9caa68e354e38f5b0fc8ff57b3dffd0700f3cae134e"} err="failed to get container status \"04097786f8dbaf5bcb11b9caa68e354e38f5b0fc8ff57b3dffd0700f3cae134e\": rpc error: code = NotFound desc = could not find container \"04097786f8dbaf5bcb11b9caa68e354e38f5b0fc8ff57b3dffd0700f3cae134e\": container with ID starting with 04097786f8dbaf5bcb11b9caa68e354e38f5b0fc8ff57b3dffd0700f3cae134e not found: ID does not exist" Mar 10 19:11:26 crc kubenswrapper[4861]: I0310 19:11:26.845055 4861 scope.go:117] "RemoveContainer" containerID="a5bee14423141f58dbeb7401970c035f09dba86f4e58d43bb28f34828987c2ba" Mar 10 19:11:26 crc kubenswrapper[4861]: I0310 19:11:26.845262 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a5bee14423141f58dbeb7401970c035f09dba86f4e58d43bb28f34828987c2ba"} err="failed to get container status \"a5bee14423141f58dbeb7401970c035f09dba86f4e58d43bb28f34828987c2ba\": rpc error: code = NotFound desc = could not find container \"a5bee14423141f58dbeb7401970c035f09dba86f4e58d43bb28f34828987c2ba\": container with ID starting with a5bee14423141f58dbeb7401970c035f09dba86f4e58d43bb28f34828987c2ba not found: ID does not 
exist" Mar 10 19:11:26 crc kubenswrapper[4861]: I0310 19:11:26.845282 4861 scope.go:117] "RemoveContainer" containerID="22588b95fee5d4f7b28dd5b2599fdc301ba396471486ad317a80370d085544be" Mar 10 19:11:26 crc kubenswrapper[4861]: I0310 19:11:26.845524 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"22588b95fee5d4f7b28dd5b2599fdc301ba396471486ad317a80370d085544be"} err="failed to get container status \"22588b95fee5d4f7b28dd5b2599fdc301ba396471486ad317a80370d085544be\": rpc error: code = NotFound desc = could not find container \"22588b95fee5d4f7b28dd5b2599fdc301ba396471486ad317a80370d085544be\": container with ID starting with 22588b95fee5d4f7b28dd5b2599fdc301ba396471486ad317a80370d085544be not found: ID does not exist" Mar 10 19:11:26 crc kubenswrapper[4861]: I0310 19:11:26.845545 4861 scope.go:117] "RemoveContainer" containerID="a61691316bae696fb3a025761ac31d630e0ff82d7cfe8251ffb820110707032f" Mar 10 19:11:26 crc kubenswrapper[4861]: I0310 19:11:26.845873 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a61691316bae696fb3a025761ac31d630e0ff82d7cfe8251ffb820110707032f"} err="failed to get container status \"a61691316bae696fb3a025761ac31d630e0ff82d7cfe8251ffb820110707032f\": rpc error: code = NotFound desc = could not find container \"a61691316bae696fb3a025761ac31d630e0ff82d7cfe8251ffb820110707032f\": container with ID starting with a61691316bae696fb3a025761ac31d630e0ff82d7cfe8251ffb820110707032f not found: ID does not exist" Mar 10 19:11:26 crc kubenswrapper[4861]: I0310 19:11:26.845897 4861 scope.go:117] "RemoveContainer" containerID="04097786f8dbaf5bcb11b9caa68e354e38f5b0fc8ff57b3dffd0700f3cae134e" Mar 10 19:11:26 crc kubenswrapper[4861]: I0310 19:11:26.846119 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"04097786f8dbaf5bcb11b9caa68e354e38f5b0fc8ff57b3dffd0700f3cae134e"} err="failed to get container status 
\"04097786f8dbaf5bcb11b9caa68e354e38f5b0fc8ff57b3dffd0700f3cae134e\": rpc error: code = NotFound desc = could not find container \"04097786f8dbaf5bcb11b9caa68e354e38f5b0fc8ff57b3dffd0700f3cae134e\": container with ID starting with 04097786f8dbaf5bcb11b9caa68e354e38f5b0fc8ff57b3dffd0700f3cae134e not found: ID does not exist" Mar 10 19:11:26 crc kubenswrapper[4861]: I0310 19:11:26.846144 4861 scope.go:117] "RemoveContainer" containerID="a5bee14423141f58dbeb7401970c035f09dba86f4e58d43bb28f34828987c2ba" Mar 10 19:11:26 crc kubenswrapper[4861]: I0310 19:11:26.847418 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a5bee14423141f58dbeb7401970c035f09dba86f4e58d43bb28f34828987c2ba"} err="failed to get container status \"a5bee14423141f58dbeb7401970c035f09dba86f4e58d43bb28f34828987c2ba\": rpc error: code = NotFound desc = could not find container \"a5bee14423141f58dbeb7401970c035f09dba86f4e58d43bb28f34828987c2ba\": container with ID starting with a5bee14423141f58dbeb7401970c035f09dba86f4e58d43bb28f34828987c2ba not found: ID does not exist" Mar 10 19:11:26 crc kubenswrapper[4861]: I0310 19:11:26.847497 4861 scope.go:117] "RemoveContainer" containerID="22588b95fee5d4f7b28dd5b2599fdc301ba396471486ad317a80370d085544be" Mar 10 19:11:26 crc kubenswrapper[4861]: I0310 19:11:26.850549 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"22588b95fee5d4f7b28dd5b2599fdc301ba396471486ad317a80370d085544be"} err="failed to get container status \"22588b95fee5d4f7b28dd5b2599fdc301ba396471486ad317a80370d085544be\": rpc error: code = NotFound desc = could not find container \"22588b95fee5d4f7b28dd5b2599fdc301ba396471486ad317a80370d085544be\": container with ID starting with 22588b95fee5d4f7b28dd5b2599fdc301ba396471486ad317a80370d085544be not found: ID does not exist" Mar 10 19:11:26 crc kubenswrapper[4861]: I0310 19:11:26.850585 4861 scope.go:117] "RemoveContainer" 
containerID="a61691316bae696fb3a025761ac31d630e0ff82d7cfe8251ffb820110707032f" Mar 10 19:11:26 crc kubenswrapper[4861]: I0310 19:11:26.851932 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a61691316bae696fb3a025761ac31d630e0ff82d7cfe8251ffb820110707032f"} err="failed to get container status \"a61691316bae696fb3a025761ac31d630e0ff82d7cfe8251ffb820110707032f\": rpc error: code = NotFound desc = could not find container \"a61691316bae696fb3a025761ac31d630e0ff82d7cfe8251ffb820110707032f\": container with ID starting with a61691316bae696fb3a025761ac31d630e0ff82d7cfe8251ffb820110707032f not found: ID does not exist" Mar 10 19:11:26 crc kubenswrapper[4861]: I0310 19:11:26.851959 4861 scope.go:117] "RemoveContainer" containerID="04097786f8dbaf5bcb11b9caa68e354e38f5b0fc8ff57b3dffd0700f3cae134e" Mar 10 19:11:26 crc kubenswrapper[4861]: I0310 19:11:26.854442 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"04097786f8dbaf5bcb11b9caa68e354e38f5b0fc8ff57b3dffd0700f3cae134e"} err="failed to get container status \"04097786f8dbaf5bcb11b9caa68e354e38f5b0fc8ff57b3dffd0700f3cae134e\": rpc error: code = NotFound desc = could not find container \"04097786f8dbaf5bcb11b9caa68e354e38f5b0fc8ff57b3dffd0700f3cae134e\": container with ID starting with 04097786f8dbaf5bcb11b9caa68e354e38f5b0fc8ff57b3dffd0700f3cae134e not found: ID does not exist" Mar 10 19:11:26 crc kubenswrapper[4861]: I0310 19:11:26.854468 4861 scope.go:117] "RemoveContainer" containerID="a5bee14423141f58dbeb7401970c035f09dba86f4e58d43bb28f34828987c2ba" Mar 10 19:11:26 crc kubenswrapper[4861]: I0310 19:11:26.854698 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a5bee14423141f58dbeb7401970c035f09dba86f4e58d43bb28f34828987c2ba"} err="failed to get container status \"a5bee14423141f58dbeb7401970c035f09dba86f4e58d43bb28f34828987c2ba\": rpc error: code = NotFound desc = could 
not find container \"a5bee14423141f58dbeb7401970c035f09dba86f4e58d43bb28f34828987c2ba\": container with ID starting with a5bee14423141f58dbeb7401970c035f09dba86f4e58d43bb28f34828987c2ba not found: ID does not exist" Mar 10 19:11:26 crc kubenswrapper[4861]: I0310 19:11:26.857989 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4a9bd9df-bdab-4b30-abb5-0efa09510371-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "4a9bd9df-bdab-4b30-abb5-0efa09510371" (UID: "4a9bd9df-bdab-4b30-abb5-0efa09510371"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 10 19:11:26 crc kubenswrapper[4861]: I0310 19:11:26.898949 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4a9bd9df-bdab-4b30-abb5-0efa09510371-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4a9bd9df-bdab-4b30-abb5-0efa09510371" (UID: "4a9bd9df-bdab-4b30-abb5-0efa09510371"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 10 19:11:26 crc kubenswrapper[4861]: I0310 19:11:26.924497 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4a9bd9df-bdab-4b30-abb5-0efa09510371-config-data" (OuterVolumeSpecName: "config-data") pod "4a9bd9df-bdab-4b30-abb5-0efa09510371" (UID: "4a9bd9df-bdab-4b30-abb5-0efa09510371"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 10 19:11:26 crc kubenswrapper[4861]: I0310 19:11:26.933788 4861 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4a9bd9df-bdab-4b30-abb5-0efa09510371-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Mar 10 19:11:26 crc kubenswrapper[4861]: I0310 19:11:26.933813 4861 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4a9bd9df-bdab-4b30-abb5-0efa09510371-scripts\") on node \"crc\" DevicePath \"\"" Mar 10 19:11:26 crc kubenswrapper[4861]: I0310 19:11:26.933825 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zdq4c\" (UniqueName: \"kubernetes.io/projected/4a9bd9df-bdab-4b30-abb5-0efa09510371-kube-api-access-zdq4c\") on node \"crc\" DevicePath \"\"" Mar 10 19:11:26 crc kubenswrapper[4861]: I0310 19:11:26.933835 4861 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4a9bd9df-bdab-4b30-abb5-0efa09510371-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 10 19:11:27 crc kubenswrapper[4861]: I0310 19:11:27.035302 4861 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4a9bd9df-bdab-4b30-abb5-0efa09510371-config-data\") on node \"crc\" DevicePath \"\"" Mar 10 19:11:27 crc kubenswrapper[4861]: I0310 19:11:27.051660 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Mar 10 19:11:27 crc kubenswrapper[4861]: I0310 19:11:27.059417 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Mar 10 19:11:27 crc kubenswrapper[4861]: I0310 19:11:27.071663 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Mar 10 19:11:27 crc kubenswrapper[4861]: E0310 19:11:27.072197 4861 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="4a9bd9df-bdab-4b30-abb5-0efa09510371" containerName="proxy-httpd" Mar 10 19:11:27 crc kubenswrapper[4861]: I0310 19:11:27.072290 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="4a9bd9df-bdab-4b30-abb5-0efa09510371" containerName="proxy-httpd" Mar 10 19:11:27 crc kubenswrapper[4861]: E0310 19:11:27.072355 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4a9bd9df-bdab-4b30-abb5-0efa09510371" containerName="sg-core" Mar 10 19:11:27 crc kubenswrapper[4861]: I0310 19:11:27.072411 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="4a9bd9df-bdab-4b30-abb5-0efa09510371" containerName="sg-core" Mar 10 19:11:27 crc kubenswrapper[4861]: E0310 19:11:27.072462 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4a9bd9df-bdab-4b30-abb5-0efa09510371" containerName="ceilometer-notification-agent" Mar 10 19:11:27 crc kubenswrapper[4861]: I0310 19:11:27.073105 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="4a9bd9df-bdab-4b30-abb5-0efa09510371" containerName="ceilometer-notification-agent" Mar 10 19:11:27 crc kubenswrapper[4861]: E0310 19:11:27.073181 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4a9bd9df-bdab-4b30-abb5-0efa09510371" containerName="ceilometer-central-agent" Mar 10 19:11:27 crc kubenswrapper[4861]: I0310 19:11:27.073235 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="4a9bd9df-bdab-4b30-abb5-0efa09510371" containerName="ceilometer-central-agent" Mar 10 19:11:27 crc kubenswrapper[4861]: I0310 19:11:27.073458 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="4a9bd9df-bdab-4b30-abb5-0efa09510371" containerName="ceilometer-central-agent" Mar 10 19:11:27 crc kubenswrapper[4861]: I0310 19:11:27.073519 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="4a9bd9df-bdab-4b30-abb5-0efa09510371" containerName="proxy-httpd" Mar 10 19:11:27 crc kubenswrapper[4861]: I0310 19:11:27.073592 4861 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="4a9bd9df-bdab-4b30-abb5-0efa09510371" containerName="sg-core" Mar 10 19:11:27 crc kubenswrapper[4861]: I0310 19:11:27.073649 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="4a9bd9df-bdab-4b30-abb5-0efa09510371" containerName="ceilometer-notification-agent" Mar 10 19:11:27 crc kubenswrapper[4861]: I0310 19:11:27.075188 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Mar 10 19:11:27 crc kubenswrapper[4861]: I0310 19:11:27.081490 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Mar 10 19:11:27 crc kubenswrapper[4861]: I0310 19:11:27.081646 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Mar 10 19:11:27 crc kubenswrapper[4861]: I0310 19:11:27.099003 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Mar 10 19:11:27 crc kubenswrapper[4861]: I0310 19:11:27.136975 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b2448498-19da-4b93-b300-6425cb5ab4ba-log-httpd\") pod \"ceilometer-0\" (UID: \"b2448498-19da-4b93-b300-6425cb5ab4ba\") " pod="openstack/ceilometer-0" Mar 10 19:11:27 crc kubenswrapper[4861]: I0310 19:11:27.137040 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b2448498-19da-4b93-b300-6425cb5ab4ba-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"b2448498-19da-4b93-b300-6425cb5ab4ba\") " pod="openstack/ceilometer-0" Mar 10 19:11:27 crc kubenswrapper[4861]: I0310 19:11:27.137083 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b2448498-19da-4b93-b300-6425cb5ab4ba-scripts\") pod \"ceilometer-0\" (UID: 
\"b2448498-19da-4b93-b300-6425cb5ab4ba\") " pod="openstack/ceilometer-0" Mar 10 19:11:27 crc kubenswrapper[4861]: I0310 19:11:27.137113 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b2448498-19da-4b93-b300-6425cb5ab4ba-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"b2448498-19da-4b93-b300-6425cb5ab4ba\") " pod="openstack/ceilometer-0" Mar 10 19:11:27 crc kubenswrapper[4861]: I0310 19:11:27.137147 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zftz6\" (UniqueName: \"kubernetes.io/projected/b2448498-19da-4b93-b300-6425cb5ab4ba-kube-api-access-zftz6\") pod \"ceilometer-0\" (UID: \"b2448498-19da-4b93-b300-6425cb5ab4ba\") " pod="openstack/ceilometer-0" Mar 10 19:11:27 crc kubenswrapper[4861]: I0310 19:11:27.137267 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b2448498-19da-4b93-b300-6425cb5ab4ba-config-data\") pod \"ceilometer-0\" (UID: \"b2448498-19da-4b93-b300-6425cb5ab4ba\") " pod="openstack/ceilometer-0" Mar 10 19:11:27 crc kubenswrapper[4861]: I0310 19:11:27.137325 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b2448498-19da-4b93-b300-6425cb5ab4ba-run-httpd\") pod \"ceilometer-0\" (UID: \"b2448498-19da-4b93-b300-6425cb5ab4ba\") " pod="openstack/ceilometer-0" Mar 10 19:11:27 crc kubenswrapper[4861]: I0310 19:11:27.239195 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b2448498-19da-4b93-b300-6425cb5ab4ba-log-httpd\") pod \"ceilometer-0\" (UID: \"b2448498-19da-4b93-b300-6425cb5ab4ba\") " pod="openstack/ceilometer-0" Mar 10 19:11:27 crc kubenswrapper[4861]: I0310 19:11:27.239291 4861 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b2448498-19da-4b93-b300-6425cb5ab4ba-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"b2448498-19da-4b93-b300-6425cb5ab4ba\") " pod="openstack/ceilometer-0" Mar 10 19:11:27 crc kubenswrapper[4861]: I0310 19:11:27.239326 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b2448498-19da-4b93-b300-6425cb5ab4ba-scripts\") pod \"ceilometer-0\" (UID: \"b2448498-19da-4b93-b300-6425cb5ab4ba\") " pod="openstack/ceilometer-0" Mar 10 19:11:27 crc kubenswrapper[4861]: I0310 19:11:27.239364 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b2448498-19da-4b93-b300-6425cb5ab4ba-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"b2448498-19da-4b93-b300-6425cb5ab4ba\") " pod="openstack/ceilometer-0" Mar 10 19:11:27 crc kubenswrapper[4861]: I0310 19:11:27.239410 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zftz6\" (UniqueName: \"kubernetes.io/projected/b2448498-19da-4b93-b300-6425cb5ab4ba-kube-api-access-zftz6\") pod \"ceilometer-0\" (UID: \"b2448498-19da-4b93-b300-6425cb5ab4ba\") " pod="openstack/ceilometer-0" Mar 10 19:11:27 crc kubenswrapper[4861]: I0310 19:11:27.239446 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b2448498-19da-4b93-b300-6425cb5ab4ba-config-data\") pod \"ceilometer-0\" (UID: \"b2448498-19da-4b93-b300-6425cb5ab4ba\") " pod="openstack/ceilometer-0" Mar 10 19:11:27 crc kubenswrapper[4861]: I0310 19:11:27.239478 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b2448498-19da-4b93-b300-6425cb5ab4ba-run-httpd\") pod \"ceilometer-0\" (UID: 
\"b2448498-19da-4b93-b300-6425cb5ab4ba\") " pod="openstack/ceilometer-0" Mar 10 19:11:27 crc kubenswrapper[4861]: I0310 19:11:27.240021 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b2448498-19da-4b93-b300-6425cb5ab4ba-run-httpd\") pod \"ceilometer-0\" (UID: \"b2448498-19da-4b93-b300-6425cb5ab4ba\") " pod="openstack/ceilometer-0" Mar 10 19:11:27 crc kubenswrapper[4861]: I0310 19:11:27.240770 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b2448498-19da-4b93-b300-6425cb5ab4ba-log-httpd\") pod \"ceilometer-0\" (UID: \"b2448498-19da-4b93-b300-6425cb5ab4ba\") " pod="openstack/ceilometer-0" Mar 10 19:11:27 crc kubenswrapper[4861]: I0310 19:11:27.243780 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b2448498-19da-4b93-b300-6425cb5ab4ba-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"b2448498-19da-4b93-b300-6425cb5ab4ba\") " pod="openstack/ceilometer-0" Mar 10 19:11:27 crc kubenswrapper[4861]: I0310 19:11:27.244082 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b2448498-19da-4b93-b300-6425cb5ab4ba-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"b2448498-19da-4b93-b300-6425cb5ab4ba\") " pod="openstack/ceilometer-0" Mar 10 19:11:27 crc kubenswrapper[4861]: I0310 19:11:27.247828 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b2448498-19da-4b93-b300-6425cb5ab4ba-config-data\") pod \"ceilometer-0\" (UID: \"b2448498-19da-4b93-b300-6425cb5ab4ba\") " pod="openstack/ceilometer-0" Mar 10 19:11:27 crc kubenswrapper[4861]: I0310 19:11:27.252422 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/b2448498-19da-4b93-b300-6425cb5ab4ba-scripts\") pod \"ceilometer-0\" (UID: \"b2448498-19da-4b93-b300-6425cb5ab4ba\") " pod="openstack/ceilometer-0" Mar 10 19:11:27 crc kubenswrapper[4861]: I0310 19:11:27.260049 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zftz6\" (UniqueName: \"kubernetes.io/projected/b2448498-19da-4b93-b300-6425cb5ab4ba-kube-api-access-zftz6\") pod \"ceilometer-0\" (UID: \"b2448498-19da-4b93-b300-6425cb5ab4ba\") " pod="openstack/ceilometer-0" Mar 10 19:11:27 crc kubenswrapper[4861]: I0310 19:11:27.395381 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Mar 10 19:11:27 crc kubenswrapper[4861]: I0310 19:11:27.650525 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Mar 10 19:11:27 crc kubenswrapper[4861]: W0310 19:11:27.655397 4861 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb2448498_19da_4b93_b300_6425cb5ab4ba.slice/crio-dff721dd6e68d0640a8103bc6f3a756bdb032b4fb5427a885aca6b2ffa043905 WatchSource:0}: Error finding container dff721dd6e68d0640a8103bc6f3a756bdb032b4fb5427a885aca6b2ffa043905: Status 404 returned error can't find the container with id dff721dd6e68d0640a8103bc6f3a756bdb032b4fb5427a885aca6b2ffa043905 Mar 10 19:11:27 crc kubenswrapper[4861]: I0310 19:11:27.734583 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b2448498-19da-4b93-b300-6425cb5ab4ba","Type":"ContainerStarted","Data":"dff721dd6e68d0640a8103bc6f3a756bdb032b4fb5427a885aca6b2ffa043905"} Mar 10 19:11:28 crc kubenswrapper[4861]: I0310 19:11:28.745690 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b2448498-19da-4b93-b300-6425cb5ab4ba","Type":"ContainerStarted","Data":"88329b5ed7548c29deb68822d31ee52afbeb7e3e3a50041ff68353978e0c6b38"} Mar 10 
19:11:28 crc kubenswrapper[4861]: I0310 19:11:28.981704 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4a9bd9df-bdab-4b30-abb5-0efa09510371" path="/var/lib/kubelet/pods/4a9bd9df-bdab-4b30-abb5-0efa09510371/volumes" Mar 10 19:11:29 crc kubenswrapper[4861]: I0310 19:11:29.668743 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-69987f456f-bjbjk" Mar 10 19:11:29 crc kubenswrapper[4861]: I0310 19:11:29.728244 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-7b4d7bd5c6-xts5s"] Mar 10 19:11:29 crc kubenswrapper[4861]: I0310 19:11:29.729011 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-7b4d7bd5c6-xts5s" podUID="42290f46-99bb-4386-a400-44483968dc69" containerName="neutron-api" containerID="cri-o://f323d09ee12080395149a91a2306c31acfef39ef7680d3a9e0baf22599acdd4d" gracePeriod=30 Mar 10 19:11:29 crc kubenswrapper[4861]: I0310 19:11:29.729235 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-7b4d7bd5c6-xts5s" podUID="42290f46-99bb-4386-a400-44483968dc69" containerName="neutron-httpd" containerID="cri-o://3849223cfd7426542d297e2e79b208d945c05f157603004e0f0eeccb1471b9d5" gracePeriod=30 Mar 10 19:11:29 crc kubenswrapper[4861]: I0310 19:11:29.757155 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b2448498-19da-4b93-b300-6425cb5ab4ba","Type":"ContainerStarted","Data":"72144776a5933ab643a9cd7cfe15a29727e1310f961270c51e378de6fef5ff8e"} Mar 10 19:11:30 crc kubenswrapper[4861]: I0310 19:11:30.418505 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-db-create-9lzhz"] Mar 10 19:11:30 crc kubenswrapper[4861]: I0310 19:11:30.420228 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-9lzhz" Mar 10 19:11:30 crc kubenswrapper[4861]: I0310 19:11:30.436070 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-9lzhz"] Mar 10 19:11:30 crc kubenswrapper[4861]: I0310 19:11:30.502255 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a7cd3d55-5d1c-4697-ab9b-2e1f37bb40f5-operator-scripts\") pod \"nova-api-db-create-9lzhz\" (UID: \"a7cd3d55-5d1c-4697-ab9b-2e1f37bb40f5\") " pod="openstack/nova-api-db-create-9lzhz" Mar 10 19:11:30 crc kubenswrapper[4861]: I0310 19:11:30.502821 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hc6vz\" (UniqueName: \"kubernetes.io/projected/a7cd3d55-5d1c-4697-ab9b-2e1f37bb40f5-kube-api-access-hc6vz\") pod \"nova-api-db-create-9lzhz\" (UID: \"a7cd3d55-5d1c-4697-ab9b-2e1f37bb40f5\") " pod="openstack/nova-api-db-create-9lzhz" Mar 10 19:11:30 crc kubenswrapper[4861]: I0310 19:11:30.520001 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-52e5-account-create-update-h7sd5"] Mar 10 19:11:30 crc kubenswrapper[4861]: I0310 19:11:30.521207 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-52e5-account-create-update-h7sd5" Mar 10 19:11:30 crc kubenswrapper[4861]: I0310 19:11:30.529048 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret" Mar 10 19:11:30 crc kubenswrapper[4861]: I0310 19:11:30.529998 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-52e5-account-create-update-h7sd5"] Mar 10 19:11:30 crc kubenswrapper[4861]: I0310 19:11:30.604250 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hc6vz\" (UniqueName: \"kubernetes.io/projected/a7cd3d55-5d1c-4697-ab9b-2e1f37bb40f5-kube-api-access-hc6vz\") pod \"nova-api-db-create-9lzhz\" (UID: \"a7cd3d55-5d1c-4697-ab9b-2e1f37bb40f5\") " pod="openstack/nova-api-db-create-9lzhz" Mar 10 19:11:30 crc kubenswrapper[4861]: I0310 19:11:30.604361 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1a8696b2-9ca9-4f41-96bd-58bcb4b74cb0-operator-scripts\") pod \"nova-api-52e5-account-create-update-h7sd5\" (UID: \"1a8696b2-9ca9-4f41-96bd-58bcb4b74cb0\") " pod="openstack/nova-api-52e5-account-create-update-h7sd5" Mar 10 19:11:30 crc kubenswrapper[4861]: I0310 19:11:30.604395 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ffj57\" (UniqueName: \"kubernetes.io/projected/1a8696b2-9ca9-4f41-96bd-58bcb4b74cb0-kube-api-access-ffj57\") pod \"nova-api-52e5-account-create-update-h7sd5\" (UID: \"1a8696b2-9ca9-4f41-96bd-58bcb4b74cb0\") " pod="openstack/nova-api-52e5-account-create-update-h7sd5" Mar 10 19:11:30 crc kubenswrapper[4861]: I0310 19:11:30.604452 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a7cd3d55-5d1c-4697-ab9b-2e1f37bb40f5-operator-scripts\") pod \"nova-api-db-create-9lzhz\" 
(UID: \"a7cd3d55-5d1c-4697-ab9b-2e1f37bb40f5\") " pod="openstack/nova-api-db-create-9lzhz" Mar 10 19:11:30 crc kubenswrapper[4861]: I0310 19:11:30.605132 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a7cd3d55-5d1c-4697-ab9b-2e1f37bb40f5-operator-scripts\") pod \"nova-api-db-create-9lzhz\" (UID: \"a7cd3d55-5d1c-4697-ab9b-2e1f37bb40f5\") " pod="openstack/nova-api-db-create-9lzhz" Mar 10 19:11:30 crc kubenswrapper[4861]: I0310 19:11:30.626939 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-km8tn"] Mar 10 19:11:30 crc kubenswrapper[4861]: I0310 19:11:30.627917 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-km8tn" Mar 10 19:11:30 crc kubenswrapper[4861]: I0310 19:11:30.670982 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-km8tn"] Mar 10 19:11:30 crc kubenswrapper[4861]: I0310 19:11:30.671933 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hc6vz\" (UniqueName: \"kubernetes.io/projected/a7cd3d55-5d1c-4697-ab9b-2e1f37bb40f5-kube-api-access-hc6vz\") pod \"nova-api-db-create-9lzhz\" (UID: \"a7cd3d55-5d1c-4697-ab9b-2e1f37bb40f5\") " pod="openstack/nova-api-db-create-9lzhz" Mar 10 19:11:30 crc kubenswrapper[4861]: I0310 19:11:30.727313 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sl6l4\" (UniqueName: \"kubernetes.io/projected/ae63cfad-11fb-40ca-955e-2c445948a50c-kube-api-access-sl6l4\") pod \"nova-cell0-db-create-km8tn\" (UID: \"ae63cfad-11fb-40ca-955e-2c445948a50c\") " pod="openstack/nova-cell0-db-create-km8tn" Mar 10 19:11:30 crc kubenswrapper[4861]: I0310 19:11:30.727585 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/ae63cfad-11fb-40ca-955e-2c445948a50c-operator-scripts\") pod \"nova-cell0-db-create-km8tn\" (UID: \"ae63cfad-11fb-40ca-955e-2c445948a50c\") " pod="openstack/nova-cell0-db-create-km8tn" Mar 10 19:11:30 crc kubenswrapper[4861]: I0310 19:11:30.727736 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1a8696b2-9ca9-4f41-96bd-58bcb4b74cb0-operator-scripts\") pod \"nova-api-52e5-account-create-update-h7sd5\" (UID: \"1a8696b2-9ca9-4f41-96bd-58bcb4b74cb0\") " pod="openstack/nova-api-52e5-account-create-update-h7sd5" Mar 10 19:11:30 crc kubenswrapper[4861]: I0310 19:11:30.727834 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ffj57\" (UniqueName: \"kubernetes.io/projected/1a8696b2-9ca9-4f41-96bd-58bcb4b74cb0-kube-api-access-ffj57\") pod \"nova-api-52e5-account-create-update-h7sd5\" (UID: \"1a8696b2-9ca9-4f41-96bd-58bcb4b74cb0\") " pod="openstack/nova-api-52e5-account-create-update-h7sd5" Mar 10 19:11:30 crc kubenswrapper[4861]: I0310 19:11:30.728602 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1a8696b2-9ca9-4f41-96bd-58bcb4b74cb0-operator-scripts\") pod \"nova-api-52e5-account-create-update-h7sd5\" (UID: \"1a8696b2-9ca9-4f41-96bd-58bcb4b74cb0\") " pod="openstack/nova-api-52e5-account-create-update-h7sd5" Mar 10 19:11:30 crc kubenswrapper[4861]: I0310 19:11:30.734786 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-9lzhz" Mar 10 19:11:30 crc kubenswrapper[4861]: I0310 19:11:30.775769 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ffj57\" (UniqueName: \"kubernetes.io/projected/1a8696b2-9ca9-4f41-96bd-58bcb4b74cb0-kube-api-access-ffj57\") pod \"nova-api-52e5-account-create-update-h7sd5\" (UID: \"1a8696b2-9ca9-4f41-96bd-58bcb4b74cb0\") " pod="openstack/nova-api-52e5-account-create-update-h7sd5" Mar 10 19:11:30 crc kubenswrapper[4861]: I0310 19:11:30.811781 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cf7a-account-create-update-jgz8n"] Mar 10 19:11:30 crc kubenswrapper[4861]: I0310 19:11:30.812934 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cf7a-account-create-update-jgz8n" Mar 10 19:11:30 crc kubenswrapper[4861]: I0310 19:11:30.819204 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret" Mar 10 19:11:30 crc kubenswrapper[4861]: I0310 19:11:30.831641 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c1597b9b-8273-4a5c-8f46-8a18587e059f-operator-scripts\") pod \"nova-cell0-cf7a-account-create-update-jgz8n\" (UID: \"c1597b9b-8273-4a5c-8f46-8a18587e059f\") " pod="openstack/nova-cell0-cf7a-account-create-update-jgz8n" Mar 10 19:11:30 crc kubenswrapper[4861]: I0310 19:11:30.831776 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sl6l4\" (UniqueName: \"kubernetes.io/projected/ae63cfad-11fb-40ca-955e-2c445948a50c-kube-api-access-sl6l4\") pod \"nova-cell0-db-create-km8tn\" (UID: \"ae63cfad-11fb-40ca-955e-2c445948a50c\") " pod="openstack/nova-cell0-db-create-km8tn" Mar 10 19:11:30 crc kubenswrapper[4861]: I0310 19:11:30.831800 4861 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7rc8g\" (UniqueName: \"kubernetes.io/projected/c1597b9b-8273-4a5c-8f46-8a18587e059f-kube-api-access-7rc8g\") pod \"nova-cell0-cf7a-account-create-update-jgz8n\" (UID: \"c1597b9b-8273-4a5c-8f46-8a18587e059f\") " pod="openstack/nova-cell0-cf7a-account-create-update-jgz8n" Mar 10 19:11:30 crc kubenswrapper[4861]: I0310 19:11:30.831839 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ae63cfad-11fb-40ca-955e-2c445948a50c-operator-scripts\") pod \"nova-cell0-db-create-km8tn\" (UID: \"ae63cfad-11fb-40ca-955e-2c445948a50c\") " pod="openstack/nova-cell0-db-create-km8tn" Mar 10 19:11:30 crc kubenswrapper[4861]: I0310 19:11:30.832427 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ae63cfad-11fb-40ca-955e-2c445948a50c-operator-scripts\") pod \"nova-cell0-db-create-km8tn\" (UID: \"ae63cfad-11fb-40ca-955e-2c445948a50c\") " pod="openstack/nova-cell0-db-create-km8tn" Mar 10 19:11:30 crc kubenswrapper[4861]: I0310 19:11:30.833724 4861 generic.go:334] "Generic (PLEG): container finished" podID="42290f46-99bb-4386-a400-44483968dc69" containerID="3849223cfd7426542d297e2e79b208d945c05f157603004e0f0eeccb1471b9d5" exitCode=0 Mar 10 19:11:30 crc kubenswrapper[4861]: I0310 19:11:30.833769 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7b4d7bd5c6-xts5s" event={"ID":"42290f46-99bb-4386-a400-44483968dc69","Type":"ContainerDied","Data":"3849223cfd7426542d297e2e79b208d945c05f157603004e0f0eeccb1471b9d5"} Mar 10 19:11:30 crc kubenswrapper[4861]: I0310 19:11:30.836141 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-52e5-account-create-update-h7sd5" Mar 10 19:11:30 crc kubenswrapper[4861]: I0310 19:11:30.845182 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-db-create-gnw7n"] Mar 10 19:11:30 crc kubenswrapper[4861]: I0310 19:11:30.846258 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-gnw7n" Mar 10 19:11:30 crc kubenswrapper[4861]: I0310 19:11:30.850912 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sl6l4\" (UniqueName: \"kubernetes.io/projected/ae63cfad-11fb-40ca-955e-2c445948a50c-kube-api-access-sl6l4\") pod \"nova-cell0-db-create-km8tn\" (UID: \"ae63cfad-11fb-40ca-955e-2c445948a50c\") " pod="openstack/nova-cell0-db-create-km8tn" Mar 10 19:11:30 crc kubenswrapper[4861]: I0310 19:11:30.851188 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cf7a-account-create-update-jgz8n"] Mar 10 19:11:30 crc kubenswrapper[4861]: I0310 19:11:30.855376 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b2448498-19da-4b93-b300-6425cb5ab4ba","Type":"ContainerStarted","Data":"a8b79b77db20afaf55a06d0b62bde056648e46ac58419cd4f730ec593798738e"} Mar 10 19:11:30 crc kubenswrapper[4861]: I0310 19:11:30.864626 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-gnw7n"] Mar 10 19:11:30 crc kubenswrapper[4861]: I0310 19:11:30.944872 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-km8tn" Mar 10 19:11:30 crc kubenswrapper[4861]: I0310 19:11:30.945740 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-529cs\" (UniqueName: \"kubernetes.io/projected/63d6fcdc-e673-4207-8da8-3c3f3681bcaf-kube-api-access-529cs\") pod \"nova-cell1-db-create-gnw7n\" (UID: \"63d6fcdc-e673-4207-8da8-3c3f3681bcaf\") " pod="openstack/nova-cell1-db-create-gnw7n" Mar 10 19:11:30 crc kubenswrapper[4861]: I0310 19:11:30.945947 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7rc8g\" (UniqueName: \"kubernetes.io/projected/c1597b9b-8273-4a5c-8f46-8a18587e059f-kube-api-access-7rc8g\") pod \"nova-cell0-cf7a-account-create-update-jgz8n\" (UID: \"c1597b9b-8273-4a5c-8f46-8a18587e059f\") " pod="openstack/nova-cell0-cf7a-account-create-update-jgz8n" Mar 10 19:11:30 crc kubenswrapper[4861]: I0310 19:11:30.946062 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/63d6fcdc-e673-4207-8da8-3c3f3681bcaf-operator-scripts\") pod \"nova-cell1-db-create-gnw7n\" (UID: \"63d6fcdc-e673-4207-8da8-3c3f3681bcaf\") " pod="openstack/nova-cell1-db-create-gnw7n" Mar 10 19:11:30 crc kubenswrapper[4861]: I0310 19:11:30.946173 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c1597b9b-8273-4a5c-8f46-8a18587e059f-operator-scripts\") pod \"nova-cell0-cf7a-account-create-update-jgz8n\" (UID: \"c1597b9b-8273-4a5c-8f46-8a18587e059f\") " pod="openstack/nova-cell0-cf7a-account-create-update-jgz8n" Mar 10 19:11:30 crc kubenswrapper[4861]: I0310 19:11:30.948440 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-f866-account-create-update-4d26l"] Mar 10 19:11:30 crc kubenswrapper[4861]: I0310 19:11:30.949590 4861 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-f866-account-create-update-4d26l" Mar 10 19:11:30 crc kubenswrapper[4861]: I0310 19:11:30.951012 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c1597b9b-8273-4a5c-8f46-8a18587e059f-operator-scripts\") pod \"nova-cell0-cf7a-account-create-update-jgz8n\" (UID: \"c1597b9b-8273-4a5c-8f46-8a18587e059f\") " pod="openstack/nova-cell0-cf7a-account-create-update-jgz8n" Mar 10 19:11:30 crc kubenswrapper[4861]: I0310 19:11:30.953898 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret" Mar 10 19:11:30 crc kubenswrapper[4861]: I0310 19:11:30.972176 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7rc8g\" (UniqueName: \"kubernetes.io/projected/c1597b9b-8273-4a5c-8f46-8a18587e059f-kube-api-access-7rc8g\") pod \"nova-cell0-cf7a-account-create-update-jgz8n\" (UID: \"c1597b9b-8273-4a5c-8f46-8a18587e059f\") " pod="openstack/nova-cell0-cf7a-account-create-update-jgz8n" Mar 10 19:11:31 crc kubenswrapper[4861]: I0310 19:11:31.020995 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-f866-account-create-update-4d26l"] Mar 10 19:11:31 crc kubenswrapper[4861]: I0310 19:11:31.060979 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ee43121f-d7e2-4770-81c4-833fbbe0373f-operator-scripts\") pod \"nova-cell1-f866-account-create-update-4d26l\" (UID: \"ee43121f-d7e2-4770-81c4-833fbbe0373f\") " pod="openstack/nova-cell1-f866-account-create-update-4d26l" Mar 10 19:11:31 crc kubenswrapper[4861]: I0310 19:11:31.061108 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-529cs\" (UniqueName: 
\"kubernetes.io/projected/63d6fcdc-e673-4207-8da8-3c3f3681bcaf-kube-api-access-529cs\") pod \"nova-cell1-db-create-gnw7n\" (UID: \"63d6fcdc-e673-4207-8da8-3c3f3681bcaf\") " pod="openstack/nova-cell1-db-create-gnw7n" Mar 10 19:11:31 crc kubenswrapper[4861]: I0310 19:11:31.061227 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/63d6fcdc-e673-4207-8da8-3c3f3681bcaf-operator-scripts\") pod \"nova-cell1-db-create-gnw7n\" (UID: \"63d6fcdc-e673-4207-8da8-3c3f3681bcaf\") " pod="openstack/nova-cell1-db-create-gnw7n" Mar 10 19:11:31 crc kubenswrapper[4861]: I0310 19:11:31.061342 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cgbl4\" (UniqueName: \"kubernetes.io/projected/ee43121f-d7e2-4770-81c4-833fbbe0373f-kube-api-access-cgbl4\") pod \"nova-cell1-f866-account-create-update-4d26l\" (UID: \"ee43121f-d7e2-4770-81c4-833fbbe0373f\") " pod="openstack/nova-cell1-f866-account-create-update-4d26l" Mar 10 19:11:31 crc kubenswrapper[4861]: I0310 19:11:31.063149 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/63d6fcdc-e673-4207-8da8-3c3f3681bcaf-operator-scripts\") pod \"nova-cell1-db-create-gnw7n\" (UID: \"63d6fcdc-e673-4207-8da8-3c3f3681bcaf\") " pod="openstack/nova-cell1-db-create-gnw7n" Mar 10 19:11:31 crc kubenswrapper[4861]: I0310 19:11:31.105691 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-529cs\" (UniqueName: \"kubernetes.io/projected/63d6fcdc-e673-4207-8da8-3c3f3681bcaf-kube-api-access-529cs\") pod \"nova-cell1-db-create-gnw7n\" (UID: \"63d6fcdc-e673-4207-8da8-3c3f3681bcaf\") " pod="openstack/nova-cell1-db-create-gnw7n" Mar 10 19:11:31 crc kubenswrapper[4861]: I0310 19:11:31.162545 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cgbl4\" 
(UniqueName: \"kubernetes.io/projected/ee43121f-d7e2-4770-81c4-833fbbe0373f-kube-api-access-cgbl4\") pod \"nova-cell1-f866-account-create-update-4d26l\" (UID: \"ee43121f-d7e2-4770-81c4-833fbbe0373f\") " pod="openstack/nova-cell1-f866-account-create-update-4d26l" Mar 10 19:11:31 crc kubenswrapper[4861]: I0310 19:11:31.162822 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ee43121f-d7e2-4770-81c4-833fbbe0373f-operator-scripts\") pod \"nova-cell1-f866-account-create-update-4d26l\" (UID: \"ee43121f-d7e2-4770-81c4-833fbbe0373f\") " pod="openstack/nova-cell1-f866-account-create-update-4d26l" Mar 10 19:11:31 crc kubenswrapper[4861]: I0310 19:11:31.163498 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ee43121f-d7e2-4770-81c4-833fbbe0373f-operator-scripts\") pod \"nova-cell1-f866-account-create-update-4d26l\" (UID: \"ee43121f-d7e2-4770-81c4-833fbbe0373f\") " pod="openstack/nova-cell1-f866-account-create-update-4d26l" Mar 10 19:11:31 crc kubenswrapper[4861]: I0310 19:11:31.180154 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cgbl4\" (UniqueName: \"kubernetes.io/projected/ee43121f-d7e2-4770-81c4-833fbbe0373f-kube-api-access-cgbl4\") pod \"nova-cell1-f866-account-create-update-4d26l\" (UID: \"ee43121f-d7e2-4770-81c4-833fbbe0373f\") " pod="openstack/nova-cell1-f866-account-create-update-4d26l" Mar 10 19:11:31 crc kubenswrapper[4861]: I0310 19:11:31.228689 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cf7a-account-create-update-jgz8n" Mar 10 19:11:31 crc kubenswrapper[4861]: I0310 19:11:31.336229 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-9lzhz"] Mar 10 19:11:31 crc kubenswrapper[4861]: I0310 19:11:31.379121 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-gnw7n" Mar 10 19:11:31 crc kubenswrapper[4861]: I0310 19:11:31.404545 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-52e5-account-create-update-h7sd5"] Mar 10 19:11:31 crc kubenswrapper[4861]: I0310 19:11:31.448199 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-f866-account-create-update-4d26l" Mar 10 19:11:31 crc kubenswrapper[4861]: I0310 19:11:31.538351 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-km8tn"] Mar 10 19:11:31 crc kubenswrapper[4861]: I0310 19:11:31.720569 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cf7a-account-create-update-jgz8n"] Mar 10 19:11:31 crc kubenswrapper[4861]: I0310 19:11:31.865040 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-9lzhz" event={"ID":"a7cd3d55-5d1c-4697-ab9b-2e1f37bb40f5","Type":"ContainerStarted","Data":"69864ee6a4aa226b8bfa33bff3965bcd0f4bc2c9e7031e9ccbe3edb3aaf4f16c"} Mar 10 19:11:31 crc kubenswrapper[4861]: I0310 19:11:31.865319 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-9lzhz" event={"ID":"a7cd3d55-5d1c-4697-ab9b-2e1f37bb40f5","Type":"ContainerStarted","Data":"da2aac76beab0b3c13b8477ed6802aad35c62959a05583d1fc4ddfee0e9eac23"} Mar 10 19:11:31 crc kubenswrapper[4861]: I0310 19:11:31.867283 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cf7a-account-create-update-jgz8n" event={"ID":"c1597b9b-8273-4a5c-8f46-8a18587e059f","Type":"ContainerStarted","Data":"98c0e120c46dc22e044f53efbb0a75a636fd0d6adcc82d5647e2f770fd0a5f81"} Mar 10 19:11:31 crc kubenswrapper[4861]: I0310 19:11:31.871465 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-km8tn" 
event={"ID":"ae63cfad-11fb-40ca-955e-2c445948a50c","Type":"ContainerStarted","Data":"74bcec21c6be8fd3343a57d71b65947c5bb8baf435a76f166a4bc0727284b9e3"} Mar 10 19:11:31 crc kubenswrapper[4861]: I0310 19:11:31.876514 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-52e5-account-create-update-h7sd5" event={"ID":"1a8696b2-9ca9-4f41-96bd-58bcb4b74cb0","Type":"ContainerStarted","Data":"6db1199a270e11a4fd456859f2f886658787019bc3921d6921358e8c10d83292"} Mar 10 19:11:31 crc kubenswrapper[4861]: I0310 19:11:31.876544 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-52e5-account-create-update-h7sd5" event={"ID":"1a8696b2-9ca9-4f41-96bd-58bcb4b74cb0","Type":"ContainerStarted","Data":"c26a0aa6a71977e7cc70753ba230a211a7d4b1076283f00f2b7c700cd38de3d0"} Mar 10 19:11:31 crc kubenswrapper[4861]: I0310 19:11:31.885761 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-db-create-9lzhz" podStartSLOduration=1.8857476389999999 podStartE2EDuration="1.885747639s" podCreationTimestamp="2026-03-10 19:11:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-10 19:11:31.879941027 +0000 UTC m=+1435.643376987" watchObservedRunningTime="2026-03-10 19:11:31.885747639 +0000 UTC m=+1435.649183599" Mar 10 19:11:31 crc kubenswrapper[4861]: I0310 19:11:31.902267 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-52e5-account-create-update-h7sd5" podStartSLOduration=1.902245307 podStartE2EDuration="1.902245307s" podCreationTimestamp="2026-03-10 19:11:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-10 19:11:31.900006035 +0000 UTC m=+1435.663441995" watchObservedRunningTime="2026-03-10 19:11:31.902245307 +0000 UTC m=+1435.665681267" Mar 10 19:11:31 crc 
kubenswrapper[4861]: I0310 19:11:31.951440 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-gnw7n"] Mar 10 19:11:31 crc kubenswrapper[4861]: I0310 19:11:31.975155 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-f866-account-create-update-4d26l"] Mar 10 19:11:32 crc kubenswrapper[4861]: I0310 19:11:32.888330 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b2448498-19da-4b93-b300-6425cb5ab4ba","Type":"ContainerStarted","Data":"837125a213097c5b342b324689f08f275ab6288a559752ab66c9f23748a67aa2"} Mar 10 19:11:32 crc kubenswrapper[4861]: I0310 19:11:32.888651 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Mar 10 19:11:32 crc kubenswrapper[4861]: I0310 19:11:32.889559 4861 generic.go:334] "Generic (PLEG): container finished" podID="a7cd3d55-5d1c-4697-ab9b-2e1f37bb40f5" containerID="69864ee6a4aa226b8bfa33bff3965bcd0f4bc2c9e7031e9ccbe3edb3aaf4f16c" exitCode=0 Mar 10 19:11:32 crc kubenswrapper[4861]: I0310 19:11:32.889610 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-9lzhz" event={"ID":"a7cd3d55-5d1c-4697-ab9b-2e1f37bb40f5","Type":"ContainerDied","Data":"69864ee6a4aa226b8bfa33bff3965bcd0f4bc2c9e7031e9ccbe3edb3aaf4f16c"} Mar 10 19:11:32 crc kubenswrapper[4861]: I0310 19:11:32.891129 4861 generic.go:334] "Generic (PLEG): container finished" podID="c1597b9b-8273-4a5c-8f46-8a18587e059f" containerID="c8036046abc5482ee5c300c86709379ab79593dc384da644048703e8766aed37" exitCode=0 Mar 10 19:11:32 crc kubenswrapper[4861]: I0310 19:11:32.891182 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cf7a-account-create-update-jgz8n" event={"ID":"c1597b9b-8273-4a5c-8f46-8a18587e059f","Type":"ContainerDied","Data":"c8036046abc5482ee5c300c86709379ab79593dc384da644048703e8766aed37"} Mar 10 19:11:32 crc kubenswrapper[4861]: I0310 19:11:32.893860 4861 
generic.go:334] "Generic (PLEG): container finished" podID="ae63cfad-11fb-40ca-955e-2c445948a50c" containerID="5cc8655b0aee62791274d55957ff7a54178e2f1a83cdb5b1207033c268396b15" exitCode=0 Mar 10 19:11:32 crc kubenswrapper[4861]: I0310 19:11:32.893912 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-km8tn" event={"ID":"ae63cfad-11fb-40ca-955e-2c445948a50c","Type":"ContainerDied","Data":"5cc8655b0aee62791274d55957ff7a54178e2f1a83cdb5b1207033c268396b15"} Mar 10 19:11:32 crc kubenswrapper[4861]: I0310 19:11:32.896074 4861 generic.go:334] "Generic (PLEG): container finished" podID="63d6fcdc-e673-4207-8da8-3c3f3681bcaf" containerID="689681124f00ab24f83edf4e74e95e43f454ab1641998fc54dafd3be3c70f0ae" exitCode=0 Mar 10 19:11:32 crc kubenswrapper[4861]: I0310 19:11:32.896119 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-gnw7n" event={"ID":"63d6fcdc-e673-4207-8da8-3c3f3681bcaf","Type":"ContainerDied","Data":"689681124f00ab24f83edf4e74e95e43f454ab1641998fc54dafd3be3c70f0ae"} Mar 10 19:11:32 crc kubenswrapper[4861]: I0310 19:11:32.896134 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-gnw7n" event={"ID":"63d6fcdc-e673-4207-8da8-3c3f3681bcaf","Type":"ContainerStarted","Data":"f56fab82b053ca1b074ad268c41f3f280addcebcfdc6ff3404778deebe018d47"} Mar 10 19:11:32 crc kubenswrapper[4861]: I0310 19:11:32.897947 4861 generic.go:334] "Generic (PLEG): container finished" podID="1a8696b2-9ca9-4f41-96bd-58bcb4b74cb0" containerID="6db1199a270e11a4fd456859f2f886658787019bc3921d6921358e8c10d83292" exitCode=0 Mar 10 19:11:32 crc kubenswrapper[4861]: I0310 19:11:32.897986 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-52e5-account-create-update-h7sd5" event={"ID":"1a8696b2-9ca9-4f41-96bd-58bcb4b74cb0","Type":"ContainerDied","Data":"6db1199a270e11a4fd456859f2f886658787019bc3921d6921358e8c10d83292"} Mar 10 19:11:32 crc kubenswrapper[4861]: 
I0310 19:11:32.902544 4861 generic.go:334] "Generic (PLEG): container finished" podID="ee43121f-d7e2-4770-81c4-833fbbe0373f" containerID="cf206f80b2a9fbb6741e9b611d3178739258c140f0f5e9fd2dbecf804b0f2642" exitCode=0 Mar 10 19:11:32 crc kubenswrapper[4861]: I0310 19:11:32.902577 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-f866-account-create-update-4d26l" event={"ID":"ee43121f-d7e2-4770-81c4-833fbbe0373f","Type":"ContainerDied","Data":"cf206f80b2a9fbb6741e9b611d3178739258c140f0f5e9fd2dbecf804b0f2642"} Mar 10 19:11:32 crc kubenswrapper[4861]: I0310 19:11:32.902595 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-f866-account-create-update-4d26l" event={"ID":"ee43121f-d7e2-4770-81c4-833fbbe0373f","Type":"ContainerStarted","Data":"508c64cb2d78e4279a30fa07ee7c227abe3e4cc6ccba7abf5005be7e94ddb8e0"} Mar 10 19:11:32 crc kubenswrapper[4861]: I0310 19:11:32.909331 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.890755998 podStartE2EDuration="5.909315937s" podCreationTimestamp="2026-03-10 19:11:27 +0000 UTC" firstStartedPulling="2026-03-10 19:11:27.657998338 +0000 UTC m=+1431.421434298" lastFinishedPulling="2026-03-10 19:11:31.676558277 +0000 UTC m=+1435.439994237" observedRunningTime="2026-03-10 19:11:32.904992466 +0000 UTC m=+1436.668428446" watchObservedRunningTime="2026-03-10 19:11:32.909315937 +0000 UTC m=+1436.672751897" Mar 10 19:11:34 crc kubenswrapper[4861]: I0310 19:11:34.438079 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-gnw7n" Mar 10 19:11:34 crc kubenswrapper[4861]: I0310 19:11:34.532519 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/63d6fcdc-e673-4207-8da8-3c3f3681bcaf-operator-scripts\") pod \"63d6fcdc-e673-4207-8da8-3c3f3681bcaf\" (UID: \"63d6fcdc-e673-4207-8da8-3c3f3681bcaf\") " Mar 10 19:11:34 crc kubenswrapper[4861]: I0310 19:11:34.532988 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-529cs\" (UniqueName: \"kubernetes.io/projected/63d6fcdc-e673-4207-8da8-3c3f3681bcaf-kube-api-access-529cs\") pod \"63d6fcdc-e673-4207-8da8-3c3f3681bcaf\" (UID: \"63d6fcdc-e673-4207-8da8-3c3f3681bcaf\") " Mar 10 19:11:34 crc kubenswrapper[4861]: I0310 19:11:34.533460 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/63d6fcdc-e673-4207-8da8-3c3f3681bcaf-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "63d6fcdc-e673-4207-8da8-3c3f3681bcaf" (UID: "63d6fcdc-e673-4207-8da8-3c3f3681bcaf"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 10 19:11:34 crc kubenswrapper[4861]: I0310 19:11:34.544013 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/63d6fcdc-e673-4207-8da8-3c3f3681bcaf-kube-api-access-529cs" (OuterVolumeSpecName: "kube-api-access-529cs") pod "63d6fcdc-e673-4207-8da8-3c3f3681bcaf" (UID: "63d6fcdc-e673-4207-8da8-3c3f3681bcaf"). InnerVolumeSpecName "kube-api-access-529cs". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 10 19:11:34 crc kubenswrapper[4861]: I0310 19:11:34.620381 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-f866-account-create-update-4d26l" Mar 10 19:11:34 crc kubenswrapper[4861]: I0310 19:11:34.624664 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-9lzhz" Mar 10 19:11:34 crc kubenswrapper[4861]: I0310 19:11:34.628662 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-52e5-account-create-update-h7sd5" Mar 10 19:11:34 crc kubenswrapper[4861]: I0310 19:11:34.635123 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-km8tn" Mar 10 19:11:34 crc kubenswrapper[4861]: I0310 19:11:34.635316 4861 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/63d6fcdc-e673-4207-8da8-3c3f3681bcaf-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 10 19:11:34 crc kubenswrapper[4861]: I0310 19:11:34.635330 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-529cs\" (UniqueName: \"kubernetes.io/projected/63d6fcdc-e673-4207-8da8-3c3f3681bcaf-kube-api-access-529cs\") on node \"crc\" DevicePath \"\"" Mar 10 19:11:34 crc kubenswrapper[4861]: I0310 19:11:34.639509 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cf7a-account-create-update-jgz8n" Mar 10 19:11:34 crc kubenswrapper[4861]: I0310 19:11:34.738255 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1a8696b2-9ca9-4f41-96bd-58bcb4b74cb0-operator-scripts\") pod \"1a8696b2-9ca9-4f41-96bd-58bcb4b74cb0\" (UID: \"1a8696b2-9ca9-4f41-96bd-58bcb4b74cb0\") " Mar 10 19:11:34 crc kubenswrapper[4861]: I0310 19:11:34.738334 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7rc8g\" (UniqueName: \"kubernetes.io/projected/c1597b9b-8273-4a5c-8f46-8a18587e059f-kube-api-access-7rc8g\") pod \"c1597b9b-8273-4a5c-8f46-8a18587e059f\" (UID: \"c1597b9b-8273-4a5c-8f46-8a18587e059f\") " Mar 10 19:11:34 crc kubenswrapper[4861]: I0310 19:11:34.738390 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ee43121f-d7e2-4770-81c4-833fbbe0373f-operator-scripts\") pod \"ee43121f-d7e2-4770-81c4-833fbbe0373f\" (UID: \"ee43121f-d7e2-4770-81c4-833fbbe0373f\") " Mar 10 19:11:34 crc kubenswrapper[4861]: I0310 19:11:34.738413 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ffj57\" (UniqueName: \"kubernetes.io/projected/1a8696b2-9ca9-4f41-96bd-58bcb4b74cb0-kube-api-access-ffj57\") pod \"1a8696b2-9ca9-4f41-96bd-58bcb4b74cb0\" (UID: \"1a8696b2-9ca9-4f41-96bd-58bcb4b74cb0\") " Mar 10 19:11:34 crc kubenswrapper[4861]: I0310 19:11:34.738452 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c1597b9b-8273-4a5c-8f46-8a18587e059f-operator-scripts\") pod \"c1597b9b-8273-4a5c-8f46-8a18587e059f\" (UID: \"c1597b9b-8273-4a5c-8f46-8a18587e059f\") " Mar 10 19:11:34 crc kubenswrapper[4861]: I0310 19:11:34.738470 4861 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ae63cfad-11fb-40ca-955e-2c445948a50c-operator-scripts\") pod \"ae63cfad-11fb-40ca-955e-2c445948a50c\" (UID: \"ae63cfad-11fb-40ca-955e-2c445948a50c\") " Mar 10 19:11:34 crc kubenswrapper[4861]: I0310 19:11:34.738518 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sl6l4\" (UniqueName: \"kubernetes.io/projected/ae63cfad-11fb-40ca-955e-2c445948a50c-kube-api-access-sl6l4\") pod \"ae63cfad-11fb-40ca-955e-2c445948a50c\" (UID: \"ae63cfad-11fb-40ca-955e-2c445948a50c\") " Mar 10 19:11:34 crc kubenswrapper[4861]: I0310 19:11:34.738536 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cgbl4\" (UniqueName: \"kubernetes.io/projected/ee43121f-d7e2-4770-81c4-833fbbe0373f-kube-api-access-cgbl4\") pod \"ee43121f-d7e2-4770-81c4-833fbbe0373f\" (UID: \"ee43121f-d7e2-4770-81c4-833fbbe0373f\") " Mar 10 19:11:34 crc kubenswrapper[4861]: I0310 19:11:34.738684 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a7cd3d55-5d1c-4697-ab9b-2e1f37bb40f5-operator-scripts\") pod \"a7cd3d55-5d1c-4697-ab9b-2e1f37bb40f5\" (UID: \"a7cd3d55-5d1c-4697-ab9b-2e1f37bb40f5\") " Mar 10 19:11:34 crc kubenswrapper[4861]: I0310 19:11:34.738746 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hc6vz\" (UniqueName: \"kubernetes.io/projected/a7cd3d55-5d1c-4697-ab9b-2e1f37bb40f5-kube-api-access-hc6vz\") pod \"a7cd3d55-5d1c-4697-ab9b-2e1f37bb40f5\" (UID: \"a7cd3d55-5d1c-4697-ab9b-2e1f37bb40f5\") " Mar 10 19:11:34 crc kubenswrapper[4861]: I0310 19:11:34.741644 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1a8696b2-9ca9-4f41-96bd-58bcb4b74cb0-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod 
"1a8696b2-9ca9-4f41-96bd-58bcb4b74cb0" (UID: "1a8696b2-9ca9-4f41-96bd-58bcb4b74cb0"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 10 19:11:34 crc kubenswrapper[4861]: I0310 19:11:34.741924 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c1597b9b-8273-4a5c-8f46-8a18587e059f-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "c1597b9b-8273-4a5c-8f46-8a18587e059f" (UID: "c1597b9b-8273-4a5c-8f46-8a18587e059f"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 10 19:11:34 crc kubenswrapper[4861]: I0310 19:11:34.742194 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ee43121f-d7e2-4770-81c4-833fbbe0373f-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "ee43121f-d7e2-4770-81c4-833fbbe0373f" (UID: "ee43121f-d7e2-4770-81c4-833fbbe0373f"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 10 19:11:34 crc kubenswrapper[4861]: I0310 19:11:34.744121 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c1597b9b-8273-4a5c-8f46-8a18587e059f-kube-api-access-7rc8g" (OuterVolumeSpecName: "kube-api-access-7rc8g") pod "c1597b9b-8273-4a5c-8f46-8a18587e059f" (UID: "c1597b9b-8273-4a5c-8f46-8a18587e059f"). InnerVolumeSpecName "kube-api-access-7rc8g". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 10 19:11:34 crc kubenswrapper[4861]: I0310 19:11:34.744746 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ae63cfad-11fb-40ca-955e-2c445948a50c-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "ae63cfad-11fb-40ca-955e-2c445948a50c" (UID: "ae63cfad-11fb-40ca-955e-2c445948a50c"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 10 19:11:34 crc kubenswrapper[4861]: I0310 19:11:34.744856 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a7cd3d55-5d1c-4697-ab9b-2e1f37bb40f5-kube-api-access-hc6vz" (OuterVolumeSpecName: "kube-api-access-hc6vz") pod "a7cd3d55-5d1c-4697-ab9b-2e1f37bb40f5" (UID: "a7cd3d55-5d1c-4697-ab9b-2e1f37bb40f5"). InnerVolumeSpecName "kube-api-access-hc6vz". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 10 19:11:34 crc kubenswrapper[4861]: I0310 19:11:34.745046 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ee43121f-d7e2-4770-81c4-833fbbe0373f-kube-api-access-cgbl4" (OuterVolumeSpecName: "kube-api-access-cgbl4") pod "ee43121f-d7e2-4770-81c4-833fbbe0373f" (UID: "ee43121f-d7e2-4770-81c4-833fbbe0373f"). InnerVolumeSpecName "kube-api-access-cgbl4". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 10 19:11:34 crc kubenswrapper[4861]: I0310 19:11:34.745259 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a7cd3d55-5d1c-4697-ab9b-2e1f37bb40f5-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "a7cd3d55-5d1c-4697-ab9b-2e1f37bb40f5" (UID: "a7cd3d55-5d1c-4697-ab9b-2e1f37bb40f5"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 10 19:11:34 crc kubenswrapper[4861]: I0310 19:11:34.749768 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ae63cfad-11fb-40ca-955e-2c445948a50c-kube-api-access-sl6l4" (OuterVolumeSpecName: "kube-api-access-sl6l4") pod "ae63cfad-11fb-40ca-955e-2c445948a50c" (UID: "ae63cfad-11fb-40ca-955e-2c445948a50c"). InnerVolumeSpecName "kube-api-access-sl6l4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 10 19:11:34 crc kubenswrapper[4861]: I0310 19:11:34.749843 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1a8696b2-9ca9-4f41-96bd-58bcb4b74cb0-kube-api-access-ffj57" (OuterVolumeSpecName: "kube-api-access-ffj57") pod "1a8696b2-9ca9-4f41-96bd-58bcb4b74cb0" (UID: "1a8696b2-9ca9-4f41-96bd-58bcb4b74cb0"). InnerVolumeSpecName "kube-api-access-ffj57". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 10 19:11:34 crc kubenswrapper[4861]: I0310 19:11:34.842046 4861 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ee43121f-d7e2-4770-81c4-833fbbe0373f-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 10 19:11:34 crc kubenswrapper[4861]: I0310 19:11:34.842105 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ffj57\" (UniqueName: \"kubernetes.io/projected/1a8696b2-9ca9-4f41-96bd-58bcb4b74cb0-kube-api-access-ffj57\") on node \"crc\" DevicePath \"\"" Mar 10 19:11:34 crc kubenswrapper[4861]: I0310 19:11:34.842120 4861 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c1597b9b-8273-4a5c-8f46-8a18587e059f-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 10 19:11:34 crc kubenswrapper[4861]: I0310 19:11:34.842132 4861 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ae63cfad-11fb-40ca-955e-2c445948a50c-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 10 19:11:34 crc kubenswrapper[4861]: I0310 19:11:34.842144 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sl6l4\" (UniqueName: \"kubernetes.io/projected/ae63cfad-11fb-40ca-955e-2c445948a50c-kube-api-access-sl6l4\") on node \"crc\" DevicePath \"\"" Mar 10 19:11:34 crc kubenswrapper[4861]: I0310 19:11:34.842155 4861 reconciler_common.go:293] 
"Volume detached for volume \"kube-api-access-cgbl4\" (UniqueName: \"kubernetes.io/projected/ee43121f-d7e2-4770-81c4-833fbbe0373f-kube-api-access-cgbl4\") on node \"crc\" DevicePath \"\"" Mar 10 19:11:34 crc kubenswrapper[4861]: I0310 19:11:34.842166 4861 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a7cd3d55-5d1c-4697-ab9b-2e1f37bb40f5-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 10 19:11:34 crc kubenswrapper[4861]: I0310 19:11:34.842177 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hc6vz\" (UniqueName: \"kubernetes.io/projected/a7cd3d55-5d1c-4697-ab9b-2e1f37bb40f5-kube-api-access-hc6vz\") on node \"crc\" DevicePath \"\"" Mar 10 19:11:34 crc kubenswrapper[4861]: I0310 19:11:34.842190 4861 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1a8696b2-9ca9-4f41-96bd-58bcb4b74cb0-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 10 19:11:34 crc kubenswrapper[4861]: I0310 19:11:34.842201 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7rc8g\" (UniqueName: \"kubernetes.io/projected/c1597b9b-8273-4a5c-8f46-8a18587e059f-kube-api-access-7rc8g\") on node \"crc\" DevicePath \"\"" Mar 10 19:11:34 crc kubenswrapper[4861]: I0310 19:11:34.918604 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-km8tn" event={"ID":"ae63cfad-11fb-40ca-955e-2c445948a50c","Type":"ContainerDied","Data":"74bcec21c6be8fd3343a57d71b65947c5bb8baf435a76f166a4bc0727284b9e3"} Mar 10 19:11:34 crc kubenswrapper[4861]: I0310 19:11:34.918674 4861 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="74bcec21c6be8fd3343a57d71b65947c5bb8baf435a76f166a4bc0727284b9e3" Mar 10 19:11:34 crc kubenswrapper[4861]: I0310 19:11:34.918630 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-km8tn" Mar 10 19:11:34 crc kubenswrapper[4861]: I0310 19:11:34.921238 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-gnw7n" event={"ID":"63d6fcdc-e673-4207-8da8-3c3f3681bcaf","Type":"ContainerDied","Data":"f56fab82b053ca1b074ad268c41f3f280addcebcfdc6ff3404778deebe018d47"} Mar 10 19:11:34 crc kubenswrapper[4861]: I0310 19:11:34.921285 4861 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f56fab82b053ca1b074ad268c41f3f280addcebcfdc6ff3404778deebe018d47" Mar 10 19:11:34 crc kubenswrapper[4861]: I0310 19:11:34.921350 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-gnw7n" Mar 10 19:11:34 crc kubenswrapper[4861]: I0310 19:11:34.928210 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-52e5-account-create-update-h7sd5" event={"ID":"1a8696b2-9ca9-4f41-96bd-58bcb4b74cb0","Type":"ContainerDied","Data":"c26a0aa6a71977e7cc70753ba230a211a7d4b1076283f00f2b7c700cd38de3d0"} Mar 10 19:11:34 crc kubenswrapper[4861]: I0310 19:11:34.928270 4861 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c26a0aa6a71977e7cc70753ba230a211a7d4b1076283f00f2b7c700cd38de3d0" Mar 10 19:11:34 crc kubenswrapper[4861]: I0310 19:11:34.928360 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-52e5-account-create-update-h7sd5" Mar 10 19:11:34 crc kubenswrapper[4861]: I0310 19:11:34.943983 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-f866-account-create-update-4d26l" event={"ID":"ee43121f-d7e2-4770-81c4-833fbbe0373f","Type":"ContainerDied","Data":"508c64cb2d78e4279a30fa07ee7c227abe3e4cc6ccba7abf5005be7e94ddb8e0"} Mar 10 19:11:34 crc kubenswrapper[4861]: I0310 19:11:34.944035 4861 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="508c64cb2d78e4279a30fa07ee7c227abe3e4cc6ccba7abf5005be7e94ddb8e0" Mar 10 19:11:34 crc kubenswrapper[4861]: I0310 19:11:34.944100 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-f866-account-create-update-4d26l" Mar 10 19:11:34 crc kubenswrapper[4861]: I0310 19:11:34.949169 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-9lzhz" event={"ID":"a7cd3d55-5d1c-4697-ab9b-2e1f37bb40f5","Type":"ContainerDied","Data":"da2aac76beab0b3c13b8477ed6802aad35c62959a05583d1fc4ddfee0e9eac23"} Mar 10 19:11:34 crc kubenswrapper[4861]: I0310 19:11:34.949196 4861 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="da2aac76beab0b3c13b8477ed6802aad35c62959a05583d1fc4ddfee0e9eac23" Mar 10 19:11:34 crc kubenswrapper[4861]: I0310 19:11:34.949255 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-9lzhz" Mar 10 19:11:34 crc kubenswrapper[4861]: I0310 19:11:34.951837 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cf7a-account-create-update-jgz8n" event={"ID":"c1597b9b-8273-4a5c-8f46-8a18587e059f","Type":"ContainerDied","Data":"98c0e120c46dc22e044f53efbb0a75a636fd0d6adcc82d5647e2f770fd0a5f81"} Mar 10 19:11:34 crc kubenswrapper[4861]: I0310 19:11:34.951879 4861 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="98c0e120c46dc22e044f53efbb0a75a636fd0d6adcc82d5647e2f770fd0a5f81" Mar 10 19:11:34 crc kubenswrapper[4861]: I0310 19:11:34.951955 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cf7a-account-create-update-jgz8n" Mar 10 19:11:35 crc kubenswrapper[4861]: I0310 19:11:35.507360 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-7b4d7bd5c6-xts5s" Mar 10 19:11:35 crc kubenswrapper[4861]: I0310 19:11:35.559277 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/42290f46-99bb-4386-a400-44483968dc69-config\") pod \"42290f46-99bb-4386-a400-44483968dc69\" (UID: \"42290f46-99bb-4386-a400-44483968dc69\") " Mar 10 19:11:35 crc kubenswrapper[4861]: I0310 19:11:35.559424 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/42290f46-99bb-4386-a400-44483968dc69-combined-ca-bundle\") pod \"42290f46-99bb-4386-a400-44483968dc69\" (UID: \"42290f46-99bb-4386-a400-44483968dc69\") " Mar 10 19:11:35 crc kubenswrapper[4861]: I0310 19:11:35.559919 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/42290f46-99bb-4386-a400-44483968dc69-httpd-config\") pod \"42290f46-99bb-4386-a400-44483968dc69\" (UID: 
\"42290f46-99bb-4386-a400-44483968dc69\") " Mar 10 19:11:35 crc kubenswrapper[4861]: I0310 19:11:35.560066 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/42290f46-99bb-4386-a400-44483968dc69-ovndb-tls-certs\") pod \"42290f46-99bb-4386-a400-44483968dc69\" (UID: \"42290f46-99bb-4386-a400-44483968dc69\") " Mar 10 19:11:35 crc kubenswrapper[4861]: I0310 19:11:35.560097 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nvtwn\" (UniqueName: \"kubernetes.io/projected/42290f46-99bb-4386-a400-44483968dc69-kube-api-access-nvtwn\") pod \"42290f46-99bb-4386-a400-44483968dc69\" (UID: \"42290f46-99bb-4386-a400-44483968dc69\") " Mar 10 19:11:35 crc kubenswrapper[4861]: I0310 19:11:35.574417 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/42290f46-99bb-4386-a400-44483968dc69-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "42290f46-99bb-4386-a400-44483968dc69" (UID: "42290f46-99bb-4386-a400-44483968dc69"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 10 19:11:35 crc kubenswrapper[4861]: I0310 19:11:35.575385 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/42290f46-99bb-4386-a400-44483968dc69-kube-api-access-nvtwn" (OuterVolumeSpecName: "kube-api-access-nvtwn") pod "42290f46-99bb-4386-a400-44483968dc69" (UID: "42290f46-99bb-4386-a400-44483968dc69"). InnerVolumeSpecName "kube-api-access-nvtwn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 10 19:11:35 crc kubenswrapper[4861]: I0310 19:11:35.606002 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Mar 10 19:11:35 crc kubenswrapper[4861]: I0310 19:11:35.606407 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="c16ced7d-2645-42db-abc8-266267b6de4c" containerName="glance-log" containerID="cri-o://eba307c015ebf3b6494ca54b629a1ceb28eb5186989480dc651d2c2ed68bfb9e" gracePeriod=30 Mar 10 19:11:35 crc kubenswrapper[4861]: I0310 19:11:35.606602 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="c16ced7d-2645-42db-abc8-266267b6de4c" containerName="glance-httpd" containerID="cri-o://8724e9f95caf76f2229af962d390f418057ba1ded4a3ff3dce212acea3927f0b" gracePeriod=30 Mar 10 19:11:35 crc kubenswrapper[4861]: I0310 19:11:35.646414 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/42290f46-99bb-4386-a400-44483968dc69-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "42290f46-99bb-4386-a400-44483968dc69" (UID: "42290f46-99bb-4386-a400-44483968dc69"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 10 19:11:35 crc kubenswrapper[4861]: I0310 19:11:35.649405 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/42290f46-99bb-4386-a400-44483968dc69-config" (OuterVolumeSpecName: "config") pod "42290f46-99bb-4386-a400-44483968dc69" (UID: "42290f46-99bb-4386-a400-44483968dc69"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 10 19:11:35 crc kubenswrapper[4861]: I0310 19:11:35.659195 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/42290f46-99bb-4386-a400-44483968dc69-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "42290f46-99bb-4386-a400-44483968dc69" (UID: "42290f46-99bb-4386-a400-44483968dc69"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 10 19:11:35 crc kubenswrapper[4861]: I0310 19:11:35.661629 4861 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/42290f46-99bb-4386-a400-44483968dc69-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 10 19:11:35 crc kubenswrapper[4861]: I0310 19:11:35.661659 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nvtwn\" (UniqueName: \"kubernetes.io/projected/42290f46-99bb-4386-a400-44483968dc69-kube-api-access-nvtwn\") on node \"crc\" DevicePath \"\"" Mar 10 19:11:35 crc kubenswrapper[4861]: I0310 19:11:35.661671 4861 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/42290f46-99bb-4386-a400-44483968dc69-config\") on node \"crc\" DevicePath \"\"" Mar 10 19:11:35 crc kubenswrapper[4861]: I0310 19:11:35.661681 4861 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/42290f46-99bb-4386-a400-44483968dc69-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 10 19:11:35 crc kubenswrapper[4861]: I0310 19:11:35.661689 4861 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/42290f46-99bb-4386-a400-44483968dc69-httpd-config\") on node \"crc\" DevicePath \"\"" Mar 10 19:11:35 crc kubenswrapper[4861]: I0310 19:11:35.967457 4861 generic.go:334] "Generic (PLEG): container finished" podID="42290f46-99bb-4386-a400-44483968dc69" 
containerID="f323d09ee12080395149a91a2306c31acfef39ef7680d3a9e0baf22599acdd4d" exitCode=0 Mar 10 19:11:35 crc kubenswrapper[4861]: I0310 19:11:35.967533 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7b4d7bd5c6-xts5s" event={"ID":"42290f46-99bb-4386-a400-44483968dc69","Type":"ContainerDied","Data":"f323d09ee12080395149a91a2306c31acfef39ef7680d3a9e0baf22599acdd4d"} Mar 10 19:11:35 crc kubenswrapper[4861]: I0310 19:11:35.967568 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7b4d7bd5c6-xts5s" event={"ID":"42290f46-99bb-4386-a400-44483968dc69","Type":"ContainerDied","Data":"bd944e9749e0b31b1a6b3ec341d4edc3efd740775779c4803788aa2ba919701f"} Mar 10 19:11:35 crc kubenswrapper[4861]: I0310 19:11:35.967592 4861 scope.go:117] "RemoveContainer" containerID="3849223cfd7426542d297e2e79b208d945c05f157603004e0f0eeccb1471b9d5" Mar 10 19:11:35 crc kubenswrapper[4861]: I0310 19:11:35.967645 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-7b4d7bd5c6-xts5s" Mar 10 19:11:35 crc kubenswrapper[4861]: I0310 19:11:35.976043 4861 generic.go:334] "Generic (PLEG): container finished" podID="c16ced7d-2645-42db-abc8-266267b6de4c" containerID="eba307c015ebf3b6494ca54b629a1ceb28eb5186989480dc651d2c2ed68bfb9e" exitCode=143 Mar 10 19:11:35 crc kubenswrapper[4861]: I0310 19:11:35.976232 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"c16ced7d-2645-42db-abc8-266267b6de4c","Type":"ContainerDied","Data":"eba307c015ebf3b6494ca54b629a1ceb28eb5186989480dc651d2c2ed68bfb9e"} Mar 10 19:11:35 crc kubenswrapper[4861]: I0310 19:11:35.997147 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-7b4d7bd5c6-xts5s"] Mar 10 19:11:36 crc kubenswrapper[4861]: I0310 19:11:36.003625 4861 scope.go:117] "RemoveContainer" containerID="f323d09ee12080395149a91a2306c31acfef39ef7680d3a9e0baf22599acdd4d" Mar 10 19:11:36 crc kubenswrapper[4861]: I0310 19:11:36.004607 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-7b4d7bd5c6-xts5s"] Mar 10 19:11:36 crc kubenswrapper[4861]: I0310 19:11:36.020579 4861 scope.go:117] "RemoveContainer" containerID="3849223cfd7426542d297e2e79b208d945c05f157603004e0f0eeccb1471b9d5" Mar 10 19:11:36 crc kubenswrapper[4861]: E0310 19:11:36.021045 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3849223cfd7426542d297e2e79b208d945c05f157603004e0f0eeccb1471b9d5\": container with ID starting with 3849223cfd7426542d297e2e79b208d945c05f157603004e0f0eeccb1471b9d5 not found: ID does not exist" containerID="3849223cfd7426542d297e2e79b208d945c05f157603004e0f0eeccb1471b9d5" Mar 10 19:11:36 crc kubenswrapper[4861]: I0310 19:11:36.021089 4861 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"3849223cfd7426542d297e2e79b208d945c05f157603004e0f0eeccb1471b9d5"} err="failed to get container status \"3849223cfd7426542d297e2e79b208d945c05f157603004e0f0eeccb1471b9d5\": rpc error: code = NotFound desc = could not find container \"3849223cfd7426542d297e2e79b208d945c05f157603004e0f0eeccb1471b9d5\": container with ID starting with 3849223cfd7426542d297e2e79b208d945c05f157603004e0f0eeccb1471b9d5 not found: ID does not exist" Mar 10 19:11:36 crc kubenswrapper[4861]: I0310 19:11:36.021115 4861 scope.go:117] "RemoveContainer" containerID="f323d09ee12080395149a91a2306c31acfef39ef7680d3a9e0baf22599acdd4d" Mar 10 19:11:36 crc kubenswrapper[4861]: E0310 19:11:36.021476 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f323d09ee12080395149a91a2306c31acfef39ef7680d3a9e0baf22599acdd4d\": container with ID starting with f323d09ee12080395149a91a2306c31acfef39ef7680d3a9e0baf22599acdd4d not found: ID does not exist" containerID="f323d09ee12080395149a91a2306c31acfef39ef7680d3a9e0baf22599acdd4d" Mar 10 19:11:36 crc kubenswrapper[4861]: I0310 19:11:36.021560 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f323d09ee12080395149a91a2306c31acfef39ef7680d3a9e0baf22599acdd4d"} err="failed to get container status \"f323d09ee12080395149a91a2306c31acfef39ef7680d3a9e0baf22599acdd4d\": rpc error: code = NotFound desc = could not find container \"f323d09ee12080395149a91a2306c31acfef39ef7680d3a9e0baf22599acdd4d\": container with ID starting with f323d09ee12080395149a91a2306c31acfef39ef7680d3a9e0baf22599acdd4d not found: ID does not exist" Mar 10 19:11:36 crc kubenswrapper[4861]: I0310 19:11:36.139899 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-db-sync-m77bt"] Mar 10 19:11:36 crc kubenswrapper[4861]: E0310 19:11:36.140617 4861 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="63d6fcdc-e673-4207-8da8-3c3f3681bcaf" containerName="mariadb-database-create" Mar 10 19:11:36 crc kubenswrapper[4861]: I0310 19:11:36.140687 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="63d6fcdc-e673-4207-8da8-3c3f3681bcaf" containerName="mariadb-database-create" Mar 10 19:11:36 crc kubenswrapper[4861]: E0310 19:11:36.140766 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ae63cfad-11fb-40ca-955e-2c445948a50c" containerName="mariadb-database-create" Mar 10 19:11:36 crc kubenswrapper[4861]: I0310 19:11:36.140820 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="ae63cfad-11fb-40ca-955e-2c445948a50c" containerName="mariadb-database-create" Mar 10 19:11:36 crc kubenswrapper[4861]: E0310 19:11:36.140874 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1a8696b2-9ca9-4f41-96bd-58bcb4b74cb0" containerName="mariadb-account-create-update" Mar 10 19:11:36 crc kubenswrapper[4861]: I0310 19:11:36.140924 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="1a8696b2-9ca9-4f41-96bd-58bcb4b74cb0" containerName="mariadb-account-create-update" Mar 10 19:11:36 crc kubenswrapper[4861]: E0310 19:11:36.140990 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="42290f46-99bb-4386-a400-44483968dc69" containerName="neutron-api" Mar 10 19:11:36 crc kubenswrapper[4861]: I0310 19:11:36.141040 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="42290f46-99bb-4386-a400-44483968dc69" containerName="neutron-api" Mar 10 19:11:36 crc kubenswrapper[4861]: E0310 19:11:36.141096 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="42290f46-99bb-4386-a400-44483968dc69" containerName="neutron-httpd" Mar 10 19:11:36 crc kubenswrapper[4861]: I0310 19:11:36.141149 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="42290f46-99bb-4386-a400-44483968dc69" containerName="neutron-httpd" Mar 10 19:11:36 crc kubenswrapper[4861]: E0310 19:11:36.141210 4861 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="c1597b9b-8273-4a5c-8f46-8a18587e059f" containerName="mariadb-account-create-update" Mar 10 19:11:36 crc kubenswrapper[4861]: I0310 19:11:36.141264 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="c1597b9b-8273-4a5c-8f46-8a18587e059f" containerName="mariadb-account-create-update" Mar 10 19:11:36 crc kubenswrapper[4861]: E0310 19:11:36.141317 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a7cd3d55-5d1c-4697-ab9b-2e1f37bb40f5" containerName="mariadb-database-create" Mar 10 19:11:36 crc kubenswrapper[4861]: I0310 19:11:36.141365 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="a7cd3d55-5d1c-4697-ab9b-2e1f37bb40f5" containerName="mariadb-database-create" Mar 10 19:11:36 crc kubenswrapper[4861]: E0310 19:11:36.141423 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ee43121f-d7e2-4770-81c4-833fbbe0373f" containerName="mariadb-account-create-update" Mar 10 19:11:36 crc kubenswrapper[4861]: I0310 19:11:36.141477 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="ee43121f-d7e2-4770-81c4-833fbbe0373f" containerName="mariadb-account-create-update" Mar 10 19:11:36 crc kubenswrapper[4861]: I0310 19:11:36.141689 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="63d6fcdc-e673-4207-8da8-3c3f3681bcaf" containerName="mariadb-database-create" Mar 10 19:11:36 crc kubenswrapper[4861]: I0310 19:11:36.141765 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="1a8696b2-9ca9-4f41-96bd-58bcb4b74cb0" containerName="mariadb-account-create-update" Mar 10 19:11:36 crc kubenswrapper[4861]: I0310 19:11:36.141836 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="ee43121f-d7e2-4770-81c4-833fbbe0373f" containerName="mariadb-account-create-update" Mar 10 19:11:36 crc kubenswrapper[4861]: I0310 19:11:36.141888 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="42290f46-99bb-4386-a400-44483968dc69" 
containerName="neutron-httpd" Mar 10 19:11:36 crc kubenswrapper[4861]: I0310 19:11:36.141938 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="c1597b9b-8273-4a5c-8f46-8a18587e059f" containerName="mariadb-account-create-update" Mar 10 19:11:36 crc kubenswrapper[4861]: I0310 19:11:36.142033 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="ae63cfad-11fb-40ca-955e-2c445948a50c" containerName="mariadb-database-create" Mar 10 19:11:36 crc kubenswrapper[4861]: I0310 19:11:36.142087 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="42290f46-99bb-4386-a400-44483968dc69" containerName="neutron-api" Mar 10 19:11:36 crc kubenswrapper[4861]: I0310 19:11:36.142138 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="a7cd3d55-5d1c-4697-ab9b-2e1f37bb40f5" containerName="mariadb-database-create" Mar 10 19:11:36 crc kubenswrapper[4861]: I0310 19:11:36.142786 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-m77bt" Mar 10 19:11:36 crc kubenswrapper[4861]: I0310 19:11:36.145001 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-bgl7k" Mar 10 19:11:36 crc kubenswrapper[4861]: I0310 19:11:36.145178 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Mar 10 19:11:36 crc kubenswrapper[4861]: I0310 19:11:36.145282 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts" Mar 10 19:11:36 crc kubenswrapper[4861]: I0310 19:11:36.157827 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-m77bt"] Mar 10 19:11:36 crc kubenswrapper[4861]: I0310 19:11:36.170691 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/e61c59b6-f849-406f-8680-cb83de220b46-scripts\") pod \"nova-cell0-conductor-db-sync-m77bt\" (UID: \"e61c59b6-f849-406f-8680-cb83de220b46\") " pod="openstack/nova-cell0-conductor-db-sync-m77bt" Mar 10 19:11:36 crc kubenswrapper[4861]: I0310 19:11:36.170998 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6qpvd\" (UniqueName: \"kubernetes.io/projected/e61c59b6-f849-406f-8680-cb83de220b46-kube-api-access-6qpvd\") pod \"nova-cell0-conductor-db-sync-m77bt\" (UID: \"e61c59b6-f849-406f-8680-cb83de220b46\") " pod="openstack/nova-cell0-conductor-db-sync-m77bt" Mar 10 19:11:36 crc kubenswrapper[4861]: I0310 19:11:36.171116 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e61c59b6-f849-406f-8680-cb83de220b46-config-data\") pod \"nova-cell0-conductor-db-sync-m77bt\" (UID: \"e61c59b6-f849-406f-8680-cb83de220b46\") " pod="openstack/nova-cell0-conductor-db-sync-m77bt" Mar 10 19:11:36 crc kubenswrapper[4861]: I0310 19:11:36.171199 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e61c59b6-f849-406f-8680-cb83de220b46-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-m77bt\" (UID: \"e61c59b6-f849-406f-8680-cb83de220b46\") " pod="openstack/nova-cell0-conductor-db-sync-m77bt" Mar 10 19:11:36 crc kubenswrapper[4861]: I0310 19:11:36.272589 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e61c59b6-f849-406f-8680-cb83de220b46-scripts\") pod \"nova-cell0-conductor-db-sync-m77bt\" (UID: \"e61c59b6-f849-406f-8680-cb83de220b46\") " pod="openstack/nova-cell0-conductor-db-sync-m77bt" Mar 10 19:11:36 crc kubenswrapper[4861]: I0310 19:11:36.272663 4861 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-6qpvd\" (UniqueName: \"kubernetes.io/projected/e61c59b6-f849-406f-8680-cb83de220b46-kube-api-access-6qpvd\") pod \"nova-cell0-conductor-db-sync-m77bt\" (UID: \"e61c59b6-f849-406f-8680-cb83de220b46\") " pod="openstack/nova-cell0-conductor-db-sync-m77bt" Mar 10 19:11:36 crc kubenswrapper[4861]: I0310 19:11:36.272697 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e61c59b6-f849-406f-8680-cb83de220b46-config-data\") pod \"nova-cell0-conductor-db-sync-m77bt\" (UID: \"e61c59b6-f849-406f-8680-cb83de220b46\") " pod="openstack/nova-cell0-conductor-db-sync-m77bt" Mar 10 19:11:36 crc kubenswrapper[4861]: I0310 19:11:36.272731 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e61c59b6-f849-406f-8680-cb83de220b46-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-m77bt\" (UID: \"e61c59b6-f849-406f-8680-cb83de220b46\") " pod="openstack/nova-cell0-conductor-db-sync-m77bt" Mar 10 19:11:36 crc kubenswrapper[4861]: I0310 19:11:36.276520 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e61c59b6-f849-406f-8680-cb83de220b46-scripts\") pod \"nova-cell0-conductor-db-sync-m77bt\" (UID: \"e61c59b6-f849-406f-8680-cb83de220b46\") " pod="openstack/nova-cell0-conductor-db-sync-m77bt" Mar 10 19:11:36 crc kubenswrapper[4861]: I0310 19:11:36.276726 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e61c59b6-f849-406f-8680-cb83de220b46-config-data\") pod \"nova-cell0-conductor-db-sync-m77bt\" (UID: \"e61c59b6-f849-406f-8680-cb83de220b46\") " pod="openstack/nova-cell0-conductor-db-sync-m77bt" Mar 10 19:11:36 crc kubenswrapper[4861]: I0310 19:11:36.276936 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e61c59b6-f849-406f-8680-cb83de220b46-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-m77bt\" (UID: \"e61c59b6-f849-406f-8680-cb83de220b46\") " pod="openstack/nova-cell0-conductor-db-sync-m77bt" Mar 10 19:11:36 crc kubenswrapper[4861]: I0310 19:11:36.286663 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6qpvd\" (UniqueName: \"kubernetes.io/projected/e61c59b6-f849-406f-8680-cb83de220b46-kube-api-access-6qpvd\") pod \"nova-cell0-conductor-db-sync-m77bt\" (UID: \"e61c59b6-f849-406f-8680-cb83de220b46\") " pod="openstack/nova-cell0-conductor-db-sync-m77bt" Mar 10 19:11:36 crc kubenswrapper[4861]: I0310 19:11:36.405025 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Mar 10 19:11:36 crc kubenswrapper[4861]: I0310 19:11:36.405263 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="256b5814-23a7-4f27-8c86-544ec5290a5d" containerName="glance-log" containerID="cri-o://050c2706b3b070851fe6943bb2b0af35e18d175f88f9f0b0cad25c40580529e2" gracePeriod=30 Mar 10 19:11:36 crc kubenswrapper[4861]: I0310 19:11:36.405387 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="256b5814-23a7-4f27-8c86-544ec5290a5d" containerName="glance-httpd" containerID="cri-o://b77e6c457c419260599ed14561da4047f805348e89359001743a2899ad1e13a0" gracePeriod=30 Mar 10 19:11:36 crc kubenswrapper[4861]: I0310 19:11:36.501964 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-m77bt" Mar 10 19:11:36 crc kubenswrapper[4861]: I0310 19:11:36.968746 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="42290f46-99bb-4386-a400-44483968dc69" path="/var/lib/kubelet/pods/42290f46-99bb-4386-a400-44483968dc69/volumes" Mar 10 19:11:37 crc kubenswrapper[4861]: I0310 19:11:37.019335 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-m77bt"] Mar 10 19:11:37 crc kubenswrapper[4861]: I0310 19:11:37.019988 4861 generic.go:334] "Generic (PLEG): container finished" podID="256b5814-23a7-4f27-8c86-544ec5290a5d" containerID="050c2706b3b070851fe6943bb2b0af35e18d175f88f9f0b0cad25c40580529e2" exitCode=143 Mar 10 19:11:37 crc kubenswrapper[4861]: I0310 19:11:37.020031 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"256b5814-23a7-4f27-8c86-544ec5290a5d","Type":"ContainerDied","Data":"050c2706b3b070851fe6943bb2b0af35e18d175f88f9f0b0cad25c40580529e2"} Mar 10 19:11:38 crc kubenswrapper[4861]: I0310 19:11:38.053421 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-m77bt" event={"ID":"e61c59b6-f849-406f-8680-cb83de220b46","Type":"ContainerStarted","Data":"4603eaff9c36db9933e979814f624e402b22beb024bdc63c54e614558fe6a747"} Mar 10 19:11:38 crc kubenswrapper[4861]: I0310 19:11:38.189393 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Mar 10 19:11:38 crc kubenswrapper[4861]: I0310 19:11:38.189801 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="b2448498-19da-4b93-b300-6425cb5ab4ba" containerName="ceilometer-central-agent" containerID="cri-o://88329b5ed7548c29deb68822d31ee52afbeb7e3e3a50041ff68353978e0c6b38" gracePeriod=30 Mar 10 19:11:38 crc kubenswrapper[4861]: I0310 19:11:38.189861 4861 kuberuntime_container.go:808] "Killing 
container with a grace period" pod="openstack/ceilometer-0" podUID="b2448498-19da-4b93-b300-6425cb5ab4ba" containerName="sg-core" containerID="cri-o://a8b79b77db20afaf55a06d0b62bde056648e46ac58419cd4f730ec593798738e" gracePeriod=30 Mar 10 19:11:38 crc kubenswrapper[4861]: I0310 19:11:38.189895 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="b2448498-19da-4b93-b300-6425cb5ab4ba" containerName="proxy-httpd" containerID="cri-o://837125a213097c5b342b324689f08f275ab6288a559752ab66c9f23748a67aa2" gracePeriod=30 Mar 10 19:11:38 crc kubenswrapper[4861]: I0310 19:11:38.189967 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="b2448498-19da-4b93-b300-6425cb5ab4ba" containerName="ceilometer-notification-agent" containerID="cri-o://72144776a5933ab643a9cd7cfe15a29727e1310f961270c51e378de6fef5ff8e" gracePeriod=30 Mar 10 19:11:38 crc kubenswrapper[4861]: E0310 19:11:38.582275 4861 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb2448498_19da_4b93_b300_6425cb5ab4ba.slice/crio-88329b5ed7548c29deb68822d31ee52afbeb7e3e3a50041ff68353978e0c6b38.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb2448498_19da_4b93_b300_6425cb5ab4ba.slice/crio-conmon-88329b5ed7548c29deb68822d31ee52afbeb7e3e3a50041ff68353978e0c6b38.scope\": RecentStats: unable to find data in memory cache]" Mar 10 19:11:38 crc kubenswrapper[4861]: I0310 19:11:38.992307 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Mar 10 19:11:39 crc kubenswrapper[4861]: I0310 19:11:39.035294 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b2448498-19da-4b93-b300-6425cb5ab4ba-run-httpd\") pod \"b2448498-19da-4b93-b300-6425cb5ab4ba\" (UID: \"b2448498-19da-4b93-b300-6425cb5ab4ba\") " Mar 10 19:11:39 crc kubenswrapper[4861]: I0310 19:11:39.035389 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b2448498-19da-4b93-b300-6425cb5ab4ba-log-httpd\") pod \"b2448498-19da-4b93-b300-6425cb5ab4ba\" (UID: \"b2448498-19da-4b93-b300-6425cb5ab4ba\") " Mar 10 19:11:39 crc kubenswrapper[4861]: I0310 19:11:39.035416 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b2448498-19da-4b93-b300-6425cb5ab4ba-combined-ca-bundle\") pod \"b2448498-19da-4b93-b300-6425cb5ab4ba\" (UID: \"b2448498-19da-4b93-b300-6425cb5ab4ba\") " Mar 10 19:11:39 crc kubenswrapper[4861]: I0310 19:11:39.035443 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b2448498-19da-4b93-b300-6425cb5ab4ba-scripts\") pod \"b2448498-19da-4b93-b300-6425cb5ab4ba\" (UID: \"b2448498-19da-4b93-b300-6425cb5ab4ba\") " Mar 10 19:11:39 crc kubenswrapper[4861]: I0310 19:11:39.035540 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b2448498-19da-4b93-b300-6425cb5ab4ba-config-data\") pod \"b2448498-19da-4b93-b300-6425cb5ab4ba\" (UID: \"b2448498-19da-4b93-b300-6425cb5ab4ba\") " Mar 10 19:11:39 crc kubenswrapper[4861]: I0310 19:11:39.035615 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: 
\"kubernetes.io/secret/b2448498-19da-4b93-b300-6425cb5ab4ba-sg-core-conf-yaml\") pod \"b2448498-19da-4b93-b300-6425cb5ab4ba\" (UID: \"b2448498-19da-4b93-b300-6425cb5ab4ba\") " Mar 10 19:11:39 crc kubenswrapper[4861]: I0310 19:11:39.035607 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b2448498-19da-4b93-b300-6425cb5ab4ba-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "b2448498-19da-4b93-b300-6425cb5ab4ba" (UID: "b2448498-19da-4b93-b300-6425cb5ab4ba"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 10 19:11:39 crc kubenswrapper[4861]: I0310 19:11:39.035672 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zftz6\" (UniqueName: \"kubernetes.io/projected/b2448498-19da-4b93-b300-6425cb5ab4ba-kube-api-access-zftz6\") pod \"b2448498-19da-4b93-b300-6425cb5ab4ba\" (UID: \"b2448498-19da-4b93-b300-6425cb5ab4ba\") " Mar 10 19:11:39 crc kubenswrapper[4861]: I0310 19:11:39.037510 4861 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b2448498-19da-4b93-b300-6425cb5ab4ba-run-httpd\") on node \"crc\" DevicePath \"\"" Mar 10 19:11:39 crc kubenswrapper[4861]: I0310 19:11:39.038177 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b2448498-19da-4b93-b300-6425cb5ab4ba-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "b2448498-19da-4b93-b300-6425cb5ab4ba" (UID: "b2448498-19da-4b93-b300-6425cb5ab4ba"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 10 19:11:39 crc kubenswrapper[4861]: I0310 19:11:39.043143 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b2448498-19da-4b93-b300-6425cb5ab4ba-kube-api-access-zftz6" (OuterVolumeSpecName: "kube-api-access-zftz6") pod "b2448498-19da-4b93-b300-6425cb5ab4ba" (UID: "b2448498-19da-4b93-b300-6425cb5ab4ba"). InnerVolumeSpecName "kube-api-access-zftz6". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 10 19:11:39 crc kubenswrapper[4861]: I0310 19:11:39.043308 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b2448498-19da-4b93-b300-6425cb5ab4ba-scripts" (OuterVolumeSpecName: "scripts") pod "b2448498-19da-4b93-b300-6425cb5ab4ba" (UID: "b2448498-19da-4b93-b300-6425cb5ab4ba"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 10 19:11:39 crc kubenswrapper[4861]: I0310 19:11:39.069419 4861 generic.go:334] "Generic (PLEG): container finished" podID="c16ced7d-2645-42db-abc8-266267b6de4c" containerID="8724e9f95caf76f2229af962d390f418057ba1ded4a3ff3dce212acea3927f0b" exitCode=0 Mar 10 19:11:39 crc kubenswrapper[4861]: I0310 19:11:39.069492 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"c16ced7d-2645-42db-abc8-266267b6de4c","Type":"ContainerDied","Data":"8724e9f95caf76f2229af962d390f418057ba1ded4a3ff3dce212acea3927f0b"} Mar 10 19:11:39 crc kubenswrapper[4861]: I0310 19:11:39.069534 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b2448498-19da-4b93-b300-6425cb5ab4ba-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "b2448498-19da-4b93-b300-6425cb5ab4ba" (UID: "b2448498-19da-4b93-b300-6425cb5ab4ba"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 10 19:11:39 crc kubenswrapper[4861]: I0310 19:11:39.071446 4861 generic.go:334] "Generic (PLEG): container finished" podID="b2448498-19da-4b93-b300-6425cb5ab4ba" containerID="837125a213097c5b342b324689f08f275ab6288a559752ab66c9f23748a67aa2" exitCode=0 Mar 10 19:11:39 crc kubenswrapper[4861]: I0310 19:11:39.071465 4861 generic.go:334] "Generic (PLEG): container finished" podID="b2448498-19da-4b93-b300-6425cb5ab4ba" containerID="a8b79b77db20afaf55a06d0b62bde056648e46ac58419cd4f730ec593798738e" exitCode=2 Mar 10 19:11:39 crc kubenswrapper[4861]: I0310 19:11:39.071473 4861 generic.go:334] "Generic (PLEG): container finished" podID="b2448498-19da-4b93-b300-6425cb5ab4ba" containerID="72144776a5933ab643a9cd7cfe15a29727e1310f961270c51e378de6fef5ff8e" exitCode=0 Mar 10 19:11:39 crc kubenswrapper[4861]: I0310 19:11:39.071480 4861 generic.go:334] "Generic (PLEG): container finished" podID="b2448498-19da-4b93-b300-6425cb5ab4ba" containerID="88329b5ed7548c29deb68822d31ee52afbeb7e3e3a50041ff68353978e0c6b38" exitCode=0 Mar 10 19:11:39 crc kubenswrapper[4861]: I0310 19:11:39.071494 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b2448498-19da-4b93-b300-6425cb5ab4ba","Type":"ContainerDied","Data":"837125a213097c5b342b324689f08f275ab6288a559752ab66c9f23748a67aa2"} Mar 10 19:11:39 crc kubenswrapper[4861]: I0310 19:11:39.071511 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b2448498-19da-4b93-b300-6425cb5ab4ba","Type":"ContainerDied","Data":"a8b79b77db20afaf55a06d0b62bde056648e46ac58419cd4f730ec593798738e"} Mar 10 19:11:39 crc kubenswrapper[4861]: I0310 19:11:39.071522 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b2448498-19da-4b93-b300-6425cb5ab4ba","Type":"ContainerDied","Data":"72144776a5933ab643a9cd7cfe15a29727e1310f961270c51e378de6fef5ff8e"} Mar 10 19:11:39 crc 
kubenswrapper[4861]: I0310 19:11:39.071532 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b2448498-19da-4b93-b300-6425cb5ab4ba","Type":"ContainerDied","Data":"88329b5ed7548c29deb68822d31ee52afbeb7e3e3a50041ff68353978e0c6b38"} Mar 10 19:11:39 crc kubenswrapper[4861]: I0310 19:11:39.071541 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b2448498-19da-4b93-b300-6425cb5ab4ba","Type":"ContainerDied","Data":"dff721dd6e68d0640a8103bc6f3a756bdb032b4fb5427a885aca6b2ffa043905"} Mar 10 19:11:39 crc kubenswrapper[4861]: I0310 19:11:39.071558 4861 scope.go:117] "RemoveContainer" containerID="837125a213097c5b342b324689f08f275ab6288a559752ab66c9f23748a67aa2" Mar 10 19:11:39 crc kubenswrapper[4861]: I0310 19:11:39.071564 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Mar 10 19:11:39 crc kubenswrapper[4861]: I0310 19:11:39.098593 4861 scope.go:117] "RemoveContainer" containerID="a8b79b77db20afaf55a06d0b62bde056648e46ac58419cd4f730ec593798738e" Mar 10 19:11:39 crc kubenswrapper[4861]: I0310 19:11:39.138994 4861 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b2448498-19da-4b93-b300-6425cb5ab4ba-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Mar 10 19:11:39 crc kubenswrapper[4861]: I0310 19:11:39.139021 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zftz6\" (UniqueName: \"kubernetes.io/projected/b2448498-19da-4b93-b300-6425cb5ab4ba-kube-api-access-zftz6\") on node \"crc\" DevicePath \"\"" Mar 10 19:11:39 crc kubenswrapper[4861]: I0310 19:11:39.139034 4861 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b2448498-19da-4b93-b300-6425cb5ab4ba-log-httpd\") on node \"crc\" DevicePath \"\"" Mar 10 19:11:39 crc kubenswrapper[4861]: I0310 19:11:39.139043 4861 
reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b2448498-19da-4b93-b300-6425cb5ab4ba-scripts\") on node \"crc\" DevicePath \"\"" Mar 10 19:11:39 crc kubenswrapper[4861]: I0310 19:11:39.141349 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b2448498-19da-4b93-b300-6425cb5ab4ba-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b2448498-19da-4b93-b300-6425cb5ab4ba" (UID: "b2448498-19da-4b93-b300-6425cb5ab4ba"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 10 19:11:39 crc kubenswrapper[4861]: I0310 19:11:39.145912 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b2448498-19da-4b93-b300-6425cb5ab4ba-config-data" (OuterVolumeSpecName: "config-data") pod "b2448498-19da-4b93-b300-6425cb5ab4ba" (UID: "b2448498-19da-4b93-b300-6425cb5ab4ba"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 10 19:11:39 crc kubenswrapper[4861]: I0310 19:11:39.146107 4861 scope.go:117] "RemoveContainer" containerID="72144776a5933ab643a9cd7cfe15a29727e1310f961270c51e378de6fef5ff8e" Mar 10 19:11:39 crc kubenswrapper[4861]: I0310 19:11:39.168211 4861 scope.go:117] "RemoveContainer" containerID="88329b5ed7548c29deb68822d31ee52afbeb7e3e3a50041ff68353978e0c6b38" Mar 10 19:11:39 crc kubenswrapper[4861]: I0310 19:11:39.198931 4861 scope.go:117] "RemoveContainer" containerID="837125a213097c5b342b324689f08f275ab6288a559752ab66c9f23748a67aa2" Mar 10 19:11:39 crc kubenswrapper[4861]: E0310 19:11:39.202808 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"837125a213097c5b342b324689f08f275ab6288a559752ab66c9f23748a67aa2\": container with ID starting with 837125a213097c5b342b324689f08f275ab6288a559752ab66c9f23748a67aa2 not found: ID does not exist" containerID="837125a213097c5b342b324689f08f275ab6288a559752ab66c9f23748a67aa2" Mar 10 19:11:39 crc kubenswrapper[4861]: I0310 19:11:39.202844 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"837125a213097c5b342b324689f08f275ab6288a559752ab66c9f23748a67aa2"} err="failed to get container status \"837125a213097c5b342b324689f08f275ab6288a559752ab66c9f23748a67aa2\": rpc error: code = NotFound desc = could not find container \"837125a213097c5b342b324689f08f275ab6288a559752ab66c9f23748a67aa2\": container with ID starting with 837125a213097c5b342b324689f08f275ab6288a559752ab66c9f23748a67aa2 not found: ID does not exist" Mar 10 19:11:39 crc kubenswrapper[4861]: I0310 19:11:39.202865 4861 scope.go:117] "RemoveContainer" containerID="a8b79b77db20afaf55a06d0b62bde056648e46ac58419cd4f730ec593798738e" Mar 10 19:11:39 crc kubenswrapper[4861]: E0310 19:11:39.203371 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could 
not find container \"a8b79b77db20afaf55a06d0b62bde056648e46ac58419cd4f730ec593798738e\": container with ID starting with a8b79b77db20afaf55a06d0b62bde056648e46ac58419cd4f730ec593798738e not found: ID does not exist" containerID="a8b79b77db20afaf55a06d0b62bde056648e46ac58419cd4f730ec593798738e" Mar 10 19:11:39 crc kubenswrapper[4861]: I0310 19:11:39.203392 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a8b79b77db20afaf55a06d0b62bde056648e46ac58419cd4f730ec593798738e"} err="failed to get container status \"a8b79b77db20afaf55a06d0b62bde056648e46ac58419cd4f730ec593798738e\": rpc error: code = NotFound desc = could not find container \"a8b79b77db20afaf55a06d0b62bde056648e46ac58419cd4f730ec593798738e\": container with ID starting with a8b79b77db20afaf55a06d0b62bde056648e46ac58419cd4f730ec593798738e not found: ID does not exist" Mar 10 19:11:39 crc kubenswrapper[4861]: I0310 19:11:39.203404 4861 scope.go:117] "RemoveContainer" containerID="72144776a5933ab643a9cd7cfe15a29727e1310f961270c51e378de6fef5ff8e" Mar 10 19:11:39 crc kubenswrapper[4861]: E0310 19:11:39.203573 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"72144776a5933ab643a9cd7cfe15a29727e1310f961270c51e378de6fef5ff8e\": container with ID starting with 72144776a5933ab643a9cd7cfe15a29727e1310f961270c51e378de6fef5ff8e not found: ID does not exist" containerID="72144776a5933ab643a9cd7cfe15a29727e1310f961270c51e378de6fef5ff8e" Mar 10 19:11:39 crc kubenswrapper[4861]: I0310 19:11:39.203605 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"72144776a5933ab643a9cd7cfe15a29727e1310f961270c51e378de6fef5ff8e"} err="failed to get container status \"72144776a5933ab643a9cd7cfe15a29727e1310f961270c51e378de6fef5ff8e\": rpc error: code = NotFound desc = could not find container \"72144776a5933ab643a9cd7cfe15a29727e1310f961270c51e378de6fef5ff8e\": 
container with ID starting with 72144776a5933ab643a9cd7cfe15a29727e1310f961270c51e378de6fef5ff8e not found: ID does not exist" Mar 10 19:11:39 crc kubenswrapper[4861]: I0310 19:11:39.203618 4861 scope.go:117] "RemoveContainer" containerID="88329b5ed7548c29deb68822d31ee52afbeb7e3e3a50041ff68353978e0c6b38" Mar 10 19:11:39 crc kubenswrapper[4861]: E0310 19:11:39.203929 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"88329b5ed7548c29deb68822d31ee52afbeb7e3e3a50041ff68353978e0c6b38\": container with ID starting with 88329b5ed7548c29deb68822d31ee52afbeb7e3e3a50041ff68353978e0c6b38 not found: ID does not exist" containerID="88329b5ed7548c29deb68822d31ee52afbeb7e3e3a50041ff68353978e0c6b38" Mar 10 19:11:39 crc kubenswrapper[4861]: I0310 19:11:39.203950 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"88329b5ed7548c29deb68822d31ee52afbeb7e3e3a50041ff68353978e0c6b38"} err="failed to get container status \"88329b5ed7548c29deb68822d31ee52afbeb7e3e3a50041ff68353978e0c6b38\": rpc error: code = NotFound desc = could not find container \"88329b5ed7548c29deb68822d31ee52afbeb7e3e3a50041ff68353978e0c6b38\": container with ID starting with 88329b5ed7548c29deb68822d31ee52afbeb7e3e3a50041ff68353978e0c6b38 not found: ID does not exist" Mar 10 19:11:39 crc kubenswrapper[4861]: I0310 19:11:39.203964 4861 scope.go:117] "RemoveContainer" containerID="837125a213097c5b342b324689f08f275ab6288a559752ab66c9f23748a67aa2" Mar 10 19:11:39 crc kubenswrapper[4861]: I0310 19:11:39.206290 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"837125a213097c5b342b324689f08f275ab6288a559752ab66c9f23748a67aa2"} err="failed to get container status \"837125a213097c5b342b324689f08f275ab6288a559752ab66c9f23748a67aa2\": rpc error: code = NotFound desc = could not find container 
\"837125a213097c5b342b324689f08f275ab6288a559752ab66c9f23748a67aa2\": container with ID starting with 837125a213097c5b342b324689f08f275ab6288a559752ab66c9f23748a67aa2 not found: ID does not exist" Mar 10 19:11:39 crc kubenswrapper[4861]: I0310 19:11:39.206311 4861 scope.go:117] "RemoveContainer" containerID="a8b79b77db20afaf55a06d0b62bde056648e46ac58419cd4f730ec593798738e" Mar 10 19:11:39 crc kubenswrapper[4861]: I0310 19:11:39.206574 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a8b79b77db20afaf55a06d0b62bde056648e46ac58419cd4f730ec593798738e"} err="failed to get container status \"a8b79b77db20afaf55a06d0b62bde056648e46ac58419cd4f730ec593798738e\": rpc error: code = NotFound desc = could not find container \"a8b79b77db20afaf55a06d0b62bde056648e46ac58419cd4f730ec593798738e\": container with ID starting with a8b79b77db20afaf55a06d0b62bde056648e46ac58419cd4f730ec593798738e not found: ID does not exist" Mar 10 19:11:39 crc kubenswrapper[4861]: I0310 19:11:39.206593 4861 scope.go:117] "RemoveContainer" containerID="72144776a5933ab643a9cd7cfe15a29727e1310f961270c51e378de6fef5ff8e" Mar 10 19:11:39 crc kubenswrapper[4861]: I0310 19:11:39.206945 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"72144776a5933ab643a9cd7cfe15a29727e1310f961270c51e378de6fef5ff8e"} err="failed to get container status \"72144776a5933ab643a9cd7cfe15a29727e1310f961270c51e378de6fef5ff8e\": rpc error: code = NotFound desc = could not find container \"72144776a5933ab643a9cd7cfe15a29727e1310f961270c51e378de6fef5ff8e\": container with ID starting with 72144776a5933ab643a9cd7cfe15a29727e1310f961270c51e378de6fef5ff8e not found: ID does not exist" Mar 10 19:11:39 crc kubenswrapper[4861]: I0310 19:11:39.206966 4861 scope.go:117] "RemoveContainer" containerID="88329b5ed7548c29deb68822d31ee52afbeb7e3e3a50041ff68353978e0c6b38" Mar 10 19:11:39 crc kubenswrapper[4861]: I0310 19:11:39.211099 4861 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"88329b5ed7548c29deb68822d31ee52afbeb7e3e3a50041ff68353978e0c6b38"} err="failed to get container status \"88329b5ed7548c29deb68822d31ee52afbeb7e3e3a50041ff68353978e0c6b38\": rpc error: code = NotFound desc = could not find container \"88329b5ed7548c29deb68822d31ee52afbeb7e3e3a50041ff68353978e0c6b38\": container with ID starting with 88329b5ed7548c29deb68822d31ee52afbeb7e3e3a50041ff68353978e0c6b38 not found: ID does not exist" Mar 10 19:11:39 crc kubenswrapper[4861]: I0310 19:11:39.211156 4861 scope.go:117] "RemoveContainer" containerID="837125a213097c5b342b324689f08f275ab6288a559752ab66c9f23748a67aa2" Mar 10 19:11:39 crc kubenswrapper[4861]: I0310 19:11:39.211444 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"837125a213097c5b342b324689f08f275ab6288a559752ab66c9f23748a67aa2"} err="failed to get container status \"837125a213097c5b342b324689f08f275ab6288a559752ab66c9f23748a67aa2\": rpc error: code = NotFound desc = could not find container \"837125a213097c5b342b324689f08f275ab6288a559752ab66c9f23748a67aa2\": container with ID starting with 837125a213097c5b342b324689f08f275ab6288a559752ab66c9f23748a67aa2 not found: ID does not exist" Mar 10 19:11:39 crc kubenswrapper[4861]: I0310 19:11:39.211465 4861 scope.go:117] "RemoveContainer" containerID="a8b79b77db20afaf55a06d0b62bde056648e46ac58419cd4f730ec593798738e" Mar 10 19:11:39 crc kubenswrapper[4861]: I0310 19:11:39.212241 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a8b79b77db20afaf55a06d0b62bde056648e46ac58419cd4f730ec593798738e"} err="failed to get container status \"a8b79b77db20afaf55a06d0b62bde056648e46ac58419cd4f730ec593798738e\": rpc error: code = NotFound desc = could not find container \"a8b79b77db20afaf55a06d0b62bde056648e46ac58419cd4f730ec593798738e\": container with ID starting with 
a8b79b77db20afaf55a06d0b62bde056648e46ac58419cd4f730ec593798738e not found: ID does not exist" Mar 10 19:11:39 crc kubenswrapper[4861]: I0310 19:11:39.212265 4861 scope.go:117] "RemoveContainer" containerID="72144776a5933ab643a9cd7cfe15a29727e1310f961270c51e378de6fef5ff8e" Mar 10 19:11:39 crc kubenswrapper[4861]: I0310 19:11:39.212788 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"72144776a5933ab643a9cd7cfe15a29727e1310f961270c51e378de6fef5ff8e"} err="failed to get container status \"72144776a5933ab643a9cd7cfe15a29727e1310f961270c51e378de6fef5ff8e\": rpc error: code = NotFound desc = could not find container \"72144776a5933ab643a9cd7cfe15a29727e1310f961270c51e378de6fef5ff8e\": container with ID starting with 72144776a5933ab643a9cd7cfe15a29727e1310f961270c51e378de6fef5ff8e not found: ID does not exist" Mar 10 19:11:39 crc kubenswrapper[4861]: I0310 19:11:39.212855 4861 scope.go:117] "RemoveContainer" containerID="88329b5ed7548c29deb68822d31ee52afbeb7e3e3a50041ff68353978e0c6b38" Mar 10 19:11:39 crc kubenswrapper[4861]: I0310 19:11:39.213144 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"88329b5ed7548c29deb68822d31ee52afbeb7e3e3a50041ff68353978e0c6b38"} err="failed to get container status \"88329b5ed7548c29deb68822d31ee52afbeb7e3e3a50041ff68353978e0c6b38\": rpc error: code = NotFound desc = could not find container \"88329b5ed7548c29deb68822d31ee52afbeb7e3e3a50041ff68353978e0c6b38\": container with ID starting with 88329b5ed7548c29deb68822d31ee52afbeb7e3e3a50041ff68353978e0c6b38 not found: ID does not exist" Mar 10 19:11:39 crc kubenswrapper[4861]: I0310 19:11:39.213168 4861 scope.go:117] "RemoveContainer" containerID="837125a213097c5b342b324689f08f275ab6288a559752ab66c9f23748a67aa2" Mar 10 19:11:39 crc kubenswrapper[4861]: I0310 19:11:39.213297 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Mar 10 19:11:39 crc kubenswrapper[4861]: I0310 19:11:39.213960 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"837125a213097c5b342b324689f08f275ab6288a559752ab66c9f23748a67aa2"} err="failed to get container status \"837125a213097c5b342b324689f08f275ab6288a559752ab66c9f23748a67aa2\": rpc error: code = NotFound desc = could not find container \"837125a213097c5b342b324689f08f275ab6288a559752ab66c9f23748a67aa2\": container with ID starting with 837125a213097c5b342b324689f08f275ab6288a559752ab66c9f23748a67aa2 not found: ID does not exist" Mar 10 19:11:39 crc kubenswrapper[4861]: I0310 19:11:39.214217 4861 scope.go:117] "RemoveContainer" containerID="a8b79b77db20afaf55a06d0b62bde056648e46ac58419cd4f730ec593798738e" Mar 10 19:11:39 crc kubenswrapper[4861]: I0310 19:11:39.214895 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a8b79b77db20afaf55a06d0b62bde056648e46ac58419cd4f730ec593798738e"} err="failed to get container status \"a8b79b77db20afaf55a06d0b62bde056648e46ac58419cd4f730ec593798738e\": rpc error: code = NotFound desc = could not find container \"a8b79b77db20afaf55a06d0b62bde056648e46ac58419cd4f730ec593798738e\": container with ID starting with a8b79b77db20afaf55a06d0b62bde056648e46ac58419cd4f730ec593798738e not found: ID does not exist" Mar 10 19:11:39 crc kubenswrapper[4861]: I0310 19:11:39.214970 4861 scope.go:117] "RemoveContainer" containerID="72144776a5933ab643a9cd7cfe15a29727e1310f961270c51e378de6fef5ff8e" Mar 10 19:11:39 crc kubenswrapper[4861]: I0310 19:11:39.215217 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"72144776a5933ab643a9cd7cfe15a29727e1310f961270c51e378de6fef5ff8e"} err="failed to get container status \"72144776a5933ab643a9cd7cfe15a29727e1310f961270c51e378de6fef5ff8e\": rpc error: code = NotFound desc = could not 
find container \"72144776a5933ab643a9cd7cfe15a29727e1310f961270c51e378de6fef5ff8e\": container with ID starting with 72144776a5933ab643a9cd7cfe15a29727e1310f961270c51e378de6fef5ff8e not found: ID does not exist" Mar 10 19:11:39 crc kubenswrapper[4861]: I0310 19:11:39.215234 4861 scope.go:117] "RemoveContainer" containerID="88329b5ed7548c29deb68822d31ee52afbeb7e3e3a50041ff68353978e0c6b38" Mar 10 19:11:39 crc kubenswrapper[4861]: I0310 19:11:39.215452 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"88329b5ed7548c29deb68822d31ee52afbeb7e3e3a50041ff68353978e0c6b38"} err="failed to get container status \"88329b5ed7548c29deb68822d31ee52afbeb7e3e3a50041ff68353978e0c6b38\": rpc error: code = NotFound desc = could not find container \"88329b5ed7548c29deb68822d31ee52afbeb7e3e3a50041ff68353978e0c6b38\": container with ID starting with 88329b5ed7548c29deb68822d31ee52afbeb7e3e3a50041ff68353978e0c6b38 not found: ID does not exist" Mar 10 19:11:39 crc kubenswrapper[4861]: I0310 19:11:39.240699 4861 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b2448498-19da-4b93-b300-6425cb5ab4ba-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 10 19:11:39 crc kubenswrapper[4861]: I0310 19:11:39.240739 4861 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b2448498-19da-4b93-b300-6425cb5ab4ba-config-data\") on node \"crc\" DevicePath \"\"" Mar 10 19:11:39 crc kubenswrapper[4861]: I0310 19:11:39.342265 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c16ced7d-2645-42db-abc8-266267b6de4c-httpd-run\") pod \"c16ced7d-2645-42db-abc8-266267b6de4c\" (UID: \"c16ced7d-2645-42db-abc8-266267b6de4c\") " Mar 10 19:11:39 crc kubenswrapper[4861]: I0310 19:11:39.342314 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c16ced7d-2645-42db-abc8-266267b6de4c-public-tls-certs\") pod \"c16ced7d-2645-42db-abc8-266267b6de4c\" (UID: \"c16ced7d-2645-42db-abc8-266267b6de4c\") " Mar 10 19:11:39 crc kubenswrapper[4861]: I0310 19:11:39.342361 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c16ced7d-2645-42db-abc8-266267b6de4c-config-data\") pod \"c16ced7d-2645-42db-abc8-266267b6de4c\" (UID: \"c16ced7d-2645-42db-abc8-266267b6de4c\") " Mar 10 19:11:39 crc kubenswrapper[4861]: I0310 19:11:39.342403 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2gfp2\" (UniqueName: \"kubernetes.io/projected/c16ced7d-2645-42db-abc8-266267b6de4c-kube-api-access-2gfp2\") pod \"c16ced7d-2645-42db-abc8-266267b6de4c\" (UID: \"c16ced7d-2645-42db-abc8-266267b6de4c\") " Mar 10 19:11:39 crc kubenswrapper[4861]: I0310 19:11:39.342470 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c16ced7d-2645-42db-abc8-266267b6de4c-logs\") pod \"c16ced7d-2645-42db-abc8-266267b6de4c\" (UID: \"c16ced7d-2645-42db-abc8-266267b6de4c\") " Mar 10 19:11:39 crc kubenswrapper[4861]: I0310 19:11:39.342583 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c16ced7d-2645-42db-abc8-266267b6de4c-combined-ca-bundle\") pod \"c16ced7d-2645-42db-abc8-266267b6de4c\" (UID: \"c16ced7d-2645-42db-abc8-266267b6de4c\") " Mar 10 19:11:39 crc kubenswrapper[4861]: I0310 19:11:39.342616 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"c16ced7d-2645-42db-abc8-266267b6de4c\" (UID: \"c16ced7d-2645-42db-abc8-266267b6de4c\") " Mar 10 19:11:39 crc kubenswrapper[4861]: I0310 
19:11:39.342650 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c16ced7d-2645-42db-abc8-266267b6de4c-scripts\") pod \"c16ced7d-2645-42db-abc8-266267b6de4c\" (UID: \"c16ced7d-2645-42db-abc8-266267b6de4c\") " Mar 10 19:11:39 crc kubenswrapper[4861]: I0310 19:11:39.342890 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c16ced7d-2645-42db-abc8-266267b6de4c-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "c16ced7d-2645-42db-abc8-266267b6de4c" (UID: "c16ced7d-2645-42db-abc8-266267b6de4c"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 10 19:11:39 crc kubenswrapper[4861]: I0310 19:11:39.343289 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c16ced7d-2645-42db-abc8-266267b6de4c-logs" (OuterVolumeSpecName: "logs") pod "c16ced7d-2645-42db-abc8-266267b6de4c" (UID: "c16ced7d-2645-42db-abc8-266267b6de4c"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 10 19:11:39 crc kubenswrapper[4861]: I0310 19:11:39.343396 4861 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c16ced7d-2645-42db-abc8-266267b6de4c-httpd-run\") on node \"crc\" DevicePath \"\"" Mar 10 19:11:39 crc kubenswrapper[4861]: I0310 19:11:39.347114 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c16ced7d-2645-42db-abc8-266267b6de4c-scripts" (OuterVolumeSpecName: "scripts") pod "c16ced7d-2645-42db-abc8-266267b6de4c" (UID: "c16ced7d-2645-42db-abc8-266267b6de4c"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 10 19:11:39 crc kubenswrapper[4861]: I0310 19:11:39.347118 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage09-crc" (OuterVolumeSpecName: "glance") pod "c16ced7d-2645-42db-abc8-266267b6de4c" (UID: "c16ced7d-2645-42db-abc8-266267b6de4c"). InnerVolumeSpecName "local-storage09-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Mar 10 19:11:39 crc kubenswrapper[4861]: I0310 19:11:39.347169 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c16ced7d-2645-42db-abc8-266267b6de4c-kube-api-access-2gfp2" (OuterVolumeSpecName: "kube-api-access-2gfp2") pod "c16ced7d-2645-42db-abc8-266267b6de4c" (UID: "c16ced7d-2645-42db-abc8-266267b6de4c"). InnerVolumeSpecName "kube-api-access-2gfp2". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 10 19:11:39 crc kubenswrapper[4861]: I0310 19:11:39.370604 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c16ced7d-2645-42db-abc8-266267b6de4c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c16ced7d-2645-42db-abc8-266267b6de4c" (UID: "c16ced7d-2645-42db-abc8-266267b6de4c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 10 19:11:39 crc kubenswrapper[4861]: I0310 19:11:39.401988 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c16ced7d-2645-42db-abc8-266267b6de4c-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "c16ced7d-2645-42db-abc8-266267b6de4c" (UID: "c16ced7d-2645-42db-abc8-266267b6de4c"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 10 19:11:39 crc kubenswrapper[4861]: I0310 19:11:39.404379 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c16ced7d-2645-42db-abc8-266267b6de4c-config-data" (OuterVolumeSpecName: "config-data") pod "c16ced7d-2645-42db-abc8-266267b6de4c" (UID: "c16ced7d-2645-42db-abc8-266267b6de4c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 10 19:11:39 crc kubenswrapper[4861]: I0310 19:11:39.444698 4861 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c16ced7d-2645-42db-abc8-266267b6de4c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 10 19:11:39 crc kubenswrapper[4861]: I0310 19:11:39.444776 4861 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" " Mar 10 19:11:39 crc kubenswrapper[4861]: I0310 19:11:39.444791 4861 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c16ced7d-2645-42db-abc8-266267b6de4c-scripts\") on node \"crc\" DevicePath \"\"" Mar 10 19:11:39 crc kubenswrapper[4861]: I0310 19:11:39.444803 4861 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c16ced7d-2645-42db-abc8-266267b6de4c-public-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 10 19:11:39 crc kubenswrapper[4861]: I0310 19:11:39.444811 4861 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c16ced7d-2645-42db-abc8-266267b6de4c-config-data\") on node \"crc\" DevicePath \"\"" Mar 10 19:11:39 crc kubenswrapper[4861]: I0310 19:11:39.444820 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2gfp2\" (UniqueName: 
\"kubernetes.io/projected/c16ced7d-2645-42db-abc8-266267b6de4c-kube-api-access-2gfp2\") on node \"crc\" DevicePath \"\"" Mar 10 19:11:39 crc kubenswrapper[4861]: I0310 19:11:39.444828 4861 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c16ced7d-2645-42db-abc8-266267b6de4c-logs\") on node \"crc\" DevicePath \"\"" Mar 10 19:11:39 crc kubenswrapper[4861]: I0310 19:11:39.466095 4861 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage09-crc" (UniqueName: "kubernetes.io/local-volume/local-storage09-crc") on node "crc" Mar 10 19:11:39 crc kubenswrapper[4861]: I0310 19:11:39.488608 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Mar 10 19:11:39 crc kubenswrapper[4861]: I0310 19:11:39.511781 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Mar 10 19:11:39 crc kubenswrapper[4861]: I0310 19:11:39.522573 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Mar 10 19:11:39 crc kubenswrapper[4861]: E0310 19:11:39.522928 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b2448498-19da-4b93-b300-6425cb5ab4ba" containerName="ceilometer-notification-agent" Mar 10 19:11:39 crc kubenswrapper[4861]: I0310 19:11:39.522946 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="b2448498-19da-4b93-b300-6425cb5ab4ba" containerName="ceilometer-notification-agent" Mar 10 19:11:39 crc kubenswrapper[4861]: E0310 19:11:39.522969 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c16ced7d-2645-42db-abc8-266267b6de4c" containerName="glance-httpd" Mar 10 19:11:39 crc kubenswrapper[4861]: I0310 19:11:39.522980 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="c16ced7d-2645-42db-abc8-266267b6de4c" containerName="glance-httpd" Mar 10 19:11:39 crc kubenswrapper[4861]: E0310 19:11:39.523000 4861 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="b2448498-19da-4b93-b300-6425cb5ab4ba" containerName="ceilometer-central-agent" Mar 10 19:11:39 crc kubenswrapper[4861]: I0310 19:11:39.523006 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="b2448498-19da-4b93-b300-6425cb5ab4ba" containerName="ceilometer-central-agent" Mar 10 19:11:39 crc kubenswrapper[4861]: E0310 19:11:39.523015 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b2448498-19da-4b93-b300-6425cb5ab4ba" containerName="sg-core" Mar 10 19:11:39 crc kubenswrapper[4861]: I0310 19:11:39.523021 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="b2448498-19da-4b93-b300-6425cb5ab4ba" containerName="sg-core" Mar 10 19:11:39 crc kubenswrapper[4861]: E0310 19:11:39.523037 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c16ced7d-2645-42db-abc8-266267b6de4c" containerName="glance-log" Mar 10 19:11:39 crc kubenswrapper[4861]: I0310 19:11:39.523044 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="c16ced7d-2645-42db-abc8-266267b6de4c" containerName="glance-log" Mar 10 19:11:39 crc kubenswrapper[4861]: E0310 19:11:39.523058 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b2448498-19da-4b93-b300-6425cb5ab4ba" containerName="proxy-httpd" Mar 10 19:11:39 crc kubenswrapper[4861]: I0310 19:11:39.523063 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="b2448498-19da-4b93-b300-6425cb5ab4ba" containerName="proxy-httpd" Mar 10 19:11:39 crc kubenswrapper[4861]: I0310 19:11:39.523220 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="b2448498-19da-4b93-b300-6425cb5ab4ba" containerName="sg-core" Mar 10 19:11:39 crc kubenswrapper[4861]: I0310 19:11:39.523232 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="b2448498-19da-4b93-b300-6425cb5ab4ba" containerName="ceilometer-central-agent" Mar 10 19:11:39 crc kubenswrapper[4861]: I0310 19:11:39.523239 4861 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="b2448498-19da-4b93-b300-6425cb5ab4ba" containerName="ceilometer-notification-agent" Mar 10 19:11:39 crc kubenswrapper[4861]: I0310 19:11:39.523252 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="c16ced7d-2645-42db-abc8-266267b6de4c" containerName="glance-log" Mar 10 19:11:39 crc kubenswrapper[4861]: I0310 19:11:39.523265 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="b2448498-19da-4b93-b300-6425cb5ab4ba" containerName="proxy-httpd" Mar 10 19:11:39 crc kubenswrapper[4861]: I0310 19:11:39.523279 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="c16ced7d-2645-42db-abc8-266267b6de4c" containerName="glance-httpd" Mar 10 19:11:39 crc kubenswrapper[4861]: I0310 19:11:39.524854 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Mar 10 19:11:39 crc kubenswrapper[4861]: I0310 19:11:39.527930 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Mar 10 19:11:39 crc kubenswrapper[4861]: I0310 19:11:39.528064 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Mar 10 19:11:39 crc kubenswrapper[4861]: I0310 19:11:39.533339 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Mar 10 19:11:39 crc kubenswrapper[4861]: I0310 19:11:39.546081 4861 reconciler_common.go:293] "Volume detached for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" DevicePath \"\"" Mar 10 19:11:39 crc kubenswrapper[4861]: I0310 19:11:39.647443 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/80dd8414-6eac-4a23-b177-8f190c059bf5-run-httpd\") pod \"ceilometer-0\" (UID: \"80dd8414-6eac-4a23-b177-8f190c059bf5\") " pod="openstack/ceilometer-0" Mar 10 19:11:39 crc kubenswrapper[4861]: I0310 
19:11:39.647719 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/80dd8414-6eac-4a23-b177-8f190c059bf5-log-httpd\") pod \"ceilometer-0\" (UID: \"80dd8414-6eac-4a23-b177-8f190c059bf5\") " pod="openstack/ceilometer-0" Mar 10 19:11:39 crc kubenswrapper[4861]: I0310 19:11:39.647759 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k7br6\" (UniqueName: \"kubernetes.io/projected/80dd8414-6eac-4a23-b177-8f190c059bf5-kube-api-access-k7br6\") pod \"ceilometer-0\" (UID: \"80dd8414-6eac-4a23-b177-8f190c059bf5\") " pod="openstack/ceilometer-0" Mar 10 19:11:39 crc kubenswrapper[4861]: I0310 19:11:39.647866 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/80dd8414-6eac-4a23-b177-8f190c059bf5-scripts\") pod \"ceilometer-0\" (UID: \"80dd8414-6eac-4a23-b177-8f190c059bf5\") " pod="openstack/ceilometer-0" Mar 10 19:11:39 crc kubenswrapper[4861]: I0310 19:11:39.648046 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/80dd8414-6eac-4a23-b177-8f190c059bf5-config-data\") pod \"ceilometer-0\" (UID: \"80dd8414-6eac-4a23-b177-8f190c059bf5\") " pod="openstack/ceilometer-0" Mar 10 19:11:39 crc kubenswrapper[4861]: I0310 19:11:39.648089 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/80dd8414-6eac-4a23-b177-8f190c059bf5-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"80dd8414-6eac-4a23-b177-8f190c059bf5\") " pod="openstack/ceilometer-0" Mar 10 19:11:39 crc kubenswrapper[4861]: I0310 19:11:39.648335 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/80dd8414-6eac-4a23-b177-8f190c059bf5-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"80dd8414-6eac-4a23-b177-8f190c059bf5\") " pod="openstack/ceilometer-0" Mar 10 19:11:39 crc kubenswrapper[4861]: I0310 19:11:39.750176 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k7br6\" (UniqueName: \"kubernetes.io/projected/80dd8414-6eac-4a23-b177-8f190c059bf5-kube-api-access-k7br6\") pod \"ceilometer-0\" (UID: \"80dd8414-6eac-4a23-b177-8f190c059bf5\") " pod="openstack/ceilometer-0" Mar 10 19:11:39 crc kubenswrapper[4861]: I0310 19:11:39.750238 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/80dd8414-6eac-4a23-b177-8f190c059bf5-scripts\") pod \"ceilometer-0\" (UID: \"80dd8414-6eac-4a23-b177-8f190c059bf5\") " pod="openstack/ceilometer-0" Mar 10 19:11:39 crc kubenswrapper[4861]: I0310 19:11:39.750259 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/80dd8414-6eac-4a23-b177-8f190c059bf5-config-data\") pod \"ceilometer-0\" (UID: \"80dd8414-6eac-4a23-b177-8f190c059bf5\") " pod="openstack/ceilometer-0" Mar 10 19:11:39 crc kubenswrapper[4861]: I0310 19:11:39.750275 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/80dd8414-6eac-4a23-b177-8f190c059bf5-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"80dd8414-6eac-4a23-b177-8f190c059bf5\") " pod="openstack/ceilometer-0" Mar 10 19:11:39 crc kubenswrapper[4861]: I0310 19:11:39.750350 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/80dd8414-6eac-4a23-b177-8f190c059bf5-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"80dd8414-6eac-4a23-b177-8f190c059bf5\") " pod="openstack/ceilometer-0" Mar 10 19:11:39 crc 
kubenswrapper[4861]: I0310 19:11:39.750383 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/80dd8414-6eac-4a23-b177-8f190c059bf5-run-httpd\") pod \"ceilometer-0\" (UID: \"80dd8414-6eac-4a23-b177-8f190c059bf5\") " pod="openstack/ceilometer-0" Mar 10 19:11:39 crc kubenswrapper[4861]: I0310 19:11:39.750411 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/80dd8414-6eac-4a23-b177-8f190c059bf5-log-httpd\") pod \"ceilometer-0\" (UID: \"80dd8414-6eac-4a23-b177-8f190c059bf5\") " pod="openstack/ceilometer-0" Mar 10 19:11:39 crc kubenswrapper[4861]: I0310 19:11:39.750809 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/80dd8414-6eac-4a23-b177-8f190c059bf5-log-httpd\") pod \"ceilometer-0\" (UID: \"80dd8414-6eac-4a23-b177-8f190c059bf5\") " pod="openstack/ceilometer-0" Mar 10 19:11:39 crc kubenswrapper[4861]: I0310 19:11:39.752521 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/80dd8414-6eac-4a23-b177-8f190c059bf5-run-httpd\") pod \"ceilometer-0\" (UID: \"80dd8414-6eac-4a23-b177-8f190c059bf5\") " pod="openstack/ceilometer-0" Mar 10 19:11:39 crc kubenswrapper[4861]: I0310 19:11:39.756117 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/80dd8414-6eac-4a23-b177-8f190c059bf5-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"80dd8414-6eac-4a23-b177-8f190c059bf5\") " pod="openstack/ceilometer-0" Mar 10 19:11:39 crc kubenswrapper[4861]: I0310 19:11:39.756160 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/80dd8414-6eac-4a23-b177-8f190c059bf5-scripts\") pod \"ceilometer-0\" (UID: 
\"80dd8414-6eac-4a23-b177-8f190c059bf5\") " pod="openstack/ceilometer-0" Mar 10 19:11:39 crc kubenswrapper[4861]: I0310 19:11:39.758568 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/80dd8414-6eac-4a23-b177-8f190c059bf5-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"80dd8414-6eac-4a23-b177-8f190c059bf5\") " pod="openstack/ceilometer-0" Mar 10 19:11:39 crc kubenswrapper[4861]: I0310 19:11:39.769438 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/80dd8414-6eac-4a23-b177-8f190c059bf5-config-data\") pod \"ceilometer-0\" (UID: \"80dd8414-6eac-4a23-b177-8f190c059bf5\") " pod="openstack/ceilometer-0" Mar 10 19:11:39 crc kubenswrapper[4861]: I0310 19:11:39.771696 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k7br6\" (UniqueName: \"kubernetes.io/projected/80dd8414-6eac-4a23-b177-8f190c059bf5-kube-api-access-k7br6\") pod \"ceilometer-0\" (UID: \"80dd8414-6eac-4a23-b177-8f190c059bf5\") " pod="openstack/ceilometer-0" Mar 10 19:11:39 crc kubenswrapper[4861]: I0310 19:11:39.867424 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Mar 10 19:11:39 crc kubenswrapper[4861]: I0310 19:11:39.965487 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Mar 10 19:11:40 crc kubenswrapper[4861]: I0310 19:11:40.058490 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"256b5814-23a7-4f27-8c86-544ec5290a5d\" (UID: \"256b5814-23a7-4f27-8c86-544ec5290a5d\") " Mar 10 19:11:40 crc kubenswrapper[4861]: I0310 19:11:40.058784 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/256b5814-23a7-4f27-8c86-544ec5290a5d-config-data\") pod \"256b5814-23a7-4f27-8c86-544ec5290a5d\" (UID: \"256b5814-23a7-4f27-8c86-544ec5290a5d\") " Mar 10 19:11:40 crc kubenswrapper[4861]: I0310 19:11:40.058807 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/256b5814-23a7-4f27-8c86-544ec5290a5d-logs\") pod \"256b5814-23a7-4f27-8c86-544ec5290a5d\" (UID: \"256b5814-23a7-4f27-8c86-544ec5290a5d\") " Mar 10 19:11:40 crc kubenswrapper[4861]: I0310 19:11:40.058830 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/256b5814-23a7-4f27-8c86-544ec5290a5d-combined-ca-bundle\") pod \"256b5814-23a7-4f27-8c86-544ec5290a5d\" (UID: \"256b5814-23a7-4f27-8c86-544ec5290a5d\") " Mar 10 19:11:40 crc kubenswrapper[4861]: I0310 19:11:40.058850 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/256b5814-23a7-4f27-8c86-544ec5290a5d-httpd-run\") pod \"256b5814-23a7-4f27-8c86-544ec5290a5d\" (UID: \"256b5814-23a7-4f27-8c86-544ec5290a5d\") " Mar 10 19:11:40 crc kubenswrapper[4861]: I0310 19:11:40.058889 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nsh9k\" (UniqueName: 
\"kubernetes.io/projected/256b5814-23a7-4f27-8c86-544ec5290a5d-kube-api-access-nsh9k\") pod \"256b5814-23a7-4f27-8c86-544ec5290a5d\" (UID: \"256b5814-23a7-4f27-8c86-544ec5290a5d\") " Mar 10 19:11:40 crc kubenswrapper[4861]: I0310 19:11:40.058909 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/256b5814-23a7-4f27-8c86-544ec5290a5d-internal-tls-certs\") pod \"256b5814-23a7-4f27-8c86-544ec5290a5d\" (UID: \"256b5814-23a7-4f27-8c86-544ec5290a5d\") " Mar 10 19:11:40 crc kubenswrapper[4861]: I0310 19:11:40.058968 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/256b5814-23a7-4f27-8c86-544ec5290a5d-scripts\") pod \"256b5814-23a7-4f27-8c86-544ec5290a5d\" (UID: \"256b5814-23a7-4f27-8c86-544ec5290a5d\") " Mar 10 19:11:40 crc kubenswrapper[4861]: I0310 19:11:40.059874 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/256b5814-23a7-4f27-8c86-544ec5290a5d-logs" (OuterVolumeSpecName: "logs") pod "256b5814-23a7-4f27-8c86-544ec5290a5d" (UID: "256b5814-23a7-4f27-8c86-544ec5290a5d"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 10 19:11:40 crc kubenswrapper[4861]: I0310 19:11:40.060256 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/256b5814-23a7-4f27-8c86-544ec5290a5d-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "256b5814-23a7-4f27-8c86-544ec5290a5d" (UID: "256b5814-23a7-4f27-8c86-544ec5290a5d"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 10 19:11:40 crc kubenswrapper[4861]: I0310 19:11:40.062954 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage10-crc" (OuterVolumeSpecName: "glance") pod "256b5814-23a7-4f27-8c86-544ec5290a5d" (UID: "256b5814-23a7-4f27-8c86-544ec5290a5d"). InnerVolumeSpecName "local-storage10-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Mar 10 19:11:40 crc kubenswrapper[4861]: I0310 19:11:40.068830 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/256b5814-23a7-4f27-8c86-544ec5290a5d-scripts" (OuterVolumeSpecName: "scripts") pod "256b5814-23a7-4f27-8c86-544ec5290a5d" (UID: "256b5814-23a7-4f27-8c86-544ec5290a5d"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 10 19:11:40 crc kubenswrapper[4861]: I0310 19:11:40.069033 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/256b5814-23a7-4f27-8c86-544ec5290a5d-kube-api-access-nsh9k" (OuterVolumeSpecName: "kube-api-access-nsh9k") pod "256b5814-23a7-4f27-8c86-544ec5290a5d" (UID: "256b5814-23a7-4f27-8c86-544ec5290a5d"). InnerVolumeSpecName "kube-api-access-nsh9k". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 10 19:11:40 crc kubenswrapper[4861]: I0310 19:11:40.095479 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"c16ced7d-2645-42db-abc8-266267b6de4c","Type":"ContainerDied","Data":"f21bfabba3e1f521f8d6b5e0d369f438b7bcd0afd970a0a2b7d05bc1438f0a38"} Mar 10 19:11:40 crc kubenswrapper[4861]: I0310 19:11:40.095533 4861 scope.go:117] "RemoveContainer" containerID="8724e9f95caf76f2229af962d390f418057ba1ded4a3ff3dce212acea3927f0b" Mar 10 19:11:40 crc kubenswrapper[4861]: I0310 19:11:40.095629 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Mar 10 19:11:40 crc kubenswrapper[4861]: I0310 19:11:40.098446 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/256b5814-23a7-4f27-8c86-544ec5290a5d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "256b5814-23a7-4f27-8c86-544ec5290a5d" (UID: "256b5814-23a7-4f27-8c86-544ec5290a5d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 10 19:11:40 crc kubenswrapper[4861]: I0310 19:11:40.111894 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/256b5814-23a7-4f27-8c86-544ec5290a5d-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "256b5814-23a7-4f27-8c86-544ec5290a5d" (UID: "256b5814-23a7-4f27-8c86-544ec5290a5d"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 10 19:11:40 crc kubenswrapper[4861]: I0310 19:11:40.150274 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/256b5814-23a7-4f27-8c86-544ec5290a5d-config-data" (OuterVolumeSpecName: "config-data") pod "256b5814-23a7-4f27-8c86-544ec5290a5d" (UID: "256b5814-23a7-4f27-8c86-544ec5290a5d"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 10 19:11:40 crc kubenswrapper[4861]: I0310 19:11:40.152205 4861 generic.go:334] "Generic (PLEG): container finished" podID="256b5814-23a7-4f27-8c86-544ec5290a5d" containerID="b77e6c457c419260599ed14561da4047f805348e89359001743a2899ad1e13a0" exitCode=0 Mar 10 19:11:40 crc kubenswrapper[4861]: I0310 19:11:40.152349 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"256b5814-23a7-4f27-8c86-544ec5290a5d","Type":"ContainerDied","Data":"b77e6c457c419260599ed14561da4047f805348e89359001743a2899ad1e13a0"} Mar 10 19:11:40 crc kubenswrapper[4861]: I0310 19:11:40.152466 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"256b5814-23a7-4f27-8c86-544ec5290a5d","Type":"ContainerDied","Data":"85b9d7e9853620343a5daef3f5bf4d32ae84808e9e27f0e886e858e57a7a352d"} Mar 10 19:11:40 crc kubenswrapper[4861]: I0310 19:11:40.164050 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Mar 10 19:11:40 crc kubenswrapper[4861]: I0310 19:11:40.166437 4861 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" " Mar 10 19:11:40 crc kubenswrapper[4861]: I0310 19:11:40.166476 4861 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/256b5814-23a7-4f27-8c86-544ec5290a5d-config-data\") on node \"crc\" DevicePath \"\"" Mar 10 19:11:40 crc kubenswrapper[4861]: I0310 19:11:40.166491 4861 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/256b5814-23a7-4f27-8c86-544ec5290a5d-logs\") on node \"crc\" DevicePath \"\"" Mar 10 19:11:40 crc kubenswrapper[4861]: I0310 19:11:40.166504 4861 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/256b5814-23a7-4f27-8c86-544ec5290a5d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 10 19:11:40 crc kubenswrapper[4861]: I0310 19:11:40.166514 4861 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/256b5814-23a7-4f27-8c86-544ec5290a5d-httpd-run\") on node \"crc\" DevicePath \"\"" Mar 10 19:11:40 crc kubenswrapper[4861]: I0310 19:11:40.166524 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nsh9k\" (UniqueName: \"kubernetes.io/projected/256b5814-23a7-4f27-8c86-544ec5290a5d-kube-api-access-nsh9k\") on node \"crc\" DevicePath \"\"" Mar 10 19:11:40 crc kubenswrapper[4861]: I0310 19:11:40.166533 4861 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/256b5814-23a7-4f27-8c86-544ec5290a5d-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 10 19:11:40 crc kubenswrapper[4861]: I0310 19:11:40.166544 4861 
reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/256b5814-23a7-4f27-8c86-544ec5290a5d-scripts\") on node \"crc\" DevicePath \"\"" Mar 10 19:11:40 crc kubenswrapper[4861]: I0310 19:11:40.167885 4861 scope.go:117] "RemoveContainer" containerID="eba307c015ebf3b6494ca54b629a1ceb28eb5186989480dc651d2c2ed68bfb9e" Mar 10 19:11:40 crc kubenswrapper[4861]: I0310 19:11:40.226041 4861 scope.go:117] "RemoveContainer" containerID="b77e6c457c419260599ed14561da4047f805348e89359001743a2899ad1e13a0" Mar 10 19:11:40 crc kubenswrapper[4861]: I0310 19:11:40.261660 4861 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage10-crc" (UniqueName: "kubernetes.io/local-volume/local-storage10-crc") on node "crc" Mar 10 19:11:40 crc kubenswrapper[4861]: I0310 19:11:40.267180 4861 reconciler_common.go:293] "Volume detached for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" DevicePath \"\"" Mar 10 19:11:40 crc kubenswrapper[4861]: I0310 19:11:40.285600 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Mar 10 19:11:40 crc kubenswrapper[4861]: I0310 19:11:40.302491 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Mar 10 19:11:40 crc kubenswrapper[4861]: I0310 19:11:40.323485 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Mar 10 19:11:40 crc kubenswrapper[4861]: E0310 19:11:40.324777 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="256b5814-23a7-4f27-8c86-544ec5290a5d" containerName="glance-log" Mar 10 19:11:40 crc kubenswrapper[4861]: I0310 19:11:40.324800 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="256b5814-23a7-4f27-8c86-544ec5290a5d" containerName="glance-log" Mar 10 19:11:40 crc kubenswrapper[4861]: E0310 19:11:40.324832 4861 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="256b5814-23a7-4f27-8c86-544ec5290a5d" containerName="glance-httpd" Mar 10 19:11:40 crc kubenswrapper[4861]: I0310 19:11:40.324839 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="256b5814-23a7-4f27-8c86-544ec5290a5d" containerName="glance-httpd" Mar 10 19:11:40 crc kubenswrapper[4861]: I0310 19:11:40.325020 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="256b5814-23a7-4f27-8c86-544ec5290a5d" containerName="glance-httpd" Mar 10 19:11:40 crc kubenswrapper[4861]: I0310 19:11:40.325038 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="256b5814-23a7-4f27-8c86-544ec5290a5d" containerName="glance-log" Mar 10 19:11:40 crc kubenswrapper[4861]: I0310 19:11:40.334945 4861 scope.go:117] "RemoveContainer" containerID="050c2706b3b070851fe6943bb2b0af35e18d175f88f9f0b0cad25c40580529e2" Mar 10 19:11:40 crc kubenswrapper[4861]: I0310 19:11:40.335142 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Mar 10 19:11:40 crc kubenswrapper[4861]: I0310 19:11:40.339965 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Mar 10 19:11:40 crc kubenswrapper[4861]: I0310 19:11:40.340157 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-6lrgm" Mar 10 19:11:40 crc kubenswrapper[4861]: I0310 19:11:40.340272 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Mar 10 19:11:40 crc kubenswrapper[4861]: I0310 19:11:40.340389 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Mar 10 19:11:40 crc kubenswrapper[4861]: I0310 19:11:40.343615 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Mar 10 19:11:40 crc kubenswrapper[4861]: I0310 19:11:40.373745 4861 
kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Mar 10 19:11:40 crc kubenswrapper[4861]: I0310 19:11:40.404363 4861 scope.go:117] "RemoveContainer" containerID="b77e6c457c419260599ed14561da4047f805348e89359001743a2899ad1e13a0" Mar 10 19:11:40 crc kubenswrapper[4861]: E0310 19:11:40.408261 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b77e6c457c419260599ed14561da4047f805348e89359001743a2899ad1e13a0\": container with ID starting with b77e6c457c419260599ed14561da4047f805348e89359001743a2899ad1e13a0 not found: ID does not exist" containerID="b77e6c457c419260599ed14561da4047f805348e89359001743a2899ad1e13a0" Mar 10 19:11:40 crc kubenswrapper[4861]: I0310 19:11:40.408292 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b77e6c457c419260599ed14561da4047f805348e89359001743a2899ad1e13a0"} err="failed to get container status \"b77e6c457c419260599ed14561da4047f805348e89359001743a2899ad1e13a0\": rpc error: code = NotFound desc = could not find container \"b77e6c457c419260599ed14561da4047f805348e89359001743a2899ad1e13a0\": container with ID starting with b77e6c457c419260599ed14561da4047f805348e89359001743a2899ad1e13a0 not found: ID does not exist" Mar 10 19:11:40 crc kubenswrapper[4861]: I0310 19:11:40.408311 4861 scope.go:117] "RemoveContainer" containerID="050c2706b3b070851fe6943bb2b0af35e18d175f88f9f0b0cad25c40580529e2" Mar 10 19:11:40 crc kubenswrapper[4861]: E0310 19:11:40.411916 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"050c2706b3b070851fe6943bb2b0af35e18d175f88f9f0b0cad25c40580529e2\": container with ID starting with 050c2706b3b070851fe6943bb2b0af35e18d175f88f9f0b0cad25c40580529e2 not found: ID does not exist" containerID="050c2706b3b070851fe6943bb2b0af35e18d175f88f9f0b0cad25c40580529e2" Mar 10 19:11:40 crc 
kubenswrapper[4861]: I0310 19:11:40.411958 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"050c2706b3b070851fe6943bb2b0af35e18d175f88f9f0b0cad25c40580529e2"} err="failed to get container status \"050c2706b3b070851fe6943bb2b0af35e18d175f88f9f0b0cad25c40580529e2\": rpc error: code = NotFound desc = could not find container \"050c2706b3b070851fe6943bb2b0af35e18d175f88f9f0b0cad25c40580529e2\": container with ID starting with 050c2706b3b070851fe6943bb2b0af35e18d175f88f9f0b0cad25c40580529e2 not found: ID does not exist" Mar 10 19:11:40 crc kubenswrapper[4861]: I0310 19:11:40.413821 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Mar 10 19:11:40 crc kubenswrapper[4861]: I0310 19:11:40.451392 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Mar 10 19:11:40 crc kubenswrapper[4861]: I0310 19:11:40.456191 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Mar 10 19:11:40 crc kubenswrapper[4861]: I0310 19:11:40.458736 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Mar 10 19:11:40 crc kubenswrapper[4861]: I0310 19:11:40.458888 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Mar 10 19:11:40 crc kubenswrapper[4861]: I0310 19:11:40.464161 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Mar 10 19:11:40 crc kubenswrapper[4861]: I0310 19:11:40.470285 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ef3a31e3-d3ba-4f5c-950a-1355bb61f657-scripts\") pod \"glance-default-external-api-0\" (UID: \"ef3a31e3-d3ba-4f5c-950a-1355bb61f657\") " pod="openstack/glance-default-external-api-0" Mar 10 19:11:40 crc kubenswrapper[4861]: I0310 19:11:40.470386 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ef3a31e3-d3ba-4f5c-950a-1355bb61f657-logs\") pod \"glance-default-external-api-0\" (UID: \"ef3a31e3-d3ba-4f5c-950a-1355bb61f657\") " pod="openstack/glance-default-external-api-0" Mar 10 19:11:40 crc kubenswrapper[4861]: I0310 19:11:40.470409 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ef3a31e3-d3ba-4f5c-950a-1355bb61f657-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"ef3a31e3-d3ba-4f5c-950a-1355bb61f657\") " pod="openstack/glance-default-external-api-0" Mar 10 19:11:40 crc kubenswrapper[4861]: I0310 19:11:40.470429 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: 
\"kubernetes.io/empty-dir/ef3a31e3-d3ba-4f5c-950a-1355bb61f657-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"ef3a31e3-d3ba-4f5c-950a-1355bb61f657\") " pod="openstack/glance-default-external-api-0" Mar 10 19:11:40 crc kubenswrapper[4861]: I0310 19:11:40.470448 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-external-api-0\" (UID: \"ef3a31e3-d3ba-4f5c-950a-1355bb61f657\") " pod="openstack/glance-default-external-api-0" Mar 10 19:11:40 crc kubenswrapper[4861]: I0310 19:11:40.470465 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zbc9q\" (UniqueName: \"kubernetes.io/projected/ef3a31e3-d3ba-4f5c-950a-1355bb61f657-kube-api-access-zbc9q\") pod \"glance-default-external-api-0\" (UID: \"ef3a31e3-d3ba-4f5c-950a-1355bb61f657\") " pod="openstack/glance-default-external-api-0" Mar 10 19:11:40 crc kubenswrapper[4861]: I0310 19:11:40.470483 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ef3a31e3-d3ba-4f5c-950a-1355bb61f657-config-data\") pod \"glance-default-external-api-0\" (UID: \"ef3a31e3-d3ba-4f5c-950a-1355bb61f657\") " pod="openstack/glance-default-external-api-0" Mar 10 19:11:40 crc kubenswrapper[4861]: I0310 19:11:40.470529 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ef3a31e3-d3ba-4f5c-950a-1355bb61f657-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"ef3a31e3-d3ba-4f5c-950a-1355bb61f657\") " pod="openstack/glance-default-external-api-0" Mar 10 19:11:40 crc kubenswrapper[4861]: I0310 19:11:40.494041 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Mar 10 19:11:40 
crc kubenswrapper[4861]: I0310 19:11:40.503539 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Mar 10 19:11:40 crc kubenswrapper[4861]: I0310 19:11:40.572458 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-external-api-0\" (UID: \"ef3a31e3-d3ba-4f5c-950a-1355bb61f657\") " pod="openstack/glance-default-external-api-0" Mar 10 19:11:40 crc kubenswrapper[4861]: I0310 19:11:40.572503 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zbc9q\" (UniqueName: \"kubernetes.io/projected/ef3a31e3-d3ba-4f5c-950a-1355bb61f657-kube-api-access-zbc9q\") pod \"glance-default-external-api-0\" (UID: \"ef3a31e3-d3ba-4f5c-950a-1355bb61f657\") " pod="openstack/glance-default-external-api-0" Mar 10 19:11:40 crc kubenswrapper[4861]: I0310 19:11:40.572524 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ef3a31e3-d3ba-4f5c-950a-1355bb61f657-config-data\") pod \"glance-default-external-api-0\" (UID: \"ef3a31e3-d3ba-4f5c-950a-1355bb61f657\") " pod="openstack/glance-default-external-api-0" Mar 10 19:11:40 crc kubenswrapper[4861]: I0310 19:11:40.572548 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/509298b8-3d6b-4182-b989-c25c4791ce6b-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"509298b8-3d6b-4182-b989-c25c4791ce6b\") " pod="openstack/glance-default-internal-api-0" Mar 10 19:11:40 crc kubenswrapper[4861]: I0310 19:11:40.572594 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/509298b8-3d6b-4182-b989-c25c4791ce6b-logs\") pod \"glance-default-internal-api-0\" 
(UID: \"509298b8-3d6b-4182-b989-c25c4791ce6b\") " pod="openstack/glance-default-internal-api-0" Mar 10 19:11:40 crc kubenswrapper[4861]: I0310 19:11:40.572644 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ef3a31e3-d3ba-4f5c-950a-1355bb61f657-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"ef3a31e3-d3ba-4f5c-950a-1355bb61f657\") " pod="openstack/glance-default-external-api-0" Mar 10 19:11:40 crc kubenswrapper[4861]: I0310 19:11:40.572769 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ef3a31e3-d3ba-4f5c-950a-1355bb61f657-scripts\") pod \"glance-default-external-api-0\" (UID: \"ef3a31e3-d3ba-4f5c-950a-1355bb61f657\") " pod="openstack/glance-default-external-api-0" Mar 10 19:11:40 crc kubenswrapper[4861]: I0310 19:11:40.572915 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6wwh9\" (UniqueName: \"kubernetes.io/projected/509298b8-3d6b-4182-b989-c25c4791ce6b-kube-api-access-6wwh9\") pod \"glance-default-internal-api-0\" (UID: \"509298b8-3d6b-4182-b989-c25c4791ce6b\") " pod="openstack/glance-default-internal-api-0" Mar 10 19:11:40 crc kubenswrapper[4861]: I0310 19:11:40.572966 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/509298b8-3d6b-4182-b989-c25c4791ce6b-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"509298b8-3d6b-4182-b989-c25c4791ce6b\") " pod="openstack/glance-default-internal-api-0" Mar 10 19:11:40 crc kubenswrapper[4861]: I0310 19:11:40.573010 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/509298b8-3d6b-4182-b989-c25c4791ce6b-scripts\") pod \"glance-default-internal-api-0\" (UID: 
\"509298b8-3d6b-4182-b989-c25c4791ce6b\") " pod="openstack/glance-default-internal-api-0" Mar 10 19:11:40 crc kubenswrapper[4861]: I0310 19:11:40.573087 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/509298b8-3d6b-4182-b989-c25c4791ce6b-config-data\") pod \"glance-default-internal-api-0\" (UID: \"509298b8-3d6b-4182-b989-c25c4791ce6b\") " pod="openstack/glance-default-internal-api-0" Mar 10 19:11:40 crc kubenswrapper[4861]: I0310 19:11:40.573119 4861 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-external-api-0\" (UID: \"ef3a31e3-d3ba-4f5c-950a-1355bb61f657\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/glance-default-external-api-0" Mar 10 19:11:40 crc kubenswrapper[4861]: I0310 19:11:40.573439 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/509298b8-3d6b-4182-b989-c25c4791ce6b-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"509298b8-3d6b-4182-b989-c25c4791ce6b\") " pod="openstack/glance-default-internal-api-0" Mar 10 19:11:40 crc kubenswrapper[4861]: I0310 19:11:40.573502 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-internal-api-0\" (UID: \"509298b8-3d6b-4182-b989-c25c4791ce6b\") " pod="openstack/glance-default-internal-api-0" Mar 10 19:11:40 crc kubenswrapper[4861]: I0310 19:11:40.573701 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ef3a31e3-d3ba-4f5c-950a-1355bb61f657-logs\") pod \"glance-default-external-api-0\" (UID: 
\"ef3a31e3-d3ba-4f5c-950a-1355bb61f657\") " pod="openstack/glance-default-external-api-0" Mar 10 19:11:40 crc kubenswrapper[4861]: I0310 19:11:40.573775 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ef3a31e3-d3ba-4f5c-950a-1355bb61f657-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"ef3a31e3-d3ba-4f5c-950a-1355bb61f657\") " pod="openstack/glance-default-external-api-0" Mar 10 19:11:40 crc kubenswrapper[4861]: I0310 19:11:40.573825 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ef3a31e3-d3ba-4f5c-950a-1355bb61f657-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"ef3a31e3-d3ba-4f5c-950a-1355bb61f657\") " pod="openstack/glance-default-external-api-0" Mar 10 19:11:40 crc kubenswrapper[4861]: I0310 19:11:40.574233 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ef3a31e3-d3ba-4f5c-950a-1355bb61f657-logs\") pod \"glance-default-external-api-0\" (UID: \"ef3a31e3-d3ba-4f5c-950a-1355bb61f657\") " pod="openstack/glance-default-external-api-0" Mar 10 19:11:40 crc kubenswrapper[4861]: I0310 19:11:40.574308 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ef3a31e3-d3ba-4f5c-950a-1355bb61f657-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"ef3a31e3-d3ba-4f5c-950a-1355bb61f657\") " pod="openstack/glance-default-external-api-0" Mar 10 19:11:40 crc kubenswrapper[4861]: I0310 19:11:40.578264 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ef3a31e3-d3ba-4f5c-950a-1355bb61f657-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"ef3a31e3-d3ba-4f5c-950a-1355bb61f657\") " pod="openstack/glance-default-external-api-0" Mar 10 
19:11:40 crc kubenswrapper[4861]: I0310 19:11:40.579359 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ef3a31e3-d3ba-4f5c-950a-1355bb61f657-config-data\") pod \"glance-default-external-api-0\" (UID: \"ef3a31e3-d3ba-4f5c-950a-1355bb61f657\") " pod="openstack/glance-default-external-api-0" Mar 10 19:11:40 crc kubenswrapper[4861]: I0310 19:11:40.587913 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zbc9q\" (UniqueName: \"kubernetes.io/projected/ef3a31e3-d3ba-4f5c-950a-1355bb61f657-kube-api-access-zbc9q\") pod \"glance-default-external-api-0\" (UID: \"ef3a31e3-d3ba-4f5c-950a-1355bb61f657\") " pod="openstack/glance-default-external-api-0" Mar 10 19:11:40 crc kubenswrapper[4861]: I0310 19:11:40.591349 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ef3a31e3-d3ba-4f5c-950a-1355bb61f657-scripts\") pod \"glance-default-external-api-0\" (UID: \"ef3a31e3-d3ba-4f5c-950a-1355bb61f657\") " pod="openstack/glance-default-external-api-0" Mar 10 19:11:40 crc kubenswrapper[4861]: I0310 19:11:40.593582 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ef3a31e3-d3ba-4f5c-950a-1355bb61f657-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"ef3a31e3-d3ba-4f5c-950a-1355bb61f657\") " pod="openstack/glance-default-external-api-0" Mar 10 19:11:40 crc kubenswrapper[4861]: I0310 19:11:40.624631 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-external-api-0\" (UID: \"ef3a31e3-d3ba-4f5c-950a-1355bb61f657\") " pod="openstack/glance-default-external-api-0" Mar 10 19:11:40 crc kubenswrapper[4861]: I0310 19:11:40.675798 4861 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/509298b8-3d6b-4182-b989-c25c4791ce6b-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"509298b8-3d6b-4182-b989-c25c4791ce6b\") " pod="openstack/glance-default-internal-api-0" Mar 10 19:11:40 crc kubenswrapper[4861]: I0310 19:11:40.676140 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/509298b8-3d6b-4182-b989-c25c4791ce6b-logs\") pod \"glance-default-internal-api-0\" (UID: \"509298b8-3d6b-4182-b989-c25c4791ce6b\") " pod="openstack/glance-default-internal-api-0" Mar 10 19:11:40 crc kubenswrapper[4861]: I0310 19:11:40.676259 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6wwh9\" (UniqueName: \"kubernetes.io/projected/509298b8-3d6b-4182-b989-c25c4791ce6b-kube-api-access-6wwh9\") pod \"glance-default-internal-api-0\" (UID: \"509298b8-3d6b-4182-b989-c25c4791ce6b\") " pod="openstack/glance-default-internal-api-0" Mar 10 19:11:40 crc kubenswrapper[4861]: I0310 19:11:40.676381 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/509298b8-3d6b-4182-b989-c25c4791ce6b-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"509298b8-3d6b-4182-b989-c25c4791ce6b\") " pod="openstack/glance-default-internal-api-0" Mar 10 19:11:40 crc kubenswrapper[4861]: I0310 19:11:40.676489 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/509298b8-3d6b-4182-b989-c25c4791ce6b-scripts\") pod \"glance-default-internal-api-0\" (UID: \"509298b8-3d6b-4182-b989-c25c4791ce6b\") " pod="openstack/glance-default-internal-api-0" Mar 10 19:11:40 crc kubenswrapper[4861]: I0310 19:11:40.676598 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/509298b8-3d6b-4182-b989-c25c4791ce6b-config-data\") pod \"glance-default-internal-api-0\" (UID: \"509298b8-3d6b-4182-b989-c25c4791ce6b\") " pod="openstack/glance-default-internal-api-0" Mar 10 19:11:40 crc kubenswrapper[4861]: I0310 19:11:40.676686 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/509298b8-3d6b-4182-b989-c25c4791ce6b-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"509298b8-3d6b-4182-b989-c25c4791ce6b\") " pod="openstack/glance-default-internal-api-0" Mar 10 19:11:40 crc kubenswrapper[4861]: I0310 19:11:40.676828 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-internal-api-0\" (UID: \"509298b8-3d6b-4182-b989-c25c4791ce6b\") " pod="openstack/glance-default-internal-api-0" Mar 10 19:11:40 crc kubenswrapper[4861]: I0310 19:11:40.676620 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/509298b8-3d6b-4182-b989-c25c4791ce6b-logs\") pod \"glance-default-internal-api-0\" (UID: \"509298b8-3d6b-4182-b989-c25c4791ce6b\") " pod="openstack/glance-default-internal-api-0" Mar 10 19:11:40 crc kubenswrapper[4861]: I0310 19:11:40.677021 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/509298b8-3d6b-4182-b989-c25c4791ce6b-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"509298b8-3d6b-4182-b989-c25c4791ce6b\") " pod="openstack/glance-default-internal-api-0" Mar 10 19:11:40 crc kubenswrapper[4861]: I0310 19:11:40.678060 4861 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-internal-api-0\" (UID: 
\"509298b8-3d6b-4182-b989-c25c4791ce6b\") device mount path \"/mnt/openstack/pv10\"" pod="openstack/glance-default-internal-api-0" Mar 10 19:11:40 crc kubenswrapper[4861]: I0310 19:11:40.680690 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/509298b8-3d6b-4182-b989-c25c4791ce6b-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"509298b8-3d6b-4182-b989-c25c4791ce6b\") " pod="openstack/glance-default-internal-api-0" Mar 10 19:11:40 crc kubenswrapper[4861]: I0310 19:11:40.681123 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/509298b8-3d6b-4182-b989-c25c4791ce6b-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"509298b8-3d6b-4182-b989-c25c4791ce6b\") " pod="openstack/glance-default-internal-api-0" Mar 10 19:11:40 crc kubenswrapper[4861]: I0310 19:11:40.681498 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/509298b8-3d6b-4182-b989-c25c4791ce6b-scripts\") pod \"glance-default-internal-api-0\" (UID: \"509298b8-3d6b-4182-b989-c25c4791ce6b\") " pod="openstack/glance-default-internal-api-0" Mar 10 19:11:40 crc kubenswrapper[4861]: I0310 19:11:40.682083 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/509298b8-3d6b-4182-b989-c25c4791ce6b-config-data\") pod \"glance-default-internal-api-0\" (UID: \"509298b8-3d6b-4182-b989-c25c4791ce6b\") " pod="openstack/glance-default-internal-api-0" Mar 10 19:11:40 crc kubenswrapper[4861]: I0310 19:11:40.694610 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6wwh9\" (UniqueName: \"kubernetes.io/projected/509298b8-3d6b-4182-b989-c25c4791ce6b-kube-api-access-6wwh9\") pod \"glance-default-internal-api-0\" (UID: \"509298b8-3d6b-4182-b989-c25c4791ce6b\") " 
pod="openstack/glance-default-internal-api-0" Mar 10 19:11:40 crc kubenswrapper[4861]: I0310 19:11:40.702107 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-internal-api-0\" (UID: \"509298b8-3d6b-4182-b989-c25c4791ce6b\") " pod="openstack/glance-default-internal-api-0" Mar 10 19:11:40 crc kubenswrapper[4861]: I0310 19:11:40.718585 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Mar 10 19:11:40 crc kubenswrapper[4861]: I0310 19:11:40.793823 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Mar 10 19:11:40 crc kubenswrapper[4861]: I0310 19:11:40.990413 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="256b5814-23a7-4f27-8c86-544ec5290a5d" path="/var/lib/kubelet/pods/256b5814-23a7-4f27-8c86-544ec5290a5d/volumes" Mar 10 19:11:40 crc kubenswrapper[4861]: I0310 19:11:40.991754 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b2448498-19da-4b93-b300-6425cb5ab4ba" path="/var/lib/kubelet/pods/b2448498-19da-4b93-b300-6425cb5ab4ba/volumes" Mar 10 19:11:40 crc kubenswrapper[4861]: I0310 19:11:40.993518 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c16ced7d-2645-42db-abc8-266267b6de4c" path="/var/lib/kubelet/pods/c16ced7d-2645-42db-abc8-266267b6de4c/volumes" Mar 10 19:11:41 crc kubenswrapper[4861]: I0310 19:11:41.169181 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"80dd8414-6eac-4a23-b177-8f190c059bf5","Type":"ContainerStarted","Data":"3a064168b2954b8a9390794f973acd66eba9e8baf3320f221d1eaef3b65489c8"} Mar 10 19:11:41 crc kubenswrapper[4861]: I0310 19:11:41.352483 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Mar 10 
19:11:42 crc kubenswrapper[4861]: I0310 19:11:42.181153 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"80dd8414-6eac-4a23-b177-8f190c059bf5","Type":"ContainerStarted","Data":"6d64451acb96d511cdd93fc34d515a0f7f8bd43cf97236cb0d372cc462ac4da0"} Mar 10 19:11:42 crc kubenswrapper[4861]: I0310 19:11:42.184692 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"ef3a31e3-d3ba-4f5c-950a-1355bb61f657","Type":"ContainerStarted","Data":"90b5b8cd5ef23b88335ebf01c5c45b9491d2c08feb5b8b48a1120b2d60cb9a44"} Mar 10 19:11:42 crc kubenswrapper[4861]: I0310 19:11:42.184746 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"ef3a31e3-d3ba-4f5c-950a-1355bb61f657","Type":"ContainerStarted","Data":"42a4b209bcffc36e875f5d3604fbcdee8bae055f5c7543873ac0a25a5c12ee9f"} Mar 10 19:11:42 crc kubenswrapper[4861]: I0310 19:11:42.458761 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Mar 10 19:11:47 crc kubenswrapper[4861]: I0310 19:11:47.282351 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"509298b8-3d6b-4182-b989-c25c4791ce6b","Type":"ContainerStarted","Data":"643322ba32f068134ebcbe3ed48fde9698ce8f32188f5c30981008358a74a46e"} Mar 10 19:11:48 crc kubenswrapper[4861]: I0310 19:11:48.299232 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"509298b8-3d6b-4182-b989-c25c4791ce6b","Type":"ContainerStarted","Data":"439de4fa63cd6df8ebf550bbedc971f32ef2892cd74ec239cbd5feceb0c8a97b"} Mar 10 19:11:48 crc kubenswrapper[4861]: I0310 19:11:48.299891 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" 
event={"ID":"509298b8-3d6b-4182-b989-c25c4791ce6b","Type":"ContainerStarted","Data":"ad8cd63983d0c3e17993f8a5dd489855bb0d46375866f9a2ddec97fd0f615caf"} Mar 10 19:11:48 crc kubenswrapper[4861]: I0310 19:11:48.304223 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"ef3a31e3-d3ba-4f5c-950a-1355bb61f657","Type":"ContainerStarted","Data":"0a745593b95faccead341e781c555d776ec808c970f5a9b91a652ebcebc1c8a2"} Mar 10 19:11:48 crc kubenswrapper[4861]: I0310 19:11:48.307773 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-m77bt" event={"ID":"e61c59b6-f849-406f-8680-cb83de220b46","Type":"ContainerStarted","Data":"6d987ece4dd051bd98e8361217e86ae2fa65c0960145192eb7173b18775b49a5"} Mar 10 19:11:48 crc kubenswrapper[4861]: I0310 19:11:48.312310 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"80dd8414-6eac-4a23-b177-8f190c059bf5","Type":"ContainerStarted","Data":"d1342d2616b07fc02997a34b343522388e6f8f7a7c4ac0c324dc4715f356b783"} Mar 10 19:11:48 crc kubenswrapper[4861]: I0310 19:11:48.312362 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"80dd8414-6eac-4a23-b177-8f190c059bf5","Type":"ContainerStarted","Data":"609135a946635b89bac110d2230969121e284ce1adb277ea577fe084ae4603ae"} Mar 10 19:11:48 crc kubenswrapper[4861]: I0310 19:11:48.376335 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=8.376316799 podStartE2EDuration="8.376316799s" podCreationTimestamp="2026-03-10 19:11:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-10 19:11:48.338619452 +0000 UTC m=+1452.102055442" watchObservedRunningTime="2026-03-10 19:11:48.376316799 +0000 UTC m=+1452.139752759" Mar 10 19:11:48 crc kubenswrapper[4861]: 
I0310 19:11:48.378129 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=8.37812165 podStartE2EDuration="8.37812165s" podCreationTimestamp="2026-03-10 19:11:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-10 19:11:48.372232416 +0000 UTC m=+1452.135668386" watchObservedRunningTime="2026-03-10 19:11:48.37812165 +0000 UTC m=+1452.141557610" Mar 10 19:11:48 crc kubenswrapper[4861]: I0310 19:11:48.396057 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-db-sync-m77bt" podStartSLOduration=2.269820779 podStartE2EDuration="12.396044007s" podCreationTimestamp="2026-03-10 19:11:36 +0000 UTC" firstStartedPulling="2026-03-10 19:11:37.02379329 +0000 UTC m=+1440.787229250" lastFinishedPulling="2026-03-10 19:11:47.150016528 +0000 UTC m=+1450.913452478" observedRunningTime="2026-03-10 19:11:48.392064827 +0000 UTC m=+1452.155500787" watchObservedRunningTime="2026-03-10 19:11:48.396044007 +0000 UTC m=+1452.159479967" Mar 10 19:11:50 crc kubenswrapper[4861]: I0310 19:11:50.719682 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Mar 10 19:11:50 crc kubenswrapper[4861]: I0310 19:11:50.720542 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Mar 10 19:11:50 crc kubenswrapper[4861]: I0310 19:11:50.750102 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Mar 10 19:11:50 crc kubenswrapper[4861]: I0310 19:11:50.767938 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Mar 10 19:11:50 crc kubenswrapper[4861]: I0310 19:11:50.795189 4861 kubelet.go:2542] "SyncLoop (probe)" 
probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Mar 10 19:11:50 crc kubenswrapper[4861]: I0310 19:11:50.795265 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Mar 10 19:11:50 crc kubenswrapper[4861]: I0310 19:11:50.836541 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Mar 10 19:11:50 crc kubenswrapper[4861]: I0310 19:11:50.856221 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Mar 10 19:11:51 crc kubenswrapper[4861]: I0310 19:11:51.349957 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"80dd8414-6eac-4a23-b177-8f190c059bf5","Type":"ContainerStarted","Data":"ebf01e98977633c83727ca25d163d9d882866cc94b1f269e06c42c949f899a79"} Mar 10 19:11:51 crc kubenswrapper[4861]: I0310 19:11:51.350121 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Mar 10 19:11:51 crc kubenswrapper[4861]: I0310 19:11:51.350428 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Mar 10 19:11:51 crc kubenswrapper[4861]: I0310 19:11:51.350474 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Mar 10 19:11:51 crc kubenswrapper[4861]: I0310 19:11:51.350502 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Mar 10 19:11:51 crc kubenswrapper[4861]: I0310 19:11:51.350597 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="80dd8414-6eac-4a23-b177-8f190c059bf5" containerName="ceilometer-central-agent" containerID="cri-o://6d64451acb96d511cdd93fc34d515a0f7f8bd43cf97236cb0d372cc462ac4da0" gracePeriod=30 
Mar 10 19:11:51 crc kubenswrapper[4861]: I0310 19:11:51.350619 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="80dd8414-6eac-4a23-b177-8f190c059bf5" containerName="proxy-httpd" containerID="cri-o://ebf01e98977633c83727ca25d163d9d882866cc94b1f269e06c42c949f899a79" gracePeriod=30 Mar 10 19:11:51 crc kubenswrapper[4861]: I0310 19:11:51.350649 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="80dd8414-6eac-4a23-b177-8f190c059bf5" containerName="sg-core" containerID="cri-o://d1342d2616b07fc02997a34b343522388e6f8f7a7c4ac0c324dc4715f356b783" gracePeriod=30 Mar 10 19:11:51 crc kubenswrapper[4861]: I0310 19:11:51.350780 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="80dd8414-6eac-4a23-b177-8f190c059bf5" containerName="ceilometer-notification-agent" containerID="cri-o://609135a946635b89bac110d2230969121e284ce1adb277ea577fe084ae4603ae" gracePeriod=30 Mar 10 19:11:51 crc kubenswrapper[4861]: I0310 19:11:51.401496 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.9569202089999997 podStartE2EDuration="12.401473638s" podCreationTimestamp="2026-03-10 19:11:39 +0000 UTC" firstStartedPulling="2026-03-10 19:11:40.455599057 +0000 UTC m=+1444.219035017" lastFinishedPulling="2026-03-10 19:11:49.900152496 +0000 UTC m=+1453.663588446" observedRunningTime="2026-03-10 19:11:51.386245935 +0000 UTC m=+1455.149681905" watchObservedRunningTime="2026-03-10 19:11:51.401473638 +0000 UTC m=+1455.164909608" Mar 10 19:11:52 crc kubenswrapper[4861]: I0310 19:11:52.370514 4861 generic.go:334] "Generic (PLEG): container finished" podID="80dd8414-6eac-4a23-b177-8f190c059bf5" containerID="ebf01e98977633c83727ca25d163d9d882866cc94b1f269e06c42c949f899a79" exitCode=0 Mar 10 19:11:52 crc kubenswrapper[4861]: I0310 19:11:52.371146 4861 
generic.go:334] "Generic (PLEG): container finished" podID="80dd8414-6eac-4a23-b177-8f190c059bf5" containerID="d1342d2616b07fc02997a34b343522388e6f8f7a7c4ac0c324dc4715f356b783" exitCode=2 Mar 10 19:11:52 crc kubenswrapper[4861]: I0310 19:11:52.371179 4861 generic.go:334] "Generic (PLEG): container finished" podID="80dd8414-6eac-4a23-b177-8f190c059bf5" containerID="609135a946635b89bac110d2230969121e284ce1adb277ea577fe084ae4603ae" exitCode=0 Mar 10 19:11:52 crc kubenswrapper[4861]: I0310 19:11:52.371236 4861 generic.go:334] "Generic (PLEG): container finished" podID="80dd8414-6eac-4a23-b177-8f190c059bf5" containerID="6d64451acb96d511cdd93fc34d515a0f7f8bd43cf97236cb0d372cc462ac4da0" exitCode=0 Mar 10 19:11:52 crc kubenswrapper[4861]: I0310 19:11:52.370575 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"80dd8414-6eac-4a23-b177-8f190c059bf5","Type":"ContainerDied","Data":"ebf01e98977633c83727ca25d163d9d882866cc94b1f269e06c42c949f899a79"} Mar 10 19:11:52 crc kubenswrapper[4861]: I0310 19:11:52.372451 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"80dd8414-6eac-4a23-b177-8f190c059bf5","Type":"ContainerDied","Data":"d1342d2616b07fc02997a34b343522388e6f8f7a7c4ac0c324dc4715f356b783"} Mar 10 19:11:52 crc kubenswrapper[4861]: I0310 19:11:52.372481 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"80dd8414-6eac-4a23-b177-8f190c059bf5","Type":"ContainerDied","Data":"609135a946635b89bac110d2230969121e284ce1adb277ea577fe084ae4603ae"} Mar 10 19:11:52 crc kubenswrapper[4861]: I0310 19:11:52.372503 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"80dd8414-6eac-4a23-b177-8f190c059bf5","Type":"ContainerDied","Data":"6d64451acb96d511cdd93fc34d515a0f7f8bd43cf97236cb0d372cc462ac4da0"} Mar 10 19:11:54 crc kubenswrapper[4861]: I0310 19:11:54.044475 4861 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openstack/glance-default-external-api-0"
Mar 10 19:11:54 crc kubenswrapper[4861]: I0310 19:11:54.046014 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0"
Mar 10 19:11:54 crc kubenswrapper[4861]: I0310 19:11:54.102837 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0"
Mar 10 19:11:54 crc kubenswrapper[4861]: I0310 19:11:54.150699 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Mar 10 19:11:54 crc kubenswrapper[4861]: I0310 19:11:54.264782 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/80dd8414-6eac-4a23-b177-8f190c059bf5-scripts\") pod \"80dd8414-6eac-4a23-b177-8f190c059bf5\" (UID: \"80dd8414-6eac-4a23-b177-8f190c059bf5\") "
Mar 10 19:11:54 crc kubenswrapper[4861]: I0310 19:11:54.264834 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/80dd8414-6eac-4a23-b177-8f190c059bf5-sg-core-conf-yaml\") pod \"80dd8414-6eac-4a23-b177-8f190c059bf5\" (UID: \"80dd8414-6eac-4a23-b177-8f190c059bf5\") "
Mar 10 19:11:54 crc kubenswrapper[4861]: I0310 19:11:54.264854 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k7br6\" (UniqueName: \"kubernetes.io/projected/80dd8414-6eac-4a23-b177-8f190c059bf5-kube-api-access-k7br6\") pod \"80dd8414-6eac-4a23-b177-8f190c059bf5\" (UID: \"80dd8414-6eac-4a23-b177-8f190c059bf5\") "
Mar 10 19:11:54 crc kubenswrapper[4861]: I0310 19:11:54.264914 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/80dd8414-6eac-4a23-b177-8f190c059bf5-combined-ca-bundle\") pod \"80dd8414-6eac-4a23-b177-8f190c059bf5\" (UID: \"80dd8414-6eac-4a23-b177-8f190c059bf5\") "
Mar 10 19:11:54 crc kubenswrapper[4861]: I0310 19:11:54.264952 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/80dd8414-6eac-4a23-b177-8f190c059bf5-config-data\") pod \"80dd8414-6eac-4a23-b177-8f190c059bf5\" (UID: \"80dd8414-6eac-4a23-b177-8f190c059bf5\") "
Mar 10 19:11:54 crc kubenswrapper[4861]: I0310 19:11:54.265006 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/80dd8414-6eac-4a23-b177-8f190c059bf5-log-httpd\") pod \"80dd8414-6eac-4a23-b177-8f190c059bf5\" (UID: \"80dd8414-6eac-4a23-b177-8f190c059bf5\") "
Mar 10 19:11:54 crc kubenswrapper[4861]: I0310 19:11:54.265103 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/80dd8414-6eac-4a23-b177-8f190c059bf5-run-httpd\") pod \"80dd8414-6eac-4a23-b177-8f190c059bf5\" (UID: \"80dd8414-6eac-4a23-b177-8f190c059bf5\") "
Mar 10 19:11:54 crc kubenswrapper[4861]: I0310 19:11:54.266073 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/80dd8414-6eac-4a23-b177-8f190c059bf5-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "80dd8414-6eac-4a23-b177-8f190c059bf5" (UID: "80dd8414-6eac-4a23-b177-8f190c059bf5"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 10 19:11:54 crc kubenswrapper[4861]: I0310 19:11:54.266219 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/80dd8414-6eac-4a23-b177-8f190c059bf5-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "80dd8414-6eac-4a23-b177-8f190c059bf5" (UID: "80dd8414-6eac-4a23-b177-8f190c059bf5"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 10 19:11:54 crc kubenswrapper[4861]: I0310 19:11:54.274094 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/80dd8414-6eac-4a23-b177-8f190c059bf5-kube-api-access-k7br6" (OuterVolumeSpecName: "kube-api-access-k7br6") pod "80dd8414-6eac-4a23-b177-8f190c059bf5" (UID: "80dd8414-6eac-4a23-b177-8f190c059bf5"). InnerVolumeSpecName "kube-api-access-k7br6". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 10 19:11:54 crc kubenswrapper[4861]: I0310 19:11:54.274390 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/80dd8414-6eac-4a23-b177-8f190c059bf5-scripts" (OuterVolumeSpecName: "scripts") pod "80dd8414-6eac-4a23-b177-8f190c059bf5" (UID: "80dd8414-6eac-4a23-b177-8f190c059bf5"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 10 19:11:54 crc kubenswrapper[4861]: I0310 19:11:54.297808 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/80dd8414-6eac-4a23-b177-8f190c059bf5-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "80dd8414-6eac-4a23-b177-8f190c059bf5" (UID: "80dd8414-6eac-4a23-b177-8f190c059bf5"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 10 19:11:54 crc kubenswrapper[4861]: I0310 19:11:54.353796 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/80dd8414-6eac-4a23-b177-8f190c059bf5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "80dd8414-6eac-4a23-b177-8f190c059bf5" (UID: "80dd8414-6eac-4a23-b177-8f190c059bf5"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 10 19:11:54 crc kubenswrapper[4861]: I0310 19:11:54.367377 4861 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/80dd8414-6eac-4a23-b177-8f190c059bf5-log-httpd\") on node \"crc\" DevicePath \"\""
Mar 10 19:11:54 crc kubenswrapper[4861]: I0310 19:11:54.367410 4861 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/80dd8414-6eac-4a23-b177-8f190c059bf5-run-httpd\") on node \"crc\" DevicePath \"\""
Mar 10 19:11:54 crc kubenswrapper[4861]: I0310 19:11:54.367425 4861 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/80dd8414-6eac-4a23-b177-8f190c059bf5-scripts\") on node \"crc\" DevicePath \"\""
Mar 10 19:11:54 crc kubenswrapper[4861]: I0310 19:11:54.367437 4861 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/80dd8414-6eac-4a23-b177-8f190c059bf5-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\""
Mar 10 19:11:54 crc kubenswrapper[4861]: I0310 19:11:54.367450 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k7br6\" (UniqueName: \"kubernetes.io/projected/80dd8414-6eac-4a23-b177-8f190c059bf5-kube-api-access-k7br6\") on node \"crc\" DevicePath \"\""
Mar 10 19:11:54 crc kubenswrapper[4861]: I0310 19:11:54.367494 4861 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/80dd8414-6eac-4a23-b177-8f190c059bf5-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Mar 10 19:11:54 crc kubenswrapper[4861]: I0310 19:11:54.367632 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/80dd8414-6eac-4a23-b177-8f190c059bf5-config-data" (OuterVolumeSpecName: "config-data") pod "80dd8414-6eac-4a23-b177-8f190c059bf5" (UID: "80dd8414-6eac-4a23-b177-8f190c059bf5"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 10 19:11:54 crc kubenswrapper[4861]: I0310 19:11:54.394001 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"80dd8414-6eac-4a23-b177-8f190c059bf5","Type":"ContainerDied","Data":"3a064168b2954b8a9390794f973acd66eba9e8baf3320f221d1eaef3b65489c8"}
Mar 10 19:11:54 crc kubenswrapper[4861]: I0310 19:11:54.394064 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Mar 10 19:11:54 crc kubenswrapper[4861]: I0310 19:11:54.394072 4861 scope.go:117] "RemoveContainer" containerID="ebf01e98977633c83727ca25d163d9d882866cc94b1f269e06c42c949f899a79"
Mar 10 19:11:54 crc kubenswrapper[4861]: I0310 19:11:54.434372 4861 scope.go:117] "RemoveContainer" containerID="d1342d2616b07fc02997a34b343522388e6f8f7a7c4ac0c324dc4715f356b783"
Mar 10 19:11:54 crc kubenswrapper[4861]: I0310 19:11:54.442075 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Mar 10 19:11:54 crc kubenswrapper[4861]: I0310 19:11:54.462388 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"]
Mar 10 19:11:54 crc kubenswrapper[4861]: I0310 19:11:54.469575 4861 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/80dd8414-6eac-4a23-b177-8f190c059bf5-config-data\") on node \"crc\" DevicePath \"\""
Mar 10 19:11:54 crc kubenswrapper[4861]: I0310 19:11:54.471200 4861 scope.go:117] "RemoveContainer" containerID="609135a946635b89bac110d2230969121e284ce1adb277ea577fe084ae4603ae"
Mar 10 19:11:54 crc kubenswrapper[4861]: I0310 19:11:54.484217 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"]
Mar 10 19:11:54 crc kubenswrapper[4861]: E0310 19:11:54.484696 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="80dd8414-6eac-4a23-b177-8f190c059bf5" containerName="ceilometer-central-agent"
Mar 10 19:11:54 crc kubenswrapper[4861]: I0310 19:11:54.484736 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="80dd8414-6eac-4a23-b177-8f190c059bf5" containerName="ceilometer-central-agent"
Mar 10 19:11:54 crc kubenswrapper[4861]: E0310 19:11:54.484765 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="80dd8414-6eac-4a23-b177-8f190c059bf5" containerName="proxy-httpd"
Mar 10 19:11:54 crc kubenswrapper[4861]: I0310 19:11:54.484774 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="80dd8414-6eac-4a23-b177-8f190c059bf5" containerName="proxy-httpd"
Mar 10 19:11:54 crc kubenswrapper[4861]: E0310 19:11:54.484786 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="80dd8414-6eac-4a23-b177-8f190c059bf5" containerName="sg-core"
Mar 10 19:11:54 crc kubenswrapper[4861]: I0310 19:11:54.484795 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="80dd8414-6eac-4a23-b177-8f190c059bf5" containerName="sg-core"
Mar 10 19:11:54 crc kubenswrapper[4861]: E0310 19:11:54.484815 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="80dd8414-6eac-4a23-b177-8f190c059bf5" containerName="ceilometer-notification-agent"
Mar 10 19:11:54 crc kubenswrapper[4861]: I0310 19:11:54.484824 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="80dd8414-6eac-4a23-b177-8f190c059bf5" containerName="ceilometer-notification-agent"
Mar 10 19:11:54 crc kubenswrapper[4861]: I0310 19:11:54.485055 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="80dd8414-6eac-4a23-b177-8f190c059bf5" containerName="ceilometer-notification-agent"
Mar 10 19:11:54 crc kubenswrapper[4861]: I0310 19:11:54.485076 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="80dd8414-6eac-4a23-b177-8f190c059bf5" containerName="sg-core"
Mar 10 19:11:54 crc kubenswrapper[4861]: I0310 19:11:54.485099 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="80dd8414-6eac-4a23-b177-8f190c059bf5" containerName="ceilometer-central-agent"
Mar 10 19:11:54 crc kubenswrapper[4861]: I0310 19:11:54.485109 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="80dd8414-6eac-4a23-b177-8f190c059bf5" containerName="proxy-httpd"
Mar 10 19:11:54 crc kubenswrapper[4861]: I0310 19:11:54.496422 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Mar 10 19:11:54 crc kubenswrapper[4861]: I0310 19:11:54.496532 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Mar 10 19:11:54 crc kubenswrapper[4861]: I0310 19:11:54.499301 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data"
Mar 10 19:11:54 crc kubenswrapper[4861]: I0310 19:11:54.499763 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts"
Mar 10 19:11:54 crc kubenswrapper[4861]: I0310 19:11:54.524389 4861 scope.go:117] "RemoveContainer" containerID="6d64451acb96d511cdd93fc34d515a0f7f8bd43cf97236cb0d372cc462ac4da0"
Mar 10 19:11:54 crc kubenswrapper[4861]: I0310 19:11:54.571591 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9c54be36-e7cd-45db-bfe4-2c7e8d2d0231-config-data\") pod \"ceilometer-0\" (UID: \"9c54be36-e7cd-45db-bfe4-2c7e8d2d0231\") " pod="openstack/ceilometer-0"
Mar 10 19:11:54 crc kubenswrapper[4861]: I0310 19:11:54.571969 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9c54be36-e7cd-45db-bfe4-2c7e8d2d0231-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"9c54be36-e7cd-45db-bfe4-2c7e8d2d0231\") " pod="openstack/ceilometer-0"
Mar 10 19:11:54 crc kubenswrapper[4861]: I0310 19:11:54.572000 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9c54be36-e7cd-45db-bfe4-2c7e8d2d0231-scripts\") pod \"ceilometer-0\" (UID: \"9c54be36-e7cd-45db-bfe4-2c7e8d2d0231\") " pod="openstack/ceilometer-0"
Mar 10 19:11:54 crc kubenswrapper[4861]: I0310 19:11:54.572037 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9c54be36-e7cd-45db-bfe4-2c7e8d2d0231-run-httpd\") pod \"ceilometer-0\" (UID: \"9c54be36-e7cd-45db-bfe4-2c7e8d2d0231\") " pod="openstack/ceilometer-0"
Mar 10 19:11:54 crc kubenswrapper[4861]: I0310 19:11:54.572113 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9c54be36-e7cd-45db-bfe4-2c7e8d2d0231-log-httpd\") pod \"ceilometer-0\" (UID: \"9c54be36-e7cd-45db-bfe4-2c7e8d2d0231\") " pod="openstack/ceilometer-0"
Mar 10 19:11:54 crc kubenswrapper[4861]: I0310 19:11:54.572132 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9c54be36-e7cd-45db-bfe4-2c7e8d2d0231-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"9c54be36-e7cd-45db-bfe4-2c7e8d2d0231\") " pod="openstack/ceilometer-0"
Mar 10 19:11:54 crc kubenswrapper[4861]: I0310 19:11:54.572159 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4vjvw\" (UniqueName: \"kubernetes.io/projected/9c54be36-e7cd-45db-bfe4-2c7e8d2d0231-kube-api-access-4vjvw\") pod \"ceilometer-0\" (UID: \"9c54be36-e7cd-45db-bfe4-2c7e8d2d0231\") " pod="openstack/ceilometer-0"
Mar 10 19:11:54 crc kubenswrapper[4861]: I0310 19:11:54.674094 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9c54be36-e7cd-45db-bfe4-2c7e8d2d0231-run-httpd\") pod \"ceilometer-0\" (UID: \"9c54be36-e7cd-45db-bfe4-2c7e8d2d0231\") " pod="openstack/ceilometer-0"
Mar 10 19:11:54 crc kubenswrapper[4861]: I0310 19:11:54.674401 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9c54be36-e7cd-45db-bfe4-2c7e8d2d0231-log-httpd\") pod \"ceilometer-0\" (UID: \"9c54be36-e7cd-45db-bfe4-2c7e8d2d0231\") " pod="openstack/ceilometer-0"
Mar 10 19:11:54 crc kubenswrapper[4861]: I0310 19:11:54.674496 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9c54be36-e7cd-45db-bfe4-2c7e8d2d0231-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"9c54be36-e7cd-45db-bfe4-2c7e8d2d0231\") " pod="openstack/ceilometer-0"
Mar 10 19:11:54 crc kubenswrapper[4861]: I0310 19:11:54.674600 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4vjvw\" (UniqueName: \"kubernetes.io/projected/9c54be36-e7cd-45db-bfe4-2c7e8d2d0231-kube-api-access-4vjvw\") pod \"ceilometer-0\" (UID: \"9c54be36-e7cd-45db-bfe4-2c7e8d2d0231\") " pod="openstack/ceilometer-0"
Mar 10 19:11:54 crc kubenswrapper[4861]: I0310 19:11:54.674804 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9c54be36-e7cd-45db-bfe4-2c7e8d2d0231-config-data\") pod \"ceilometer-0\" (UID: \"9c54be36-e7cd-45db-bfe4-2c7e8d2d0231\") " pod="openstack/ceilometer-0"
Mar 10 19:11:54 crc kubenswrapper[4861]: I0310 19:11:54.674924 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9c54be36-e7cd-45db-bfe4-2c7e8d2d0231-run-httpd\") pod \"ceilometer-0\" (UID: \"9c54be36-e7cd-45db-bfe4-2c7e8d2d0231\") " pod="openstack/ceilometer-0"
Mar 10 19:11:54 crc kubenswrapper[4861]: I0310 19:11:54.675019 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9c54be36-e7cd-45db-bfe4-2c7e8d2d0231-log-httpd\") pod \"ceilometer-0\" (UID: \"9c54be36-e7cd-45db-bfe4-2c7e8d2d0231\") " pod="openstack/ceilometer-0"
Mar 10 19:11:54 crc kubenswrapper[4861]: I0310 19:11:54.675037 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9c54be36-e7cd-45db-bfe4-2c7e8d2d0231-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"9c54be36-e7cd-45db-bfe4-2c7e8d2d0231\") " pod="openstack/ceilometer-0"
Mar 10 19:11:54 crc kubenswrapper[4861]: I0310 19:11:54.675234 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9c54be36-e7cd-45db-bfe4-2c7e8d2d0231-scripts\") pod \"ceilometer-0\" (UID: \"9c54be36-e7cd-45db-bfe4-2c7e8d2d0231\") " pod="openstack/ceilometer-0"
Mar 10 19:11:54 crc kubenswrapper[4861]: I0310 19:11:54.679273 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9c54be36-e7cd-45db-bfe4-2c7e8d2d0231-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"9c54be36-e7cd-45db-bfe4-2c7e8d2d0231\") " pod="openstack/ceilometer-0"
Mar 10 19:11:54 crc kubenswrapper[4861]: I0310 19:11:54.679371 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9c54be36-e7cd-45db-bfe4-2c7e8d2d0231-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"9c54be36-e7cd-45db-bfe4-2c7e8d2d0231\") " pod="openstack/ceilometer-0"
Mar 10 19:11:54 crc kubenswrapper[4861]: I0310 19:11:54.679596 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9c54be36-e7cd-45db-bfe4-2c7e8d2d0231-config-data\") pod \"ceilometer-0\" (UID: \"9c54be36-e7cd-45db-bfe4-2c7e8d2d0231\") " pod="openstack/ceilometer-0"
Mar 10 19:11:54 crc kubenswrapper[4861]: I0310 19:11:54.680586 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9c54be36-e7cd-45db-bfe4-2c7e8d2d0231-scripts\") pod \"ceilometer-0\" (UID: \"9c54be36-e7cd-45db-bfe4-2c7e8d2d0231\") " pod="openstack/ceilometer-0"
Mar 10 19:11:54 crc kubenswrapper[4861]: I0310 19:11:54.694341 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4vjvw\" (UniqueName: \"kubernetes.io/projected/9c54be36-e7cd-45db-bfe4-2c7e8d2d0231-kube-api-access-4vjvw\") pod \"ceilometer-0\" (UID: \"9c54be36-e7cd-45db-bfe4-2c7e8d2d0231\") " pod="openstack/ceilometer-0"
Mar 10 19:11:54 crc kubenswrapper[4861]: I0310 19:11:54.818637 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Mar 10 19:11:54 crc kubenswrapper[4861]: I0310 19:11:54.972370 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="80dd8414-6eac-4a23-b177-8f190c059bf5" path="/var/lib/kubelet/pods/80dd8414-6eac-4a23-b177-8f190c059bf5/volumes"
Mar 10 19:11:55 crc kubenswrapper[4861]: I0310 19:11:55.337687 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Mar 10 19:11:55 crc kubenswrapper[4861]: W0310 19:11:55.341468 4861 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9c54be36_e7cd_45db_bfe4_2c7e8d2d0231.slice/crio-232d0e8cfd07e853e55e419bc0795da468cb7d80bbb031b48ce8ceeba1fb2127 WatchSource:0}: Error finding container 232d0e8cfd07e853e55e419bc0795da468cb7d80bbb031b48ce8ceeba1fb2127: Status 404 returned error can't find the container with id 232d0e8cfd07e853e55e419bc0795da468cb7d80bbb031b48ce8ceeba1fb2127
Mar 10 19:11:55 crc kubenswrapper[4861]: I0310 19:11:55.403153 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9c54be36-e7cd-45db-bfe4-2c7e8d2d0231","Type":"ContainerStarted","Data":"232d0e8cfd07e853e55e419bc0795da468cb7d80bbb031b48ce8ceeba1fb2127"}
Mar 10 19:11:56 crc kubenswrapper[4861]: I0310 19:11:56.031541 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0"
Mar 10 19:11:56 crc kubenswrapper[4861]: I0310 19:11:56.414160 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9c54be36-e7cd-45db-bfe4-2c7e8d2d0231","Type":"ContainerStarted","Data":"a7ae3230d44e1a0825133308a15b86125b4279b8f180a8fd5545d2614b897d2a"}
Mar 10 19:11:56 crc kubenswrapper[4861]: I0310 19:11:56.603969 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Mar 10 19:11:57 crc kubenswrapper[4861]: I0310 19:11:57.426889 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9c54be36-e7cd-45db-bfe4-2c7e8d2d0231","Type":"ContainerStarted","Data":"6a65abd4c73016d3a025961f61465b9db2d9ad0f6113aa9f10f82f8eb28851f1"}
Mar 10 19:11:58 crc kubenswrapper[4861]: I0310 19:11:58.447972 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9c54be36-e7cd-45db-bfe4-2c7e8d2d0231","Type":"ContainerStarted","Data":"a9b455aaa45027d333d20c288d6fb6baba7de8c51c1924d2ad9ebced61795a33"}
Mar 10 19:12:00 crc kubenswrapper[4861]: I0310 19:12:00.141971 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29552832-b64zr"]
Mar 10 19:12:00 crc kubenswrapper[4861]: I0310 19:12:00.143525 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29552832-b64zr"
Mar 10 19:12:00 crc kubenswrapper[4861]: I0310 19:12:00.145974 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt"
Mar 10 19:12:00 crc kubenswrapper[4861]: I0310 19:12:00.146937 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt"
Mar 10 19:12:00 crc kubenswrapper[4861]: I0310 19:12:00.147265 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-gfbj2"
Mar 10 19:12:00 crc kubenswrapper[4861]: I0310 19:12:00.163428 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29552832-b64zr"]
Mar 10 19:12:00 crc kubenswrapper[4861]: I0310 19:12:00.277765 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rwg5f\" (UniqueName: \"kubernetes.io/projected/85716ed5-50f3-4f75-9d6c-236dcf24e46d-kube-api-access-rwg5f\") pod \"auto-csr-approver-29552832-b64zr\" (UID: \"85716ed5-50f3-4f75-9d6c-236dcf24e46d\") " pod="openshift-infra/auto-csr-approver-29552832-b64zr"
Mar 10 19:12:00 crc kubenswrapper[4861]: I0310 19:12:00.379813 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rwg5f\" (UniqueName: \"kubernetes.io/projected/85716ed5-50f3-4f75-9d6c-236dcf24e46d-kube-api-access-rwg5f\") pod \"auto-csr-approver-29552832-b64zr\" (UID: \"85716ed5-50f3-4f75-9d6c-236dcf24e46d\") " pod="openshift-infra/auto-csr-approver-29552832-b64zr"
Mar 10 19:12:00 crc kubenswrapper[4861]: I0310 19:12:00.416576 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rwg5f\" (UniqueName: \"kubernetes.io/projected/85716ed5-50f3-4f75-9d6c-236dcf24e46d-kube-api-access-rwg5f\") pod \"auto-csr-approver-29552832-b64zr\" (UID: \"85716ed5-50f3-4f75-9d6c-236dcf24e46d\") " pod="openshift-infra/auto-csr-approver-29552832-b64zr"
Mar 10 19:12:00 crc kubenswrapper[4861]: I0310 19:12:00.473952 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29552832-b64zr"
Mar 10 19:12:00 crc kubenswrapper[4861]: I0310 19:12:00.483606 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9c54be36-e7cd-45db-bfe4-2c7e8d2d0231","Type":"ContainerStarted","Data":"d05615869bd4c85c2b97e1926a785004eff9f1b42eb13477d409be83b75e2a62"}
Mar 10 19:12:00 crc kubenswrapper[4861]: I0310 19:12:00.484514 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="9c54be36-e7cd-45db-bfe4-2c7e8d2d0231" containerName="ceilometer-central-agent" containerID="cri-o://a7ae3230d44e1a0825133308a15b86125b4279b8f180a8fd5545d2614b897d2a" gracePeriod=30
Mar 10 19:12:00 crc kubenswrapper[4861]: I0310 19:12:00.484636 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0"
Mar 10 19:12:00 crc kubenswrapper[4861]: I0310 19:12:00.484663 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="9c54be36-e7cd-45db-bfe4-2c7e8d2d0231" containerName="proxy-httpd" containerID="cri-o://d05615869bd4c85c2b97e1926a785004eff9f1b42eb13477d409be83b75e2a62" gracePeriod=30
Mar 10 19:12:00 crc kubenswrapper[4861]: I0310 19:12:00.484776 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="9c54be36-e7cd-45db-bfe4-2c7e8d2d0231" containerName="sg-core" containerID="cri-o://a9b455aaa45027d333d20c288d6fb6baba7de8c51c1924d2ad9ebced61795a33" gracePeriod=30
Mar 10 19:12:00 crc kubenswrapper[4861]: I0310 19:12:00.484854 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="9c54be36-e7cd-45db-bfe4-2c7e8d2d0231" containerName="ceilometer-notification-agent" containerID="cri-o://6a65abd4c73016d3a025961f61465b9db2d9ad0f6113aa9f10f82f8eb28851f1" gracePeriod=30
Mar 10 19:12:00 crc kubenswrapper[4861]: I0310 19:12:00.536669 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.456087821 podStartE2EDuration="6.536612751s" podCreationTimestamp="2026-03-10 19:11:54 +0000 UTC" firstStartedPulling="2026-03-10 19:11:55.343613943 +0000 UTC m=+1459.107049893" lastFinishedPulling="2026-03-10 19:11:59.424138863 +0000 UTC m=+1463.187574823" observedRunningTime="2026-03-10 19:12:00.524423733 +0000 UTC m=+1464.287859733" watchObservedRunningTime="2026-03-10 19:12:00.536612751 +0000 UTC m=+1464.300048731"
Mar 10 19:12:01 crc kubenswrapper[4861]: I0310 19:12:01.076363 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29552832-b64zr"]
Mar 10 19:12:01 crc kubenswrapper[4861]: I0310 19:12:01.498129 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29552832-b64zr" event={"ID":"85716ed5-50f3-4f75-9d6c-236dcf24e46d","Type":"ContainerStarted","Data":"816648912eb5657600dedcb950fde057774657bcdae1afc0cadfc57c0bdd50f1"}
Mar 10 19:12:01 crc kubenswrapper[4861]: I0310 19:12:01.501946 4861 generic.go:334] "Generic (PLEG): container finished" podID="9c54be36-e7cd-45db-bfe4-2c7e8d2d0231" containerID="d05615869bd4c85c2b97e1926a785004eff9f1b42eb13477d409be83b75e2a62" exitCode=0
Mar 10 19:12:01 crc kubenswrapper[4861]: I0310 19:12:01.501982 4861 generic.go:334] "Generic (PLEG): container finished" podID="9c54be36-e7cd-45db-bfe4-2c7e8d2d0231" containerID="a9b455aaa45027d333d20c288d6fb6baba7de8c51c1924d2ad9ebced61795a33" exitCode=2
Mar 10 19:12:01 crc kubenswrapper[4861]: I0310 19:12:01.501993 4861 generic.go:334] "Generic (PLEG): container finished" podID="9c54be36-e7cd-45db-bfe4-2c7e8d2d0231" containerID="6a65abd4c73016d3a025961f61465b9db2d9ad0f6113aa9f10f82f8eb28851f1" exitCode=0
Mar 10 19:12:01 crc kubenswrapper[4861]: I0310 19:12:01.502015 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9c54be36-e7cd-45db-bfe4-2c7e8d2d0231","Type":"ContainerDied","Data":"d05615869bd4c85c2b97e1926a785004eff9f1b42eb13477d409be83b75e2a62"}
Mar 10 19:12:01 crc kubenswrapper[4861]: I0310 19:12:01.502045 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9c54be36-e7cd-45db-bfe4-2c7e8d2d0231","Type":"ContainerDied","Data":"a9b455aaa45027d333d20c288d6fb6baba7de8c51c1924d2ad9ebced61795a33"}
Mar 10 19:12:01 crc kubenswrapper[4861]: I0310 19:12:01.502060 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9c54be36-e7cd-45db-bfe4-2c7e8d2d0231","Type":"ContainerDied","Data":"6a65abd4c73016d3a025961f61465b9db2d9ad0f6113aa9f10f82f8eb28851f1"}
Mar 10 19:12:02 crc kubenswrapper[4861]: I0310 19:12:02.515290 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29552832-b64zr" event={"ID":"85716ed5-50f3-4f75-9d6c-236dcf24e46d","Type":"ContainerStarted","Data":"28badc523489d1234ded439c2f68701ee34d03e427f5b474008ca60684fe6e7c"}
Mar 10 19:12:02 crc kubenswrapper[4861]: I0310 19:12:02.531619 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29552832-b64zr" podStartSLOduration=1.652896124 podStartE2EDuration="2.531607008s" podCreationTimestamp="2026-03-10 19:12:00 +0000 UTC" firstStartedPulling="2026-03-10 19:12:01.086666963 +0000 UTC m=+1464.850102923" lastFinishedPulling="2026-03-10 19:12:01.965377837 +0000 UTC m=+1465.728813807" observedRunningTime="2026-03-10 19:12:02.528688797 +0000 UTC m=+1466.292124757" watchObservedRunningTime="2026-03-10 19:12:02.531607008 +0000 UTC m=+1466.295042968"
Mar 10 19:12:03 crc kubenswrapper[4861]: I0310 19:12:03.527054 4861 generic.go:334] "Generic (PLEG): container finished" podID="85716ed5-50f3-4f75-9d6c-236dcf24e46d" containerID="28badc523489d1234ded439c2f68701ee34d03e427f5b474008ca60684fe6e7c" exitCode=0
Mar 10 19:12:03 crc kubenswrapper[4861]: I0310 19:12:03.527144 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29552832-b64zr" event={"ID":"85716ed5-50f3-4f75-9d6c-236dcf24e46d","Type":"ContainerDied","Data":"28badc523489d1234ded439c2f68701ee34d03e427f5b474008ca60684fe6e7c"}
Mar 10 19:12:03 crc kubenswrapper[4861]: I0310 19:12:03.529119 4861 generic.go:334] "Generic (PLEG): container finished" podID="e61c59b6-f849-406f-8680-cb83de220b46" containerID="6d987ece4dd051bd98e8361217e86ae2fa65c0960145192eb7173b18775b49a5" exitCode=0
Mar 10 19:12:03 crc kubenswrapper[4861]: I0310 19:12:03.529143 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-m77bt" event={"ID":"e61c59b6-f849-406f-8680-cb83de220b46","Type":"ContainerDied","Data":"6d987ece4dd051bd98e8361217e86ae2fa65c0960145192eb7173b18775b49a5"}
Mar 10 19:12:04 crc kubenswrapper[4861]: I0310 19:12:04.175059 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Mar 10 19:12:04 crc kubenswrapper[4861]: I0310 19:12:04.253979 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9c54be36-e7cd-45db-bfe4-2c7e8d2d0231-run-httpd\") pod \"9c54be36-e7cd-45db-bfe4-2c7e8d2d0231\" (UID: \"9c54be36-e7cd-45db-bfe4-2c7e8d2d0231\") "
Mar 10 19:12:04 crc kubenswrapper[4861]: I0310 19:12:04.254217 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9c54be36-e7cd-45db-bfe4-2c7e8d2d0231-combined-ca-bundle\") pod \"9c54be36-e7cd-45db-bfe4-2c7e8d2d0231\" (UID: \"9c54be36-e7cd-45db-bfe4-2c7e8d2d0231\") "
Mar 10 19:12:04 crc kubenswrapper[4861]: I0310 19:12:04.254270 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9c54be36-e7cd-45db-bfe4-2c7e8d2d0231-scripts\") pod \"9c54be36-e7cd-45db-bfe4-2c7e8d2d0231\" (UID: \"9c54be36-e7cd-45db-bfe4-2c7e8d2d0231\") "
Mar 10 19:12:04 crc kubenswrapper[4861]: I0310 19:12:04.254312 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9c54be36-e7cd-45db-bfe4-2c7e8d2d0231-sg-core-conf-yaml\") pod \"9c54be36-e7cd-45db-bfe4-2c7e8d2d0231\" (UID: \"9c54be36-e7cd-45db-bfe4-2c7e8d2d0231\") "
Mar 10 19:12:04 crc kubenswrapper[4861]: I0310 19:12:04.254351 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9c54be36-e7cd-45db-bfe4-2c7e8d2d0231-log-httpd\") pod \"9c54be36-e7cd-45db-bfe4-2c7e8d2d0231\" (UID: \"9c54be36-e7cd-45db-bfe4-2c7e8d2d0231\") "
Mar 10 19:12:04 crc kubenswrapper[4861]: I0310 19:12:04.254379 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4vjvw\" (UniqueName: \"kubernetes.io/projected/9c54be36-e7cd-45db-bfe4-2c7e8d2d0231-kube-api-access-4vjvw\") pod \"9c54be36-e7cd-45db-bfe4-2c7e8d2d0231\" (UID: \"9c54be36-e7cd-45db-bfe4-2c7e8d2d0231\") "
Mar 10 19:12:04 crc kubenswrapper[4861]: I0310 19:12:04.254407 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9c54be36-e7cd-45db-bfe4-2c7e8d2d0231-config-data\") pod \"9c54be36-e7cd-45db-bfe4-2c7e8d2d0231\" (UID: \"9c54be36-e7cd-45db-bfe4-2c7e8d2d0231\") "
Mar 10 19:12:04 crc kubenswrapper[4861]: I0310 19:12:04.257058 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9c54be36-e7cd-45db-bfe4-2c7e8d2d0231-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "9c54be36-e7cd-45db-bfe4-2c7e8d2d0231" (UID: "9c54be36-e7cd-45db-bfe4-2c7e8d2d0231"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 10 19:12:04 crc kubenswrapper[4861]: I0310 19:12:04.265988 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9c54be36-e7cd-45db-bfe4-2c7e8d2d0231-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "9c54be36-e7cd-45db-bfe4-2c7e8d2d0231" (UID: "9c54be36-e7cd-45db-bfe4-2c7e8d2d0231"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 10 19:12:04 crc kubenswrapper[4861]: I0310 19:12:04.267350 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9c54be36-e7cd-45db-bfe4-2c7e8d2d0231-scripts" (OuterVolumeSpecName: "scripts") pod "9c54be36-e7cd-45db-bfe4-2c7e8d2d0231" (UID: "9c54be36-e7cd-45db-bfe4-2c7e8d2d0231"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 10 19:12:04 crc kubenswrapper[4861]: I0310 19:12:04.285852 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9c54be36-e7cd-45db-bfe4-2c7e8d2d0231-kube-api-access-4vjvw" (OuterVolumeSpecName: "kube-api-access-4vjvw") pod "9c54be36-e7cd-45db-bfe4-2c7e8d2d0231" (UID: "9c54be36-e7cd-45db-bfe4-2c7e8d2d0231"). InnerVolumeSpecName "kube-api-access-4vjvw". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 10 19:12:04 crc kubenswrapper[4861]: I0310 19:12:04.310887 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9c54be36-e7cd-45db-bfe4-2c7e8d2d0231-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "9c54be36-e7cd-45db-bfe4-2c7e8d2d0231" (UID: "9c54be36-e7cd-45db-bfe4-2c7e8d2d0231"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 10 19:12:04 crc kubenswrapper[4861]: I0310 19:12:04.356536 4861 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9c54be36-e7cd-45db-bfe4-2c7e8d2d0231-scripts\") on node \"crc\" DevicePath \"\""
Mar 10 19:12:04 crc kubenswrapper[4861]: I0310 19:12:04.356569 4861 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9c54be36-e7cd-45db-bfe4-2c7e8d2d0231-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\""
Mar 10 19:12:04 crc kubenswrapper[4861]: I0310 19:12:04.356579 4861 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9c54be36-e7cd-45db-bfe4-2c7e8d2d0231-log-httpd\") on node \"crc\" DevicePath \"\""
Mar 10 19:12:04 crc kubenswrapper[4861]: I0310 19:12:04.356590 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4vjvw\" (UniqueName: \"kubernetes.io/projected/9c54be36-e7cd-45db-bfe4-2c7e8d2d0231-kube-api-access-4vjvw\") on node \"crc\" DevicePath \"\""
Mar 10 19:12:04 crc kubenswrapper[4861]: I0310 19:12:04.356599 4861 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9c54be36-e7cd-45db-bfe4-2c7e8d2d0231-run-httpd\") on node \"crc\" DevicePath \"\""
Mar 10 19:12:04 crc kubenswrapper[4861]: I0310 19:12:04.365235 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9c54be36-e7cd-45db-bfe4-2c7e8d2d0231-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9c54be36-e7cd-45db-bfe4-2c7e8d2d0231" (UID: "9c54be36-e7cd-45db-bfe4-2c7e8d2d0231"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 10 19:12:04 crc kubenswrapper[4861]: I0310 19:12:04.404032 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9c54be36-e7cd-45db-bfe4-2c7e8d2d0231-config-data" (OuterVolumeSpecName: "config-data") pod "9c54be36-e7cd-45db-bfe4-2c7e8d2d0231" (UID: "9c54be36-e7cd-45db-bfe4-2c7e8d2d0231"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 10 19:12:04 crc kubenswrapper[4861]: I0310 19:12:04.459368 4861 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9c54be36-e7cd-45db-bfe4-2c7e8d2d0231-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 10 19:12:04 crc kubenswrapper[4861]: I0310 19:12:04.459634 4861 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9c54be36-e7cd-45db-bfe4-2c7e8d2d0231-config-data\") on node \"crc\" DevicePath \"\"" Mar 10 19:12:04 crc kubenswrapper[4861]: I0310 19:12:04.542726 4861 generic.go:334] "Generic (PLEG): container finished" podID="9c54be36-e7cd-45db-bfe4-2c7e8d2d0231" containerID="a7ae3230d44e1a0825133308a15b86125b4279b8f180a8fd5545d2614b897d2a" exitCode=0 Mar 10 19:12:04 crc kubenswrapper[4861]: I0310 19:12:04.542918 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Mar 10 19:12:04 crc kubenswrapper[4861]: I0310 19:12:04.545772 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9c54be36-e7cd-45db-bfe4-2c7e8d2d0231","Type":"ContainerDied","Data":"a7ae3230d44e1a0825133308a15b86125b4279b8f180a8fd5545d2614b897d2a"} Mar 10 19:12:04 crc kubenswrapper[4861]: I0310 19:12:04.545822 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9c54be36-e7cd-45db-bfe4-2c7e8d2d0231","Type":"ContainerDied","Data":"232d0e8cfd07e853e55e419bc0795da468cb7d80bbb031b48ce8ceeba1fb2127"} Mar 10 19:12:04 crc kubenswrapper[4861]: I0310 19:12:04.545840 4861 scope.go:117] "RemoveContainer" containerID="d05615869bd4c85c2b97e1926a785004eff9f1b42eb13477d409be83b75e2a62" Mar 10 19:12:04 crc kubenswrapper[4861]: I0310 19:12:04.581672 4861 scope.go:117] "RemoveContainer" containerID="a9b455aaa45027d333d20c288d6fb6baba7de8c51c1924d2ad9ebced61795a33" Mar 10 19:12:04 crc 
kubenswrapper[4861]: I0310 19:12:04.593431 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Mar 10 19:12:04 crc kubenswrapper[4861]: I0310 19:12:04.609109 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Mar 10 19:12:04 crc kubenswrapper[4861]: I0310 19:12:04.627380 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Mar 10 19:12:04 crc kubenswrapper[4861]: E0310 19:12:04.627762 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9c54be36-e7cd-45db-bfe4-2c7e8d2d0231" containerName="proxy-httpd" Mar 10 19:12:04 crc kubenswrapper[4861]: I0310 19:12:04.627775 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="9c54be36-e7cd-45db-bfe4-2c7e8d2d0231" containerName="proxy-httpd" Mar 10 19:12:04 crc kubenswrapper[4861]: E0310 19:12:04.627783 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9c54be36-e7cd-45db-bfe4-2c7e8d2d0231" containerName="sg-core" Mar 10 19:12:04 crc kubenswrapper[4861]: I0310 19:12:04.627789 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="9c54be36-e7cd-45db-bfe4-2c7e8d2d0231" containerName="sg-core" Mar 10 19:12:04 crc kubenswrapper[4861]: E0310 19:12:04.627819 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9c54be36-e7cd-45db-bfe4-2c7e8d2d0231" containerName="ceilometer-notification-agent" Mar 10 19:12:04 crc kubenswrapper[4861]: I0310 19:12:04.627825 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="9c54be36-e7cd-45db-bfe4-2c7e8d2d0231" containerName="ceilometer-notification-agent" Mar 10 19:12:04 crc kubenswrapper[4861]: E0310 19:12:04.627837 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9c54be36-e7cd-45db-bfe4-2c7e8d2d0231" containerName="ceilometer-central-agent" Mar 10 19:12:04 crc kubenswrapper[4861]: I0310 19:12:04.627842 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="9c54be36-e7cd-45db-bfe4-2c7e8d2d0231" 
containerName="ceilometer-central-agent" Mar 10 19:12:04 crc kubenswrapper[4861]: I0310 19:12:04.628018 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="9c54be36-e7cd-45db-bfe4-2c7e8d2d0231" containerName="ceilometer-notification-agent" Mar 10 19:12:04 crc kubenswrapper[4861]: I0310 19:12:04.628041 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="9c54be36-e7cd-45db-bfe4-2c7e8d2d0231" containerName="proxy-httpd" Mar 10 19:12:04 crc kubenswrapper[4861]: I0310 19:12:04.628049 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="9c54be36-e7cd-45db-bfe4-2c7e8d2d0231" containerName="ceilometer-central-agent" Mar 10 19:12:04 crc kubenswrapper[4861]: I0310 19:12:04.628060 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="9c54be36-e7cd-45db-bfe4-2c7e8d2d0231" containerName="sg-core" Mar 10 19:12:04 crc kubenswrapper[4861]: I0310 19:12:04.629618 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Mar 10 19:12:04 crc kubenswrapper[4861]: I0310 19:12:04.635308 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Mar 10 19:12:04 crc kubenswrapper[4861]: I0310 19:12:04.635534 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Mar 10 19:12:04 crc kubenswrapper[4861]: I0310 19:12:04.654428 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Mar 10 19:12:04 crc kubenswrapper[4861]: I0310 19:12:04.656198 4861 scope.go:117] "RemoveContainer" containerID="6a65abd4c73016d3a025961f61465b9db2d9ad0f6113aa9f10f82f8eb28851f1" Mar 10 19:12:04 crc kubenswrapper[4861]: I0310 19:12:04.681404 4861 scope.go:117] "RemoveContainer" containerID="a7ae3230d44e1a0825133308a15b86125b4279b8f180a8fd5545d2614b897d2a" Mar 10 19:12:04 crc kubenswrapper[4861]: I0310 19:12:04.699630 4861 scope.go:117] "RemoveContainer" 
containerID="d05615869bd4c85c2b97e1926a785004eff9f1b42eb13477d409be83b75e2a62" Mar 10 19:12:04 crc kubenswrapper[4861]: E0310 19:12:04.704178 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d05615869bd4c85c2b97e1926a785004eff9f1b42eb13477d409be83b75e2a62\": container with ID starting with d05615869bd4c85c2b97e1926a785004eff9f1b42eb13477d409be83b75e2a62 not found: ID does not exist" containerID="d05615869bd4c85c2b97e1926a785004eff9f1b42eb13477d409be83b75e2a62" Mar 10 19:12:04 crc kubenswrapper[4861]: I0310 19:12:04.704220 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d05615869bd4c85c2b97e1926a785004eff9f1b42eb13477d409be83b75e2a62"} err="failed to get container status \"d05615869bd4c85c2b97e1926a785004eff9f1b42eb13477d409be83b75e2a62\": rpc error: code = NotFound desc = could not find container \"d05615869bd4c85c2b97e1926a785004eff9f1b42eb13477d409be83b75e2a62\": container with ID starting with d05615869bd4c85c2b97e1926a785004eff9f1b42eb13477d409be83b75e2a62 not found: ID does not exist" Mar 10 19:12:04 crc kubenswrapper[4861]: I0310 19:12:04.704245 4861 scope.go:117] "RemoveContainer" containerID="a9b455aaa45027d333d20c288d6fb6baba7de8c51c1924d2ad9ebced61795a33" Mar 10 19:12:04 crc kubenswrapper[4861]: E0310 19:12:04.704597 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a9b455aaa45027d333d20c288d6fb6baba7de8c51c1924d2ad9ebced61795a33\": container with ID starting with a9b455aaa45027d333d20c288d6fb6baba7de8c51c1924d2ad9ebced61795a33 not found: ID does not exist" containerID="a9b455aaa45027d333d20c288d6fb6baba7de8c51c1924d2ad9ebced61795a33" Mar 10 19:12:04 crc kubenswrapper[4861]: I0310 19:12:04.704624 4861 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"a9b455aaa45027d333d20c288d6fb6baba7de8c51c1924d2ad9ebced61795a33"} err="failed to get container status \"a9b455aaa45027d333d20c288d6fb6baba7de8c51c1924d2ad9ebced61795a33\": rpc error: code = NotFound desc = could not find container \"a9b455aaa45027d333d20c288d6fb6baba7de8c51c1924d2ad9ebced61795a33\": container with ID starting with a9b455aaa45027d333d20c288d6fb6baba7de8c51c1924d2ad9ebced61795a33 not found: ID does not exist" Mar 10 19:12:04 crc kubenswrapper[4861]: I0310 19:12:04.704636 4861 scope.go:117] "RemoveContainer" containerID="6a65abd4c73016d3a025961f61465b9db2d9ad0f6113aa9f10f82f8eb28851f1" Mar 10 19:12:04 crc kubenswrapper[4861]: E0310 19:12:04.704921 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6a65abd4c73016d3a025961f61465b9db2d9ad0f6113aa9f10f82f8eb28851f1\": container with ID starting with 6a65abd4c73016d3a025961f61465b9db2d9ad0f6113aa9f10f82f8eb28851f1 not found: ID does not exist" containerID="6a65abd4c73016d3a025961f61465b9db2d9ad0f6113aa9f10f82f8eb28851f1" Mar 10 19:12:04 crc kubenswrapper[4861]: I0310 19:12:04.704965 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6a65abd4c73016d3a025961f61465b9db2d9ad0f6113aa9f10f82f8eb28851f1"} err="failed to get container status \"6a65abd4c73016d3a025961f61465b9db2d9ad0f6113aa9f10f82f8eb28851f1\": rpc error: code = NotFound desc = could not find container \"6a65abd4c73016d3a025961f61465b9db2d9ad0f6113aa9f10f82f8eb28851f1\": container with ID starting with 6a65abd4c73016d3a025961f61465b9db2d9ad0f6113aa9f10f82f8eb28851f1 not found: ID does not exist" Mar 10 19:12:04 crc kubenswrapper[4861]: I0310 19:12:04.704995 4861 scope.go:117] "RemoveContainer" containerID="a7ae3230d44e1a0825133308a15b86125b4279b8f180a8fd5545d2614b897d2a" Mar 10 19:12:04 crc kubenswrapper[4861]: E0310 19:12:04.705324 4861 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"a7ae3230d44e1a0825133308a15b86125b4279b8f180a8fd5545d2614b897d2a\": container with ID starting with a7ae3230d44e1a0825133308a15b86125b4279b8f180a8fd5545d2614b897d2a not found: ID does not exist" containerID="a7ae3230d44e1a0825133308a15b86125b4279b8f180a8fd5545d2614b897d2a" Mar 10 19:12:04 crc kubenswrapper[4861]: I0310 19:12:04.705356 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a7ae3230d44e1a0825133308a15b86125b4279b8f180a8fd5545d2614b897d2a"} err="failed to get container status \"a7ae3230d44e1a0825133308a15b86125b4279b8f180a8fd5545d2614b897d2a\": rpc error: code = NotFound desc = could not find container \"a7ae3230d44e1a0825133308a15b86125b4279b8f180a8fd5545d2614b897d2a\": container with ID starting with a7ae3230d44e1a0825133308a15b86125b4279b8f180a8fd5545d2614b897d2a not found: ID does not exist" Mar 10 19:12:04 crc kubenswrapper[4861]: I0310 19:12:04.766442 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dkds8\" (UniqueName: \"kubernetes.io/projected/b5a6522b-3ccb-4d9b-b48d-dce8c34f3eab-kube-api-access-dkds8\") pod \"ceilometer-0\" (UID: \"b5a6522b-3ccb-4d9b-b48d-dce8c34f3eab\") " pod="openstack/ceilometer-0" Mar 10 19:12:04 crc kubenswrapper[4861]: I0310 19:12:04.766485 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b5a6522b-3ccb-4d9b-b48d-dce8c34f3eab-scripts\") pod \"ceilometer-0\" (UID: \"b5a6522b-3ccb-4d9b-b48d-dce8c34f3eab\") " pod="openstack/ceilometer-0" Mar 10 19:12:04 crc kubenswrapper[4861]: I0310 19:12:04.766558 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b5a6522b-3ccb-4d9b-b48d-dce8c34f3eab-combined-ca-bundle\") pod \"ceilometer-0\" (UID: 
\"b5a6522b-3ccb-4d9b-b48d-dce8c34f3eab\") " pod="openstack/ceilometer-0" Mar 10 19:12:04 crc kubenswrapper[4861]: I0310 19:12:04.766582 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b5a6522b-3ccb-4d9b-b48d-dce8c34f3eab-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"b5a6522b-3ccb-4d9b-b48d-dce8c34f3eab\") " pod="openstack/ceilometer-0" Mar 10 19:12:04 crc kubenswrapper[4861]: I0310 19:12:04.766634 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b5a6522b-3ccb-4d9b-b48d-dce8c34f3eab-config-data\") pod \"ceilometer-0\" (UID: \"b5a6522b-3ccb-4d9b-b48d-dce8c34f3eab\") " pod="openstack/ceilometer-0" Mar 10 19:12:04 crc kubenswrapper[4861]: I0310 19:12:04.766659 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b5a6522b-3ccb-4d9b-b48d-dce8c34f3eab-run-httpd\") pod \"ceilometer-0\" (UID: \"b5a6522b-3ccb-4d9b-b48d-dce8c34f3eab\") " pod="openstack/ceilometer-0" Mar 10 19:12:04 crc kubenswrapper[4861]: I0310 19:12:04.766799 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b5a6522b-3ccb-4d9b-b48d-dce8c34f3eab-log-httpd\") pod \"ceilometer-0\" (UID: \"b5a6522b-3ccb-4d9b-b48d-dce8c34f3eab\") " pod="openstack/ceilometer-0" Mar 10 19:12:04 crc kubenswrapper[4861]: I0310 19:12:04.867896 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b5a6522b-3ccb-4d9b-b48d-dce8c34f3eab-config-data\") pod \"ceilometer-0\" (UID: \"b5a6522b-3ccb-4d9b-b48d-dce8c34f3eab\") " pod="openstack/ceilometer-0" Mar 10 19:12:04 crc kubenswrapper[4861]: I0310 19:12:04.867941 4861 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b5a6522b-3ccb-4d9b-b48d-dce8c34f3eab-run-httpd\") pod \"ceilometer-0\" (UID: \"b5a6522b-3ccb-4d9b-b48d-dce8c34f3eab\") " pod="openstack/ceilometer-0" Mar 10 19:12:04 crc kubenswrapper[4861]: I0310 19:12:04.867960 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b5a6522b-3ccb-4d9b-b48d-dce8c34f3eab-log-httpd\") pod \"ceilometer-0\" (UID: \"b5a6522b-3ccb-4d9b-b48d-dce8c34f3eab\") " pod="openstack/ceilometer-0" Mar 10 19:12:04 crc kubenswrapper[4861]: I0310 19:12:04.868000 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dkds8\" (UniqueName: \"kubernetes.io/projected/b5a6522b-3ccb-4d9b-b48d-dce8c34f3eab-kube-api-access-dkds8\") pod \"ceilometer-0\" (UID: \"b5a6522b-3ccb-4d9b-b48d-dce8c34f3eab\") " pod="openstack/ceilometer-0" Mar 10 19:12:04 crc kubenswrapper[4861]: I0310 19:12:04.868019 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b5a6522b-3ccb-4d9b-b48d-dce8c34f3eab-scripts\") pod \"ceilometer-0\" (UID: \"b5a6522b-3ccb-4d9b-b48d-dce8c34f3eab\") " pod="openstack/ceilometer-0" Mar 10 19:12:04 crc kubenswrapper[4861]: I0310 19:12:04.868079 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b5a6522b-3ccb-4d9b-b48d-dce8c34f3eab-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"b5a6522b-3ccb-4d9b-b48d-dce8c34f3eab\") " pod="openstack/ceilometer-0" Mar 10 19:12:04 crc kubenswrapper[4861]: I0310 19:12:04.868104 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b5a6522b-3ccb-4d9b-b48d-dce8c34f3eab-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: 
\"b5a6522b-3ccb-4d9b-b48d-dce8c34f3eab\") " pod="openstack/ceilometer-0" Mar 10 19:12:04 crc kubenswrapper[4861]: I0310 19:12:04.869088 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b5a6522b-3ccb-4d9b-b48d-dce8c34f3eab-log-httpd\") pod \"ceilometer-0\" (UID: \"b5a6522b-3ccb-4d9b-b48d-dce8c34f3eab\") " pod="openstack/ceilometer-0" Mar 10 19:12:04 crc kubenswrapper[4861]: I0310 19:12:04.869106 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b5a6522b-3ccb-4d9b-b48d-dce8c34f3eab-run-httpd\") pod \"ceilometer-0\" (UID: \"b5a6522b-3ccb-4d9b-b48d-dce8c34f3eab\") " pod="openstack/ceilometer-0" Mar 10 19:12:04 crc kubenswrapper[4861]: I0310 19:12:04.883944 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b5a6522b-3ccb-4d9b-b48d-dce8c34f3eab-config-data\") pod \"ceilometer-0\" (UID: \"b5a6522b-3ccb-4d9b-b48d-dce8c34f3eab\") " pod="openstack/ceilometer-0" Mar 10 19:12:04 crc kubenswrapper[4861]: I0310 19:12:04.884399 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b5a6522b-3ccb-4d9b-b48d-dce8c34f3eab-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"b5a6522b-3ccb-4d9b-b48d-dce8c34f3eab\") " pod="openstack/ceilometer-0" Mar 10 19:12:04 crc kubenswrapper[4861]: I0310 19:12:04.884545 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b5a6522b-3ccb-4d9b-b48d-dce8c34f3eab-scripts\") pod \"ceilometer-0\" (UID: \"b5a6522b-3ccb-4d9b-b48d-dce8c34f3eab\") " pod="openstack/ceilometer-0" Mar 10 19:12:04 crc kubenswrapper[4861]: I0310 19:12:04.888417 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dkds8\" (UniqueName: 
\"kubernetes.io/projected/b5a6522b-3ccb-4d9b-b48d-dce8c34f3eab-kube-api-access-dkds8\") pod \"ceilometer-0\" (UID: \"b5a6522b-3ccb-4d9b-b48d-dce8c34f3eab\") " pod="openstack/ceilometer-0" Mar 10 19:12:04 crc kubenswrapper[4861]: I0310 19:12:04.889787 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b5a6522b-3ccb-4d9b-b48d-dce8c34f3eab-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"b5a6522b-3ccb-4d9b-b48d-dce8c34f3eab\") " pod="openstack/ceilometer-0" Mar 10 19:12:04 crc kubenswrapper[4861]: I0310 19:12:04.948380 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Mar 10 19:12:04 crc kubenswrapper[4861]: I0310 19:12:04.976435 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9c54be36-e7cd-45db-bfe4-2c7e8d2d0231" path="/var/lib/kubelet/pods/9c54be36-e7cd-45db-bfe4-2c7e8d2d0231/volumes" Mar 10 19:12:05 crc kubenswrapper[4861]: I0310 19:12:05.053207 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-m77bt" Mar 10 19:12:05 crc kubenswrapper[4861]: I0310 19:12:05.059875 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29552832-b64zr" Mar 10 19:12:05 crc kubenswrapper[4861]: I0310 19:12:05.172463 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rwg5f\" (UniqueName: \"kubernetes.io/projected/85716ed5-50f3-4f75-9d6c-236dcf24e46d-kube-api-access-rwg5f\") pod \"85716ed5-50f3-4f75-9d6c-236dcf24e46d\" (UID: \"85716ed5-50f3-4f75-9d6c-236dcf24e46d\") " Mar 10 19:12:05 crc kubenswrapper[4861]: I0310 19:12:05.172605 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6qpvd\" (UniqueName: \"kubernetes.io/projected/e61c59b6-f849-406f-8680-cb83de220b46-kube-api-access-6qpvd\") pod \"e61c59b6-f849-406f-8680-cb83de220b46\" (UID: \"e61c59b6-f849-406f-8680-cb83de220b46\") " Mar 10 19:12:05 crc kubenswrapper[4861]: I0310 19:12:05.173180 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e61c59b6-f849-406f-8680-cb83de220b46-scripts\") pod \"e61c59b6-f849-406f-8680-cb83de220b46\" (UID: \"e61c59b6-f849-406f-8680-cb83de220b46\") " Mar 10 19:12:05 crc kubenswrapper[4861]: I0310 19:12:05.173232 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e61c59b6-f849-406f-8680-cb83de220b46-config-data\") pod \"e61c59b6-f849-406f-8680-cb83de220b46\" (UID: \"e61c59b6-f849-406f-8680-cb83de220b46\") " Mar 10 19:12:05 crc kubenswrapper[4861]: I0310 19:12:05.173275 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e61c59b6-f849-406f-8680-cb83de220b46-combined-ca-bundle\") pod \"e61c59b6-f849-406f-8680-cb83de220b46\" (UID: \"e61c59b6-f849-406f-8680-cb83de220b46\") " Mar 10 19:12:05 crc kubenswrapper[4861]: I0310 19:12:05.177740 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/85716ed5-50f3-4f75-9d6c-236dcf24e46d-kube-api-access-rwg5f" (OuterVolumeSpecName: "kube-api-access-rwg5f") pod "85716ed5-50f3-4f75-9d6c-236dcf24e46d" (UID: "85716ed5-50f3-4f75-9d6c-236dcf24e46d"). InnerVolumeSpecName "kube-api-access-rwg5f". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 10 19:12:05 crc kubenswrapper[4861]: I0310 19:12:05.178314 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e61c59b6-f849-406f-8680-cb83de220b46-kube-api-access-6qpvd" (OuterVolumeSpecName: "kube-api-access-6qpvd") pod "e61c59b6-f849-406f-8680-cb83de220b46" (UID: "e61c59b6-f849-406f-8680-cb83de220b46"). InnerVolumeSpecName "kube-api-access-6qpvd". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 10 19:12:05 crc kubenswrapper[4861]: I0310 19:12:05.178975 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e61c59b6-f849-406f-8680-cb83de220b46-scripts" (OuterVolumeSpecName: "scripts") pod "e61c59b6-f849-406f-8680-cb83de220b46" (UID: "e61c59b6-f849-406f-8680-cb83de220b46"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 10 19:12:05 crc kubenswrapper[4861]: I0310 19:12:05.202923 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e61c59b6-f849-406f-8680-cb83de220b46-config-data" (OuterVolumeSpecName: "config-data") pod "e61c59b6-f849-406f-8680-cb83de220b46" (UID: "e61c59b6-f849-406f-8680-cb83de220b46"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 10 19:12:05 crc kubenswrapper[4861]: I0310 19:12:05.202971 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e61c59b6-f849-406f-8680-cb83de220b46-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e61c59b6-f849-406f-8680-cb83de220b46" (UID: "e61c59b6-f849-406f-8680-cb83de220b46"). 
InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 10 19:12:05 crc kubenswrapper[4861]: I0310 19:12:05.275376 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rwg5f\" (UniqueName: \"kubernetes.io/projected/85716ed5-50f3-4f75-9d6c-236dcf24e46d-kube-api-access-rwg5f\") on node \"crc\" DevicePath \"\"" Mar 10 19:12:05 crc kubenswrapper[4861]: I0310 19:12:05.275418 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6qpvd\" (UniqueName: \"kubernetes.io/projected/e61c59b6-f849-406f-8680-cb83de220b46-kube-api-access-6qpvd\") on node \"crc\" DevicePath \"\"" Mar 10 19:12:05 crc kubenswrapper[4861]: I0310 19:12:05.275430 4861 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e61c59b6-f849-406f-8680-cb83de220b46-scripts\") on node \"crc\" DevicePath \"\"" Mar 10 19:12:05 crc kubenswrapper[4861]: I0310 19:12:05.275442 4861 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e61c59b6-f849-406f-8680-cb83de220b46-config-data\") on node \"crc\" DevicePath \"\"" Mar 10 19:12:05 crc kubenswrapper[4861]: I0310 19:12:05.275455 4861 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e61c59b6-f849-406f-8680-cb83de220b46-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 10 19:12:05 crc kubenswrapper[4861]: I0310 19:12:05.443469 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Mar 10 19:12:05 crc kubenswrapper[4861]: W0310 19:12:05.451036 4861 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb5a6522b_3ccb_4d9b_b48d_dce8c34f3eab.slice/crio-a5210cbe5dbf1cc1e370c6f7c103876128db9be174e4611f14dcedafb4348b09 WatchSource:0}: Error finding container 
a5210cbe5dbf1cc1e370c6f7c103876128db9be174e4611f14dcedafb4348b09: Status 404 returned error can't find the container with id a5210cbe5dbf1cc1e370c6f7c103876128db9be174e4611f14dcedafb4348b09 Mar 10 19:12:05 crc kubenswrapper[4861]: I0310 19:12:05.559599 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29552832-b64zr" Mar 10 19:12:05 crc kubenswrapper[4861]: I0310 19:12:05.559775 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29552832-b64zr" event={"ID":"85716ed5-50f3-4f75-9d6c-236dcf24e46d","Type":"ContainerDied","Data":"816648912eb5657600dedcb950fde057774657bcdae1afc0cadfc57c0bdd50f1"} Mar 10 19:12:05 crc kubenswrapper[4861]: I0310 19:12:05.559821 4861 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="816648912eb5657600dedcb950fde057774657bcdae1afc0cadfc57c0bdd50f1" Mar 10 19:12:05 crc kubenswrapper[4861]: I0310 19:12:05.562820 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-m77bt" event={"ID":"e61c59b6-f849-406f-8680-cb83de220b46","Type":"ContainerDied","Data":"4603eaff9c36db9933e979814f624e402b22beb024bdc63c54e614558fe6a747"} Mar 10 19:12:05 crc kubenswrapper[4861]: I0310 19:12:05.562870 4861 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4603eaff9c36db9933e979814f624e402b22beb024bdc63c54e614558fe6a747" Mar 10 19:12:05 crc kubenswrapper[4861]: I0310 19:12:05.562924 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-m77bt" Mar 10 19:12:05 crc kubenswrapper[4861]: I0310 19:12:05.565142 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b5a6522b-3ccb-4d9b-b48d-dce8c34f3eab","Type":"ContainerStarted","Data":"a5210cbe5dbf1cc1e370c6f7c103876128db9be174e4611f14dcedafb4348b09"} Mar 10 19:12:05 crc kubenswrapper[4861]: I0310 19:12:05.631174 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29552826-tw76q"] Mar 10 19:12:05 crc kubenswrapper[4861]: I0310 19:12:05.643904 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29552826-tw76q"] Mar 10 19:12:05 crc kubenswrapper[4861]: I0310 19:12:05.680313 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Mar 10 19:12:05 crc kubenswrapper[4861]: E0310 19:12:05.680647 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="85716ed5-50f3-4f75-9d6c-236dcf24e46d" containerName="oc" Mar 10 19:12:05 crc kubenswrapper[4861]: I0310 19:12:05.680664 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="85716ed5-50f3-4f75-9d6c-236dcf24e46d" containerName="oc" Mar 10 19:12:05 crc kubenswrapper[4861]: E0310 19:12:05.680679 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e61c59b6-f849-406f-8680-cb83de220b46" containerName="nova-cell0-conductor-db-sync" Mar 10 19:12:05 crc kubenswrapper[4861]: I0310 19:12:05.680686 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="e61c59b6-f849-406f-8680-cb83de220b46" containerName="nova-cell0-conductor-db-sync" Mar 10 19:12:05 crc kubenswrapper[4861]: I0310 19:12:05.680898 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="85716ed5-50f3-4f75-9d6c-236dcf24e46d" containerName="oc" Mar 10 19:12:05 crc kubenswrapper[4861]: I0310 19:12:05.680921 4861 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="e61c59b6-f849-406f-8680-cb83de220b46" containerName="nova-cell0-conductor-db-sync" Mar 10 19:12:05 crc kubenswrapper[4861]: I0310 19:12:05.681483 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Mar 10 19:12:05 crc kubenswrapper[4861]: I0310 19:12:05.684189 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-bgl7k" Mar 10 19:12:05 crc kubenswrapper[4861]: I0310 19:12:05.684353 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Mar 10 19:12:05 crc kubenswrapper[4861]: I0310 19:12:05.716915 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Mar 10 19:12:05 crc kubenswrapper[4861]: I0310 19:12:05.783689 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9f3ba611-9f83-40a6-9282-d4e3b0ccfbfd-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"9f3ba611-9f83-40a6-9282-d4e3b0ccfbfd\") " pod="openstack/nova-cell0-conductor-0" Mar 10 19:12:05 crc kubenswrapper[4861]: I0310 19:12:05.784295 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9f3ba611-9f83-40a6-9282-d4e3b0ccfbfd-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"9f3ba611-9f83-40a6-9282-d4e3b0ccfbfd\") " pod="openstack/nova-cell0-conductor-0" Mar 10 19:12:05 crc kubenswrapper[4861]: I0310 19:12:05.784330 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wxrfj\" (UniqueName: \"kubernetes.io/projected/9f3ba611-9f83-40a6-9282-d4e3b0ccfbfd-kube-api-access-wxrfj\") pod \"nova-cell0-conductor-0\" (UID: \"9f3ba611-9f83-40a6-9282-d4e3b0ccfbfd\") " pod="openstack/nova-cell0-conductor-0" Mar 10 19:12:05 crc 
kubenswrapper[4861]: I0310 19:12:05.885907 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9f3ba611-9f83-40a6-9282-d4e3b0ccfbfd-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"9f3ba611-9f83-40a6-9282-d4e3b0ccfbfd\") " pod="openstack/nova-cell0-conductor-0" Mar 10 19:12:05 crc kubenswrapper[4861]: I0310 19:12:05.886275 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wxrfj\" (UniqueName: \"kubernetes.io/projected/9f3ba611-9f83-40a6-9282-d4e3b0ccfbfd-kube-api-access-wxrfj\") pod \"nova-cell0-conductor-0\" (UID: \"9f3ba611-9f83-40a6-9282-d4e3b0ccfbfd\") " pod="openstack/nova-cell0-conductor-0" Mar 10 19:12:05 crc kubenswrapper[4861]: I0310 19:12:05.886491 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9f3ba611-9f83-40a6-9282-d4e3b0ccfbfd-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"9f3ba611-9f83-40a6-9282-d4e3b0ccfbfd\") " pod="openstack/nova-cell0-conductor-0" Mar 10 19:12:05 crc kubenswrapper[4861]: I0310 19:12:05.891513 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9f3ba611-9f83-40a6-9282-d4e3b0ccfbfd-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"9f3ba611-9f83-40a6-9282-d4e3b0ccfbfd\") " pod="openstack/nova-cell0-conductor-0" Mar 10 19:12:05 crc kubenswrapper[4861]: I0310 19:12:05.893200 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9f3ba611-9f83-40a6-9282-d4e3b0ccfbfd-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"9f3ba611-9f83-40a6-9282-d4e3b0ccfbfd\") " pod="openstack/nova-cell0-conductor-0" Mar 10 19:12:05 crc kubenswrapper[4861]: I0310 19:12:05.910908 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-wxrfj\" (UniqueName: \"kubernetes.io/projected/9f3ba611-9f83-40a6-9282-d4e3b0ccfbfd-kube-api-access-wxrfj\") pod \"nova-cell0-conductor-0\" (UID: \"9f3ba611-9f83-40a6-9282-d4e3b0ccfbfd\") " pod="openstack/nova-cell0-conductor-0" Mar 10 19:12:05 crc kubenswrapper[4861]: I0310 19:12:05.997404 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Mar 10 19:12:06 crc kubenswrapper[4861]: W0310 19:12:06.511009 4861 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9f3ba611_9f83_40a6_9282_d4e3b0ccfbfd.slice/crio-b6a7d3b9252abad782b9dfca604f83f73a1b5ecf3b150762169a8607e013cca5 WatchSource:0}: Error finding container b6a7d3b9252abad782b9dfca604f83f73a1b5ecf3b150762169a8607e013cca5: Status 404 returned error can't find the container with id b6a7d3b9252abad782b9dfca604f83f73a1b5ecf3b150762169a8607e013cca5 Mar 10 19:12:06 crc kubenswrapper[4861]: I0310 19:12:06.516159 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Mar 10 19:12:06 crc kubenswrapper[4861]: I0310 19:12:06.578268 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b5a6522b-3ccb-4d9b-b48d-dce8c34f3eab","Type":"ContainerStarted","Data":"07667775b38ed6a21232726c2f1719e926731eb5a7eee20c7a79075101659dd2"} Mar 10 19:12:06 crc kubenswrapper[4861]: I0310 19:12:06.579505 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"9f3ba611-9f83-40a6-9282-d4e3b0ccfbfd","Type":"ContainerStarted","Data":"b6a7d3b9252abad782b9dfca604f83f73a1b5ecf3b150762169a8607e013cca5"} Mar 10 19:12:06 crc kubenswrapper[4861]: I0310 19:12:06.968374 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="68aea368-20a3-44da-9d77-eafce380801e" path="/var/lib/kubelet/pods/68aea368-20a3-44da-9d77-eafce380801e/volumes" Mar 10 19:12:07 
crc kubenswrapper[4861]: I0310 19:12:07.588966 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b5a6522b-3ccb-4d9b-b48d-dce8c34f3eab","Type":"ContainerStarted","Data":"bb6f71b5360bba8bdcea9ee90a666f5c40c9a425f05d5c8ad3c7dd33a254a044"} Mar 10 19:12:07 crc kubenswrapper[4861]: I0310 19:12:07.589264 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b5a6522b-3ccb-4d9b-b48d-dce8c34f3eab","Type":"ContainerStarted","Data":"2d61b389d848ac0fe241eee6fd117d6cb26b15215325ec3da5afdb59ec6c22fa"} Mar 10 19:12:07 crc kubenswrapper[4861]: I0310 19:12:07.590122 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"9f3ba611-9f83-40a6-9282-d4e3b0ccfbfd","Type":"ContainerStarted","Data":"0904b4b9e3cd9ce225079b0b912d86c870f80dd82c4739bd66db6fc72350b2da"} Mar 10 19:12:07 crc kubenswrapper[4861]: I0310 19:12:07.590222 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Mar 10 19:12:09 crc kubenswrapper[4861]: I0310 19:12:09.610700 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b5a6522b-3ccb-4d9b-b48d-dce8c34f3eab","Type":"ContainerStarted","Data":"aec68c03619f69a3eb371ec522aa257bf49ad74916348fcb5bdc5a8ffc86c627"} Mar 10 19:12:09 crc kubenswrapper[4861]: I0310 19:12:09.611377 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Mar 10 19:12:09 crc kubenswrapper[4861]: I0310 19:12:09.643952 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=4.643926991 podStartE2EDuration="4.643926991s" podCreationTimestamp="2026-03-10 19:12:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-10 19:12:07.61492062 +0000 UTC m=+1471.378356590" 
watchObservedRunningTime="2026-03-10 19:12:09.643926991 +0000 UTC m=+1473.407362981" Mar 10 19:12:09 crc kubenswrapper[4861]: I0310 19:12:09.650809 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.033520473 podStartE2EDuration="5.650789272s" podCreationTimestamp="2026-03-10 19:12:04 +0000 UTC" firstStartedPulling="2026-03-10 19:12:05.455540325 +0000 UTC m=+1469.218976295" lastFinishedPulling="2026-03-10 19:12:09.072809094 +0000 UTC m=+1472.836245094" observedRunningTime="2026-03-10 19:12:09.638301075 +0000 UTC m=+1473.401737065" watchObservedRunningTime="2026-03-10 19:12:09.650789272 +0000 UTC m=+1473.414225282" Mar 10 19:12:16 crc kubenswrapper[4861]: I0310 19:12:16.028703 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0" Mar 10 19:12:16 crc kubenswrapper[4861]: I0310 19:12:16.561022 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cell-mapping-qnt8d"] Mar 10 19:12:16 crc kubenswrapper[4861]: I0310 19:12:16.562868 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-qnt8d" Mar 10 19:12:16 crc kubenswrapper[4861]: I0310 19:12:16.567218 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts" Mar 10 19:12:16 crc kubenswrapper[4861]: I0310 19:12:16.568133 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data" Mar 10 19:12:16 crc kubenswrapper[4861]: I0310 19:12:16.592074 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-qnt8d"] Mar 10 19:12:16 crc kubenswrapper[4861]: I0310 19:12:16.683840 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5d48de82-400d-41d1-a054-f451486e0ff5-config-data\") pod \"nova-cell0-cell-mapping-qnt8d\" (UID: \"5d48de82-400d-41d1-a054-f451486e0ff5\") " pod="openstack/nova-cell0-cell-mapping-qnt8d" Mar 10 19:12:16 crc kubenswrapper[4861]: I0310 19:12:16.683923 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5d48de82-400d-41d1-a054-f451486e0ff5-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-qnt8d\" (UID: \"5d48de82-400d-41d1-a054-f451486e0ff5\") " pod="openstack/nova-cell0-cell-mapping-qnt8d" Mar 10 19:12:16 crc kubenswrapper[4861]: I0310 19:12:16.683947 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cspw8\" (UniqueName: \"kubernetes.io/projected/5d48de82-400d-41d1-a054-f451486e0ff5-kube-api-access-cspw8\") pod \"nova-cell0-cell-mapping-qnt8d\" (UID: \"5d48de82-400d-41d1-a054-f451486e0ff5\") " pod="openstack/nova-cell0-cell-mapping-qnt8d" Mar 10 19:12:16 crc kubenswrapper[4861]: I0310 19:12:16.683977 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" 
(UniqueName: \"kubernetes.io/secret/5d48de82-400d-41d1-a054-f451486e0ff5-scripts\") pod \"nova-cell0-cell-mapping-qnt8d\" (UID: \"5d48de82-400d-41d1-a054-f451486e0ff5\") " pod="openstack/nova-cell0-cell-mapping-qnt8d" Mar 10 19:12:16 crc kubenswrapper[4861]: I0310 19:12:16.772055 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Mar 10 19:12:16 crc kubenswrapper[4861]: I0310 19:12:16.774170 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Mar 10 19:12:16 crc kubenswrapper[4861]: I0310 19:12:16.786050 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Mar 10 19:12:16 crc kubenswrapper[4861]: I0310 19:12:16.787361 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5d48de82-400d-41d1-a054-f451486e0ff5-config-data\") pod \"nova-cell0-cell-mapping-qnt8d\" (UID: \"5d48de82-400d-41d1-a054-f451486e0ff5\") " pod="openstack/nova-cell0-cell-mapping-qnt8d" Mar 10 19:12:16 crc kubenswrapper[4861]: I0310 19:12:16.787469 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5d48de82-400d-41d1-a054-f451486e0ff5-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-qnt8d\" (UID: \"5d48de82-400d-41d1-a054-f451486e0ff5\") " pod="openstack/nova-cell0-cell-mapping-qnt8d" Mar 10 19:12:16 crc kubenswrapper[4861]: I0310 19:12:16.787497 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cspw8\" (UniqueName: \"kubernetes.io/projected/5d48de82-400d-41d1-a054-f451486e0ff5-kube-api-access-cspw8\") pod \"nova-cell0-cell-mapping-qnt8d\" (UID: \"5d48de82-400d-41d1-a054-f451486e0ff5\") " pod="openstack/nova-cell0-cell-mapping-qnt8d" Mar 10 19:12:16 crc kubenswrapper[4861]: I0310 19:12:16.787557 4861 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5d48de82-400d-41d1-a054-f451486e0ff5-scripts\") pod \"nova-cell0-cell-mapping-qnt8d\" (UID: \"5d48de82-400d-41d1-a054-f451486e0ff5\") " pod="openstack/nova-cell0-cell-mapping-qnt8d" Mar 10 19:12:16 crc kubenswrapper[4861]: I0310 19:12:16.810376 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Mar 10 19:12:16 crc kubenswrapper[4861]: I0310 19:12:16.821455 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5d48de82-400d-41d1-a054-f451486e0ff5-scripts\") pod \"nova-cell0-cell-mapping-qnt8d\" (UID: \"5d48de82-400d-41d1-a054-f451486e0ff5\") " pod="openstack/nova-cell0-cell-mapping-qnt8d" Mar 10 19:12:16 crc kubenswrapper[4861]: I0310 19:12:16.822828 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5d48de82-400d-41d1-a054-f451486e0ff5-config-data\") pod \"nova-cell0-cell-mapping-qnt8d\" (UID: \"5d48de82-400d-41d1-a054-f451486e0ff5\") " pod="openstack/nova-cell0-cell-mapping-qnt8d" Mar 10 19:12:16 crc kubenswrapper[4861]: I0310 19:12:16.823304 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5d48de82-400d-41d1-a054-f451486e0ff5-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-qnt8d\" (UID: \"5d48de82-400d-41d1-a054-f451486e0ff5\") " pod="openstack/nova-cell0-cell-mapping-qnt8d" Mar 10 19:12:16 crc kubenswrapper[4861]: I0310 19:12:16.830006 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Mar 10 19:12:16 crc kubenswrapper[4861]: I0310 19:12:16.831515 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Mar 10 19:12:16 crc kubenswrapper[4861]: I0310 19:12:16.834273 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cspw8\" (UniqueName: \"kubernetes.io/projected/5d48de82-400d-41d1-a054-f451486e0ff5-kube-api-access-cspw8\") pod \"nova-cell0-cell-mapping-qnt8d\" (UID: \"5d48de82-400d-41d1-a054-f451486e0ff5\") " pod="openstack/nova-cell0-cell-mapping-qnt8d" Mar 10 19:12:16 crc kubenswrapper[4861]: I0310 19:12:16.835390 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Mar 10 19:12:16 crc kubenswrapper[4861]: I0310 19:12:16.862426 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Mar 10 19:12:16 crc kubenswrapper[4861]: I0310 19:12:16.890589 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fhc8p\" (UniqueName: \"kubernetes.io/projected/b29d5d26-c7f5-4556-9535-7743f991423a-kube-api-access-fhc8p\") pod \"nova-scheduler-0\" (UID: \"b29d5d26-c7f5-4556-9535-7743f991423a\") " pod="openstack/nova-scheduler-0" Mar 10 19:12:16 crc kubenswrapper[4861]: I0310 19:12:16.890643 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b44343a9-b5bd-4f04-b33f-73abd4d4a553-config-data\") pod \"nova-api-0\" (UID: \"b44343a9-b5bd-4f04-b33f-73abd4d4a553\") " pod="openstack/nova-api-0" Mar 10 19:12:16 crc kubenswrapper[4861]: I0310 19:12:16.890748 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8gqpp\" (UniqueName: \"kubernetes.io/projected/b44343a9-b5bd-4f04-b33f-73abd4d4a553-kube-api-access-8gqpp\") pod \"nova-api-0\" (UID: \"b44343a9-b5bd-4f04-b33f-73abd4d4a553\") " pod="openstack/nova-api-0" Mar 10 19:12:16 crc kubenswrapper[4861]: I0310 19:12:16.890800 4861 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b44343a9-b5bd-4f04-b33f-73abd4d4a553-logs\") pod \"nova-api-0\" (UID: \"b44343a9-b5bd-4f04-b33f-73abd4d4a553\") " pod="openstack/nova-api-0" Mar 10 19:12:16 crc kubenswrapper[4861]: I0310 19:12:16.890825 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b29d5d26-c7f5-4556-9535-7743f991423a-config-data\") pod \"nova-scheduler-0\" (UID: \"b29d5d26-c7f5-4556-9535-7743f991423a\") " pod="openstack/nova-scheduler-0" Mar 10 19:12:16 crc kubenswrapper[4861]: I0310 19:12:16.890863 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b29d5d26-c7f5-4556-9535-7743f991423a-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"b29d5d26-c7f5-4556-9535-7743f991423a\") " pod="openstack/nova-scheduler-0" Mar 10 19:12:16 crc kubenswrapper[4861]: I0310 19:12:16.890882 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b44343a9-b5bd-4f04-b33f-73abd4d4a553-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"b44343a9-b5bd-4f04-b33f-73abd4d4a553\") " pod="openstack/nova-api-0" Mar 10 19:12:16 crc kubenswrapper[4861]: I0310 19:12:16.892496 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-qnt8d" Mar 10 19:12:16 crc kubenswrapper[4861]: I0310 19:12:16.969985 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Mar 10 19:12:16 crc kubenswrapper[4861]: I0310 19:12:16.974475 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Mar 10 19:12:16 crc kubenswrapper[4861]: I0310 19:12:16.997539 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b44343a9-b5bd-4f04-b33f-73abd4d4a553-logs\") pod \"nova-api-0\" (UID: \"b44343a9-b5bd-4f04-b33f-73abd4d4a553\") " pod="openstack/nova-api-0" Mar 10 19:12:16 crc kubenswrapper[4861]: I0310 19:12:16.997581 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b29d5d26-c7f5-4556-9535-7743f991423a-config-data\") pod \"nova-scheduler-0\" (UID: \"b29d5d26-c7f5-4556-9535-7743f991423a\") " pod="openstack/nova-scheduler-0" Mar 10 19:12:16 crc kubenswrapper[4861]: I0310 19:12:16.997626 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b29d5d26-c7f5-4556-9535-7743f991423a-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"b29d5d26-c7f5-4556-9535-7743f991423a\") " pod="openstack/nova-scheduler-0" Mar 10 19:12:16 crc kubenswrapper[4861]: I0310 19:12:16.997645 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b44343a9-b5bd-4f04-b33f-73abd4d4a553-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"b44343a9-b5bd-4f04-b33f-73abd4d4a553\") " pod="openstack/nova-api-0" Mar 10 19:12:16 crc kubenswrapper[4861]: I0310 19:12:16.997727 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fhc8p\" (UniqueName: \"kubernetes.io/projected/b29d5d26-c7f5-4556-9535-7743f991423a-kube-api-access-fhc8p\") pod \"nova-scheduler-0\" (UID: \"b29d5d26-c7f5-4556-9535-7743f991423a\") " pod="openstack/nova-scheduler-0" Mar 10 19:12:16 crc kubenswrapper[4861]: I0310 19:12:16.997761 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b44343a9-b5bd-4f04-b33f-73abd4d4a553-config-data\") pod \"nova-api-0\" (UID: \"b44343a9-b5bd-4f04-b33f-73abd4d4a553\") " pod="openstack/nova-api-0" Mar 10 19:12:16 crc kubenswrapper[4861]: I0310 19:12:16.997818 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8gqpp\" (UniqueName: \"kubernetes.io/projected/b44343a9-b5bd-4f04-b33f-73abd4d4a553-kube-api-access-8gqpp\") pod \"nova-api-0\" (UID: \"b44343a9-b5bd-4f04-b33f-73abd4d4a553\") " pod="openstack/nova-api-0" Mar 10 19:12:17 crc kubenswrapper[4861]: I0310 19:12:16.999238 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b44343a9-b5bd-4f04-b33f-73abd4d4a553-logs\") pod \"nova-api-0\" (UID: \"b44343a9-b5bd-4f04-b33f-73abd4d4a553\") " pod="openstack/nova-api-0" Mar 10 19:12:17 crc kubenswrapper[4861]: I0310 19:12:17.020957 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Mar 10 19:12:17 crc kubenswrapper[4861]: I0310 19:12:17.029205 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b29d5d26-c7f5-4556-9535-7743f991423a-config-data\") pod \"nova-scheduler-0\" (UID: \"b29d5d26-c7f5-4556-9535-7743f991423a\") " pod="openstack/nova-scheduler-0" Mar 10 19:12:17 crc kubenswrapper[4861]: I0310 19:12:17.033449 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b44343a9-b5bd-4f04-b33f-73abd4d4a553-config-data\") pod \"nova-api-0\" (UID: \"b44343a9-b5bd-4f04-b33f-73abd4d4a553\") " pod="openstack/nova-api-0" Mar 10 19:12:17 crc kubenswrapper[4861]: I0310 19:12:17.033661 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/b29d5d26-c7f5-4556-9535-7743f991423a-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"b29d5d26-c7f5-4556-9535-7743f991423a\") " pod="openstack/nova-scheduler-0" Mar 10 19:12:17 crc kubenswrapper[4861]: I0310 19:12:17.036587 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fhc8p\" (UniqueName: \"kubernetes.io/projected/b29d5d26-c7f5-4556-9535-7743f991423a-kube-api-access-fhc8p\") pod \"nova-scheduler-0\" (UID: \"b29d5d26-c7f5-4556-9535-7743f991423a\") " pod="openstack/nova-scheduler-0" Mar 10 19:12:17 crc kubenswrapper[4861]: I0310 19:12:17.039846 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b44343a9-b5bd-4f04-b33f-73abd4d4a553-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"b44343a9-b5bd-4f04-b33f-73abd4d4a553\") " pod="openstack/nova-api-0" Mar 10 19:12:17 crc kubenswrapper[4861]: I0310 19:12:17.042988 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8gqpp\" (UniqueName: \"kubernetes.io/projected/b44343a9-b5bd-4f04-b33f-73abd4d4a553-kube-api-access-8gqpp\") pod \"nova-api-0\" (UID: \"b44343a9-b5bd-4f04-b33f-73abd4d4a553\") " pod="openstack/nova-api-0" Mar 10 19:12:17 crc kubenswrapper[4861]: I0310 19:12:17.044473 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Mar 10 19:12:17 crc kubenswrapper[4861]: I0310 19:12:17.044622 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Mar 10 19:12:17 crc kubenswrapper[4861]: I0310 19:12:17.051763 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Mar 10 19:12:17 crc kubenswrapper[4861]: I0310 19:12:17.066047 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Mar 10 19:12:17 crc kubenswrapper[4861]: I0310 19:12:17.072913 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Mar 10 19:12:17 crc kubenswrapper[4861]: I0310 19:12:17.101176 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/366c866d-7c07-4050-a22f-ddc4421c0447-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"366c866d-7c07-4050-a22f-ddc4421c0447\") " pod="openstack/nova-cell1-novncproxy-0" Mar 10 19:12:17 crc kubenswrapper[4861]: I0310 19:12:17.101232 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/366c866d-7c07-4050-a22f-ddc4421c0447-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"366c866d-7c07-4050-a22f-ddc4421c0447\") " pod="openstack/nova-cell1-novncproxy-0" Mar 10 19:12:17 crc kubenswrapper[4861]: I0310 19:12:17.101257 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8e3cba03-6473-4f76-bc3c-64ed835fecab-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"8e3cba03-6473-4f76-bc3c-64ed835fecab\") " pod="openstack/nova-metadata-0" Mar 10 19:12:17 crc kubenswrapper[4861]: I0310 19:12:17.101299 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5qbq4\" (UniqueName: \"kubernetes.io/projected/366c866d-7c07-4050-a22f-ddc4421c0447-kube-api-access-5qbq4\") pod \"nova-cell1-novncproxy-0\" (UID: \"366c866d-7c07-4050-a22f-ddc4421c0447\") " pod="openstack/nova-cell1-novncproxy-0" Mar 10 19:12:17 crc 
kubenswrapper[4861]: I0310 19:12:17.101350 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8e3cba03-6473-4f76-bc3c-64ed835fecab-logs\") pod \"nova-metadata-0\" (UID: \"8e3cba03-6473-4f76-bc3c-64ed835fecab\") " pod="openstack/nova-metadata-0" Mar 10 19:12:17 crc kubenswrapper[4861]: I0310 19:12:17.101406 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8e3cba03-6473-4f76-bc3c-64ed835fecab-config-data\") pod \"nova-metadata-0\" (UID: \"8e3cba03-6473-4f76-bc3c-64ed835fecab\") " pod="openstack/nova-metadata-0" Mar 10 19:12:17 crc kubenswrapper[4861]: I0310 19:12:17.101464 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8jkh2\" (UniqueName: \"kubernetes.io/projected/8e3cba03-6473-4f76-bc3c-64ed835fecab-kube-api-access-8jkh2\") pod \"nova-metadata-0\" (UID: \"8e3cba03-6473-4f76-bc3c-64ed835fecab\") " pod="openstack/nova-metadata-0" Mar 10 19:12:17 crc kubenswrapper[4861]: I0310 19:12:17.118241 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Mar 10 19:12:17 crc kubenswrapper[4861]: I0310 19:12:17.151292 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7bd5679c8c-84fjp"] Mar 10 19:12:17 crc kubenswrapper[4861]: I0310 19:12:17.152857 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7bd5679c8c-84fjp" Mar 10 19:12:17 crc kubenswrapper[4861]: I0310 19:12:17.273881 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Mar 10 19:12:17 crc kubenswrapper[4861]: I0310 19:12:17.293697 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/366c866d-7c07-4050-a22f-ddc4421c0447-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"366c866d-7c07-4050-a22f-ddc4421c0447\") " pod="openstack/nova-cell1-novncproxy-0" Mar 10 19:12:17 crc kubenswrapper[4861]: I0310 19:12:17.294860 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8e3cba03-6473-4f76-bc3c-64ed835fecab-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"8e3cba03-6473-4f76-bc3c-64ed835fecab\") " pod="openstack/nova-metadata-0" Mar 10 19:12:17 crc kubenswrapper[4861]: I0310 19:12:17.294992 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5qbq4\" (UniqueName: \"kubernetes.io/projected/366c866d-7c07-4050-a22f-ddc4421c0447-kube-api-access-5qbq4\") pod \"nova-cell1-novncproxy-0\" (UID: \"366c866d-7c07-4050-a22f-ddc4421c0447\") " pod="openstack/nova-cell1-novncproxy-0" Mar 10 19:12:17 crc kubenswrapper[4861]: I0310 19:12:17.295152 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8e3cba03-6473-4f76-bc3c-64ed835fecab-logs\") pod \"nova-metadata-0\" (UID: \"8e3cba03-6473-4f76-bc3c-64ed835fecab\") " pod="openstack/nova-metadata-0" Mar 10 19:12:17 crc kubenswrapper[4861]: I0310 19:12:17.295244 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f078c086-e6d2-4bfe-9767-8b60c59f9c5f-ovsdbserver-nb\") pod \"dnsmasq-dns-7bd5679c8c-84fjp\" (UID: \"f078c086-e6d2-4bfe-9767-8b60c59f9c5f\") " pod="openstack/dnsmasq-dns-7bd5679c8c-84fjp" Mar 10 19:12:17 crc kubenswrapper[4861]: I0310 
19:12:17.295734 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f078c086-e6d2-4bfe-9767-8b60c59f9c5f-config\") pod \"dnsmasq-dns-7bd5679c8c-84fjp\" (UID: \"f078c086-e6d2-4bfe-9767-8b60c59f9c5f\") " pod="openstack/dnsmasq-dns-7bd5679c8c-84fjp" Mar 10 19:12:17 crc kubenswrapper[4861]: I0310 19:12:17.309666 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f078c086-e6d2-4bfe-9767-8b60c59f9c5f-dns-svc\") pod \"dnsmasq-dns-7bd5679c8c-84fjp\" (UID: \"f078c086-e6d2-4bfe-9767-8b60c59f9c5f\") " pod="openstack/dnsmasq-dns-7bd5679c8c-84fjp" Mar 10 19:12:17 crc kubenswrapper[4861]: I0310 19:12:17.309935 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8e3cba03-6473-4f76-bc3c-64ed835fecab-config-data\") pod \"nova-metadata-0\" (UID: \"8e3cba03-6473-4f76-bc3c-64ed835fecab\") " pod="openstack/nova-metadata-0" Mar 10 19:12:17 crc kubenswrapper[4861]: I0310 19:12:17.310047 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f078c086-e6d2-4bfe-9767-8b60c59f9c5f-ovsdbserver-sb\") pod \"dnsmasq-dns-7bd5679c8c-84fjp\" (UID: \"f078c086-e6d2-4bfe-9767-8b60c59f9c5f\") " pod="openstack/dnsmasq-dns-7bd5679c8c-84fjp" Mar 10 19:12:17 crc kubenswrapper[4861]: I0310 19:12:17.310179 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qlbh2\" (UniqueName: \"kubernetes.io/projected/f078c086-e6d2-4bfe-9767-8b60c59f9c5f-kube-api-access-qlbh2\") pod \"dnsmasq-dns-7bd5679c8c-84fjp\" (UID: \"f078c086-e6d2-4bfe-9767-8b60c59f9c5f\") " pod="openstack/dnsmasq-dns-7bd5679c8c-84fjp" Mar 10 19:12:17 crc kubenswrapper[4861]: I0310 19:12:17.311471 4861 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8jkh2\" (UniqueName: \"kubernetes.io/projected/8e3cba03-6473-4f76-bc3c-64ed835fecab-kube-api-access-8jkh2\") pod \"nova-metadata-0\" (UID: \"8e3cba03-6473-4f76-bc3c-64ed835fecab\") " pod="openstack/nova-metadata-0" Mar 10 19:12:17 crc kubenswrapper[4861]: I0310 19:12:17.311802 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/366c866d-7c07-4050-a22f-ddc4421c0447-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"366c866d-7c07-4050-a22f-ddc4421c0447\") " pod="openstack/nova-cell1-novncproxy-0" Mar 10 19:12:17 crc kubenswrapper[4861]: I0310 19:12:17.311924 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f078c086-e6d2-4bfe-9767-8b60c59f9c5f-dns-swift-storage-0\") pod \"dnsmasq-dns-7bd5679c8c-84fjp\" (UID: \"f078c086-e6d2-4bfe-9767-8b60c59f9c5f\") " pod="openstack/dnsmasq-dns-7bd5679c8c-84fjp" Mar 10 19:12:17 crc kubenswrapper[4861]: I0310 19:12:17.309537 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/366c866d-7c07-4050-a22f-ddc4421c0447-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"366c866d-7c07-4050-a22f-ddc4421c0447\") " pod="openstack/nova-cell1-novncproxy-0" Mar 10 19:12:17 crc kubenswrapper[4861]: I0310 19:12:17.314204 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8e3cba03-6473-4f76-bc3c-64ed835fecab-config-data\") pod \"nova-metadata-0\" (UID: \"8e3cba03-6473-4f76-bc3c-64ed835fecab\") " pod="openstack/nova-metadata-0" Mar 10 19:12:17 crc kubenswrapper[4861]: I0310 19:12:17.320025 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/8e3cba03-6473-4f76-bc3c-64ed835fecab-logs\") pod \"nova-metadata-0\" (UID: \"8e3cba03-6473-4f76-bc3c-64ed835fecab\") " pod="openstack/nova-metadata-0" Mar 10 19:12:17 crc kubenswrapper[4861]: I0310 19:12:17.323519 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/366c866d-7c07-4050-a22f-ddc4421c0447-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"366c866d-7c07-4050-a22f-ddc4421c0447\") " pod="openstack/nova-cell1-novncproxy-0" Mar 10 19:12:17 crc kubenswrapper[4861]: I0310 19:12:17.343919 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8e3cba03-6473-4f76-bc3c-64ed835fecab-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"8e3cba03-6473-4f76-bc3c-64ed835fecab\") " pod="openstack/nova-metadata-0" Mar 10 19:12:17 crc kubenswrapper[4861]: I0310 19:12:17.345761 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8jkh2\" (UniqueName: \"kubernetes.io/projected/8e3cba03-6473-4f76-bc3c-64ed835fecab-kube-api-access-8jkh2\") pod \"nova-metadata-0\" (UID: \"8e3cba03-6473-4f76-bc3c-64ed835fecab\") " pod="openstack/nova-metadata-0" Mar 10 19:12:17 crc kubenswrapper[4861]: I0310 19:12:17.348286 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7bd5679c8c-84fjp"] Mar 10 19:12:17 crc kubenswrapper[4861]: I0310 19:12:17.363148 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5qbq4\" (UniqueName: \"kubernetes.io/projected/366c866d-7c07-4050-a22f-ddc4421c0447-kube-api-access-5qbq4\") pod \"nova-cell1-novncproxy-0\" (UID: \"366c866d-7c07-4050-a22f-ddc4421c0447\") " pod="openstack/nova-cell1-novncproxy-0" Mar 10 19:12:17 crc kubenswrapper[4861]: I0310 19:12:17.403679 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Mar 10 19:12:17 crc kubenswrapper[4861]: I0310 19:12:17.415168 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f078c086-e6d2-4bfe-9767-8b60c59f9c5f-dns-swift-storage-0\") pod \"dnsmasq-dns-7bd5679c8c-84fjp\" (UID: \"f078c086-e6d2-4bfe-9767-8b60c59f9c5f\") " pod="openstack/dnsmasq-dns-7bd5679c8c-84fjp" Mar 10 19:12:17 crc kubenswrapper[4861]: I0310 19:12:17.419679 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f078c086-e6d2-4bfe-9767-8b60c59f9c5f-dns-swift-storage-0\") pod \"dnsmasq-dns-7bd5679c8c-84fjp\" (UID: \"f078c086-e6d2-4bfe-9767-8b60c59f9c5f\") " pod="openstack/dnsmasq-dns-7bd5679c8c-84fjp" Mar 10 19:12:17 crc kubenswrapper[4861]: I0310 19:12:17.419816 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f078c086-e6d2-4bfe-9767-8b60c59f9c5f-ovsdbserver-nb\") pod \"dnsmasq-dns-7bd5679c8c-84fjp\" (UID: \"f078c086-e6d2-4bfe-9767-8b60c59f9c5f\") " pod="openstack/dnsmasq-dns-7bd5679c8c-84fjp" Mar 10 19:12:17 crc kubenswrapper[4861]: I0310 19:12:17.419846 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f078c086-e6d2-4bfe-9767-8b60c59f9c5f-config\") pod \"dnsmasq-dns-7bd5679c8c-84fjp\" (UID: \"f078c086-e6d2-4bfe-9767-8b60c59f9c5f\") " pod="openstack/dnsmasq-dns-7bd5679c8c-84fjp" Mar 10 19:12:17 crc kubenswrapper[4861]: I0310 19:12:17.419893 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f078c086-e6d2-4bfe-9767-8b60c59f9c5f-dns-svc\") pod \"dnsmasq-dns-7bd5679c8c-84fjp\" (UID: \"f078c086-e6d2-4bfe-9767-8b60c59f9c5f\") " pod="openstack/dnsmasq-dns-7bd5679c8c-84fjp" Mar 10 
19:12:17 crc kubenswrapper[4861]: I0310 19:12:17.419931 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f078c086-e6d2-4bfe-9767-8b60c59f9c5f-ovsdbserver-sb\") pod \"dnsmasq-dns-7bd5679c8c-84fjp\" (UID: \"f078c086-e6d2-4bfe-9767-8b60c59f9c5f\") " pod="openstack/dnsmasq-dns-7bd5679c8c-84fjp" Mar 10 19:12:17 crc kubenswrapper[4861]: I0310 19:12:17.419972 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qlbh2\" (UniqueName: \"kubernetes.io/projected/f078c086-e6d2-4bfe-9767-8b60c59f9c5f-kube-api-access-qlbh2\") pod \"dnsmasq-dns-7bd5679c8c-84fjp\" (UID: \"f078c086-e6d2-4bfe-9767-8b60c59f9c5f\") " pod="openstack/dnsmasq-dns-7bd5679c8c-84fjp" Mar 10 19:12:17 crc kubenswrapper[4861]: I0310 19:12:17.420792 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f078c086-e6d2-4bfe-9767-8b60c59f9c5f-ovsdbserver-nb\") pod \"dnsmasq-dns-7bd5679c8c-84fjp\" (UID: \"f078c086-e6d2-4bfe-9767-8b60c59f9c5f\") " pod="openstack/dnsmasq-dns-7bd5679c8c-84fjp" Mar 10 19:12:17 crc kubenswrapper[4861]: I0310 19:12:17.421278 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f078c086-e6d2-4bfe-9767-8b60c59f9c5f-config\") pod \"dnsmasq-dns-7bd5679c8c-84fjp\" (UID: \"f078c086-e6d2-4bfe-9767-8b60c59f9c5f\") " pod="openstack/dnsmasq-dns-7bd5679c8c-84fjp" Mar 10 19:12:17 crc kubenswrapper[4861]: I0310 19:12:17.421794 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f078c086-e6d2-4bfe-9767-8b60c59f9c5f-dns-svc\") pod \"dnsmasq-dns-7bd5679c8c-84fjp\" (UID: \"f078c086-e6d2-4bfe-9767-8b60c59f9c5f\") " pod="openstack/dnsmasq-dns-7bd5679c8c-84fjp" Mar 10 19:12:17 crc kubenswrapper[4861]: I0310 19:12:17.422258 4861 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f078c086-e6d2-4bfe-9767-8b60c59f9c5f-ovsdbserver-sb\") pod \"dnsmasq-dns-7bd5679c8c-84fjp\" (UID: \"f078c086-e6d2-4bfe-9767-8b60c59f9c5f\") " pod="openstack/dnsmasq-dns-7bd5679c8c-84fjp" Mar 10 19:12:17 crc kubenswrapper[4861]: I0310 19:12:17.423806 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Mar 10 19:12:17 crc kubenswrapper[4861]: I0310 19:12:17.437982 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qlbh2\" (UniqueName: \"kubernetes.io/projected/f078c086-e6d2-4bfe-9767-8b60c59f9c5f-kube-api-access-qlbh2\") pod \"dnsmasq-dns-7bd5679c8c-84fjp\" (UID: \"f078c086-e6d2-4bfe-9767-8b60c59f9c5f\") " pod="openstack/dnsmasq-dns-7bd5679c8c-84fjp" Mar 10 19:12:17 crc kubenswrapper[4861]: I0310 19:12:17.544921 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7bd5679c8c-84fjp" Mar 10 19:12:17 crc kubenswrapper[4861]: I0310 19:12:17.630257 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-qnt8d"] Mar 10 19:12:17 crc kubenswrapper[4861]: I0310 19:12:17.684201 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-qnt8d" event={"ID":"5d48de82-400d-41d1-a054-f451486e0ff5","Type":"ContainerStarted","Data":"2c8165a2988a3adbb3ca6e36e6e70fb2c90a82e875e2c232323e1e99c09e65f8"} Mar 10 19:12:17 crc kubenswrapper[4861]: I0310 19:12:17.835206 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Mar 10 19:12:17 crc kubenswrapper[4861]: W0310 19:12:17.863915 4861 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb44343a9_b5bd_4f04_b33f_73abd4d4a553.slice/crio-90b5d2ea45869ea075f0e158dbf492784c05dfdc2119201e9c7c516a8df31e1d WatchSource:0}: Error 
finding container 90b5d2ea45869ea075f0e158dbf492784c05dfdc2119201e9c7c516a8df31e1d: Status 404 returned error can't find the container with id 90b5d2ea45869ea075f0e158dbf492784c05dfdc2119201e9c7c516a8df31e1d Mar 10 19:12:18 crc kubenswrapper[4861]: I0310 19:12:18.004936 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Mar 10 19:12:18 crc kubenswrapper[4861]: W0310 19:12:18.029086 4861 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod366c866d_7c07_4050_a22f_ddc4421c0447.slice/crio-5763971c7d39d52b22d6756d5e9439c8519572edf25e55666af184bfc5ab1ce7 WatchSource:0}: Error finding container 5763971c7d39d52b22d6756d5e9439c8519572edf25e55666af184bfc5ab1ce7: Status 404 returned error can't find the container with id 5763971c7d39d52b22d6756d5e9439c8519572edf25e55666af184bfc5ab1ce7 Mar 10 19:12:18 crc kubenswrapper[4861]: W0310 19:12:18.030818 4861 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb29d5d26_c7f5_4556_9535_7743f991423a.slice/crio-936883600dfcbdf66251f9c74260a955d3d82ae6b1f3d1ebdcb6e1d25bb9c2d1 WatchSource:0}: Error finding container 936883600dfcbdf66251f9c74260a955d3d82ae6b1f3d1ebdcb6e1d25bb9c2d1: Status 404 returned error can't find the container with id 936883600dfcbdf66251f9c74260a955d3d82ae6b1f3d1ebdcb6e1d25bb9c2d1 Mar 10 19:12:18 crc kubenswrapper[4861]: I0310 19:12:18.042051 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Mar 10 19:12:18 crc kubenswrapper[4861]: I0310 19:12:18.067065 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-cdqvp"] Mar 10 19:12:18 crc kubenswrapper[4861]: I0310 19:12:18.072328 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-cdqvp" Mar 10 19:12:18 crc kubenswrapper[4861]: I0310 19:12:18.078835 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-cdqvp"] Mar 10 19:12:18 crc kubenswrapper[4861]: I0310 19:12:18.080088 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts" Mar 10 19:12:18 crc kubenswrapper[4861]: I0310 19:12:18.080351 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Mar 10 19:12:18 crc kubenswrapper[4861]: I0310 19:12:18.088086 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Mar 10 19:12:18 crc kubenswrapper[4861]: I0310 19:12:18.145013 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6c9fh\" (UniqueName: \"kubernetes.io/projected/1626ff9d-fded-4fbc-aa2c-f6f984bcf4ea-kube-api-access-6c9fh\") pod \"nova-cell1-conductor-db-sync-cdqvp\" (UID: \"1626ff9d-fded-4fbc-aa2c-f6f984bcf4ea\") " pod="openstack/nova-cell1-conductor-db-sync-cdqvp" Mar 10 19:12:18 crc kubenswrapper[4861]: I0310 19:12:18.145268 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1626ff9d-fded-4fbc-aa2c-f6f984bcf4ea-config-data\") pod \"nova-cell1-conductor-db-sync-cdqvp\" (UID: \"1626ff9d-fded-4fbc-aa2c-f6f984bcf4ea\") " pod="openstack/nova-cell1-conductor-db-sync-cdqvp" Mar 10 19:12:18 crc kubenswrapper[4861]: I0310 19:12:18.145362 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1626ff9d-fded-4fbc-aa2c-f6f984bcf4ea-scripts\") pod \"nova-cell1-conductor-db-sync-cdqvp\" (UID: \"1626ff9d-fded-4fbc-aa2c-f6f984bcf4ea\") " pod="openstack/nova-cell1-conductor-db-sync-cdqvp" Mar 
10 19:12:18 crc kubenswrapper[4861]: I0310 19:12:18.145484 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1626ff9d-fded-4fbc-aa2c-f6f984bcf4ea-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-cdqvp\" (UID: \"1626ff9d-fded-4fbc-aa2c-f6f984bcf4ea\") " pod="openstack/nova-cell1-conductor-db-sync-cdqvp" Mar 10 19:12:18 crc kubenswrapper[4861]: I0310 19:12:18.169626 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7bd5679c8c-84fjp"] Mar 10 19:12:18 crc kubenswrapper[4861]: I0310 19:12:18.247124 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1626ff9d-fded-4fbc-aa2c-f6f984bcf4ea-config-data\") pod \"nova-cell1-conductor-db-sync-cdqvp\" (UID: \"1626ff9d-fded-4fbc-aa2c-f6f984bcf4ea\") " pod="openstack/nova-cell1-conductor-db-sync-cdqvp" Mar 10 19:12:18 crc kubenswrapper[4861]: I0310 19:12:18.247615 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1626ff9d-fded-4fbc-aa2c-f6f984bcf4ea-scripts\") pod \"nova-cell1-conductor-db-sync-cdqvp\" (UID: \"1626ff9d-fded-4fbc-aa2c-f6f984bcf4ea\") " pod="openstack/nova-cell1-conductor-db-sync-cdqvp" Mar 10 19:12:18 crc kubenswrapper[4861]: I0310 19:12:18.248290 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1626ff9d-fded-4fbc-aa2c-f6f984bcf4ea-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-cdqvp\" (UID: \"1626ff9d-fded-4fbc-aa2c-f6f984bcf4ea\") " pod="openstack/nova-cell1-conductor-db-sync-cdqvp" Mar 10 19:12:18 crc kubenswrapper[4861]: I0310 19:12:18.248658 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6c9fh\" (UniqueName: 
\"kubernetes.io/projected/1626ff9d-fded-4fbc-aa2c-f6f984bcf4ea-kube-api-access-6c9fh\") pod \"nova-cell1-conductor-db-sync-cdqvp\" (UID: \"1626ff9d-fded-4fbc-aa2c-f6f984bcf4ea\") " pod="openstack/nova-cell1-conductor-db-sync-cdqvp" Mar 10 19:12:18 crc kubenswrapper[4861]: I0310 19:12:18.250886 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1626ff9d-fded-4fbc-aa2c-f6f984bcf4ea-scripts\") pod \"nova-cell1-conductor-db-sync-cdqvp\" (UID: \"1626ff9d-fded-4fbc-aa2c-f6f984bcf4ea\") " pod="openstack/nova-cell1-conductor-db-sync-cdqvp" Mar 10 19:12:18 crc kubenswrapper[4861]: I0310 19:12:18.251489 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1626ff9d-fded-4fbc-aa2c-f6f984bcf4ea-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-cdqvp\" (UID: \"1626ff9d-fded-4fbc-aa2c-f6f984bcf4ea\") " pod="openstack/nova-cell1-conductor-db-sync-cdqvp" Mar 10 19:12:18 crc kubenswrapper[4861]: I0310 19:12:18.251566 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1626ff9d-fded-4fbc-aa2c-f6f984bcf4ea-config-data\") pod \"nova-cell1-conductor-db-sync-cdqvp\" (UID: \"1626ff9d-fded-4fbc-aa2c-f6f984bcf4ea\") " pod="openstack/nova-cell1-conductor-db-sync-cdqvp" Mar 10 19:12:18 crc kubenswrapper[4861]: I0310 19:12:18.265292 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6c9fh\" (UniqueName: \"kubernetes.io/projected/1626ff9d-fded-4fbc-aa2c-f6f984bcf4ea-kube-api-access-6c9fh\") pod \"nova-cell1-conductor-db-sync-cdqvp\" (UID: \"1626ff9d-fded-4fbc-aa2c-f6f984bcf4ea\") " pod="openstack/nova-cell1-conductor-db-sync-cdqvp" Mar 10 19:12:18 crc kubenswrapper[4861]: I0310 19:12:18.447373 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-cdqvp" Mar 10 19:12:18 crc kubenswrapper[4861]: I0310 19:12:18.715524 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"366c866d-7c07-4050-a22f-ddc4421c0447","Type":"ContainerStarted","Data":"5763971c7d39d52b22d6756d5e9439c8519572edf25e55666af184bfc5ab1ce7"} Mar 10 19:12:18 crc kubenswrapper[4861]: I0310 19:12:18.717223 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"8e3cba03-6473-4f76-bc3c-64ed835fecab","Type":"ContainerStarted","Data":"27bd9c1101c9b8e0f92626d5ee42b0587e720daa66664aacafe391524dd3aef6"} Mar 10 19:12:18 crc kubenswrapper[4861]: I0310 19:12:18.719267 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-qnt8d" event={"ID":"5d48de82-400d-41d1-a054-f451486e0ff5","Type":"ContainerStarted","Data":"2ec7ad64a891a24b56291d63645fd2ca313e41e4fb9cb01ef30f750e11fe9c88"} Mar 10 19:12:18 crc kubenswrapper[4861]: I0310 19:12:18.724828 4861 generic.go:334] "Generic (PLEG): container finished" podID="f078c086-e6d2-4bfe-9767-8b60c59f9c5f" containerID="dd16e17488fdebb13f3d90041235042f5406f52c6ff6ce7b29fc208dd86f2911" exitCode=0 Mar 10 19:12:18 crc kubenswrapper[4861]: I0310 19:12:18.724875 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7bd5679c8c-84fjp" event={"ID":"f078c086-e6d2-4bfe-9767-8b60c59f9c5f","Type":"ContainerDied","Data":"dd16e17488fdebb13f3d90041235042f5406f52c6ff6ce7b29fc208dd86f2911"} Mar 10 19:12:18 crc kubenswrapper[4861]: I0310 19:12:18.724893 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7bd5679c8c-84fjp" event={"ID":"f078c086-e6d2-4bfe-9767-8b60c59f9c5f","Type":"ContainerStarted","Data":"2e243bf9a76d5d8236aef644cf668cd3cff4e2f801d8cc8f16861181dfbae000"} Mar 10 19:12:18 crc kubenswrapper[4861]: I0310 19:12:18.736453 4861 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openstack/nova-scheduler-0" event={"ID":"b29d5d26-c7f5-4556-9535-7743f991423a","Type":"ContainerStarted","Data":"936883600dfcbdf66251f9c74260a955d3d82ae6b1f3d1ebdcb6e1d25bb9c2d1"} Mar 10 19:12:18 crc kubenswrapper[4861]: I0310 19:12:18.741347 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"b44343a9-b5bd-4f04-b33f-73abd4d4a553","Type":"ContainerStarted","Data":"90b5d2ea45869ea075f0e158dbf492784c05dfdc2119201e9c7c516a8df31e1d"} Mar 10 19:12:18 crc kubenswrapper[4861]: I0310 19:12:18.747485 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-cell-mapping-qnt8d" podStartSLOduration=2.747468237 podStartE2EDuration="2.747468237s" podCreationTimestamp="2026-03-10 19:12:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-10 19:12:18.73284896 +0000 UTC m=+1482.496284920" watchObservedRunningTime="2026-03-10 19:12:18.747468237 +0000 UTC m=+1482.510904197" Mar 10 19:12:18 crc kubenswrapper[4861]: I0310 19:12:18.894401 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-cdqvp"] Mar 10 19:12:19 crc kubenswrapper[4861]: I0310 19:12:19.765156 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-cdqvp" event={"ID":"1626ff9d-fded-4fbc-aa2c-f6f984bcf4ea","Type":"ContainerStarted","Data":"83605c45bb5cb59c7e5876bd4f566208ed4c6a08abfc7535626e8b49a51feb43"} Mar 10 19:12:19 crc kubenswrapper[4861]: I0310 19:12:19.765212 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-cdqvp" event={"ID":"1626ff9d-fded-4fbc-aa2c-f6f984bcf4ea","Type":"ContainerStarted","Data":"3f5abbe4b49540fcdbead0d0d1c28dca48b23bbc655d98d494d7ccd0f95c8732"} Mar 10 19:12:19 crc kubenswrapper[4861]: I0310 19:12:19.768799 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/dnsmasq-dns-7bd5679c8c-84fjp" event={"ID":"f078c086-e6d2-4bfe-9767-8b60c59f9c5f","Type":"ContainerStarted","Data":"6e2f8eb2e24c9390e4071bcb1959462f69335d2e3da753de557bcda491b162fa"} Mar 10 19:12:19 crc kubenswrapper[4861]: I0310 19:12:19.768945 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7bd5679c8c-84fjp" Mar 10 19:12:19 crc kubenswrapper[4861]: I0310 19:12:19.783694 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-db-sync-cdqvp" podStartSLOduration=2.783677547 podStartE2EDuration="2.783677547s" podCreationTimestamp="2026-03-10 19:12:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-10 19:12:19.78275116 +0000 UTC m=+1483.546187150" watchObservedRunningTime="2026-03-10 19:12:19.783677547 +0000 UTC m=+1483.547113507" Mar 10 19:12:19 crc kubenswrapper[4861]: I0310 19:12:19.804463 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7bd5679c8c-84fjp" podStartSLOduration=2.804445233 podStartE2EDuration="2.804445233s" podCreationTimestamp="2026-03-10 19:12:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-10 19:12:19.801359157 +0000 UTC m=+1483.564795147" watchObservedRunningTime="2026-03-10 19:12:19.804445233 +0000 UTC m=+1483.567881203" Mar 10 19:12:20 crc kubenswrapper[4861]: I0310 19:12:20.550988 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Mar 10 19:12:20 crc kubenswrapper[4861]: I0310 19:12:20.557675 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Mar 10 19:12:21 crc kubenswrapper[4861]: I0310 19:12:21.797032 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" 
event={"ID":"b44343a9-b5bd-4f04-b33f-73abd4d4a553","Type":"ContainerStarted","Data":"cda5ad93902931cbb82f5e593d9e1d7562e143a8413c472720712f7246bf4566"} Mar 10 19:12:21 crc kubenswrapper[4861]: I0310 19:12:21.799903 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"366c866d-7c07-4050-a22f-ddc4421c0447","Type":"ContainerStarted","Data":"c80dba858eaf69e13c7b64ab845832510cd27a1aaa334a69cfabf7f52a676df3"} Mar 10 19:12:21 crc kubenswrapper[4861]: I0310 19:12:21.800036 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="366c866d-7c07-4050-a22f-ddc4421c0447" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://c80dba858eaf69e13c7b64ab845832510cd27a1aaa334a69cfabf7f52a676df3" gracePeriod=30 Mar 10 19:12:21 crc kubenswrapper[4861]: I0310 19:12:21.813955 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"8e3cba03-6473-4f76-bc3c-64ed835fecab","Type":"ContainerStarted","Data":"6979292a39dc86c248bf638a7d3664b7bd71126ac8ecdc4908c721728467e341"} Mar 10 19:12:21 crc kubenswrapper[4861]: I0310 19:12:21.815353 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"b29d5d26-c7f5-4556-9535-7743f991423a","Type":"ContainerStarted","Data":"8ed9540fb7fd8e9e0daaad510061ddbf38d590a7b63a7e35e847f4b947d7b36c"} Mar 10 19:12:21 crc kubenswrapper[4861]: I0310 19:12:21.839099 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.465928045 podStartE2EDuration="5.839083572s" podCreationTimestamp="2026-03-10 19:12:16 +0000 UTC" firstStartedPulling="2026-03-10 19:12:18.047596662 +0000 UTC m=+1481.811032622" lastFinishedPulling="2026-03-10 19:12:21.420752149 +0000 UTC m=+1485.184188149" observedRunningTime="2026-03-10 19:12:21.81669872 +0000 UTC m=+1485.580134680" 
watchObservedRunningTime="2026-03-10 19:12:21.839083572 +0000 UTC m=+1485.602519522" Mar 10 19:12:21 crc kubenswrapper[4861]: I0310 19:12:21.850214 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.47078945 podStartE2EDuration="5.85019414s" podCreationTimestamp="2026-03-10 19:12:16 +0000 UTC" firstStartedPulling="2026-03-10 19:12:18.047064628 +0000 UTC m=+1481.810500588" lastFinishedPulling="2026-03-10 19:12:21.426469318 +0000 UTC m=+1485.189905278" observedRunningTime="2026-03-10 19:12:21.832602832 +0000 UTC m=+1485.596038802" watchObservedRunningTime="2026-03-10 19:12:21.85019414 +0000 UTC m=+1485.613630100" Mar 10 19:12:22 crc kubenswrapper[4861]: I0310 19:12:22.274991 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Mar 10 19:12:22 crc kubenswrapper[4861]: I0310 19:12:22.404560 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Mar 10 19:12:22 crc kubenswrapper[4861]: I0310 19:12:22.827799 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"b44343a9-b5bd-4f04-b33f-73abd4d4a553","Type":"ContainerStarted","Data":"1cebaff482369d0358e1d6769ac8bb22633b34cabd56c9ba929456e0782b5555"} Mar 10 19:12:22 crc kubenswrapper[4861]: I0310 19:12:22.830198 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"8e3cba03-6473-4f76-bc3c-64ed835fecab","Type":"ContainerStarted","Data":"2ad3ae152412eaddade63a4a108215a4334cfda857d8e8991a080ee3e511cb5b"} Mar 10 19:12:22 crc kubenswrapper[4861]: I0310 19:12:22.830427 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="8e3cba03-6473-4f76-bc3c-64ed835fecab" containerName="nova-metadata-metadata" containerID="cri-o://2ad3ae152412eaddade63a4a108215a4334cfda857d8e8991a080ee3e511cb5b" gracePeriod=30 Mar 10 
19:12:22 crc kubenswrapper[4861]: I0310 19:12:22.830403 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="8e3cba03-6473-4f76-bc3c-64ed835fecab" containerName="nova-metadata-log" containerID="cri-o://6979292a39dc86c248bf638a7d3664b7bd71126ac8ecdc4908c721728467e341" gracePeriod=30 Mar 10 19:12:22 crc kubenswrapper[4861]: I0310 19:12:22.870037 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=3.323680515 podStartE2EDuration="6.870009124s" podCreationTimestamp="2026-03-10 19:12:16 +0000 UTC" firstStartedPulling="2026-03-10 19:12:17.873637339 +0000 UTC m=+1481.637073299" lastFinishedPulling="2026-03-10 19:12:21.419965948 +0000 UTC m=+1485.183401908" observedRunningTime="2026-03-10 19:12:22.85185324 +0000 UTC m=+1486.615289310" watchObservedRunningTime="2026-03-10 19:12:22.870009124 +0000 UTC m=+1486.633445104" Mar 10 19:12:22 crc kubenswrapper[4861]: I0310 19:12:22.881612 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=3.523879997 podStartE2EDuration="6.881591076s" podCreationTimestamp="2026-03-10 19:12:16 +0000 UTC" firstStartedPulling="2026-03-10 19:12:18.063057891 +0000 UTC m=+1481.826493851" lastFinishedPulling="2026-03-10 19:12:21.42076897 +0000 UTC m=+1485.184204930" observedRunningTime="2026-03-10 19:12:22.875551728 +0000 UTC m=+1486.638987748" watchObservedRunningTime="2026-03-10 19:12:22.881591076 +0000 UTC m=+1486.645027046" Mar 10 19:12:23 crc kubenswrapper[4861]: I0310 19:12:23.847858 4861 generic.go:334] "Generic (PLEG): container finished" podID="8e3cba03-6473-4f76-bc3c-64ed835fecab" containerID="2ad3ae152412eaddade63a4a108215a4334cfda857d8e8991a080ee3e511cb5b" exitCode=0 Mar 10 19:12:23 crc kubenswrapper[4861]: I0310 19:12:23.848240 4861 generic.go:334] "Generic (PLEG): container finished" podID="8e3cba03-6473-4f76-bc3c-64ed835fecab" 
containerID="6979292a39dc86c248bf638a7d3664b7bd71126ac8ecdc4908c721728467e341" exitCode=143 Mar 10 19:12:23 crc kubenswrapper[4861]: I0310 19:12:23.848744 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"8e3cba03-6473-4f76-bc3c-64ed835fecab","Type":"ContainerDied","Data":"2ad3ae152412eaddade63a4a108215a4334cfda857d8e8991a080ee3e511cb5b"} Mar 10 19:12:23 crc kubenswrapper[4861]: I0310 19:12:23.848800 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"8e3cba03-6473-4f76-bc3c-64ed835fecab","Type":"ContainerDied","Data":"6979292a39dc86c248bf638a7d3664b7bd71126ac8ecdc4908c721728467e341"} Mar 10 19:12:24 crc kubenswrapper[4861]: I0310 19:12:24.121734 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Mar 10 19:12:24 crc kubenswrapper[4861]: I0310 19:12:24.170388 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8e3cba03-6473-4f76-bc3c-64ed835fecab-config-data\") pod \"8e3cba03-6473-4f76-bc3c-64ed835fecab\" (UID: \"8e3cba03-6473-4f76-bc3c-64ed835fecab\") " Mar 10 19:12:24 crc kubenswrapper[4861]: I0310 19:12:24.170472 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8jkh2\" (UniqueName: \"kubernetes.io/projected/8e3cba03-6473-4f76-bc3c-64ed835fecab-kube-api-access-8jkh2\") pod \"8e3cba03-6473-4f76-bc3c-64ed835fecab\" (UID: \"8e3cba03-6473-4f76-bc3c-64ed835fecab\") " Mar 10 19:12:24 crc kubenswrapper[4861]: I0310 19:12:24.170527 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8e3cba03-6473-4f76-bc3c-64ed835fecab-combined-ca-bundle\") pod \"8e3cba03-6473-4f76-bc3c-64ed835fecab\" (UID: \"8e3cba03-6473-4f76-bc3c-64ed835fecab\") " Mar 10 19:12:24 crc kubenswrapper[4861]: I0310 
19:12:24.170610 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8e3cba03-6473-4f76-bc3c-64ed835fecab-logs\") pod \"8e3cba03-6473-4f76-bc3c-64ed835fecab\" (UID: \"8e3cba03-6473-4f76-bc3c-64ed835fecab\") "
Mar 10 19:12:24 crc kubenswrapper[4861]: I0310 19:12:24.171776 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8e3cba03-6473-4f76-bc3c-64ed835fecab-logs" (OuterVolumeSpecName: "logs") pod "8e3cba03-6473-4f76-bc3c-64ed835fecab" (UID: "8e3cba03-6473-4f76-bc3c-64ed835fecab"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 10 19:12:24 crc kubenswrapper[4861]: I0310 19:12:24.189773 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8e3cba03-6473-4f76-bc3c-64ed835fecab-kube-api-access-8jkh2" (OuterVolumeSpecName: "kube-api-access-8jkh2") pod "8e3cba03-6473-4f76-bc3c-64ed835fecab" (UID: "8e3cba03-6473-4f76-bc3c-64ed835fecab"). InnerVolumeSpecName "kube-api-access-8jkh2". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 10 19:12:24 crc kubenswrapper[4861]: I0310 19:12:24.199520 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8e3cba03-6473-4f76-bc3c-64ed835fecab-config-data" (OuterVolumeSpecName: "config-data") pod "8e3cba03-6473-4f76-bc3c-64ed835fecab" (UID: "8e3cba03-6473-4f76-bc3c-64ed835fecab"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 10 19:12:24 crc kubenswrapper[4861]: I0310 19:12:24.205819 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8e3cba03-6473-4f76-bc3c-64ed835fecab-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8e3cba03-6473-4f76-bc3c-64ed835fecab" (UID: "8e3cba03-6473-4f76-bc3c-64ed835fecab"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 10 19:12:24 crc kubenswrapper[4861]: I0310 19:12:24.272897 4861 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8e3cba03-6473-4f76-bc3c-64ed835fecab-config-data\") on node \"crc\" DevicePath \"\""
Mar 10 19:12:24 crc kubenswrapper[4861]: I0310 19:12:24.272929 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8jkh2\" (UniqueName: \"kubernetes.io/projected/8e3cba03-6473-4f76-bc3c-64ed835fecab-kube-api-access-8jkh2\") on node \"crc\" DevicePath \"\""
Mar 10 19:12:24 crc kubenswrapper[4861]: I0310 19:12:24.272941 4861 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8e3cba03-6473-4f76-bc3c-64ed835fecab-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Mar 10 19:12:24 crc kubenswrapper[4861]: I0310 19:12:24.272952 4861 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8e3cba03-6473-4f76-bc3c-64ed835fecab-logs\") on node \"crc\" DevicePath \"\""
Mar 10 19:12:24 crc kubenswrapper[4861]: I0310 19:12:24.859016 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Mar 10 19:12:24 crc kubenswrapper[4861]: I0310 19:12:24.859029 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"8e3cba03-6473-4f76-bc3c-64ed835fecab","Type":"ContainerDied","Data":"27bd9c1101c9b8e0f92626d5ee42b0587e720daa66664aacafe391524dd3aef6"}
Mar 10 19:12:24 crc kubenswrapper[4861]: I0310 19:12:24.859802 4861 scope.go:117] "RemoveContainer" containerID="2ad3ae152412eaddade63a4a108215a4334cfda857d8e8991a080ee3e511cb5b"
Mar 10 19:12:24 crc kubenswrapper[4861]: I0310 19:12:24.863207 4861 generic.go:334] "Generic (PLEG): container finished" podID="5d48de82-400d-41d1-a054-f451486e0ff5" containerID="2ec7ad64a891a24b56291d63645fd2ca313e41e4fb9cb01ef30f750e11fe9c88" exitCode=0
Mar 10 19:12:24 crc kubenswrapper[4861]: I0310 19:12:24.863255 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-qnt8d" event={"ID":"5d48de82-400d-41d1-a054-f451486e0ff5","Type":"ContainerDied","Data":"2ec7ad64a891a24b56291d63645fd2ca313e41e4fb9cb01ef30f750e11fe9c88"}
Mar 10 19:12:24 crc kubenswrapper[4861]: I0310 19:12:24.902723 4861 scope.go:117] "RemoveContainer" containerID="6979292a39dc86c248bf638a7d3664b7bd71126ac8ecdc4908c721728467e341"
Mar 10 19:12:24 crc kubenswrapper[4861]: I0310 19:12:24.918895 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"]
Mar 10 19:12:24 crc kubenswrapper[4861]: I0310 19:12:24.929895 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"]
Mar 10 19:12:24 crc kubenswrapper[4861]: I0310 19:12:24.955110 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"]
Mar 10 19:12:24 crc kubenswrapper[4861]: E0310 19:12:24.955831 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e3cba03-6473-4f76-bc3c-64ed835fecab" containerName="nova-metadata-log"
Mar 10 19:12:24 crc kubenswrapper[4861]: I0310 19:12:24.955860 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e3cba03-6473-4f76-bc3c-64ed835fecab" containerName="nova-metadata-log"
Mar 10 19:12:24 crc kubenswrapper[4861]: E0310 19:12:24.955897 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e3cba03-6473-4f76-bc3c-64ed835fecab" containerName="nova-metadata-metadata"
Mar 10 19:12:24 crc kubenswrapper[4861]: I0310 19:12:24.955909 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e3cba03-6473-4f76-bc3c-64ed835fecab" containerName="nova-metadata-metadata"
Mar 10 19:12:24 crc kubenswrapper[4861]: I0310 19:12:24.956223 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="8e3cba03-6473-4f76-bc3c-64ed835fecab" containerName="nova-metadata-metadata"
Mar 10 19:12:24 crc kubenswrapper[4861]: I0310 19:12:24.956284 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="8e3cba03-6473-4f76-bc3c-64ed835fecab" containerName="nova-metadata-log"
Mar 10 19:12:24 crc kubenswrapper[4861]: I0310 19:12:24.958015 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Mar 10 19:12:24 crc kubenswrapper[4861]: I0310 19:12:24.960338 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc"
Mar 10 19:12:24 crc kubenswrapper[4861]: I0310 19:12:24.961112 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data"
Mar 10 19:12:24 crc kubenswrapper[4861]: I0310 19:12:24.975270 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8e3cba03-6473-4f76-bc3c-64ed835fecab" path="/var/lib/kubelet/pods/8e3cba03-6473-4f76-bc3c-64ed835fecab/volumes"
Mar 10 19:12:24 crc kubenswrapper[4861]: I0310 19:12:24.976110 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Mar 10 19:12:25 crc kubenswrapper[4861]: I0310 19:12:25.088048 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ffd067d1-6fba-489c-aa9c-7a224ca99deb-config-data\") pod \"nova-metadata-0\" (UID: \"ffd067d1-6fba-489c-aa9c-7a224ca99deb\") " pod="openstack/nova-metadata-0"
Mar 10 19:12:25 crc kubenswrapper[4861]: I0310 19:12:25.088195 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f2h2b\" (UniqueName: \"kubernetes.io/projected/ffd067d1-6fba-489c-aa9c-7a224ca99deb-kube-api-access-f2h2b\") pod \"nova-metadata-0\" (UID: \"ffd067d1-6fba-489c-aa9c-7a224ca99deb\") " pod="openstack/nova-metadata-0"
Mar 10 19:12:25 crc kubenswrapper[4861]: I0310 19:12:25.088244 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ffd067d1-6fba-489c-aa9c-7a224ca99deb-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"ffd067d1-6fba-489c-aa9c-7a224ca99deb\") " pod="openstack/nova-metadata-0"
Mar 10 19:12:25 crc kubenswrapper[4861]: I0310 19:12:25.088541 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/ffd067d1-6fba-489c-aa9c-7a224ca99deb-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"ffd067d1-6fba-489c-aa9c-7a224ca99deb\") " pod="openstack/nova-metadata-0"
Mar 10 19:12:25 crc kubenswrapper[4861]: I0310 19:12:25.088585 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ffd067d1-6fba-489c-aa9c-7a224ca99deb-logs\") pod \"nova-metadata-0\" (UID: \"ffd067d1-6fba-489c-aa9c-7a224ca99deb\") " pod="openstack/nova-metadata-0"
Mar 10 19:12:25 crc kubenswrapper[4861]: I0310 19:12:25.190118 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/ffd067d1-6fba-489c-aa9c-7a224ca99deb-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"ffd067d1-6fba-489c-aa9c-7a224ca99deb\") " pod="openstack/nova-metadata-0"
Mar 10 19:12:25 crc kubenswrapper[4861]: I0310 19:12:25.190157 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ffd067d1-6fba-489c-aa9c-7a224ca99deb-logs\") pod \"nova-metadata-0\" (UID: \"ffd067d1-6fba-489c-aa9c-7a224ca99deb\") " pod="openstack/nova-metadata-0"
Mar 10 19:12:25 crc kubenswrapper[4861]: I0310 19:12:25.190220 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ffd067d1-6fba-489c-aa9c-7a224ca99deb-config-data\") pod \"nova-metadata-0\" (UID: \"ffd067d1-6fba-489c-aa9c-7a224ca99deb\") " pod="openstack/nova-metadata-0"
Mar 10 19:12:25 crc kubenswrapper[4861]: I0310 19:12:25.190247 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f2h2b\" (UniqueName: \"kubernetes.io/projected/ffd067d1-6fba-489c-aa9c-7a224ca99deb-kube-api-access-f2h2b\") pod \"nova-metadata-0\" (UID: \"ffd067d1-6fba-489c-aa9c-7a224ca99deb\") " pod="openstack/nova-metadata-0"
Mar 10 19:12:25 crc kubenswrapper[4861]: I0310 19:12:25.190268 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ffd067d1-6fba-489c-aa9c-7a224ca99deb-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"ffd067d1-6fba-489c-aa9c-7a224ca99deb\") " pod="openstack/nova-metadata-0"
Mar 10 19:12:25 crc kubenswrapper[4861]: I0310 19:12:25.190698 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ffd067d1-6fba-489c-aa9c-7a224ca99deb-logs\") pod \"nova-metadata-0\" (UID: \"ffd067d1-6fba-489c-aa9c-7a224ca99deb\") " pod="openstack/nova-metadata-0"
Mar 10 19:12:25 crc kubenswrapper[4861]: I0310 19:12:25.196237 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ffd067d1-6fba-489c-aa9c-7a224ca99deb-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"ffd067d1-6fba-489c-aa9c-7a224ca99deb\") " pod="openstack/nova-metadata-0"
Mar 10 19:12:25 crc kubenswrapper[4861]: I0310 19:12:25.196542 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ffd067d1-6fba-489c-aa9c-7a224ca99deb-config-data\") pod \"nova-metadata-0\" (UID: \"ffd067d1-6fba-489c-aa9c-7a224ca99deb\") " pod="openstack/nova-metadata-0"
Mar 10 19:12:25 crc kubenswrapper[4861]: I0310 19:12:25.197310 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/ffd067d1-6fba-489c-aa9c-7a224ca99deb-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"ffd067d1-6fba-489c-aa9c-7a224ca99deb\") " pod="openstack/nova-metadata-0"
Mar 10 19:12:25 crc kubenswrapper[4861]: I0310 19:12:25.219591 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f2h2b\" (UniqueName: \"kubernetes.io/projected/ffd067d1-6fba-489c-aa9c-7a224ca99deb-kube-api-access-f2h2b\") pod \"nova-metadata-0\" (UID: \"ffd067d1-6fba-489c-aa9c-7a224ca99deb\") " pod="openstack/nova-metadata-0"
Mar 10 19:12:25 crc kubenswrapper[4861]: I0310 19:12:25.317145 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Mar 10 19:12:25 crc kubenswrapper[4861]: I0310 19:12:25.802537 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Mar 10 19:12:25 crc kubenswrapper[4861]: I0310 19:12:25.879460 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"ffd067d1-6fba-489c-aa9c-7a224ca99deb","Type":"ContainerStarted","Data":"4c462a410f7f0e49e093203db304ecd134a7c34adf17e20c868bc958ed87054b"}
Mar 10 19:12:26 crc kubenswrapper[4861]: I0310 19:12:26.207189 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-qnt8d"
Mar 10 19:12:26 crc kubenswrapper[4861]: I0310 19:12:26.311670 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5d48de82-400d-41d1-a054-f451486e0ff5-scripts\") pod \"5d48de82-400d-41d1-a054-f451486e0ff5\" (UID: \"5d48de82-400d-41d1-a054-f451486e0ff5\") "
Mar 10 19:12:26 crc kubenswrapper[4861]: I0310 19:12:26.311727 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cspw8\" (UniqueName: \"kubernetes.io/projected/5d48de82-400d-41d1-a054-f451486e0ff5-kube-api-access-cspw8\") pod \"5d48de82-400d-41d1-a054-f451486e0ff5\" (UID: \"5d48de82-400d-41d1-a054-f451486e0ff5\") "
Mar 10 19:12:26 crc kubenswrapper[4861]: I0310 19:12:26.311774 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5d48de82-400d-41d1-a054-f451486e0ff5-config-data\") pod \"5d48de82-400d-41d1-a054-f451486e0ff5\" (UID: \"5d48de82-400d-41d1-a054-f451486e0ff5\") "
Mar 10 19:12:26 crc kubenswrapper[4861]: I0310 19:12:26.311816 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5d48de82-400d-41d1-a054-f451486e0ff5-combined-ca-bundle\") pod \"5d48de82-400d-41d1-a054-f451486e0ff5\" (UID: \"5d48de82-400d-41d1-a054-f451486e0ff5\") "
Mar 10 19:12:26 crc kubenswrapper[4861]: I0310 19:12:26.316681 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5d48de82-400d-41d1-a054-f451486e0ff5-kube-api-access-cspw8" (OuterVolumeSpecName: "kube-api-access-cspw8") pod "5d48de82-400d-41d1-a054-f451486e0ff5" (UID: "5d48de82-400d-41d1-a054-f451486e0ff5"). InnerVolumeSpecName "kube-api-access-cspw8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 10 19:12:26 crc kubenswrapper[4861]: I0310 19:12:26.318299 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5d48de82-400d-41d1-a054-f451486e0ff5-scripts" (OuterVolumeSpecName: "scripts") pod "5d48de82-400d-41d1-a054-f451486e0ff5" (UID: "5d48de82-400d-41d1-a054-f451486e0ff5"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 10 19:12:26 crc kubenswrapper[4861]: I0310 19:12:26.338823 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5d48de82-400d-41d1-a054-f451486e0ff5-config-data" (OuterVolumeSpecName: "config-data") pod "5d48de82-400d-41d1-a054-f451486e0ff5" (UID: "5d48de82-400d-41d1-a054-f451486e0ff5"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 10 19:12:26 crc kubenswrapper[4861]: I0310 19:12:26.342631 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5d48de82-400d-41d1-a054-f451486e0ff5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5d48de82-400d-41d1-a054-f451486e0ff5" (UID: "5d48de82-400d-41d1-a054-f451486e0ff5"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 10 19:12:26 crc kubenswrapper[4861]: I0310 19:12:26.414351 4861 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5d48de82-400d-41d1-a054-f451486e0ff5-config-data\") on node \"crc\" DevicePath \"\""
Mar 10 19:12:26 crc kubenswrapper[4861]: I0310 19:12:26.414387 4861 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5d48de82-400d-41d1-a054-f451486e0ff5-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Mar 10 19:12:26 crc kubenswrapper[4861]: I0310 19:12:26.414401 4861 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5d48de82-400d-41d1-a054-f451486e0ff5-scripts\") on node \"crc\" DevicePath \"\""
Mar 10 19:12:26 crc kubenswrapper[4861]: I0310 19:12:26.414413 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cspw8\" (UniqueName: \"kubernetes.io/projected/5d48de82-400d-41d1-a054-f451486e0ff5-kube-api-access-cspw8\") on node \"crc\" DevicePath \"\""
Mar 10 19:12:26 crc kubenswrapper[4861]: I0310 19:12:26.896047 4861 generic.go:334] "Generic (PLEG): container finished" podID="1626ff9d-fded-4fbc-aa2c-f6f984bcf4ea" containerID="83605c45bb5cb59c7e5876bd4f566208ed4c6a08abfc7535626e8b49a51feb43" exitCode=0
Mar 10 19:12:26 crc kubenswrapper[4861]: I0310 19:12:26.896156 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-cdqvp" event={"ID":"1626ff9d-fded-4fbc-aa2c-f6f984bcf4ea","Type":"ContainerDied","Data":"83605c45bb5cb59c7e5876bd4f566208ed4c6a08abfc7535626e8b49a51feb43"}
Mar 10 19:12:26 crc kubenswrapper[4861]: I0310 19:12:26.898893 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"ffd067d1-6fba-489c-aa9c-7a224ca99deb","Type":"ContainerStarted","Data":"c5a55607ca787276513fa440b5f1bf875b2bbfb496f55f869caee7f339487bd1"}
Mar 10 19:12:26 crc kubenswrapper[4861]: I0310 19:12:26.898941 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"ffd067d1-6fba-489c-aa9c-7a224ca99deb","Type":"ContainerStarted","Data":"6c8cb06a57438b1f116a207056d9f8242e58a121d97216df52945cc2fa267c49"}
Mar 10 19:12:26 crc kubenswrapper[4861]: I0310 19:12:26.901000 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-qnt8d" event={"ID":"5d48de82-400d-41d1-a054-f451486e0ff5","Type":"ContainerDied","Data":"2c8165a2988a3adbb3ca6e36e6e70fb2c90a82e875e2c232323e1e99c09e65f8"}
Mar 10 19:12:26 crc kubenswrapper[4861]: I0310 19:12:26.901035 4861 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2c8165a2988a3adbb3ca6e36e6e70fb2c90a82e875e2c232323e1e99c09e65f8"
Mar 10 19:12:26 crc kubenswrapper[4861]: I0310 19:12:26.901098 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-qnt8d"
Mar 10 19:12:26 crc kubenswrapper[4861]: I0310 19:12:26.989668 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.989646167 podStartE2EDuration="2.989646167s" podCreationTimestamp="2026-03-10 19:12:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-10 19:12:26.950209702 +0000 UTC m=+1490.713645682" watchObservedRunningTime="2026-03-10 19:12:26.989646167 +0000 UTC m=+1490.753082137"
Mar 10 19:12:27 crc kubenswrapper[4861]: I0310 19:12:27.120328 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0"
Mar 10 19:12:27 crc kubenswrapper[4861]: I0310 19:12:27.120409 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0"
Mar 10 19:12:27 crc kubenswrapper[4861]: I0310 19:12:27.187437 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"]
Mar 10 19:12:27 crc kubenswrapper[4861]: I0310 19:12:27.207179 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"]
Mar 10 19:12:27 crc kubenswrapper[4861]: I0310 19:12:27.207461 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="b29d5d26-c7f5-4556-9535-7743f991423a" containerName="nova-scheduler-scheduler" containerID="cri-o://8ed9540fb7fd8e9e0daaad510061ddbf38d590a7b63a7e35e847f4b947d7b36c" gracePeriod=30
Mar 10 19:12:27 crc kubenswrapper[4861]: I0310 19:12:27.238874 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"]
Mar 10 19:12:27 crc kubenswrapper[4861]: I0310 19:12:27.546860 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7bd5679c8c-84fjp"
Mar 10 19:12:27 crc kubenswrapper[4861]: I0310 19:12:27.631491 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7b8fcc65cc-66b8w"]
Mar 10 19:12:27 crc kubenswrapper[4861]: I0310 19:12:27.631832 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7b8fcc65cc-66b8w" podUID="1839f77d-af3b-46f9-87f9-3fb81e3daa90" containerName="dnsmasq-dns" containerID="cri-o://859b5cde1af48351f056dde64d7287f25a5a5969897f2387b86f15568becd4b9" gracePeriod=10
Mar 10 19:12:27 crc kubenswrapper[4861]: I0310 19:12:27.910287 4861 generic.go:334] "Generic (PLEG): container finished" podID="1839f77d-af3b-46f9-87f9-3fb81e3daa90" containerID="859b5cde1af48351f056dde64d7287f25a5a5969897f2387b86f15568becd4b9" exitCode=0
Mar 10 19:12:27 crc kubenswrapper[4861]: I0310 19:12:27.911487 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7b8fcc65cc-66b8w" event={"ID":"1839f77d-af3b-46f9-87f9-3fb81e3daa90","Type":"ContainerDied","Data":"859b5cde1af48351f056dde64d7287f25a5a5969897f2387b86f15568becd4b9"}
Mar 10 19:12:27 crc kubenswrapper[4861]: I0310 19:12:27.911543 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="b44343a9-b5bd-4f04-b33f-73abd4d4a553" containerName="nova-api-log" containerID="cri-o://cda5ad93902931cbb82f5e593d9e1d7562e143a8413c472720712f7246bf4566" gracePeriod=30
Mar 10 19:12:27 crc kubenswrapper[4861]: I0310 19:12:27.911673 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="b44343a9-b5bd-4f04-b33f-73abd4d4a553" containerName="nova-api-api" containerID="cri-o://1cebaff482369d0358e1d6769ac8bb22633b34cabd56c9ba929456e0782b5555" gracePeriod=30
Mar 10 19:12:27 crc kubenswrapper[4861]: I0310 19:12:27.933758 4861 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="b44343a9-b5bd-4f04-b33f-73abd4d4a553" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.195:8774/\": EOF"
Mar 10 19:12:27 crc kubenswrapper[4861]: I0310 19:12:27.934135 4861 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="b44343a9-b5bd-4f04-b33f-73abd4d4a553" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.195:8774/\": EOF"
Mar 10 19:12:28 crc kubenswrapper[4861]: I0310 19:12:28.165228 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7b8fcc65cc-66b8w"
Mar 10 19:12:28 crc kubenswrapper[4861]: I0310 19:12:28.253995 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1839f77d-af3b-46f9-87f9-3fb81e3daa90-ovsdbserver-sb\") pod \"1839f77d-af3b-46f9-87f9-3fb81e3daa90\" (UID: \"1839f77d-af3b-46f9-87f9-3fb81e3daa90\") "
Mar 10 19:12:28 crc kubenswrapper[4861]: I0310 19:12:28.254103 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1839f77d-af3b-46f9-87f9-3fb81e3daa90-ovsdbserver-nb\") pod \"1839f77d-af3b-46f9-87f9-3fb81e3daa90\" (UID: \"1839f77d-af3b-46f9-87f9-3fb81e3daa90\") "
Mar 10 19:12:28 crc kubenswrapper[4861]: I0310 19:12:28.254179 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1839f77d-af3b-46f9-87f9-3fb81e3daa90-config\") pod \"1839f77d-af3b-46f9-87f9-3fb81e3daa90\" (UID: \"1839f77d-af3b-46f9-87f9-3fb81e3daa90\") "
Mar 10 19:12:28 crc kubenswrapper[4861]: I0310 19:12:28.254270 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5jq9r\" (UniqueName: \"kubernetes.io/projected/1839f77d-af3b-46f9-87f9-3fb81e3daa90-kube-api-access-5jq9r\") pod \"1839f77d-af3b-46f9-87f9-3fb81e3daa90\" (UID: \"1839f77d-af3b-46f9-87f9-3fb81e3daa90\") "
Mar 10 19:12:28 crc kubenswrapper[4861]: I0310 19:12:28.254460 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1839f77d-af3b-46f9-87f9-3fb81e3daa90-dns-svc\") pod \"1839f77d-af3b-46f9-87f9-3fb81e3daa90\" (UID: \"1839f77d-af3b-46f9-87f9-3fb81e3daa90\") "
Mar 10 19:12:28 crc kubenswrapper[4861]: I0310 19:12:28.254489 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1839f77d-af3b-46f9-87f9-3fb81e3daa90-dns-swift-storage-0\") pod \"1839f77d-af3b-46f9-87f9-3fb81e3daa90\" (UID: \"1839f77d-af3b-46f9-87f9-3fb81e3daa90\") "
Mar 10 19:12:28 crc kubenswrapper[4861]: I0310 19:12:28.280833 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1839f77d-af3b-46f9-87f9-3fb81e3daa90-kube-api-access-5jq9r" (OuterVolumeSpecName: "kube-api-access-5jq9r") pod "1839f77d-af3b-46f9-87f9-3fb81e3daa90" (UID: "1839f77d-af3b-46f9-87f9-3fb81e3daa90"). InnerVolumeSpecName "kube-api-access-5jq9r". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 10 19:12:28 crc kubenswrapper[4861]: I0310 19:12:28.307823 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1839f77d-af3b-46f9-87f9-3fb81e3daa90-config" (OuterVolumeSpecName: "config") pod "1839f77d-af3b-46f9-87f9-3fb81e3daa90" (UID: "1839f77d-af3b-46f9-87f9-3fb81e3daa90"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 10 19:12:28 crc kubenswrapper[4861]: I0310 19:12:28.313355 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1839f77d-af3b-46f9-87f9-3fb81e3daa90-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "1839f77d-af3b-46f9-87f9-3fb81e3daa90" (UID: "1839f77d-af3b-46f9-87f9-3fb81e3daa90"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 10 19:12:28 crc kubenswrapper[4861]: I0310 19:12:28.317992 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1839f77d-af3b-46f9-87f9-3fb81e3daa90-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "1839f77d-af3b-46f9-87f9-3fb81e3daa90" (UID: "1839f77d-af3b-46f9-87f9-3fb81e3daa90"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 10 19:12:28 crc kubenswrapper[4861]: I0310 19:12:28.331419 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1839f77d-af3b-46f9-87f9-3fb81e3daa90-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "1839f77d-af3b-46f9-87f9-3fb81e3daa90" (UID: "1839f77d-af3b-46f9-87f9-3fb81e3daa90"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 10 19:12:28 crc kubenswrapper[4861]: I0310 19:12:28.335127 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1839f77d-af3b-46f9-87f9-3fb81e3daa90-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "1839f77d-af3b-46f9-87f9-3fb81e3daa90" (UID: "1839f77d-af3b-46f9-87f9-3fb81e3daa90"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 10 19:12:28 crc kubenswrapper[4861]: I0310 19:12:28.357468 4861 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1839f77d-af3b-46f9-87f9-3fb81e3daa90-dns-svc\") on node \"crc\" DevicePath \"\""
Mar 10 19:12:28 crc kubenswrapper[4861]: I0310 19:12:28.357501 4861 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1839f77d-af3b-46f9-87f9-3fb81e3daa90-dns-swift-storage-0\") on node \"crc\" DevicePath \"\""
Mar 10 19:12:28 crc kubenswrapper[4861]: I0310 19:12:28.357516 4861 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1839f77d-af3b-46f9-87f9-3fb81e3daa90-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Mar 10 19:12:28 crc kubenswrapper[4861]: I0310 19:12:28.357528 4861 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1839f77d-af3b-46f9-87f9-3fb81e3daa90-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Mar 10 19:12:28 crc kubenswrapper[4861]: I0310 19:12:28.357540 4861 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1839f77d-af3b-46f9-87f9-3fb81e3daa90-config\") on node \"crc\" DevicePath \"\""
Mar 10 19:12:28 crc kubenswrapper[4861]: I0310 19:12:28.357552 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5jq9r\" (UniqueName: \"kubernetes.io/projected/1839f77d-af3b-46f9-87f9-3fb81e3daa90-kube-api-access-5jq9r\") on node \"crc\" DevicePath \"\""
Mar 10 19:12:28 crc kubenswrapper[4861]: I0310 19:12:28.919367 4861 generic.go:334] "Generic (PLEG): container finished" podID="b29d5d26-c7f5-4556-9535-7743f991423a" containerID="8ed9540fb7fd8e9e0daaad510061ddbf38d590a7b63a7e35e847f4b947d7b36c" exitCode=0
Mar 10 19:12:28 crc kubenswrapper[4861]: I0310 19:12:28.919738 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"b29d5d26-c7f5-4556-9535-7743f991423a","Type":"ContainerDied","Data":"8ed9540fb7fd8e9e0daaad510061ddbf38d590a7b63a7e35e847f4b947d7b36c"}
Mar 10 19:12:28 crc kubenswrapper[4861]: I0310 19:12:28.922370 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7b8fcc65cc-66b8w"
Mar 10 19:12:28 crc kubenswrapper[4861]: I0310 19:12:28.924095 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7b8fcc65cc-66b8w" event={"ID":"1839f77d-af3b-46f9-87f9-3fb81e3daa90","Type":"ContainerDied","Data":"a02e8a5252df2dd6178d8c517abf7cb8bdba078aa67279a1143b2748a4f97882"}
Mar 10 19:12:28 crc kubenswrapper[4861]: I0310 19:12:28.924122 4861 scope.go:117] "RemoveContainer" containerID="859b5cde1af48351f056dde64d7287f25a5a5969897f2387b86f15568becd4b9"
Mar 10 19:12:28 crc kubenswrapper[4861]: I0310 19:12:28.935366 4861 generic.go:334] "Generic (PLEG): container finished" podID="b44343a9-b5bd-4f04-b33f-73abd4d4a553" containerID="cda5ad93902931cbb82f5e593d9e1d7562e143a8413c472720712f7246bf4566" exitCode=143
Mar 10 19:12:28 crc kubenswrapper[4861]: I0310 19:12:28.935702 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="ffd067d1-6fba-489c-aa9c-7a224ca99deb" containerName="nova-metadata-log" containerID="cri-o://6c8cb06a57438b1f116a207056d9f8242e58a121d97216df52945cc2fa267c49" gracePeriod=30
Mar 10 19:12:28 crc kubenswrapper[4861]: I0310 19:12:28.936125 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"b44343a9-b5bd-4f04-b33f-73abd4d4a553","Type":"ContainerDied","Data":"cda5ad93902931cbb82f5e593d9e1d7562e143a8413c472720712f7246bf4566"}
Mar 10 19:12:28 crc kubenswrapper[4861]: I0310 19:12:28.936574 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="ffd067d1-6fba-489c-aa9c-7a224ca99deb" containerName="nova-metadata-metadata" containerID="cri-o://c5a55607ca787276513fa440b5f1bf875b2bbfb496f55f869caee7f339487bd1" gracePeriod=30
Mar 10 19:12:28 crc kubenswrapper[4861]: I0310 19:12:28.973576 4861 scope.go:117] "RemoveContainer" containerID="23a622a31ff26325c70d4dd50250f161ca20a59ae12c3c4588e78f463eace263"
Mar 10 19:12:28 crc kubenswrapper[4861]: I0310 19:12:28.979544 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7b8fcc65cc-66b8w"]
Mar 10 19:12:28 crc kubenswrapper[4861]: I0310 19:12:28.987446 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7b8fcc65cc-66b8w"]
Mar 10 19:12:29 crc kubenswrapper[4861]: I0310 19:12:29.198189 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-cdqvp"
Mar 10 19:12:29 crc kubenswrapper[4861]: I0310 19:12:29.237170 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Mar 10 19:12:29 crc kubenswrapper[4861]: I0310 19:12:29.293619 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b29d5d26-c7f5-4556-9535-7743f991423a-config-data\") pod \"b29d5d26-c7f5-4556-9535-7743f991423a\" (UID: \"b29d5d26-c7f5-4556-9535-7743f991423a\") "
Mar 10 19:12:29 crc kubenswrapper[4861]: I0310 19:12:29.293741 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b29d5d26-c7f5-4556-9535-7743f991423a-combined-ca-bundle\") pod \"b29d5d26-c7f5-4556-9535-7743f991423a\" (UID: \"b29d5d26-c7f5-4556-9535-7743f991423a\") "
Mar 10 19:12:29 crc kubenswrapper[4861]: I0310 19:12:29.293784 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1626ff9d-fded-4fbc-aa2c-f6f984bcf4ea-scripts\") pod \"1626ff9d-fded-4fbc-aa2c-f6f984bcf4ea\" (UID: \"1626ff9d-fded-4fbc-aa2c-f6f984bcf4ea\") "
Mar 10 19:12:29 crc kubenswrapper[4861]: I0310 19:12:29.293856 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1626ff9d-fded-4fbc-aa2c-f6f984bcf4ea-combined-ca-bundle\") pod \"1626ff9d-fded-4fbc-aa2c-f6f984bcf4ea\" (UID: \"1626ff9d-fded-4fbc-aa2c-f6f984bcf4ea\") "
Mar 10 19:12:29 crc kubenswrapper[4861]: I0310 19:12:29.293906 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6c9fh\" (UniqueName: \"kubernetes.io/projected/1626ff9d-fded-4fbc-aa2c-f6f984bcf4ea-kube-api-access-6c9fh\") pod \"1626ff9d-fded-4fbc-aa2c-f6f984bcf4ea\" (UID: \"1626ff9d-fded-4fbc-aa2c-f6f984bcf4ea\") "
Mar 10 19:12:29 crc kubenswrapper[4861]: I0310 19:12:29.294057 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fhc8p\" (UniqueName: \"kubernetes.io/projected/b29d5d26-c7f5-4556-9535-7743f991423a-kube-api-access-fhc8p\") pod \"b29d5d26-c7f5-4556-9535-7743f991423a\" (UID: \"b29d5d26-c7f5-4556-9535-7743f991423a\") "
Mar 10 19:12:29 crc kubenswrapper[4861]: I0310 19:12:29.294133 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1626ff9d-fded-4fbc-aa2c-f6f984bcf4ea-config-data\") pod \"1626ff9d-fded-4fbc-aa2c-f6f984bcf4ea\" (UID: \"1626ff9d-fded-4fbc-aa2c-f6f984bcf4ea\") "
Mar 10 19:12:29 crc kubenswrapper[4861]: I0310 19:12:29.299934 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1626ff9d-fded-4fbc-aa2c-f6f984bcf4ea-scripts" (OuterVolumeSpecName: "scripts") pod "1626ff9d-fded-4fbc-aa2c-f6f984bcf4ea" (UID: "1626ff9d-fded-4fbc-aa2c-f6f984bcf4ea"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 10 19:12:29 crc kubenswrapper[4861]: I0310 19:12:29.302131 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b29d5d26-c7f5-4556-9535-7743f991423a-kube-api-access-fhc8p" (OuterVolumeSpecName: "kube-api-access-fhc8p") pod "b29d5d26-c7f5-4556-9535-7743f991423a" (UID: "b29d5d26-c7f5-4556-9535-7743f991423a"). InnerVolumeSpecName "kube-api-access-fhc8p". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 10 19:12:29 crc kubenswrapper[4861]: I0310 19:12:29.302692 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1626ff9d-fded-4fbc-aa2c-f6f984bcf4ea-kube-api-access-6c9fh" (OuterVolumeSpecName: "kube-api-access-6c9fh") pod "1626ff9d-fded-4fbc-aa2c-f6f984bcf4ea" (UID: "1626ff9d-fded-4fbc-aa2c-f6f984bcf4ea"). InnerVolumeSpecName "kube-api-access-6c9fh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 10 19:12:29 crc kubenswrapper[4861]: I0310 19:12:29.323312 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b29d5d26-c7f5-4556-9535-7743f991423a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b29d5d26-c7f5-4556-9535-7743f991423a" (UID: "b29d5d26-c7f5-4556-9535-7743f991423a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 10 19:12:29 crc kubenswrapper[4861]: I0310 19:12:29.337545 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1626ff9d-fded-4fbc-aa2c-f6f984bcf4ea-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1626ff9d-fded-4fbc-aa2c-f6f984bcf4ea" (UID: "1626ff9d-fded-4fbc-aa2c-f6f984bcf4ea"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 10 19:12:29 crc kubenswrapper[4861]: I0310 19:12:29.337893 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1626ff9d-fded-4fbc-aa2c-f6f984bcf4ea-config-data" (OuterVolumeSpecName: "config-data") pod "1626ff9d-fded-4fbc-aa2c-f6f984bcf4ea" (UID: "1626ff9d-fded-4fbc-aa2c-f6f984bcf4ea"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 10 19:12:29 crc kubenswrapper[4861]: I0310 19:12:29.341168 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Mar 10 19:12:29 crc kubenswrapper[4861]: I0310 19:12:29.342305 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b29d5d26-c7f5-4556-9535-7743f991423a-config-data" (OuterVolumeSpecName: "config-data") pod "b29d5d26-c7f5-4556-9535-7743f991423a" (UID: "b29d5d26-c7f5-4556-9535-7743f991423a"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 10 19:12:29 crc kubenswrapper[4861]: I0310 19:12:29.395212 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/ffd067d1-6fba-489c-aa9c-7a224ca99deb-nova-metadata-tls-certs\") pod \"ffd067d1-6fba-489c-aa9c-7a224ca99deb\" (UID: \"ffd067d1-6fba-489c-aa9c-7a224ca99deb\") " Mar 10 19:12:29 crc kubenswrapper[4861]: I0310 19:12:29.395552 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ffd067d1-6fba-489c-aa9c-7a224ca99deb-config-data\") pod \"ffd067d1-6fba-489c-aa9c-7a224ca99deb\" (UID: \"ffd067d1-6fba-489c-aa9c-7a224ca99deb\") " Mar 10 19:12:29 crc kubenswrapper[4861]: I0310 19:12:29.395588 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ffd067d1-6fba-489c-aa9c-7a224ca99deb-combined-ca-bundle\") pod \"ffd067d1-6fba-489c-aa9c-7a224ca99deb\" (UID: \"ffd067d1-6fba-489c-aa9c-7a224ca99deb\") " Mar 10 19:12:29 crc kubenswrapper[4861]: I0310 19:12:29.395606 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f2h2b\" (UniqueName: \"kubernetes.io/projected/ffd067d1-6fba-489c-aa9c-7a224ca99deb-kube-api-access-f2h2b\") pod \"ffd067d1-6fba-489c-aa9c-7a224ca99deb\" (UID: \"ffd067d1-6fba-489c-aa9c-7a224ca99deb\") " Mar 10 19:12:29 crc kubenswrapper[4861]: I0310 19:12:29.395641 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ffd067d1-6fba-489c-aa9c-7a224ca99deb-logs\") pod \"ffd067d1-6fba-489c-aa9c-7a224ca99deb\" (UID: \"ffd067d1-6fba-489c-aa9c-7a224ca99deb\") " Mar 10 19:12:29 crc kubenswrapper[4861]: I0310 19:12:29.396042 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fhc8p\" (UniqueName: 
\"kubernetes.io/projected/b29d5d26-c7f5-4556-9535-7743f991423a-kube-api-access-fhc8p\") on node \"crc\" DevicePath \"\"" Mar 10 19:12:29 crc kubenswrapper[4861]: I0310 19:12:29.396054 4861 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1626ff9d-fded-4fbc-aa2c-f6f984bcf4ea-config-data\") on node \"crc\" DevicePath \"\"" Mar 10 19:12:29 crc kubenswrapper[4861]: I0310 19:12:29.396063 4861 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b29d5d26-c7f5-4556-9535-7743f991423a-config-data\") on node \"crc\" DevicePath \"\"" Mar 10 19:12:29 crc kubenswrapper[4861]: I0310 19:12:29.396071 4861 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b29d5d26-c7f5-4556-9535-7743f991423a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 10 19:12:29 crc kubenswrapper[4861]: I0310 19:12:29.396080 4861 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1626ff9d-fded-4fbc-aa2c-f6f984bcf4ea-scripts\") on node \"crc\" DevicePath \"\"" Mar 10 19:12:29 crc kubenswrapper[4861]: I0310 19:12:29.396090 4861 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1626ff9d-fded-4fbc-aa2c-f6f984bcf4ea-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 10 19:12:29 crc kubenswrapper[4861]: I0310 19:12:29.396098 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6c9fh\" (UniqueName: \"kubernetes.io/projected/1626ff9d-fded-4fbc-aa2c-f6f984bcf4ea-kube-api-access-6c9fh\") on node \"crc\" DevicePath \"\"" Mar 10 19:12:29 crc kubenswrapper[4861]: I0310 19:12:29.396317 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ffd067d1-6fba-489c-aa9c-7a224ca99deb-logs" (OuterVolumeSpecName: "logs") pod "ffd067d1-6fba-489c-aa9c-7a224ca99deb" 
(UID: "ffd067d1-6fba-489c-aa9c-7a224ca99deb"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 10 19:12:29 crc kubenswrapper[4861]: I0310 19:12:29.398826 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ffd067d1-6fba-489c-aa9c-7a224ca99deb-kube-api-access-f2h2b" (OuterVolumeSpecName: "kube-api-access-f2h2b") pod "ffd067d1-6fba-489c-aa9c-7a224ca99deb" (UID: "ffd067d1-6fba-489c-aa9c-7a224ca99deb"). InnerVolumeSpecName "kube-api-access-f2h2b". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 10 19:12:29 crc kubenswrapper[4861]: I0310 19:12:29.427793 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ffd067d1-6fba-489c-aa9c-7a224ca99deb-config-data" (OuterVolumeSpecName: "config-data") pod "ffd067d1-6fba-489c-aa9c-7a224ca99deb" (UID: "ffd067d1-6fba-489c-aa9c-7a224ca99deb"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 10 19:12:29 crc kubenswrapper[4861]: I0310 19:12:29.441942 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ffd067d1-6fba-489c-aa9c-7a224ca99deb-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ffd067d1-6fba-489c-aa9c-7a224ca99deb" (UID: "ffd067d1-6fba-489c-aa9c-7a224ca99deb"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 10 19:12:29 crc kubenswrapper[4861]: I0310 19:12:29.442387 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ffd067d1-6fba-489c-aa9c-7a224ca99deb-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "ffd067d1-6fba-489c-aa9c-7a224ca99deb" (UID: "ffd067d1-6fba-489c-aa9c-7a224ca99deb"). InnerVolumeSpecName "nova-metadata-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 10 19:12:29 crc kubenswrapper[4861]: I0310 19:12:29.498132 4861 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ffd067d1-6fba-489c-aa9c-7a224ca99deb-config-data\") on node \"crc\" DevicePath \"\"" Mar 10 19:12:29 crc kubenswrapper[4861]: I0310 19:12:29.498167 4861 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ffd067d1-6fba-489c-aa9c-7a224ca99deb-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 10 19:12:29 crc kubenswrapper[4861]: I0310 19:12:29.498183 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f2h2b\" (UniqueName: \"kubernetes.io/projected/ffd067d1-6fba-489c-aa9c-7a224ca99deb-kube-api-access-f2h2b\") on node \"crc\" DevicePath \"\"" Mar 10 19:12:29 crc kubenswrapper[4861]: I0310 19:12:29.498195 4861 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ffd067d1-6fba-489c-aa9c-7a224ca99deb-logs\") on node \"crc\" DevicePath \"\"" Mar 10 19:12:29 crc kubenswrapper[4861]: I0310 19:12:29.498207 4861 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/ffd067d1-6fba-489c-aa9c-7a224ca99deb-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 10 19:12:29 crc kubenswrapper[4861]: I0310 19:12:29.948400 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-cdqvp" event={"ID":"1626ff9d-fded-4fbc-aa2c-f6f984bcf4ea","Type":"ContainerDied","Data":"3f5abbe4b49540fcdbead0d0d1c28dca48b23bbc655d98d494d7ccd0f95c8732"} Mar 10 19:12:29 crc kubenswrapper[4861]: I0310 19:12:29.948791 4861 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3f5abbe4b49540fcdbead0d0d1c28dca48b23bbc655d98d494d7ccd0f95c8732" Mar 10 19:12:29 crc kubenswrapper[4861]: I0310 19:12:29.948450 
4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-cdqvp" Mar 10 19:12:29 crc kubenswrapper[4861]: I0310 19:12:29.951400 4861 generic.go:334] "Generic (PLEG): container finished" podID="ffd067d1-6fba-489c-aa9c-7a224ca99deb" containerID="c5a55607ca787276513fa440b5f1bf875b2bbfb496f55f869caee7f339487bd1" exitCode=0 Mar 10 19:12:29 crc kubenswrapper[4861]: I0310 19:12:29.951558 4861 generic.go:334] "Generic (PLEG): container finished" podID="ffd067d1-6fba-489c-aa9c-7a224ca99deb" containerID="6c8cb06a57438b1f116a207056d9f8242e58a121d97216df52945cc2fa267c49" exitCode=143 Mar 10 19:12:29 crc kubenswrapper[4861]: I0310 19:12:29.951541 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Mar 10 19:12:29 crc kubenswrapper[4861]: I0310 19:12:29.951531 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"ffd067d1-6fba-489c-aa9c-7a224ca99deb","Type":"ContainerDied","Data":"c5a55607ca787276513fa440b5f1bf875b2bbfb496f55f869caee7f339487bd1"} Mar 10 19:12:29 crc kubenswrapper[4861]: I0310 19:12:29.952343 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"ffd067d1-6fba-489c-aa9c-7a224ca99deb","Type":"ContainerDied","Data":"6c8cb06a57438b1f116a207056d9f8242e58a121d97216df52945cc2fa267c49"} Mar 10 19:12:29 crc kubenswrapper[4861]: I0310 19:12:29.952376 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"ffd067d1-6fba-489c-aa9c-7a224ca99deb","Type":"ContainerDied","Data":"4c462a410f7f0e49e093203db304ecd134a7c34adf17e20c868bc958ed87054b"} Mar 10 19:12:29 crc kubenswrapper[4861]: I0310 19:12:29.952482 4861 scope.go:117] "RemoveContainer" containerID="c5a55607ca787276513fa440b5f1bf875b2bbfb496f55f869caee7f339487bd1" Mar 10 19:12:29 crc kubenswrapper[4861]: I0310 19:12:29.954277 4861 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"b29d5d26-c7f5-4556-9535-7743f991423a","Type":"ContainerDied","Data":"936883600dfcbdf66251f9c74260a955d3d82ae6b1f3d1ebdcb6e1d25bb9c2d1"} Mar 10 19:12:29 crc kubenswrapper[4861]: I0310 19:12:29.954355 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Mar 10 19:12:29 crc kubenswrapper[4861]: I0310 19:12:29.989939 4861 scope.go:117] "RemoveContainer" containerID="6c8cb06a57438b1f116a207056d9f8242e58a121d97216df52945cc2fa267c49" Mar 10 19:12:30 crc kubenswrapper[4861]: I0310 19:12:30.007974 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Mar 10 19:12:30 crc kubenswrapper[4861]: I0310 19:12:30.024373 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Mar 10 19:12:30 crc kubenswrapper[4861]: I0310 19:12:30.027877 4861 scope.go:117] "RemoveContainer" containerID="c5a55607ca787276513fa440b5f1bf875b2bbfb496f55f869caee7f339487bd1" Mar 10 19:12:30 crc kubenswrapper[4861]: E0310 19:12:30.028337 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c5a55607ca787276513fa440b5f1bf875b2bbfb496f55f869caee7f339487bd1\": container with ID starting with c5a55607ca787276513fa440b5f1bf875b2bbfb496f55f869caee7f339487bd1 not found: ID does not exist" containerID="c5a55607ca787276513fa440b5f1bf875b2bbfb496f55f869caee7f339487bd1" Mar 10 19:12:30 crc kubenswrapper[4861]: I0310 19:12:30.028374 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c5a55607ca787276513fa440b5f1bf875b2bbfb496f55f869caee7f339487bd1"} err="failed to get container status \"c5a55607ca787276513fa440b5f1bf875b2bbfb496f55f869caee7f339487bd1\": rpc error: code = NotFound desc = could not find container \"c5a55607ca787276513fa440b5f1bf875b2bbfb496f55f869caee7f339487bd1\": container with ID 
starting with c5a55607ca787276513fa440b5f1bf875b2bbfb496f55f869caee7f339487bd1 not found: ID does not exist" Mar 10 19:12:30 crc kubenswrapper[4861]: I0310 19:12:30.028401 4861 scope.go:117] "RemoveContainer" containerID="6c8cb06a57438b1f116a207056d9f8242e58a121d97216df52945cc2fa267c49" Mar 10 19:12:30 crc kubenswrapper[4861]: E0310 19:12:30.028785 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6c8cb06a57438b1f116a207056d9f8242e58a121d97216df52945cc2fa267c49\": container with ID starting with 6c8cb06a57438b1f116a207056d9f8242e58a121d97216df52945cc2fa267c49 not found: ID does not exist" containerID="6c8cb06a57438b1f116a207056d9f8242e58a121d97216df52945cc2fa267c49" Mar 10 19:12:30 crc kubenswrapper[4861]: I0310 19:12:30.028856 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6c8cb06a57438b1f116a207056d9f8242e58a121d97216df52945cc2fa267c49"} err="failed to get container status \"6c8cb06a57438b1f116a207056d9f8242e58a121d97216df52945cc2fa267c49\": rpc error: code = NotFound desc = could not find container \"6c8cb06a57438b1f116a207056d9f8242e58a121d97216df52945cc2fa267c49\": container with ID starting with 6c8cb06a57438b1f116a207056d9f8242e58a121d97216df52945cc2fa267c49 not found: ID does not exist" Mar 10 19:12:30 crc kubenswrapper[4861]: I0310 19:12:30.028924 4861 scope.go:117] "RemoveContainer" containerID="c5a55607ca787276513fa440b5f1bf875b2bbfb496f55f869caee7f339487bd1" Mar 10 19:12:30 crc kubenswrapper[4861]: I0310 19:12:30.029413 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c5a55607ca787276513fa440b5f1bf875b2bbfb496f55f869caee7f339487bd1"} err="failed to get container status \"c5a55607ca787276513fa440b5f1bf875b2bbfb496f55f869caee7f339487bd1\": rpc error: code = NotFound desc = could not find container \"c5a55607ca787276513fa440b5f1bf875b2bbfb496f55f869caee7f339487bd1\": 
container with ID starting with c5a55607ca787276513fa440b5f1bf875b2bbfb496f55f869caee7f339487bd1 not found: ID does not exist" Mar 10 19:12:30 crc kubenswrapper[4861]: I0310 19:12:30.029462 4861 scope.go:117] "RemoveContainer" containerID="6c8cb06a57438b1f116a207056d9f8242e58a121d97216df52945cc2fa267c49" Mar 10 19:12:30 crc kubenswrapper[4861]: I0310 19:12:30.030026 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6c8cb06a57438b1f116a207056d9f8242e58a121d97216df52945cc2fa267c49"} err="failed to get container status \"6c8cb06a57438b1f116a207056d9f8242e58a121d97216df52945cc2fa267c49\": rpc error: code = NotFound desc = could not find container \"6c8cb06a57438b1f116a207056d9f8242e58a121d97216df52945cc2fa267c49\": container with ID starting with 6c8cb06a57438b1f116a207056d9f8242e58a121d97216df52945cc2fa267c49 not found: ID does not exist" Mar 10 19:12:30 crc kubenswrapper[4861]: I0310 19:12:30.030072 4861 scope.go:117] "RemoveContainer" containerID="8ed9540fb7fd8e9e0daaad510061ddbf38d590a7b63a7e35e847f4b947d7b36c" Mar 10 19:12:30 crc kubenswrapper[4861]: I0310 19:12:30.041630 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Mar 10 19:12:30 crc kubenswrapper[4861]: I0310 19:12:30.048849 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Mar 10 19:12:30 crc kubenswrapper[4861]: I0310 19:12:30.055104 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Mar 10 19:12:30 crc kubenswrapper[4861]: E0310 19:12:30.055610 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1839f77d-af3b-46f9-87f9-3fb81e3daa90" containerName="init" Mar 10 19:12:30 crc kubenswrapper[4861]: I0310 19:12:30.055632 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="1839f77d-af3b-46f9-87f9-3fb81e3daa90" containerName="init" Mar 10 19:12:30 crc kubenswrapper[4861]: E0310 19:12:30.055656 4861 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="b29d5d26-c7f5-4556-9535-7743f991423a" containerName="nova-scheduler-scheduler" Mar 10 19:12:30 crc kubenswrapper[4861]: I0310 19:12:30.055665 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="b29d5d26-c7f5-4556-9535-7743f991423a" containerName="nova-scheduler-scheduler" Mar 10 19:12:30 crc kubenswrapper[4861]: E0310 19:12:30.055688 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1626ff9d-fded-4fbc-aa2c-f6f984bcf4ea" containerName="nova-cell1-conductor-db-sync" Mar 10 19:12:30 crc kubenswrapper[4861]: I0310 19:12:30.055697 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="1626ff9d-fded-4fbc-aa2c-f6f984bcf4ea" containerName="nova-cell1-conductor-db-sync" Mar 10 19:12:30 crc kubenswrapper[4861]: E0310 19:12:30.055735 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ffd067d1-6fba-489c-aa9c-7a224ca99deb" containerName="nova-metadata-log" Mar 10 19:12:30 crc kubenswrapper[4861]: I0310 19:12:30.055744 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="ffd067d1-6fba-489c-aa9c-7a224ca99deb" containerName="nova-metadata-log" Mar 10 19:12:30 crc kubenswrapper[4861]: E0310 19:12:30.055769 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ffd067d1-6fba-489c-aa9c-7a224ca99deb" containerName="nova-metadata-metadata" Mar 10 19:12:30 crc kubenswrapper[4861]: I0310 19:12:30.055777 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="ffd067d1-6fba-489c-aa9c-7a224ca99deb" containerName="nova-metadata-metadata" Mar 10 19:12:30 crc kubenswrapper[4861]: E0310 19:12:30.055794 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1839f77d-af3b-46f9-87f9-3fb81e3daa90" containerName="dnsmasq-dns" Mar 10 19:12:30 crc kubenswrapper[4861]: I0310 19:12:30.055802 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="1839f77d-af3b-46f9-87f9-3fb81e3daa90" containerName="dnsmasq-dns" Mar 10 19:12:30 crc kubenswrapper[4861]: E0310 
19:12:30.055817 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5d48de82-400d-41d1-a054-f451486e0ff5" containerName="nova-manage" Mar 10 19:12:30 crc kubenswrapper[4861]: I0310 19:12:30.055825 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="5d48de82-400d-41d1-a054-f451486e0ff5" containerName="nova-manage" Mar 10 19:12:30 crc kubenswrapper[4861]: I0310 19:12:30.056051 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="ffd067d1-6fba-489c-aa9c-7a224ca99deb" containerName="nova-metadata-metadata" Mar 10 19:12:30 crc kubenswrapper[4861]: I0310 19:12:30.056076 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="b29d5d26-c7f5-4556-9535-7743f991423a" containerName="nova-scheduler-scheduler" Mar 10 19:12:30 crc kubenswrapper[4861]: I0310 19:12:30.056097 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="1626ff9d-fded-4fbc-aa2c-f6f984bcf4ea" containerName="nova-cell1-conductor-db-sync" Mar 10 19:12:30 crc kubenswrapper[4861]: I0310 19:12:30.056120 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="5d48de82-400d-41d1-a054-f451486e0ff5" containerName="nova-manage" Mar 10 19:12:30 crc kubenswrapper[4861]: I0310 19:12:30.056129 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="ffd067d1-6fba-489c-aa9c-7a224ca99deb" containerName="nova-metadata-log" Mar 10 19:12:30 crc kubenswrapper[4861]: I0310 19:12:30.056144 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="1839f77d-af3b-46f9-87f9-3fb81e3daa90" containerName="dnsmasq-dns" Mar 10 19:12:30 crc kubenswrapper[4861]: I0310 19:12:30.056925 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Mar 10 19:12:30 crc kubenswrapper[4861]: I0310 19:12:30.059605 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Mar 10 19:12:30 crc kubenswrapper[4861]: I0310 19:12:30.073870 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Mar 10 19:12:30 crc kubenswrapper[4861]: I0310 19:12:30.082817 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Mar 10 19:12:30 crc kubenswrapper[4861]: I0310 19:12:30.084652 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Mar 10 19:12:30 crc kubenswrapper[4861]: I0310 19:12:30.087130 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Mar 10 19:12:30 crc kubenswrapper[4861]: I0310 19:12:30.087274 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Mar 10 19:12:30 crc kubenswrapper[4861]: I0310 19:12:30.097509 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Mar 10 19:12:30 crc kubenswrapper[4861]: I0310 19:12:30.111259 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/95db3120-a065-4f8b-8146-ab4b9f399177-config-data\") pod \"nova-scheduler-0\" (UID: \"95db3120-a065-4f8b-8146-ab4b9f399177\") " pod="openstack/nova-scheduler-0" Mar 10 19:12:30 crc kubenswrapper[4861]: I0310 19:12:30.111370 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bppn6\" (UniqueName: \"kubernetes.io/projected/95db3120-a065-4f8b-8146-ab4b9f399177-kube-api-access-bppn6\") pod \"nova-scheduler-0\" (UID: \"95db3120-a065-4f8b-8146-ab4b9f399177\") " pod="openstack/nova-scheduler-0" Mar 10 19:12:30 crc 
kubenswrapper[4861]: I0310 19:12:30.111448 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/95db3120-a065-4f8b-8146-ab4b9f399177-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"95db3120-a065-4f8b-8146-ab4b9f399177\") " pod="openstack/nova-scheduler-0" Mar 10 19:12:30 crc kubenswrapper[4861]: I0310 19:12:30.213204 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/c57556ab-4562-4e64-a986-9564f3bd682b-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"c57556ab-4562-4e64-a986-9564f3bd682b\") " pod="openstack/nova-metadata-0" Mar 10 19:12:30 crc kubenswrapper[4861]: I0310 19:12:30.213285 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c57556ab-4562-4e64-a986-9564f3bd682b-logs\") pod \"nova-metadata-0\" (UID: \"c57556ab-4562-4e64-a986-9564f3bd682b\") " pod="openstack/nova-metadata-0" Mar 10 19:12:30 crc kubenswrapper[4861]: I0310 19:12:30.213313 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/95db3120-a065-4f8b-8146-ab4b9f399177-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"95db3120-a065-4f8b-8146-ab4b9f399177\") " pod="openstack/nova-scheduler-0" Mar 10 19:12:30 crc kubenswrapper[4861]: I0310 19:12:30.213394 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c57556ab-4562-4e64-a986-9564f3bd682b-config-data\") pod \"nova-metadata-0\" (UID: \"c57556ab-4562-4e64-a986-9564f3bd682b\") " pod="openstack/nova-metadata-0" Mar 10 19:12:30 crc kubenswrapper[4861]: I0310 19:12:30.213459 4861 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-snq7k\" (UniqueName: \"kubernetes.io/projected/c57556ab-4562-4e64-a986-9564f3bd682b-kube-api-access-snq7k\") pod \"nova-metadata-0\" (UID: \"c57556ab-4562-4e64-a986-9564f3bd682b\") " pod="openstack/nova-metadata-0" Mar 10 19:12:30 crc kubenswrapper[4861]: I0310 19:12:30.213543 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/95db3120-a065-4f8b-8146-ab4b9f399177-config-data\") pod \"nova-scheduler-0\" (UID: \"95db3120-a065-4f8b-8146-ab4b9f399177\") " pod="openstack/nova-scheduler-0" Mar 10 19:12:30 crc kubenswrapper[4861]: I0310 19:12:30.213571 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c57556ab-4562-4e64-a986-9564f3bd682b-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"c57556ab-4562-4e64-a986-9564f3bd682b\") " pod="openstack/nova-metadata-0" Mar 10 19:12:30 crc kubenswrapper[4861]: I0310 19:12:30.213619 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bppn6\" (UniqueName: \"kubernetes.io/projected/95db3120-a065-4f8b-8146-ab4b9f399177-kube-api-access-bppn6\") pod \"nova-scheduler-0\" (UID: \"95db3120-a065-4f8b-8146-ab4b9f399177\") " pod="openstack/nova-scheduler-0" Mar 10 19:12:30 crc kubenswrapper[4861]: I0310 19:12:30.218972 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/95db3120-a065-4f8b-8146-ab4b9f399177-config-data\") pod \"nova-scheduler-0\" (UID: \"95db3120-a065-4f8b-8146-ab4b9f399177\") " pod="openstack/nova-scheduler-0" Mar 10 19:12:30 crc kubenswrapper[4861]: I0310 19:12:30.221986 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/95db3120-a065-4f8b-8146-ab4b9f399177-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"95db3120-a065-4f8b-8146-ab4b9f399177\") " pod="openstack/nova-scheduler-0" Mar 10 19:12:30 crc kubenswrapper[4861]: I0310 19:12:30.244094 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bppn6\" (UniqueName: \"kubernetes.io/projected/95db3120-a065-4f8b-8146-ab4b9f399177-kube-api-access-bppn6\") pod \"nova-scheduler-0\" (UID: \"95db3120-a065-4f8b-8146-ab4b9f399177\") " pod="openstack/nova-scheduler-0" Mar 10 19:12:30 crc kubenswrapper[4861]: I0310 19:12:30.315395 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-snq7k\" (UniqueName: \"kubernetes.io/projected/c57556ab-4562-4e64-a986-9564f3bd682b-kube-api-access-snq7k\") pod \"nova-metadata-0\" (UID: \"c57556ab-4562-4e64-a986-9564f3bd682b\") " pod="openstack/nova-metadata-0" Mar 10 19:12:30 crc kubenswrapper[4861]: I0310 19:12:30.315527 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c57556ab-4562-4e64-a986-9564f3bd682b-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"c57556ab-4562-4e64-a986-9564f3bd682b\") " pod="openstack/nova-metadata-0" Mar 10 19:12:30 crc kubenswrapper[4861]: I0310 19:12:30.316316 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/c57556ab-4562-4e64-a986-9564f3bd682b-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"c57556ab-4562-4e64-a986-9564f3bd682b\") " pod="openstack/nova-metadata-0" Mar 10 19:12:30 crc kubenswrapper[4861]: I0310 19:12:30.316366 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c57556ab-4562-4e64-a986-9564f3bd682b-logs\") pod \"nova-metadata-0\" (UID: \"c57556ab-4562-4e64-a986-9564f3bd682b\") " 
pod="openstack/nova-metadata-0" Mar 10 19:12:30 crc kubenswrapper[4861]: I0310 19:12:30.316455 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c57556ab-4562-4e64-a986-9564f3bd682b-config-data\") pod \"nova-metadata-0\" (UID: \"c57556ab-4562-4e64-a986-9564f3bd682b\") " pod="openstack/nova-metadata-0" Mar 10 19:12:30 crc kubenswrapper[4861]: I0310 19:12:30.317225 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c57556ab-4562-4e64-a986-9564f3bd682b-logs\") pod \"nova-metadata-0\" (UID: \"c57556ab-4562-4e64-a986-9564f3bd682b\") " pod="openstack/nova-metadata-0" Mar 10 19:12:30 crc kubenswrapper[4861]: I0310 19:12:30.323470 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/c57556ab-4562-4e64-a986-9564f3bd682b-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"c57556ab-4562-4e64-a986-9564f3bd682b\") " pod="openstack/nova-metadata-0" Mar 10 19:12:30 crc kubenswrapper[4861]: I0310 19:12:30.325027 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c57556ab-4562-4e64-a986-9564f3bd682b-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"c57556ab-4562-4e64-a986-9564f3bd682b\") " pod="openstack/nova-metadata-0" Mar 10 19:12:30 crc kubenswrapper[4861]: I0310 19:12:30.331009 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c57556ab-4562-4e64-a986-9564f3bd682b-config-data\") pod \"nova-metadata-0\" (UID: \"c57556ab-4562-4e64-a986-9564f3bd682b\") " pod="openstack/nova-metadata-0" Mar 10 19:12:30 crc kubenswrapper[4861]: I0310 19:12:30.345438 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"] Mar 10 19:12:30 crc kubenswrapper[4861]: I0310 
19:12:30.346587 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Mar 10 19:12:30 crc kubenswrapper[4861]: I0310 19:12:30.349527 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Mar 10 19:12:30 crc kubenswrapper[4861]: I0310 19:12:30.357903 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-snq7k\" (UniqueName: \"kubernetes.io/projected/c57556ab-4562-4e64-a986-9564f3bd682b-kube-api-access-snq7k\") pod \"nova-metadata-0\" (UID: \"c57556ab-4562-4e64-a986-9564f3bd682b\") " pod="openstack/nova-metadata-0" Mar 10 19:12:30 crc kubenswrapper[4861]: I0310 19:12:30.368566 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Mar 10 19:12:30 crc kubenswrapper[4861]: I0310 19:12:30.382280 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Mar 10 19:12:30 crc kubenswrapper[4861]: I0310 19:12:30.402736 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Mar 10 19:12:30 crc kubenswrapper[4861]: I0310 19:12:30.418226 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d523d4ef-a5fe-47c3-b174-b2aefc766755-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"d523d4ef-a5fe-47c3-b174-b2aefc766755\") " pod="openstack/nova-cell1-conductor-0" Mar 10 19:12:30 crc kubenswrapper[4861]: I0310 19:12:30.418381 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d523d4ef-a5fe-47c3-b174-b2aefc766755-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"d523d4ef-a5fe-47c3-b174-b2aefc766755\") " pod="openstack/nova-cell1-conductor-0" Mar 10 19:12:30 crc kubenswrapper[4861]: I0310 19:12:30.418428 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dgqcr\" (UniqueName: \"kubernetes.io/projected/d523d4ef-a5fe-47c3-b174-b2aefc766755-kube-api-access-dgqcr\") pod \"nova-cell1-conductor-0\" (UID: \"d523d4ef-a5fe-47c3-b174-b2aefc766755\") " pod="openstack/nova-cell1-conductor-0" Mar 10 19:12:30 crc kubenswrapper[4861]: I0310 19:12:30.520739 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d523d4ef-a5fe-47c3-b174-b2aefc766755-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"d523d4ef-a5fe-47c3-b174-b2aefc766755\") " pod="openstack/nova-cell1-conductor-0" Mar 10 19:12:30 crc kubenswrapper[4861]: I0310 19:12:30.520844 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dgqcr\" (UniqueName: \"kubernetes.io/projected/d523d4ef-a5fe-47c3-b174-b2aefc766755-kube-api-access-dgqcr\") pod \"nova-cell1-conductor-0\" (UID: \"d523d4ef-a5fe-47c3-b174-b2aefc766755\") " 
pod="openstack/nova-cell1-conductor-0" Mar 10 19:12:30 crc kubenswrapper[4861]: I0310 19:12:30.520875 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d523d4ef-a5fe-47c3-b174-b2aefc766755-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"d523d4ef-a5fe-47c3-b174-b2aefc766755\") " pod="openstack/nova-cell1-conductor-0" Mar 10 19:12:30 crc kubenswrapper[4861]: I0310 19:12:30.526548 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d523d4ef-a5fe-47c3-b174-b2aefc766755-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"d523d4ef-a5fe-47c3-b174-b2aefc766755\") " pod="openstack/nova-cell1-conductor-0" Mar 10 19:12:30 crc kubenswrapper[4861]: I0310 19:12:30.528625 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d523d4ef-a5fe-47c3-b174-b2aefc766755-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"d523d4ef-a5fe-47c3-b174-b2aefc766755\") " pod="openstack/nova-cell1-conductor-0" Mar 10 19:12:30 crc kubenswrapper[4861]: I0310 19:12:30.545384 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dgqcr\" (UniqueName: \"kubernetes.io/projected/d523d4ef-a5fe-47c3-b174-b2aefc766755-kube-api-access-dgqcr\") pod \"nova-cell1-conductor-0\" (UID: \"d523d4ef-a5fe-47c3-b174-b2aefc766755\") " pod="openstack/nova-cell1-conductor-0" Mar 10 19:12:30 crc kubenswrapper[4861]: I0310 19:12:30.718787 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Mar 10 19:12:30 crc kubenswrapper[4861]: I0310 19:12:30.878137 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Mar 10 19:12:30 crc kubenswrapper[4861]: I0310 19:12:30.944158 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Mar 10 19:12:30 crc kubenswrapper[4861]: I0310 19:12:30.993589 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1839f77d-af3b-46f9-87f9-3fb81e3daa90" path="/var/lib/kubelet/pods/1839f77d-af3b-46f9-87f9-3fb81e3daa90/volumes" Mar 10 19:12:30 crc kubenswrapper[4861]: I0310 19:12:30.995941 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b29d5d26-c7f5-4556-9535-7743f991423a" path="/var/lib/kubelet/pods/b29d5d26-c7f5-4556-9535-7743f991423a/volumes" Mar 10 19:12:30 crc kubenswrapper[4861]: I0310 19:12:30.997576 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ffd067d1-6fba-489c-aa9c-7a224ca99deb" path="/var/lib/kubelet/pods/ffd067d1-6fba-489c-aa9c-7a224ca99deb/volumes" Mar 10 19:12:30 crc kubenswrapper[4861]: I0310 19:12:30.999554 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"c57556ab-4562-4e64-a986-9564f3bd682b","Type":"ContainerStarted","Data":"93e2bc0d1cfc328267574eb81002520766140adae1be3a7cd655ae90f174ccb3"} Mar 10 19:12:31 crc kubenswrapper[4861]: I0310 19:12:31.002959 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"95db3120-a065-4f8b-8146-ab4b9f399177","Type":"ContainerStarted","Data":"d4249f5e72765d3294d937f63c8fa2b062f95093dce65b717446ca83d2ce4de3"} Mar 10 19:12:31 crc kubenswrapper[4861]: W0310 19:12:31.235187 4861 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd523d4ef_a5fe_47c3_b174_b2aefc766755.slice/crio-843509b73f7669ec9455b465b0c4853c22290961c0ede987107a189ef525180c WatchSource:0}: Error finding container 843509b73f7669ec9455b465b0c4853c22290961c0ede987107a189ef525180c: Status 404 returned error can't find the container with id 843509b73f7669ec9455b465b0c4853c22290961c0ede987107a189ef525180c Mar 10 19:12:31 crc kubenswrapper[4861]: I0310 19:12:31.236325 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Mar 10 19:12:32 crc kubenswrapper[4861]: I0310 19:12:32.015155 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"95db3120-a065-4f8b-8146-ab4b9f399177","Type":"ContainerStarted","Data":"0a3505a9a6d736eb80c683787f502c930643f7120a6d9b260f8f575a42c1faa4"} Mar 10 19:12:32 crc kubenswrapper[4861]: I0310 19:12:32.019198 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"d523d4ef-a5fe-47c3-b174-b2aefc766755","Type":"ContainerStarted","Data":"3ba3330672986fc68b7b9918ccf2611ea231c5cd8a95c24b12a274707371affc"} Mar 10 19:12:32 crc kubenswrapper[4861]: I0310 19:12:32.019451 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"d523d4ef-a5fe-47c3-b174-b2aefc766755","Type":"ContainerStarted","Data":"843509b73f7669ec9455b465b0c4853c22290961c0ede987107a189ef525180c"} Mar 10 19:12:32 crc kubenswrapper[4861]: I0310 19:12:32.020937 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0" Mar 10 19:12:32 crc kubenswrapper[4861]: I0310 19:12:32.021165 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"c57556ab-4562-4e64-a986-9564f3bd682b","Type":"ContainerStarted","Data":"c3114c3af9d0ce96583963c7530b67958e0f4177469f20af797c79f0aa0bd924"} Mar 10 19:12:32 crc 
kubenswrapper[4861]: I0310 19:12:32.021315 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"c57556ab-4562-4e64-a986-9564f3bd682b","Type":"ContainerStarted","Data":"e7de623e1348c7c6e4278b5fd10022b83192e45f23c9d85d37f6c143b2feaddb"} Mar 10 19:12:32 crc kubenswrapper[4861]: I0310 19:12:32.043732 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=3.043678606 podStartE2EDuration="3.043678606s" podCreationTimestamp="2026-03-10 19:12:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-10 19:12:32.03676008 +0000 UTC m=+1495.800196150" watchObservedRunningTime="2026-03-10 19:12:32.043678606 +0000 UTC m=+1495.807114606" Mar 10 19:12:32 crc kubenswrapper[4861]: I0310 19:12:32.082385 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.082363507 podStartE2EDuration="2.082363507s" podCreationTimestamp="2026-03-10 19:12:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-10 19:12:32.06216282 +0000 UTC m=+1495.825598840" watchObservedRunningTime="2026-03-10 19:12:32.082363507 +0000 UTC m=+1495.845799477" Mar 10 19:12:32 crc kubenswrapper[4861]: I0310 19:12:32.088241 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=2.088230452 podStartE2EDuration="2.088230452s" podCreationTimestamp="2026-03-10 19:12:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-10 19:12:32.079256716 +0000 UTC m=+1495.842692686" watchObservedRunningTime="2026-03-10 19:12:32.088230452 +0000 UTC m=+1495.851666422" Mar 10 19:12:33 crc kubenswrapper[4861]: 
I0310 19:12:33.854549 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Mar 10 19:12:33 crc kubenswrapper[4861]: I0310 19:12:33.998384 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b44343a9-b5bd-4f04-b33f-73abd4d4a553-config-data\") pod \"b44343a9-b5bd-4f04-b33f-73abd4d4a553\" (UID: \"b44343a9-b5bd-4f04-b33f-73abd4d4a553\") " Mar 10 19:12:33 crc kubenswrapper[4861]: I0310 19:12:33.998900 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b44343a9-b5bd-4f04-b33f-73abd4d4a553-combined-ca-bundle\") pod \"b44343a9-b5bd-4f04-b33f-73abd4d4a553\" (UID: \"b44343a9-b5bd-4f04-b33f-73abd4d4a553\") " Mar 10 19:12:34 crc kubenswrapper[4861]: I0310 19:12:33.999046 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8gqpp\" (UniqueName: \"kubernetes.io/projected/b44343a9-b5bd-4f04-b33f-73abd4d4a553-kube-api-access-8gqpp\") pod \"b44343a9-b5bd-4f04-b33f-73abd4d4a553\" (UID: \"b44343a9-b5bd-4f04-b33f-73abd4d4a553\") " Mar 10 19:12:34 crc kubenswrapper[4861]: I0310 19:12:33.999104 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b44343a9-b5bd-4f04-b33f-73abd4d4a553-logs\") pod \"b44343a9-b5bd-4f04-b33f-73abd4d4a553\" (UID: \"b44343a9-b5bd-4f04-b33f-73abd4d4a553\") " Mar 10 19:12:34 crc kubenswrapper[4861]: I0310 19:12:34.000183 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b44343a9-b5bd-4f04-b33f-73abd4d4a553-logs" (OuterVolumeSpecName: "logs") pod "b44343a9-b5bd-4f04-b33f-73abd4d4a553" (UID: "b44343a9-b5bd-4f04-b33f-73abd4d4a553"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 10 19:12:34 crc kubenswrapper[4861]: I0310 19:12:34.004233 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b44343a9-b5bd-4f04-b33f-73abd4d4a553-kube-api-access-8gqpp" (OuterVolumeSpecName: "kube-api-access-8gqpp") pod "b44343a9-b5bd-4f04-b33f-73abd4d4a553" (UID: "b44343a9-b5bd-4f04-b33f-73abd4d4a553"). InnerVolumeSpecName "kube-api-access-8gqpp". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 10 19:12:34 crc kubenswrapper[4861]: I0310 19:12:34.054228 4861 generic.go:334] "Generic (PLEG): container finished" podID="b44343a9-b5bd-4f04-b33f-73abd4d4a553" containerID="1cebaff482369d0358e1d6769ac8bb22633b34cabd56c9ba929456e0782b5555" exitCode=0 Mar 10 19:12:34 crc kubenswrapper[4861]: I0310 19:12:34.054314 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Mar 10 19:12:34 crc kubenswrapper[4861]: I0310 19:12:34.054305 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"b44343a9-b5bd-4f04-b33f-73abd4d4a553","Type":"ContainerDied","Data":"1cebaff482369d0358e1d6769ac8bb22633b34cabd56c9ba929456e0782b5555"} Mar 10 19:12:34 crc kubenswrapper[4861]: I0310 19:12:34.054376 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b44343a9-b5bd-4f04-b33f-73abd4d4a553-config-data" (OuterVolumeSpecName: "config-data") pod "b44343a9-b5bd-4f04-b33f-73abd4d4a553" (UID: "b44343a9-b5bd-4f04-b33f-73abd4d4a553"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 10 19:12:34 crc kubenswrapper[4861]: I0310 19:12:34.054673 4861 scope.go:117] "RemoveContainer" containerID="1cebaff482369d0358e1d6769ac8bb22633b34cabd56c9ba929456e0782b5555" Mar 10 19:12:34 crc kubenswrapper[4861]: I0310 19:12:34.054654 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"b44343a9-b5bd-4f04-b33f-73abd4d4a553","Type":"ContainerDied","Data":"90b5d2ea45869ea075f0e158dbf492784c05dfdc2119201e9c7c516a8df31e1d"} Mar 10 19:12:34 crc kubenswrapper[4861]: I0310 19:12:34.062890 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b44343a9-b5bd-4f04-b33f-73abd4d4a553-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b44343a9-b5bd-4f04-b33f-73abd4d4a553" (UID: "b44343a9-b5bd-4f04-b33f-73abd4d4a553"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 10 19:12:34 crc kubenswrapper[4861]: I0310 19:12:34.101827 4861 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b44343a9-b5bd-4f04-b33f-73abd4d4a553-config-data\") on node \"crc\" DevicePath \"\"" Mar 10 19:12:34 crc kubenswrapper[4861]: I0310 19:12:34.102007 4861 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b44343a9-b5bd-4f04-b33f-73abd4d4a553-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 10 19:12:34 crc kubenswrapper[4861]: I0310 19:12:34.102124 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8gqpp\" (UniqueName: \"kubernetes.io/projected/b44343a9-b5bd-4f04-b33f-73abd4d4a553-kube-api-access-8gqpp\") on node \"crc\" DevicePath \"\"" Mar 10 19:12:34 crc kubenswrapper[4861]: I0310 19:12:34.102254 4861 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/b44343a9-b5bd-4f04-b33f-73abd4d4a553-logs\") on node \"crc\" DevicePath \"\"" Mar 10 19:12:34 crc kubenswrapper[4861]: I0310 19:12:34.135705 4861 scope.go:117] "RemoveContainer" containerID="cda5ad93902931cbb82f5e593d9e1d7562e143a8413c472720712f7246bf4566" Mar 10 19:12:34 crc kubenswrapper[4861]: I0310 19:12:34.167014 4861 scope.go:117] "RemoveContainer" containerID="1cebaff482369d0358e1d6769ac8bb22633b34cabd56c9ba929456e0782b5555" Mar 10 19:12:34 crc kubenswrapper[4861]: E0310 19:12:34.167648 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1cebaff482369d0358e1d6769ac8bb22633b34cabd56c9ba929456e0782b5555\": container with ID starting with 1cebaff482369d0358e1d6769ac8bb22633b34cabd56c9ba929456e0782b5555 not found: ID does not exist" containerID="1cebaff482369d0358e1d6769ac8bb22633b34cabd56c9ba929456e0782b5555" Mar 10 19:12:34 crc kubenswrapper[4861]: I0310 19:12:34.167700 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1cebaff482369d0358e1d6769ac8bb22633b34cabd56c9ba929456e0782b5555"} err="failed to get container status \"1cebaff482369d0358e1d6769ac8bb22633b34cabd56c9ba929456e0782b5555\": rpc error: code = NotFound desc = could not find container \"1cebaff482369d0358e1d6769ac8bb22633b34cabd56c9ba929456e0782b5555\": container with ID starting with 1cebaff482369d0358e1d6769ac8bb22633b34cabd56c9ba929456e0782b5555 not found: ID does not exist" Mar 10 19:12:34 crc kubenswrapper[4861]: I0310 19:12:34.167756 4861 scope.go:117] "RemoveContainer" containerID="cda5ad93902931cbb82f5e593d9e1d7562e143a8413c472720712f7246bf4566" Mar 10 19:12:34 crc kubenswrapper[4861]: E0310 19:12:34.168295 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cda5ad93902931cbb82f5e593d9e1d7562e143a8413c472720712f7246bf4566\": container with ID starting with 
cda5ad93902931cbb82f5e593d9e1d7562e143a8413c472720712f7246bf4566 not found: ID does not exist" containerID="cda5ad93902931cbb82f5e593d9e1d7562e143a8413c472720712f7246bf4566" Mar 10 19:12:34 crc kubenswrapper[4861]: I0310 19:12:34.168473 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cda5ad93902931cbb82f5e593d9e1d7562e143a8413c472720712f7246bf4566"} err="failed to get container status \"cda5ad93902931cbb82f5e593d9e1d7562e143a8413c472720712f7246bf4566\": rpc error: code = NotFound desc = could not find container \"cda5ad93902931cbb82f5e593d9e1d7562e143a8413c472720712f7246bf4566\": container with ID starting with cda5ad93902931cbb82f5e593d9e1d7562e143a8413c472720712f7246bf4566 not found: ID does not exist" Mar 10 19:12:34 crc kubenswrapper[4861]: I0310 19:12:34.442303 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Mar 10 19:12:34 crc kubenswrapper[4861]: I0310 19:12:34.477836 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Mar 10 19:12:34 crc kubenswrapper[4861]: I0310 19:12:34.486803 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Mar 10 19:12:34 crc kubenswrapper[4861]: E0310 19:12:34.487397 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b44343a9-b5bd-4f04-b33f-73abd4d4a553" containerName="nova-api-log" Mar 10 19:12:34 crc kubenswrapper[4861]: I0310 19:12:34.487482 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="b44343a9-b5bd-4f04-b33f-73abd4d4a553" containerName="nova-api-log" Mar 10 19:12:34 crc kubenswrapper[4861]: E0310 19:12:34.487560 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b44343a9-b5bd-4f04-b33f-73abd4d4a553" containerName="nova-api-api" Mar 10 19:12:34 crc kubenswrapper[4861]: I0310 19:12:34.487623 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="b44343a9-b5bd-4f04-b33f-73abd4d4a553" containerName="nova-api-api" Mar 10 19:12:34 
crc kubenswrapper[4861]: I0310 19:12:34.487934 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="b44343a9-b5bd-4f04-b33f-73abd4d4a553" containerName="nova-api-log" Mar 10 19:12:34 crc kubenswrapper[4861]: I0310 19:12:34.488033 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="b44343a9-b5bd-4f04-b33f-73abd4d4a553" containerName="nova-api-api" Mar 10 19:12:34 crc kubenswrapper[4861]: I0310 19:12:34.489316 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Mar 10 19:12:34 crc kubenswrapper[4861]: I0310 19:12:34.491286 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Mar 10 19:12:34 crc kubenswrapper[4861]: I0310 19:12:34.495279 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Mar 10 19:12:34 crc kubenswrapper[4861]: I0310 19:12:34.636356 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/165e122d-1525-44d5-adcf-28470f33e74d-config-data\") pod \"nova-api-0\" (UID: \"165e122d-1525-44d5-adcf-28470f33e74d\") " pod="openstack/nova-api-0" Mar 10 19:12:34 crc kubenswrapper[4861]: I0310 19:12:34.636580 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/165e122d-1525-44d5-adcf-28470f33e74d-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"165e122d-1525-44d5-adcf-28470f33e74d\") " pod="openstack/nova-api-0" Mar 10 19:12:34 crc kubenswrapper[4861]: I0310 19:12:34.637074 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/165e122d-1525-44d5-adcf-28470f33e74d-logs\") pod \"nova-api-0\" (UID: \"165e122d-1525-44d5-adcf-28470f33e74d\") " pod="openstack/nova-api-0" Mar 10 19:12:34 crc kubenswrapper[4861]: I0310 
19:12:34.637168 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j8pg5\" (UniqueName: \"kubernetes.io/projected/165e122d-1525-44d5-adcf-28470f33e74d-kube-api-access-j8pg5\") pod \"nova-api-0\" (UID: \"165e122d-1525-44d5-adcf-28470f33e74d\") " pod="openstack/nova-api-0" Mar 10 19:12:34 crc kubenswrapper[4861]: I0310 19:12:34.739566 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/165e122d-1525-44d5-adcf-28470f33e74d-config-data\") pod \"nova-api-0\" (UID: \"165e122d-1525-44d5-adcf-28470f33e74d\") " pod="openstack/nova-api-0" Mar 10 19:12:34 crc kubenswrapper[4861]: I0310 19:12:34.739655 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/165e122d-1525-44d5-adcf-28470f33e74d-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"165e122d-1525-44d5-adcf-28470f33e74d\") " pod="openstack/nova-api-0" Mar 10 19:12:34 crc kubenswrapper[4861]: I0310 19:12:34.739746 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/165e122d-1525-44d5-adcf-28470f33e74d-logs\") pod \"nova-api-0\" (UID: \"165e122d-1525-44d5-adcf-28470f33e74d\") " pod="openstack/nova-api-0" Mar 10 19:12:34 crc kubenswrapper[4861]: I0310 19:12:34.739777 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j8pg5\" (UniqueName: \"kubernetes.io/projected/165e122d-1525-44d5-adcf-28470f33e74d-kube-api-access-j8pg5\") pod \"nova-api-0\" (UID: \"165e122d-1525-44d5-adcf-28470f33e74d\") " pod="openstack/nova-api-0" Mar 10 19:12:34 crc kubenswrapper[4861]: I0310 19:12:34.740273 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/165e122d-1525-44d5-adcf-28470f33e74d-logs\") pod \"nova-api-0\" (UID: 
\"165e122d-1525-44d5-adcf-28470f33e74d\") " pod="openstack/nova-api-0" Mar 10 19:12:34 crc kubenswrapper[4861]: I0310 19:12:34.744899 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/165e122d-1525-44d5-adcf-28470f33e74d-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"165e122d-1525-44d5-adcf-28470f33e74d\") " pod="openstack/nova-api-0" Mar 10 19:12:34 crc kubenswrapper[4861]: I0310 19:12:34.745435 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/165e122d-1525-44d5-adcf-28470f33e74d-config-data\") pod \"nova-api-0\" (UID: \"165e122d-1525-44d5-adcf-28470f33e74d\") " pod="openstack/nova-api-0" Mar 10 19:12:34 crc kubenswrapper[4861]: I0310 19:12:34.759819 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j8pg5\" (UniqueName: \"kubernetes.io/projected/165e122d-1525-44d5-adcf-28470f33e74d-kube-api-access-j8pg5\") pod \"nova-api-0\" (UID: \"165e122d-1525-44d5-adcf-28470f33e74d\") " pod="openstack/nova-api-0" Mar 10 19:12:34 crc kubenswrapper[4861]: I0310 19:12:34.805742 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Mar 10 19:12:34 crc kubenswrapper[4861]: I0310 19:12:34.955867 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Mar 10 19:12:34 crc kubenswrapper[4861]: I0310 19:12:34.971161 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b44343a9-b5bd-4f04-b33f-73abd4d4a553" path="/var/lib/kubelet/pods/b44343a9-b5bd-4f04-b33f-73abd4d4a553/volumes" Mar 10 19:12:35 crc kubenswrapper[4861]: I0310 19:12:35.308967 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Mar 10 19:12:35 crc kubenswrapper[4861]: W0310 19:12:35.318978 4861 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod165e122d_1525_44d5_adcf_28470f33e74d.slice/crio-0dfe1f64bd6d32ecc78ab6a6dffa5af35cc3ff06a28751260057ae44d69b811e WatchSource:0}: Error finding container 0dfe1f64bd6d32ecc78ab6a6dffa5af35cc3ff06a28751260057ae44d69b811e: Status 404 returned error can't find the container with id 0dfe1f64bd6d32ecc78ab6a6dffa5af35cc3ff06a28751260057ae44d69b811e Mar 10 19:12:35 crc kubenswrapper[4861]: I0310 19:12:35.383171 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Mar 10 19:12:35 crc kubenswrapper[4861]: I0310 19:12:35.403836 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Mar 10 19:12:35 crc kubenswrapper[4861]: I0310 19:12:35.403979 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Mar 10 19:12:36 crc kubenswrapper[4861]: I0310 19:12:36.086400 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"165e122d-1525-44d5-adcf-28470f33e74d","Type":"ContainerStarted","Data":"78ba7cae4a264b0a79c723a588af1d2a4be503b4df2566f8e64db68218fc8c7d"} Mar 10 19:12:36 crc kubenswrapper[4861]: I0310 
19:12:36.086748 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"165e122d-1525-44d5-adcf-28470f33e74d","Type":"ContainerStarted","Data":"cf57dedf399da9513eda59adbb9c5878f7057c778c480ef98aa433f4544ebcbb"} Mar 10 19:12:36 crc kubenswrapper[4861]: I0310 19:12:36.086773 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"165e122d-1525-44d5-adcf-28470f33e74d","Type":"ContainerStarted","Data":"0dfe1f64bd6d32ecc78ab6a6dffa5af35cc3ff06a28751260057ae44d69b811e"} Mar 10 19:12:36 crc kubenswrapper[4861]: I0310 19:12:36.110777 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.110703684 podStartE2EDuration="2.110703684s" podCreationTimestamp="2026-03-10 19:12:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-10 19:12:36.104952351 +0000 UTC m=+1499.868388331" watchObservedRunningTime="2026-03-10 19:12:36.110703684 +0000 UTC m=+1499.874139684" Mar 10 19:12:38 crc kubenswrapper[4861]: I0310 19:12:38.860317 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Mar 10 19:12:38 crc kubenswrapper[4861]: I0310 19:12:38.861014 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="4793319d-2e64-4fda-9df8-9a97ff264050" containerName="kube-state-metrics" containerID="cri-o://9b6a3cf511e38106b8f47160427dc36b1287d8d3fd9359ffd37bdde81a64a865" gracePeriod=30 Mar 10 19:12:39 crc kubenswrapper[4861]: I0310 19:12:39.135505 4861 generic.go:334] "Generic (PLEG): container finished" podID="4793319d-2e64-4fda-9df8-9a97ff264050" containerID="9b6a3cf511e38106b8f47160427dc36b1287d8d3fd9359ffd37bdde81a64a865" exitCode=2 Mar 10 19:12:39 crc kubenswrapper[4861]: I0310 19:12:39.135656 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/kube-state-metrics-0" event={"ID":"4793319d-2e64-4fda-9df8-9a97ff264050","Type":"ContainerDied","Data":"9b6a3cf511e38106b8f47160427dc36b1287d8d3fd9359ffd37bdde81a64a865"} Mar 10 19:12:39 crc kubenswrapper[4861]: I0310 19:12:39.394030 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Mar 10 19:12:39 crc kubenswrapper[4861]: I0310 19:12:39.558836 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bkg4b\" (UniqueName: \"kubernetes.io/projected/4793319d-2e64-4fda-9df8-9a97ff264050-kube-api-access-bkg4b\") pod \"4793319d-2e64-4fda-9df8-9a97ff264050\" (UID: \"4793319d-2e64-4fda-9df8-9a97ff264050\") " Mar 10 19:12:39 crc kubenswrapper[4861]: I0310 19:12:39.565977 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4793319d-2e64-4fda-9df8-9a97ff264050-kube-api-access-bkg4b" (OuterVolumeSpecName: "kube-api-access-bkg4b") pod "4793319d-2e64-4fda-9df8-9a97ff264050" (UID: "4793319d-2e64-4fda-9df8-9a97ff264050"). InnerVolumeSpecName "kube-api-access-bkg4b". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 10 19:12:39 crc kubenswrapper[4861]: I0310 19:12:39.661055 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bkg4b\" (UniqueName: \"kubernetes.io/projected/4793319d-2e64-4fda-9df8-9a97ff264050-kube-api-access-bkg4b\") on node \"crc\" DevicePath \"\"" Mar 10 19:12:40 crc kubenswrapper[4861]: I0310 19:12:40.152500 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"4793319d-2e64-4fda-9df8-9a97ff264050","Type":"ContainerDied","Data":"2d8aaad856b4488c5eb5d11201c50f387cf6749255371f7a400f16c06feddca3"} Mar 10 19:12:40 crc kubenswrapper[4861]: I0310 19:12:40.152577 4861 scope.go:117] "RemoveContainer" containerID="9b6a3cf511e38106b8f47160427dc36b1287d8d3fd9359ffd37bdde81a64a865" Mar 10 19:12:40 crc kubenswrapper[4861]: I0310 19:12:40.152622 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Mar 10 19:12:40 crc kubenswrapper[4861]: I0310 19:12:40.218839 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Mar 10 19:12:40 crc kubenswrapper[4861]: I0310 19:12:40.236566 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"] Mar 10 19:12:40 crc kubenswrapper[4861]: I0310 19:12:40.269872 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Mar 10 19:12:40 crc kubenswrapper[4861]: E0310 19:12:40.270540 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4793319d-2e64-4fda-9df8-9a97ff264050" containerName="kube-state-metrics" Mar 10 19:12:40 crc kubenswrapper[4861]: I0310 19:12:40.270571 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="4793319d-2e64-4fda-9df8-9a97ff264050" containerName="kube-state-metrics" Mar 10 19:12:40 crc kubenswrapper[4861]: I0310 19:12:40.270919 4861 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="4793319d-2e64-4fda-9df8-9a97ff264050" containerName="kube-state-metrics" Mar 10 19:12:40 crc kubenswrapper[4861]: I0310 19:12:40.271982 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Mar 10 19:12:40 crc kubenswrapper[4861]: I0310 19:12:40.275391 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-kube-state-metrics-svc" Mar 10 19:12:40 crc kubenswrapper[4861]: I0310 19:12:40.275803 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"kube-state-metrics-tls-config" Mar 10 19:12:40 crc kubenswrapper[4861]: I0310 19:12:40.311020 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Mar 10 19:12:40 crc kubenswrapper[4861]: I0310 19:12:40.377537 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/7cd47c3f-4d27-4dae-bac5-a6f81ce01cd2-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"7cd47c3f-4d27-4dae-bac5-a6f81ce01cd2\") " pod="openstack/kube-state-metrics-0" Mar 10 19:12:40 crc kubenswrapper[4861]: I0310 19:12:40.377890 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pbk2f\" (UniqueName: \"kubernetes.io/projected/7cd47c3f-4d27-4dae-bac5-a6f81ce01cd2-kube-api-access-pbk2f\") pod \"kube-state-metrics-0\" (UID: \"7cd47c3f-4d27-4dae-bac5-a6f81ce01cd2\") " pod="openstack/kube-state-metrics-0" Mar 10 19:12:40 crc kubenswrapper[4861]: I0310 19:12:40.377973 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/7cd47c3f-4d27-4dae-bac5-a6f81ce01cd2-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"7cd47c3f-4d27-4dae-bac5-a6f81ce01cd2\") " 
pod="openstack/kube-state-metrics-0" Mar 10 19:12:40 crc kubenswrapper[4861]: I0310 19:12:40.378284 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7cd47c3f-4d27-4dae-bac5-a6f81ce01cd2-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"7cd47c3f-4d27-4dae-bac5-a6f81ce01cd2\") " pod="openstack/kube-state-metrics-0" Mar 10 19:12:40 crc kubenswrapper[4861]: I0310 19:12:40.382982 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Mar 10 19:12:40 crc kubenswrapper[4861]: I0310 19:12:40.403218 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Mar 10 19:12:40 crc kubenswrapper[4861]: I0310 19:12:40.403253 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Mar 10 19:12:40 crc kubenswrapper[4861]: I0310 19:12:40.424657 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Mar 10 19:12:40 crc kubenswrapper[4861]: I0310 19:12:40.480331 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7cd47c3f-4d27-4dae-bac5-a6f81ce01cd2-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"7cd47c3f-4d27-4dae-bac5-a6f81ce01cd2\") " pod="openstack/kube-state-metrics-0" Mar 10 19:12:40 crc kubenswrapper[4861]: I0310 19:12:40.480450 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/7cd47c3f-4d27-4dae-bac5-a6f81ce01cd2-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"7cd47c3f-4d27-4dae-bac5-a6f81ce01cd2\") " pod="openstack/kube-state-metrics-0" Mar 10 19:12:40 crc kubenswrapper[4861]: I0310 19:12:40.480542 4861 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-pbk2f\" (UniqueName: \"kubernetes.io/projected/7cd47c3f-4d27-4dae-bac5-a6f81ce01cd2-kube-api-access-pbk2f\") pod \"kube-state-metrics-0\" (UID: \"7cd47c3f-4d27-4dae-bac5-a6f81ce01cd2\") " pod="openstack/kube-state-metrics-0" Mar 10 19:12:40 crc kubenswrapper[4861]: I0310 19:12:40.480564 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/7cd47c3f-4d27-4dae-bac5-a6f81ce01cd2-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"7cd47c3f-4d27-4dae-bac5-a6f81ce01cd2\") " pod="openstack/kube-state-metrics-0" Mar 10 19:12:40 crc kubenswrapper[4861]: I0310 19:12:40.485786 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/7cd47c3f-4d27-4dae-bac5-a6f81ce01cd2-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"7cd47c3f-4d27-4dae-bac5-a6f81ce01cd2\") " pod="openstack/kube-state-metrics-0" Mar 10 19:12:40 crc kubenswrapper[4861]: I0310 19:12:40.486015 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7cd47c3f-4d27-4dae-bac5-a6f81ce01cd2-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"7cd47c3f-4d27-4dae-bac5-a6f81ce01cd2\") " pod="openstack/kube-state-metrics-0" Mar 10 19:12:40 crc kubenswrapper[4861]: I0310 19:12:40.487462 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/7cd47c3f-4d27-4dae-bac5-a6f81ce01cd2-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"7cd47c3f-4d27-4dae-bac5-a6f81ce01cd2\") " pod="openstack/kube-state-metrics-0" Mar 10 19:12:40 crc kubenswrapper[4861]: I0310 19:12:40.501333 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-pbk2f\" (UniqueName: \"kubernetes.io/projected/7cd47c3f-4d27-4dae-bac5-a6f81ce01cd2-kube-api-access-pbk2f\") pod \"kube-state-metrics-0\" (UID: \"7cd47c3f-4d27-4dae-bac5-a6f81ce01cd2\") " pod="openstack/kube-state-metrics-0" Mar 10 19:12:40 crc kubenswrapper[4861]: I0310 19:12:40.602635 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Mar 10 19:12:40 crc kubenswrapper[4861]: I0310 19:12:40.691244 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Mar 10 19:12:40 crc kubenswrapper[4861]: I0310 19:12:40.694470 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="b5a6522b-3ccb-4d9b-b48d-dce8c34f3eab" containerName="ceilometer-central-agent" containerID="cri-o://07667775b38ed6a21232726c2f1719e926731eb5a7eee20c7a79075101659dd2" gracePeriod=30 Mar 10 19:12:40 crc kubenswrapper[4861]: I0310 19:12:40.694539 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="b5a6522b-3ccb-4d9b-b48d-dce8c34f3eab" containerName="proxy-httpd" containerID="cri-o://aec68c03619f69a3eb371ec522aa257bf49ad74916348fcb5bdc5a8ffc86c627" gracePeriod=30 Mar 10 19:12:40 crc kubenswrapper[4861]: I0310 19:12:40.694565 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="b5a6522b-3ccb-4d9b-b48d-dce8c34f3eab" containerName="ceilometer-notification-agent" containerID="cri-o://2d61b389d848ac0fe241eee6fd117d6cb26b15215325ec3da5afdb59ec6c22fa" gracePeriod=30 Mar 10 19:12:40 crc kubenswrapper[4861]: I0310 19:12:40.694696 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="b5a6522b-3ccb-4d9b-b48d-dce8c34f3eab" containerName="sg-core" containerID="cri-o://bb6f71b5360bba8bdcea9ee90a666f5c40c9a425f05d5c8ad3c7dd33a254a044" gracePeriod=30 Mar 10 19:12:40 crc 
kubenswrapper[4861]: I0310 19:12:40.761349 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0" Mar 10 19:12:40 crc kubenswrapper[4861]: I0310 19:12:40.967727 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4793319d-2e64-4fda-9df8-9a97ff264050" path="/var/lib/kubelet/pods/4793319d-2e64-4fda-9df8-9a97ff264050/volumes" Mar 10 19:12:41 crc kubenswrapper[4861]: I0310 19:12:41.111407 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Mar 10 19:12:41 crc kubenswrapper[4861]: I0310 19:12:41.223066 4861 generic.go:334] "Generic (PLEG): container finished" podID="b5a6522b-3ccb-4d9b-b48d-dce8c34f3eab" containerID="aec68c03619f69a3eb371ec522aa257bf49ad74916348fcb5bdc5a8ffc86c627" exitCode=0 Mar 10 19:12:41 crc kubenswrapper[4861]: I0310 19:12:41.223100 4861 generic.go:334] "Generic (PLEG): container finished" podID="b5a6522b-3ccb-4d9b-b48d-dce8c34f3eab" containerID="bb6f71b5360bba8bdcea9ee90a666f5c40c9a425f05d5c8ad3c7dd33a254a044" exitCode=2 Mar 10 19:12:41 crc kubenswrapper[4861]: I0310 19:12:41.223109 4861 generic.go:334] "Generic (PLEG): container finished" podID="b5a6522b-3ccb-4d9b-b48d-dce8c34f3eab" containerID="07667775b38ed6a21232726c2f1719e926731eb5a7eee20c7a79075101659dd2" exitCode=0 Mar 10 19:12:41 crc kubenswrapper[4861]: I0310 19:12:41.223159 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b5a6522b-3ccb-4d9b-b48d-dce8c34f3eab","Type":"ContainerDied","Data":"aec68c03619f69a3eb371ec522aa257bf49ad74916348fcb5bdc5a8ffc86c627"} Mar 10 19:12:41 crc kubenswrapper[4861]: I0310 19:12:41.223222 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b5a6522b-3ccb-4d9b-b48d-dce8c34f3eab","Type":"ContainerDied","Data":"bb6f71b5360bba8bdcea9ee90a666f5c40c9a425f05d5c8ad3c7dd33a254a044"} Mar 10 19:12:41 crc kubenswrapper[4861]: I0310 19:12:41.223234 4861 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b5a6522b-3ccb-4d9b-b48d-dce8c34f3eab","Type":"ContainerDied","Data":"07667775b38ed6a21232726c2f1719e926731eb5a7eee20c7a79075101659dd2"} Mar 10 19:12:41 crc kubenswrapper[4861]: I0310 19:12:41.242208 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"7cd47c3f-4d27-4dae-bac5-a6f81ce01cd2","Type":"ContainerStarted","Data":"167c7d0b8e951a07b4abc02208342afe802d76dcc658ea89add07c866ef620cd"} Mar 10 19:12:41 crc kubenswrapper[4861]: I0310 19:12:41.313252 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Mar 10 19:12:41 crc kubenswrapper[4861]: I0310 19:12:41.416880 4861 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="c57556ab-4562-4e64-a986-9564f3bd682b" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.203:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 10 19:12:41 crc kubenswrapper[4861]: I0310 19:12:41.416890 4861 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="c57556ab-4562-4e64-a986-9564f3bd682b" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.203:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 10 19:12:42 crc kubenswrapper[4861]: I0310 19:12:42.256995 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"7cd47c3f-4d27-4dae-bac5-a6f81ce01cd2","Type":"ContainerStarted","Data":"df187ab1e346e2781a98eda8c445f3108069bc4bf78574552f5c61fddde8cf7f"} Mar 10 19:12:42 crc kubenswrapper[4861]: I0310 19:12:42.258270 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Mar 10 19:12:42 crc kubenswrapper[4861]: I0310 19:12:42.287424 4861 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=1.750986307 podStartE2EDuration="2.287375831s" podCreationTimestamp="2026-03-10 19:12:40 +0000 UTC" firstStartedPulling="2026-03-10 19:12:41.142111458 +0000 UTC m=+1504.905547418" lastFinishedPulling="2026-03-10 19:12:41.678500982 +0000 UTC m=+1505.441936942" observedRunningTime="2026-03-10 19:12:42.277630279 +0000 UTC m=+1506.041066299" watchObservedRunningTime="2026-03-10 19:12:42.287375831 +0000 UTC m=+1506.050811831" Mar 10 19:12:43 crc kubenswrapper[4861]: I0310 19:12:43.271563 4861 generic.go:334] "Generic (PLEG): container finished" podID="b5a6522b-3ccb-4d9b-b48d-dce8c34f3eab" containerID="2d61b389d848ac0fe241eee6fd117d6cb26b15215325ec3da5afdb59ec6c22fa" exitCode=0 Mar 10 19:12:43 crc kubenswrapper[4861]: I0310 19:12:43.272635 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b5a6522b-3ccb-4d9b-b48d-dce8c34f3eab","Type":"ContainerDied","Data":"2d61b389d848ac0fe241eee6fd117d6cb26b15215325ec3da5afdb59ec6c22fa"} Mar 10 19:12:43 crc kubenswrapper[4861]: I0310 19:12:43.531003 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Mar 10 19:12:43 crc kubenswrapper[4861]: I0310 19:12:43.634596 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b5a6522b-3ccb-4d9b-b48d-dce8c34f3eab-run-httpd\") pod \"b5a6522b-3ccb-4d9b-b48d-dce8c34f3eab\" (UID: \"b5a6522b-3ccb-4d9b-b48d-dce8c34f3eab\") " Mar 10 19:12:43 crc kubenswrapper[4861]: I0310 19:12:43.634693 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b5a6522b-3ccb-4d9b-b48d-dce8c34f3eab-config-data\") pod \"b5a6522b-3ccb-4d9b-b48d-dce8c34f3eab\" (UID: \"b5a6522b-3ccb-4d9b-b48d-dce8c34f3eab\") " Mar 10 19:12:43 crc kubenswrapper[4861]: I0310 19:12:43.634746 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b5a6522b-3ccb-4d9b-b48d-dce8c34f3eab-scripts\") pod \"b5a6522b-3ccb-4d9b-b48d-dce8c34f3eab\" (UID: \"b5a6522b-3ccb-4d9b-b48d-dce8c34f3eab\") " Mar 10 19:12:43 crc kubenswrapper[4861]: I0310 19:12:43.634772 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b5a6522b-3ccb-4d9b-b48d-dce8c34f3eab-combined-ca-bundle\") pod \"b5a6522b-3ccb-4d9b-b48d-dce8c34f3eab\" (UID: \"b5a6522b-3ccb-4d9b-b48d-dce8c34f3eab\") " Mar 10 19:12:43 crc kubenswrapper[4861]: I0310 19:12:43.634951 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b5a6522b-3ccb-4d9b-b48d-dce8c34f3eab-log-httpd\") pod \"b5a6522b-3ccb-4d9b-b48d-dce8c34f3eab\" (UID: \"b5a6522b-3ccb-4d9b-b48d-dce8c34f3eab\") " Mar 10 19:12:43 crc kubenswrapper[4861]: I0310 19:12:43.635001 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dkds8\" (UniqueName: 
\"kubernetes.io/projected/b5a6522b-3ccb-4d9b-b48d-dce8c34f3eab-kube-api-access-dkds8\") pod \"b5a6522b-3ccb-4d9b-b48d-dce8c34f3eab\" (UID: \"b5a6522b-3ccb-4d9b-b48d-dce8c34f3eab\") " Mar 10 19:12:43 crc kubenswrapper[4861]: I0310 19:12:43.635025 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b5a6522b-3ccb-4d9b-b48d-dce8c34f3eab-sg-core-conf-yaml\") pod \"b5a6522b-3ccb-4d9b-b48d-dce8c34f3eab\" (UID: \"b5a6522b-3ccb-4d9b-b48d-dce8c34f3eab\") " Mar 10 19:12:43 crc kubenswrapper[4861]: I0310 19:12:43.635747 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b5a6522b-3ccb-4d9b-b48d-dce8c34f3eab-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "b5a6522b-3ccb-4d9b-b48d-dce8c34f3eab" (UID: "b5a6522b-3ccb-4d9b-b48d-dce8c34f3eab"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 10 19:12:43 crc kubenswrapper[4861]: I0310 19:12:43.635813 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b5a6522b-3ccb-4d9b-b48d-dce8c34f3eab-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "b5a6522b-3ccb-4d9b-b48d-dce8c34f3eab" (UID: "b5a6522b-3ccb-4d9b-b48d-dce8c34f3eab"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 10 19:12:43 crc kubenswrapper[4861]: I0310 19:12:43.640482 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b5a6522b-3ccb-4d9b-b48d-dce8c34f3eab-scripts" (OuterVolumeSpecName: "scripts") pod "b5a6522b-3ccb-4d9b-b48d-dce8c34f3eab" (UID: "b5a6522b-3ccb-4d9b-b48d-dce8c34f3eab"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 10 19:12:43 crc kubenswrapper[4861]: I0310 19:12:43.642339 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b5a6522b-3ccb-4d9b-b48d-dce8c34f3eab-kube-api-access-dkds8" (OuterVolumeSpecName: "kube-api-access-dkds8") pod "b5a6522b-3ccb-4d9b-b48d-dce8c34f3eab" (UID: "b5a6522b-3ccb-4d9b-b48d-dce8c34f3eab"). InnerVolumeSpecName "kube-api-access-dkds8". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 10 19:12:43 crc kubenswrapper[4861]: I0310 19:12:43.688099 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b5a6522b-3ccb-4d9b-b48d-dce8c34f3eab-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "b5a6522b-3ccb-4d9b-b48d-dce8c34f3eab" (UID: "b5a6522b-3ccb-4d9b-b48d-dce8c34f3eab"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 10 19:12:43 crc kubenswrapper[4861]: I0310 19:12:43.737385 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dkds8\" (UniqueName: \"kubernetes.io/projected/b5a6522b-3ccb-4d9b-b48d-dce8c34f3eab-kube-api-access-dkds8\") on node \"crc\" DevicePath \"\"" Mar 10 19:12:43 crc kubenswrapper[4861]: I0310 19:12:43.737414 4861 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b5a6522b-3ccb-4d9b-b48d-dce8c34f3eab-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Mar 10 19:12:43 crc kubenswrapper[4861]: I0310 19:12:43.737439 4861 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b5a6522b-3ccb-4d9b-b48d-dce8c34f3eab-run-httpd\") on node \"crc\" DevicePath \"\"" Mar 10 19:12:43 crc kubenswrapper[4861]: I0310 19:12:43.737450 4861 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b5a6522b-3ccb-4d9b-b48d-dce8c34f3eab-scripts\") on node 
\"crc\" DevicePath \"\"" Mar 10 19:12:43 crc kubenswrapper[4861]: I0310 19:12:43.737459 4861 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b5a6522b-3ccb-4d9b-b48d-dce8c34f3eab-log-httpd\") on node \"crc\" DevicePath \"\"" Mar 10 19:12:43 crc kubenswrapper[4861]: I0310 19:12:43.748151 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b5a6522b-3ccb-4d9b-b48d-dce8c34f3eab-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b5a6522b-3ccb-4d9b-b48d-dce8c34f3eab" (UID: "b5a6522b-3ccb-4d9b-b48d-dce8c34f3eab"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 10 19:12:43 crc kubenswrapper[4861]: I0310 19:12:43.749400 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b5a6522b-3ccb-4d9b-b48d-dce8c34f3eab-config-data" (OuterVolumeSpecName: "config-data") pod "b5a6522b-3ccb-4d9b-b48d-dce8c34f3eab" (UID: "b5a6522b-3ccb-4d9b-b48d-dce8c34f3eab"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 10 19:12:43 crc kubenswrapper[4861]: I0310 19:12:43.838418 4861 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b5a6522b-3ccb-4d9b-b48d-dce8c34f3eab-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 10 19:12:43 crc kubenswrapper[4861]: I0310 19:12:43.838453 4861 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b5a6522b-3ccb-4d9b-b48d-dce8c34f3eab-config-data\") on node \"crc\" DevicePath \"\"" Mar 10 19:12:44 crc kubenswrapper[4861]: I0310 19:12:44.290641 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Mar 10 19:12:44 crc kubenswrapper[4861]: I0310 19:12:44.290649 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b5a6522b-3ccb-4d9b-b48d-dce8c34f3eab","Type":"ContainerDied","Data":"a5210cbe5dbf1cc1e370c6f7c103876128db9be174e4611f14dcedafb4348b09"} Mar 10 19:12:44 crc kubenswrapper[4861]: I0310 19:12:44.291442 4861 scope.go:117] "RemoveContainer" containerID="aec68c03619f69a3eb371ec522aa257bf49ad74916348fcb5bdc5a8ffc86c627" Mar 10 19:12:44 crc kubenswrapper[4861]: I0310 19:12:44.335246 4861 scope.go:117] "RemoveContainer" containerID="bb6f71b5360bba8bdcea9ee90a666f5c40c9a425f05d5c8ad3c7dd33a254a044" Mar 10 19:12:44 crc kubenswrapper[4861]: I0310 19:12:44.349397 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Mar 10 19:12:44 crc kubenswrapper[4861]: I0310 19:12:44.362563 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Mar 10 19:12:44 crc kubenswrapper[4861]: I0310 19:12:44.372844 4861 scope.go:117] "RemoveContainer" containerID="2d61b389d848ac0fe241eee6fd117d6cb26b15215325ec3da5afdb59ec6c22fa" Mar 10 19:12:44 crc kubenswrapper[4861]: I0310 19:12:44.375136 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Mar 10 19:12:44 crc kubenswrapper[4861]: E0310 19:12:44.375545 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b5a6522b-3ccb-4d9b-b48d-dce8c34f3eab" containerName="ceilometer-notification-agent" Mar 10 19:12:44 crc kubenswrapper[4861]: I0310 19:12:44.375567 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="b5a6522b-3ccb-4d9b-b48d-dce8c34f3eab" containerName="ceilometer-notification-agent" Mar 10 19:12:44 crc kubenswrapper[4861]: E0310 19:12:44.375583 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b5a6522b-3ccb-4d9b-b48d-dce8c34f3eab" containerName="ceilometer-central-agent" Mar 10 19:12:44 crc 
kubenswrapper[4861]: I0310 19:12:44.375594 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="b5a6522b-3ccb-4d9b-b48d-dce8c34f3eab" containerName="ceilometer-central-agent" Mar 10 19:12:44 crc kubenswrapper[4861]: E0310 19:12:44.375619 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b5a6522b-3ccb-4d9b-b48d-dce8c34f3eab" containerName="proxy-httpd" Mar 10 19:12:44 crc kubenswrapper[4861]: I0310 19:12:44.375627 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="b5a6522b-3ccb-4d9b-b48d-dce8c34f3eab" containerName="proxy-httpd" Mar 10 19:12:44 crc kubenswrapper[4861]: E0310 19:12:44.375648 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b5a6522b-3ccb-4d9b-b48d-dce8c34f3eab" containerName="sg-core" Mar 10 19:12:44 crc kubenswrapper[4861]: I0310 19:12:44.375655 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="b5a6522b-3ccb-4d9b-b48d-dce8c34f3eab" containerName="sg-core" Mar 10 19:12:44 crc kubenswrapper[4861]: I0310 19:12:44.376311 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="b5a6522b-3ccb-4d9b-b48d-dce8c34f3eab" containerName="sg-core" Mar 10 19:12:44 crc kubenswrapper[4861]: I0310 19:12:44.376338 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="b5a6522b-3ccb-4d9b-b48d-dce8c34f3eab" containerName="ceilometer-central-agent" Mar 10 19:12:44 crc kubenswrapper[4861]: I0310 19:12:44.376368 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="b5a6522b-3ccb-4d9b-b48d-dce8c34f3eab" containerName="ceilometer-notification-agent" Mar 10 19:12:44 crc kubenswrapper[4861]: I0310 19:12:44.376386 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="b5a6522b-3ccb-4d9b-b48d-dce8c34f3eab" containerName="proxy-httpd" Mar 10 19:12:44 crc kubenswrapper[4861]: I0310 19:12:44.379442 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Mar 10 19:12:44 crc kubenswrapper[4861]: I0310 19:12:44.386508 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Mar 10 19:12:44 crc kubenswrapper[4861]: I0310 19:12:44.386850 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Mar 10 19:12:44 crc kubenswrapper[4861]: I0310 19:12:44.386906 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Mar 10 19:12:44 crc kubenswrapper[4861]: I0310 19:12:44.389001 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Mar 10 19:12:44 crc kubenswrapper[4861]: I0310 19:12:44.416424 4861 scope.go:117] "RemoveContainer" containerID="07667775b38ed6a21232726c2f1719e926731eb5a7eee20c7a79075101659dd2" Mar 10 19:12:44 crc kubenswrapper[4861]: I0310 19:12:44.450316 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8f40dd98-6e81-4fc3-9118-5a9aa1befc14-config-data\") pod \"ceilometer-0\" (UID: \"8f40dd98-6e81-4fc3-9118-5a9aa1befc14\") " pod="openstack/ceilometer-0" Mar 10 19:12:44 crc kubenswrapper[4861]: I0310 19:12:44.450404 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/8f40dd98-6e81-4fc3-9118-5a9aa1befc14-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"8f40dd98-6e81-4fc3-9118-5a9aa1befc14\") " pod="openstack/ceilometer-0" Mar 10 19:12:44 crc kubenswrapper[4861]: I0310 19:12:44.450427 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j2qbz\" (UniqueName: \"kubernetes.io/projected/8f40dd98-6e81-4fc3-9118-5a9aa1befc14-kube-api-access-j2qbz\") pod \"ceilometer-0\" (UID: \"8f40dd98-6e81-4fc3-9118-5a9aa1befc14\") " 
pod="openstack/ceilometer-0" Mar 10 19:12:44 crc kubenswrapper[4861]: I0310 19:12:44.450607 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f40dd98-6e81-4fc3-9118-5a9aa1befc14-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"8f40dd98-6e81-4fc3-9118-5a9aa1befc14\") " pod="openstack/ceilometer-0" Mar 10 19:12:44 crc kubenswrapper[4861]: I0310 19:12:44.450685 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8f40dd98-6e81-4fc3-9118-5a9aa1befc14-run-httpd\") pod \"ceilometer-0\" (UID: \"8f40dd98-6e81-4fc3-9118-5a9aa1befc14\") " pod="openstack/ceilometer-0" Mar 10 19:12:44 crc kubenswrapper[4861]: I0310 19:12:44.450792 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8f40dd98-6e81-4fc3-9118-5a9aa1befc14-scripts\") pod \"ceilometer-0\" (UID: \"8f40dd98-6e81-4fc3-9118-5a9aa1befc14\") " pod="openstack/ceilometer-0" Mar 10 19:12:44 crc kubenswrapper[4861]: I0310 19:12:44.450836 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8f40dd98-6e81-4fc3-9118-5a9aa1befc14-log-httpd\") pod \"ceilometer-0\" (UID: \"8f40dd98-6e81-4fc3-9118-5a9aa1befc14\") " pod="openstack/ceilometer-0" Mar 10 19:12:44 crc kubenswrapper[4861]: I0310 19:12:44.450852 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8f40dd98-6e81-4fc3-9118-5a9aa1befc14-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"8f40dd98-6e81-4fc3-9118-5a9aa1befc14\") " pod="openstack/ceilometer-0" Mar 10 19:12:44 crc kubenswrapper[4861]: I0310 19:12:44.552349 4861 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8f40dd98-6e81-4fc3-9118-5a9aa1befc14-scripts\") pod \"ceilometer-0\" (UID: \"8f40dd98-6e81-4fc3-9118-5a9aa1befc14\") " pod="openstack/ceilometer-0" Mar 10 19:12:44 crc kubenswrapper[4861]: I0310 19:12:44.552404 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8f40dd98-6e81-4fc3-9118-5a9aa1befc14-log-httpd\") pod \"ceilometer-0\" (UID: \"8f40dd98-6e81-4fc3-9118-5a9aa1befc14\") " pod="openstack/ceilometer-0" Mar 10 19:12:44 crc kubenswrapper[4861]: I0310 19:12:44.552432 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8f40dd98-6e81-4fc3-9118-5a9aa1befc14-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"8f40dd98-6e81-4fc3-9118-5a9aa1befc14\") " pod="openstack/ceilometer-0" Mar 10 19:12:44 crc kubenswrapper[4861]: I0310 19:12:44.552515 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8f40dd98-6e81-4fc3-9118-5a9aa1befc14-config-data\") pod \"ceilometer-0\" (UID: \"8f40dd98-6e81-4fc3-9118-5a9aa1befc14\") " pod="openstack/ceilometer-0" Mar 10 19:12:44 crc kubenswrapper[4861]: I0310 19:12:44.552595 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/8f40dd98-6e81-4fc3-9118-5a9aa1befc14-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"8f40dd98-6e81-4fc3-9118-5a9aa1befc14\") " pod="openstack/ceilometer-0" Mar 10 19:12:44 crc kubenswrapper[4861]: I0310 19:12:44.552623 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j2qbz\" (UniqueName: \"kubernetes.io/projected/8f40dd98-6e81-4fc3-9118-5a9aa1befc14-kube-api-access-j2qbz\") pod \"ceilometer-0\" (UID: 
\"8f40dd98-6e81-4fc3-9118-5a9aa1befc14\") " pod="openstack/ceilometer-0" Mar 10 19:12:44 crc kubenswrapper[4861]: I0310 19:12:44.552666 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f40dd98-6e81-4fc3-9118-5a9aa1befc14-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"8f40dd98-6e81-4fc3-9118-5a9aa1befc14\") " pod="openstack/ceilometer-0" Mar 10 19:12:44 crc kubenswrapper[4861]: I0310 19:12:44.552695 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8f40dd98-6e81-4fc3-9118-5a9aa1befc14-run-httpd\") pod \"ceilometer-0\" (UID: \"8f40dd98-6e81-4fc3-9118-5a9aa1befc14\") " pod="openstack/ceilometer-0" Mar 10 19:12:44 crc kubenswrapper[4861]: I0310 19:12:44.553370 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8f40dd98-6e81-4fc3-9118-5a9aa1befc14-log-httpd\") pod \"ceilometer-0\" (UID: \"8f40dd98-6e81-4fc3-9118-5a9aa1befc14\") " pod="openstack/ceilometer-0" Mar 10 19:12:44 crc kubenswrapper[4861]: I0310 19:12:44.553398 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8f40dd98-6e81-4fc3-9118-5a9aa1befc14-run-httpd\") pod \"ceilometer-0\" (UID: \"8f40dd98-6e81-4fc3-9118-5a9aa1befc14\") " pod="openstack/ceilometer-0" Mar 10 19:12:44 crc kubenswrapper[4861]: I0310 19:12:44.557357 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8f40dd98-6e81-4fc3-9118-5a9aa1befc14-scripts\") pod \"ceilometer-0\" (UID: \"8f40dd98-6e81-4fc3-9118-5a9aa1befc14\") " pod="openstack/ceilometer-0" Mar 10 19:12:44 crc kubenswrapper[4861]: I0310 19:12:44.560064 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/8f40dd98-6e81-4fc3-9118-5a9aa1befc14-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"8f40dd98-6e81-4fc3-9118-5a9aa1befc14\") " pod="openstack/ceilometer-0" Mar 10 19:12:44 crc kubenswrapper[4861]: I0310 19:12:44.561012 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8f40dd98-6e81-4fc3-9118-5a9aa1befc14-config-data\") pod \"ceilometer-0\" (UID: \"8f40dd98-6e81-4fc3-9118-5a9aa1befc14\") " pod="openstack/ceilometer-0" Mar 10 19:12:44 crc kubenswrapper[4861]: I0310 19:12:44.561353 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8f40dd98-6e81-4fc3-9118-5a9aa1befc14-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"8f40dd98-6e81-4fc3-9118-5a9aa1befc14\") " pod="openstack/ceilometer-0" Mar 10 19:12:44 crc kubenswrapper[4861]: I0310 19:12:44.568022 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f40dd98-6e81-4fc3-9118-5a9aa1befc14-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"8f40dd98-6e81-4fc3-9118-5a9aa1befc14\") " pod="openstack/ceilometer-0" Mar 10 19:12:44 crc kubenswrapper[4861]: I0310 19:12:44.578324 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j2qbz\" (UniqueName: \"kubernetes.io/projected/8f40dd98-6e81-4fc3-9118-5a9aa1befc14-kube-api-access-j2qbz\") pod \"ceilometer-0\" (UID: \"8f40dd98-6e81-4fc3-9118-5a9aa1befc14\") " pod="openstack/ceilometer-0" Mar 10 19:12:44 crc kubenswrapper[4861]: I0310 19:12:44.712374 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Mar 10 19:12:44 crc kubenswrapper[4861]: I0310 19:12:44.806414 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Mar 10 19:12:44 crc kubenswrapper[4861]: I0310 19:12:44.806490 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Mar 10 19:12:44 crc kubenswrapper[4861]: I0310 19:12:44.973593 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b5a6522b-3ccb-4d9b-b48d-dce8c34f3eab" path="/var/lib/kubelet/pods/b5a6522b-3ccb-4d9b-b48d-dce8c34f3eab/volumes" Mar 10 19:12:45 crc kubenswrapper[4861]: I0310 19:12:45.044880 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Mar 10 19:12:45 crc kubenswrapper[4861]: I0310 19:12:45.304837 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8f40dd98-6e81-4fc3-9118-5a9aa1befc14","Type":"ContainerStarted","Data":"2837d16a1c91e589a40aeb38f557c9cd48e281a43d2539816c2df2f827abf420"} Mar 10 19:12:45 crc kubenswrapper[4861]: I0310 19:12:45.890008 4861 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="165e122d-1525-44d5-adcf-28470f33e74d" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.205:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 10 19:12:45 crc kubenswrapper[4861]: I0310 19:12:45.890049 4861 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="165e122d-1525-44d5-adcf-28470f33e74d" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.205:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 10 19:12:46 crc kubenswrapper[4861]: I0310 19:12:46.320137 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"8f40dd98-6e81-4fc3-9118-5a9aa1befc14","Type":"ContainerStarted","Data":"04ce0706c6df9f9f618c249d98f5c441eb8d9452970f2233d8201342509548ad"} Mar 10 19:12:47 crc kubenswrapper[4861]: I0310 19:12:47.330006 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8f40dd98-6e81-4fc3-9118-5a9aa1befc14","Type":"ContainerStarted","Data":"49fe551f811518da53606cf89dd2db72b7f22f467a852d6c1dc6cddda9fb1eec"} Mar 10 19:12:48 crc kubenswrapper[4861]: I0310 19:12:48.341838 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8f40dd98-6e81-4fc3-9118-5a9aa1befc14","Type":"ContainerStarted","Data":"2696f61078f0a1a74b63a575d712dd26e91f63e6a86f67da6deadc2dfdfe94cc"} Mar 10 19:12:49 crc kubenswrapper[4861]: I0310 19:12:49.125539 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-jlr7v"] Mar 10 19:12:49 crc kubenswrapper[4861]: I0310 19:12:49.128629 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-jlr7v" Mar 10 19:12:49 crc kubenswrapper[4861]: I0310 19:12:49.168307 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7025462d-4b03-4883-9b6f-73874a4760b6-catalog-content\") pod \"certified-operators-jlr7v\" (UID: \"7025462d-4b03-4883-9b6f-73874a4760b6\") " pod="openshift-marketplace/certified-operators-jlr7v" Mar 10 19:12:49 crc kubenswrapper[4861]: I0310 19:12:49.168855 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7025462d-4b03-4883-9b6f-73874a4760b6-utilities\") pod \"certified-operators-jlr7v\" (UID: \"7025462d-4b03-4883-9b6f-73874a4760b6\") " pod="openshift-marketplace/certified-operators-jlr7v" Mar 10 19:12:49 crc kubenswrapper[4861]: I0310 19:12:49.169007 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q7wkg\" (UniqueName: \"kubernetes.io/projected/7025462d-4b03-4883-9b6f-73874a4760b6-kube-api-access-q7wkg\") pod \"certified-operators-jlr7v\" (UID: \"7025462d-4b03-4883-9b6f-73874a4760b6\") " pod="openshift-marketplace/certified-operators-jlr7v" Mar 10 19:12:49 crc kubenswrapper[4861]: I0310 19:12:49.216231 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-jlr7v"] Mar 10 19:12:49 crc kubenswrapper[4861]: I0310 19:12:49.270755 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q7wkg\" (UniqueName: \"kubernetes.io/projected/7025462d-4b03-4883-9b6f-73874a4760b6-kube-api-access-q7wkg\") pod \"certified-operators-jlr7v\" (UID: \"7025462d-4b03-4883-9b6f-73874a4760b6\") " pod="openshift-marketplace/certified-operators-jlr7v" Mar 10 19:12:49 crc kubenswrapper[4861]: I0310 19:12:49.271205 4861 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7025462d-4b03-4883-9b6f-73874a4760b6-catalog-content\") pod \"certified-operators-jlr7v\" (UID: \"7025462d-4b03-4883-9b6f-73874a4760b6\") " pod="openshift-marketplace/certified-operators-jlr7v" Mar 10 19:12:49 crc kubenswrapper[4861]: I0310 19:12:49.271299 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7025462d-4b03-4883-9b6f-73874a4760b6-utilities\") pod \"certified-operators-jlr7v\" (UID: \"7025462d-4b03-4883-9b6f-73874a4760b6\") " pod="openshift-marketplace/certified-operators-jlr7v" Mar 10 19:12:49 crc kubenswrapper[4861]: I0310 19:12:49.271884 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7025462d-4b03-4883-9b6f-73874a4760b6-catalog-content\") pod \"certified-operators-jlr7v\" (UID: \"7025462d-4b03-4883-9b6f-73874a4760b6\") " pod="openshift-marketplace/certified-operators-jlr7v" Mar 10 19:12:49 crc kubenswrapper[4861]: I0310 19:12:49.272023 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7025462d-4b03-4883-9b6f-73874a4760b6-utilities\") pod \"certified-operators-jlr7v\" (UID: \"7025462d-4b03-4883-9b6f-73874a4760b6\") " pod="openshift-marketplace/certified-operators-jlr7v" Mar 10 19:12:49 crc kubenswrapper[4861]: I0310 19:12:49.290894 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q7wkg\" (UniqueName: \"kubernetes.io/projected/7025462d-4b03-4883-9b6f-73874a4760b6-kube-api-access-q7wkg\") pod \"certified-operators-jlr7v\" (UID: \"7025462d-4b03-4883-9b6f-73874a4760b6\") " pod="openshift-marketplace/certified-operators-jlr7v" Mar 10 19:12:49 crc kubenswrapper[4861]: I0310 19:12:49.480160 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-jlr7v" Mar 10 19:12:49 crc kubenswrapper[4861]: I0310 19:12:49.616673 4861 scope.go:117] "RemoveContainer" containerID="bf80b003e3d2a7edc4ed173b54216a47f70559f978c7bb9fbc3f53c42142760f" Mar 10 19:12:49 crc kubenswrapper[4861]: I0310 19:12:49.974417 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-jlr7v"] Mar 10 19:12:50 crc kubenswrapper[4861]: I0310 19:12:50.362391 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8f40dd98-6e81-4fc3-9118-5a9aa1befc14","Type":"ContainerStarted","Data":"2873c36f54704a23a3c98319164da99274e57bcb8f5d04bfec19c8a4567f332d"} Mar 10 19:12:50 crc kubenswrapper[4861]: I0310 19:12:50.363668 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Mar 10 19:12:50 crc kubenswrapper[4861]: I0310 19:12:50.365067 4861 generic.go:334] "Generic (PLEG): container finished" podID="7025462d-4b03-4883-9b6f-73874a4760b6" containerID="6fe88dfbf4f8485b2a135240ca91cba6975d364133d39a090593fe1eb6fe53ea" exitCode=0 Mar 10 19:12:50 crc kubenswrapper[4861]: I0310 19:12:50.365105 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jlr7v" event={"ID":"7025462d-4b03-4883-9b6f-73874a4760b6","Type":"ContainerDied","Data":"6fe88dfbf4f8485b2a135240ca91cba6975d364133d39a090593fe1eb6fe53ea"} Mar 10 19:12:50 crc kubenswrapper[4861]: I0310 19:12:50.365121 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jlr7v" event={"ID":"7025462d-4b03-4883-9b6f-73874a4760b6","Type":"ContainerStarted","Data":"ff4d112d59d7807f7a50a1175b14218643d05c0be7353ff9e0908c3455ad1783"} Mar 10 19:12:50 crc kubenswrapper[4861]: I0310 19:12:50.395222 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.229025644 
podStartE2EDuration="6.3952046s" podCreationTimestamp="2026-03-10 19:12:44 +0000 UTC" firstStartedPulling="2026-03-10 19:12:45.053553297 +0000 UTC m=+1508.816989267" lastFinishedPulling="2026-03-10 19:12:49.219732253 +0000 UTC m=+1512.983168223" observedRunningTime="2026-03-10 19:12:50.385349976 +0000 UTC m=+1514.148785936" watchObservedRunningTime="2026-03-10 19:12:50.3952046 +0000 UTC m=+1514.158640560" Mar 10 19:12:50 crc kubenswrapper[4861]: I0310 19:12:50.410683 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Mar 10 19:12:50 crc kubenswrapper[4861]: I0310 19:12:50.421758 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Mar 10 19:12:50 crc kubenswrapper[4861]: I0310 19:12:50.426075 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Mar 10 19:12:50 crc kubenswrapper[4861]: I0310 19:12:50.614343 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Mar 10 19:12:51 crc kubenswrapper[4861]: I0310 19:12:51.381066 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jlr7v" event={"ID":"7025462d-4b03-4883-9b6f-73874a4760b6","Type":"ContainerStarted","Data":"6f716abeb8ba3217a823d22096e3329f5d993ab3d1ab6ea257146c2776d6fd1e"} Mar 10 19:12:51 crc kubenswrapper[4861]: I0310 19:12:51.388917 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Mar 10 19:12:51 crc kubenswrapper[4861]: E0310 19:12:51.864058 4861 manager.go:1116] Failed to create existing container: /kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb5a6522b_3ccb_4d9b_b48d_dce8c34f3eab.slice/crio-a5210cbe5dbf1cc1e370c6f7c103876128db9be174e4611f14dcedafb4348b09: Error finding container a5210cbe5dbf1cc1e370c6f7c103876128db9be174e4611f14dcedafb4348b09: Status 404 returned error 
can't find the container with id a5210cbe5dbf1cc1e370c6f7c103876128db9be174e4611f14dcedafb4348b09 Mar 10 19:12:51 crc kubenswrapper[4861]: I0310 19:12:51.991601 4861 patch_prober.go:28] interesting pod/machine-config-daemon-qttbr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 10 19:12:51 crc kubenswrapper[4861]: I0310 19:12:51.991647 4861 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qttbr" podUID="771189c2-452d-4204-a0b7-abfe9ba62bd0" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 10 19:12:52 crc kubenswrapper[4861]: E0310 19:12:52.083285 4861 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb5a6522b_3ccb_4d9b_b48d_dce8c34f3eab.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb5a6522b_3ccb_4d9b_b48d_dce8c34f3eab.slice/crio-conmon-2d61b389d848ac0fe241eee6fd117d6cb26b15215325ec3da5afdb59ec6c22fa.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb5a6522b_3ccb_4d9b_b48d_dce8c34f3eab.slice/crio-2d61b389d848ac0fe241eee6fd117d6cb26b15215325ec3da5afdb59ec6c22fa.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod366c866d_7c07_4050_a22f_ddc4421c0447.slice/crio-c80dba858eaf69e13c7b64ab845832510cd27a1aaa334a69cfabf7f52a676df3.scope\": RecentStats: unable to find data in memory cache], 
[\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod366c866d_7c07_4050_a22f_ddc4421c0447.slice/crio-conmon-c80dba858eaf69e13c7b64ab845832510cd27a1aaa334a69cfabf7f52a676df3.scope\": RecentStats: unable to find data in memory cache]" Mar 10 19:12:52 crc kubenswrapper[4861]: I0310 19:12:52.197889 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Mar 10 19:12:52 crc kubenswrapper[4861]: I0310 19:12:52.253050 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/366c866d-7c07-4050-a22f-ddc4421c0447-combined-ca-bundle\") pod \"366c866d-7c07-4050-a22f-ddc4421c0447\" (UID: \"366c866d-7c07-4050-a22f-ddc4421c0447\") " Mar 10 19:12:52 crc kubenswrapper[4861]: I0310 19:12:52.253120 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5qbq4\" (UniqueName: \"kubernetes.io/projected/366c866d-7c07-4050-a22f-ddc4421c0447-kube-api-access-5qbq4\") pod \"366c866d-7c07-4050-a22f-ddc4421c0447\" (UID: \"366c866d-7c07-4050-a22f-ddc4421c0447\") " Mar 10 19:12:52 crc kubenswrapper[4861]: I0310 19:12:52.253274 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/366c866d-7c07-4050-a22f-ddc4421c0447-config-data\") pod \"366c866d-7c07-4050-a22f-ddc4421c0447\" (UID: \"366c866d-7c07-4050-a22f-ddc4421c0447\") " Mar 10 19:12:52 crc kubenswrapper[4861]: I0310 19:12:52.265135 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/366c866d-7c07-4050-a22f-ddc4421c0447-kube-api-access-5qbq4" (OuterVolumeSpecName: "kube-api-access-5qbq4") pod "366c866d-7c07-4050-a22f-ddc4421c0447" (UID: "366c866d-7c07-4050-a22f-ddc4421c0447"). InnerVolumeSpecName "kube-api-access-5qbq4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 10 19:12:52 crc kubenswrapper[4861]: I0310 19:12:52.278688 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/366c866d-7c07-4050-a22f-ddc4421c0447-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "366c866d-7c07-4050-a22f-ddc4421c0447" (UID: "366c866d-7c07-4050-a22f-ddc4421c0447"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 10 19:12:52 crc kubenswrapper[4861]: I0310 19:12:52.299882 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/366c866d-7c07-4050-a22f-ddc4421c0447-config-data" (OuterVolumeSpecName: "config-data") pod "366c866d-7c07-4050-a22f-ddc4421c0447" (UID: "366c866d-7c07-4050-a22f-ddc4421c0447"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 10 19:12:52 crc kubenswrapper[4861]: I0310 19:12:52.356160 4861 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/366c866d-7c07-4050-a22f-ddc4421c0447-config-data\") on node \"crc\" DevicePath \"\"" Mar 10 19:12:52 crc kubenswrapper[4861]: I0310 19:12:52.356207 4861 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/366c866d-7c07-4050-a22f-ddc4421c0447-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 10 19:12:52 crc kubenswrapper[4861]: I0310 19:12:52.356220 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5qbq4\" (UniqueName: \"kubernetes.io/projected/366c866d-7c07-4050-a22f-ddc4421c0447-kube-api-access-5qbq4\") on node \"crc\" DevicePath \"\"" Mar 10 19:12:52 crc kubenswrapper[4861]: I0310 19:12:52.391629 4861 generic.go:334] "Generic (PLEG): container finished" podID="366c866d-7c07-4050-a22f-ddc4421c0447" containerID="c80dba858eaf69e13c7b64ab845832510cd27a1aaa334a69cfabf7f52a676df3" 
exitCode=137 Mar 10 19:12:52 crc kubenswrapper[4861]: I0310 19:12:52.391723 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"366c866d-7c07-4050-a22f-ddc4421c0447","Type":"ContainerDied","Data":"c80dba858eaf69e13c7b64ab845832510cd27a1aaa334a69cfabf7f52a676df3"} Mar 10 19:12:52 crc kubenswrapper[4861]: I0310 19:12:52.391748 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"366c866d-7c07-4050-a22f-ddc4421c0447","Type":"ContainerDied","Data":"5763971c7d39d52b22d6756d5e9439c8519572edf25e55666af184bfc5ab1ce7"} Mar 10 19:12:52 crc kubenswrapper[4861]: I0310 19:12:52.391764 4861 scope.go:117] "RemoveContainer" containerID="c80dba858eaf69e13c7b64ab845832510cd27a1aaa334a69cfabf7f52a676df3" Mar 10 19:12:52 crc kubenswrapper[4861]: I0310 19:12:52.391784 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Mar 10 19:12:52 crc kubenswrapper[4861]: I0310 19:12:52.403236 4861 generic.go:334] "Generic (PLEG): container finished" podID="7025462d-4b03-4883-9b6f-73874a4760b6" containerID="6f716abeb8ba3217a823d22096e3329f5d993ab3d1ab6ea257146c2776d6fd1e" exitCode=0 Mar 10 19:12:52 crc kubenswrapper[4861]: I0310 19:12:52.403386 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jlr7v" event={"ID":"7025462d-4b03-4883-9b6f-73874a4760b6","Type":"ContainerDied","Data":"6f716abeb8ba3217a823d22096e3329f5d993ab3d1ab6ea257146c2776d6fd1e"} Mar 10 19:12:52 crc kubenswrapper[4861]: I0310 19:12:52.433654 4861 scope.go:117] "RemoveContainer" containerID="c80dba858eaf69e13c7b64ab845832510cd27a1aaa334a69cfabf7f52a676df3" Mar 10 19:12:52 crc kubenswrapper[4861]: E0310 19:12:52.434122 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"c80dba858eaf69e13c7b64ab845832510cd27a1aaa334a69cfabf7f52a676df3\": container with ID starting with c80dba858eaf69e13c7b64ab845832510cd27a1aaa334a69cfabf7f52a676df3 not found: ID does not exist" containerID="c80dba858eaf69e13c7b64ab845832510cd27a1aaa334a69cfabf7f52a676df3" Mar 10 19:12:52 crc kubenswrapper[4861]: I0310 19:12:52.434154 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c80dba858eaf69e13c7b64ab845832510cd27a1aaa334a69cfabf7f52a676df3"} err="failed to get container status \"c80dba858eaf69e13c7b64ab845832510cd27a1aaa334a69cfabf7f52a676df3\": rpc error: code = NotFound desc = could not find container \"c80dba858eaf69e13c7b64ab845832510cd27a1aaa334a69cfabf7f52a676df3\": container with ID starting with c80dba858eaf69e13c7b64ab845832510cd27a1aaa334a69cfabf7f52a676df3 not found: ID does not exist" Mar 10 19:12:52 crc kubenswrapper[4861]: I0310 19:12:52.456868 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Mar 10 19:12:52 crc kubenswrapper[4861]: I0310 19:12:52.471640 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Mar 10 19:12:52 crc kubenswrapper[4861]: I0310 19:12:52.487475 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Mar 10 19:12:52 crc kubenswrapper[4861]: E0310 19:12:52.488017 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="366c866d-7c07-4050-a22f-ddc4421c0447" containerName="nova-cell1-novncproxy-novncproxy" Mar 10 19:12:52 crc kubenswrapper[4861]: I0310 19:12:52.488034 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="366c866d-7c07-4050-a22f-ddc4421c0447" containerName="nova-cell1-novncproxy-novncproxy" Mar 10 19:12:52 crc kubenswrapper[4861]: I0310 19:12:52.488278 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="366c866d-7c07-4050-a22f-ddc4421c0447" containerName="nova-cell1-novncproxy-novncproxy" Mar 
10 19:12:52 crc kubenswrapper[4861]: I0310 19:12:52.488996 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Mar 10 19:12:52 crc kubenswrapper[4861]: I0310 19:12:52.498953 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc" Mar 10 19:12:52 crc kubenswrapper[4861]: I0310 19:12:52.499371 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Mar 10 19:12:52 crc kubenswrapper[4861]: I0310 19:12:52.501640 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt" Mar 10 19:12:52 crc kubenswrapper[4861]: I0310 19:12:52.505575 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Mar 10 19:12:52 crc kubenswrapper[4861]: I0310 19:12:52.563749 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/1b1af633-31f5-4658-bda4-fc9c010d6280-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"1b1af633-31f5-4658-bda4-fc9c010d6280\") " pod="openstack/nova-cell1-novncproxy-0" Mar 10 19:12:52 crc kubenswrapper[4861]: I0310 19:12:52.564321 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1b1af633-31f5-4658-bda4-fc9c010d6280-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"1b1af633-31f5-4658-bda4-fc9c010d6280\") " pod="openstack/nova-cell1-novncproxy-0" Mar 10 19:12:52 crc kubenswrapper[4861]: I0310 19:12:52.564423 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mgpqk\" (UniqueName: \"kubernetes.io/projected/1b1af633-31f5-4658-bda4-fc9c010d6280-kube-api-access-mgpqk\") pod 
\"nova-cell1-novncproxy-0\" (UID: \"1b1af633-31f5-4658-bda4-fc9c010d6280\") " pod="openstack/nova-cell1-novncproxy-0" Mar 10 19:12:52 crc kubenswrapper[4861]: I0310 19:12:52.564864 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1b1af633-31f5-4658-bda4-fc9c010d6280-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"1b1af633-31f5-4658-bda4-fc9c010d6280\") " pod="openstack/nova-cell1-novncproxy-0" Mar 10 19:12:52 crc kubenswrapper[4861]: I0310 19:12:52.564940 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/1b1af633-31f5-4658-bda4-fc9c010d6280-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"1b1af633-31f5-4658-bda4-fc9c010d6280\") " pod="openstack/nova-cell1-novncproxy-0" Mar 10 19:12:52 crc kubenswrapper[4861]: I0310 19:12:52.666917 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1b1af633-31f5-4658-bda4-fc9c010d6280-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"1b1af633-31f5-4658-bda4-fc9c010d6280\") " pod="openstack/nova-cell1-novncproxy-0" Mar 10 19:12:52 crc kubenswrapper[4861]: I0310 19:12:52.667008 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mgpqk\" (UniqueName: \"kubernetes.io/projected/1b1af633-31f5-4658-bda4-fc9c010d6280-kube-api-access-mgpqk\") pod \"nova-cell1-novncproxy-0\" (UID: \"1b1af633-31f5-4658-bda4-fc9c010d6280\") " pod="openstack/nova-cell1-novncproxy-0" Mar 10 19:12:52 crc kubenswrapper[4861]: I0310 19:12:52.667084 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1b1af633-31f5-4658-bda4-fc9c010d6280-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: 
\"1b1af633-31f5-4658-bda4-fc9c010d6280\") " pod="openstack/nova-cell1-novncproxy-0" Mar 10 19:12:52 crc kubenswrapper[4861]: I0310 19:12:52.667117 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/1b1af633-31f5-4658-bda4-fc9c010d6280-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"1b1af633-31f5-4658-bda4-fc9c010d6280\") " pod="openstack/nova-cell1-novncproxy-0" Mar 10 19:12:52 crc kubenswrapper[4861]: I0310 19:12:52.667344 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/1b1af633-31f5-4658-bda4-fc9c010d6280-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"1b1af633-31f5-4658-bda4-fc9c010d6280\") " pod="openstack/nova-cell1-novncproxy-0" Mar 10 19:12:52 crc kubenswrapper[4861]: I0310 19:12:52.674027 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1b1af633-31f5-4658-bda4-fc9c010d6280-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"1b1af633-31f5-4658-bda4-fc9c010d6280\") " pod="openstack/nova-cell1-novncproxy-0" Mar 10 19:12:52 crc kubenswrapper[4861]: I0310 19:12:52.674640 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/1b1af633-31f5-4658-bda4-fc9c010d6280-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"1b1af633-31f5-4658-bda4-fc9c010d6280\") " pod="openstack/nova-cell1-novncproxy-0" Mar 10 19:12:52 crc kubenswrapper[4861]: I0310 19:12:52.675946 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/1b1af633-31f5-4658-bda4-fc9c010d6280-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"1b1af633-31f5-4658-bda4-fc9c010d6280\") " 
pod="openstack/nova-cell1-novncproxy-0" Mar 10 19:12:52 crc kubenswrapper[4861]: I0310 19:12:52.676192 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1b1af633-31f5-4658-bda4-fc9c010d6280-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"1b1af633-31f5-4658-bda4-fc9c010d6280\") " pod="openstack/nova-cell1-novncproxy-0" Mar 10 19:12:52 crc kubenswrapper[4861]: I0310 19:12:52.692248 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mgpqk\" (UniqueName: \"kubernetes.io/projected/1b1af633-31f5-4658-bda4-fc9c010d6280-kube-api-access-mgpqk\") pod \"nova-cell1-novncproxy-0\" (UID: \"1b1af633-31f5-4658-bda4-fc9c010d6280\") " pod="openstack/nova-cell1-novncproxy-0" Mar 10 19:12:52 crc kubenswrapper[4861]: I0310 19:12:52.824153 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Mar 10 19:12:52 crc kubenswrapper[4861]: I0310 19:12:52.993985 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="366c866d-7c07-4050-a22f-ddc4421c0447" path="/var/lib/kubelet/pods/366c866d-7c07-4050-a22f-ddc4421c0447/volumes" Mar 10 19:12:53 crc kubenswrapper[4861]: I0310 19:12:53.344015 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Mar 10 19:12:53 crc kubenswrapper[4861]: W0310 19:12:53.350329 4861 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1b1af633_31f5_4658_bda4_fc9c010d6280.slice/crio-78ebc06f82f87acf408c5782b175e02038202680597f4e968740be0ad4b62432 WatchSource:0}: Error finding container 78ebc06f82f87acf408c5782b175e02038202680597f4e968740be0ad4b62432: Status 404 returned error can't find the container with id 78ebc06f82f87acf408c5782b175e02038202680597f4e968740be0ad4b62432 Mar 10 19:12:53 crc kubenswrapper[4861]: I0310 19:12:53.414287 4861 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jlr7v" event={"ID":"7025462d-4b03-4883-9b6f-73874a4760b6","Type":"ContainerStarted","Data":"671268944f4c90b16922b1d7a33181fe7fbafd6a4e7676f5d7a3ce1ea06cccb0"} Mar 10 19:12:53 crc kubenswrapper[4861]: I0310 19:12:53.416239 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"1b1af633-31f5-4658-bda4-fc9c010d6280","Type":"ContainerStarted","Data":"78ebc06f82f87acf408c5782b175e02038202680597f4e968740be0ad4b62432"} Mar 10 19:12:54 crc kubenswrapper[4861]: I0310 19:12:54.430882 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"1b1af633-31f5-4658-bda4-fc9c010d6280","Type":"ContainerStarted","Data":"d0530b4bb88089a565eb73c29ff174432d92837c87c92e1b218d1e31d36d1ebd"} Mar 10 19:12:54 crc kubenswrapper[4861]: I0310 19:12:54.452269 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.452247901 podStartE2EDuration="2.452247901s" podCreationTimestamp="2026-03-10 19:12:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-10 19:12:54.451612349 +0000 UTC m=+1518.215048349" watchObservedRunningTime="2026-03-10 19:12:54.452247901 +0000 UTC m=+1518.215683861" Mar 10 19:12:54 crc kubenswrapper[4861]: I0310 19:12:54.459438 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-jlr7v" podStartSLOduration=2.931465668 podStartE2EDuration="5.459419082s" podCreationTimestamp="2026-03-10 19:12:49 +0000 UTC" firstStartedPulling="2026-03-10 19:12:50.366108487 +0000 UTC m=+1514.129544447" lastFinishedPulling="2026-03-10 19:12:52.894061901 +0000 UTC m=+1516.657497861" observedRunningTime="2026-03-10 19:12:53.434625097 +0000 UTC m=+1517.198061097" 
watchObservedRunningTime="2026-03-10 19:12:54.459419082 +0000 UTC m=+1518.222855042" Mar 10 19:12:54 crc kubenswrapper[4861]: I0310 19:12:54.812461 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Mar 10 19:12:54 crc kubenswrapper[4861]: I0310 19:12:54.813633 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Mar 10 19:12:54 crc kubenswrapper[4861]: I0310 19:12:54.814132 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Mar 10 19:12:54 crc kubenswrapper[4861]: I0310 19:12:54.817819 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Mar 10 19:12:55 crc kubenswrapper[4861]: I0310 19:12:55.439488 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Mar 10 19:12:55 crc kubenswrapper[4861]: I0310 19:12:55.442899 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Mar 10 19:12:55 crc kubenswrapper[4861]: I0310 19:12:55.658251 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7749c44969-9z2z2"] Mar 10 19:12:55 crc kubenswrapper[4861]: I0310 19:12:55.659909 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7749c44969-9z2z2" Mar 10 19:12:55 crc kubenswrapper[4861]: I0310 19:12:55.678812 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7749c44969-9z2z2"] Mar 10 19:12:55 crc kubenswrapper[4861]: I0310 19:12:55.729446 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jdvpk\" (UniqueName: \"kubernetes.io/projected/187a3484-7a9d-499a-91d0-1867ed682d05-kube-api-access-jdvpk\") pod \"dnsmasq-dns-7749c44969-9z2z2\" (UID: \"187a3484-7a9d-499a-91d0-1867ed682d05\") " pod="openstack/dnsmasq-dns-7749c44969-9z2z2" Mar 10 19:12:55 crc kubenswrapper[4861]: I0310 19:12:55.729859 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/187a3484-7a9d-499a-91d0-1867ed682d05-dns-svc\") pod \"dnsmasq-dns-7749c44969-9z2z2\" (UID: \"187a3484-7a9d-499a-91d0-1867ed682d05\") " pod="openstack/dnsmasq-dns-7749c44969-9z2z2" Mar 10 19:12:55 crc kubenswrapper[4861]: I0310 19:12:55.729892 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/187a3484-7a9d-499a-91d0-1867ed682d05-ovsdbserver-nb\") pod \"dnsmasq-dns-7749c44969-9z2z2\" (UID: \"187a3484-7a9d-499a-91d0-1867ed682d05\") " pod="openstack/dnsmasq-dns-7749c44969-9z2z2" Mar 10 19:12:55 crc kubenswrapper[4861]: I0310 19:12:55.729919 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/187a3484-7a9d-499a-91d0-1867ed682d05-ovsdbserver-sb\") pod \"dnsmasq-dns-7749c44969-9z2z2\" (UID: \"187a3484-7a9d-499a-91d0-1867ed682d05\") " pod="openstack/dnsmasq-dns-7749c44969-9z2z2" Mar 10 19:12:55 crc kubenswrapper[4861]: I0310 19:12:55.729971 4861 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/187a3484-7a9d-499a-91d0-1867ed682d05-config\") pod \"dnsmasq-dns-7749c44969-9z2z2\" (UID: \"187a3484-7a9d-499a-91d0-1867ed682d05\") " pod="openstack/dnsmasq-dns-7749c44969-9z2z2" Mar 10 19:12:55 crc kubenswrapper[4861]: I0310 19:12:55.730002 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/187a3484-7a9d-499a-91d0-1867ed682d05-dns-swift-storage-0\") pod \"dnsmasq-dns-7749c44969-9z2z2\" (UID: \"187a3484-7a9d-499a-91d0-1867ed682d05\") " pod="openstack/dnsmasq-dns-7749c44969-9z2z2" Mar 10 19:12:55 crc kubenswrapper[4861]: I0310 19:12:55.831247 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jdvpk\" (UniqueName: \"kubernetes.io/projected/187a3484-7a9d-499a-91d0-1867ed682d05-kube-api-access-jdvpk\") pod \"dnsmasq-dns-7749c44969-9z2z2\" (UID: \"187a3484-7a9d-499a-91d0-1867ed682d05\") " pod="openstack/dnsmasq-dns-7749c44969-9z2z2" Mar 10 19:12:55 crc kubenswrapper[4861]: I0310 19:12:55.831320 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/187a3484-7a9d-499a-91d0-1867ed682d05-dns-svc\") pod \"dnsmasq-dns-7749c44969-9z2z2\" (UID: \"187a3484-7a9d-499a-91d0-1867ed682d05\") " pod="openstack/dnsmasq-dns-7749c44969-9z2z2" Mar 10 19:12:55 crc kubenswrapper[4861]: I0310 19:12:55.831351 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/187a3484-7a9d-499a-91d0-1867ed682d05-ovsdbserver-nb\") pod \"dnsmasq-dns-7749c44969-9z2z2\" (UID: \"187a3484-7a9d-499a-91d0-1867ed682d05\") " pod="openstack/dnsmasq-dns-7749c44969-9z2z2" Mar 10 19:12:55 crc kubenswrapper[4861]: I0310 19:12:55.831377 4861 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/187a3484-7a9d-499a-91d0-1867ed682d05-ovsdbserver-sb\") pod \"dnsmasq-dns-7749c44969-9z2z2\" (UID: \"187a3484-7a9d-499a-91d0-1867ed682d05\") " pod="openstack/dnsmasq-dns-7749c44969-9z2z2" Mar 10 19:12:55 crc kubenswrapper[4861]: I0310 19:12:55.831429 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/187a3484-7a9d-499a-91d0-1867ed682d05-config\") pod \"dnsmasq-dns-7749c44969-9z2z2\" (UID: \"187a3484-7a9d-499a-91d0-1867ed682d05\") " pod="openstack/dnsmasq-dns-7749c44969-9z2z2" Mar 10 19:12:55 crc kubenswrapper[4861]: I0310 19:12:55.831460 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/187a3484-7a9d-499a-91d0-1867ed682d05-dns-swift-storage-0\") pod \"dnsmasq-dns-7749c44969-9z2z2\" (UID: \"187a3484-7a9d-499a-91d0-1867ed682d05\") " pod="openstack/dnsmasq-dns-7749c44969-9z2z2" Mar 10 19:12:55 crc kubenswrapper[4861]: I0310 19:12:55.832262 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/187a3484-7a9d-499a-91d0-1867ed682d05-ovsdbserver-nb\") pod \"dnsmasq-dns-7749c44969-9z2z2\" (UID: \"187a3484-7a9d-499a-91d0-1867ed682d05\") " pod="openstack/dnsmasq-dns-7749c44969-9z2z2" Mar 10 19:12:55 crc kubenswrapper[4861]: I0310 19:12:55.832340 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/187a3484-7a9d-499a-91d0-1867ed682d05-config\") pod \"dnsmasq-dns-7749c44969-9z2z2\" (UID: \"187a3484-7a9d-499a-91d0-1867ed682d05\") " pod="openstack/dnsmasq-dns-7749c44969-9z2z2" Mar 10 19:12:55 crc kubenswrapper[4861]: I0310 19:12:55.832476 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/187a3484-7a9d-499a-91d0-1867ed682d05-ovsdbserver-sb\") pod \"dnsmasq-dns-7749c44969-9z2z2\" (UID: \"187a3484-7a9d-499a-91d0-1867ed682d05\") " pod="openstack/dnsmasq-dns-7749c44969-9z2z2" Mar 10 19:12:55 crc kubenswrapper[4861]: I0310 19:12:55.832576 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/187a3484-7a9d-499a-91d0-1867ed682d05-dns-swift-storage-0\") pod \"dnsmasq-dns-7749c44969-9z2z2\" (UID: \"187a3484-7a9d-499a-91d0-1867ed682d05\") " pod="openstack/dnsmasq-dns-7749c44969-9z2z2" Mar 10 19:12:55 crc kubenswrapper[4861]: I0310 19:12:55.832576 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/187a3484-7a9d-499a-91d0-1867ed682d05-dns-svc\") pod \"dnsmasq-dns-7749c44969-9z2z2\" (UID: \"187a3484-7a9d-499a-91d0-1867ed682d05\") " pod="openstack/dnsmasq-dns-7749c44969-9z2z2" Mar 10 19:12:55 crc kubenswrapper[4861]: I0310 19:12:55.848669 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jdvpk\" (UniqueName: \"kubernetes.io/projected/187a3484-7a9d-499a-91d0-1867ed682d05-kube-api-access-jdvpk\") pod \"dnsmasq-dns-7749c44969-9z2z2\" (UID: \"187a3484-7a9d-499a-91d0-1867ed682d05\") " pod="openstack/dnsmasq-dns-7749c44969-9z2z2" Mar 10 19:12:55 crc kubenswrapper[4861]: I0310 19:12:55.989345 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7749c44969-9z2z2" Mar 10 19:12:56 crc kubenswrapper[4861]: W0310 19:12:56.511221 4861 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod187a3484_7a9d_499a_91d0_1867ed682d05.slice/crio-3375b778531da4bc5dd9e00f00bc2e67e75082b6e8c434746d0eb70335e35a49 WatchSource:0}: Error finding container 3375b778531da4bc5dd9e00f00bc2e67e75082b6e8c434746d0eb70335e35a49: Status 404 returned error can't find the container with id 3375b778531da4bc5dd9e00f00bc2e67e75082b6e8c434746d0eb70335e35a49 Mar 10 19:12:56 crc kubenswrapper[4861]: I0310 19:12:56.512577 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7749c44969-9z2z2"] Mar 10 19:12:57 crc kubenswrapper[4861]: I0310 19:12:57.469898 4861 generic.go:334] "Generic (PLEG): container finished" podID="187a3484-7a9d-499a-91d0-1867ed682d05" containerID="a2e6bbdd67e19fe250227c0eadee02f0b10d11543800ab23539751037ecb1973" exitCode=0 Mar 10 19:12:57 crc kubenswrapper[4861]: I0310 19:12:57.470195 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7749c44969-9z2z2" event={"ID":"187a3484-7a9d-499a-91d0-1867ed682d05","Type":"ContainerDied","Data":"a2e6bbdd67e19fe250227c0eadee02f0b10d11543800ab23539751037ecb1973"} Mar 10 19:12:57 crc kubenswrapper[4861]: I0310 19:12:57.471831 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7749c44969-9z2z2" event={"ID":"187a3484-7a9d-499a-91d0-1867ed682d05","Type":"ContainerStarted","Data":"3375b778531da4bc5dd9e00f00bc2e67e75082b6e8c434746d0eb70335e35a49"} Mar 10 19:12:57 crc kubenswrapper[4861]: I0310 19:12:57.824805 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Mar 10 19:12:58 crc kubenswrapper[4861]: I0310 19:12:58.074867 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Mar 10 19:12:58 crc 
kubenswrapper[4861]: I0310 19:12:58.485236 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7749c44969-9z2z2" event={"ID":"187a3484-7a9d-499a-91d0-1867ed682d05","Type":"ContainerStarted","Data":"ca5fa990ae2a8c88edbef03bdd1903c7407d516f5028387bb035e3115a89eb99"} Mar 10 19:12:58 crc kubenswrapper[4861]: I0310 19:12:58.485619 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="165e122d-1525-44d5-adcf-28470f33e74d" containerName="nova-api-log" containerID="cri-o://cf57dedf399da9513eda59adbb9c5878f7057c778c480ef98aa433f4544ebcbb" gracePeriod=30 Mar 10 19:12:58 crc kubenswrapper[4861]: I0310 19:12:58.485659 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="165e122d-1525-44d5-adcf-28470f33e74d" containerName="nova-api-api" containerID="cri-o://78ba7cae4a264b0a79c723a588af1d2a4be503b4df2566f8e64db68218fc8c7d" gracePeriod=30 Mar 10 19:12:58 crc kubenswrapper[4861]: I0310 19:12:58.485913 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7749c44969-9z2z2" Mar 10 19:12:58 crc kubenswrapper[4861]: I0310 19:12:58.520432 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7749c44969-9z2z2" podStartSLOduration=3.52041116 podStartE2EDuration="3.52041116s" podCreationTimestamp="2026-03-10 19:12:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-10 19:12:58.507000517 +0000 UTC m=+1522.270436547" watchObservedRunningTime="2026-03-10 19:12:58.52041116 +0000 UTC m=+1522.283847120" Mar 10 19:12:59 crc kubenswrapper[4861]: I0310 19:12:59.481012 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-jlr7v" Mar 10 19:12:59 crc kubenswrapper[4861]: I0310 19:12:59.481394 4861 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-jlr7v" Mar 10 19:12:59 crc kubenswrapper[4861]: I0310 19:12:59.496783 4861 generic.go:334] "Generic (PLEG): container finished" podID="165e122d-1525-44d5-adcf-28470f33e74d" containerID="cf57dedf399da9513eda59adbb9c5878f7057c778c480ef98aa433f4544ebcbb" exitCode=143 Mar 10 19:12:59 crc kubenswrapper[4861]: I0310 19:12:59.496857 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"165e122d-1525-44d5-adcf-28470f33e74d","Type":"ContainerDied","Data":"cf57dedf399da9513eda59adbb9c5878f7057c778c480ef98aa433f4544ebcbb"} Mar 10 19:12:59 crc kubenswrapper[4861]: I0310 19:12:59.545607 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-jlr7v" Mar 10 19:12:59 crc kubenswrapper[4861]: I0310 19:12:59.615438 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-jlr7v" Mar 10 19:12:59 crc kubenswrapper[4861]: I0310 19:12:59.638329 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Mar 10 19:12:59 crc kubenswrapper[4861]: I0310 19:12:59.638653 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="8f40dd98-6e81-4fc3-9118-5a9aa1befc14" containerName="ceilometer-central-agent" containerID="cri-o://04ce0706c6df9f9f618c249d98f5c441eb8d9452970f2233d8201342509548ad" gracePeriod=30 Mar 10 19:12:59 crc kubenswrapper[4861]: I0310 19:12:59.639247 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="8f40dd98-6e81-4fc3-9118-5a9aa1befc14" containerName="proxy-httpd" containerID="cri-o://2873c36f54704a23a3c98319164da99274e57bcb8f5d04bfec19c8a4567f332d" gracePeriod=30 Mar 10 19:12:59 crc kubenswrapper[4861]: I0310 19:12:59.639293 4861 kuberuntime_container.go:808] 
"Killing container with a grace period" pod="openstack/ceilometer-0" podUID="8f40dd98-6e81-4fc3-9118-5a9aa1befc14" containerName="ceilometer-notification-agent" containerID="cri-o://49fe551f811518da53606cf89dd2db72b7f22f467a852d6c1dc6cddda9fb1eec" gracePeriod=30 Mar 10 19:12:59 crc kubenswrapper[4861]: I0310 19:12:59.639445 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="8f40dd98-6e81-4fc3-9118-5a9aa1befc14" containerName="sg-core" containerID="cri-o://2696f61078f0a1a74b63a575d712dd26e91f63e6a86f67da6deadc2dfdfe94cc" gracePeriod=30 Mar 10 19:12:59 crc kubenswrapper[4861]: I0310 19:12:59.655957 4861 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="8f40dd98-6e81-4fc3-9118-5a9aa1befc14" containerName="proxy-httpd" probeResult="failure" output="Get \"https://10.217.0.207:3000/\": EOF" Mar 10 19:12:59 crc kubenswrapper[4861]: I0310 19:12:59.775889 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-jlr7v"] Mar 10 19:13:00 crc kubenswrapper[4861]: I0310 19:13:00.507250 4861 generic.go:334] "Generic (PLEG): container finished" podID="8f40dd98-6e81-4fc3-9118-5a9aa1befc14" containerID="2873c36f54704a23a3c98319164da99274e57bcb8f5d04bfec19c8a4567f332d" exitCode=0 Mar 10 19:13:00 crc kubenswrapper[4861]: I0310 19:13:00.507287 4861 generic.go:334] "Generic (PLEG): container finished" podID="8f40dd98-6e81-4fc3-9118-5a9aa1befc14" containerID="2696f61078f0a1a74b63a575d712dd26e91f63e6a86f67da6deadc2dfdfe94cc" exitCode=2 Mar 10 19:13:00 crc kubenswrapper[4861]: I0310 19:13:00.507301 4861 generic.go:334] "Generic (PLEG): container finished" podID="8f40dd98-6e81-4fc3-9118-5a9aa1befc14" containerID="49fe551f811518da53606cf89dd2db72b7f22f467a852d6c1dc6cddda9fb1eec" exitCode=0 Mar 10 19:13:00 crc kubenswrapper[4861]: I0310 19:13:00.507312 4861 generic.go:334] "Generic (PLEG): container finished" 
podID="8f40dd98-6e81-4fc3-9118-5a9aa1befc14" containerID="04ce0706c6df9f9f618c249d98f5c441eb8d9452970f2233d8201342509548ad" exitCode=0 Mar 10 19:13:00 crc kubenswrapper[4861]: I0310 19:13:00.507321 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8f40dd98-6e81-4fc3-9118-5a9aa1befc14","Type":"ContainerDied","Data":"2873c36f54704a23a3c98319164da99274e57bcb8f5d04bfec19c8a4567f332d"} Mar 10 19:13:00 crc kubenswrapper[4861]: I0310 19:13:00.507414 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8f40dd98-6e81-4fc3-9118-5a9aa1befc14","Type":"ContainerDied","Data":"2696f61078f0a1a74b63a575d712dd26e91f63e6a86f67da6deadc2dfdfe94cc"} Mar 10 19:13:00 crc kubenswrapper[4861]: I0310 19:13:00.507435 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8f40dd98-6e81-4fc3-9118-5a9aa1befc14","Type":"ContainerDied","Data":"49fe551f811518da53606cf89dd2db72b7f22f467a852d6c1dc6cddda9fb1eec"} Mar 10 19:13:00 crc kubenswrapper[4861]: I0310 19:13:00.507485 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8f40dd98-6e81-4fc3-9118-5a9aa1befc14","Type":"ContainerDied","Data":"04ce0706c6df9f9f618c249d98f5c441eb8d9452970f2233d8201342509548ad"} Mar 10 19:13:00 crc kubenswrapper[4861]: I0310 19:13:00.507503 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8f40dd98-6e81-4fc3-9118-5a9aa1befc14","Type":"ContainerDied","Data":"2837d16a1c91e589a40aeb38f557c9cd48e281a43d2539816c2df2f827abf420"} Mar 10 19:13:00 crc kubenswrapper[4861]: I0310 19:13:00.507521 4861 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2837d16a1c91e589a40aeb38f557c9cd48e281a43d2539816c2df2f827abf420" Mar 10 19:13:00 crc kubenswrapper[4861]: I0310 19:13:00.550949 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Mar 10 19:13:00 crc kubenswrapper[4861]: I0310 19:13:00.640233 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f40dd98-6e81-4fc3-9118-5a9aa1befc14-combined-ca-bundle\") pod \"8f40dd98-6e81-4fc3-9118-5a9aa1befc14\" (UID: \"8f40dd98-6e81-4fc3-9118-5a9aa1befc14\") " Mar 10 19:13:00 crc kubenswrapper[4861]: I0310 19:13:00.640312 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8f40dd98-6e81-4fc3-9118-5a9aa1befc14-scripts\") pod \"8f40dd98-6e81-4fc3-9118-5a9aa1befc14\" (UID: \"8f40dd98-6e81-4fc3-9118-5a9aa1befc14\") " Mar 10 19:13:00 crc kubenswrapper[4861]: I0310 19:13:00.640379 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8f40dd98-6e81-4fc3-9118-5a9aa1befc14-sg-core-conf-yaml\") pod \"8f40dd98-6e81-4fc3-9118-5a9aa1befc14\" (UID: \"8f40dd98-6e81-4fc3-9118-5a9aa1befc14\") " Mar 10 19:13:00 crc kubenswrapper[4861]: I0310 19:13:00.640415 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8f40dd98-6e81-4fc3-9118-5a9aa1befc14-log-httpd\") pod \"8f40dd98-6e81-4fc3-9118-5a9aa1befc14\" (UID: \"8f40dd98-6e81-4fc3-9118-5a9aa1befc14\") " Mar 10 19:13:00 crc kubenswrapper[4861]: I0310 19:13:00.640442 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8f40dd98-6e81-4fc3-9118-5a9aa1befc14-run-httpd\") pod \"8f40dd98-6e81-4fc3-9118-5a9aa1befc14\" (UID: \"8f40dd98-6e81-4fc3-9118-5a9aa1befc14\") " Mar 10 19:13:00 crc kubenswrapper[4861]: I0310 19:13:00.640485 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/8f40dd98-6e81-4fc3-9118-5a9aa1befc14-config-data\") pod \"8f40dd98-6e81-4fc3-9118-5a9aa1befc14\" (UID: \"8f40dd98-6e81-4fc3-9118-5a9aa1befc14\") " Mar 10 19:13:00 crc kubenswrapper[4861]: I0310 19:13:00.640540 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j2qbz\" (UniqueName: \"kubernetes.io/projected/8f40dd98-6e81-4fc3-9118-5a9aa1befc14-kube-api-access-j2qbz\") pod \"8f40dd98-6e81-4fc3-9118-5a9aa1befc14\" (UID: \"8f40dd98-6e81-4fc3-9118-5a9aa1befc14\") " Mar 10 19:13:00 crc kubenswrapper[4861]: I0310 19:13:00.640570 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/8f40dd98-6e81-4fc3-9118-5a9aa1befc14-ceilometer-tls-certs\") pod \"8f40dd98-6e81-4fc3-9118-5a9aa1befc14\" (UID: \"8f40dd98-6e81-4fc3-9118-5a9aa1befc14\") " Mar 10 19:13:00 crc kubenswrapper[4861]: I0310 19:13:00.642702 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f40dd98-6e81-4fc3-9118-5a9aa1befc14-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "8f40dd98-6e81-4fc3-9118-5a9aa1befc14" (UID: "8f40dd98-6e81-4fc3-9118-5a9aa1befc14"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 10 19:13:00 crc kubenswrapper[4861]: I0310 19:13:00.642789 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f40dd98-6e81-4fc3-9118-5a9aa1befc14-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "8f40dd98-6e81-4fc3-9118-5a9aa1befc14" (UID: "8f40dd98-6e81-4fc3-9118-5a9aa1befc14"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 10 19:13:00 crc kubenswrapper[4861]: I0310 19:13:00.647570 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f40dd98-6e81-4fc3-9118-5a9aa1befc14-scripts" (OuterVolumeSpecName: "scripts") pod "8f40dd98-6e81-4fc3-9118-5a9aa1befc14" (UID: "8f40dd98-6e81-4fc3-9118-5a9aa1befc14"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 10 19:13:00 crc kubenswrapper[4861]: I0310 19:13:00.648392 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f40dd98-6e81-4fc3-9118-5a9aa1befc14-kube-api-access-j2qbz" (OuterVolumeSpecName: "kube-api-access-j2qbz") pod "8f40dd98-6e81-4fc3-9118-5a9aa1befc14" (UID: "8f40dd98-6e81-4fc3-9118-5a9aa1befc14"). InnerVolumeSpecName "kube-api-access-j2qbz". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 10 19:13:00 crc kubenswrapper[4861]: I0310 19:13:00.676271 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f40dd98-6e81-4fc3-9118-5a9aa1befc14-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "8f40dd98-6e81-4fc3-9118-5a9aa1befc14" (UID: "8f40dd98-6e81-4fc3-9118-5a9aa1befc14"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 10 19:13:00 crc kubenswrapper[4861]: I0310 19:13:00.688295 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f40dd98-6e81-4fc3-9118-5a9aa1befc14-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "8f40dd98-6e81-4fc3-9118-5a9aa1befc14" (UID: "8f40dd98-6e81-4fc3-9118-5a9aa1befc14"). InnerVolumeSpecName "ceilometer-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 10 19:13:00 crc kubenswrapper[4861]: I0310 19:13:00.743309 4861 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8f40dd98-6e81-4fc3-9118-5a9aa1befc14-scripts\") on node \"crc\" DevicePath \"\"" Mar 10 19:13:00 crc kubenswrapper[4861]: I0310 19:13:00.743336 4861 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8f40dd98-6e81-4fc3-9118-5a9aa1befc14-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Mar 10 19:13:00 crc kubenswrapper[4861]: I0310 19:13:00.743345 4861 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8f40dd98-6e81-4fc3-9118-5a9aa1befc14-log-httpd\") on node \"crc\" DevicePath \"\"" Mar 10 19:13:00 crc kubenswrapper[4861]: I0310 19:13:00.743354 4861 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8f40dd98-6e81-4fc3-9118-5a9aa1befc14-run-httpd\") on node \"crc\" DevicePath \"\"" Mar 10 19:13:00 crc kubenswrapper[4861]: I0310 19:13:00.743362 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j2qbz\" (UniqueName: \"kubernetes.io/projected/8f40dd98-6e81-4fc3-9118-5a9aa1befc14-kube-api-access-j2qbz\") on node \"crc\" DevicePath \"\"" Mar 10 19:13:00 crc kubenswrapper[4861]: I0310 19:13:00.743372 4861 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/8f40dd98-6e81-4fc3-9118-5a9aa1befc14-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 10 19:13:00 crc kubenswrapper[4861]: I0310 19:13:00.764056 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f40dd98-6e81-4fc3-9118-5a9aa1befc14-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8f40dd98-6e81-4fc3-9118-5a9aa1befc14" (UID: 
"8f40dd98-6e81-4fc3-9118-5a9aa1befc14"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 10 19:13:00 crc kubenswrapper[4861]: I0310 19:13:00.771888 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f40dd98-6e81-4fc3-9118-5a9aa1befc14-config-data" (OuterVolumeSpecName: "config-data") pod "8f40dd98-6e81-4fc3-9118-5a9aa1befc14" (UID: "8f40dd98-6e81-4fc3-9118-5a9aa1befc14"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 10 19:13:00 crc kubenswrapper[4861]: I0310 19:13:00.845162 4861 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f40dd98-6e81-4fc3-9118-5a9aa1befc14-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 10 19:13:00 crc kubenswrapper[4861]: I0310 19:13:00.845202 4861 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8f40dd98-6e81-4fc3-9118-5a9aa1befc14-config-data\") on node \"crc\" DevicePath \"\"" Mar 10 19:13:01 crc kubenswrapper[4861]: I0310 19:13:01.516666 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Mar 10 19:13:01 crc kubenswrapper[4861]: I0310 19:13:01.516841 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-jlr7v" podUID="7025462d-4b03-4883-9b6f-73874a4760b6" containerName="registry-server" containerID="cri-o://671268944f4c90b16922b1d7a33181fe7fbafd6a4e7676f5d7a3ce1ea06cccb0" gracePeriod=2 Mar 10 19:13:01 crc kubenswrapper[4861]: I0310 19:13:01.558485 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Mar 10 19:13:01 crc kubenswrapper[4861]: I0310 19:13:01.577399 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Mar 10 19:13:01 crc kubenswrapper[4861]: I0310 19:13:01.593277 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Mar 10 19:13:01 crc kubenswrapper[4861]: E0310 19:13:01.593791 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8f40dd98-6e81-4fc3-9118-5a9aa1befc14" containerName="ceilometer-central-agent" Mar 10 19:13:01 crc kubenswrapper[4861]: I0310 19:13:01.593813 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="8f40dd98-6e81-4fc3-9118-5a9aa1befc14" containerName="ceilometer-central-agent" Mar 10 19:13:01 crc kubenswrapper[4861]: E0310 19:13:01.593826 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8f40dd98-6e81-4fc3-9118-5a9aa1befc14" containerName="sg-core" Mar 10 19:13:01 crc kubenswrapper[4861]: I0310 19:13:01.593835 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="8f40dd98-6e81-4fc3-9118-5a9aa1befc14" containerName="sg-core" Mar 10 19:13:01 crc kubenswrapper[4861]: E0310 19:13:01.593865 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8f40dd98-6e81-4fc3-9118-5a9aa1befc14" containerName="proxy-httpd" Mar 10 19:13:01 crc kubenswrapper[4861]: I0310 19:13:01.593875 4861 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="8f40dd98-6e81-4fc3-9118-5a9aa1befc14" containerName="proxy-httpd" Mar 10 19:13:01 crc kubenswrapper[4861]: E0310 19:13:01.593898 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8f40dd98-6e81-4fc3-9118-5a9aa1befc14" containerName="ceilometer-notification-agent" Mar 10 19:13:01 crc kubenswrapper[4861]: I0310 19:13:01.593906 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="8f40dd98-6e81-4fc3-9118-5a9aa1befc14" containerName="ceilometer-notification-agent" Mar 10 19:13:01 crc kubenswrapper[4861]: I0310 19:13:01.594160 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="8f40dd98-6e81-4fc3-9118-5a9aa1befc14" containerName="proxy-httpd" Mar 10 19:13:01 crc kubenswrapper[4861]: I0310 19:13:01.594175 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="8f40dd98-6e81-4fc3-9118-5a9aa1befc14" containerName="ceilometer-central-agent" Mar 10 19:13:01 crc kubenswrapper[4861]: I0310 19:13:01.594194 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="8f40dd98-6e81-4fc3-9118-5a9aa1befc14" containerName="ceilometer-notification-agent" Mar 10 19:13:01 crc kubenswrapper[4861]: I0310 19:13:01.594212 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="8f40dd98-6e81-4fc3-9118-5a9aa1befc14" containerName="sg-core" Mar 10 19:13:01 crc kubenswrapper[4861]: I0310 19:13:01.596573 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Mar 10 19:13:01 crc kubenswrapper[4861]: I0310 19:13:01.603981 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Mar 10 19:13:01 crc kubenswrapper[4861]: I0310 19:13:01.604473 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Mar 10 19:13:01 crc kubenswrapper[4861]: I0310 19:13:01.605956 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Mar 10 19:13:01 crc kubenswrapper[4861]: I0310 19:13:01.617373 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Mar 10 19:13:01 crc kubenswrapper[4861]: I0310 19:13:01.770258 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a31600f9-88a4-4ecb-8da3-84c966bf4a63-scripts\") pod \"ceilometer-0\" (UID: \"a31600f9-88a4-4ecb-8da3-84c966bf4a63\") " pod="openstack/ceilometer-0" Mar 10 19:13:01 crc kubenswrapper[4861]: I0310 19:13:01.771144 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a31600f9-88a4-4ecb-8da3-84c966bf4a63-log-httpd\") pod \"ceilometer-0\" (UID: \"a31600f9-88a4-4ecb-8da3-84c966bf4a63\") " pod="openstack/ceilometer-0" Mar 10 19:13:01 crc kubenswrapper[4861]: I0310 19:13:01.771339 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a31600f9-88a4-4ecb-8da3-84c966bf4a63-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"a31600f9-88a4-4ecb-8da3-84c966bf4a63\") " pod="openstack/ceilometer-0" Mar 10 19:13:01 crc kubenswrapper[4861]: I0310 19:13:01.771392 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" 
(UniqueName: \"kubernetes.io/secret/a31600f9-88a4-4ecb-8da3-84c966bf4a63-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"a31600f9-88a4-4ecb-8da3-84c966bf4a63\") " pod="openstack/ceilometer-0" Mar 10 19:13:01 crc kubenswrapper[4861]: I0310 19:13:01.771461 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a31600f9-88a4-4ecb-8da3-84c966bf4a63-run-httpd\") pod \"ceilometer-0\" (UID: \"a31600f9-88a4-4ecb-8da3-84c966bf4a63\") " pod="openstack/ceilometer-0" Mar 10 19:13:01 crc kubenswrapper[4861]: I0310 19:13:01.771491 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hqlhj\" (UniqueName: \"kubernetes.io/projected/a31600f9-88a4-4ecb-8da3-84c966bf4a63-kube-api-access-hqlhj\") pod \"ceilometer-0\" (UID: \"a31600f9-88a4-4ecb-8da3-84c966bf4a63\") " pod="openstack/ceilometer-0" Mar 10 19:13:01 crc kubenswrapper[4861]: I0310 19:13:01.772613 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/a31600f9-88a4-4ecb-8da3-84c966bf4a63-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"a31600f9-88a4-4ecb-8da3-84c966bf4a63\") " pod="openstack/ceilometer-0" Mar 10 19:13:01 crc kubenswrapper[4861]: I0310 19:13:01.772691 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a31600f9-88a4-4ecb-8da3-84c966bf4a63-config-data\") pod \"ceilometer-0\" (UID: \"a31600f9-88a4-4ecb-8da3-84c966bf4a63\") " pod="openstack/ceilometer-0" Mar 10 19:13:01 crc kubenswrapper[4861]: I0310 19:13:01.874141 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/a31600f9-88a4-4ecb-8da3-84c966bf4a63-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: 
\"a31600f9-88a4-4ecb-8da3-84c966bf4a63\") " pod="openstack/ceilometer-0" Mar 10 19:13:01 crc kubenswrapper[4861]: I0310 19:13:01.874393 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a31600f9-88a4-4ecb-8da3-84c966bf4a63-config-data\") pod \"ceilometer-0\" (UID: \"a31600f9-88a4-4ecb-8da3-84c966bf4a63\") " pod="openstack/ceilometer-0" Mar 10 19:13:01 crc kubenswrapper[4861]: I0310 19:13:01.874462 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a31600f9-88a4-4ecb-8da3-84c966bf4a63-scripts\") pod \"ceilometer-0\" (UID: \"a31600f9-88a4-4ecb-8da3-84c966bf4a63\") " pod="openstack/ceilometer-0" Mar 10 19:13:01 crc kubenswrapper[4861]: I0310 19:13:01.874489 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a31600f9-88a4-4ecb-8da3-84c966bf4a63-log-httpd\") pod \"ceilometer-0\" (UID: \"a31600f9-88a4-4ecb-8da3-84c966bf4a63\") " pod="openstack/ceilometer-0" Mar 10 19:13:01 crc kubenswrapper[4861]: I0310 19:13:01.874575 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a31600f9-88a4-4ecb-8da3-84c966bf4a63-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"a31600f9-88a4-4ecb-8da3-84c966bf4a63\") " pod="openstack/ceilometer-0" Mar 10 19:13:01 crc kubenswrapper[4861]: I0310 19:13:01.874601 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a31600f9-88a4-4ecb-8da3-84c966bf4a63-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"a31600f9-88a4-4ecb-8da3-84c966bf4a63\") " pod="openstack/ceilometer-0" Mar 10 19:13:01 crc kubenswrapper[4861]: I0310 19:13:01.874628 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" 
(UniqueName: \"kubernetes.io/empty-dir/a31600f9-88a4-4ecb-8da3-84c966bf4a63-run-httpd\") pod \"ceilometer-0\" (UID: \"a31600f9-88a4-4ecb-8da3-84c966bf4a63\") " pod="openstack/ceilometer-0" Mar 10 19:13:01 crc kubenswrapper[4861]: I0310 19:13:01.874650 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hqlhj\" (UniqueName: \"kubernetes.io/projected/a31600f9-88a4-4ecb-8da3-84c966bf4a63-kube-api-access-hqlhj\") pod \"ceilometer-0\" (UID: \"a31600f9-88a4-4ecb-8da3-84c966bf4a63\") " pod="openstack/ceilometer-0" Mar 10 19:13:01 crc kubenswrapper[4861]: I0310 19:13:01.876429 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a31600f9-88a4-4ecb-8da3-84c966bf4a63-log-httpd\") pod \"ceilometer-0\" (UID: \"a31600f9-88a4-4ecb-8da3-84c966bf4a63\") " pod="openstack/ceilometer-0" Mar 10 19:13:01 crc kubenswrapper[4861]: I0310 19:13:01.877341 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a31600f9-88a4-4ecb-8da3-84c966bf4a63-run-httpd\") pod \"ceilometer-0\" (UID: \"a31600f9-88a4-4ecb-8da3-84c966bf4a63\") " pod="openstack/ceilometer-0" Mar 10 19:13:01 crc kubenswrapper[4861]: I0310 19:13:01.880866 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a31600f9-88a4-4ecb-8da3-84c966bf4a63-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"a31600f9-88a4-4ecb-8da3-84c966bf4a63\") " pod="openstack/ceilometer-0" Mar 10 19:13:01 crc kubenswrapper[4861]: I0310 19:13:01.885003 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a31600f9-88a4-4ecb-8da3-84c966bf4a63-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"a31600f9-88a4-4ecb-8da3-84c966bf4a63\") " pod="openstack/ceilometer-0" Mar 10 19:13:01 crc kubenswrapper[4861]: I0310 
19:13:01.885863 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a31600f9-88a4-4ecb-8da3-84c966bf4a63-config-data\") pod \"ceilometer-0\" (UID: \"a31600f9-88a4-4ecb-8da3-84c966bf4a63\") " pod="openstack/ceilometer-0" Mar 10 19:13:01 crc kubenswrapper[4861]: I0310 19:13:01.888399 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a31600f9-88a4-4ecb-8da3-84c966bf4a63-scripts\") pod \"ceilometer-0\" (UID: \"a31600f9-88a4-4ecb-8da3-84c966bf4a63\") " pod="openstack/ceilometer-0" Mar 10 19:13:01 crc kubenswrapper[4861]: I0310 19:13:01.897413 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/a31600f9-88a4-4ecb-8da3-84c966bf4a63-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"a31600f9-88a4-4ecb-8da3-84c966bf4a63\") " pod="openstack/ceilometer-0" Mar 10 19:13:01 crc kubenswrapper[4861]: I0310 19:13:01.898935 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hqlhj\" (UniqueName: \"kubernetes.io/projected/a31600f9-88a4-4ecb-8da3-84c966bf4a63-kube-api-access-hqlhj\") pod \"ceilometer-0\" (UID: \"a31600f9-88a4-4ecb-8da3-84c966bf4a63\") " pod="openstack/ceilometer-0" Mar 10 19:13:02 crc kubenswrapper[4861]: I0310 19:13:02.101100 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-jlr7v" Mar 10 19:13:02 crc kubenswrapper[4861]: I0310 19:13:02.109391 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Mar 10 19:13:02 crc kubenswrapper[4861]: I0310 19:13:02.115474 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Mar 10 19:13:02 crc kubenswrapper[4861]: I0310 19:13:02.283504 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/165e122d-1525-44d5-adcf-28470f33e74d-logs\") pod \"165e122d-1525-44d5-adcf-28470f33e74d\" (UID: \"165e122d-1525-44d5-adcf-28470f33e74d\") " Mar 10 19:13:02 crc kubenswrapper[4861]: I0310 19:13:02.283701 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/165e122d-1525-44d5-adcf-28470f33e74d-config-data\") pod \"165e122d-1525-44d5-adcf-28470f33e74d\" (UID: \"165e122d-1525-44d5-adcf-28470f33e74d\") " Mar 10 19:13:02 crc kubenswrapper[4861]: I0310 19:13:02.283736 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7025462d-4b03-4883-9b6f-73874a4760b6-utilities\") pod \"7025462d-4b03-4883-9b6f-73874a4760b6\" (UID: \"7025462d-4b03-4883-9b6f-73874a4760b6\") " Mar 10 19:13:02 crc kubenswrapper[4861]: I0310 19:13:02.283777 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/165e122d-1525-44d5-adcf-28470f33e74d-combined-ca-bundle\") pod \"165e122d-1525-44d5-adcf-28470f33e74d\" (UID: \"165e122d-1525-44d5-adcf-28470f33e74d\") " Mar 10 19:13:02 crc kubenswrapper[4861]: I0310 19:13:02.283817 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q7wkg\" (UniqueName: \"kubernetes.io/projected/7025462d-4b03-4883-9b6f-73874a4760b6-kube-api-access-q7wkg\") pod \"7025462d-4b03-4883-9b6f-73874a4760b6\" (UID: \"7025462d-4b03-4883-9b6f-73874a4760b6\") " Mar 10 19:13:02 crc kubenswrapper[4861]: I0310 19:13:02.283931 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/7025462d-4b03-4883-9b6f-73874a4760b6-catalog-content\") pod \"7025462d-4b03-4883-9b6f-73874a4760b6\" (UID: \"7025462d-4b03-4883-9b6f-73874a4760b6\") " Mar 10 19:13:02 crc kubenswrapper[4861]: I0310 19:13:02.283966 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j8pg5\" (UniqueName: \"kubernetes.io/projected/165e122d-1525-44d5-adcf-28470f33e74d-kube-api-access-j8pg5\") pod \"165e122d-1525-44d5-adcf-28470f33e74d\" (UID: \"165e122d-1525-44d5-adcf-28470f33e74d\") " Mar 10 19:13:02 crc kubenswrapper[4861]: I0310 19:13:02.285872 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/165e122d-1525-44d5-adcf-28470f33e74d-logs" (OuterVolumeSpecName: "logs") pod "165e122d-1525-44d5-adcf-28470f33e74d" (UID: "165e122d-1525-44d5-adcf-28470f33e74d"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 10 19:13:02 crc kubenswrapper[4861]: I0310 19:13:02.286051 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7025462d-4b03-4883-9b6f-73874a4760b6-utilities" (OuterVolumeSpecName: "utilities") pod "7025462d-4b03-4883-9b6f-73874a4760b6" (UID: "7025462d-4b03-4883-9b6f-73874a4760b6"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 10 19:13:02 crc kubenswrapper[4861]: I0310 19:13:02.287536 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/165e122d-1525-44d5-adcf-28470f33e74d-kube-api-access-j8pg5" (OuterVolumeSpecName: "kube-api-access-j8pg5") pod "165e122d-1525-44d5-adcf-28470f33e74d" (UID: "165e122d-1525-44d5-adcf-28470f33e74d"). InnerVolumeSpecName "kube-api-access-j8pg5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 10 19:13:02 crc kubenswrapper[4861]: I0310 19:13:02.289834 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7025462d-4b03-4883-9b6f-73874a4760b6-kube-api-access-q7wkg" (OuterVolumeSpecName: "kube-api-access-q7wkg") pod "7025462d-4b03-4883-9b6f-73874a4760b6" (UID: "7025462d-4b03-4883-9b6f-73874a4760b6"). InnerVolumeSpecName "kube-api-access-q7wkg". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 10 19:13:02 crc kubenswrapper[4861]: I0310 19:13:02.316800 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/165e122d-1525-44d5-adcf-28470f33e74d-config-data" (OuterVolumeSpecName: "config-data") pod "165e122d-1525-44d5-adcf-28470f33e74d" (UID: "165e122d-1525-44d5-adcf-28470f33e74d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 10 19:13:02 crc kubenswrapper[4861]: I0310 19:13:02.319414 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/165e122d-1525-44d5-adcf-28470f33e74d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "165e122d-1525-44d5-adcf-28470f33e74d" (UID: "165e122d-1525-44d5-adcf-28470f33e74d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 10 19:13:02 crc kubenswrapper[4861]: I0310 19:13:02.376189 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7025462d-4b03-4883-9b6f-73874a4760b6-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7025462d-4b03-4883-9b6f-73874a4760b6" (UID: "7025462d-4b03-4883-9b6f-73874a4760b6"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 10 19:13:02 crc kubenswrapper[4861]: I0310 19:13:02.385759 4861 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7025462d-4b03-4883-9b6f-73874a4760b6-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 10 19:13:02 crc kubenswrapper[4861]: I0310 19:13:02.385800 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j8pg5\" (UniqueName: \"kubernetes.io/projected/165e122d-1525-44d5-adcf-28470f33e74d-kube-api-access-j8pg5\") on node \"crc\" DevicePath \"\"" Mar 10 19:13:02 crc kubenswrapper[4861]: I0310 19:13:02.385815 4861 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/165e122d-1525-44d5-adcf-28470f33e74d-logs\") on node \"crc\" DevicePath \"\"" Mar 10 19:13:02 crc kubenswrapper[4861]: I0310 19:13:02.385827 4861 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/165e122d-1525-44d5-adcf-28470f33e74d-config-data\") on node \"crc\" DevicePath \"\"" Mar 10 19:13:02 crc kubenswrapper[4861]: I0310 19:13:02.385839 4861 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7025462d-4b03-4883-9b6f-73874a4760b6-utilities\") on node \"crc\" DevicePath \"\"" Mar 10 19:13:02 crc kubenswrapper[4861]: I0310 19:13:02.385850 4861 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/165e122d-1525-44d5-adcf-28470f33e74d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 10 19:13:02 crc kubenswrapper[4861]: I0310 19:13:02.385862 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q7wkg\" (UniqueName: \"kubernetes.io/projected/7025462d-4b03-4883-9b6f-73874a4760b6-kube-api-access-q7wkg\") on node \"crc\" DevicePath \"\"" Mar 10 19:13:02 crc kubenswrapper[4861]: I0310 19:13:02.533256 4861 
generic.go:334] "Generic (PLEG): container finished" podID="7025462d-4b03-4883-9b6f-73874a4760b6" containerID="671268944f4c90b16922b1d7a33181fe7fbafd6a4e7676f5d7a3ce1ea06cccb0" exitCode=0 Mar 10 19:13:02 crc kubenswrapper[4861]: I0310 19:13:02.533382 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-jlr7v" Mar 10 19:13:02 crc kubenswrapper[4861]: I0310 19:13:02.534471 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jlr7v" event={"ID":"7025462d-4b03-4883-9b6f-73874a4760b6","Type":"ContainerDied","Data":"671268944f4c90b16922b1d7a33181fe7fbafd6a4e7676f5d7a3ce1ea06cccb0"} Mar 10 19:13:02 crc kubenswrapper[4861]: I0310 19:13:02.534549 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jlr7v" event={"ID":"7025462d-4b03-4883-9b6f-73874a4760b6","Type":"ContainerDied","Data":"ff4d112d59d7807f7a50a1175b14218643d05c0be7353ff9e0908c3455ad1783"} Mar 10 19:13:02 crc kubenswrapper[4861]: I0310 19:13:02.534574 4861 scope.go:117] "RemoveContainer" containerID="671268944f4c90b16922b1d7a33181fe7fbafd6a4e7676f5d7a3ce1ea06cccb0" Mar 10 19:13:02 crc kubenswrapper[4861]: I0310 19:13:02.536723 4861 generic.go:334] "Generic (PLEG): container finished" podID="165e122d-1525-44d5-adcf-28470f33e74d" containerID="78ba7cae4a264b0a79c723a588af1d2a4be503b4df2566f8e64db68218fc8c7d" exitCode=0 Mar 10 19:13:02 crc kubenswrapper[4861]: I0310 19:13:02.536760 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"165e122d-1525-44d5-adcf-28470f33e74d","Type":"ContainerDied","Data":"78ba7cae4a264b0a79c723a588af1d2a4be503b4df2566f8e64db68218fc8c7d"} Mar 10 19:13:02 crc kubenswrapper[4861]: I0310 19:13:02.536786 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" 
event={"ID":"165e122d-1525-44d5-adcf-28470f33e74d","Type":"ContainerDied","Data":"0dfe1f64bd6d32ecc78ab6a6dffa5af35cc3ff06a28751260057ae44d69b811e"} Mar 10 19:13:02 crc kubenswrapper[4861]: I0310 19:13:02.536849 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Mar 10 19:13:02 crc kubenswrapper[4861]: I0310 19:13:02.565684 4861 scope.go:117] "RemoveContainer" containerID="6f716abeb8ba3217a823d22096e3329f5d993ab3d1ab6ea257146c2776d6fd1e" Mar 10 19:13:02 crc kubenswrapper[4861]: I0310 19:13:02.571739 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Mar 10 19:13:02 crc kubenswrapper[4861]: I0310 19:13:02.591102 4861 scope.go:117] "RemoveContainer" containerID="6fe88dfbf4f8485b2a135240ca91cba6975d364133d39a090593fe1eb6fe53ea" Mar 10 19:13:02 crc kubenswrapper[4861]: I0310 19:13:02.595080 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Mar 10 19:13:02 crc kubenswrapper[4861]: I0310 19:13:02.606759 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-jlr7v"] Mar 10 19:13:02 crc kubenswrapper[4861]: I0310 19:13:02.615600 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-jlr7v"] Mar 10 19:13:02 crc kubenswrapper[4861]: I0310 19:13:02.615971 4861 scope.go:117] "RemoveContainer" containerID="671268944f4c90b16922b1d7a33181fe7fbafd6a4e7676f5d7a3ce1ea06cccb0" Mar 10 19:13:02 crc kubenswrapper[4861]: E0310 19:13:02.620155 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"671268944f4c90b16922b1d7a33181fe7fbafd6a4e7676f5d7a3ce1ea06cccb0\": container with ID starting with 671268944f4c90b16922b1d7a33181fe7fbafd6a4e7676f5d7a3ce1ea06cccb0 not found: ID does not exist" containerID="671268944f4c90b16922b1d7a33181fe7fbafd6a4e7676f5d7a3ce1ea06cccb0" Mar 10 19:13:02 crc 
kubenswrapper[4861]: I0310 19:13:02.620197 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"671268944f4c90b16922b1d7a33181fe7fbafd6a4e7676f5d7a3ce1ea06cccb0"} err="failed to get container status \"671268944f4c90b16922b1d7a33181fe7fbafd6a4e7676f5d7a3ce1ea06cccb0\": rpc error: code = NotFound desc = could not find container \"671268944f4c90b16922b1d7a33181fe7fbafd6a4e7676f5d7a3ce1ea06cccb0\": container with ID starting with 671268944f4c90b16922b1d7a33181fe7fbafd6a4e7676f5d7a3ce1ea06cccb0 not found: ID does not exist" Mar 10 19:13:02 crc kubenswrapper[4861]: I0310 19:13:02.620225 4861 scope.go:117] "RemoveContainer" containerID="6f716abeb8ba3217a823d22096e3329f5d993ab3d1ab6ea257146c2776d6fd1e" Mar 10 19:13:02 crc kubenswrapper[4861]: E0310 19:13:02.621433 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6f716abeb8ba3217a823d22096e3329f5d993ab3d1ab6ea257146c2776d6fd1e\": container with ID starting with 6f716abeb8ba3217a823d22096e3329f5d993ab3d1ab6ea257146c2776d6fd1e not found: ID does not exist" containerID="6f716abeb8ba3217a823d22096e3329f5d993ab3d1ab6ea257146c2776d6fd1e" Mar 10 19:13:02 crc kubenswrapper[4861]: I0310 19:13:02.621465 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6f716abeb8ba3217a823d22096e3329f5d993ab3d1ab6ea257146c2776d6fd1e"} err="failed to get container status \"6f716abeb8ba3217a823d22096e3329f5d993ab3d1ab6ea257146c2776d6fd1e\": rpc error: code = NotFound desc = could not find container \"6f716abeb8ba3217a823d22096e3329f5d993ab3d1ab6ea257146c2776d6fd1e\": container with ID starting with 6f716abeb8ba3217a823d22096e3329f5d993ab3d1ab6ea257146c2776d6fd1e not found: ID does not exist" Mar 10 19:13:02 crc kubenswrapper[4861]: I0310 19:13:02.621486 4861 scope.go:117] "RemoveContainer" containerID="6fe88dfbf4f8485b2a135240ca91cba6975d364133d39a090593fe1eb6fe53ea" Mar 10 
19:13:02 crc kubenswrapper[4861]: E0310 19:13:02.622052 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6fe88dfbf4f8485b2a135240ca91cba6975d364133d39a090593fe1eb6fe53ea\": container with ID starting with 6fe88dfbf4f8485b2a135240ca91cba6975d364133d39a090593fe1eb6fe53ea not found: ID does not exist" containerID="6fe88dfbf4f8485b2a135240ca91cba6975d364133d39a090593fe1eb6fe53ea" Mar 10 19:13:02 crc kubenswrapper[4861]: I0310 19:13:02.622074 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6fe88dfbf4f8485b2a135240ca91cba6975d364133d39a090593fe1eb6fe53ea"} err="failed to get container status \"6fe88dfbf4f8485b2a135240ca91cba6975d364133d39a090593fe1eb6fe53ea\": rpc error: code = NotFound desc = could not find container \"6fe88dfbf4f8485b2a135240ca91cba6975d364133d39a090593fe1eb6fe53ea\": container with ID starting with 6fe88dfbf4f8485b2a135240ca91cba6975d364133d39a090593fe1eb6fe53ea not found: ID does not exist" Mar 10 19:13:02 crc kubenswrapper[4861]: I0310 19:13:02.622087 4861 scope.go:117] "RemoveContainer" containerID="78ba7cae4a264b0a79c723a588af1d2a4be503b4df2566f8e64db68218fc8c7d" Mar 10 19:13:02 crc kubenswrapper[4861]: I0310 19:13:02.639040 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Mar 10 19:13:02 crc kubenswrapper[4861]: E0310 19:13:02.639512 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7025462d-4b03-4883-9b6f-73874a4760b6" containerName="extract-content" Mar 10 19:13:02 crc kubenswrapper[4861]: I0310 19:13:02.639527 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="7025462d-4b03-4883-9b6f-73874a4760b6" containerName="extract-content" Mar 10 19:13:02 crc kubenswrapper[4861]: E0310 19:13:02.639546 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7025462d-4b03-4883-9b6f-73874a4760b6" containerName="extract-utilities" Mar 10 19:13:02 crc 
kubenswrapper[4861]: I0310 19:13:02.639555 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="7025462d-4b03-4883-9b6f-73874a4760b6" containerName="extract-utilities" Mar 10 19:13:02 crc kubenswrapper[4861]: E0310 19:13:02.639575 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="165e122d-1525-44d5-adcf-28470f33e74d" containerName="nova-api-api" Mar 10 19:13:02 crc kubenswrapper[4861]: I0310 19:13:02.639583 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="165e122d-1525-44d5-adcf-28470f33e74d" containerName="nova-api-api" Mar 10 19:13:02 crc kubenswrapper[4861]: E0310 19:13:02.639605 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7025462d-4b03-4883-9b6f-73874a4760b6" containerName="registry-server" Mar 10 19:13:02 crc kubenswrapper[4861]: I0310 19:13:02.639613 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="7025462d-4b03-4883-9b6f-73874a4760b6" containerName="registry-server" Mar 10 19:13:02 crc kubenswrapper[4861]: E0310 19:13:02.639640 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="165e122d-1525-44d5-adcf-28470f33e74d" containerName="nova-api-log" Mar 10 19:13:02 crc kubenswrapper[4861]: I0310 19:13:02.639648 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="165e122d-1525-44d5-adcf-28470f33e74d" containerName="nova-api-log" Mar 10 19:13:02 crc kubenswrapper[4861]: I0310 19:13:02.639926 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="7025462d-4b03-4883-9b6f-73874a4760b6" containerName="registry-server" Mar 10 19:13:02 crc kubenswrapper[4861]: I0310 19:13:02.639956 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="165e122d-1525-44d5-adcf-28470f33e74d" containerName="nova-api-log" Mar 10 19:13:02 crc kubenswrapper[4861]: I0310 19:13:02.639973 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="165e122d-1525-44d5-adcf-28470f33e74d" containerName="nova-api-api" Mar 10 19:13:02 crc kubenswrapper[4861]: 
I0310 19:13:02.641792 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Mar 10 19:13:02 crc kubenswrapper[4861]: I0310 19:13:02.644372 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Mar 10 19:13:02 crc kubenswrapper[4861]: I0310 19:13:02.645287 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Mar 10 19:13:02 crc kubenswrapper[4861]: I0310 19:13:02.645553 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Mar 10 19:13:02 crc kubenswrapper[4861]: I0310 19:13:02.648375 4861 scope.go:117] "RemoveContainer" containerID="cf57dedf399da9513eda59adbb9c5878f7057c778c480ef98aa433f4544ebcbb" Mar 10 19:13:02 crc kubenswrapper[4861]: I0310 19:13:02.649012 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Mar 10 19:13:02 crc kubenswrapper[4861]: I0310 19:13:02.675111 4861 scope.go:117] "RemoveContainer" containerID="78ba7cae4a264b0a79c723a588af1d2a4be503b4df2566f8e64db68218fc8c7d" Mar 10 19:13:02 crc kubenswrapper[4861]: E0310 19:13:02.675624 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"78ba7cae4a264b0a79c723a588af1d2a4be503b4df2566f8e64db68218fc8c7d\": container with ID starting with 78ba7cae4a264b0a79c723a588af1d2a4be503b4df2566f8e64db68218fc8c7d not found: ID does not exist" containerID="78ba7cae4a264b0a79c723a588af1d2a4be503b4df2566f8e64db68218fc8c7d" Mar 10 19:13:02 crc kubenswrapper[4861]: I0310 19:13:02.675664 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"78ba7cae4a264b0a79c723a588af1d2a4be503b4df2566f8e64db68218fc8c7d"} err="failed to get container status \"78ba7cae4a264b0a79c723a588af1d2a4be503b4df2566f8e64db68218fc8c7d\": rpc error: code = NotFound desc = could not find container 
\"78ba7cae4a264b0a79c723a588af1d2a4be503b4df2566f8e64db68218fc8c7d\": container with ID starting with 78ba7cae4a264b0a79c723a588af1d2a4be503b4df2566f8e64db68218fc8c7d not found: ID does not exist" Mar 10 19:13:02 crc kubenswrapper[4861]: I0310 19:13:02.675691 4861 scope.go:117] "RemoveContainer" containerID="cf57dedf399da9513eda59adbb9c5878f7057c778c480ef98aa433f4544ebcbb" Mar 10 19:13:02 crc kubenswrapper[4861]: E0310 19:13:02.676145 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cf57dedf399da9513eda59adbb9c5878f7057c778c480ef98aa433f4544ebcbb\": container with ID starting with cf57dedf399da9513eda59adbb9c5878f7057c778c480ef98aa433f4544ebcbb not found: ID does not exist" containerID="cf57dedf399da9513eda59adbb9c5878f7057c778c480ef98aa433f4544ebcbb" Mar 10 19:13:02 crc kubenswrapper[4861]: I0310 19:13:02.676244 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cf57dedf399da9513eda59adbb9c5878f7057c778c480ef98aa433f4544ebcbb"} err="failed to get container status \"cf57dedf399da9513eda59adbb9c5878f7057c778c480ef98aa433f4544ebcbb\": rpc error: code = NotFound desc = could not find container \"cf57dedf399da9513eda59adbb9c5878f7057c778c480ef98aa433f4544ebcbb\": container with ID starting with cf57dedf399da9513eda59adbb9c5878f7057c778c480ef98aa433f4544ebcbb not found: ID does not exist" Mar 10 19:13:02 crc kubenswrapper[4861]: W0310 19:13:02.677948 4861 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda31600f9_88a4_4ecb_8da3_84c966bf4a63.slice/crio-61ed28983ea4172cf73482e670880b767c839c4554b7f2f840bf3c904372879c WatchSource:0}: Error finding container 61ed28983ea4172cf73482e670880b767c839c4554b7f2f840bf3c904372879c: Status 404 returned error can't find the container with id 61ed28983ea4172cf73482e670880b767c839c4554b7f2f840bf3c904372879c Mar 10 19:13:02 crc 
kubenswrapper[4861]: I0310 19:13:02.681078 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Mar 10 19:13:02 crc kubenswrapper[4861]: I0310 19:13:02.792859 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b7310a13-2889-46f6-806e-8c94e4d88b18-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"b7310a13-2889-46f6-806e-8c94e4d88b18\") " pod="openstack/nova-api-0" Mar 10 19:13:02 crc kubenswrapper[4861]: I0310 19:13:02.793152 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b7310a13-2889-46f6-806e-8c94e4d88b18-public-tls-certs\") pod \"nova-api-0\" (UID: \"b7310a13-2889-46f6-806e-8c94e4d88b18\") " pod="openstack/nova-api-0" Mar 10 19:13:02 crc kubenswrapper[4861]: I0310 19:13:02.793260 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b7310a13-2889-46f6-806e-8c94e4d88b18-config-data\") pod \"nova-api-0\" (UID: \"b7310a13-2889-46f6-806e-8c94e4d88b18\") " pod="openstack/nova-api-0" Mar 10 19:13:02 crc kubenswrapper[4861]: I0310 19:13:02.793366 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b7310a13-2889-46f6-806e-8c94e4d88b18-internal-tls-certs\") pod \"nova-api-0\" (UID: \"b7310a13-2889-46f6-806e-8c94e4d88b18\") " pod="openstack/nova-api-0" Mar 10 19:13:02 crc kubenswrapper[4861]: I0310 19:13:02.793525 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m5qjw\" (UniqueName: \"kubernetes.io/projected/b7310a13-2889-46f6-806e-8c94e4d88b18-kube-api-access-m5qjw\") pod \"nova-api-0\" (UID: \"b7310a13-2889-46f6-806e-8c94e4d88b18\") " 
pod="openstack/nova-api-0" Mar 10 19:13:02 crc kubenswrapper[4861]: I0310 19:13:02.793636 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b7310a13-2889-46f6-806e-8c94e4d88b18-logs\") pod \"nova-api-0\" (UID: \"b7310a13-2889-46f6-806e-8c94e4d88b18\") " pod="openstack/nova-api-0" Mar 10 19:13:02 crc kubenswrapper[4861]: I0310 19:13:02.825013 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0" Mar 10 19:13:02 crc kubenswrapper[4861]: I0310 19:13:02.844775 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0" Mar 10 19:13:02 crc kubenswrapper[4861]: I0310 19:13:02.894991 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b7310a13-2889-46f6-806e-8c94e4d88b18-internal-tls-certs\") pod \"nova-api-0\" (UID: \"b7310a13-2889-46f6-806e-8c94e4d88b18\") " pod="openstack/nova-api-0" Mar 10 19:13:02 crc kubenswrapper[4861]: I0310 19:13:02.895295 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m5qjw\" (UniqueName: \"kubernetes.io/projected/b7310a13-2889-46f6-806e-8c94e4d88b18-kube-api-access-m5qjw\") pod \"nova-api-0\" (UID: \"b7310a13-2889-46f6-806e-8c94e4d88b18\") " pod="openstack/nova-api-0" Mar 10 19:13:02 crc kubenswrapper[4861]: I0310 19:13:02.895483 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b7310a13-2889-46f6-806e-8c94e4d88b18-logs\") pod \"nova-api-0\" (UID: \"b7310a13-2889-46f6-806e-8c94e4d88b18\") " pod="openstack/nova-api-0" Mar 10 19:13:02 crc kubenswrapper[4861]: I0310 19:13:02.895623 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/b7310a13-2889-46f6-806e-8c94e4d88b18-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"b7310a13-2889-46f6-806e-8c94e4d88b18\") " pod="openstack/nova-api-0" Mar 10 19:13:02 crc kubenswrapper[4861]: I0310 19:13:02.895800 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b7310a13-2889-46f6-806e-8c94e4d88b18-public-tls-certs\") pod \"nova-api-0\" (UID: \"b7310a13-2889-46f6-806e-8c94e4d88b18\") " pod="openstack/nova-api-0" Mar 10 19:13:02 crc kubenswrapper[4861]: I0310 19:13:02.895896 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b7310a13-2889-46f6-806e-8c94e4d88b18-logs\") pod \"nova-api-0\" (UID: \"b7310a13-2889-46f6-806e-8c94e4d88b18\") " pod="openstack/nova-api-0" Mar 10 19:13:02 crc kubenswrapper[4861]: I0310 19:13:02.896006 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b7310a13-2889-46f6-806e-8c94e4d88b18-config-data\") pod \"nova-api-0\" (UID: \"b7310a13-2889-46f6-806e-8c94e4d88b18\") " pod="openstack/nova-api-0" Mar 10 19:13:02 crc kubenswrapper[4861]: I0310 19:13:02.901232 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b7310a13-2889-46f6-806e-8c94e4d88b18-internal-tls-certs\") pod \"nova-api-0\" (UID: \"b7310a13-2889-46f6-806e-8c94e4d88b18\") " pod="openstack/nova-api-0" Mar 10 19:13:02 crc kubenswrapper[4861]: I0310 19:13:02.901360 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b7310a13-2889-46f6-806e-8c94e4d88b18-public-tls-certs\") pod \"nova-api-0\" (UID: \"b7310a13-2889-46f6-806e-8c94e4d88b18\") " pod="openstack/nova-api-0" Mar 10 19:13:02 crc kubenswrapper[4861]: I0310 19:13:02.902149 4861 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b7310a13-2889-46f6-806e-8c94e4d88b18-config-data\") pod \"nova-api-0\" (UID: \"b7310a13-2889-46f6-806e-8c94e4d88b18\") " pod="openstack/nova-api-0" Mar 10 19:13:02 crc kubenswrapper[4861]: I0310 19:13:02.907362 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b7310a13-2889-46f6-806e-8c94e4d88b18-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"b7310a13-2889-46f6-806e-8c94e4d88b18\") " pod="openstack/nova-api-0" Mar 10 19:13:02 crc kubenswrapper[4861]: I0310 19:13:02.915229 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m5qjw\" (UniqueName: \"kubernetes.io/projected/b7310a13-2889-46f6-806e-8c94e4d88b18-kube-api-access-m5qjw\") pod \"nova-api-0\" (UID: \"b7310a13-2889-46f6-806e-8c94e4d88b18\") " pod="openstack/nova-api-0" Mar 10 19:13:02 crc kubenswrapper[4861]: I0310 19:13:02.955568 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Mar 10 19:13:02 crc kubenswrapper[4861]: I0310 19:13:02.968533 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="165e122d-1525-44d5-adcf-28470f33e74d" path="/var/lib/kubelet/pods/165e122d-1525-44d5-adcf-28470f33e74d/volumes" Mar 10 19:13:02 crc kubenswrapper[4861]: I0310 19:13:02.969429 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7025462d-4b03-4883-9b6f-73874a4760b6" path="/var/lib/kubelet/pods/7025462d-4b03-4883-9b6f-73874a4760b6/volumes" Mar 10 19:13:02 crc kubenswrapper[4861]: I0310 19:13:02.970078 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f40dd98-6e81-4fc3-9118-5a9aa1befc14" path="/var/lib/kubelet/pods/8f40dd98-6e81-4fc3-9118-5a9aa1befc14/volumes" Mar 10 19:13:03 crc kubenswrapper[4861]: I0310 19:13:03.246681 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Mar 10 19:13:03 crc kubenswrapper[4861]: I0310 19:13:03.546196 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"b7310a13-2889-46f6-806e-8c94e4d88b18","Type":"ContainerStarted","Data":"689650e5ee68c858d56dbb7f5825566b3974b73ec52fa6f0de3f021ff4555ee6"} Mar 10 19:13:03 crc kubenswrapper[4861]: I0310 19:13:03.546475 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"b7310a13-2889-46f6-806e-8c94e4d88b18","Type":"ContainerStarted","Data":"2f1f637aedfe9a29f6ab3af92997749c8717fcb5e6022368f225129be444b5bc"} Mar 10 19:13:03 crc kubenswrapper[4861]: I0310 19:13:03.546486 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"b7310a13-2889-46f6-806e-8c94e4d88b18","Type":"ContainerStarted","Data":"c97385981ce1fd2f4a2f2f9ff148f5a73cccbb2c6c8bd755413885f1e4523815"} Mar 10 19:13:03 crc kubenswrapper[4861]: I0310 19:13:03.549411 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"a31600f9-88a4-4ecb-8da3-84c966bf4a63","Type":"ContainerStarted","Data":"6c09f38909e1c2592c7435cdb6107fe20a357f1ed362d43e8283dd3143d20885"} Mar 10 19:13:03 crc kubenswrapper[4861]: I0310 19:13:03.549450 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a31600f9-88a4-4ecb-8da3-84c966bf4a63","Type":"ContainerStarted","Data":"61ed28983ea4172cf73482e670880b767c839c4554b7f2f840bf3c904372879c"} Mar 10 19:13:03 crc kubenswrapper[4861]: I0310 19:13:03.567749 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0" Mar 10 19:13:03 crc kubenswrapper[4861]: I0310 19:13:03.581460 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=1.5814304369999999 podStartE2EDuration="1.581430437s" podCreationTimestamp="2026-03-10 19:13:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-10 19:13:03.568934251 +0000 UTC m=+1527.332370231" watchObservedRunningTime="2026-03-10 19:13:03.581430437 +0000 UTC m=+1527.344866427" Mar 10 19:13:03 crc kubenswrapper[4861]: I0310 19:13:03.739914 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-cell-mapping-pzt7q"] Mar 10 19:13:03 crc kubenswrapper[4861]: I0310 19:13:03.741317 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-pzt7q" Mar 10 19:13:03 crc kubenswrapper[4861]: I0310 19:13:03.743743 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data" Mar 10 19:13:03 crc kubenswrapper[4861]: I0310 19:13:03.743848 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts" Mar 10 19:13:03 crc kubenswrapper[4861]: I0310 19:13:03.772761 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-pzt7q"] Mar 10 19:13:03 crc kubenswrapper[4861]: I0310 19:13:03.919905 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e37688d9-015d-40a4-b3f3-606e4d4bdff4-config-data\") pod \"nova-cell1-cell-mapping-pzt7q\" (UID: \"e37688d9-015d-40a4-b3f3-606e4d4bdff4\") " pod="openstack/nova-cell1-cell-mapping-pzt7q" Mar 10 19:13:03 crc kubenswrapper[4861]: I0310 19:13:03.919969 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e37688d9-015d-40a4-b3f3-606e4d4bdff4-scripts\") pod \"nova-cell1-cell-mapping-pzt7q\" (UID: \"e37688d9-015d-40a4-b3f3-606e4d4bdff4\") " pod="openstack/nova-cell1-cell-mapping-pzt7q" Mar 10 19:13:03 crc kubenswrapper[4861]: I0310 19:13:03.920026 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e37688d9-015d-40a4-b3f3-606e4d4bdff4-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-pzt7q\" (UID: \"e37688d9-015d-40a4-b3f3-606e4d4bdff4\") " pod="openstack/nova-cell1-cell-mapping-pzt7q" Mar 10 19:13:03 crc kubenswrapper[4861]: I0310 19:13:03.920083 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w5n9f\" (UniqueName: 
\"kubernetes.io/projected/e37688d9-015d-40a4-b3f3-606e4d4bdff4-kube-api-access-w5n9f\") pod \"nova-cell1-cell-mapping-pzt7q\" (UID: \"e37688d9-015d-40a4-b3f3-606e4d4bdff4\") " pod="openstack/nova-cell1-cell-mapping-pzt7q" Mar 10 19:13:04 crc kubenswrapper[4861]: I0310 19:13:04.021501 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e37688d9-015d-40a4-b3f3-606e4d4bdff4-scripts\") pod \"nova-cell1-cell-mapping-pzt7q\" (UID: \"e37688d9-015d-40a4-b3f3-606e4d4bdff4\") " pod="openstack/nova-cell1-cell-mapping-pzt7q" Mar 10 19:13:04 crc kubenswrapper[4861]: I0310 19:13:04.021877 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e37688d9-015d-40a4-b3f3-606e4d4bdff4-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-pzt7q\" (UID: \"e37688d9-015d-40a4-b3f3-606e4d4bdff4\") " pod="openstack/nova-cell1-cell-mapping-pzt7q" Mar 10 19:13:04 crc kubenswrapper[4861]: I0310 19:13:04.021970 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w5n9f\" (UniqueName: \"kubernetes.io/projected/e37688d9-015d-40a4-b3f3-606e4d4bdff4-kube-api-access-w5n9f\") pod \"nova-cell1-cell-mapping-pzt7q\" (UID: \"e37688d9-015d-40a4-b3f3-606e4d4bdff4\") " pod="openstack/nova-cell1-cell-mapping-pzt7q" Mar 10 19:13:04 crc kubenswrapper[4861]: I0310 19:13:04.022351 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e37688d9-015d-40a4-b3f3-606e4d4bdff4-config-data\") pod \"nova-cell1-cell-mapping-pzt7q\" (UID: \"e37688d9-015d-40a4-b3f3-606e4d4bdff4\") " pod="openstack/nova-cell1-cell-mapping-pzt7q" Mar 10 19:13:04 crc kubenswrapper[4861]: I0310 19:13:04.025436 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/e37688d9-015d-40a4-b3f3-606e4d4bdff4-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-pzt7q\" (UID: \"e37688d9-015d-40a4-b3f3-606e4d4bdff4\") " pod="openstack/nova-cell1-cell-mapping-pzt7q" Mar 10 19:13:04 crc kubenswrapper[4861]: I0310 19:13:04.025552 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e37688d9-015d-40a4-b3f3-606e4d4bdff4-config-data\") pod \"nova-cell1-cell-mapping-pzt7q\" (UID: \"e37688d9-015d-40a4-b3f3-606e4d4bdff4\") " pod="openstack/nova-cell1-cell-mapping-pzt7q" Mar 10 19:13:04 crc kubenswrapper[4861]: I0310 19:13:04.041402 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e37688d9-015d-40a4-b3f3-606e4d4bdff4-scripts\") pod \"nova-cell1-cell-mapping-pzt7q\" (UID: \"e37688d9-015d-40a4-b3f3-606e4d4bdff4\") " pod="openstack/nova-cell1-cell-mapping-pzt7q" Mar 10 19:13:04 crc kubenswrapper[4861]: I0310 19:13:04.051671 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w5n9f\" (UniqueName: \"kubernetes.io/projected/e37688d9-015d-40a4-b3f3-606e4d4bdff4-kube-api-access-w5n9f\") pod \"nova-cell1-cell-mapping-pzt7q\" (UID: \"e37688d9-015d-40a4-b3f3-606e4d4bdff4\") " pod="openstack/nova-cell1-cell-mapping-pzt7q" Mar 10 19:13:04 crc kubenswrapper[4861]: I0310 19:13:04.063153 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-pzt7q" Mar 10 19:13:04 crc kubenswrapper[4861]: I0310 19:13:04.559968 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a31600f9-88a4-4ecb-8da3-84c966bf4a63","Type":"ContainerStarted","Data":"20d0f039122c6a0d6f5c8792e4c4a4a041ef8e480ba4dd21d575218242541879"} Mar 10 19:13:04 crc kubenswrapper[4861]: I0310 19:13:04.618180 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-pzt7q"] Mar 10 19:13:05 crc kubenswrapper[4861]: I0310 19:13:05.574216 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-pzt7q" event={"ID":"e37688d9-015d-40a4-b3f3-606e4d4bdff4","Type":"ContainerStarted","Data":"eab553f654758da04924379d977d326b7bdacacc8303db5c759adcd672af64ab"} Mar 10 19:13:05 crc kubenswrapper[4861]: I0310 19:13:05.574549 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-pzt7q" event={"ID":"e37688d9-015d-40a4-b3f3-606e4d4bdff4","Type":"ContainerStarted","Data":"09c1568831336205421beeb2d45c8591c3105250ca1cae1072023ca0c3f4188e"} Mar 10 19:13:05 crc kubenswrapper[4861]: I0310 19:13:05.577893 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a31600f9-88a4-4ecb-8da3-84c966bf4a63","Type":"ContainerStarted","Data":"4fdb51d57efd04be1d17c090ac4d170f60279b658b8137c8e7edb527c392d4ce"} Mar 10 19:13:05 crc kubenswrapper[4861]: I0310 19:13:05.605614 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-cell-mapping-pzt7q" podStartSLOduration=2.605594826 podStartE2EDuration="2.605594826s" podCreationTimestamp="2026-03-10 19:13:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-10 19:13:05.597366156 +0000 UTC m=+1529.360802176" watchObservedRunningTime="2026-03-10 
19:13:05.605594826 +0000 UTC m=+1529.369030796" Mar 10 19:13:05 crc kubenswrapper[4861]: I0310 19:13:05.990903 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7749c44969-9z2z2" Mar 10 19:13:06 crc kubenswrapper[4861]: I0310 19:13:06.060067 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7bd5679c8c-84fjp"] Mar 10 19:13:06 crc kubenswrapper[4861]: I0310 19:13:06.060313 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7bd5679c8c-84fjp" podUID="f078c086-e6d2-4bfe-9767-8b60c59f9c5f" containerName="dnsmasq-dns" containerID="cri-o://6e2f8eb2e24c9390e4071bcb1959462f69335d2e3da753de557bcda491b162fa" gracePeriod=10 Mar 10 19:13:06 crc kubenswrapper[4861]: I0310 19:13:06.543281 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7bd5679c8c-84fjp" Mar 10 19:13:06 crc kubenswrapper[4861]: I0310 19:13:06.611024 4861 generic.go:334] "Generic (PLEG): container finished" podID="f078c086-e6d2-4bfe-9767-8b60c59f9c5f" containerID="6e2f8eb2e24c9390e4071bcb1959462f69335d2e3da753de557bcda491b162fa" exitCode=0 Mar 10 19:13:06 crc kubenswrapper[4861]: I0310 19:13:06.611864 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7bd5679c8c-84fjp" Mar 10 19:13:06 crc kubenswrapper[4861]: I0310 19:13:06.612286 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7bd5679c8c-84fjp" event={"ID":"f078c086-e6d2-4bfe-9767-8b60c59f9c5f","Type":"ContainerDied","Data":"6e2f8eb2e24c9390e4071bcb1959462f69335d2e3da753de557bcda491b162fa"} Mar 10 19:13:06 crc kubenswrapper[4861]: I0310 19:13:06.612310 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7bd5679c8c-84fjp" event={"ID":"f078c086-e6d2-4bfe-9767-8b60c59f9c5f","Type":"ContainerDied","Data":"2e243bf9a76d5d8236aef644cf668cd3cff4e2f801d8cc8f16861181dfbae000"} Mar 10 19:13:06 crc kubenswrapper[4861]: I0310 19:13:06.612325 4861 scope.go:117] "RemoveContainer" containerID="6e2f8eb2e24c9390e4071bcb1959462f69335d2e3da753de557bcda491b162fa" Mar 10 19:13:06 crc kubenswrapper[4861]: I0310 19:13:06.635769 4861 scope.go:117] "RemoveContainer" containerID="dd16e17488fdebb13f3d90041235042f5406f52c6ff6ce7b29fc208dd86f2911" Mar 10 19:13:06 crc kubenswrapper[4861]: I0310 19:13:06.665126 4861 scope.go:117] "RemoveContainer" containerID="6e2f8eb2e24c9390e4071bcb1959462f69335d2e3da753de557bcda491b162fa" Mar 10 19:13:06 crc kubenswrapper[4861]: E0310 19:13:06.666108 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6e2f8eb2e24c9390e4071bcb1959462f69335d2e3da753de557bcda491b162fa\": container with ID starting with 6e2f8eb2e24c9390e4071bcb1959462f69335d2e3da753de557bcda491b162fa not found: ID does not exist" containerID="6e2f8eb2e24c9390e4071bcb1959462f69335d2e3da753de557bcda491b162fa" Mar 10 19:13:06 crc kubenswrapper[4861]: I0310 19:13:06.666160 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6e2f8eb2e24c9390e4071bcb1959462f69335d2e3da753de557bcda491b162fa"} err="failed to get container status 
\"6e2f8eb2e24c9390e4071bcb1959462f69335d2e3da753de557bcda491b162fa\": rpc error: code = NotFound desc = could not find container \"6e2f8eb2e24c9390e4071bcb1959462f69335d2e3da753de557bcda491b162fa\": container with ID starting with 6e2f8eb2e24c9390e4071bcb1959462f69335d2e3da753de557bcda491b162fa not found: ID does not exist" Mar 10 19:13:06 crc kubenswrapper[4861]: I0310 19:13:06.666191 4861 scope.go:117] "RemoveContainer" containerID="dd16e17488fdebb13f3d90041235042f5406f52c6ff6ce7b29fc208dd86f2911" Mar 10 19:13:06 crc kubenswrapper[4861]: E0310 19:13:06.666647 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dd16e17488fdebb13f3d90041235042f5406f52c6ff6ce7b29fc208dd86f2911\": container with ID starting with dd16e17488fdebb13f3d90041235042f5406f52c6ff6ce7b29fc208dd86f2911 not found: ID does not exist" containerID="dd16e17488fdebb13f3d90041235042f5406f52c6ff6ce7b29fc208dd86f2911" Mar 10 19:13:06 crc kubenswrapper[4861]: I0310 19:13:06.666697 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dd16e17488fdebb13f3d90041235042f5406f52c6ff6ce7b29fc208dd86f2911"} err="failed to get container status \"dd16e17488fdebb13f3d90041235042f5406f52c6ff6ce7b29fc208dd86f2911\": rpc error: code = NotFound desc = could not find container \"dd16e17488fdebb13f3d90041235042f5406f52c6ff6ce7b29fc208dd86f2911\": container with ID starting with dd16e17488fdebb13f3d90041235042f5406f52c6ff6ce7b29fc208dd86f2911 not found: ID does not exist" Mar 10 19:13:06 crc kubenswrapper[4861]: I0310 19:13:06.676348 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f078c086-e6d2-4bfe-9767-8b60c59f9c5f-dns-svc\") pod \"f078c086-e6d2-4bfe-9767-8b60c59f9c5f\" (UID: \"f078c086-e6d2-4bfe-9767-8b60c59f9c5f\") " Mar 10 19:13:06 crc kubenswrapper[4861]: I0310 19:13:06.676551 4861 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f078c086-e6d2-4bfe-9767-8b60c59f9c5f-config\") pod \"f078c086-e6d2-4bfe-9767-8b60c59f9c5f\" (UID: \"f078c086-e6d2-4bfe-9767-8b60c59f9c5f\") " Mar 10 19:13:06 crc kubenswrapper[4861]: I0310 19:13:06.676590 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f078c086-e6d2-4bfe-9767-8b60c59f9c5f-ovsdbserver-sb\") pod \"f078c086-e6d2-4bfe-9767-8b60c59f9c5f\" (UID: \"f078c086-e6d2-4bfe-9767-8b60c59f9c5f\") " Mar 10 19:13:06 crc kubenswrapper[4861]: I0310 19:13:06.676644 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f078c086-e6d2-4bfe-9767-8b60c59f9c5f-dns-swift-storage-0\") pod \"f078c086-e6d2-4bfe-9767-8b60c59f9c5f\" (UID: \"f078c086-e6d2-4bfe-9767-8b60c59f9c5f\") " Mar 10 19:13:06 crc kubenswrapper[4861]: I0310 19:13:06.676882 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f078c086-e6d2-4bfe-9767-8b60c59f9c5f-ovsdbserver-nb\") pod \"f078c086-e6d2-4bfe-9767-8b60c59f9c5f\" (UID: \"f078c086-e6d2-4bfe-9767-8b60c59f9c5f\") " Mar 10 19:13:06 crc kubenswrapper[4861]: I0310 19:13:06.676949 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qlbh2\" (UniqueName: \"kubernetes.io/projected/f078c086-e6d2-4bfe-9767-8b60c59f9c5f-kube-api-access-qlbh2\") pod \"f078c086-e6d2-4bfe-9767-8b60c59f9c5f\" (UID: \"f078c086-e6d2-4bfe-9767-8b60c59f9c5f\") " Mar 10 19:13:06 crc kubenswrapper[4861]: I0310 19:13:06.685878 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f078c086-e6d2-4bfe-9767-8b60c59f9c5f-kube-api-access-qlbh2" (OuterVolumeSpecName: "kube-api-access-qlbh2") pod "f078c086-e6d2-4bfe-9767-8b60c59f9c5f" 
(UID: "f078c086-e6d2-4bfe-9767-8b60c59f9c5f"). InnerVolumeSpecName "kube-api-access-qlbh2". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 10 19:13:06 crc kubenswrapper[4861]: I0310 19:13:06.724809 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f078c086-e6d2-4bfe-9767-8b60c59f9c5f-config" (OuterVolumeSpecName: "config") pod "f078c086-e6d2-4bfe-9767-8b60c59f9c5f" (UID: "f078c086-e6d2-4bfe-9767-8b60c59f9c5f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 10 19:13:06 crc kubenswrapper[4861]: I0310 19:13:06.730093 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f078c086-e6d2-4bfe-9767-8b60c59f9c5f-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "f078c086-e6d2-4bfe-9767-8b60c59f9c5f" (UID: "f078c086-e6d2-4bfe-9767-8b60c59f9c5f"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 10 19:13:06 crc kubenswrapper[4861]: I0310 19:13:06.749263 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f078c086-e6d2-4bfe-9767-8b60c59f9c5f-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "f078c086-e6d2-4bfe-9767-8b60c59f9c5f" (UID: "f078c086-e6d2-4bfe-9767-8b60c59f9c5f"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 10 19:13:06 crc kubenswrapper[4861]: I0310 19:13:06.759810 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f078c086-e6d2-4bfe-9767-8b60c59f9c5f-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "f078c086-e6d2-4bfe-9767-8b60c59f9c5f" (UID: "f078c086-e6d2-4bfe-9767-8b60c59f9c5f"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 10 19:13:06 crc kubenswrapper[4861]: I0310 19:13:06.781879 4861 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f078c086-e6d2-4bfe-9767-8b60c59f9c5f-config\") on node \"crc\" DevicePath \"\"" Mar 10 19:13:06 crc kubenswrapper[4861]: I0310 19:13:06.781915 4861 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f078c086-e6d2-4bfe-9767-8b60c59f9c5f-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Mar 10 19:13:06 crc kubenswrapper[4861]: I0310 19:13:06.781925 4861 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f078c086-e6d2-4bfe-9767-8b60c59f9c5f-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Mar 10 19:13:06 crc kubenswrapper[4861]: I0310 19:13:06.781934 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qlbh2\" (UniqueName: \"kubernetes.io/projected/f078c086-e6d2-4bfe-9767-8b60c59f9c5f-kube-api-access-qlbh2\") on node \"crc\" DevicePath \"\"" Mar 10 19:13:06 crc kubenswrapper[4861]: I0310 19:13:06.781942 4861 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f078c086-e6d2-4bfe-9767-8b60c59f9c5f-dns-svc\") on node \"crc\" DevicePath \"\"" Mar 10 19:13:06 crc kubenswrapper[4861]: I0310 19:13:06.803589 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f078c086-e6d2-4bfe-9767-8b60c59f9c5f-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "f078c086-e6d2-4bfe-9767-8b60c59f9c5f" (UID: "f078c086-e6d2-4bfe-9767-8b60c59f9c5f"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 10 19:13:06 crc kubenswrapper[4861]: I0310 19:13:06.885826 4861 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f078c086-e6d2-4bfe-9767-8b60c59f9c5f-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Mar 10 19:13:06 crc kubenswrapper[4861]: I0310 19:13:06.948647 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7bd5679c8c-84fjp"] Mar 10 19:13:06 crc kubenswrapper[4861]: I0310 19:13:06.956487 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7bd5679c8c-84fjp"] Mar 10 19:13:06 crc kubenswrapper[4861]: I0310 19:13:06.976804 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f078c086-e6d2-4bfe-9767-8b60c59f9c5f" path="/var/lib/kubelet/pods/f078c086-e6d2-4bfe-9767-8b60c59f9c5f/volumes" Mar 10 19:13:07 crc kubenswrapper[4861]: I0310 19:13:07.625168 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a31600f9-88a4-4ecb-8da3-84c966bf4a63","Type":"ContainerStarted","Data":"ae3d77023ba112fc99fe18d4758acbc6d9046be7d58409111c17e57dab6ae470"} Mar 10 19:13:07 crc kubenswrapper[4861]: I0310 19:13:07.625546 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Mar 10 19:13:07 crc kubenswrapper[4861]: I0310 19:13:07.654137 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.454239927 podStartE2EDuration="6.654112826s" podCreationTimestamp="2026-03-10 19:13:01 +0000 UTC" firstStartedPulling="2026-03-10 19:13:02.680943056 +0000 UTC m=+1526.444379016" lastFinishedPulling="2026-03-10 19:13:06.880815965 +0000 UTC m=+1530.644251915" observedRunningTime="2026-03-10 19:13:07.643540438 +0000 UTC m=+1531.406976418" watchObservedRunningTime="2026-03-10 19:13:07.654112826 +0000 UTC m=+1531.417548786" Mar 10 19:13:09 crc 
kubenswrapper[4861]: I0310 19:13:09.649844 4861 generic.go:334] "Generic (PLEG): container finished" podID="e37688d9-015d-40a4-b3f3-606e4d4bdff4" containerID="eab553f654758da04924379d977d326b7bdacacc8303db5c759adcd672af64ab" exitCode=0 Mar 10 19:13:09 crc kubenswrapper[4861]: I0310 19:13:09.650193 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-pzt7q" event={"ID":"e37688d9-015d-40a4-b3f3-606e4d4bdff4","Type":"ContainerDied","Data":"eab553f654758da04924379d977d326b7bdacacc8303db5c759adcd672af64ab"} Mar 10 19:13:11 crc kubenswrapper[4861]: I0310 19:13:11.104815 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-pzt7q" Mar 10 19:13:11 crc kubenswrapper[4861]: I0310 19:13:11.295378 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w5n9f\" (UniqueName: \"kubernetes.io/projected/e37688d9-015d-40a4-b3f3-606e4d4bdff4-kube-api-access-w5n9f\") pod \"e37688d9-015d-40a4-b3f3-606e4d4bdff4\" (UID: \"e37688d9-015d-40a4-b3f3-606e4d4bdff4\") " Mar 10 19:13:11 crc kubenswrapper[4861]: I0310 19:13:11.296686 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e37688d9-015d-40a4-b3f3-606e4d4bdff4-scripts\") pod \"e37688d9-015d-40a4-b3f3-606e4d4bdff4\" (UID: \"e37688d9-015d-40a4-b3f3-606e4d4bdff4\") " Mar 10 19:13:11 crc kubenswrapper[4861]: I0310 19:13:11.296866 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e37688d9-015d-40a4-b3f3-606e4d4bdff4-combined-ca-bundle\") pod \"e37688d9-015d-40a4-b3f3-606e4d4bdff4\" (UID: \"e37688d9-015d-40a4-b3f3-606e4d4bdff4\") " Mar 10 19:13:11 crc kubenswrapper[4861]: I0310 19:13:11.297071 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/e37688d9-015d-40a4-b3f3-606e4d4bdff4-config-data\") pod \"e37688d9-015d-40a4-b3f3-606e4d4bdff4\" (UID: \"e37688d9-015d-40a4-b3f3-606e4d4bdff4\") " Mar 10 19:13:11 crc kubenswrapper[4861]: I0310 19:13:11.302938 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e37688d9-015d-40a4-b3f3-606e4d4bdff4-kube-api-access-w5n9f" (OuterVolumeSpecName: "kube-api-access-w5n9f") pod "e37688d9-015d-40a4-b3f3-606e4d4bdff4" (UID: "e37688d9-015d-40a4-b3f3-606e4d4bdff4"). InnerVolumeSpecName "kube-api-access-w5n9f". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 10 19:13:11 crc kubenswrapper[4861]: I0310 19:13:11.311006 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e37688d9-015d-40a4-b3f3-606e4d4bdff4-scripts" (OuterVolumeSpecName: "scripts") pod "e37688d9-015d-40a4-b3f3-606e4d4bdff4" (UID: "e37688d9-015d-40a4-b3f3-606e4d4bdff4"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 10 19:13:11 crc kubenswrapper[4861]: I0310 19:13:11.323847 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e37688d9-015d-40a4-b3f3-606e4d4bdff4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e37688d9-015d-40a4-b3f3-606e4d4bdff4" (UID: "e37688d9-015d-40a4-b3f3-606e4d4bdff4"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 10 19:13:11 crc kubenswrapper[4861]: I0310 19:13:11.333546 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e37688d9-015d-40a4-b3f3-606e4d4bdff4-config-data" (OuterVolumeSpecName: "config-data") pod "e37688d9-015d-40a4-b3f3-606e4d4bdff4" (UID: "e37688d9-015d-40a4-b3f3-606e4d4bdff4"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 10 19:13:11 crc kubenswrapper[4861]: I0310 19:13:11.399404 4861 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e37688d9-015d-40a4-b3f3-606e4d4bdff4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 10 19:13:11 crc kubenswrapper[4861]: I0310 19:13:11.399440 4861 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e37688d9-015d-40a4-b3f3-606e4d4bdff4-config-data\") on node \"crc\" DevicePath \"\"" Mar 10 19:13:11 crc kubenswrapper[4861]: I0310 19:13:11.399449 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w5n9f\" (UniqueName: \"kubernetes.io/projected/e37688d9-015d-40a4-b3f3-606e4d4bdff4-kube-api-access-w5n9f\") on node \"crc\" DevicePath \"\"" Mar 10 19:13:11 crc kubenswrapper[4861]: I0310 19:13:11.399459 4861 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e37688d9-015d-40a4-b3f3-606e4d4bdff4-scripts\") on node \"crc\" DevicePath \"\"" Mar 10 19:13:11 crc kubenswrapper[4861]: I0310 19:13:11.706026 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-pzt7q" event={"ID":"e37688d9-015d-40a4-b3f3-606e4d4bdff4","Type":"ContainerDied","Data":"09c1568831336205421beeb2d45c8591c3105250ca1cae1072023ca0c3f4188e"} Mar 10 19:13:11 crc kubenswrapper[4861]: I0310 19:13:11.706081 4861 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="09c1568831336205421beeb2d45c8591c3105250ca1cae1072023ca0c3f4188e" Mar 10 19:13:11 crc kubenswrapper[4861]: I0310 19:13:11.706104 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-pzt7q" Mar 10 19:13:11 crc kubenswrapper[4861]: I0310 19:13:11.870996 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Mar 10 19:13:11 crc kubenswrapper[4861]: I0310 19:13:11.871275 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="b7310a13-2889-46f6-806e-8c94e4d88b18" containerName="nova-api-log" containerID="cri-o://2f1f637aedfe9a29f6ab3af92997749c8717fcb5e6022368f225129be444b5bc" gracePeriod=30 Mar 10 19:13:11 crc kubenswrapper[4861]: I0310 19:13:11.871380 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="b7310a13-2889-46f6-806e-8c94e4d88b18" containerName="nova-api-api" containerID="cri-o://689650e5ee68c858d56dbb7f5825566b3974b73ec52fa6f0de3f021ff4555ee6" gracePeriod=30 Mar 10 19:13:11 crc kubenswrapper[4861]: I0310 19:13:11.941393 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Mar 10 19:13:11 crc kubenswrapper[4861]: I0310 19:13:11.942005 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="95db3120-a065-4f8b-8146-ab4b9f399177" containerName="nova-scheduler-scheduler" containerID="cri-o://0a3505a9a6d736eb80c683787f502c930643f7120a6d9b260f8f575a42c1faa4" gracePeriod=30 Mar 10 19:13:11 crc kubenswrapper[4861]: I0310 19:13:11.960505 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Mar 10 19:13:11 crc kubenswrapper[4861]: I0310 19:13:11.960819 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="c57556ab-4562-4e64-a986-9564f3bd682b" containerName="nova-metadata-log" containerID="cri-o://e7de623e1348c7c6e4278b5fd10022b83192e45f23c9d85d37f6c143b2feaddb" gracePeriod=30 Mar 10 19:13:11 crc kubenswrapper[4861]: I0310 19:13:11.961067 4861 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="c57556ab-4562-4e64-a986-9564f3bd682b" containerName="nova-metadata-metadata" containerID="cri-o://c3114c3af9d0ce96583963c7530b67958e0f4177469f20af797c79f0aa0bd924" gracePeriod=30 Mar 10 19:13:12 crc kubenswrapper[4861]: I0310 19:13:12.464270 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Mar 10 19:13:12 crc kubenswrapper[4861]: I0310 19:13:12.621272 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b7310a13-2889-46f6-806e-8c94e4d88b18-combined-ca-bundle\") pod \"b7310a13-2889-46f6-806e-8c94e4d88b18\" (UID: \"b7310a13-2889-46f6-806e-8c94e4d88b18\") " Mar 10 19:13:12 crc kubenswrapper[4861]: I0310 19:13:12.621325 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b7310a13-2889-46f6-806e-8c94e4d88b18-internal-tls-certs\") pod \"b7310a13-2889-46f6-806e-8c94e4d88b18\" (UID: \"b7310a13-2889-46f6-806e-8c94e4d88b18\") " Mar 10 19:13:12 crc kubenswrapper[4861]: I0310 19:13:12.621425 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b7310a13-2889-46f6-806e-8c94e4d88b18-logs\") pod \"b7310a13-2889-46f6-806e-8c94e4d88b18\" (UID: \"b7310a13-2889-46f6-806e-8c94e4d88b18\") " Mar 10 19:13:12 crc kubenswrapper[4861]: I0310 19:13:12.621481 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b7310a13-2889-46f6-806e-8c94e4d88b18-config-data\") pod \"b7310a13-2889-46f6-806e-8c94e4d88b18\" (UID: \"b7310a13-2889-46f6-806e-8c94e4d88b18\") " Mar 10 19:13:12 crc kubenswrapper[4861]: I0310 19:13:12.621518 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b7310a13-2889-46f6-806e-8c94e4d88b18-public-tls-certs\") pod \"b7310a13-2889-46f6-806e-8c94e4d88b18\" (UID: \"b7310a13-2889-46f6-806e-8c94e4d88b18\") " Mar 10 19:13:12 crc kubenswrapper[4861]: I0310 19:13:12.621594 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m5qjw\" (UniqueName: \"kubernetes.io/projected/b7310a13-2889-46f6-806e-8c94e4d88b18-kube-api-access-m5qjw\") pod \"b7310a13-2889-46f6-806e-8c94e4d88b18\" (UID: \"b7310a13-2889-46f6-806e-8c94e4d88b18\") " Mar 10 19:13:12 crc kubenswrapper[4861]: I0310 19:13:12.626732 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b7310a13-2889-46f6-806e-8c94e4d88b18-logs" (OuterVolumeSpecName: "logs") pod "b7310a13-2889-46f6-806e-8c94e4d88b18" (UID: "b7310a13-2889-46f6-806e-8c94e4d88b18"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 10 19:13:12 crc kubenswrapper[4861]: I0310 19:13:12.627519 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b7310a13-2889-46f6-806e-8c94e4d88b18-kube-api-access-m5qjw" (OuterVolumeSpecName: "kube-api-access-m5qjw") pod "b7310a13-2889-46f6-806e-8c94e4d88b18" (UID: "b7310a13-2889-46f6-806e-8c94e4d88b18"). InnerVolumeSpecName "kube-api-access-m5qjw". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 10 19:13:12 crc kubenswrapper[4861]: I0310 19:13:12.646406 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b7310a13-2889-46f6-806e-8c94e4d88b18-config-data" (OuterVolumeSpecName: "config-data") pod "b7310a13-2889-46f6-806e-8c94e4d88b18" (UID: "b7310a13-2889-46f6-806e-8c94e4d88b18"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 10 19:13:12 crc kubenswrapper[4861]: I0310 19:13:12.658054 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b7310a13-2889-46f6-806e-8c94e4d88b18-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b7310a13-2889-46f6-806e-8c94e4d88b18" (UID: "b7310a13-2889-46f6-806e-8c94e4d88b18"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 10 19:13:12 crc kubenswrapper[4861]: I0310 19:13:12.684989 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b7310a13-2889-46f6-806e-8c94e4d88b18-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "b7310a13-2889-46f6-806e-8c94e4d88b18" (UID: "b7310a13-2889-46f6-806e-8c94e4d88b18"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 10 19:13:12 crc kubenswrapper[4861]: I0310 19:13:12.686995 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b7310a13-2889-46f6-806e-8c94e4d88b18-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "b7310a13-2889-46f6-806e-8c94e4d88b18" (UID: "b7310a13-2889-46f6-806e-8c94e4d88b18"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 10 19:13:12 crc kubenswrapper[4861]: I0310 19:13:12.724054 4861 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b7310a13-2889-46f6-806e-8c94e4d88b18-logs\") on node \"crc\" DevicePath \"\"" Mar 10 19:13:12 crc kubenswrapper[4861]: I0310 19:13:12.724086 4861 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b7310a13-2889-46f6-806e-8c94e4d88b18-config-data\") on node \"crc\" DevicePath \"\"" Mar 10 19:13:12 crc kubenswrapper[4861]: I0310 19:13:12.724098 4861 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b7310a13-2889-46f6-806e-8c94e4d88b18-public-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 10 19:13:12 crc kubenswrapper[4861]: I0310 19:13:12.724110 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m5qjw\" (UniqueName: \"kubernetes.io/projected/b7310a13-2889-46f6-806e-8c94e4d88b18-kube-api-access-m5qjw\") on node \"crc\" DevicePath \"\"" Mar 10 19:13:12 crc kubenswrapper[4861]: I0310 19:13:12.724122 4861 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b7310a13-2889-46f6-806e-8c94e4d88b18-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 10 19:13:12 crc kubenswrapper[4861]: I0310 19:13:12.724134 4861 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b7310a13-2889-46f6-806e-8c94e4d88b18-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 10 19:13:12 crc kubenswrapper[4861]: I0310 19:13:12.724500 4861 generic.go:334] "Generic (PLEG): container finished" podID="b7310a13-2889-46f6-806e-8c94e4d88b18" containerID="689650e5ee68c858d56dbb7f5825566b3974b73ec52fa6f0de3f021ff4555ee6" exitCode=0 Mar 10 19:13:12 crc kubenswrapper[4861]: I0310 19:13:12.724525 4861 generic.go:334] 
"Generic (PLEG): container finished" podID="b7310a13-2889-46f6-806e-8c94e4d88b18" containerID="2f1f637aedfe9a29f6ab3af92997749c8717fcb5e6022368f225129be444b5bc" exitCode=143 Mar 10 19:13:12 crc kubenswrapper[4861]: I0310 19:13:12.724677 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Mar 10 19:13:12 crc kubenswrapper[4861]: I0310 19:13:12.724686 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"b7310a13-2889-46f6-806e-8c94e4d88b18","Type":"ContainerDied","Data":"689650e5ee68c858d56dbb7f5825566b3974b73ec52fa6f0de3f021ff4555ee6"} Mar 10 19:13:12 crc kubenswrapper[4861]: I0310 19:13:12.724782 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"b7310a13-2889-46f6-806e-8c94e4d88b18","Type":"ContainerDied","Data":"2f1f637aedfe9a29f6ab3af92997749c8717fcb5e6022368f225129be444b5bc"} Mar 10 19:13:12 crc kubenswrapper[4861]: I0310 19:13:12.724843 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"b7310a13-2889-46f6-806e-8c94e4d88b18","Type":"ContainerDied","Data":"c97385981ce1fd2f4a2f2f9ff148f5a73cccbb2c6c8bd755413885f1e4523815"} Mar 10 19:13:12 crc kubenswrapper[4861]: I0310 19:13:12.724823 4861 scope.go:117] "RemoveContainer" containerID="689650e5ee68c858d56dbb7f5825566b3974b73ec52fa6f0de3f021ff4555ee6" Mar 10 19:13:12 crc kubenswrapper[4861]: I0310 19:13:12.727323 4861 generic.go:334] "Generic (PLEG): container finished" podID="c57556ab-4562-4e64-a986-9564f3bd682b" containerID="e7de623e1348c7c6e4278b5fd10022b83192e45f23c9d85d37f6c143b2feaddb" exitCode=143 Mar 10 19:13:12 crc kubenswrapper[4861]: I0310 19:13:12.727368 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"c57556ab-4562-4e64-a986-9564f3bd682b","Type":"ContainerDied","Data":"e7de623e1348c7c6e4278b5fd10022b83192e45f23c9d85d37f6c143b2feaddb"} Mar 10 19:13:12 crc 
kubenswrapper[4861]: I0310 19:13:12.806220 4861 scope.go:117] "RemoveContainer" containerID="2f1f637aedfe9a29f6ab3af92997749c8717fcb5e6022368f225129be444b5bc" Mar 10 19:13:12 crc kubenswrapper[4861]: I0310 19:13:12.814629 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Mar 10 19:13:12 crc kubenswrapper[4861]: I0310 19:13:12.825506 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Mar 10 19:13:12 crc kubenswrapper[4861]: I0310 19:13:12.829206 4861 scope.go:117] "RemoveContainer" containerID="689650e5ee68c858d56dbb7f5825566b3974b73ec52fa6f0de3f021ff4555ee6" Mar 10 19:13:12 crc kubenswrapper[4861]: E0310 19:13:12.829614 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"689650e5ee68c858d56dbb7f5825566b3974b73ec52fa6f0de3f021ff4555ee6\": container with ID starting with 689650e5ee68c858d56dbb7f5825566b3974b73ec52fa6f0de3f021ff4555ee6 not found: ID does not exist" containerID="689650e5ee68c858d56dbb7f5825566b3974b73ec52fa6f0de3f021ff4555ee6" Mar 10 19:13:12 crc kubenswrapper[4861]: I0310 19:13:12.829652 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"689650e5ee68c858d56dbb7f5825566b3974b73ec52fa6f0de3f021ff4555ee6"} err="failed to get container status \"689650e5ee68c858d56dbb7f5825566b3974b73ec52fa6f0de3f021ff4555ee6\": rpc error: code = NotFound desc = could not find container \"689650e5ee68c858d56dbb7f5825566b3974b73ec52fa6f0de3f021ff4555ee6\": container with ID starting with 689650e5ee68c858d56dbb7f5825566b3974b73ec52fa6f0de3f021ff4555ee6 not found: ID does not exist" Mar 10 19:13:12 crc kubenswrapper[4861]: I0310 19:13:12.829677 4861 scope.go:117] "RemoveContainer" containerID="2f1f637aedfe9a29f6ab3af92997749c8717fcb5e6022368f225129be444b5bc" Mar 10 19:13:12 crc kubenswrapper[4861]: E0310 19:13:12.832081 4861 log.go:32] "ContainerStatus from runtime service failed" 
err="rpc error: code = NotFound desc = could not find container \"2f1f637aedfe9a29f6ab3af92997749c8717fcb5e6022368f225129be444b5bc\": container with ID starting with 2f1f637aedfe9a29f6ab3af92997749c8717fcb5e6022368f225129be444b5bc not found: ID does not exist" containerID="2f1f637aedfe9a29f6ab3af92997749c8717fcb5e6022368f225129be444b5bc" Mar 10 19:13:12 crc kubenswrapper[4861]: I0310 19:13:12.832144 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2f1f637aedfe9a29f6ab3af92997749c8717fcb5e6022368f225129be444b5bc"} err="failed to get container status \"2f1f637aedfe9a29f6ab3af92997749c8717fcb5e6022368f225129be444b5bc\": rpc error: code = NotFound desc = could not find container \"2f1f637aedfe9a29f6ab3af92997749c8717fcb5e6022368f225129be444b5bc\": container with ID starting with 2f1f637aedfe9a29f6ab3af92997749c8717fcb5e6022368f225129be444b5bc not found: ID does not exist" Mar 10 19:13:12 crc kubenswrapper[4861]: I0310 19:13:12.832196 4861 scope.go:117] "RemoveContainer" containerID="689650e5ee68c858d56dbb7f5825566b3974b73ec52fa6f0de3f021ff4555ee6" Mar 10 19:13:12 crc kubenswrapper[4861]: I0310 19:13:12.832830 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"689650e5ee68c858d56dbb7f5825566b3974b73ec52fa6f0de3f021ff4555ee6"} err="failed to get container status \"689650e5ee68c858d56dbb7f5825566b3974b73ec52fa6f0de3f021ff4555ee6\": rpc error: code = NotFound desc = could not find container \"689650e5ee68c858d56dbb7f5825566b3974b73ec52fa6f0de3f021ff4555ee6\": container with ID starting with 689650e5ee68c858d56dbb7f5825566b3974b73ec52fa6f0de3f021ff4555ee6 not found: ID does not exist" Mar 10 19:13:12 crc kubenswrapper[4861]: I0310 19:13:12.832857 4861 scope.go:117] "RemoveContainer" containerID="2f1f637aedfe9a29f6ab3af92997749c8717fcb5e6022368f225129be444b5bc" Mar 10 19:13:12 crc kubenswrapper[4861]: I0310 19:13:12.833172 4861 pod_container_deletor.go:53] "DeleteContainer 
returned error" containerID={"Type":"cri-o","ID":"2f1f637aedfe9a29f6ab3af92997749c8717fcb5e6022368f225129be444b5bc"} err="failed to get container status \"2f1f637aedfe9a29f6ab3af92997749c8717fcb5e6022368f225129be444b5bc\": rpc error: code = NotFound desc = could not find container \"2f1f637aedfe9a29f6ab3af92997749c8717fcb5e6022368f225129be444b5bc\": container with ID starting with 2f1f637aedfe9a29f6ab3af92997749c8717fcb5e6022368f225129be444b5bc not found: ID does not exist" Mar 10 19:13:12 crc kubenswrapper[4861]: I0310 19:13:12.843268 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Mar 10 19:13:12 crc kubenswrapper[4861]: E0310 19:13:12.843859 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f078c086-e6d2-4bfe-9767-8b60c59f9c5f" containerName="init" Mar 10 19:13:12 crc kubenswrapper[4861]: I0310 19:13:12.843888 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="f078c086-e6d2-4bfe-9767-8b60c59f9c5f" containerName="init" Mar 10 19:13:12 crc kubenswrapper[4861]: E0310 19:13:12.843923 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f078c086-e6d2-4bfe-9767-8b60c59f9c5f" containerName="dnsmasq-dns" Mar 10 19:13:12 crc kubenswrapper[4861]: I0310 19:13:12.843937 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="f078c086-e6d2-4bfe-9767-8b60c59f9c5f" containerName="dnsmasq-dns" Mar 10 19:13:12 crc kubenswrapper[4861]: E0310 19:13:12.843961 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b7310a13-2889-46f6-806e-8c94e4d88b18" containerName="nova-api-log" Mar 10 19:13:12 crc kubenswrapper[4861]: I0310 19:13:12.843976 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="b7310a13-2889-46f6-806e-8c94e4d88b18" containerName="nova-api-log" Mar 10 19:13:12 crc kubenswrapper[4861]: E0310 19:13:12.844000 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b7310a13-2889-46f6-806e-8c94e4d88b18" containerName="nova-api-api" Mar 10 19:13:12 crc 
kubenswrapper[4861]: I0310 19:13:12.844014 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="b7310a13-2889-46f6-806e-8c94e4d88b18" containerName="nova-api-api" Mar 10 19:13:12 crc kubenswrapper[4861]: E0310 19:13:12.844042 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e37688d9-015d-40a4-b3f3-606e4d4bdff4" containerName="nova-manage" Mar 10 19:13:12 crc kubenswrapper[4861]: I0310 19:13:12.844055 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="e37688d9-015d-40a4-b3f3-606e4d4bdff4" containerName="nova-manage" Mar 10 19:13:12 crc kubenswrapper[4861]: I0310 19:13:12.844406 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="b7310a13-2889-46f6-806e-8c94e4d88b18" containerName="nova-api-api" Mar 10 19:13:12 crc kubenswrapper[4861]: I0310 19:13:12.844442 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="e37688d9-015d-40a4-b3f3-606e4d4bdff4" containerName="nova-manage" Mar 10 19:13:12 crc kubenswrapper[4861]: I0310 19:13:12.844469 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="b7310a13-2889-46f6-806e-8c94e4d88b18" containerName="nova-api-log" Mar 10 19:13:12 crc kubenswrapper[4861]: I0310 19:13:12.844510 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="f078c086-e6d2-4bfe-9767-8b60c59f9c5f" containerName="dnsmasq-dns" Mar 10 19:13:12 crc kubenswrapper[4861]: I0310 19:13:12.846202 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Mar 10 19:13:12 crc kubenswrapper[4861]: I0310 19:13:12.848260 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Mar 10 19:13:12 crc kubenswrapper[4861]: I0310 19:13:12.848530 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Mar 10 19:13:12 crc kubenswrapper[4861]: I0310 19:13:12.848724 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Mar 10 19:13:12 crc kubenswrapper[4861]: I0310 19:13:12.854246 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Mar 10 19:13:12 crc kubenswrapper[4861]: I0310 19:13:12.968380 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b7310a13-2889-46f6-806e-8c94e4d88b18" path="/var/lib/kubelet/pods/b7310a13-2889-46f6-806e-8c94e4d88b18/volumes" Mar 10 19:13:13 crc kubenswrapper[4861]: I0310 19:13:13.031187 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/930df8f4-7ebf-4425-976f-4f52654586bb-public-tls-certs\") pod \"nova-api-0\" (UID: \"930df8f4-7ebf-4425-976f-4f52654586bb\") " pod="openstack/nova-api-0" Mar 10 19:13:13 crc kubenswrapper[4861]: I0310 19:13:13.031273 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/930df8f4-7ebf-4425-976f-4f52654586bb-config-data\") pod \"nova-api-0\" (UID: \"930df8f4-7ebf-4425-976f-4f52654586bb\") " pod="openstack/nova-api-0" Mar 10 19:13:13 crc kubenswrapper[4861]: I0310 19:13:13.031326 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dcmsh\" (UniqueName: \"kubernetes.io/projected/930df8f4-7ebf-4425-976f-4f52654586bb-kube-api-access-dcmsh\") pod \"nova-api-0\" (UID: 
\"930df8f4-7ebf-4425-976f-4f52654586bb\") " pod="openstack/nova-api-0" Mar 10 19:13:13 crc kubenswrapper[4861]: I0310 19:13:13.031586 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/930df8f4-7ebf-4425-976f-4f52654586bb-logs\") pod \"nova-api-0\" (UID: \"930df8f4-7ebf-4425-976f-4f52654586bb\") " pod="openstack/nova-api-0" Mar 10 19:13:13 crc kubenswrapper[4861]: I0310 19:13:13.031683 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/930df8f4-7ebf-4425-976f-4f52654586bb-internal-tls-certs\") pod \"nova-api-0\" (UID: \"930df8f4-7ebf-4425-976f-4f52654586bb\") " pod="openstack/nova-api-0" Mar 10 19:13:13 crc kubenswrapper[4861]: I0310 19:13:13.031914 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/930df8f4-7ebf-4425-976f-4f52654586bb-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"930df8f4-7ebf-4425-976f-4f52654586bb\") " pod="openstack/nova-api-0" Mar 10 19:13:13 crc kubenswrapper[4861]: I0310 19:13:13.133252 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/930df8f4-7ebf-4425-976f-4f52654586bb-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"930df8f4-7ebf-4425-976f-4f52654586bb\") " pod="openstack/nova-api-0" Mar 10 19:13:13 crc kubenswrapper[4861]: I0310 19:13:13.133386 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/930df8f4-7ebf-4425-976f-4f52654586bb-public-tls-certs\") pod \"nova-api-0\" (UID: \"930df8f4-7ebf-4425-976f-4f52654586bb\") " pod="openstack/nova-api-0" Mar 10 19:13:13 crc kubenswrapper[4861]: I0310 19:13:13.133429 4861 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/930df8f4-7ebf-4425-976f-4f52654586bb-config-data\") pod \"nova-api-0\" (UID: \"930df8f4-7ebf-4425-976f-4f52654586bb\") " pod="openstack/nova-api-0" Mar 10 19:13:13 crc kubenswrapper[4861]: I0310 19:13:13.133456 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dcmsh\" (UniqueName: \"kubernetes.io/projected/930df8f4-7ebf-4425-976f-4f52654586bb-kube-api-access-dcmsh\") pod \"nova-api-0\" (UID: \"930df8f4-7ebf-4425-976f-4f52654586bb\") " pod="openstack/nova-api-0" Mar 10 19:13:13 crc kubenswrapper[4861]: I0310 19:13:13.133563 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/930df8f4-7ebf-4425-976f-4f52654586bb-logs\") pod \"nova-api-0\" (UID: \"930df8f4-7ebf-4425-976f-4f52654586bb\") " pod="openstack/nova-api-0" Mar 10 19:13:13 crc kubenswrapper[4861]: I0310 19:13:13.133608 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/930df8f4-7ebf-4425-976f-4f52654586bb-internal-tls-certs\") pod \"nova-api-0\" (UID: \"930df8f4-7ebf-4425-976f-4f52654586bb\") " pod="openstack/nova-api-0" Mar 10 19:13:13 crc kubenswrapper[4861]: I0310 19:13:13.134652 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/930df8f4-7ebf-4425-976f-4f52654586bb-logs\") pod \"nova-api-0\" (UID: \"930df8f4-7ebf-4425-976f-4f52654586bb\") " pod="openstack/nova-api-0" Mar 10 19:13:13 crc kubenswrapper[4861]: I0310 19:13:13.138428 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/930df8f4-7ebf-4425-976f-4f52654586bb-internal-tls-certs\") pod \"nova-api-0\" (UID: \"930df8f4-7ebf-4425-976f-4f52654586bb\") " pod="openstack/nova-api-0" Mar 10 19:13:13 
crc kubenswrapper[4861]: I0310 19:13:13.140048 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/930df8f4-7ebf-4425-976f-4f52654586bb-public-tls-certs\") pod \"nova-api-0\" (UID: \"930df8f4-7ebf-4425-976f-4f52654586bb\") " pod="openstack/nova-api-0" Mar 10 19:13:13 crc kubenswrapper[4861]: I0310 19:13:13.141505 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/930df8f4-7ebf-4425-976f-4f52654586bb-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"930df8f4-7ebf-4425-976f-4f52654586bb\") " pod="openstack/nova-api-0" Mar 10 19:13:13 crc kubenswrapper[4861]: I0310 19:13:13.145201 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/930df8f4-7ebf-4425-976f-4f52654586bb-config-data\") pod \"nova-api-0\" (UID: \"930df8f4-7ebf-4425-976f-4f52654586bb\") " pod="openstack/nova-api-0" Mar 10 19:13:13 crc kubenswrapper[4861]: I0310 19:13:13.171750 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dcmsh\" (UniqueName: \"kubernetes.io/projected/930df8f4-7ebf-4425-976f-4f52654586bb-kube-api-access-dcmsh\") pod \"nova-api-0\" (UID: \"930df8f4-7ebf-4425-976f-4f52654586bb\") " pod="openstack/nova-api-0" Mar 10 19:13:13 crc kubenswrapper[4861]: I0310 19:13:13.214176 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Mar 10 19:13:13 crc kubenswrapper[4861]: W0310 19:13:13.730053 4861 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod930df8f4_7ebf_4425_976f_4f52654586bb.slice/crio-31f981a27b194e4e5e581abce58d2e5a9ac328e448f106ef98ce5902d35a30b9 WatchSource:0}: Error finding container 31f981a27b194e4e5e581abce58d2e5a9ac328e448f106ef98ce5902d35a30b9: Status 404 returned error can't find the container with id 31f981a27b194e4e5e581abce58d2e5a9ac328e448f106ef98ce5902d35a30b9 Mar 10 19:13:13 crc kubenswrapper[4861]: I0310 19:13:13.732379 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Mar 10 19:13:14 crc kubenswrapper[4861]: I0310 19:13:14.476057 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Mar 10 19:13:14 crc kubenswrapper[4861]: I0310 19:13:14.516868 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/95db3120-a065-4f8b-8146-ab4b9f399177-combined-ca-bundle\") pod \"95db3120-a065-4f8b-8146-ab4b9f399177\" (UID: \"95db3120-a065-4f8b-8146-ab4b9f399177\") " Mar 10 19:13:14 crc kubenswrapper[4861]: I0310 19:13:14.517084 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/95db3120-a065-4f8b-8146-ab4b9f399177-config-data\") pod \"95db3120-a065-4f8b-8146-ab4b9f399177\" (UID: \"95db3120-a065-4f8b-8146-ab4b9f399177\") " Mar 10 19:13:14 crc kubenswrapper[4861]: I0310 19:13:14.517128 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bppn6\" (UniqueName: \"kubernetes.io/projected/95db3120-a065-4f8b-8146-ab4b9f399177-kube-api-access-bppn6\") pod \"95db3120-a065-4f8b-8146-ab4b9f399177\" (UID: \"95db3120-a065-4f8b-8146-ab4b9f399177\") " Mar 10 
19:13:14 crc kubenswrapper[4861]: I0310 19:13:14.526970 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/95db3120-a065-4f8b-8146-ab4b9f399177-kube-api-access-bppn6" (OuterVolumeSpecName: "kube-api-access-bppn6") pod "95db3120-a065-4f8b-8146-ab4b9f399177" (UID: "95db3120-a065-4f8b-8146-ab4b9f399177"). InnerVolumeSpecName "kube-api-access-bppn6". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 10 19:13:14 crc kubenswrapper[4861]: I0310 19:13:14.548056 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/95db3120-a065-4f8b-8146-ab4b9f399177-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "95db3120-a065-4f8b-8146-ab4b9f399177" (UID: "95db3120-a065-4f8b-8146-ab4b9f399177"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 10 19:13:14 crc kubenswrapper[4861]: I0310 19:13:14.561834 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/95db3120-a065-4f8b-8146-ab4b9f399177-config-data" (OuterVolumeSpecName: "config-data") pod "95db3120-a065-4f8b-8146-ab4b9f399177" (UID: "95db3120-a065-4f8b-8146-ab4b9f399177"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 10 19:13:14 crc kubenswrapper[4861]: I0310 19:13:14.619210 4861 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/95db3120-a065-4f8b-8146-ab4b9f399177-config-data\") on node \"crc\" DevicePath \"\"" Mar 10 19:13:14 crc kubenswrapper[4861]: I0310 19:13:14.619256 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bppn6\" (UniqueName: \"kubernetes.io/projected/95db3120-a065-4f8b-8146-ab4b9f399177-kube-api-access-bppn6\") on node \"crc\" DevicePath \"\"" Mar 10 19:13:14 crc kubenswrapper[4861]: I0310 19:13:14.619275 4861 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/95db3120-a065-4f8b-8146-ab4b9f399177-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 10 19:13:14 crc kubenswrapper[4861]: I0310 19:13:14.759791 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"930df8f4-7ebf-4425-976f-4f52654586bb","Type":"ContainerStarted","Data":"6b7111851cf4f58c8e796e6b95e77ca3422cc025ae5304753172c6a1675a0fd1"} Mar 10 19:13:14 crc kubenswrapper[4861]: I0310 19:13:14.760824 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"930df8f4-7ebf-4425-976f-4f52654586bb","Type":"ContainerStarted","Data":"6eb92aeb41c6749ec876f80e99358ba49b19a270c11ada403487be3ca0ef76c4"} Mar 10 19:13:14 crc kubenswrapper[4861]: I0310 19:13:14.760970 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"930df8f4-7ebf-4425-976f-4f52654586bb","Type":"ContainerStarted","Data":"31f981a27b194e4e5e581abce58d2e5a9ac328e448f106ef98ce5902d35a30b9"} Mar 10 19:13:14 crc kubenswrapper[4861]: I0310 19:13:14.761963 4861 generic.go:334] "Generic (PLEG): container finished" podID="95db3120-a065-4f8b-8146-ab4b9f399177" 
containerID="0a3505a9a6d736eb80c683787f502c930643f7120a6d9b260f8f575a42c1faa4" exitCode=0 Mar 10 19:13:14 crc kubenswrapper[4861]: I0310 19:13:14.762061 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"95db3120-a065-4f8b-8146-ab4b9f399177","Type":"ContainerDied","Data":"0a3505a9a6d736eb80c683787f502c930643f7120a6d9b260f8f575a42c1faa4"} Mar 10 19:13:14 crc kubenswrapper[4861]: I0310 19:13:14.762123 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"95db3120-a065-4f8b-8146-ab4b9f399177","Type":"ContainerDied","Data":"d4249f5e72765d3294d937f63c8fa2b062f95093dce65b717446ca83d2ce4de3"} Mar 10 19:13:14 crc kubenswrapper[4861]: I0310 19:13:14.762225 4861 scope.go:117] "RemoveContainer" containerID="0a3505a9a6d736eb80c683787f502c930643f7120a6d9b260f8f575a42c1faa4" Mar 10 19:13:14 crc kubenswrapper[4861]: I0310 19:13:14.762409 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Mar 10 19:13:14 crc kubenswrapper[4861]: I0310 19:13:14.801347 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.8013318910000002 podStartE2EDuration="2.801331891s" podCreationTimestamp="2026-03-10 19:13:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-10 19:13:14.789742264 +0000 UTC m=+1538.553178284" watchObservedRunningTime="2026-03-10 19:13:14.801331891 +0000 UTC m=+1538.564767851" Mar 10 19:13:14 crc kubenswrapper[4861]: I0310 19:13:14.804891 4861 scope.go:117] "RemoveContainer" containerID="0a3505a9a6d736eb80c683787f502c930643f7120a6d9b260f8f575a42c1faa4" Mar 10 19:13:14 crc kubenswrapper[4861]: E0310 19:13:14.805365 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"0a3505a9a6d736eb80c683787f502c930643f7120a6d9b260f8f575a42c1faa4\": container with ID starting with 0a3505a9a6d736eb80c683787f502c930643f7120a6d9b260f8f575a42c1faa4 not found: ID does not exist" containerID="0a3505a9a6d736eb80c683787f502c930643f7120a6d9b260f8f575a42c1faa4" Mar 10 19:13:14 crc kubenswrapper[4861]: I0310 19:13:14.805428 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0a3505a9a6d736eb80c683787f502c930643f7120a6d9b260f8f575a42c1faa4"} err="failed to get container status \"0a3505a9a6d736eb80c683787f502c930643f7120a6d9b260f8f575a42c1faa4\": rpc error: code = NotFound desc = could not find container \"0a3505a9a6d736eb80c683787f502c930643f7120a6d9b260f8f575a42c1faa4\": container with ID starting with 0a3505a9a6d736eb80c683787f502c930643f7120a6d9b260f8f575a42c1faa4 not found: ID does not exist" Mar 10 19:13:14 crc kubenswrapper[4861]: I0310 19:13:14.847003 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Mar 10 19:13:14 crc kubenswrapper[4861]: I0310 19:13:14.865466 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Mar 10 19:13:14 crc kubenswrapper[4861]: I0310 19:13:14.880047 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Mar 10 19:13:14 crc kubenswrapper[4861]: E0310 19:13:14.884270 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="95db3120-a065-4f8b-8146-ab4b9f399177" containerName="nova-scheduler-scheduler" Mar 10 19:13:14 crc kubenswrapper[4861]: I0310 19:13:14.884300 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="95db3120-a065-4f8b-8146-ab4b9f399177" containerName="nova-scheduler-scheduler" Mar 10 19:13:14 crc kubenswrapper[4861]: I0310 19:13:14.884867 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="95db3120-a065-4f8b-8146-ab4b9f399177" containerName="nova-scheduler-scheduler" Mar 10 19:13:14 crc kubenswrapper[4861]: I0310 
19:13:14.885682 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Mar 10 19:13:14 crc kubenswrapper[4861]: I0310 19:13:14.888291 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Mar 10 19:13:14 crc kubenswrapper[4861]: I0310 19:13:14.889506 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Mar 10 19:13:14 crc kubenswrapper[4861]: I0310 19:13:14.929820 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/36bd8cd3-7b2c-45fb-b171-aa2884df4e98-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"36bd8cd3-7b2c-45fb-b171-aa2884df4e98\") " pod="openstack/nova-scheduler-0" Mar 10 19:13:14 crc kubenswrapper[4861]: I0310 19:13:14.929894 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/36bd8cd3-7b2c-45fb-b171-aa2884df4e98-config-data\") pod \"nova-scheduler-0\" (UID: \"36bd8cd3-7b2c-45fb-b171-aa2884df4e98\") " pod="openstack/nova-scheduler-0" Mar 10 19:13:14 crc kubenswrapper[4861]: I0310 19:13:14.929948 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qsk4s\" (UniqueName: \"kubernetes.io/projected/36bd8cd3-7b2c-45fb-b171-aa2884df4e98-kube-api-access-qsk4s\") pod \"nova-scheduler-0\" (UID: \"36bd8cd3-7b2c-45fb-b171-aa2884df4e98\") " pod="openstack/nova-scheduler-0" Mar 10 19:13:14 crc kubenswrapper[4861]: I0310 19:13:14.979749 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="95db3120-a065-4f8b-8146-ab4b9f399177" path="/var/lib/kubelet/pods/95db3120-a065-4f8b-8146-ab4b9f399177/volumes" Mar 10 19:13:15 crc kubenswrapper[4861]: I0310 19:13:15.032080 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/36bd8cd3-7b2c-45fb-b171-aa2884df4e98-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"36bd8cd3-7b2c-45fb-b171-aa2884df4e98\") " pod="openstack/nova-scheduler-0" Mar 10 19:13:15 crc kubenswrapper[4861]: I0310 19:13:15.032155 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/36bd8cd3-7b2c-45fb-b171-aa2884df4e98-config-data\") pod \"nova-scheduler-0\" (UID: \"36bd8cd3-7b2c-45fb-b171-aa2884df4e98\") " pod="openstack/nova-scheduler-0" Mar 10 19:13:15 crc kubenswrapper[4861]: I0310 19:13:15.032185 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qsk4s\" (UniqueName: \"kubernetes.io/projected/36bd8cd3-7b2c-45fb-b171-aa2884df4e98-kube-api-access-qsk4s\") pod \"nova-scheduler-0\" (UID: \"36bd8cd3-7b2c-45fb-b171-aa2884df4e98\") " pod="openstack/nova-scheduler-0" Mar 10 19:13:15 crc kubenswrapper[4861]: I0310 19:13:15.039335 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/36bd8cd3-7b2c-45fb-b171-aa2884df4e98-config-data\") pod \"nova-scheduler-0\" (UID: \"36bd8cd3-7b2c-45fb-b171-aa2884df4e98\") " pod="openstack/nova-scheduler-0" Mar 10 19:13:15 crc kubenswrapper[4861]: I0310 19:13:15.039618 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/36bd8cd3-7b2c-45fb-b171-aa2884df4e98-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"36bd8cd3-7b2c-45fb-b171-aa2884df4e98\") " pod="openstack/nova-scheduler-0" Mar 10 19:13:15 crc kubenswrapper[4861]: I0310 19:13:15.049430 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qsk4s\" (UniqueName: \"kubernetes.io/projected/36bd8cd3-7b2c-45fb-b171-aa2884df4e98-kube-api-access-qsk4s\") pod \"nova-scheduler-0\" (UID: 
\"36bd8cd3-7b2c-45fb-b171-aa2884df4e98\") " pod="openstack/nova-scheduler-0" Mar 10 19:13:15 crc kubenswrapper[4861]: I0310 19:13:15.203931 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Mar 10 19:13:15 crc kubenswrapper[4861]: I0310 19:13:15.403338 4861 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="c57556ab-4562-4e64-a986-9564f3bd682b" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.203:8775/\": dial tcp 10.217.0.203:8775: connect: connection refused" Mar 10 19:13:15 crc kubenswrapper[4861]: I0310 19:13:15.403519 4861 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="c57556ab-4562-4e64-a986-9564f3bd682b" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.203:8775/\": dial tcp 10.217.0.203:8775: connect: connection refused" Mar 10 19:13:15 crc kubenswrapper[4861]: I0310 19:13:15.686336 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Mar 10 19:13:15 crc kubenswrapper[4861]: I0310 19:13:15.689502 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-snq7k\" (UniqueName: \"kubernetes.io/projected/c57556ab-4562-4e64-a986-9564f3bd682b-kube-api-access-snq7k\") pod \"c57556ab-4562-4e64-a986-9564f3bd682b\" (UID: \"c57556ab-4562-4e64-a986-9564f3bd682b\") " Mar 10 19:13:15 crc kubenswrapper[4861]: I0310 19:13:15.689763 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c57556ab-4562-4e64-a986-9564f3bd682b-combined-ca-bundle\") pod \"c57556ab-4562-4e64-a986-9564f3bd682b\" (UID: \"c57556ab-4562-4e64-a986-9564f3bd682b\") " Mar 10 19:13:15 crc kubenswrapper[4861]: I0310 19:13:15.689848 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c57556ab-4562-4e64-a986-9564f3bd682b-config-data\") pod \"c57556ab-4562-4e64-a986-9564f3bd682b\" (UID: \"c57556ab-4562-4e64-a986-9564f3bd682b\") " Mar 10 19:13:15 crc kubenswrapper[4861]: I0310 19:13:15.690026 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c57556ab-4562-4e64-a986-9564f3bd682b-logs\") pod \"c57556ab-4562-4e64-a986-9564f3bd682b\" (UID: \"c57556ab-4562-4e64-a986-9564f3bd682b\") " Mar 10 19:13:15 crc kubenswrapper[4861]: I0310 19:13:15.690114 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/c57556ab-4562-4e64-a986-9564f3bd682b-nova-metadata-tls-certs\") pod \"c57556ab-4562-4e64-a986-9564f3bd682b\" (UID: \"c57556ab-4562-4e64-a986-9564f3bd682b\") " Mar 10 19:13:15 crc kubenswrapper[4861]: I0310 19:13:15.692246 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/c57556ab-4562-4e64-a986-9564f3bd682b-logs" (OuterVolumeSpecName: "logs") pod "c57556ab-4562-4e64-a986-9564f3bd682b" (UID: "c57556ab-4562-4e64-a986-9564f3bd682b"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 10 19:13:15 crc kubenswrapper[4861]: I0310 19:13:15.694639 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c57556ab-4562-4e64-a986-9564f3bd682b-kube-api-access-snq7k" (OuterVolumeSpecName: "kube-api-access-snq7k") pod "c57556ab-4562-4e64-a986-9564f3bd682b" (UID: "c57556ab-4562-4e64-a986-9564f3bd682b"). InnerVolumeSpecName "kube-api-access-snq7k". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 10 19:13:15 crc kubenswrapper[4861]: I0310 19:13:15.728070 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c57556ab-4562-4e64-a986-9564f3bd682b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c57556ab-4562-4e64-a986-9564f3bd682b" (UID: "c57556ab-4562-4e64-a986-9564f3bd682b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 10 19:13:15 crc kubenswrapper[4861]: I0310 19:13:15.751270 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c57556ab-4562-4e64-a986-9564f3bd682b-config-data" (OuterVolumeSpecName: "config-data") pod "c57556ab-4562-4e64-a986-9564f3bd682b" (UID: "c57556ab-4562-4e64-a986-9564f3bd682b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 10 19:13:15 crc kubenswrapper[4861]: I0310 19:13:15.775947 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c57556ab-4562-4e64-a986-9564f3bd682b-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "c57556ab-4562-4e64-a986-9564f3bd682b" (UID: "c57556ab-4562-4e64-a986-9564f3bd682b"). 
InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 10 19:13:15 crc kubenswrapper[4861]: I0310 19:13:15.789587 4861 generic.go:334] "Generic (PLEG): container finished" podID="c57556ab-4562-4e64-a986-9564f3bd682b" containerID="c3114c3af9d0ce96583963c7530b67958e0f4177469f20af797c79f0aa0bd924" exitCode=0 Mar 10 19:13:15 crc kubenswrapper[4861]: I0310 19:13:15.789916 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Mar 10 19:13:15 crc kubenswrapper[4861]: I0310 19:13:15.789814 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"c57556ab-4562-4e64-a986-9564f3bd682b","Type":"ContainerDied","Data":"c3114c3af9d0ce96583963c7530b67958e0f4177469f20af797c79f0aa0bd924"} Mar 10 19:13:15 crc kubenswrapper[4861]: I0310 19:13:15.790071 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"c57556ab-4562-4e64-a986-9564f3bd682b","Type":"ContainerDied","Data":"93e2bc0d1cfc328267574eb81002520766140adae1be3a7cd655ae90f174ccb3"} Mar 10 19:13:15 crc kubenswrapper[4861]: I0310 19:13:15.790093 4861 scope.go:117] "RemoveContainer" containerID="c3114c3af9d0ce96583963c7530b67958e0f4177469f20af797c79f0aa0bd924" Mar 10 19:13:15 crc kubenswrapper[4861]: I0310 19:13:15.792100 4861 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c57556ab-4562-4e64-a986-9564f3bd682b-logs\") on node \"crc\" DevicePath \"\"" Mar 10 19:13:15 crc kubenswrapper[4861]: I0310 19:13:15.792584 4861 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/c57556ab-4562-4e64-a986-9564f3bd682b-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 10 19:13:15 crc kubenswrapper[4861]: I0310 19:13:15.792648 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-snq7k\" 
(UniqueName: \"kubernetes.io/projected/c57556ab-4562-4e64-a986-9564f3bd682b-kube-api-access-snq7k\") on node \"crc\" DevicePath \"\"" Mar 10 19:13:15 crc kubenswrapper[4861]: I0310 19:13:15.792925 4861 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c57556ab-4562-4e64-a986-9564f3bd682b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 10 19:13:15 crc kubenswrapper[4861]: I0310 19:13:15.793010 4861 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c57556ab-4562-4e64-a986-9564f3bd682b-config-data\") on node \"crc\" DevicePath \"\"" Mar 10 19:13:15 crc kubenswrapper[4861]: I0310 19:13:15.801098 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Mar 10 19:13:15 crc kubenswrapper[4861]: I0310 19:13:15.825384 4861 scope.go:117] "RemoveContainer" containerID="e7de623e1348c7c6e4278b5fd10022b83192e45f23c9d85d37f6c143b2feaddb" Mar 10 19:13:15 crc kubenswrapper[4861]: I0310 19:13:15.837836 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Mar 10 19:13:15 crc kubenswrapper[4861]: I0310 19:13:15.858758 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Mar 10 19:13:15 crc kubenswrapper[4861]: I0310 19:13:15.862513 4861 scope.go:117] "RemoveContainer" containerID="c3114c3af9d0ce96583963c7530b67958e0f4177469f20af797c79f0aa0bd924" Mar 10 19:13:15 crc kubenswrapper[4861]: E0310 19:13:15.863573 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c3114c3af9d0ce96583963c7530b67958e0f4177469f20af797c79f0aa0bd924\": container with ID starting with c3114c3af9d0ce96583963c7530b67958e0f4177469f20af797c79f0aa0bd924 not found: ID does not exist" containerID="c3114c3af9d0ce96583963c7530b67958e0f4177469f20af797c79f0aa0bd924" Mar 10 19:13:15 crc kubenswrapper[4861]: I0310 
19:13:15.863618 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c3114c3af9d0ce96583963c7530b67958e0f4177469f20af797c79f0aa0bd924"} err="failed to get container status \"c3114c3af9d0ce96583963c7530b67958e0f4177469f20af797c79f0aa0bd924\": rpc error: code = NotFound desc = could not find container \"c3114c3af9d0ce96583963c7530b67958e0f4177469f20af797c79f0aa0bd924\": container with ID starting with c3114c3af9d0ce96583963c7530b67958e0f4177469f20af797c79f0aa0bd924 not found: ID does not exist" Mar 10 19:13:15 crc kubenswrapper[4861]: I0310 19:13:15.863641 4861 scope.go:117] "RemoveContainer" containerID="e7de623e1348c7c6e4278b5fd10022b83192e45f23c9d85d37f6c143b2feaddb" Mar 10 19:13:15 crc kubenswrapper[4861]: E0310 19:13:15.864047 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e7de623e1348c7c6e4278b5fd10022b83192e45f23c9d85d37f6c143b2feaddb\": container with ID starting with e7de623e1348c7c6e4278b5fd10022b83192e45f23c9d85d37f6c143b2feaddb not found: ID does not exist" containerID="e7de623e1348c7c6e4278b5fd10022b83192e45f23c9d85d37f6c143b2feaddb" Mar 10 19:13:15 crc kubenswrapper[4861]: I0310 19:13:15.864087 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e7de623e1348c7c6e4278b5fd10022b83192e45f23c9d85d37f6c143b2feaddb"} err="failed to get container status \"e7de623e1348c7c6e4278b5fd10022b83192e45f23c9d85d37f6c143b2feaddb\": rpc error: code = NotFound desc = could not find container \"e7de623e1348c7c6e4278b5fd10022b83192e45f23c9d85d37f6c143b2feaddb\": container with ID starting with e7de623e1348c7c6e4278b5fd10022b83192e45f23c9d85d37f6c143b2feaddb not found: ID does not exist" Mar 10 19:13:15 crc kubenswrapper[4861]: I0310 19:13:15.874057 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Mar 10 19:13:15 crc kubenswrapper[4861]: E0310 19:13:15.874478 4861 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c57556ab-4562-4e64-a986-9564f3bd682b" containerName="nova-metadata-log" Mar 10 19:13:15 crc kubenswrapper[4861]: I0310 19:13:15.874499 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="c57556ab-4562-4e64-a986-9564f3bd682b" containerName="nova-metadata-log" Mar 10 19:13:15 crc kubenswrapper[4861]: E0310 19:13:15.874543 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c57556ab-4562-4e64-a986-9564f3bd682b" containerName="nova-metadata-metadata" Mar 10 19:13:15 crc kubenswrapper[4861]: I0310 19:13:15.874596 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="c57556ab-4562-4e64-a986-9564f3bd682b" containerName="nova-metadata-metadata" Mar 10 19:13:15 crc kubenswrapper[4861]: I0310 19:13:15.874997 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="c57556ab-4562-4e64-a986-9564f3bd682b" containerName="nova-metadata-metadata" Mar 10 19:13:15 crc kubenswrapper[4861]: I0310 19:13:15.875026 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="c57556ab-4562-4e64-a986-9564f3bd682b" containerName="nova-metadata-log" Mar 10 19:13:15 crc kubenswrapper[4861]: I0310 19:13:15.876926 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Mar 10 19:13:15 crc kubenswrapper[4861]: I0310 19:13:15.882266 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Mar 10 19:13:15 crc kubenswrapper[4861]: I0310 19:13:15.882474 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Mar 10 19:13:15 crc kubenswrapper[4861]: I0310 19:13:15.886672 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Mar 10 19:13:15 crc kubenswrapper[4861]: I0310 19:13:15.896748 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ca9800e2-ceed-4197-90f1-97d14c918e45-config-data\") pod \"nova-metadata-0\" (UID: \"ca9800e2-ceed-4197-90f1-97d14c918e45\") " pod="openstack/nova-metadata-0" Mar 10 19:13:15 crc kubenswrapper[4861]: I0310 19:13:15.896797 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ca9800e2-ceed-4197-90f1-97d14c918e45-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"ca9800e2-ceed-4197-90f1-97d14c918e45\") " pod="openstack/nova-metadata-0" Mar 10 19:13:15 crc kubenswrapper[4861]: I0310 19:13:15.896838 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ca9800e2-ceed-4197-90f1-97d14c918e45-logs\") pod \"nova-metadata-0\" (UID: \"ca9800e2-ceed-4197-90f1-97d14c918e45\") " pod="openstack/nova-metadata-0" Mar 10 19:13:15 crc kubenswrapper[4861]: I0310 19:13:15.896903 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v4rgh\" (UniqueName: \"kubernetes.io/projected/ca9800e2-ceed-4197-90f1-97d14c918e45-kube-api-access-v4rgh\") pod \"nova-metadata-0\" (UID: 
\"ca9800e2-ceed-4197-90f1-97d14c918e45\") " pod="openstack/nova-metadata-0" Mar 10 19:13:15 crc kubenswrapper[4861]: I0310 19:13:15.897006 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/ca9800e2-ceed-4197-90f1-97d14c918e45-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"ca9800e2-ceed-4197-90f1-97d14c918e45\") " pod="openstack/nova-metadata-0" Mar 10 19:13:15 crc kubenswrapper[4861]: I0310 19:13:15.999026 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v4rgh\" (UniqueName: \"kubernetes.io/projected/ca9800e2-ceed-4197-90f1-97d14c918e45-kube-api-access-v4rgh\") pod \"nova-metadata-0\" (UID: \"ca9800e2-ceed-4197-90f1-97d14c918e45\") " pod="openstack/nova-metadata-0" Mar 10 19:13:15 crc kubenswrapper[4861]: I0310 19:13:15.999258 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/ca9800e2-ceed-4197-90f1-97d14c918e45-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"ca9800e2-ceed-4197-90f1-97d14c918e45\") " pod="openstack/nova-metadata-0" Mar 10 19:13:15 crc kubenswrapper[4861]: I0310 19:13:15.999463 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ca9800e2-ceed-4197-90f1-97d14c918e45-config-data\") pod \"nova-metadata-0\" (UID: \"ca9800e2-ceed-4197-90f1-97d14c918e45\") " pod="openstack/nova-metadata-0" Mar 10 19:13:15 crc kubenswrapper[4861]: I0310 19:13:15.999518 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ca9800e2-ceed-4197-90f1-97d14c918e45-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"ca9800e2-ceed-4197-90f1-97d14c918e45\") " pod="openstack/nova-metadata-0" Mar 10 19:13:15 crc kubenswrapper[4861]: I0310 
19:13:15.999614 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ca9800e2-ceed-4197-90f1-97d14c918e45-logs\") pod \"nova-metadata-0\" (UID: \"ca9800e2-ceed-4197-90f1-97d14c918e45\") " pod="openstack/nova-metadata-0" Mar 10 19:13:16 crc kubenswrapper[4861]: I0310 19:13:16.000104 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ca9800e2-ceed-4197-90f1-97d14c918e45-logs\") pod \"nova-metadata-0\" (UID: \"ca9800e2-ceed-4197-90f1-97d14c918e45\") " pod="openstack/nova-metadata-0" Mar 10 19:13:16 crc kubenswrapper[4861]: I0310 19:13:16.003747 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ca9800e2-ceed-4197-90f1-97d14c918e45-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"ca9800e2-ceed-4197-90f1-97d14c918e45\") " pod="openstack/nova-metadata-0" Mar 10 19:13:16 crc kubenswrapper[4861]: I0310 19:13:16.003824 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/ca9800e2-ceed-4197-90f1-97d14c918e45-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"ca9800e2-ceed-4197-90f1-97d14c918e45\") " pod="openstack/nova-metadata-0" Mar 10 19:13:16 crc kubenswrapper[4861]: I0310 19:13:16.005077 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ca9800e2-ceed-4197-90f1-97d14c918e45-config-data\") pod \"nova-metadata-0\" (UID: \"ca9800e2-ceed-4197-90f1-97d14c918e45\") " pod="openstack/nova-metadata-0" Mar 10 19:13:16 crc kubenswrapper[4861]: I0310 19:13:16.017222 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v4rgh\" (UniqueName: \"kubernetes.io/projected/ca9800e2-ceed-4197-90f1-97d14c918e45-kube-api-access-v4rgh\") pod \"nova-metadata-0\" (UID: 
\"ca9800e2-ceed-4197-90f1-97d14c918e45\") " pod="openstack/nova-metadata-0" Mar 10 19:13:16 crc kubenswrapper[4861]: I0310 19:13:16.200340 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Mar 10 19:13:16 crc kubenswrapper[4861]: I0310 19:13:16.673986 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Mar 10 19:13:16 crc kubenswrapper[4861]: I0310 19:13:16.804208 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"36bd8cd3-7b2c-45fb-b171-aa2884df4e98","Type":"ContainerStarted","Data":"a6e23144072f6812cabc16305a9cf3ff58abd4877ddf98f7f123b16f5863c1e7"} Mar 10 19:13:16 crc kubenswrapper[4861]: I0310 19:13:16.804875 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"36bd8cd3-7b2c-45fb-b171-aa2884df4e98","Type":"ContainerStarted","Data":"f9182397e118c17ba61bc1aedc5bdb71216d24446fc514c6ad177ac1a336b884"} Mar 10 19:13:16 crc kubenswrapper[4861]: I0310 19:13:16.809872 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"ca9800e2-ceed-4197-90f1-97d14c918e45","Type":"ContainerStarted","Data":"b40bb1f19fd687f7b78f82784aff6102abc02b389ec7d9052a7fd91d08f29c67"} Mar 10 19:13:16 crc kubenswrapper[4861]: I0310 19:13:16.833306 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.833290044 podStartE2EDuration="2.833290044s" podCreationTimestamp="2026-03-10 19:13:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-10 19:13:16.825583943 +0000 UTC m=+1540.589019933" watchObservedRunningTime="2026-03-10 19:13:16.833290044 +0000 UTC m=+1540.596726004" Mar 10 19:13:17 crc kubenswrapper[4861]: I0310 19:13:17.003385 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="c57556ab-4562-4e64-a986-9564f3bd682b" path="/var/lib/kubelet/pods/c57556ab-4562-4e64-a986-9564f3bd682b/volumes" Mar 10 19:13:17 crc kubenswrapper[4861]: I0310 19:13:17.610633 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-rjzth"] Mar 10 19:13:17 crc kubenswrapper[4861]: I0310 19:13:17.622011 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-rjzth" Mar 10 19:13:17 crc kubenswrapper[4861]: I0310 19:13:17.630049 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-rjzth"] Mar 10 19:13:17 crc kubenswrapper[4861]: I0310 19:13:17.633687 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/df438641-7c78-448e-8583-2f95268d238e-catalog-content\") pod \"redhat-operators-rjzth\" (UID: \"df438641-7c78-448e-8583-2f95268d238e\") " pod="openshift-marketplace/redhat-operators-rjzth" Mar 10 19:13:17 crc kubenswrapper[4861]: I0310 19:13:17.633899 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-scwj6\" (UniqueName: \"kubernetes.io/projected/df438641-7c78-448e-8583-2f95268d238e-kube-api-access-scwj6\") pod \"redhat-operators-rjzth\" (UID: \"df438641-7c78-448e-8583-2f95268d238e\") " pod="openshift-marketplace/redhat-operators-rjzth" Mar 10 19:13:17 crc kubenswrapper[4861]: I0310 19:13:17.634172 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/df438641-7c78-448e-8583-2f95268d238e-utilities\") pod \"redhat-operators-rjzth\" (UID: \"df438641-7c78-448e-8583-2f95268d238e\") " pod="openshift-marketplace/redhat-operators-rjzth" Mar 10 19:13:17 crc kubenswrapper[4861]: I0310 19:13:17.736144 4861 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-scwj6\" (UniqueName: \"kubernetes.io/projected/df438641-7c78-448e-8583-2f95268d238e-kube-api-access-scwj6\") pod \"redhat-operators-rjzth\" (UID: \"df438641-7c78-448e-8583-2f95268d238e\") " pod="openshift-marketplace/redhat-operators-rjzth" Mar 10 19:13:17 crc kubenswrapper[4861]: I0310 19:13:17.736290 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/df438641-7c78-448e-8583-2f95268d238e-utilities\") pod \"redhat-operators-rjzth\" (UID: \"df438641-7c78-448e-8583-2f95268d238e\") " pod="openshift-marketplace/redhat-operators-rjzth" Mar 10 19:13:17 crc kubenswrapper[4861]: I0310 19:13:17.736349 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/df438641-7c78-448e-8583-2f95268d238e-catalog-content\") pod \"redhat-operators-rjzth\" (UID: \"df438641-7c78-448e-8583-2f95268d238e\") " pod="openshift-marketplace/redhat-operators-rjzth" Mar 10 19:13:17 crc kubenswrapper[4861]: I0310 19:13:17.736921 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/df438641-7c78-448e-8583-2f95268d238e-utilities\") pod \"redhat-operators-rjzth\" (UID: \"df438641-7c78-448e-8583-2f95268d238e\") " pod="openshift-marketplace/redhat-operators-rjzth" Mar 10 19:13:17 crc kubenswrapper[4861]: I0310 19:13:17.736993 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/df438641-7c78-448e-8583-2f95268d238e-catalog-content\") pod \"redhat-operators-rjzth\" (UID: \"df438641-7c78-448e-8583-2f95268d238e\") " pod="openshift-marketplace/redhat-operators-rjzth" Mar 10 19:13:17 crc kubenswrapper[4861]: I0310 19:13:17.767333 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-scwj6\" 
(UniqueName: \"kubernetes.io/projected/df438641-7c78-448e-8583-2f95268d238e-kube-api-access-scwj6\") pod \"redhat-operators-rjzth\" (UID: \"df438641-7c78-448e-8583-2f95268d238e\") " pod="openshift-marketplace/redhat-operators-rjzth" Mar 10 19:13:17 crc kubenswrapper[4861]: I0310 19:13:17.822637 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"ca9800e2-ceed-4197-90f1-97d14c918e45","Type":"ContainerStarted","Data":"2979a50f5f190d0793c2fc18e07aa06ac371e089c3304abe6dc08d185e172174"} Mar 10 19:13:17 crc kubenswrapper[4861]: I0310 19:13:17.822725 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"ca9800e2-ceed-4197-90f1-97d14c918e45","Type":"ContainerStarted","Data":"6d5f161e1ebe942db8f22888f7d4ee605ae096c419576acb0dffd5c4a5831534"} Mar 10 19:13:17 crc kubenswrapper[4861]: I0310 19:13:17.847621 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.847602824 podStartE2EDuration="2.847602824s" podCreationTimestamp="2026-03-10 19:13:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-10 19:13:17.845215907 +0000 UTC m=+1541.608651907" watchObservedRunningTime="2026-03-10 19:13:17.847602824 +0000 UTC m=+1541.611038794" Mar 10 19:13:18 crc kubenswrapper[4861]: I0310 19:13:18.002929 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-rjzth" Mar 10 19:13:18 crc kubenswrapper[4861]: I0310 19:13:18.487342 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-rjzth"] Mar 10 19:13:18 crc kubenswrapper[4861]: I0310 19:13:18.832627 4861 generic.go:334] "Generic (PLEG): container finished" podID="df438641-7c78-448e-8583-2f95268d238e" containerID="e5934546f7451d1f3f21dd2a839031a021e92a9948fbb99ffc29bf35965a9078" exitCode=0 Mar 10 19:13:18 crc kubenswrapper[4861]: I0310 19:13:18.832681 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rjzth" event={"ID":"df438641-7c78-448e-8583-2f95268d238e","Type":"ContainerDied","Data":"e5934546f7451d1f3f21dd2a839031a021e92a9948fbb99ffc29bf35965a9078"} Mar 10 19:13:18 crc kubenswrapper[4861]: I0310 19:13:18.832976 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rjzth" event={"ID":"df438641-7c78-448e-8583-2f95268d238e","Type":"ContainerStarted","Data":"aa2f3be8b0b31eb7f8ceff58bf478aba4ab2d727fc25c6c69094ce1df90d865e"} Mar 10 19:13:19 crc kubenswrapper[4861]: I0310 19:13:19.848026 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rjzth" event={"ID":"df438641-7c78-448e-8583-2f95268d238e","Type":"ContainerStarted","Data":"fb4bfe785b4400ed799161e359fe0ef4becfc57f01dd8b3eca2484253ad3dad0"} Mar 10 19:13:20 crc kubenswrapper[4861]: I0310 19:13:20.205691 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Mar 10 19:13:20 crc kubenswrapper[4861]: I0310 19:13:20.876677 4861 generic.go:334] "Generic (PLEG): container finished" podID="df438641-7c78-448e-8583-2f95268d238e" containerID="fb4bfe785b4400ed799161e359fe0ef4becfc57f01dd8b3eca2484253ad3dad0" exitCode=0 Mar 10 19:13:20 crc kubenswrapper[4861]: I0310 19:13:20.876812 4861 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/redhat-operators-rjzth" event={"ID":"df438641-7c78-448e-8583-2f95268d238e","Type":"ContainerDied","Data":"fb4bfe785b4400ed799161e359fe0ef4becfc57f01dd8b3eca2484253ad3dad0"} Mar 10 19:13:21 crc kubenswrapper[4861]: I0310 19:13:21.201833 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Mar 10 19:13:21 crc kubenswrapper[4861]: I0310 19:13:21.203204 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Mar 10 19:13:21 crc kubenswrapper[4861]: I0310 19:13:21.895021 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rjzth" event={"ID":"df438641-7c78-448e-8583-2f95268d238e","Type":"ContainerStarted","Data":"17ebf9b9436a75e329e8ba0f7f90076487089c7b2ef8ba3d16570dcb27995524"} Mar 10 19:13:21 crc kubenswrapper[4861]: I0310 19:13:21.921101 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-rjzth" podStartSLOduration=2.40815275 podStartE2EDuration="4.921070327s" podCreationTimestamp="2026-03-10 19:13:17 +0000 UTC" firstStartedPulling="2026-03-10 19:13:18.834240898 +0000 UTC m=+1542.597676858" lastFinishedPulling="2026-03-10 19:13:21.347158445 +0000 UTC m=+1545.110594435" observedRunningTime="2026-03-10 19:13:21.910926508 +0000 UTC m=+1545.674362498" watchObservedRunningTime="2026-03-10 19:13:21.921070327 +0000 UTC m=+1545.684506327" Mar 10 19:13:21 crc kubenswrapper[4861]: I0310 19:13:21.992180 4861 patch_prober.go:28] interesting pod/machine-config-daemon-qttbr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 10 19:13:21 crc kubenswrapper[4861]: I0310 19:13:21.992589 4861 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-qttbr" podUID="771189c2-452d-4204-a0b7-abfe9ba62bd0" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 10 19:13:23 crc kubenswrapper[4861]: I0310 19:13:23.215034 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Mar 10 19:13:23 crc kubenswrapper[4861]: I0310 19:13:23.215088 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Mar 10 19:13:24 crc kubenswrapper[4861]: I0310 19:13:24.230879 4861 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="930df8f4-7ebf-4425-976f-4f52654586bb" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.214:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 10 19:13:24 crc kubenswrapper[4861]: I0310 19:13:24.230925 4861 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="930df8f4-7ebf-4425-976f-4f52654586bb" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.214:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 10 19:13:25 crc kubenswrapper[4861]: I0310 19:13:25.205602 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Mar 10 19:13:25 crc kubenswrapper[4861]: I0310 19:13:25.249604 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Mar 10 19:13:25 crc kubenswrapper[4861]: I0310 19:13:25.980861 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Mar 10 19:13:26 crc kubenswrapper[4861]: I0310 19:13:26.200598 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openstack/nova-metadata-0" Mar 10 19:13:26 crc kubenswrapper[4861]: I0310 19:13:26.201598 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Mar 10 19:13:27 crc kubenswrapper[4861]: I0310 19:13:27.212150 4861 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="ca9800e2-ceed-4197-90f1-97d14c918e45" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.216:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 10 19:13:27 crc kubenswrapper[4861]: I0310 19:13:27.212272 4861 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="ca9800e2-ceed-4197-90f1-97d14c918e45" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.216:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 10 19:13:28 crc kubenswrapper[4861]: I0310 19:13:28.004015 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-rjzth" Mar 10 19:13:28 crc kubenswrapper[4861]: I0310 19:13:28.004316 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-rjzth" Mar 10 19:13:29 crc kubenswrapper[4861]: I0310 19:13:29.072967 4861 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-rjzth" podUID="df438641-7c78-448e-8583-2f95268d238e" containerName="registry-server" probeResult="failure" output=< Mar 10 19:13:29 crc kubenswrapper[4861]: timeout: failed to connect service ":50051" within 1s Mar 10 19:13:29 crc kubenswrapper[4861]: > Mar 10 19:13:32 crc kubenswrapper[4861]: I0310 19:13:32.141400 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Mar 10 19:13:33 crc kubenswrapper[4861]: I0310 19:13:33.224648 4861 kubelet.go:2542] 
"SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Mar 10 19:13:33 crc kubenswrapper[4861]: I0310 19:13:33.224817 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Mar 10 19:13:33 crc kubenswrapper[4861]: I0310 19:13:33.225511 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Mar 10 19:13:33 crc kubenswrapper[4861]: I0310 19:13:33.225581 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Mar 10 19:13:33 crc kubenswrapper[4861]: I0310 19:13:33.232072 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Mar 10 19:13:33 crc kubenswrapper[4861]: I0310 19:13:33.234253 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Mar 10 19:13:36 crc kubenswrapper[4861]: I0310 19:13:36.208590 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Mar 10 19:13:36 crc kubenswrapper[4861]: I0310 19:13:36.211074 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Mar 10 19:13:36 crc kubenswrapper[4861]: I0310 19:13:36.221171 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Mar 10 19:13:37 crc kubenswrapper[4861]: I0310 19:13:37.093141 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Mar 10 19:13:38 crc kubenswrapper[4861]: I0310 19:13:38.097843 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-rjzth" Mar 10 19:13:38 crc kubenswrapper[4861]: I0310 19:13:38.188373 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-rjzth" Mar 10 19:13:38 crc kubenswrapper[4861]: I0310 
19:13:38.349123 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-rjzth"] Mar 10 19:13:40 crc kubenswrapper[4861]: I0310 19:13:40.126606 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-rjzth" podUID="df438641-7c78-448e-8583-2f95268d238e" containerName="registry-server" containerID="cri-o://17ebf9b9436a75e329e8ba0f7f90076487089c7b2ef8ba3d16570dcb27995524" gracePeriod=2 Mar 10 19:13:40 crc kubenswrapper[4861]: I0310 19:13:40.778005 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-rjzth" Mar 10 19:13:40 crc kubenswrapper[4861]: I0310 19:13:40.841595 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-scwj6\" (UniqueName: \"kubernetes.io/projected/df438641-7c78-448e-8583-2f95268d238e-kube-api-access-scwj6\") pod \"df438641-7c78-448e-8583-2f95268d238e\" (UID: \"df438641-7c78-448e-8583-2f95268d238e\") " Mar 10 19:13:40 crc kubenswrapper[4861]: I0310 19:13:40.842094 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/df438641-7c78-448e-8583-2f95268d238e-catalog-content\") pod \"df438641-7c78-448e-8583-2f95268d238e\" (UID: \"df438641-7c78-448e-8583-2f95268d238e\") " Mar 10 19:13:40 crc kubenswrapper[4861]: I0310 19:13:40.842187 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/df438641-7c78-448e-8583-2f95268d238e-utilities\") pod \"df438641-7c78-448e-8583-2f95268d238e\" (UID: \"df438641-7c78-448e-8583-2f95268d238e\") " Mar 10 19:13:40 crc kubenswrapper[4861]: I0310 19:13:40.842967 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/df438641-7c78-448e-8583-2f95268d238e-utilities" (OuterVolumeSpecName: 
"utilities") pod "df438641-7c78-448e-8583-2f95268d238e" (UID: "df438641-7c78-448e-8583-2f95268d238e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 10 19:13:40 crc kubenswrapper[4861]: I0310 19:13:40.848645 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/df438641-7c78-448e-8583-2f95268d238e-kube-api-access-scwj6" (OuterVolumeSpecName: "kube-api-access-scwj6") pod "df438641-7c78-448e-8583-2f95268d238e" (UID: "df438641-7c78-448e-8583-2f95268d238e"). InnerVolumeSpecName "kube-api-access-scwj6". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 10 19:13:40 crc kubenswrapper[4861]: I0310 19:13:40.944873 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-scwj6\" (UniqueName: \"kubernetes.io/projected/df438641-7c78-448e-8583-2f95268d238e-kube-api-access-scwj6\") on node \"crc\" DevicePath \"\"" Mar 10 19:13:40 crc kubenswrapper[4861]: I0310 19:13:40.944914 4861 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/df438641-7c78-448e-8583-2f95268d238e-utilities\") on node \"crc\" DevicePath \"\"" Mar 10 19:13:40 crc kubenswrapper[4861]: I0310 19:13:40.993315 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/df438641-7c78-448e-8583-2f95268d238e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "df438641-7c78-448e-8583-2f95268d238e" (UID: "df438641-7c78-448e-8583-2f95268d238e"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 10 19:13:41 crc kubenswrapper[4861]: I0310 19:13:41.049195 4861 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/df438641-7c78-448e-8583-2f95268d238e-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 10 19:13:41 crc kubenswrapper[4861]: I0310 19:13:41.146058 4861 generic.go:334] "Generic (PLEG): container finished" podID="df438641-7c78-448e-8583-2f95268d238e" containerID="17ebf9b9436a75e329e8ba0f7f90076487089c7b2ef8ba3d16570dcb27995524" exitCode=0 Mar 10 19:13:41 crc kubenswrapper[4861]: I0310 19:13:41.146117 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rjzth" event={"ID":"df438641-7c78-448e-8583-2f95268d238e","Type":"ContainerDied","Data":"17ebf9b9436a75e329e8ba0f7f90076487089c7b2ef8ba3d16570dcb27995524"} Mar 10 19:13:41 crc kubenswrapper[4861]: I0310 19:13:41.146147 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-rjzth" Mar 10 19:13:41 crc kubenswrapper[4861]: I0310 19:13:41.146178 4861 scope.go:117] "RemoveContainer" containerID="17ebf9b9436a75e329e8ba0f7f90076487089c7b2ef8ba3d16570dcb27995524" Mar 10 19:13:41 crc kubenswrapper[4861]: I0310 19:13:41.146158 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rjzth" event={"ID":"df438641-7c78-448e-8583-2f95268d238e","Type":"ContainerDied","Data":"aa2f3be8b0b31eb7f8ceff58bf478aba4ab2d727fc25c6c69094ce1df90d865e"} Mar 10 19:13:41 crc kubenswrapper[4861]: I0310 19:13:41.189747 4861 scope.go:117] "RemoveContainer" containerID="fb4bfe785b4400ed799161e359fe0ef4becfc57f01dd8b3eca2484253ad3dad0" Mar 10 19:13:41 crc kubenswrapper[4861]: I0310 19:13:41.204916 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-rjzth"] Mar 10 19:13:41 crc kubenswrapper[4861]: I0310 19:13:41.219508 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-rjzth"] Mar 10 19:13:41 crc kubenswrapper[4861]: I0310 19:13:41.224690 4861 scope.go:117] "RemoveContainer" containerID="e5934546f7451d1f3f21dd2a839031a021e92a9948fbb99ffc29bf35965a9078" Mar 10 19:13:41 crc kubenswrapper[4861]: I0310 19:13:41.275119 4861 scope.go:117] "RemoveContainer" containerID="17ebf9b9436a75e329e8ba0f7f90076487089c7b2ef8ba3d16570dcb27995524" Mar 10 19:13:41 crc kubenswrapper[4861]: E0310 19:13:41.275649 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"17ebf9b9436a75e329e8ba0f7f90076487089c7b2ef8ba3d16570dcb27995524\": container with ID starting with 17ebf9b9436a75e329e8ba0f7f90076487089c7b2ef8ba3d16570dcb27995524 not found: ID does not exist" containerID="17ebf9b9436a75e329e8ba0f7f90076487089c7b2ef8ba3d16570dcb27995524" Mar 10 19:13:41 crc kubenswrapper[4861]: I0310 19:13:41.275686 4861 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"17ebf9b9436a75e329e8ba0f7f90076487089c7b2ef8ba3d16570dcb27995524"} err="failed to get container status \"17ebf9b9436a75e329e8ba0f7f90076487089c7b2ef8ba3d16570dcb27995524\": rpc error: code = NotFound desc = could not find container \"17ebf9b9436a75e329e8ba0f7f90076487089c7b2ef8ba3d16570dcb27995524\": container with ID starting with 17ebf9b9436a75e329e8ba0f7f90076487089c7b2ef8ba3d16570dcb27995524 not found: ID does not exist" Mar 10 19:13:41 crc kubenswrapper[4861]: I0310 19:13:41.275732 4861 scope.go:117] "RemoveContainer" containerID="fb4bfe785b4400ed799161e359fe0ef4becfc57f01dd8b3eca2484253ad3dad0" Mar 10 19:13:41 crc kubenswrapper[4861]: E0310 19:13:41.276255 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fb4bfe785b4400ed799161e359fe0ef4becfc57f01dd8b3eca2484253ad3dad0\": container with ID starting with fb4bfe785b4400ed799161e359fe0ef4becfc57f01dd8b3eca2484253ad3dad0 not found: ID does not exist" containerID="fb4bfe785b4400ed799161e359fe0ef4becfc57f01dd8b3eca2484253ad3dad0" Mar 10 19:13:41 crc kubenswrapper[4861]: I0310 19:13:41.276299 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fb4bfe785b4400ed799161e359fe0ef4becfc57f01dd8b3eca2484253ad3dad0"} err="failed to get container status \"fb4bfe785b4400ed799161e359fe0ef4becfc57f01dd8b3eca2484253ad3dad0\": rpc error: code = NotFound desc = could not find container \"fb4bfe785b4400ed799161e359fe0ef4becfc57f01dd8b3eca2484253ad3dad0\": container with ID starting with fb4bfe785b4400ed799161e359fe0ef4becfc57f01dd8b3eca2484253ad3dad0 not found: ID does not exist" Mar 10 19:13:41 crc kubenswrapper[4861]: I0310 19:13:41.276323 4861 scope.go:117] "RemoveContainer" containerID="e5934546f7451d1f3f21dd2a839031a021e92a9948fbb99ffc29bf35965a9078" Mar 10 19:13:41 crc kubenswrapper[4861]: E0310 
19:13:41.276782 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e5934546f7451d1f3f21dd2a839031a021e92a9948fbb99ffc29bf35965a9078\": container with ID starting with e5934546f7451d1f3f21dd2a839031a021e92a9948fbb99ffc29bf35965a9078 not found: ID does not exist" containerID="e5934546f7451d1f3f21dd2a839031a021e92a9948fbb99ffc29bf35965a9078" Mar 10 19:13:41 crc kubenswrapper[4861]: I0310 19:13:41.276848 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e5934546f7451d1f3f21dd2a839031a021e92a9948fbb99ffc29bf35965a9078"} err="failed to get container status \"e5934546f7451d1f3f21dd2a839031a021e92a9948fbb99ffc29bf35965a9078\": rpc error: code = NotFound desc = could not find container \"e5934546f7451d1f3f21dd2a839031a021e92a9948fbb99ffc29bf35965a9078\": container with ID starting with e5934546f7451d1f3f21dd2a839031a021e92a9948fbb99ffc29bf35965a9078 not found: ID does not exist" Mar 10 19:13:42 crc kubenswrapper[4861]: I0310 19:13:42.978781 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="df438641-7c78-448e-8583-2f95268d238e" path="/var/lib/kubelet/pods/df438641-7c78-448e-8583-2f95268d238e/volumes" Mar 10 19:13:51 crc kubenswrapper[4861]: I0310 19:13:51.991995 4861 patch_prober.go:28] interesting pod/machine-config-daemon-qttbr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 10 19:13:51 crc kubenswrapper[4861]: I0310 19:13:51.992802 4861 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qttbr" podUID="771189c2-452d-4204-a0b7-abfe9ba62bd0" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection 
refused" Mar 10 19:13:51 crc kubenswrapper[4861]: I0310 19:13:51.992863 4861 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-qttbr" Mar 10 19:13:51 crc kubenswrapper[4861]: I0310 19:13:51.994145 4861 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"af91ea5d3fbd1ec239d0d9d5246031cccff13cb031bdcbb0edc5d3cf4aa77e7d"} pod="openshift-machine-config-operator/machine-config-daemon-qttbr" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 10 19:13:51 crc kubenswrapper[4861]: I0310 19:13:51.994257 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-qttbr" podUID="771189c2-452d-4204-a0b7-abfe9ba62bd0" containerName="machine-config-daemon" containerID="cri-o://af91ea5d3fbd1ec239d0d9d5246031cccff13cb031bdcbb0edc5d3cf4aa77e7d" gracePeriod=600 Mar 10 19:13:52 crc kubenswrapper[4861]: I0310 19:13:52.308203 4861 generic.go:334] "Generic (PLEG): container finished" podID="771189c2-452d-4204-a0b7-abfe9ba62bd0" containerID="af91ea5d3fbd1ec239d0d9d5246031cccff13cb031bdcbb0edc5d3cf4aa77e7d" exitCode=0 Mar 10 19:13:52 crc kubenswrapper[4861]: I0310 19:13:52.308255 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qttbr" event={"ID":"771189c2-452d-4204-a0b7-abfe9ba62bd0","Type":"ContainerDied","Data":"af91ea5d3fbd1ec239d0d9d5246031cccff13cb031bdcbb0edc5d3cf4aa77e7d"} Mar 10 19:13:52 crc kubenswrapper[4861]: I0310 19:13:52.308292 4861 scope.go:117] "RemoveContainer" containerID="c5cf53ff0c1076e7b20b64dca8f896382ec5b206e350d4b3aabaf2ac26200351" Mar 10 19:13:53 crc kubenswrapper[4861]: I0310 19:13:53.328083 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qttbr" 
event={"ID":"771189c2-452d-4204-a0b7-abfe9ba62bd0","Type":"ContainerStarted","Data":"bb8a532e73b13a25b9b100c2f6a1c525fcee26bf3c9ef709c2f4ca6e8ba75c53"} Mar 10 19:13:56 crc kubenswrapper[4861]: I0310 19:13:56.275313 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Mar 10 19:13:56 crc kubenswrapper[4861]: I0310 19:13:56.457760 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-e6bf-account-create-update-rgnpm"] Mar 10 19:13:56 crc kubenswrapper[4861]: E0310 19:13:56.469522 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="df438641-7c78-448e-8583-2f95268d238e" containerName="registry-server" Mar 10 19:13:56 crc kubenswrapper[4861]: I0310 19:13:56.469562 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="df438641-7c78-448e-8583-2f95268d238e" containerName="registry-server" Mar 10 19:13:56 crc kubenswrapper[4861]: E0310 19:13:56.469609 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="df438641-7c78-448e-8583-2f95268d238e" containerName="extract-content" Mar 10 19:13:56 crc kubenswrapper[4861]: I0310 19:13:56.469617 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="df438641-7c78-448e-8583-2f95268d238e" containerName="extract-content" Mar 10 19:13:56 crc kubenswrapper[4861]: E0310 19:13:56.469625 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="df438641-7c78-448e-8583-2f95268d238e" containerName="extract-utilities" Mar 10 19:13:56 crc kubenswrapper[4861]: I0310 19:13:56.469631 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="df438641-7c78-448e-8583-2f95268d238e" containerName="extract-utilities" Mar 10 19:13:56 crc kubenswrapper[4861]: I0310 19:13:56.469894 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="df438641-7c78-448e-8583-2f95268d238e" containerName="registry-server" Mar 10 19:13:56 crc kubenswrapper[4861]: I0310 19:13:56.470546 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-e6bf-account-create-update-rgnpm" Mar 10 19:13:56 crc kubenswrapper[4861]: I0310 19:13:56.479817 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-lrz47"] Mar 10 19:13:56 crc kubenswrapper[4861]: I0310 19:13:56.481083 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-lrz47" Mar 10 19:13:56 crc kubenswrapper[4861]: I0310 19:13:56.489019 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret" Mar 10 19:13:56 crc kubenswrapper[4861]: I0310 19:13:56.507310 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-mariadb-root-db-secret" Mar 10 19:13:56 crc kubenswrapper[4861]: I0310 19:13:56.507960 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-e6bf-account-create-update-rgnpm"] Mar 10 19:13:56 crc kubenswrapper[4861]: I0310 19:13:56.533761 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-52e5-account-create-update-q8wl5"] Mar 10 19:13:56 crc kubenswrapper[4861]: I0310 19:13:56.534964 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-52e5-account-create-update-q8wl5" Mar 10 19:13:56 crc kubenswrapper[4861]: I0310 19:13:56.544974 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret" Mar 10 19:13:56 crc kubenswrapper[4861]: I0310 19:13:56.555795 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-lrz47"] Mar 10 19:13:56 crc kubenswrapper[4861]: I0310 19:13:56.591230 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-52e5-account-create-update-q8wl5"] Mar 10 19:13:56 crc kubenswrapper[4861]: I0310 19:13:56.622377 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/87dcb8b8-2ebd-44e9-a15f-e995495d8b32-operator-scripts\") pod \"barbican-e6bf-account-create-update-rgnpm\" (UID: \"87dcb8b8-2ebd-44e9-a15f-e995495d8b32\") " pod="openstack/barbican-e6bf-account-create-update-rgnpm" Mar 10 19:13:56 crc kubenswrapper[4861]: I0310 19:13:56.622432 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ee15a5af-fb3a-45fc-bfa1-b9eb45418a32-operator-scripts\") pod \"root-account-create-update-lrz47\" (UID: \"ee15a5af-fb3a-45fc-bfa1-b9eb45418a32\") " pod="openstack/root-account-create-update-lrz47" Mar 10 19:13:56 crc kubenswrapper[4861]: I0310 19:13:56.622449 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mq4bp\" (UniqueName: \"kubernetes.io/projected/ee15a5af-fb3a-45fc-bfa1-b9eb45418a32-kube-api-access-mq4bp\") pod \"root-account-create-update-lrz47\" (UID: \"ee15a5af-fb3a-45fc-bfa1-b9eb45418a32\") " pod="openstack/root-account-create-update-lrz47" Mar 10 19:13:56 crc kubenswrapper[4861]: I0310 19:13:56.622480 4861 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1d0a3228-ad14-4cd8-8c5a-969bad245ecf-operator-scripts\") pod \"nova-api-52e5-account-create-update-q8wl5\" (UID: \"1d0a3228-ad14-4cd8-8c5a-969bad245ecf\") " pod="openstack/nova-api-52e5-account-create-update-q8wl5" Mar 10 19:13:56 crc kubenswrapper[4861]: I0310 19:13:56.622531 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-svk56\" (UniqueName: \"kubernetes.io/projected/87dcb8b8-2ebd-44e9-a15f-e995495d8b32-kube-api-access-svk56\") pod \"barbican-e6bf-account-create-update-rgnpm\" (UID: \"87dcb8b8-2ebd-44e9-a15f-e995495d8b32\") " pod="openstack/barbican-e6bf-account-create-update-rgnpm" Mar 10 19:13:56 crc kubenswrapper[4861]: I0310 19:13:56.622573 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6vs7x\" (UniqueName: \"kubernetes.io/projected/1d0a3228-ad14-4cd8-8c5a-969bad245ecf-kube-api-access-6vs7x\") pod \"nova-api-52e5-account-create-update-q8wl5\" (UID: \"1d0a3228-ad14-4cd8-8c5a-969bad245ecf\") " pod="openstack/nova-api-52e5-account-create-update-q8wl5" Mar 10 19:13:56 crc kubenswrapper[4861]: I0310 19:13:56.661822 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-52e5-account-create-update-h7sd5"] Mar 10 19:13:56 crc kubenswrapper[4861]: I0310 19:13:56.678414 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-e6bf-account-create-update-4flvn"] Mar 10 19:13:56 crc kubenswrapper[4861]: I0310 19:13:56.705687 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cf7a-account-create-update-wf5cp"] Mar 10 19:13:56 crc kubenswrapper[4861]: I0310 19:13:56.706825 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cf7a-account-create-update-wf5cp" Mar 10 19:13:56 crc kubenswrapper[4861]: I0310 19:13:56.724319 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret" Mar 10 19:13:56 crc kubenswrapper[4861]: I0310 19:13:56.724735 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/87dcb8b8-2ebd-44e9-a15f-e995495d8b32-operator-scripts\") pod \"barbican-e6bf-account-create-update-rgnpm\" (UID: \"87dcb8b8-2ebd-44e9-a15f-e995495d8b32\") " pod="openstack/barbican-e6bf-account-create-update-rgnpm" Mar 10 19:13:56 crc kubenswrapper[4861]: I0310 19:13:56.724798 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ee15a5af-fb3a-45fc-bfa1-b9eb45418a32-operator-scripts\") pod \"root-account-create-update-lrz47\" (UID: \"ee15a5af-fb3a-45fc-bfa1-b9eb45418a32\") " pod="openstack/root-account-create-update-lrz47" Mar 10 19:13:56 crc kubenswrapper[4861]: I0310 19:13:56.724818 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mq4bp\" (UniqueName: \"kubernetes.io/projected/ee15a5af-fb3a-45fc-bfa1-b9eb45418a32-kube-api-access-mq4bp\") pod \"root-account-create-update-lrz47\" (UID: \"ee15a5af-fb3a-45fc-bfa1-b9eb45418a32\") " pod="openstack/root-account-create-update-lrz47" Mar 10 19:13:56 crc kubenswrapper[4861]: I0310 19:13:56.724855 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1d0a3228-ad14-4cd8-8c5a-969bad245ecf-operator-scripts\") pod \"nova-api-52e5-account-create-update-q8wl5\" (UID: \"1d0a3228-ad14-4cd8-8c5a-969bad245ecf\") " pod="openstack/nova-api-52e5-account-create-update-q8wl5" Mar 10 19:13:56 crc kubenswrapper[4861]: I0310 19:13:56.724917 4861 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-svk56\" (UniqueName: \"kubernetes.io/projected/87dcb8b8-2ebd-44e9-a15f-e995495d8b32-kube-api-access-svk56\") pod \"barbican-e6bf-account-create-update-rgnpm\" (UID: \"87dcb8b8-2ebd-44e9-a15f-e995495d8b32\") " pod="openstack/barbican-e6bf-account-create-update-rgnpm" Mar 10 19:13:56 crc kubenswrapper[4861]: I0310 19:13:56.725879 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-e6bf-account-create-update-4flvn"] Mar 10 19:13:56 crc kubenswrapper[4861]: I0310 19:13:56.733744 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6vs7x\" (UniqueName: \"kubernetes.io/projected/1d0a3228-ad14-4cd8-8c5a-969bad245ecf-kube-api-access-6vs7x\") pod \"nova-api-52e5-account-create-update-q8wl5\" (UID: \"1d0a3228-ad14-4cd8-8c5a-969bad245ecf\") " pod="openstack/nova-api-52e5-account-create-update-q8wl5" Mar 10 19:13:56 crc kubenswrapper[4861]: I0310 19:13:56.736847 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ee15a5af-fb3a-45fc-bfa1-b9eb45418a32-operator-scripts\") pod \"root-account-create-update-lrz47\" (UID: \"ee15a5af-fb3a-45fc-bfa1-b9eb45418a32\") " pod="openstack/root-account-create-update-lrz47" Mar 10 19:13:56 crc kubenswrapper[4861]: I0310 19:13:56.737469 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/87dcb8b8-2ebd-44e9-a15f-e995495d8b32-operator-scripts\") pod \"barbican-e6bf-account-create-update-rgnpm\" (UID: \"87dcb8b8-2ebd-44e9-a15f-e995495d8b32\") " pod="openstack/barbican-e6bf-account-create-update-rgnpm" Mar 10 19:13:56 crc kubenswrapper[4861]: I0310 19:13:56.738008 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1d0a3228-ad14-4cd8-8c5a-969bad245ecf-operator-scripts\") 
pod \"nova-api-52e5-account-create-update-q8wl5\" (UID: \"1d0a3228-ad14-4cd8-8c5a-969bad245ecf\") " pod="openstack/nova-api-52e5-account-create-update-q8wl5" Mar 10 19:13:56 crc kubenswrapper[4861]: I0310 19:13:56.743773 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-52e5-account-create-update-h7sd5"] Mar 10 19:13:56 crc kubenswrapper[4861]: I0310 19:13:56.781729 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cf7a-account-create-update-wf5cp"] Mar 10 19:13:56 crc kubenswrapper[4861]: I0310 19:13:56.799662 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-822e-account-create-update-rcx69"] Mar 10 19:13:56 crc kubenswrapper[4861]: I0310 19:13:56.807759 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-svk56\" (UniqueName: \"kubernetes.io/projected/87dcb8b8-2ebd-44e9-a15f-e995495d8b32-kube-api-access-svk56\") pod \"barbican-e6bf-account-create-update-rgnpm\" (UID: \"87dcb8b8-2ebd-44e9-a15f-e995495d8b32\") " pod="openstack/barbican-e6bf-account-create-update-rgnpm" Mar 10 19:13:56 crc kubenswrapper[4861]: I0310 19:13:56.818181 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-822e-account-create-update-rcx69"] Mar 10 19:13:56 crc kubenswrapper[4861]: I0310 19:13:56.824536 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6vs7x\" (UniqueName: \"kubernetes.io/projected/1d0a3228-ad14-4cd8-8c5a-969bad245ecf-kube-api-access-6vs7x\") pod \"nova-api-52e5-account-create-update-q8wl5\" (UID: \"1d0a3228-ad14-4cd8-8c5a-969bad245ecf\") " pod="openstack/nova-api-52e5-account-create-update-q8wl5" Mar 10 19:13:56 crc kubenswrapper[4861]: I0310 19:13:56.826288 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mq4bp\" (UniqueName: \"kubernetes.io/projected/ee15a5af-fb3a-45fc-bfa1-b9eb45418a32-kube-api-access-mq4bp\") pod 
\"root-account-create-update-lrz47\" (UID: \"ee15a5af-fb3a-45fc-bfa1-b9eb45418a32\") " pod="openstack/root-account-create-update-lrz47" Mar 10 19:13:56 crc kubenswrapper[4861]: I0310 19:13:56.829512 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-f866-account-create-update-fl8b6"] Mar 10 19:13:56 crc kubenswrapper[4861]: I0310 19:13:56.830741 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-f866-account-create-update-fl8b6" Mar 10 19:13:56 crc kubenswrapper[4861]: I0310 19:13:56.839035 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b7c0d25e-1650-4ce9-8cb7-71d8d9dceba1-operator-scripts\") pod \"nova-cell0-cf7a-account-create-update-wf5cp\" (UID: \"b7c0d25e-1650-4ce9-8cb7-71d8d9dceba1\") " pod="openstack/nova-cell0-cf7a-account-create-update-wf5cp" Mar 10 19:13:56 crc kubenswrapper[4861]: I0310 19:13:56.839109 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9m9bt\" (UniqueName: \"kubernetes.io/projected/b7c0d25e-1650-4ce9-8cb7-71d8d9dceba1-kube-api-access-9m9bt\") pod \"nova-cell0-cf7a-account-create-update-wf5cp\" (UID: \"b7c0d25e-1650-4ce9-8cb7-71d8d9dceba1\") " pod="openstack/nova-cell0-cf7a-account-create-update-wf5cp" Mar 10 19:13:56 crc kubenswrapper[4861]: I0310 19:13:56.844016 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret" Mar 10 19:13:56 crc kubenswrapper[4861]: I0310 19:13:56.850940 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-e6bf-account-create-update-rgnpm" Mar 10 19:13:56 crc kubenswrapper[4861]: I0310 19:13:56.851397 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-lrz47" Mar 10 19:13:56 crc kubenswrapper[4861]: I0310 19:13:56.851635 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstackclient"] Mar 10 19:13:56 crc kubenswrapper[4861]: I0310 19:13:56.851801 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/openstackclient" podUID="9007f85d-dd41-49ed-9a6f-c2b09b26fad2" containerName="openstackclient" containerID="cri-o://94acf945686639ad254fe41f715e4c432f0b2ea1aeb6417fba3df413dac91d1b" gracePeriod=2 Mar 10 19:13:56 crc kubenswrapper[4861]: I0310 19:13:56.882880 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/openstackclient"] Mar 10 19:13:56 crc kubenswrapper[4861]: I0310 19:13:56.890456 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-f866-account-create-update-fl8b6"] Mar 10 19:13:56 crc kubenswrapper[4861]: I0310 19:13:56.902673 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-52e5-account-create-update-q8wl5" Mar 10 19:13:56 crc kubenswrapper[4861]: I0310 19:13:56.940770 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-m956k"] Mar 10 19:13:56 crc kubenswrapper[4861]: I0310 19:13:56.954895 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-m956k"] Mar 10 19:13:56 crc kubenswrapper[4861]: I0310 19:13:56.956807 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9m9bt\" (UniqueName: \"kubernetes.io/projected/b7c0d25e-1650-4ce9-8cb7-71d8d9dceba1-kube-api-access-9m9bt\") pod \"nova-cell0-cf7a-account-create-update-wf5cp\" (UID: \"b7c0d25e-1650-4ce9-8cb7-71d8d9dceba1\") " pod="openstack/nova-cell0-cf7a-account-create-update-wf5cp" Mar 10 19:13:56 crc kubenswrapper[4861]: I0310 19:13:56.957244 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dbb40fdc-8e7e-4a1e-b7a0-5456081ba068-operator-scripts\") pod \"nova-cell1-f866-account-create-update-fl8b6\" (UID: \"dbb40fdc-8e7e-4a1e-b7a0-5456081ba068\") " pod="openstack/nova-cell1-f866-account-create-update-fl8b6" Mar 10 19:13:56 crc kubenswrapper[4861]: I0310 19:13:56.957270 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b7c0d25e-1650-4ce9-8cb7-71d8d9dceba1-operator-scripts\") pod \"nova-cell0-cf7a-account-create-update-wf5cp\" (UID: \"b7c0d25e-1650-4ce9-8cb7-71d8d9dceba1\") " pod="openstack/nova-cell0-cf7a-account-create-update-wf5cp" Mar 10 19:13:56 crc kubenswrapper[4861]: I0310 19:13:56.957308 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4lhkd\" (UniqueName: 
\"kubernetes.io/projected/dbb40fdc-8e7e-4a1e-b7a0-5456081ba068-kube-api-access-4lhkd\") pod \"nova-cell1-f866-account-create-update-fl8b6\" (UID: \"dbb40fdc-8e7e-4a1e-b7a0-5456081ba068\") " pod="openstack/nova-cell1-f866-account-create-update-fl8b6" Mar 10 19:13:56 crc kubenswrapper[4861]: I0310 19:13:56.958304 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b7c0d25e-1650-4ce9-8cb7-71d8d9dceba1-operator-scripts\") pod \"nova-cell0-cf7a-account-create-update-wf5cp\" (UID: \"b7c0d25e-1650-4ce9-8cb7-71d8d9dceba1\") " pod="openstack/nova-cell0-cf7a-account-create-update-wf5cp" Mar 10 19:13:57 crc kubenswrapper[4861]: I0310 19:13:57.001504 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1a8696b2-9ca9-4f41-96bd-58bcb4b74cb0" path="/var/lib/kubelet/pods/1a8696b2-9ca9-4f41-96bd-58bcb4b74cb0/volumes" Mar 10 19:13:57 crc kubenswrapper[4861]: I0310 19:13:57.002168 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6982db5b-c829-4309-897e-27fd0b3f2d6f" path="/var/lib/kubelet/pods/6982db5b-c829-4309-897e-27fd0b3f2d6f/volumes" Mar 10 19:13:57 crc kubenswrapper[4861]: I0310 19:13:57.002850 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a9ec2818-0613-4fd1-8373-8c06c0b24489" path="/var/lib/kubelet/pods/a9ec2818-0613-4fd1-8373-8c06c0b24489/volumes" Mar 10 19:13:57 crc kubenswrapper[4861]: I0310 19:13:57.003363 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="da07cfa9-9a8f-4a60-8aa5-ceba369b81d9" path="/var/lib/kubelet/pods/da07cfa9-9a8f-4a60-8aa5-ceba369b81d9/volumes" Mar 10 19:13:57 crc kubenswrapper[4861]: I0310 19:13:57.011737 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-7890-account-create-update-z8mnl"] Mar 10 19:13:57 crc kubenswrapper[4861]: E0310 19:13:57.012049 4861 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="9007f85d-dd41-49ed-9a6f-c2b09b26fad2" containerName="openstackclient" Mar 10 19:13:57 crc kubenswrapper[4861]: I0310 19:13:57.012062 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="9007f85d-dd41-49ed-9a6f-c2b09b26fad2" containerName="openstackclient" Mar 10 19:13:57 crc kubenswrapper[4861]: I0310 19:13:57.012286 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="9007f85d-dd41-49ed-9a6f-c2b09b26fad2" containerName="openstackclient" Mar 10 19:13:57 crc kubenswrapper[4861]: I0310 19:13:57.026035 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-northd-0"] Mar 10 19:13:57 crc kubenswrapper[4861]: I0310 19:13:57.026223 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-7890-account-create-update-z8mnl" Mar 10 19:13:57 crc kubenswrapper[4861]: I0310 19:13:57.026990 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovn-northd-0" podUID="8c1ba054-6941-4e52-b792-250287f25d92" containerName="ovn-northd" containerID="cri-o://454e9ba506fcf1b8f98215867245437014efc66d7c6d48c37e44b356c6a592d8" gracePeriod=30 Mar 10 19:13:57 crc kubenswrapper[4861]: I0310 19:13:57.027111 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovn-northd-0" podUID="8c1ba054-6941-4e52-b792-250287f25d92" containerName="openstack-network-exporter" containerID="cri-o://68b9f900ee27424079091d09f2809c5e2d49434a357e98e6fdeb385cd8222d86" gracePeriod=30 Mar 10 19:13:57 crc kubenswrapper[4861]: I0310 19:13:57.029258 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9m9bt\" (UniqueName: \"kubernetes.io/projected/b7c0d25e-1650-4ce9-8cb7-71d8d9dceba1-kube-api-access-9m9bt\") pod \"nova-cell0-cf7a-account-create-update-wf5cp\" (UID: \"b7c0d25e-1650-4ce9-8cb7-71d8d9dceba1\") " pod="openstack/nova-cell0-cf7a-account-create-update-wf5cp" Mar 10 19:13:57 crc kubenswrapper[4861]: I0310 
19:13:57.063570 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cf7a-account-create-update-jgz8n"] Mar 10 19:13:57 crc kubenswrapper[4861]: I0310 19:13:57.065575 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4lhkd\" (UniqueName: \"kubernetes.io/projected/dbb40fdc-8e7e-4a1e-b7a0-5456081ba068-kube-api-access-4lhkd\") pod \"nova-cell1-f866-account-create-update-fl8b6\" (UID: \"dbb40fdc-8e7e-4a1e-b7a0-5456081ba068\") " pod="openstack/nova-cell1-f866-account-create-update-fl8b6" Mar 10 19:13:57 crc kubenswrapper[4861]: I0310 19:13:57.065652 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8p5tq\" (UniqueName: \"kubernetes.io/projected/da655568-6f44-40e6-af1d-278b701fc52e-kube-api-access-8p5tq\") pod \"neutron-7890-account-create-update-z8mnl\" (UID: \"da655568-6f44-40e6-af1d-278b701fc52e\") " pod="openstack/neutron-7890-account-create-update-z8mnl" Mar 10 19:13:57 crc kubenswrapper[4861]: I0310 19:13:57.065765 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/da655568-6f44-40e6-af1d-278b701fc52e-operator-scripts\") pod \"neutron-7890-account-create-update-z8mnl\" (UID: \"da655568-6f44-40e6-af1d-278b701fc52e\") " pod="openstack/neutron-7890-account-create-update-z8mnl" Mar 10 19:13:57 crc kubenswrapper[4861]: I0310 19:13:57.065836 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dbb40fdc-8e7e-4a1e-b7a0-5456081ba068-operator-scripts\") pod \"nova-cell1-f866-account-create-update-fl8b6\" (UID: \"dbb40fdc-8e7e-4a1e-b7a0-5456081ba068\") " pod="openstack/nova-cell1-f866-account-create-update-fl8b6" Mar 10 19:13:57 crc kubenswrapper[4861]: I0310 19:13:57.067906 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cf7a-account-create-update-wf5cp" Mar 10 19:13:57 crc kubenswrapper[4861]: I0310 19:13:57.078374 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" Mar 10 19:13:57 crc kubenswrapper[4861]: I0310 19:13:57.094257 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dbb40fdc-8e7e-4a1e-b7a0-5456081ba068-operator-scripts\") pod \"nova-cell1-f866-account-create-update-fl8b6\" (UID: \"dbb40fdc-8e7e-4a1e-b7a0-5456081ba068\") " pod="openstack/nova-cell1-f866-account-create-update-fl8b6" Mar 10 19:13:57 crc kubenswrapper[4861]: I0310 19:13:57.101809 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-7890-account-create-update-z8mnl"] Mar 10 19:13:57 crc kubenswrapper[4861]: I0310 19:13:57.121367 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4lhkd\" (UniqueName: \"kubernetes.io/projected/dbb40fdc-8e7e-4a1e-b7a0-5456081ba068-kube-api-access-4lhkd\") pod \"nova-cell1-f866-account-create-update-fl8b6\" (UID: \"dbb40fdc-8e7e-4a1e-b7a0-5456081ba068\") " pod="openstack/nova-cell1-f866-account-create-update-fl8b6" Mar 10 19:13:57 crc kubenswrapper[4861]: I0310 19:13:57.155495 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-cf7a-account-create-update-jgz8n"] Mar 10 19:13:57 crc kubenswrapper[4861]: I0310 19:13:57.235146 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-f866-account-create-update-fl8b6" Mar 10 19:13:57 crc kubenswrapper[4861]: I0310 19:13:57.245298 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Mar 10 19:13:57 crc kubenswrapper[4861]: I0310 19:13:57.293670 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-f866-account-create-update-4d26l"] Mar 10 19:13:57 crc kubenswrapper[4861]: I0310 19:13:57.301017 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/da655568-6f44-40e6-af1d-278b701fc52e-operator-scripts\") pod \"neutron-7890-account-create-update-z8mnl\" (UID: \"da655568-6f44-40e6-af1d-278b701fc52e\") " pod="openstack/neutron-7890-account-create-update-z8mnl" Mar 10 19:13:57 crc kubenswrapper[4861]: I0310 19:13:57.301385 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8p5tq\" (UniqueName: \"kubernetes.io/projected/da655568-6f44-40e6-af1d-278b701fc52e-kube-api-access-8p5tq\") pod \"neutron-7890-account-create-update-z8mnl\" (UID: \"da655568-6f44-40e6-af1d-278b701fc52e\") " pod="openstack/neutron-7890-account-create-update-z8mnl" Mar 10 19:13:57 crc kubenswrapper[4861]: I0310 19:13:57.308589 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/da655568-6f44-40e6-af1d-278b701fc52e-operator-scripts\") pod \"neutron-7890-account-create-update-z8mnl\" (UID: \"da655568-6f44-40e6-af1d-278b701fc52e\") " pod="openstack/neutron-7890-account-create-update-z8mnl" Mar 10 19:13:57 crc kubenswrapper[4861]: I0310 19:13:57.316471 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-f866-account-create-update-4d26l"] Mar 10 19:13:57 crc kubenswrapper[4861]: I0310 19:13:57.328848 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/placement-db-sync-7g95t"] Mar 10 19:13:57 crc kubenswrapper[4861]: I0310 19:13:57.338660 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-sync-7g95t"] Mar 10 19:13:57 crc kubenswrapper[4861]: I0310 19:13:57.354496 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8p5tq\" (UniqueName: \"kubernetes.io/projected/da655568-6f44-40e6-af1d-278b701fc52e-kube-api-access-8p5tq\") pod \"neutron-7890-account-create-update-z8mnl\" (UID: \"da655568-6f44-40e6-af1d-278b701fc52e\") " pod="openstack/neutron-7890-account-create-update-z8mnl" Mar 10 19:13:57 crc kubenswrapper[4861]: I0310 19:13:57.356468 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-sync-2x6cv"] Mar 10 19:13:57 crc kubenswrapper[4861]: I0310 19:13:57.376646 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-7890-account-create-update-2c6b8"] Mar 10 19:13:57 crc kubenswrapper[4861]: I0310 19:13:57.397765 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-sync-qw5zr"] Mar 10 19:13:57 crc kubenswrapper[4861]: I0310 19:13:57.420074 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-sync-j5zv5"] Mar 10 19:13:57 crc kubenswrapper[4861]: E0310 19:13:57.424999 4861 configmap.go:193] Couldn't get configMap openstack/rabbitmq-cell1-config-data: configmap "rabbitmq-cell1-config-data" not found Mar 10 19:13:57 crc kubenswrapper[4861]: E0310 19:13:57.425055 4861 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9fa4a97d-682a-40eb-93e0-5f5167ddb0a0-config-data podName:9fa4a97d-682a-40eb-93e0-5f5167ddb0a0 nodeName:}" failed. No retries permitted until 2026-03-10 19:13:57.925039629 +0000 UTC m=+1581.688475589 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/9fa4a97d-682a-40eb-93e0-5f5167ddb0a0-config-data") pod "rabbitmq-cell1-server-0" (UID: "9fa4a97d-682a-40eb-93e0-5f5167ddb0a0") : configmap "rabbitmq-cell1-config-data" not found Mar 10 19:13:57 crc kubenswrapper[4861]: I0310 19:13:57.427075 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-sync-2x6cv"] Mar 10 19:13:57 crc kubenswrapper[4861]: I0310 19:13:57.440878 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-7890-account-create-update-2c6b8"] Mar 10 19:13:57 crc kubenswrapper[4861]: I0310 19:13:57.452788 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-sync-j5zv5"] Mar 10 19:13:57 crc kubenswrapper[4861]: I0310 19:13:57.460326 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-sync-qw5zr"] Mar 10 19:13:57 crc kubenswrapper[4861]: I0310 19:13:57.481667 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovsdbserver-sb-0"] Mar 10 19:13:57 crc kubenswrapper[4861]: I0310 19:13:57.491127 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovsdbserver-sb-0" podUID="bb836e5b-f1a1-4d7a-8de0-03cddd650c4a" containerName="openstack-network-exporter" containerID="cri-o://fe5ee7032ce5d91ee136401e0e5a8ee13f7696e2e7c6cc9ccb1c1348c985d7d0" gracePeriod=300 Mar 10 19:13:57 crc kubenswrapper[4861]: I0310 19:13:57.496058 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovsdbserver-nb-0"] Mar 10 19:13:57 crc kubenswrapper[4861]: I0310 19:13:57.496393 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovsdbserver-nb-0" podUID="533092ca-8a4b-4005-909c-32736cde1a1e" containerName="openstack-network-exporter" containerID="cri-o://563ee7a1f5b83d7d42165f459dccecfc60f2f551d3f0e59ace85417509a8ec48" gracePeriod=300 Mar 10 19:13:57 crc kubenswrapper[4861]: I0310 
19:13:57.505803 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-7890-account-create-update-z8mnl" Mar 10 19:13:57 crc kubenswrapper[4861]: I0310 19:13:57.509362 4861 generic.go:334] "Generic (PLEG): container finished" podID="8c1ba054-6941-4e52-b792-250287f25d92" containerID="68b9f900ee27424079091d09f2809c5e2d49434a357e98e6fdeb385cd8222d86" exitCode=2 Mar 10 19:13:57 crc kubenswrapper[4861]: I0310 19:13:57.509418 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"8c1ba054-6941-4e52-b792-250287f25d92","Type":"ContainerDied","Data":"68b9f900ee27424079091d09f2809c5e2d49434a357e98e6fdeb385cd8222d86"} Mar 10 19:13:57 crc kubenswrapper[4861]: I0310 19:13:57.516590 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-cell-mapping-pzt7q"] Mar 10 19:13:57 crc kubenswrapper[4861]: I0310 19:13:57.533871 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-cell-mapping-pzt7q"] Mar 10 19:13:57 crc kubenswrapper[4861]: I0310 19:13:57.582451 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cell-mapping-qnt8d"] Mar 10 19:13:57 crc kubenswrapper[4861]: I0310 19:13:57.589417 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-cell-mapping-qnt8d"] Mar 10 19:13:57 crc kubenswrapper[4861]: I0310 19:13:57.602510 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-ring-rebalance-jvlmr"] Mar 10 19:13:57 crc kubenswrapper[4861]: I0310 19:13:57.605316 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovsdbserver-sb-0" podUID="bb836e5b-f1a1-4d7a-8de0-03cddd650c4a" containerName="ovsdbserver-sb" containerID="cri-o://d0df70b068b7e2c89545406ff08c2cfd24ffaf8b8337d1abdd32af448ab278bc" gracePeriod=300 Mar 10 19:13:57 crc kubenswrapper[4861]: I0310 19:13:57.609615 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openstack/swift-ring-rebalance-jvlmr"] Mar 10 19:13:57 crc kubenswrapper[4861]: I0310 19:13:57.614532 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovsdbserver-nb-0" podUID="533092ca-8a4b-4005-909c-32736cde1a1e" containerName="ovsdbserver-nb" containerID="cri-o://699e5be84dcd8a39bd7ffd4f5ee267f814286d3ca759b64a5eb0a24bee7e3fdd" gracePeriod=300 Mar 10 19:13:57 crc kubenswrapper[4861]: I0310 19:13:57.616318 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-metrics-8h5gk"] Mar 10 19:13:57 crc kubenswrapper[4861]: I0310 19:13:57.616536 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovn-controller-metrics-8h5gk" podUID="d0c73a13-57f7-43aa-8e0a-ba36a3195653" containerName="openstack-network-exporter" containerID="cri-o://fd5a7b828f5536384606fbd147ae01f6df41721f8f15e86eef7446e25089e846" gracePeriod=30 Mar 10 19:13:57 crc kubenswrapper[4861]: I0310 19:13:57.628118 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-zvlgw"] Mar 10 19:13:57 crc kubenswrapper[4861]: I0310 19:13:57.754530 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-ovs-cw7x8"] Mar 10 19:13:57 crc kubenswrapper[4861]: I0310 19:13:57.806860 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-storage-0"] Mar 10 19:13:57 crc kubenswrapper[4861]: I0310 19:13:57.808238 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="04bbfc10-7f55-45a5-8a53-70e994a09bc9" containerName="container-server" containerID="cri-o://b3ade23dd33552771452468d0c1f332e38b2c8795245f8a42e5b5c4e2ed70338" gracePeriod=30 Mar 10 19:13:57 crc kubenswrapper[4861]: I0310 19:13:57.809520 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="04bbfc10-7f55-45a5-8a53-70e994a09bc9" containerName="account-reaper" 
containerID="cri-o://11a99a6a7e0db5891a44034177a04d1985fa20e0f2d3290918a14ddd4959f420" gracePeriod=30 Mar 10 19:13:57 crc kubenswrapper[4861]: I0310 19:13:57.809990 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="04bbfc10-7f55-45a5-8a53-70e994a09bc9" containerName="object-updater" containerID="cri-o://66a2a9c7ab44445d4eeb595e1526f88c8cdbc26a0ff3fdd9ff021d8d32a4a982" gracePeriod=30 Mar 10 19:13:57 crc kubenswrapper[4861]: I0310 19:13:57.809976 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="04bbfc10-7f55-45a5-8a53-70e994a09bc9" containerName="account-server" containerID="cri-o://63a9bdbdc82026e9332fbae1efef4878f557997e81cdaf15c41418eb885ff288" gracePeriod=30 Mar 10 19:13:57 crc kubenswrapper[4861]: I0310 19:13:57.810142 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="04bbfc10-7f55-45a5-8a53-70e994a09bc9" containerName="swift-recon-cron" containerID="cri-o://2bd4ad2d926f4ab721bec00675970f1a24d0671a8e5f1570ec65b6917857aedb" gracePeriod=30 Mar 10 19:13:57 crc kubenswrapper[4861]: I0310 19:13:57.810184 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="04bbfc10-7f55-45a5-8a53-70e994a09bc9" containerName="rsync" containerID="cri-o://73f6971b51316dd9e12239ff145daa5bc5b88b45caa9a90726d3eeb744b9fcb6" gracePeriod=30 Mar 10 19:13:57 crc kubenswrapper[4861]: I0310 19:13:57.810211 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="04bbfc10-7f55-45a5-8a53-70e994a09bc9" containerName="account-auditor" containerID="cri-o://55fdaa1d03e36a25dbe23991f3c3f1f316d93446c624fccb0f36484e914ef862" gracePeriod=30 Mar 10 19:13:57 crc kubenswrapper[4861]: I0310 19:13:57.810226 4861 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openstack/swift-storage-0" podUID="04bbfc10-7f55-45a5-8a53-70e994a09bc9" containerName="object-expirer" containerID="cri-o://bef7cce63ec465afa0014e811d86aceb3dd6ecfcc6a1a0b0c73a257f27213042" gracePeriod=30 Mar 10 19:13:57 crc kubenswrapper[4861]: I0310 19:13:57.810254 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="04bbfc10-7f55-45a5-8a53-70e994a09bc9" containerName="account-replicator" containerID="cri-o://2454ccf10166b5528edb6a3e86ffa92bbe7f052583e5e86477cc4d4a7bbd47cb" gracePeriod=30 Mar 10 19:13:57 crc kubenswrapper[4861]: I0310 19:13:57.810278 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="04bbfc10-7f55-45a5-8a53-70e994a09bc9" containerName="container-updater" containerID="cri-o://ab8c0e84aed350a4ab30debf28ea1e963b8e3e21aaeb08b63288cae676be3729" gracePeriod=30 Mar 10 19:13:57 crc kubenswrapper[4861]: I0310 19:13:57.810300 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="04bbfc10-7f55-45a5-8a53-70e994a09bc9" containerName="object-replicator" containerID="cri-o://5da060e6ca296ed684fc7e74fb2eb9b5f8999f47393abd75eb45df33aa0e5f1d" gracePeriod=30 Mar 10 19:13:57 crc kubenswrapper[4861]: I0310 19:13:57.810318 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="04bbfc10-7f55-45a5-8a53-70e994a09bc9" containerName="object-server" containerID="cri-o://43df18a6de05237d4ac2005dc34fa3948f76b3a87364c4a8cea9cbd0359fd444" gracePeriod=30 Mar 10 19:13:57 crc kubenswrapper[4861]: I0310 19:13:57.810335 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="04bbfc10-7f55-45a5-8a53-70e994a09bc9" containerName="object-auditor" containerID="cri-o://f9e781e035bf250fe3ee13abaf159d7c55149ddf165cabef696cdc1f4ec625ad" gracePeriod=30 Mar 10 19:13:57 crc 
kubenswrapper[4861]: I0310 19:13:57.810365 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="04bbfc10-7f55-45a5-8a53-70e994a09bc9" containerName="container-replicator" containerID="cri-o://d2ff3b6b89cfd7c1dc305fb22ed53d7ed15f44f949992d564c643552f9f8274d" gracePeriod=30 Mar 10 19:13:57 crc kubenswrapper[4861]: I0310 19:13:57.810404 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="04bbfc10-7f55-45a5-8a53-70e994a09bc9" containerName="container-auditor" containerID="cri-o://97b623396baf8ceeff4e0ed2a9e396771f9bf11b23a843a8a998619ba053f542" gracePeriod=30 Mar 10 19:13:57 crc kubenswrapper[4861]: I0310 19:13:57.830364 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-sync-snwft"] Mar 10 19:13:57 crc kubenswrapper[4861]: I0310 19:13:57.847156 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-sync-snwft"] Mar 10 19:13:57 crc kubenswrapper[4861]: I0310 19:13:57.878484 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7749c44969-9z2z2"] Mar 10 19:13:57 crc kubenswrapper[4861]: I0310 19:13:57.882911 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7749c44969-9z2z2" podUID="187a3484-7a9d-499a-91d0-1867ed682d05" containerName="dnsmasq-dns" containerID="cri-o://ca5fa990ae2a8c88edbef03bdd1903c7407d516f5028387bb035e3115a89eb99" gracePeriod=10 Mar 10 19:13:57 crc kubenswrapper[4861]: I0310 19:13:57.916780 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-64c5bb5d74-nmtmm"] Mar 10 19:13:57 crc kubenswrapper[4861]: I0310 19:13:57.917072 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/placement-64c5bb5d74-nmtmm" podUID="9140f7c5-893a-4128-85aa-2db96537b483" containerName="placement-log" 
containerID="cri-o://22089e443eba78df73bea89d1cfc14591cb3869b317c834f46d0e6f03d2fcf87" gracePeriod=30 Mar 10 19:13:57 crc kubenswrapper[4861]: I0310 19:13:57.917487 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/placement-64c5bb5d74-nmtmm" podUID="9140f7c5-893a-4128-85aa-2db96537b483" containerName="placement-api" containerID="cri-o://e98ecd9f074cea32b652b4dc4bff3f04e0cf6f8c51cb075afe090dd79b3971a4" gracePeriod=30 Mar 10 19:13:57 crc kubenswrapper[4861]: I0310 19:13:57.935838 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-lrz47"] Mar 10 19:13:57 crc kubenswrapper[4861]: E0310 19:13:57.938648 4861 configmap.go:193] Couldn't get configMap openstack/rabbitmq-cell1-config-data: configmap "rabbitmq-cell1-config-data" not found Mar 10 19:13:57 crc kubenswrapper[4861]: E0310 19:13:57.938737 4861 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9fa4a97d-682a-40eb-93e0-5f5167ddb0a0-config-data podName:9fa4a97d-682a-40eb-93e0-5f5167ddb0a0 nodeName:}" failed. No retries permitted until 2026-03-10 19:13:58.938720553 +0000 UTC m=+1582.702156513 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/9fa4a97d-682a-40eb-93e0-5f5167ddb0a0-config-data") pod "rabbitmq-cell1-server-0" (UID: "9fa4a97d-682a-40eb-93e0-5f5167ddb0a0") : configmap "rabbitmq-cell1-config-data" not found Mar 10 19:13:57 crc kubenswrapper[4861]: E0310 19:13:57.948342 4861 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 10 19:13:57 crc kubenswrapper[4861]: container &Container{Name:mariadb-account-create-update,Image:quay.io/podified-antelope-centos9/openstack-mariadb@sha256:763d1f1e8a1cf877c151c59609960fd2fa29e7e50001f8818122a2d51878befa,Command:[/bin/sh -c #!/bin/bash Mar 10 19:13:57 crc kubenswrapper[4861]: Mar 10 19:13:57 crc kubenswrapper[4861]: MYSQL_REMOTE_HOST="" source /var/lib/operator-scripts/mysql_root_auth.sh Mar 10 19:13:57 crc kubenswrapper[4861]: Mar 10 19:13:57 crc kubenswrapper[4861]: export DatabasePassword=${DatabasePassword:?"Please specify a DatabasePassword variable."} Mar 10 19:13:57 crc kubenswrapper[4861]: Mar 10 19:13:57 crc kubenswrapper[4861]: MYSQL_CMD="mysql -h -u root -P 3306" Mar 10 19:13:57 crc kubenswrapper[4861]: Mar 10 19:13:57 crc kubenswrapper[4861]: if [ -n "" ]; then Mar 10 19:13:57 crc kubenswrapper[4861]: GRANT_DATABASE="" Mar 10 19:13:57 crc kubenswrapper[4861]: else Mar 10 19:13:57 crc kubenswrapper[4861]: GRANT_DATABASE="*" Mar 10 19:13:57 crc kubenswrapper[4861]: fi Mar 10 19:13:57 crc kubenswrapper[4861]: Mar 10 19:13:57 crc kubenswrapper[4861]: # going for maximum compatibility here: Mar 10 19:13:57 crc kubenswrapper[4861]: # 1. MySQL 8 no longer allows implicit create user when GRANT is used Mar 10 19:13:57 crc kubenswrapper[4861]: # 2. MariaDB has "CREATE OR REPLACE", but MySQL does not Mar 10 19:13:57 crc kubenswrapper[4861]: # 3. 
create user with CREATE but then do all password and TLS with ALTER to Mar 10 19:13:57 crc kubenswrapper[4861]: # support updates Mar 10 19:13:57 crc kubenswrapper[4861]: Mar 10 19:13:57 crc kubenswrapper[4861]: $MYSQL_CMD < logger="UnhandledError" Mar 10 19:13:57 crc kubenswrapper[4861]: E0310 19:13:57.964022 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mariadb-account-create-update\" with CreateContainerConfigError: \"secret \\\"openstack-cell1-mariadb-root-db-secret\\\" not found\"" pod="openstack/root-account-create-update-lrz47" podUID="ee15a5af-fb3a-45fc-bfa1-b9eb45418a32" Mar 10 19:13:57 crc kubenswrapper[4861]: I0310 19:13:57.985865 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Mar 10 19:13:57 crc kubenswrapper[4861]: I0310 19:13:57.986091 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="ef3a31e3-d3ba-4f5c-950a-1355bb61f657" containerName="glance-log" containerID="cri-o://90b5b8cd5ef23b88335ebf01c5c45b9491d2c08feb5b8b48a1120b2d60cb9a44" gracePeriod=30 Mar 10 19:13:57 crc kubenswrapper[4861]: I0310 19:13:57.987214 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="ef3a31e3-d3ba-4f5c-950a-1355bb61f657" containerName="glance-httpd" containerID="cri-o://0a745593b95faccead341e781c555d776ec808c970f5a9b91a652ebcebc1c8a2" gracePeriod=30 Mar 10 19:13:58 crc kubenswrapper[4861]: I0310 19:13:58.093768 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Mar 10 19:13:58 crc kubenswrapper[4861]: I0310 19:13:58.094732 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="ab499a55-1919-491f-8dc6-12344757201d" containerName="cinder-scheduler" 
containerID="cri-o://001bd034047bd1fb56f256a9f728c8c20197f9cb2c630b0bf6f2b961fe0c48ec" gracePeriod=30 Mar 10 19:13:58 crc kubenswrapper[4861]: I0310 19:13:58.095343 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="ab499a55-1919-491f-8dc6-12344757201d" containerName="probe" containerID="cri-o://a8146baaa87f96997feb8adc35ad2ceeace3b69ee66ea36c6996074e20d85886" gracePeriod=30 Mar 10 19:13:58 crc kubenswrapper[4861]: I0310 19:13:58.140335 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-create-4zsw2"] Mar 10 19:13:58 crc kubenswrapper[4861]: I0310 19:13:58.176250 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-create-4zsw2"] Mar 10 19:13:58 crc kubenswrapper[4861]: I0310 19:13:58.239309 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Mar 10 19:13:58 crc kubenswrapper[4861]: I0310 19:13:58.239686 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="509298b8-3d6b-4182-b989-c25c4791ce6b" containerName="glance-log" containerID="cri-o://ad8cd63983d0c3e17993f8a5dd489855bb0d46375866f9a2ddec97fd0f615caf" gracePeriod=30 Mar 10 19:13:58 crc kubenswrapper[4861]: I0310 19:13:58.239861 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="509298b8-3d6b-4182-b989-c25c4791ce6b" containerName="glance-httpd" containerID="cri-o://439de4fa63cd6df8ebf550bbedc971f32ef2892cd74ec239cbd5feceb0c8a97b" gracePeriod=30 Mar 10 19:13:58 crc kubenswrapper[4861]: I0310 19:13:58.266985 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-create-sl4sb"] Mar 10 19:13:58 crc kubenswrapper[4861]: E0310 19:13:58.275543 4861 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or 
running: checking if PID of 699e5be84dcd8a39bd7ffd4f5ee267f814286d3ca759b64a5eb0a24bee7e3fdd is running failed: container process not found" containerID="699e5be84dcd8a39bd7ffd4f5ee267f814286d3ca759b64a5eb0a24bee7e3fdd" cmd=["/usr/bin/pidof","ovsdb-server"] Mar 10 19:13:58 crc kubenswrapper[4861]: E0310 19:13:58.276392 4861 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 699e5be84dcd8a39bd7ffd4f5ee267f814286d3ca759b64a5eb0a24bee7e3fdd is running failed: container process not found" containerID="699e5be84dcd8a39bd7ffd4f5ee267f814286d3ca759b64a5eb0a24bee7e3fdd" cmd=["/usr/bin/pidof","ovsdb-server"] Mar 10 19:13:58 crc kubenswrapper[4861]: I0310 19:13:58.283241 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-sl4sb"] Mar 10 19:13:58 crc kubenswrapper[4861]: E0310 19:13:58.283961 4861 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 699e5be84dcd8a39bd7ffd4f5ee267f814286d3ca759b64a5eb0a24bee7e3fdd is running failed: container process not found" containerID="699e5be84dcd8a39bd7ffd4f5ee267f814286d3ca759b64a5eb0a24bee7e3fdd" cmd=["/usr/bin/pidof","ovsdb-server"] Mar 10 19:13:58 crc kubenswrapper[4861]: E0310 19:13:58.284002 4861 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 699e5be84dcd8a39bd7ffd4f5ee267f814286d3ca759b64a5eb0a24bee7e3fdd is running failed: container process not found" probeType="Readiness" pod="openstack/ovsdbserver-nb-0" podUID="533092ca-8a4b-4005-909c-32736cde1a1e" containerName="ovsdbserver-nb" Mar 10 19:13:58 crc kubenswrapper[4861]: I0310 19:13:58.312290 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Mar 10 19:13:58 crc kubenswrapper[4861]: I0310 19:13:58.312581 4861 kuberuntime_container.go:808] 
"Killing container with a grace period" pod="openstack/cinder-api-0" podUID="8d86917a-2e89-4e29-a1f2-673b0afbf27a" containerName="cinder-api-log" containerID="cri-o://0d0a47976272498f87196b55501f579fb4bfa276b2027267e2ac56b2aa3530a8" gracePeriod=30 Mar 10 19:13:58 crc kubenswrapper[4861]: I0310 19:13:58.313013 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="8d86917a-2e89-4e29-a1f2-673b0afbf27a" containerName="cinder-api" containerID="cri-o://74e14b5c097a70640577953cf9b0014e196f222e4e500727664862cc2d4d4729" gracePeriod=30 Mar 10 19:13:58 crc kubenswrapper[4861]: I0310 19:13:58.320748 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-f9e6-account-create-update-mh4cn"] Mar 10 19:13:58 crc kubenswrapper[4861]: I0310 19:13:58.330548 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-f9e6-account-create-update-mh4cn"] Mar 10 19:13:58 crc kubenswrapper[4861]: I0310 19:13:58.340515 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-e6bf-account-create-update-rgnpm"] Mar 10 19:13:58 crc kubenswrapper[4861]: I0310 19:13:58.367036 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstack-cell1-galera-0"] Mar 10 19:13:58 crc kubenswrapper[4861]: I0310 19:13:58.382686 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-create-frs8p"] Mar 10 19:13:58 crc kubenswrapper[4861]: I0310 19:13:58.448367 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-create-frs8p"] Mar 10 19:13:58 crc kubenswrapper[4861]: I0310 19:13:58.458405 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-a4bb-account-create-update-dblgj"] Mar 10 19:13:58 crc kubenswrapper[4861]: I0310 19:13:58.475680 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-a4bb-account-create-update-dblgj"] Mar 10 19:13:58 crc kubenswrapper[4861]: 
I0310 19:13:58.498825 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-create-tgzx6"] Mar 10 19:13:58 crc kubenswrapper[4861]: I0310 19:13:58.517080 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-create-tgzx6"] Mar 10 19:13:58 crc kubenswrapper[4861]: I0310 19:13:58.518150 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-8h5gk_d0c73a13-57f7-43aa-8e0a-ba36a3195653/openstack-network-exporter/0.log" Mar 10 19:13:58 crc kubenswrapper[4861]: I0310 19:13:58.518216 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-8h5gk" Mar 10 19:13:58 crc kubenswrapper[4861]: I0310 19:13:58.555526 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Mar 10 19:13:58 crc kubenswrapper[4861]: I0310 19:13:58.555595 4861 generic.go:334] "Generic (PLEG): container finished" podID="187a3484-7a9d-499a-91d0-1867ed682d05" containerID="ca5fa990ae2a8c88edbef03bdd1903c7407d516f5028387bb035e3115a89eb99" exitCode=0 Mar 10 19:13:58 crc kubenswrapper[4861]: I0310 19:13:58.555594 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7749c44969-9z2z2" event={"ID":"187a3484-7a9d-499a-91d0-1867ed682d05","Type":"ContainerDied","Data":"ca5fa990ae2a8c88edbef03bdd1903c7407d516f5028387bb035e3115a89eb99"} Mar 10 19:13:58 crc kubenswrapper[4861]: I0310 19:13:58.555916 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="930df8f4-7ebf-4425-976f-4f52654586bb" containerName="nova-api-log" containerID="cri-o://6eb92aeb41c6749ec876f80e99358ba49b19a270c11ada403487be3ca0ef76c4" gracePeriod=30 Mar 10 19:13:58 crc kubenswrapper[4861]: I0310 19:13:58.556425 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="930df8f4-7ebf-4425-976f-4f52654586bb" 
containerName="nova-api-api" containerID="cri-o://6b7111851cf4f58c8e796e6b95e77ca3422cc025ae5304753172c6a1675a0fd1" gracePeriod=30 Mar 10 19:13:58 crc kubenswrapper[4861]: I0310 19:13:58.578756 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Mar 10 19:13:58 crc kubenswrapper[4861]: I0310 19:13:58.609447 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cf7a-account-create-update-wf5cp"] Mar 10 19:13:58 crc kubenswrapper[4861]: I0310 19:13:58.630777 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-69987f456f-bjbjk"] Mar 10 19:13:58 crc kubenswrapper[4861]: I0310 19:13:58.631052 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-69987f456f-bjbjk" podUID="692615cd-3dd2-4970-9d35-63073e2403ba" containerName="neutron-api" containerID="cri-o://c33900d456694805e616c7853b9af4d757f2fbf528a6cc4897258f4c25f58c83" gracePeriod=30 Mar 10 19:13:58 crc kubenswrapper[4861]: I0310 19:13:58.632751 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-69987f456f-bjbjk" podUID="692615cd-3dd2-4970-9d35-63073e2403ba" containerName="neutron-httpd" containerID="cri-o://8c0527061a420c2c42f4d25de783f4827f7ddac4b9a8e5486e1239c416673684" gracePeriod=30 Mar 10 19:13:58 crc kubenswrapper[4861]: I0310 19:13:58.654087 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Mar 10 19:13:58 crc kubenswrapper[4861]: I0310 19:13:58.654306 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="36bd8cd3-7b2c-45fb-b171-aa2884df4e98" containerName="nova-scheduler-scheduler" containerID="cri-o://a6e23144072f6812cabc16305a9cf3ff58abd4877ddf98f7f123b16f5863c1e7" gracePeriod=30 Mar 10 19:13:58 crc kubenswrapper[4861]: I0310 19:13:58.672136 4861 generic.go:334] "Generic (PLEG): container finished" 
podID="04bbfc10-7f55-45a5-8a53-70e994a09bc9" containerID="11a99a6a7e0db5891a44034177a04d1985fa20e0f2d3290918a14ddd4959f420" exitCode=0 Mar 10 19:13:58 crc kubenswrapper[4861]: I0310 19:13:58.672160 4861 generic.go:334] "Generic (PLEG): container finished" podID="04bbfc10-7f55-45a5-8a53-70e994a09bc9" containerID="55fdaa1d03e36a25dbe23991f3c3f1f316d93446c624fccb0f36484e914ef862" exitCode=0 Mar 10 19:13:58 crc kubenswrapper[4861]: I0310 19:13:58.672168 4861 generic.go:334] "Generic (PLEG): container finished" podID="04bbfc10-7f55-45a5-8a53-70e994a09bc9" containerID="2454ccf10166b5528edb6a3e86ffa92bbe7f052583e5e86477cc4d4a7bbd47cb" exitCode=0 Mar 10 19:13:58 crc kubenswrapper[4861]: I0310 19:13:58.672175 4861 generic.go:334] "Generic (PLEG): container finished" podID="04bbfc10-7f55-45a5-8a53-70e994a09bc9" containerID="63a9bdbdc82026e9332fbae1efef4878f557997e81cdaf15c41418eb885ff288" exitCode=0 Mar 10 19:13:58 crc kubenswrapper[4861]: I0310 19:13:58.672180 4861 generic.go:334] "Generic (PLEG): container finished" podID="04bbfc10-7f55-45a5-8a53-70e994a09bc9" containerID="73f6971b51316dd9e12239ff145daa5bc5b88b45caa9a90726d3eeb744b9fcb6" exitCode=0 Mar 10 19:13:58 crc kubenswrapper[4861]: I0310 19:13:58.672186 4861 generic.go:334] "Generic (PLEG): container finished" podID="04bbfc10-7f55-45a5-8a53-70e994a09bc9" containerID="bef7cce63ec465afa0014e811d86aceb3dd6ecfcc6a1a0b0c73a257f27213042" exitCode=0 Mar 10 19:13:58 crc kubenswrapper[4861]: I0310 19:13:58.672192 4861 generic.go:334] "Generic (PLEG): container finished" podID="04bbfc10-7f55-45a5-8a53-70e994a09bc9" containerID="66a2a9c7ab44445d4eeb595e1526f88c8cdbc26a0ff3fdd9ff021d8d32a4a982" exitCode=0 Mar 10 19:13:58 crc kubenswrapper[4861]: I0310 19:13:58.672198 4861 generic.go:334] "Generic (PLEG): container finished" podID="04bbfc10-7f55-45a5-8a53-70e994a09bc9" containerID="f9e781e035bf250fe3ee13abaf159d7c55149ddf165cabef696cdc1f4ec625ad" exitCode=0 Mar 10 19:13:58 crc kubenswrapper[4861]: I0310 19:13:58.672205 4861 
generic.go:334] "Generic (PLEG): container finished" podID="04bbfc10-7f55-45a5-8a53-70e994a09bc9" containerID="5da060e6ca296ed684fc7e74fb2eb9b5f8999f47393abd75eb45df33aa0e5f1d" exitCode=0 Mar 10 19:13:58 crc kubenswrapper[4861]: I0310 19:13:58.672211 4861 generic.go:334] "Generic (PLEG): container finished" podID="04bbfc10-7f55-45a5-8a53-70e994a09bc9" containerID="ab8c0e84aed350a4ab30debf28ea1e963b8e3e21aaeb08b63288cae676be3729" exitCode=0 Mar 10 19:13:58 crc kubenswrapper[4861]: I0310 19:13:58.672217 4861 generic.go:334] "Generic (PLEG): container finished" podID="04bbfc10-7f55-45a5-8a53-70e994a09bc9" containerID="97b623396baf8ceeff4e0ed2a9e396771f9bf11b23a843a8a998619ba053f542" exitCode=0 Mar 10 19:13:58 crc kubenswrapper[4861]: I0310 19:13:58.672223 4861 generic.go:334] "Generic (PLEG): container finished" podID="04bbfc10-7f55-45a5-8a53-70e994a09bc9" containerID="d2ff3b6b89cfd7c1dc305fb22ed53d7ed15f44f949992d564c643552f9f8274d" exitCode=0 Mar 10 19:13:58 crc kubenswrapper[4861]: I0310 19:13:58.672229 4861 generic.go:334] "Generic (PLEG): container finished" podID="04bbfc10-7f55-45a5-8a53-70e994a09bc9" containerID="b3ade23dd33552771452468d0c1f332e38b2c8795245f8a42e5b5c4e2ed70338" exitCode=0 Mar 10 19:13:58 crc kubenswrapper[4861]: I0310 19:13:58.672265 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"04bbfc10-7f55-45a5-8a53-70e994a09bc9","Type":"ContainerDied","Data":"11a99a6a7e0db5891a44034177a04d1985fa20e0f2d3290918a14ddd4959f420"} Mar 10 19:13:58 crc kubenswrapper[4861]: I0310 19:13:58.672287 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"04bbfc10-7f55-45a5-8a53-70e994a09bc9","Type":"ContainerDied","Data":"55fdaa1d03e36a25dbe23991f3c3f1f316d93446c624fccb0f36484e914ef862"} Mar 10 19:13:58 crc kubenswrapper[4861]: I0310 19:13:58.672300 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" 
event={"ID":"04bbfc10-7f55-45a5-8a53-70e994a09bc9","Type":"ContainerDied","Data":"2454ccf10166b5528edb6a3e86ffa92bbe7f052583e5e86477cc4d4a7bbd47cb"} Mar 10 19:13:58 crc kubenswrapper[4861]: I0310 19:13:58.672308 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"04bbfc10-7f55-45a5-8a53-70e994a09bc9","Type":"ContainerDied","Data":"63a9bdbdc82026e9332fbae1efef4878f557997e81cdaf15c41418eb885ff288"} Mar 10 19:13:58 crc kubenswrapper[4861]: I0310 19:13:58.672316 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"04bbfc10-7f55-45a5-8a53-70e994a09bc9","Type":"ContainerDied","Data":"73f6971b51316dd9e12239ff145daa5bc5b88b45caa9a90726d3eeb744b9fcb6"} Mar 10 19:13:58 crc kubenswrapper[4861]: I0310 19:13:58.672324 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"04bbfc10-7f55-45a5-8a53-70e994a09bc9","Type":"ContainerDied","Data":"bef7cce63ec465afa0014e811d86aceb3dd6ecfcc6a1a0b0c73a257f27213042"} Mar 10 19:13:58 crc kubenswrapper[4861]: I0310 19:13:58.672332 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"04bbfc10-7f55-45a5-8a53-70e994a09bc9","Type":"ContainerDied","Data":"66a2a9c7ab44445d4eeb595e1526f88c8cdbc26a0ff3fdd9ff021d8d32a4a982"} Mar 10 19:13:58 crc kubenswrapper[4861]: I0310 19:13:58.672339 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"04bbfc10-7f55-45a5-8a53-70e994a09bc9","Type":"ContainerDied","Data":"f9e781e035bf250fe3ee13abaf159d7c55149ddf165cabef696cdc1f4ec625ad"} Mar 10 19:13:58 crc kubenswrapper[4861]: I0310 19:13:58.672547 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"04bbfc10-7f55-45a5-8a53-70e994a09bc9","Type":"ContainerDied","Data":"5da060e6ca296ed684fc7e74fb2eb9b5f8999f47393abd75eb45df33aa0e5f1d"} Mar 10 19:13:58 crc kubenswrapper[4861]: I0310 
19:13:58.672558 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"04bbfc10-7f55-45a5-8a53-70e994a09bc9","Type":"ContainerDied","Data":"ab8c0e84aed350a4ab30debf28ea1e963b8e3e21aaeb08b63288cae676be3729"} Mar 10 19:13:58 crc kubenswrapper[4861]: I0310 19:13:58.672567 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"04bbfc10-7f55-45a5-8a53-70e994a09bc9","Type":"ContainerDied","Data":"97b623396baf8ceeff4e0ed2a9e396771f9bf11b23a843a8a998619ba053f542"} Mar 10 19:13:58 crc kubenswrapper[4861]: I0310 19:13:58.672575 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"04bbfc10-7f55-45a5-8a53-70e994a09bc9","Type":"ContainerDied","Data":"d2ff3b6b89cfd7c1dc305fb22ed53d7ed15f44f949992d564c643552f9f8274d"} Mar 10 19:13:58 crc kubenswrapper[4861]: I0310 19:13:58.672583 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"04bbfc10-7f55-45a5-8a53-70e994a09bc9","Type":"ContainerDied","Data":"b3ade23dd33552771452468d0c1f332e38b2c8795245f8a42e5b5c4e2ed70338"} Mar 10 19:13:58 crc kubenswrapper[4861]: I0310 19:13:58.675409 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-52e5-account-create-update-q8wl5"] Mar 10 19:13:58 crc kubenswrapper[4861]: I0310 19:13:58.677223 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/d0c73a13-57f7-43aa-8e0a-ba36a3195653-ovn-rundir\") pod \"d0c73a13-57f7-43aa-8e0a-ba36a3195653\" (UID: \"d0c73a13-57f7-43aa-8e0a-ba36a3195653\") " Mar 10 19:13:58 crc kubenswrapper[4861]: I0310 19:13:58.677274 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hz75m\" (UniqueName: \"kubernetes.io/projected/d0c73a13-57f7-43aa-8e0a-ba36a3195653-kube-api-access-hz75m\") pod \"d0c73a13-57f7-43aa-8e0a-ba36a3195653\" (UID: 
\"d0c73a13-57f7-43aa-8e0a-ba36a3195653\") " Mar 10 19:13:58 crc kubenswrapper[4861]: I0310 19:13:58.677354 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/d0c73a13-57f7-43aa-8e0a-ba36a3195653-metrics-certs-tls-certs\") pod \"d0c73a13-57f7-43aa-8e0a-ba36a3195653\" (UID: \"d0c73a13-57f7-43aa-8e0a-ba36a3195653\") " Mar 10 19:13:58 crc kubenswrapper[4861]: I0310 19:13:58.677380 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d0c73a13-57f7-43aa-8e0a-ba36a3195653-combined-ca-bundle\") pod \"d0c73a13-57f7-43aa-8e0a-ba36a3195653\" (UID: \"d0c73a13-57f7-43aa-8e0a-ba36a3195653\") " Mar 10 19:13:58 crc kubenswrapper[4861]: I0310 19:13:58.678054 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/d0c73a13-57f7-43aa-8e0a-ba36a3195653-ovs-rundir\") pod \"d0c73a13-57f7-43aa-8e0a-ba36a3195653\" (UID: \"d0c73a13-57f7-43aa-8e0a-ba36a3195653\") " Mar 10 19:13:58 crc kubenswrapper[4861]: I0310 19:13:58.678163 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d0c73a13-57f7-43aa-8e0a-ba36a3195653-config\") pod \"d0c73a13-57f7-43aa-8e0a-ba36a3195653\" (UID: \"d0c73a13-57f7-43aa-8e0a-ba36a3195653\") " Mar 10 19:13:58 crc kubenswrapper[4861]: I0310 19:13:58.690250 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d0c73a13-57f7-43aa-8e0a-ba36a3195653-config" (OuterVolumeSpecName: "config") pod "d0c73a13-57f7-43aa-8e0a-ba36a3195653" (UID: "d0c73a13-57f7-43aa-8e0a-ba36a3195653"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 10 19:13:58 crc kubenswrapper[4861]: I0310 19:13:58.690306 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d0c73a13-57f7-43aa-8e0a-ba36a3195653-ovs-rundir" (OuterVolumeSpecName: "ovs-rundir") pod "d0c73a13-57f7-43aa-8e0a-ba36a3195653" (UID: "d0c73a13-57f7-43aa-8e0a-ba36a3195653"). InnerVolumeSpecName "ovs-rundir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 10 19:13:58 crc kubenswrapper[4861]: I0310 19:13:58.690646 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d0c73a13-57f7-43aa-8e0a-ba36a3195653-ovn-rundir" (OuterVolumeSpecName: "ovn-rundir") pod "d0c73a13-57f7-43aa-8e0a-ba36a3195653" (UID: "d0c73a13-57f7-43aa-8e0a-ba36a3195653"). InnerVolumeSpecName "ovn-rundir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 10 19:13:58 crc kubenswrapper[4861]: I0310 19:13:58.695032 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-8h5gk_d0c73a13-57f7-43aa-8e0a-ba36a3195653/openstack-network-exporter/0.log" Mar 10 19:13:58 crc kubenswrapper[4861]: I0310 19:13:58.695077 4861 generic.go:334] "Generic (PLEG): container finished" podID="d0c73a13-57f7-43aa-8e0a-ba36a3195653" containerID="fd5a7b828f5536384606fbd147ae01f6df41721f8f15e86eef7446e25089e846" exitCode=2 Mar 10 19:13:58 crc kubenswrapper[4861]: I0310 19:13:58.695216 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-metrics-8h5gk" Mar 10 19:13:58 crc kubenswrapper[4861]: I0310 19:13:58.695869 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-8h5gk" event={"ID":"d0c73a13-57f7-43aa-8e0a-ba36a3195653","Type":"ContainerDied","Data":"fd5a7b828f5536384606fbd147ae01f6df41721f8f15e86eef7446e25089e846"} Mar 10 19:13:58 crc kubenswrapper[4861]: I0310 19:13:58.695898 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-8h5gk" event={"ID":"d0c73a13-57f7-43aa-8e0a-ba36a3195653","Type":"ContainerDied","Data":"2be44c1ed9c4193f97620f49ef77cea816423619f970e2c9516f5443a2a25111"} Mar 10 19:13:58 crc kubenswrapper[4861]: I0310 19:13:58.695914 4861 scope.go:117] "RemoveContainer" containerID="fd5a7b828f5536384606fbd147ae01f6df41721f8f15e86eef7446e25089e846" Mar 10 19:13:58 crc kubenswrapper[4861]: I0310 19:13:58.700770 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-db-create-km8tn"] Mar 10 19:13:58 crc kubenswrapper[4861]: I0310 19:13:58.701162 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d0c73a13-57f7-43aa-8e0a-ba36a3195653-kube-api-access-hz75m" (OuterVolumeSpecName: "kube-api-access-hz75m") pod "d0c73a13-57f7-43aa-8e0a-ba36a3195653" (UID: "d0c73a13-57f7-43aa-8e0a-ba36a3195653"). InnerVolumeSpecName "kube-api-access-hz75m". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 10 19:13:58 crc kubenswrapper[4861]: I0310 19:13:58.711278 4861 generic.go:334] "Generic (PLEG): container finished" podID="8d86917a-2e89-4e29-a1f2-673b0afbf27a" containerID="0d0a47976272498f87196b55501f579fb4bfa276b2027267e2ac56b2aa3530a8" exitCode=143 Mar 10 19:13:58 crc kubenswrapper[4861]: I0310 19:13:58.711323 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"8d86917a-2e89-4e29-a1f2-673b0afbf27a","Type":"ContainerDied","Data":"0d0a47976272498f87196b55501f579fb4bfa276b2027267e2ac56b2aa3530a8"} Mar 10 19:13:58 crc kubenswrapper[4861]: I0310 19:13:58.717881 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-db-create-km8tn"] Mar 10 19:13:58 crc kubenswrapper[4861]: I0310 19:13:58.724604 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-0" podUID="0ba95f55-3cea-4f0b-8f09-c6b4027789f8" containerName="rabbitmq" containerID="cri-o://32235165193d6a0485898ee070ca8f7382c523ae5f2c706e831442bec3b9e031" gracePeriod=604800 Mar 10 19:13:58 crc kubenswrapper[4861]: I0310 19:13:58.725952 4861 scope.go:117] "RemoveContainer" containerID="fd5a7b828f5536384606fbd147ae01f6df41721f8f15e86eef7446e25089e846" Mar 10 19:13:58 crc kubenswrapper[4861]: E0310 19:13:58.733448 4861 handlers.go:78] "Exec lifecycle hook for Container in Pod failed" err=< Mar 10 19:13:58 crc kubenswrapper[4861]: command '/usr/local/bin/container-scripts/stop-ovsdb-server.sh' exited with 137: ++ dirname /usr/local/bin/container-scripts/stop-ovsdb-server.sh Mar 10 19:13:58 crc kubenswrapper[4861]: + source /usr/local/bin/container-scripts/functions Mar 10 19:13:58 crc kubenswrapper[4861]: ++ OVNBridge=br-int Mar 10 19:13:58 crc kubenswrapper[4861]: ++ OVNRemote=tcp:localhost:6642 Mar 10 19:13:58 crc kubenswrapper[4861]: ++ OVNEncapType=geneve Mar 10 19:13:58 crc kubenswrapper[4861]: ++ OVNAvailabilityZones= Mar 10 19:13:58 
crc kubenswrapper[4861]: ++ EnableChassisAsGateway=true Mar 10 19:13:58 crc kubenswrapper[4861]: ++ PhysicalNetworks= Mar 10 19:13:58 crc kubenswrapper[4861]: ++ OVNHostName= Mar 10 19:13:58 crc kubenswrapper[4861]: ++ DB_FILE=/etc/openvswitch/conf.db Mar 10 19:13:58 crc kubenswrapper[4861]: ++ ovs_dir=/var/lib/openvswitch Mar 10 19:13:58 crc kubenswrapper[4861]: ++ FLOWS_RESTORE_SCRIPT=/var/lib/openvswitch/flows-script Mar 10 19:13:58 crc kubenswrapper[4861]: ++ FLOWS_RESTORE_DIR=/var/lib/openvswitch/saved-flows Mar 10 19:13:58 crc kubenswrapper[4861]: ++ SAFE_TO_STOP_OVSDB_SERVER_SEMAPHORE=/var/lib/openvswitch/is_safe_to_stop_ovsdb_server Mar 10 19:13:58 crc kubenswrapper[4861]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Mar 10 19:13:58 crc kubenswrapper[4861]: + sleep 0.5 Mar 10 19:13:58 crc kubenswrapper[4861]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Mar 10 19:13:58 crc kubenswrapper[4861]: + cleanup_ovsdb_server_semaphore Mar 10 19:13:58 crc kubenswrapper[4861]: + rm -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server Mar 10 19:13:58 crc kubenswrapper[4861]: + /usr/share/openvswitch/scripts/ovs-ctl stop --no-ovs-vswitchd Mar 10 19:13:58 crc kubenswrapper[4861]: > execCommand=["/usr/local/bin/container-scripts/stop-ovsdb-server.sh"] containerName="ovsdb-server" pod="openstack/ovn-controller-ovs-cw7x8" message=< Mar 10 19:13:58 crc kubenswrapper[4861]: Exiting ovsdb-server (5) ++ dirname /usr/local/bin/container-scripts/stop-ovsdb-server.sh Mar 10 19:13:58 crc kubenswrapper[4861]: + source /usr/local/bin/container-scripts/functions Mar 10 19:13:58 crc kubenswrapper[4861]: ++ OVNBridge=br-int Mar 10 19:13:58 crc kubenswrapper[4861]: ++ OVNRemote=tcp:localhost:6642 Mar 10 19:13:58 crc kubenswrapper[4861]: ++ OVNEncapType=geneve Mar 10 19:13:58 crc kubenswrapper[4861]: ++ OVNAvailabilityZones= Mar 10 19:13:58 crc kubenswrapper[4861]: ++ EnableChassisAsGateway=true Mar 10 19:13:58 crc kubenswrapper[4861]: ++ 
PhysicalNetworks= Mar 10 19:13:58 crc kubenswrapper[4861]: ++ OVNHostName= Mar 10 19:13:58 crc kubenswrapper[4861]: ++ DB_FILE=/etc/openvswitch/conf.db Mar 10 19:13:58 crc kubenswrapper[4861]: ++ ovs_dir=/var/lib/openvswitch Mar 10 19:13:58 crc kubenswrapper[4861]: ++ FLOWS_RESTORE_SCRIPT=/var/lib/openvswitch/flows-script Mar 10 19:13:58 crc kubenswrapper[4861]: ++ FLOWS_RESTORE_DIR=/var/lib/openvswitch/saved-flows Mar 10 19:13:58 crc kubenswrapper[4861]: ++ SAFE_TO_STOP_OVSDB_SERVER_SEMAPHORE=/var/lib/openvswitch/is_safe_to_stop_ovsdb_server Mar 10 19:13:58 crc kubenswrapper[4861]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Mar 10 19:13:58 crc kubenswrapper[4861]: + sleep 0.5 Mar 10 19:13:58 crc kubenswrapper[4861]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Mar 10 19:13:58 crc kubenswrapper[4861]: + cleanup_ovsdb_server_semaphore Mar 10 19:13:58 crc kubenswrapper[4861]: + rm -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server Mar 10 19:13:58 crc kubenswrapper[4861]: + /usr/share/openvswitch/scripts/ovs-ctl stop --no-ovs-vswitchd Mar 10 19:13:58 crc kubenswrapper[4861]: > Mar 10 19:13:58 crc kubenswrapper[4861]: E0310 19:13:58.733496 4861 kuberuntime_container.go:691] "PreStop hook failed" err=< Mar 10 19:13:58 crc kubenswrapper[4861]: command '/usr/local/bin/container-scripts/stop-ovsdb-server.sh' exited with 137: ++ dirname /usr/local/bin/container-scripts/stop-ovsdb-server.sh Mar 10 19:13:58 crc kubenswrapper[4861]: + source /usr/local/bin/container-scripts/functions Mar 10 19:13:58 crc kubenswrapper[4861]: ++ OVNBridge=br-int Mar 10 19:13:58 crc kubenswrapper[4861]: ++ OVNRemote=tcp:localhost:6642 Mar 10 19:13:58 crc kubenswrapper[4861]: ++ OVNEncapType=geneve Mar 10 19:13:58 crc kubenswrapper[4861]: ++ OVNAvailabilityZones= Mar 10 19:13:58 crc kubenswrapper[4861]: ++ EnableChassisAsGateway=true Mar 10 19:13:58 crc kubenswrapper[4861]: ++ PhysicalNetworks= Mar 10 19:13:58 crc kubenswrapper[4861]: ++ OVNHostName= 
Mar 10 19:13:58 crc kubenswrapper[4861]: ++ DB_FILE=/etc/openvswitch/conf.db Mar 10 19:13:58 crc kubenswrapper[4861]: ++ ovs_dir=/var/lib/openvswitch Mar 10 19:13:58 crc kubenswrapper[4861]: ++ FLOWS_RESTORE_SCRIPT=/var/lib/openvswitch/flows-script Mar 10 19:13:58 crc kubenswrapper[4861]: ++ FLOWS_RESTORE_DIR=/var/lib/openvswitch/saved-flows Mar 10 19:13:58 crc kubenswrapper[4861]: ++ SAFE_TO_STOP_OVSDB_SERVER_SEMAPHORE=/var/lib/openvswitch/is_safe_to_stop_ovsdb_server Mar 10 19:13:58 crc kubenswrapper[4861]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Mar 10 19:13:58 crc kubenswrapper[4861]: + sleep 0.5 Mar 10 19:13:58 crc kubenswrapper[4861]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Mar 10 19:13:58 crc kubenswrapper[4861]: + cleanup_ovsdb_server_semaphore Mar 10 19:13:58 crc kubenswrapper[4861]: + rm -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server Mar 10 19:13:58 crc kubenswrapper[4861]: + /usr/share/openvswitch/scripts/ovs-ctl stop --no-ovs-vswitchd Mar 10 19:13:58 crc kubenswrapper[4861]: > pod="openstack/ovn-controller-ovs-cw7x8" podUID="2f72ec66-0d64-4a5f-b1c6-17d62a735065" containerName="ovsdb-server" containerID="cri-o://ab5fdb8f04b86479e7de65efd45e1b3abeeeef9346b04a04ab27c54b7f1b81fc" Mar 10 19:13:58 crc kubenswrapper[4861]: I0310 19:13:58.733528 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovn-controller-ovs-cw7x8" podUID="2f72ec66-0d64-4a5f-b1c6-17d62a735065" containerName="ovsdb-server" containerID="cri-o://ab5fdb8f04b86479e7de65efd45e1b3abeeeef9346b04a04ab27c54b7f1b81fc" gracePeriod=30 Mar 10 19:13:58 crc kubenswrapper[4861]: I0310 19:13:58.735430 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-lrz47" event={"ID":"ee15a5af-fb3a-45fc-bfa1-b9eb45418a32","Type":"ContainerStarted","Data":"0f34857216274060cf1bce993781900bce824c3dc69d0253f4e5af38eadacf60"} Mar 10 19:13:58 crc kubenswrapper[4861]: E0310 
19:13:58.735975 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fd5a7b828f5536384606fbd147ae01f6df41721f8f15e86eef7446e25089e846\": container with ID starting with fd5a7b828f5536384606fbd147ae01f6df41721f8f15e86eef7446e25089e846 not found: ID does not exist" containerID="fd5a7b828f5536384606fbd147ae01f6df41721f8f15e86eef7446e25089e846" Mar 10 19:13:58 crc kubenswrapper[4861]: I0310 19:13:58.736006 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fd5a7b828f5536384606fbd147ae01f6df41721f8f15e86eef7446e25089e846"} err="failed to get container status \"fd5a7b828f5536384606fbd147ae01f6df41721f8f15e86eef7446e25089e846\": rpc error: code = NotFound desc = could not find container \"fd5a7b828f5536384606fbd147ae01f6df41721f8f15e86eef7446e25089e846\": container with ID starting with fd5a7b828f5536384606fbd147ae01f6df41721f8f15e86eef7446e25089e846 not found: ID does not exist" Mar 10 19:13:58 crc kubenswrapper[4861]: I0310 19:13:58.736230 4861 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." 
pod="openstack/root-account-create-update-lrz47" secret="" err="secret \"galera-openstack-cell1-dockercfg-2xqvk\" not found" Mar 10 19:13:58 crc kubenswrapper[4861]: I0310 19:13:58.745363 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_bb836e5b-f1a1-4d7a-8de0-03cddd650c4a/ovsdbserver-sb/0.log" Mar 10 19:13:58 crc kubenswrapper[4861]: I0310 19:13:58.745425 4861 generic.go:334] "Generic (PLEG): container finished" podID="bb836e5b-f1a1-4d7a-8de0-03cddd650c4a" containerID="fe5ee7032ce5d91ee136401e0e5a8ee13f7696e2e7c6cc9ccb1c1348c985d7d0" exitCode=2 Mar 10 19:13:58 crc kubenswrapper[4861]: I0310 19:13:58.745449 4861 generic.go:334] "Generic (PLEG): container finished" podID="bb836e5b-f1a1-4d7a-8de0-03cddd650c4a" containerID="d0df70b068b7e2c89545406ff08c2cfd24ffaf8b8337d1abdd32af448ab278bc" exitCode=143 Mar 10 19:13:58 crc kubenswrapper[4861]: I0310 19:13:58.745515 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"bb836e5b-f1a1-4d7a-8de0-03cddd650c4a","Type":"ContainerDied","Data":"fe5ee7032ce5d91ee136401e0e5a8ee13f7696e2e7c6cc9ccb1c1348c985d7d0"} Mar 10 19:13:58 crc kubenswrapper[4861]: I0310 19:13:58.745554 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"bb836e5b-f1a1-4d7a-8de0-03cddd650c4a","Type":"ContainerDied","Data":"d0df70b068b7e2c89545406ff08c2cfd24ffaf8b8337d1abdd32af448ab278bc"} Mar 10 19:13:58 crc kubenswrapper[4861]: I0310 19:13:58.750764 4861 generic.go:334] "Generic (PLEG): container finished" podID="9140f7c5-893a-4128-85aa-2db96537b483" containerID="22089e443eba78df73bea89d1cfc14591cb3869b317c834f46d0e6f03d2fcf87" exitCode=143 Mar 10 19:13:58 crc kubenswrapper[4861]: I0310 19:13:58.750808 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-64c5bb5d74-nmtmm" 
event={"ID":"9140f7c5-893a-4128-85aa-2db96537b483","Type":"ContainerDied","Data":"22089e443eba78df73bea89d1cfc14591cb3869b317c834f46d0e6f03d2fcf87"} Mar 10 19:13:58 crc kubenswrapper[4861]: I0310 19:13:58.754992 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-proxy-6d6c7bd6d5-klrmq"] Mar 10 19:13:58 crc kubenswrapper[4861]: I0310 19:13:58.755214 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-proxy-6d6c7bd6d5-klrmq" podUID="97db979f-75cb-4e7e-9dc6-0c65f39fef8e" containerName="proxy-httpd" containerID="cri-o://985fbcd6585b2dfed67bfbcca47fd62b176667adcf86aa98e78a20b0b5c2fedd" gracePeriod=30 Mar 10 19:13:58 crc kubenswrapper[4861]: I0310 19:13:58.755568 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-proxy-6d6c7bd6d5-klrmq" podUID="97db979f-75cb-4e7e-9dc6-0c65f39fef8e" containerName="proxy-server" containerID="cri-o://a06874ab7d0cf39ee7cdf29289ca00ec76f54115d05d5a197cc6e53230b06010" gracePeriod=30 Mar 10 19:13:58 crc kubenswrapper[4861]: I0310 19:13:58.762330 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-db-create-gnw7n"] Mar 10 19:13:58 crc kubenswrapper[4861]: I0310 19:13:58.764869 4861 generic.go:334] "Generic (PLEG): container finished" podID="ef3a31e3-d3ba-4f5c-950a-1355bb61f657" containerID="90b5b8cd5ef23b88335ebf01c5c45b9491d2c08feb5b8b48a1120b2d60cb9a44" exitCode=143 Mar 10 19:13:58 crc kubenswrapper[4861]: I0310 19:13:58.764933 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"ef3a31e3-d3ba-4f5c-950a-1355bb61f657","Type":"ContainerDied","Data":"90b5b8cd5ef23b88335ebf01c5c45b9491d2c08feb5b8b48a1120b2d60cb9a44"} Mar 10 19:13:58 crc kubenswrapper[4861]: I0310 19:13:58.772984 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/openstack-cell1-galera-0" podUID="2683e959-ecff-478e-aa0a-acf18f482d39" 
containerName="galera" containerID="cri-o://2a8c0c46e768dafa2c342d24605ecb36c119cc70e7586a4c3f59339b29b329fe" gracePeriod=30 Mar 10 19:13:58 crc kubenswrapper[4861]: I0310 19:13:58.773429 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovn-controller-ovs-cw7x8" podUID="2f72ec66-0d64-4a5f-b1c6-17d62a735065" containerName="ovs-vswitchd" containerID="cri-o://3b23a95a1a463eb82f2475fdcea1a6070b6a64b1c3a03f9d3000d377b6e7bebb" gracePeriod=29 Mar 10 19:13:58 crc kubenswrapper[4861]: I0310 19:13:58.777473 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_533092ca-8a4b-4005-909c-32736cde1a1e/ovsdbserver-nb/0.log" Mar 10 19:13:58 crc kubenswrapper[4861]: I0310 19:13:58.777516 4861 generic.go:334] "Generic (PLEG): container finished" podID="533092ca-8a4b-4005-909c-32736cde1a1e" containerID="563ee7a1f5b83d7d42165f459dccecfc60f2f551d3f0e59ace85417509a8ec48" exitCode=2 Mar 10 19:13:58 crc kubenswrapper[4861]: I0310 19:13:58.777530 4861 generic.go:334] "Generic (PLEG): container finished" podID="533092ca-8a4b-4005-909c-32736cde1a1e" containerID="699e5be84dcd8a39bd7ffd4f5ee267f814286d3ca759b64a5eb0a24bee7e3fdd" exitCode=143 Mar 10 19:13:58 crc kubenswrapper[4861]: I0310 19:13:58.777566 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"533092ca-8a4b-4005-909c-32736cde1a1e","Type":"ContainerDied","Data":"563ee7a1f5b83d7d42165f459dccecfc60f2f551d3f0e59ace85417509a8ec48"} Mar 10 19:13:58 crc kubenswrapper[4861]: I0310 19:13:58.777590 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"533092ca-8a4b-4005-909c-32736cde1a1e","Type":"ContainerDied","Data":"699e5be84dcd8a39bd7ffd4f5ee267f814286d3ca759b64a5eb0a24bee7e3fdd"} Mar 10 19:13:58 crc kubenswrapper[4861]: I0310 19:13:58.779779 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-db-create-gnw7n"] Mar 10 19:13:58 crc 
kubenswrapper[4861]: E0310 19:13:58.784363 4861 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 10 19:13:58 crc kubenswrapper[4861]: container &Container{Name:mariadb-account-create-update,Image:quay.io/podified-antelope-centos9/openstack-mariadb@sha256:763d1f1e8a1cf877c151c59609960fd2fa29e7e50001f8818122a2d51878befa,Command:[/bin/sh -c #!/bin/bash Mar 10 19:13:58 crc kubenswrapper[4861]: Mar 10 19:13:58 crc kubenswrapper[4861]: MYSQL_REMOTE_HOST="" source /var/lib/operator-scripts/mysql_root_auth.sh Mar 10 19:13:58 crc kubenswrapper[4861]: Mar 10 19:13:58 crc kubenswrapper[4861]: export DatabasePassword=${DatabasePassword:?"Please specify a DatabasePassword variable."} Mar 10 19:13:58 crc kubenswrapper[4861]: Mar 10 19:13:58 crc kubenswrapper[4861]: MYSQL_CMD="mysql -h -u root -P 3306" Mar 10 19:13:58 crc kubenswrapper[4861]: Mar 10 19:13:58 crc kubenswrapper[4861]: if [ -n "" ]; then Mar 10 19:13:58 crc kubenswrapper[4861]: GRANT_DATABASE="" Mar 10 19:13:58 crc kubenswrapper[4861]: else Mar 10 19:13:58 crc kubenswrapper[4861]: GRANT_DATABASE="*" Mar 10 19:13:58 crc kubenswrapper[4861]: fi Mar 10 19:13:58 crc kubenswrapper[4861]: Mar 10 19:13:58 crc kubenswrapper[4861]: # going for maximum compatibility here: Mar 10 19:13:58 crc kubenswrapper[4861]: # 1. MySQL 8 no longer allows implicit create user when GRANT is used Mar 10 19:13:58 crc kubenswrapper[4861]: # 2. MariaDB has "CREATE OR REPLACE", but MySQL does not Mar 10 19:13:58 crc kubenswrapper[4861]: # 3. 
create user with CREATE but then do all password and TLS with ALTER to Mar 10 19:13:58 crc kubenswrapper[4861]: # support updates Mar 10 19:13:58 crc kubenswrapper[4861]: Mar 10 19:13:58 crc kubenswrapper[4861]: $MYSQL_CMD < logger="UnhandledError" Mar 10 19:13:58 crc kubenswrapper[4861]: E0310 19:13:58.786955 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mariadb-account-create-update\" with CreateContainerConfigError: \"secret \\\"openstack-cell1-mariadb-root-db-secret\\\" not found\"" pod="openstack/root-account-create-update-lrz47" podUID="ee15a5af-fb3a-45fc-bfa1-b9eb45418a32" Mar 10 19:13:58 crc kubenswrapper[4861]: I0310 19:13:58.788215 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Mar 10 19:13:58 crc kubenswrapper[4861]: I0310 19:13:58.788415 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="ca9800e2-ceed-4197-90f1-97d14c918e45" containerName="nova-metadata-log" containerID="cri-o://6d5f161e1ebe942db8f22888f7d4ee605ae096c419576acb0dffd5c4a5831534" gracePeriod=30 Mar 10 19:13:58 crc kubenswrapper[4861]: I0310 19:13:58.788534 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="ca9800e2-ceed-4197-90f1-97d14c918e45" containerName="nova-metadata-metadata" containerID="cri-o://2979a50f5f190d0793c2fc18e07aa06ac371e089c3304abe6dc08d185e172174" gracePeriod=30 Mar 10 19:13:58 crc kubenswrapper[4861]: I0310 19:13:58.791480 4861 reconciler_common.go:293] "Volume detached for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/d0c73a13-57f7-43aa-8e0a-ba36a3195653-ovs-rundir\") on node \"crc\" DevicePath \"\"" Mar 10 19:13:58 crc kubenswrapper[4861]: I0310 19:13:58.791502 4861 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d0c73a13-57f7-43aa-8e0a-ba36a3195653-config\") on node \"crc\" 
DevicePath \"\"" Mar 10 19:13:58 crc kubenswrapper[4861]: I0310 19:13:58.791511 4861 reconciler_common.go:293] "Volume detached for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/d0c73a13-57f7-43aa-8e0a-ba36a3195653-ovn-rundir\") on node \"crc\" DevicePath \"\"" Mar 10 19:13:58 crc kubenswrapper[4861]: I0310 19:13:58.791566 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hz75m\" (UniqueName: \"kubernetes.io/projected/d0c73a13-57f7-43aa-8e0a-ba36a3195653-kube-api-access-hz75m\") on node \"crc\" DevicePath \"\"" Mar 10 19:13:58 crc kubenswrapper[4861]: I0310 19:13:58.793598 4861 generic.go:334] "Generic (PLEG): container finished" podID="509298b8-3d6b-4182-b989-c25c4791ce6b" containerID="ad8cd63983d0c3e17993f8a5dd489855bb0d46375866f9a2ddec97fd0f615caf" exitCode=143 Mar 10 19:13:58 crc kubenswrapper[4861]: I0310 19:13:58.793631 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"509298b8-3d6b-4182-b989-c25c4791ce6b","Type":"ContainerDied","Data":"ad8cd63983d0c3e17993f8a5dd489855bb0d46375866f9a2ddec97fd0f615caf"} Mar 10 19:13:58 crc kubenswrapper[4861]: I0310 19:13:58.805892 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d0c73a13-57f7-43aa-8e0a-ba36a3195653-metrics-certs-tls-certs" (OuterVolumeSpecName: "metrics-certs-tls-certs") pod "d0c73a13-57f7-43aa-8e0a-ba36a3195653" (UID: "d0c73a13-57f7-43aa-8e0a-ba36a3195653"). InnerVolumeSpecName "metrics-certs-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 10 19:13:58 crc kubenswrapper[4861]: I0310 19:13:58.805956 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-f866-account-create-update-fl8b6"] Mar 10 19:13:58 crc kubenswrapper[4861]: I0310 19:13:58.815327 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d0c73a13-57f7-43aa-8e0a-ba36a3195653-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d0c73a13-57f7-43aa-8e0a-ba36a3195653" (UID: "d0c73a13-57f7-43aa-8e0a-ba36a3195653"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 10 19:13:58 crc kubenswrapper[4861]: I0310 19:13:58.817799 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-worker-7d79dbd48c-d74zl"] Mar 10 19:13:58 crc kubenswrapper[4861]: I0310 19:13:58.818058 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-worker-7d79dbd48c-d74zl" podUID="1dff60e3-9ca8-461b-8d7e-018b626677e8" containerName="barbican-worker-log" containerID="cri-o://3b70421bd87d6bdf27fe679226a122192d55a1ff889b9eda172d225605e8cb57" gracePeriod=30 Mar 10 19:13:58 crc kubenswrapper[4861]: I0310 19:13:58.818205 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-worker-7d79dbd48c-d74zl" podUID="1dff60e3-9ca8-461b-8d7e-018b626677e8" containerName="barbican-worker" containerID="cri-o://089b96e97216568cdb4fbb8c2c133afa10fecad9fb2b8caed329a4989628a54c" gracePeriod=30 Mar 10 19:13:58 crc kubenswrapper[4861]: I0310 19:13:58.822550 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-keystone-listener-7d66d9f78-7w6cc"] Mar 10 19:13:58 crc kubenswrapper[4861]: I0310 19:13:58.822779 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-keystone-listener-7d66d9f78-7w6cc" 
podUID="0fd5b5c7-5272-4d6a-b6ab-94ae5b644a3b" containerName="barbican-keystone-listener-log" containerID="cri-o://e7ad0e828f858766d2e6ad1bdb4f2618535e12c3dbd09fac481532edeab03184" gracePeriod=30 Mar 10 19:13:58 crc kubenswrapper[4861]: I0310 19:13:58.822837 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-keystone-listener-7d66d9f78-7w6cc" podUID="0fd5b5c7-5272-4d6a-b6ab-94ae5b644a3b" containerName="barbican-keystone-listener" containerID="cri-o://25d62dcf2f9ee02dabc824de069bb687d2e04cbd1aeea376d842ffcff19fc683" gracePeriod=30 Mar 10 19:13:58 crc kubenswrapper[4861]: I0310 19:13:58.844833 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-create-7ddxj"] Mar 10 19:13:58 crc kubenswrapper[4861]: I0310 19:13:58.848890 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-create-7ddxj"] Mar 10 19:13:58 crc kubenswrapper[4861]: I0310 19:13:58.857314 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-7890-account-create-update-z8mnl"] Mar 10 19:13:58 crc kubenswrapper[4861]: I0310 19:13:58.870135 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-5d84cf8948-mg4jb"] Mar 10 19:13:58 crc kubenswrapper[4861]: I0310 19:13:58.870396 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-5d84cf8948-mg4jb" podUID="79b1b96d-52e3-4a16-8fc1-d09188b5ebc1" containerName="barbican-api-log" containerID="cri-o://4b1f746cd725920aeee99b23e4c08bf7d8c523580d3e57266ad014c8ed8e2ed0" gracePeriod=30 Mar 10 19:13:58 crc kubenswrapper[4861]: I0310 19:13:58.870922 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-5d84cf8948-mg4jb" podUID="79b1b96d-52e3-4a16-8fc1-d09188b5ebc1" containerName="barbican-api" containerID="cri-o://93c0c7d20bdf5ba3484b74d8194ad89a2b4dad5778c0dc5d0876a0f04995cc86" gracePeriod=30 Mar 10 19:13:58 crc 
kubenswrapper[4861]: I0310 19:13:58.879138 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-db-create-9lzhz"] Mar 10 19:13:58 crc kubenswrapper[4861]: I0310 19:13:58.891842 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-db-create-9lzhz"] Mar 10 19:13:58 crc kubenswrapper[4861]: I0310 19:13:58.893343 4861 reconciler_common.go:293] "Volume detached for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/d0c73a13-57f7-43aa-8e0a-ba36a3195653-metrics-certs-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 10 19:13:58 crc kubenswrapper[4861]: I0310 19:13:58.893434 4861 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d0c73a13-57f7-43aa-8e0a-ba36a3195653-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 10 19:13:58 crc kubenswrapper[4861]: E0310 19:13:58.893994 4861 configmap.go:193] Couldn't get configMap openstack/openstack-cell1-scripts: configmap "openstack-cell1-scripts" not found Mar 10 19:13:58 crc kubenswrapper[4861]: E0310 19:13:58.894106 4861 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ee15a5af-fb3a-45fc-bfa1-b9eb45418a32-operator-scripts podName:ee15a5af-fb3a-45fc-bfa1-b9eb45418a32 nodeName:}" failed. No retries permitted until 2026-03-10 19:13:59.394090199 +0000 UTC m=+1583.157526149 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/ee15a5af-fb3a-45fc-bfa1-b9eb45418a32-operator-scripts") pod "root-account-create-update-lrz47" (UID: "ee15a5af-fb3a-45fc-bfa1-b9eb45418a32") : configmap "openstack-cell1-scripts" not found Mar 10 19:13:58 crc kubenswrapper[4861]: I0310 19:13:58.907938 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-lrz47"] Mar 10 19:13:58 crc kubenswrapper[4861]: I0310 19:13:58.928253 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Mar 10 19:13:58 crc kubenswrapper[4861]: I0310 19:13:58.928512 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="1b1af633-31f5-4658-bda4-fc9c010d6280" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://d0530b4bb88089a565eb73c29ff174432d92837c87c92e1b218d1e31d36d1ebd" gracePeriod=30 Mar 10 19:13:58 crc kubenswrapper[4861]: I0310 19:13:58.940909 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Mar 10 19:13:58 crc kubenswrapper[4861]: I0310 19:13:58.954762 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-cdqvp"] Mar 10 19:13:59 crc kubenswrapper[4861]: E0310 19:13:59.025119 4861 configmap.go:193] Couldn't get configMap openstack/rabbitmq-cell1-config-data: configmap "rabbitmq-cell1-config-data" not found Mar 10 19:13:59 crc kubenswrapper[4861]: E0310 19:13:59.025190 4861 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9fa4a97d-682a-40eb-93e0-5f5167ddb0a0-config-data podName:9fa4a97d-682a-40eb-93e0-5f5167ddb0a0 nodeName:}" failed. No retries permitted until 2026-03-10 19:14:01.025175171 +0000 UTC m=+1584.788611131 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/9fa4a97d-682a-40eb-93e0-5f5167ddb0a0-config-data") pod "rabbitmq-cell1-server-0" (UID: "9fa4a97d-682a-40eb-93e0-5f5167ddb0a0") : configmap "rabbitmq-cell1-config-data" not found Mar 10 19:13:59 crc kubenswrapper[4861]: I0310 19:13:59.035246 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-cell1-server-0" podUID="9fa4a97d-682a-40eb-93e0-5f5167ddb0a0" containerName="rabbitmq" containerID="cri-o://66a047f0fd7c331f85d8ce9716247a29e27e73b178644e553271a5aa131cdbac" gracePeriod=604800 Mar 10 19:13:59 crc kubenswrapper[4861]: E0310 19:13:59.049389 4861 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 10 19:13:59 crc kubenswrapper[4861]: container &Container{Name:mariadb-account-create-update,Image:quay.io/podified-antelope-centos9/openstack-mariadb@sha256:763d1f1e8a1cf877c151c59609960fd2fa29e7e50001f8818122a2d51878befa,Command:[/bin/sh -c #!/bin/bash Mar 10 19:13:59 crc kubenswrapper[4861]: Mar 10 19:13:59 crc kubenswrapper[4861]: MYSQL_REMOTE_HOST="" source /var/lib/operator-scripts/mysql_root_auth.sh Mar 10 19:13:59 crc kubenswrapper[4861]: Mar 10 19:13:59 crc kubenswrapper[4861]: export DatabasePassword=${DatabasePassword:?"Please specify a DatabasePassword variable."} Mar 10 19:13:59 crc kubenswrapper[4861]: Mar 10 19:13:59 crc kubenswrapper[4861]: MYSQL_CMD="mysql -h -u root -P 3306" Mar 10 19:13:59 crc kubenswrapper[4861]: Mar 10 19:13:59 crc kubenswrapper[4861]: if [ -n "barbican" ]; then Mar 10 19:13:59 crc kubenswrapper[4861]: GRANT_DATABASE="barbican" Mar 10 19:13:59 crc kubenswrapper[4861]: else Mar 10 19:13:59 crc kubenswrapper[4861]: GRANT_DATABASE="*" Mar 10 19:13:59 crc kubenswrapper[4861]: fi Mar 10 19:13:59 crc kubenswrapper[4861]: Mar 10 19:13:59 crc kubenswrapper[4861]: # going for maximum compatibility here: Mar 10 19:13:59 crc kubenswrapper[4861]: # 1. 
MySQL 8 no longer allows implicit create user when GRANT is used Mar 10 19:13:59 crc kubenswrapper[4861]: # 2. MariaDB has "CREATE OR REPLACE", but MySQL does not Mar 10 19:13:59 crc kubenswrapper[4861]: # 3. create user with CREATE but then do all password and TLS with ALTER to Mar 10 19:13:59 crc kubenswrapper[4861]: # support updates Mar 10 19:13:59 crc kubenswrapper[4861]: Mar 10 19:13:59 crc kubenswrapper[4861]: $MYSQL_CMD < logger="UnhandledError" Mar 10 19:13:59 crc kubenswrapper[4861]: E0310 19:13:59.051634 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mariadb-account-create-update\" with CreateContainerConfigError: \"secret \\\"barbican-db-secret\\\" not found\"" pod="openstack/barbican-e6bf-account-create-update-rgnpm" podUID="87dcb8b8-2ebd-44e9-a15f-e995495d8b32" Mar 10 19:13:59 crc kubenswrapper[4861]: I0310 19:13:59.052645 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="33d24034-d5c6-486f-a637-23724e8b5225" path="/var/lib/kubelet/pods/33d24034-d5c6-486f-a637-23724e8b5225/volumes" Mar 10 19:13:59 crc kubenswrapper[4861]: I0310 19:13:59.055745 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="37e93e0a-bc82-45fa-a340-a7eb189f2657" path="/var/lib/kubelet/pods/37e93e0a-bc82-45fa-a340-a7eb189f2657/volumes" Mar 10 19:13:59 crc kubenswrapper[4861]: I0310 19:13:59.056553 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="488ab37a-5f47-4ea7-a5d4-7ed0780f7d4b" path="/var/lib/kubelet/pods/488ab37a-5f47-4ea7-a5d4-7ed0780f7d4b/volumes" Mar 10 19:13:59 crc kubenswrapper[4861]: I0310 19:13:59.064851 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4b2a596e-34ef-40d8-abd9-a8145086a6b0" path="/var/lib/kubelet/pods/4b2a596e-34ef-40d8-abd9-a8145086a6b0/volumes" Mar 10 19:13:59 crc kubenswrapper[4861]: I0310 19:13:59.077141 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="5d48de82-400d-41d1-a054-f451486e0ff5" path="/var/lib/kubelet/pods/5d48de82-400d-41d1-a054-f451486e0ff5/volumes" Mar 10 19:13:59 crc kubenswrapper[4861]: E0310 19:13:59.084357 4861 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 10 19:13:59 crc kubenswrapper[4861]: container &Container{Name:mariadb-account-create-update,Image:quay.io/podified-antelope-centos9/openstack-mariadb@sha256:763d1f1e8a1cf877c151c59609960fd2fa29e7e50001f8818122a2d51878befa,Command:[/bin/sh -c #!/bin/bash Mar 10 19:13:59 crc kubenswrapper[4861]: Mar 10 19:13:59 crc kubenswrapper[4861]: MYSQL_REMOTE_HOST="" source /var/lib/operator-scripts/mysql_root_auth.sh Mar 10 19:13:59 crc kubenswrapper[4861]: Mar 10 19:13:59 crc kubenswrapper[4861]: export DatabasePassword=${DatabasePassword:?"Please specify a DatabasePassword variable."} Mar 10 19:13:59 crc kubenswrapper[4861]: Mar 10 19:13:59 crc kubenswrapper[4861]: MYSQL_CMD="mysql -h -u root -P 3306" Mar 10 19:13:59 crc kubenswrapper[4861]: Mar 10 19:13:59 crc kubenswrapper[4861]: if [ -n "nova_api" ]; then Mar 10 19:13:59 crc kubenswrapper[4861]: GRANT_DATABASE="nova_api" Mar 10 19:13:59 crc kubenswrapper[4861]: else Mar 10 19:13:59 crc kubenswrapper[4861]: GRANT_DATABASE="*" Mar 10 19:13:59 crc kubenswrapper[4861]: fi Mar 10 19:13:59 crc kubenswrapper[4861]: Mar 10 19:13:59 crc kubenswrapper[4861]: # going for maximum compatibility here: Mar 10 19:13:59 crc kubenswrapper[4861]: # 1. MySQL 8 no longer allows implicit create user when GRANT is used Mar 10 19:13:59 crc kubenswrapper[4861]: # 2. MariaDB has "CREATE OR REPLACE", but MySQL does not Mar 10 19:13:59 crc kubenswrapper[4861]: # 3. 
create user with CREATE but then do all password and TLS with ALTER to Mar 10 19:13:59 crc kubenswrapper[4861]: # support updates Mar 10 19:13:59 crc kubenswrapper[4861]: Mar 10 19:13:59 crc kubenswrapper[4861]: $MYSQL_CMD < logger="UnhandledError" Mar 10 19:13:59 crc kubenswrapper[4861]: E0310 19:13:59.087573 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mariadb-account-create-update\" with CreateContainerConfigError: \"secret \\\"nova-api-db-secret\\\" not found\"" pod="openstack/nova-api-52e5-account-create-update-q8wl5" podUID="1d0a3228-ad14-4cd8-8c5a-969bad245ecf" Mar 10 19:13:59 crc kubenswrapper[4861]: I0310 19:13:59.095465 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6369ade3-a8af-44b7-94be-736523f99512" path="/var/lib/kubelet/pods/6369ade3-a8af-44b7-94be-736523f99512/volumes" Mar 10 19:13:59 crc kubenswrapper[4861]: I0310 19:13:59.096781 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="63d6fcdc-e673-4207-8da8-3c3f3681bcaf" path="/var/lib/kubelet/pods/63d6fcdc-e673-4207-8da8-3c3f3681bcaf/volumes" Mar 10 19:13:59 crc kubenswrapper[4861]: I0310 19:13:59.097825 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6aa1372d-7e37-4d43-a96c-08fce6a5eaa1" path="/var/lib/kubelet/pods/6aa1372d-7e37-4d43-a96c-08fce6a5eaa1/volumes" Mar 10 19:13:59 crc kubenswrapper[4861]: I0310 19:13:59.098980 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6f6cb862-e812-41b0-bf94-1c3b5ebb51a4" path="/var/lib/kubelet/pods/6f6cb862-e812-41b0-bf94-1c3b5ebb51a4/volumes" Mar 10 19:13:59 crc kubenswrapper[4861]: I0310 19:13:59.099574 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7068adc7-5930-4999-99ec-eb8ced501cd2" path="/var/lib/kubelet/pods/7068adc7-5930-4999-99ec-eb8ced501cd2/volumes" Mar 10 19:13:59 crc kubenswrapper[4861]: I0310 19:13:59.106350 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod 
volumes dir" podUID="9e051309-5b38-4162-915b-7591f96ccddf" path="/var/lib/kubelet/pods/9e051309-5b38-4162-915b-7591f96ccddf/volumes" Mar 10 19:13:59 crc kubenswrapper[4861]: I0310 19:13:59.107328 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0377327-c5da-4e36-b9a3-462513bbd9d2" path="/var/lib/kubelet/pods/a0377327-c5da-4e36-b9a3-462513bbd9d2/volumes" Mar 10 19:13:59 crc kubenswrapper[4861]: I0310 19:13:59.107992 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a7cd3d55-5d1c-4697-ab9b-2e1f37bb40f5" path="/var/lib/kubelet/pods/a7cd3d55-5d1c-4697-ab9b-2e1f37bb40f5/volumes" Mar 10 19:13:59 crc kubenswrapper[4861]: I0310 19:13:59.108943 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ae63cfad-11fb-40ca-955e-2c445948a50c" path="/var/lib/kubelet/pods/ae63cfad-11fb-40ca-955e-2c445948a50c/volumes" Mar 10 19:13:59 crc kubenswrapper[4861]: I0310 19:13:59.110054 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c1597b9b-8273-4a5c-8f46-8a18587e059f" path="/var/lib/kubelet/pods/c1597b9b-8273-4a5c-8f46-8a18587e059f/volumes" Mar 10 19:13:59 crc kubenswrapper[4861]: I0310 19:13:59.115983 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cdee8a29-d4d1-461b-9eeb-5672c3a6e396" path="/var/lib/kubelet/pods/cdee8a29-d4d1-461b-9eeb-5672c3a6e396/volumes" Mar 10 19:13:59 crc kubenswrapper[4861]: I0310 19:13:59.117420 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ce1014a0-2a80-40e4-8e5c-ed810afd2320" path="/var/lib/kubelet/pods/ce1014a0-2a80-40e4-8e5c-ed810afd2320/volumes" Mar 10 19:13:59 crc kubenswrapper[4861]: I0310 19:13:59.118413 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e37688d9-015d-40a4-b3f3-606e4d4bdff4" path="/var/lib/kubelet/pods/e37688d9-015d-40a4-b3f3-606e4d4bdff4/volumes" Mar 10 19:13:59 crc kubenswrapper[4861]: I0310 19:13:59.124031 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod 
volumes dir" podUID="e735d2c0-7ac3-4706-b9cb-a53ada8a364b" path="/var/lib/kubelet/pods/e735d2c0-7ac3-4706-b9cb-a53ada8a364b/volumes" Mar 10 19:13:59 crc kubenswrapper[4861]: I0310 19:13:59.124756 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ee43121f-d7e2-4770-81c4-833fbbe0373f" path="/var/lib/kubelet/pods/ee43121f-d7e2-4770-81c4-833fbbe0373f/volumes" Mar 10 19:13:59 crc kubenswrapper[4861]: I0310 19:13:59.126626 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f3d9436f-f09c-45b4-945a-4b5f372724d6" path="/var/lib/kubelet/pods/f3d9436f-f09c-45b4-945a-4b5f372724d6/volumes" Mar 10 19:13:59 crc kubenswrapper[4861]: I0310 19:13:59.132399 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-cdqvp"] Mar 10 19:13:59 crc kubenswrapper[4861]: I0310 19:13:59.132487 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-0"] Mar 10 19:13:59 crc kubenswrapper[4861]: I0310 19:13:59.132505 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-m77bt"] Mar 10 19:13:59 crc kubenswrapper[4861]: I0310 19:13:59.132823 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-conductor-0" podUID="d523d4ef-a5fe-47c3-b174-b2aefc766755" containerName="nova-cell1-conductor-conductor" containerID="cri-o://3ba3330672986fc68b7b9918ccf2611ea231c5cd8a95c24b12a274707371affc" gracePeriod=30 Mar 10 19:13:59 crc kubenswrapper[4861]: I0310 19:13:59.142373 4861 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="0ba95f55-3cea-4f0b-8f09-c6b4027789f8" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.105:5671: connect: connection refused" Mar 10 19:13:59 crc kubenswrapper[4861]: I0310 19:13:59.161778 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-m77bt"] Mar 10 
19:13:59 crc kubenswrapper[4861]: I0310 19:13:59.173460 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-0"] Mar 10 19:13:59 crc kubenswrapper[4861]: I0310 19:13:59.173661 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell0-conductor-0" podUID="9f3ba611-9f83-40a6-9282-d4e3b0ccfbfd" containerName="nova-cell0-conductor-conductor" containerID="cri-o://0904b4b9e3cd9ce225079b0b912d86c870f80dd82c4739bd66db6fc72350b2da" gracePeriod=30 Mar 10 19:13:59 crc kubenswrapper[4861]: I0310 19:13:59.175694 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7749c44969-9z2z2" Mar 10 19:13:59 crc kubenswrapper[4861]: I0310 19:13:59.259153 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_bb836e5b-f1a1-4d7a-8de0-03cddd650c4a/ovsdbserver-sb/0.log" Mar 10 19:13:59 crc kubenswrapper[4861]: I0310 19:13:59.259392 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Mar 10 19:13:59 crc kubenswrapper[4861]: I0310 19:13:59.263539 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_533092ca-8a4b-4005-909c-32736cde1a1e/ovsdbserver-nb/0.log" Mar 10 19:13:59 crc kubenswrapper[4861]: I0310 19:13:59.263599 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-0" Mar 10 19:13:59 crc kubenswrapper[4861]: I0310 19:13:59.274314 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-e6bf-account-create-update-rgnpm"] Mar 10 19:13:59 crc kubenswrapper[4861]: I0310 19:13:59.314508 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-52e5-account-create-update-q8wl5"] Mar 10 19:13:59 crc kubenswrapper[4861]: E0310 19:13:59.325632 4861 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 10 19:13:59 crc kubenswrapper[4861]: container &Container{Name:mariadb-account-create-update,Image:quay.io/podified-antelope-centos9/openstack-mariadb@sha256:763d1f1e8a1cf877c151c59609960fd2fa29e7e50001f8818122a2d51878befa,Command:[/bin/sh -c #!/bin/bash Mar 10 19:13:59 crc kubenswrapper[4861]: Mar 10 19:13:59 crc kubenswrapper[4861]: MYSQL_REMOTE_HOST="" source /var/lib/operator-scripts/mysql_root_auth.sh Mar 10 19:13:59 crc kubenswrapper[4861]: Mar 10 19:13:59 crc kubenswrapper[4861]: export DatabasePassword=${DatabasePassword:?"Please specify a DatabasePassword variable."} Mar 10 19:13:59 crc kubenswrapper[4861]: Mar 10 19:13:59 crc kubenswrapper[4861]: MYSQL_CMD="mysql -h -u root -P 3306" Mar 10 19:13:59 crc kubenswrapper[4861]: Mar 10 19:13:59 crc kubenswrapper[4861]: if [ -n "nova_cell0" ]; then Mar 10 19:13:59 crc kubenswrapper[4861]: GRANT_DATABASE="nova_cell0" Mar 10 19:13:59 crc kubenswrapper[4861]: else Mar 10 19:13:59 crc kubenswrapper[4861]: GRANT_DATABASE="*" Mar 10 19:13:59 crc kubenswrapper[4861]: fi Mar 10 19:13:59 crc kubenswrapper[4861]: Mar 10 19:13:59 crc kubenswrapper[4861]: # going for maximum compatibility here: Mar 10 19:13:59 crc kubenswrapper[4861]: # 1. MySQL 8 no longer allows implicit create user when GRANT is used Mar 10 19:13:59 crc kubenswrapper[4861]: # 2. MariaDB has "CREATE OR REPLACE", but MySQL does not Mar 10 19:13:59 crc kubenswrapper[4861]: # 3. 
create user with CREATE but then do all password and TLS with ALTER to Mar 10 19:13:59 crc kubenswrapper[4861]: # support updates Mar 10 19:13:59 crc kubenswrapper[4861]: Mar 10 19:13:59 crc kubenswrapper[4861]: $MYSQL_CMD < logger="UnhandledError" Mar 10 19:13:59 crc kubenswrapper[4861]: E0310 19:13:59.326799 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mariadb-account-create-update\" with CreateContainerConfigError: \"secret \\\"nova-cell0-db-secret\\\" not found\"" pod="openstack/nova-cell0-cf7a-account-create-update-wf5cp" podUID="b7c0d25e-1650-4ce9-8cb7-71d8d9dceba1" Mar 10 19:13:59 crc kubenswrapper[4861]: W0310 19:13:59.327138 4861 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddbb40fdc_8e7e_4a1e_b7a0_5456081ba068.slice/crio-1c636fbe64e078dd81aa51e09738dd68b20fbfa77c0b920f49c0af409fd60677 WatchSource:0}: Error finding container 1c636fbe64e078dd81aa51e09738dd68b20fbfa77c0b920f49c0af409fd60677: Status 404 returned error can't find the container with id 1c636fbe64e078dd81aa51e09738dd68b20fbfa77c0b920f49c0af409fd60677 Mar 10 19:13:59 crc kubenswrapper[4861]: I0310 19:13:59.348677 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/bb836e5b-f1a1-4d7a-8de0-03cddd650c4a-metrics-certs-tls-certs\") pod \"bb836e5b-f1a1-4d7a-8de0-03cddd650c4a\" (UID: \"bb836e5b-f1a1-4d7a-8de0-03cddd650c4a\") " Mar 10 19:13:59 crc kubenswrapper[4861]: I0310 19:13:59.348751 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/187a3484-7a9d-499a-91d0-1867ed682d05-config\") pod \"187a3484-7a9d-499a-91d0-1867ed682d05\" (UID: \"187a3484-7a9d-499a-91d0-1867ed682d05\") " Mar 10 19:13:59 crc kubenswrapper[4861]: I0310 19:13:59.348818 4861 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/187a3484-7a9d-499a-91d0-1867ed682d05-ovsdbserver-sb\") pod \"187a3484-7a9d-499a-91d0-1867ed682d05\" (UID: \"187a3484-7a9d-499a-91d0-1867ed682d05\") " Mar 10 19:13:59 crc kubenswrapper[4861]: I0310 19:13:59.348865 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/187a3484-7a9d-499a-91d0-1867ed682d05-dns-swift-storage-0\") pod \"187a3484-7a9d-499a-91d0-1867ed682d05\" (UID: \"187a3484-7a9d-499a-91d0-1867ed682d05\") " Mar 10 19:13:59 crc kubenswrapper[4861]: I0310 19:13:59.348907 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/bb836e5b-f1a1-4d7a-8de0-03cddd650c4a-ovsdb-rundir\") pod \"bb836e5b-f1a1-4d7a-8de0-03cddd650c4a\" (UID: \"bb836e5b-f1a1-4d7a-8de0-03cddd650c4a\") " Mar 10 19:13:59 crc kubenswrapper[4861]: I0310 19:13:59.348941 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bb836e5b-f1a1-4d7a-8de0-03cddd650c4a-config\") pod \"bb836e5b-f1a1-4d7a-8de0-03cddd650c4a\" (UID: \"bb836e5b-f1a1-4d7a-8de0-03cddd650c4a\") " Mar 10 19:13:59 crc kubenswrapper[4861]: I0310 19:13:59.349000 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb836e5b-f1a1-4d7a-8de0-03cddd650c4a-combined-ca-bundle\") pod \"bb836e5b-f1a1-4d7a-8de0-03cddd650c4a\" (UID: \"bb836e5b-f1a1-4d7a-8de0-03cddd650c4a\") " Mar 10 19:13:59 crc kubenswrapper[4861]: I0310 19:13:59.349056 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndbcluster-sb-etc-ovn\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"bb836e5b-f1a1-4d7a-8de0-03cddd650c4a\" (UID: 
\"bb836e5b-f1a1-4d7a-8de0-03cddd650c4a\") " Mar 10 19:13:59 crc kubenswrapper[4861]: I0310 19:13:59.349087 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/187a3484-7a9d-499a-91d0-1867ed682d05-ovsdbserver-nb\") pod \"187a3484-7a9d-499a-91d0-1867ed682d05\" (UID: \"187a3484-7a9d-499a-91d0-1867ed682d05\") " Mar 10 19:13:59 crc kubenswrapper[4861]: I0310 19:13:59.349115 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/bb836e5b-f1a1-4d7a-8de0-03cddd650c4a-scripts\") pod \"bb836e5b-f1a1-4d7a-8de0-03cddd650c4a\" (UID: \"bb836e5b-f1a1-4d7a-8de0-03cddd650c4a\") " Mar 10 19:13:59 crc kubenswrapper[4861]: I0310 19:13:59.349172 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/187a3484-7a9d-499a-91d0-1867ed682d05-dns-svc\") pod \"187a3484-7a9d-499a-91d0-1867ed682d05\" (UID: \"187a3484-7a9d-499a-91d0-1867ed682d05\") " Mar 10 19:13:59 crc kubenswrapper[4861]: I0310 19:13:59.349260 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f75q9\" (UniqueName: \"kubernetes.io/projected/bb836e5b-f1a1-4d7a-8de0-03cddd650c4a-kube-api-access-f75q9\") pod \"bb836e5b-f1a1-4d7a-8de0-03cddd650c4a\" (UID: \"bb836e5b-f1a1-4d7a-8de0-03cddd650c4a\") " Mar 10 19:13:59 crc kubenswrapper[4861]: I0310 19:13:59.349285 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/bb836e5b-f1a1-4d7a-8de0-03cddd650c4a-ovsdbserver-sb-tls-certs\") pod \"bb836e5b-f1a1-4d7a-8de0-03cddd650c4a\" (UID: \"bb836e5b-f1a1-4d7a-8de0-03cddd650c4a\") " Mar 10 19:13:59 crc kubenswrapper[4861]: I0310 19:13:59.349328 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jdvpk\" 
(UniqueName: \"kubernetes.io/projected/187a3484-7a9d-499a-91d0-1867ed682d05-kube-api-access-jdvpk\") pod \"187a3484-7a9d-499a-91d0-1867ed682d05\" (UID: \"187a3484-7a9d-499a-91d0-1867ed682d05\") " Mar 10 19:13:59 crc kubenswrapper[4861]: E0310 19:13:59.350305 4861 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 10 19:13:59 crc kubenswrapper[4861]: container &Container{Name:mariadb-account-create-update,Image:quay.io/podified-antelope-centos9/openstack-mariadb@sha256:763d1f1e8a1cf877c151c59609960fd2fa29e7e50001f8818122a2d51878befa,Command:[/bin/sh -c #!/bin/bash Mar 10 19:13:59 crc kubenswrapper[4861]: Mar 10 19:13:59 crc kubenswrapper[4861]: MYSQL_REMOTE_HOST="" source /var/lib/operator-scripts/mysql_root_auth.sh Mar 10 19:13:59 crc kubenswrapper[4861]: Mar 10 19:13:59 crc kubenswrapper[4861]: export DatabasePassword=${DatabasePassword:?"Please specify a DatabasePassword variable."} Mar 10 19:13:59 crc kubenswrapper[4861]: Mar 10 19:13:59 crc kubenswrapper[4861]: MYSQL_CMD="mysql -h -u root -P 3306" Mar 10 19:13:59 crc kubenswrapper[4861]: Mar 10 19:13:59 crc kubenswrapper[4861]: if [ -n "nova_cell1" ]; then Mar 10 19:13:59 crc kubenswrapper[4861]: GRANT_DATABASE="nova_cell1" Mar 10 19:13:59 crc kubenswrapper[4861]: else Mar 10 19:13:59 crc kubenswrapper[4861]: GRANT_DATABASE="*" Mar 10 19:13:59 crc kubenswrapper[4861]: fi Mar 10 19:13:59 crc kubenswrapper[4861]: Mar 10 19:13:59 crc kubenswrapper[4861]: # going for maximum compatibility here: Mar 10 19:13:59 crc kubenswrapper[4861]: # 1. MySQL 8 no longer allows implicit create user when GRANT is used Mar 10 19:13:59 crc kubenswrapper[4861]: # 2. MariaDB has "CREATE OR REPLACE", but MySQL does not Mar 10 19:13:59 crc kubenswrapper[4861]: # 3. 
create user with CREATE but then do all password and TLS with ALTER to Mar 10 19:13:59 crc kubenswrapper[4861]: # support updates Mar 10 19:13:59 crc kubenswrapper[4861]: Mar 10 19:13:59 crc kubenswrapper[4861]: $MYSQL_CMD < logger="UnhandledError" Mar 10 19:13:59 crc kubenswrapper[4861]: I0310 19:13:59.350351 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bb836e5b-f1a1-4d7a-8de0-03cddd650c4a-ovsdb-rundir" (OuterVolumeSpecName: "ovsdb-rundir") pod "bb836e5b-f1a1-4d7a-8de0-03cddd650c4a" (UID: "bb836e5b-f1a1-4d7a-8de0-03cddd650c4a"). InnerVolumeSpecName "ovsdb-rundir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 10 19:13:59 crc kubenswrapper[4861]: I0310 19:13:59.351268 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bb836e5b-f1a1-4d7a-8de0-03cddd650c4a-scripts" (OuterVolumeSpecName: "scripts") pod "bb836e5b-f1a1-4d7a-8de0-03cddd650c4a" (UID: "bb836e5b-f1a1-4d7a-8de0-03cddd650c4a"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 10 19:13:59 crc kubenswrapper[4861]: I0310 19:13:59.351228 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bb836e5b-f1a1-4d7a-8de0-03cddd650c4a-config" (OuterVolumeSpecName: "config") pod "bb836e5b-f1a1-4d7a-8de0-03cddd650c4a" (UID: "bb836e5b-f1a1-4d7a-8de0-03cddd650c4a"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 10 19:13:59 crc kubenswrapper[4861]: E0310 19:13:59.352852 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mariadb-account-create-update\" with CreateContainerConfigError: \"secret \\\"nova-cell1-db-secret\\\" not found\"" pod="openstack/nova-cell1-f866-account-create-update-fl8b6" podUID="dbb40fdc-8e7e-4a1e-b7a0-5456081ba068" Mar 10 19:13:59 crc kubenswrapper[4861]: I0310 19:13:59.356496 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-metrics-8h5gk"] Mar 10 19:13:59 crc kubenswrapper[4861]: I0310 19:13:59.359863 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage04-crc" (OuterVolumeSpecName: "ovndbcluster-sb-etc-ovn") pod "bb836e5b-f1a1-4d7a-8de0-03cddd650c4a" (UID: "bb836e5b-f1a1-4d7a-8de0-03cddd650c4a"). InnerVolumeSpecName "local-storage04-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Mar 10 19:13:59 crc kubenswrapper[4861]: I0310 19:13:59.359925 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bb836e5b-f1a1-4d7a-8de0-03cddd650c4a-kube-api-access-f75q9" (OuterVolumeSpecName: "kube-api-access-f75q9") pod "bb836e5b-f1a1-4d7a-8de0-03cddd650c4a" (UID: "bb836e5b-f1a1-4d7a-8de0-03cddd650c4a"). InnerVolumeSpecName "kube-api-access-f75q9". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 10 19:13:59 crc kubenswrapper[4861]: I0310 19:13:59.359969 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/187a3484-7a9d-499a-91d0-1867ed682d05-kube-api-access-jdvpk" (OuterVolumeSpecName: "kube-api-access-jdvpk") pod "187a3484-7a9d-499a-91d0-1867ed682d05" (UID: "187a3484-7a9d-499a-91d0-1867ed682d05"). InnerVolumeSpecName "kube-api-access-jdvpk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 10 19:13:59 crc kubenswrapper[4861]: I0310 19:13:59.362389 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-metrics-8h5gk"] Mar 10 19:13:59 crc kubenswrapper[4861]: I0310 19:13:59.382426 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Mar 10 19:13:59 crc kubenswrapper[4861]: I0310 19:13:59.409827 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/187a3484-7a9d-499a-91d0-1867ed682d05-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "187a3484-7a9d-499a-91d0-1867ed682d05" (UID: "187a3484-7a9d-499a-91d0-1867ed682d05"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 10 19:13:59 crc kubenswrapper[4861]: I0310 19:13:59.441890 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bb836e5b-f1a1-4d7a-8de0-03cddd650c4a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "bb836e5b-f1a1-4d7a-8de0-03cddd650c4a" (UID: "bb836e5b-f1a1-4d7a-8de0-03cddd650c4a"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 10 19:13:59 crc kubenswrapper[4861]: I0310 19:13:59.452942 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/533092ca-8a4b-4005-909c-32736cde1a1e-config\") pod \"533092ca-8a4b-4005-909c-32736cde1a1e\" (UID: \"533092ca-8a4b-4005-909c-32736cde1a1e\") " Mar 10 19:13:59 crc kubenswrapper[4861]: I0310 19:13:59.452981 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/533092ca-8a4b-4005-909c-32736cde1a1e-scripts\") pod \"533092ca-8a4b-4005-909c-32736cde1a1e\" (UID: \"533092ca-8a4b-4005-909c-32736cde1a1e\") " Mar 10 19:13:59 crc kubenswrapper[4861]: I0310 19:13:59.453039 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/533092ca-8a4b-4005-909c-32736cde1a1e-ovsdbserver-nb-tls-certs\") pod \"533092ca-8a4b-4005-909c-32736cde1a1e\" (UID: \"533092ca-8a4b-4005-909c-32736cde1a1e\") " Mar 10 19:13:59 crc kubenswrapper[4861]: I0310 19:13:59.453072 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kt8qv\" (UniqueName: \"kubernetes.io/projected/533092ca-8a4b-4005-909c-32736cde1a1e-kube-api-access-kt8qv\") pod \"533092ca-8a4b-4005-909c-32736cde1a1e\" (UID: \"533092ca-8a4b-4005-909c-32736cde1a1e\") " Mar 10 19:13:59 crc kubenswrapper[4861]: I0310 19:13:59.453278 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/533092ca-8a4b-4005-909c-32736cde1a1e-ovsdb-rundir\") pod \"533092ca-8a4b-4005-909c-32736cde1a1e\" (UID: \"533092ca-8a4b-4005-909c-32736cde1a1e\") " Mar 10 19:13:59 crc kubenswrapper[4861]: I0310 19:13:59.453299 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/533092ca-8a4b-4005-909c-32736cde1a1e-combined-ca-bundle\") pod \"533092ca-8a4b-4005-909c-32736cde1a1e\" (UID: \"533092ca-8a4b-4005-909c-32736cde1a1e\") " Mar 10 19:13:59 crc kubenswrapper[4861]: I0310 19:13:59.453506 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/533092ca-8a4b-4005-909c-32736cde1a1e-metrics-certs-tls-certs\") pod \"533092ca-8a4b-4005-909c-32736cde1a1e\" (UID: \"533092ca-8a4b-4005-909c-32736cde1a1e\") " Mar 10 19:13:59 crc kubenswrapper[4861]: I0310 19:13:59.453528 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndbcluster-nb-etc-ovn\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"533092ca-8a4b-4005-909c-32736cde1a1e\" (UID: \"533092ca-8a4b-4005-909c-32736cde1a1e\") " Mar 10 19:13:59 crc kubenswrapper[4861]: I0310 19:13:59.453929 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jdvpk\" (UniqueName: \"kubernetes.io/projected/187a3484-7a9d-499a-91d0-1867ed682d05-kube-api-access-jdvpk\") on node \"crc\" DevicePath \"\"" Mar 10 19:13:59 crc kubenswrapper[4861]: I0310 19:13:59.453947 4861 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/187a3484-7a9d-499a-91d0-1867ed682d05-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Mar 10 19:13:59 crc kubenswrapper[4861]: I0310 19:13:59.453956 4861 reconciler_common.go:293] "Volume detached for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/bb836e5b-f1a1-4d7a-8de0-03cddd650c4a-ovsdb-rundir\") on node \"crc\" DevicePath \"\"" Mar 10 19:13:59 crc kubenswrapper[4861]: I0310 19:13:59.453967 4861 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bb836e5b-f1a1-4d7a-8de0-03cddd650c4a-config\") on node \"crc\" DevicePath \"\"" Mar 10 19:13:59 crc kubenswrapper[4861]: 
I0310 19:13:59.453976 4861 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb836e5b-f1a1-4d7a-8de0-03cddd650c4a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 10 19:13:59 crc kubenswrapper[4861]: I0310 19:13:59.453993 4861 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" " Mar 10 19:13:59 crc kubenswrapper[4861]: I0310 19:13:59.454004 4861 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/bb836e5b-f1a1-4d7a-8de0-03cddd650c4a-scripts\") on node \"crc\" DevicePath \"\"" Mar 10 19:13:59 crc kubenswrapper[4861]: I0310 19:13:59.454013 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f75q9\" (UniqueName: \"kubernetes.io/projected/bb836e5b-f1a1-4d7a-8de0-03cddd650c4a-kube-api-access-f75q9\") on node \"crc\" DevicePath \"\"" Mar 10 19:13:59 crc kubenswrapper[4861]: I0310 19:13:59.455309 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/187a3484-7a9d-499a-91d0-1867ed682d05-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "187a3484-7a9d-499a-91d0-1867ed682d05" (UID: "187a3484-7a9d-499a-91d0-1867ed682d05"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 10 19:13:59 crc kubenswrapper[4861]: I0310 19:13:59.455894 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/533092ca-8a4b-4005-909c-32736cde1a1e-ovsdb-rundir" (OuterVolumeSpecName: "ovsdb-rundir") pod "533092ca-8a4b-4005-909c-32736cde1a1e" (UID: "533092ca-8a4b-4005-909c-32736cde1a1e"). InnerVolumeSpecName "ovsdb-rundir". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 10 19:13:59 crc kubenswrapper[4861]: I0310 19:13:59.456270 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/533092ca-8a4b-4005-909c-32736cde1a1e-config" (OuterVolumeSpecName: "config") pod "533092ca-8a4b-4005-909c-32736cde1a1e" (UID: "533092ca-8a4b-4005-909c-32736cde1a1e"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 10 19:13:59 crc kubenswrapper[4861]: I0310 19:13:59.456935 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/533092ca-8a4b-4005-909c-32736cde1a1e-scripts" (OuterVolumeSpecName: "scripts") pod "533092ca-8a4b-4005-909c-32736cde1a1e" (UID: "533092ca-8a4b-4005-909c-32736cde1a1e"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 10 19:13:59 crc kubenswrapper[4861]: E0310 19:13:59.458438 4861 configmap.go:193] Couldn't get configMap openstack/openstack-cell1-scripts: configmap "openstack-cell1-scripts" not found Mar 10 19:13:59 crc kubenswrapper[4861]: E0310 19:13:59.458528 4861 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ee15a5af-fb3a-45fc-bfa1-b9eb45418a32-operator-scripts podName:ee15a5af-fb3a-45fc-bfa1-b9eb45418a32 nodeName:}" failed. No retries permitted until 2026-03-10 19:14:00.458509414 +0000 UTC m=+1584.221945374 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/ee15a5af-fb3a-45fc-bfa1-b9eb45418a32-operator-scripts") pod "root-account-create-update-lrz47" (UID: "ee15a5af-fb3a-45fc-bfa1-b9eb45418a32") : configmap "openstack-cell1-scripts" not found Mar 10 19:13:59 crc kubenswrapper[4861]: I0310 19:13:59.466127 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/187a3484-7a9d-499a-91d0-1867ed682d05-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "187a3484-7a9d-499a-91d0-1867ed682d05" (UID: "187a3484-7a9d-499a-91d0-1867ed682d05"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 10 19:13:59 crc kubenswrapper[4861]: I0310 19:13:59.466562 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cf7a-account-create-update-wf5cp"] Mar 10 19:13:59 crc kubenswrapper[4861]: I0310 19:13:59.469277 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage12-crc" (OuterVolumeSpecName: "ovndbcluster-nb-etc-ovn") pod "533092ca-8a4b-4005-909c-32736cde1a1e" (UID: "533092ca-8a4b-4005-909c-32736cde1a1e"). InnerVolumeSpecName "local-storage12-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Mar 10 19:13:59 crc kubenswrapper[4861]: I0310 19:13:59.477718 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/187a3484-7a9d-499a-91d0-1867ed682d05-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "187a3484-7a9d-499a-91d0-1867ed682d05" (UID: "187a3484-7a9d-499a-91d0-1867ed682d05"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 10 19:13:59 crc kubenswrapper[4861]: I0310 19:13:59.490052 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-f866-account-create-update-fl8b6"] Mar 10 19:13:59 crc kubenswrapper[4861]: I0310 19:13:59.506432 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/533092ca-8a4b-4005-909c-32736cde1a1e-kube-api-access-kt8qv" (OuterVolumeSpecName: "kube-api-access-kt8qv") pod "533092ca-8a4b-4005-909c-32736cde1a1e" (UID: "533092ca-8a4b-4005-909c-32736cde1a1e"). InnerVolumeSpecName "kube-api-access-kt8qv". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 10 19:13:59 crc kubenswrapper[4861]: I0310 19:13:59.509599 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/533092ca-8a4b-4005-909c-32736cde1a1e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "533092ca-8a4b-4005-909c-32736cde1a1e" (UID: "533092ca-8a4b-4005-909c-32736cde1a1e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 10 19:13:59 crc kubenswrapper[4861]: I0310 19:13:59.521927 4861 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage04-crc" (UniqueName: "kubernetes.io/local-volume/local-storage04-crc") on node "crc" Mar 10 19:13:59 crc kubenswrapper[4861]: I0310 19:13:59.532178 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bb836e5b-f1a1-4d7a-8de0-03cddd650c4a-metrics-certs-tls-certs" (OuterVolumeSpecName: "metrics-certs-tls-certs") pod "bb836e5b-f1a1-4d7a-8de0-03cddd650c4a" (UID: "bb836e5b-f1a1-4d7a-8de0-03cddd650c4a"). InnerVolumeSpecName "metrics-certs-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 10 19:13:59 crc kubenswrapper[4861]: I0310 19:13:59.545907 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bb836e5b-f1a1-4d7a-8de0-03cddd650c4a-ovsdbserver-sb-tls-certs" (OuterVolumeSpecName: "ovsdbserver-sb-tls-certs") pod "bb836e5b-f1a1-4d7a-8de0-03cddd650c4a" (UID: "bb836e5b-f1a1-4d7a-8de0-03cddd650c4a"). InnerVolumeSpecName "ovsdbserver-sb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 10 19:13:59 crc kubenswrapper[4861]: I0310 19:13:59.547223 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/187a3484-7a9d-499a-91d0-1867ed682d05-config" (OuterVolumeSpecName: "config") pod "187a3484-7a9d-499a-91d0-1867ed682d05" (UID: "187a3484-7a9d-499a-91d0-1867ed682d05"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 10 19:13:59 crc kubenswrapper[4861]: I0310 19:13:59.555294 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/9007f85d-dd41-49ed-9a6f-c2b09b26fad2-openstack-config\") pod \"9007f85d-dd41-49ed-9a6f-c2b09b26fad2\" (UID: \"9007f85d-dd41-49ed-9a6f-c2b09b26fad2\") " Mar 10 19:13:59 crc kubenswrapper[4861]: I0310 19:13:59.555343 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9007f85d-dd41-49ed-9a6f-c2b09b26fad2-combined-ca-bundle\") pod \"9007f85d-dd41-49ed-9a6f-c2b09b26fad2\" (UID: \"9007f85d-dd41-49ed-9a6f-c2b09b26fad2\") " Mar 10 19:13:59 crc kubenswrapper[4861]: I0310 19:13:59.555394 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/9007f85d-dd41-49ed-9a6f-c2b09b26fad2-openstack-config-secret\") pod \"9007f85d-dd41-49ed-9a6f-c2b09b26fad2\" (UID: 
\"9007f85d-dd41-49ed-9a6f-c2b09b26fad2\") " Mar 10 19:13:59 crc kubenswrapper[4861]: I0310 19:13:59.555420 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kpxz8\" (UniqueName: \"kubernetes.io/projected/9007f85d-dd41-49ed-9a6f-c2b09b26fad2-kube-api-access-kpxz8\") pod \"9007f85d-dd41-49ed-9a6f-c2b09b26fad2\" (UID: \"9007f85d-dd41-49ed-9a6f-c2b09b26fad2\") " Mar 10 19:13:59 crc kubenswrapper[4861]: I0310 19:13:59.555786 4861 reconciler_common.go:293] "Volume detached for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/533092ca-8a4b-4005-909c-32736cde1a1e-ovsdb-rundir\") on node \"crc\" DevicePath \"\"" Mar 10 19:13:59 crc kubenswrapper[4861]: I0310 19:13:59.555797 4861 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/533092ca-8a4b-4005-909c-32736cde1a1e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 10 19:13:59 crc kubenswrapper[4861]: I0310 19:13:59.555806 4861 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/bb836e5b-f1a1-4d7a-8de0-03cddd650c4a-ovsdbserver-sb-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 10 19:13:59 crc kubenswrapper[4861]: I0310 19:13:59.555817 4861 reconciler_common.go:293] "Volume detached for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/bb836e5b-f1a1-4d7a-8de0-03cddd650c4a-metrics-certs-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 10 19:13:59 crc kubenswrapper[4861]: I0310 19:13:59.555825 4861 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/187a3484-7a9d-499a-91d0-1867ed682d05-config\") on node \"crc\" DevicePath \"\"" Mar 10 19:13:59 crc kubenswrapper[4861]: I0310 19:13:59.555835 4861 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: 
\"kubernetes.io/configmap/187a3484-7a9d-499a-91d0-1867ed682d05-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Mar 10 19:13:59 crc kubenswrapper[4861]: I0310 19:13:59.555852 4861 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") on node \"crc\" " Mar 10 19:13:59 crc kubenswrapper[4861]: I0310 19:13:59.555861 4861 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/533092ca-8a4b-4005-909c-32736cde1a1e-config\") on node \"crc\" DevicePath \"\"" Mar 10 19:13:59 crc kubenswrapper[4861]: I0310 19:13:59.555871 4861 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/533092ca-8a4b-4005-909c-32736cde1a1e-scripts\") on node \"crc\" DevicePath \"\"" Mar 10 19:13:59 crc kubenswrapper[4861]: I0310 19:13:59.555879 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kt8qv\" (UniqueName: \"kubernetes.io/projected/533092ca-8a4b-4005-909c-32736cde1a1e-kube-api-access-kt8qv\") on node \"crc\" DevicePath \"\"" Mar 10 19:13:59 crc kubenswrapper[4861]: I0310 19:13:59.555887 4861 reconciler_common.go:293] "Volume detached for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" DevicePath \"\"" Mar 10 19:13:59 crc kubenswrapper[4861]: I0310 19:13:59.555895 4861 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/187a3484-7a9d-499a-91d0-1867ed682d05-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Mar 10 19:13:59 crc kubenswrapper[4861]: I0310 19:13:59.555903 4861 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/187a3484-7a9d-499a-91d0-1867ed682d05-dns-svc\") on node \"crc\" DevicePath \"\"" Mar 10 19:13:59 crc kubenswrapper[4861]: I0310 19:13:59.562103 4861 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/533092ca-8a4b-4005-909c-32736cde1a1e-ovsdbserver-nb-tls-certs" (OuterVolumeSpecName: "ovsdbserver-nb-tls-certs") pod "533092ca-8a4b-4005-909c-32736cde1a1e" (UID: "533092ca-8a4b-4005-909c-32736cde1a1e"). InnerVolumeSpecName "ovsdbserver-nb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 10 19:13:59 crc kubenswrapper[4861]: I0310 19:13:59.569961 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9007f85d-dd41-49ed-9a6f-c2b09b26fad2-kube-api-access-kpxz8" (OuterVolumeSpecName: "kube-api-access-kpxz8") pod "9007f85d-dd41-49ed-9a6f-c2b09b26fad2" (UID: "9007f85d-dd41-49ed-9a6f-c2b09b26fad2"). InnerVolumeSpecName "kube-api-access-kpxz8". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 10 19:13:59 crc kubenswrapper[4861]: I0310 19:13:59.580601 4861 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage12-crc" (UniqueName: "kubernetes.io/local-volume/local-storage12-crc") on node "crc" Mar 10 19:13:59 crc kubenswrapper[4861]: I0310 19:13:59.587843 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/533092ca-8a4b-4005-909c-32736cde1a1e-metrics-certs-tls-certs" (OuterVolumeSpecName: "metrics-certs-tls-certs") pod "533092ca-8a4b-4005-909c-32736cde1a1e" (UID: "533092ca-8a4b-4005-909c-32736cde1a1e"). InnerVolumeSpecName "metrics-certs-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 10 19:13:59 crc kubenswrapper[4861]: I0310 19:13:59.590874 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9007f85d-dd41-49ed-9a6f-c2b09b26fad2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9007f85d-dd41-49ed-9a6f-c2b09b26fad2" (UID: "9007f85d-dd41-49ed-9a6f-c2b09b26fad2"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 10 19:13:59 crc kubenswrapper[4861]: I0310 19:13:59.594031 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9007f85d-dd41-49ed-9a6f-c2b09b26fad2-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "9007f85d-dd41-49ed-9a6f-c2b09b26fad2" (UID: "9007f85d-dd41-49ed-9a6f-c2b09b26fad2"). InnerVolumeSpecName "openstack-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 10 19:13:59 crc kubenswrapper[4861]: I0310 19:13:59.615770 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-7890-account-create-update-z8mnl"] Mar 10 19:13:59 crc kubenswrapper[4861]: I0310 19:13:59.630852 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9007f85d-dd41-49ed-9a6f-c2b09b26fad2-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "9007f85d-dd41-49ed-9a6f-c2b09b26fad2" (UID: "9007f85d-dd41-49ed-9a6f-c2b09b26fad2"). InnerVolumeSpecName "openstack-config-secret". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 10 19:13:59 crc kubenswrapper[4861]: I0310 19:13:59.657894 4861 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9007f85d-dd41-49ed-9a6f-c2b09b26fad2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 10 19:13:59 crc kubenswrapper[4861]: I0310 19:13:59.657918 4861 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/9007f85d-dd41-49ed-9a6f-c2b09b26fad2-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Mar 10 19:13:59 crc kubenswrapper[4861]: I0310 19:13:59.657927 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kpxz8\" (UniqueName: \"kubernetes.io/projected/9007f85d-dd41-49ed-9a6f-c2b09b26fad2-kube-api-access-kpxz8\") on node \"crc\" DevicePath \"\"" Mar 10 19:13:59 crc kubenswrapper[4861]: I0310 19:13:59.657936 4861 reconciler_common.go:293] "Volume detached for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/533092ca-8a4b-4005-909c-32736cde1a1e-metrics-certs-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 10 19:13:59 crc kubenswrapper[4861]: I0310 19:13:59.657944 4861 reconciler_common.go:293] "Volume detached for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") on node \"crc\" DevicePath \"\"" Mar 10 19:13:59 crc kubenswrapper[4861]: I0310 19:13:59.657952 4861 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/533092ca-8a4b-4005-909c-32736cde1a1e-ovsdbserver-nb-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 10 19:13:59 crc kubenswrapper[4861]: I0310 19:13:59.657960 4861 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/9007f85d-dd41-49ed-9a6f-c2b09b26fad2-openstack-config\") on node \"crc\" DevicePath \"\"" Mar 10 19:13:59 crc 
kubenswrapper[4861]: I0310 19:13:59.662138 4861 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/neutron-69987f456f-bjbjk" podUID="692615cd-3dd2-4970-9d35-63073e2403ba" containerName="neutron-httpd" probeResult="failure" output="Get \"https://10.217.0.174:9696/\": dial tcp 10.217.0.174:9696: connect: connection refused" Mar 10 19:13:59 crc kubenswrapper[4861]: E0310 19:13:59.683234 4861 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 10 19:13:59 crc kubenswrapper[4861]: container &Container{Name:mariadb-account-create-update,Image:quay.io/podified-antelope-centos9/openstack-mariadb@sha256:763d1f1e8a1cf877c151c59609960fd2fa29e7e50001f8818122a2d51878befa,Command:[/bin/sh -c #!/bin/bash Mar 10 19:13:59 crc kubenswrapper[4861]: Mar 10 19:13:59 crc kubenswrapper[4861]: MYSQL_REMOTE_HOST="" source /var/lib/operator-scripts/mysql_root_auth.sh Mar 10 19:13:59 crc kubenswrapper[4861]: Mar 10 19:13:59 crc kubenswrapper[4861]: export DatabasePassword=${DatabasePassword:?"Please specify a DatabasePassword variable."} Mar 10 19:13:59 crc kubenswrapper[4861]: Mar 10 19:13:59 crc kubenswrapper[4861]: MYSQL_CMD="mysql -h -u root -P 3306" Mar 10 19:13:59 crc kubenswrapper[4861]: Mar 10 19:13:59 crc kubenswrapper[4861]: if [ -n "neutron" ]; then Mar 10 19:13:59 crc kubenswrapper[4861]: GRANT_DATABASE="neutron" Mar 10 19:13:59 crc kubenswrapper[4861]: else Mar 10 19:13:59 crc kubenswrapper[4861]: GRANT_DATABASE="*" Mar 10 19:13:59 crc kubenswrapper[4861]: fi Mar 10 19:13:59 crc kubenswrapper[4861]: Mar 10 19:13:59 crc kubenswrapper[4861]: # going for maximum compatibility here: Mar 10 19:13:59 crc kubenswrapper[4861]: # 1. MySQL 8 no longer allows implicit create user when GRANT is used Mar 10 19:13:59 crc kubenswrapper[4861]: # 2. MariaDB has "CREATE OR REPLACE", but MySQL does not Mar 10 19:13:59 crc kubenswrapper[4861]: # 3. 
create user with CREATE but then do all password and TLS with ALTER to Mar 10 19:13:59 crc kubenswrapper[4861]: # support updates Mar 10 19:13:59 crc kubenswrapper[4861]: Mar 10 19:13:59 crc kubenswrapper[4861]: $MYSQL_CMD < logger="UnhandledError" Mar 10 19:13:59 crc kubenswrapper[4861]: E0310 19:13:59.684369 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mariadb-account-create-update\" with CreateContainerConfigError: \"secret \\\"neutron-db-secret\\\" not found\"" pod="openstack/neutron-7890-account-create-update-z8mnl" podUID="da655568-6f44-40e6-af1d-278b701fc52e" Mar 10 19:13:59 crc kubenswrapper[4861]: I0310 19:13:59.792846 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-6d6c7bd6d5-klrmq" Mar 10 19:13:59 crc kubenswrapper[4861]: I0310 19:13:59.813499 4861 generic.go:334] "Generic (PLEG): container finished" podID="2683e959-ecff-478e-aa0a-acf18f482d39" containerID="2a8c0c46e768dafa2c342d24605ecb36c119cc70e7586a4c3f59339b29b329fe" exitCode=0 Mar 10 19:13:59 crc kubenswrapper[4861]: I0310 19:13:59.813874 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"2683e959-ecff-478e-aa0a-acf18f482d39","Type":"ContainerDied","Data":"2a8c0c46e768dafa2c342d24605ecb36c119cc70e7586a4c3f59339b29b329fe"} Mar 10 19:13:59 crc kubenswrapper[4861]: I0310 19:13:59.817057 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_bb836e5b-f1a1-4d7a-8de0-03cddd650c4a/ovsdbserver-sb/0.log" Mar 10 19:13:59 crc kubenswrapper[4861]: I0310 19:13:59.817117 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"bb836e5b-f1a1-4d7a-8de0-03cddd650c4a","Type":"ContainerDied","Data":"5e4a26d7e404af97cbdcaf5e1dc8f3742995e948494f6e7689506efa72981cfc"} Mar 10 19:13:59 crc kubenswrapper[4861]: I0310 19:13:59.817146 4861 scope.go:117] "RemoveContainer" 
containerID="fe5ee7032ce5d91ee136401e0e5a8ee13f7696e2e7c6cc9ccb1c1348c985d7d0" Mar 10 19:13:59 crc kubenswrapper[4861]: I0310 19:13:59.817259 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Mar 10 19:13:59 crc kubenswrapper[4861]: I0310 19:13:59.826757 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7890-account-create-update-z8mnl" event={"ID":"da655568-6f44-40e6-af1d-278b701fc52e","Type":"ContainerStarted","Data":"a76d45a57ba1ac1a9a9565b178962b7513eb596808c1595a75dee40501156883"} Mar 10 19:13:59 crc kubenswrapper[4861]: I0310 19:13:59.869296 4861 generic.go:334] "Generic (PLEG): container finished" podID="9007f85d-dd41-49ed-9a6f-c2b09b26fad2" containerID="94acf945686639ad254fe41f715e4c432f0b2ea1aeb6417fba3df413dac91d1b" exitCode=137 Mar 10 19:13:59 crc kubenswrapper[4861]: I0310 19:13:59.869418 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Mar 10 19:13:59 crc kubenswrapper[4861]: I0310 19:13:59.897905 4861 generic.go:334] "Generic (PLEG): container finished" podID="ab499a55-1919-491f-8dc6-12344757201d" containerID="a8146baaa87f96997feb8adc35ad2ceeace3b69ee66ea36c6996074e20d85886" exitCode=0 Mar 10 19:13:59 crc kubenswrapper[4861]: I0310 19:13:59.898005 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"ab499a55-1919-491f-8dc6-12344757201d","Type":"ContainerDied","Data":"a8146baaa87f96997feb8adc35ad2ceeace3b69ee66ea36c6996074e20d85886"} Mar 10 19:13:59 crc kubenswrapper[4861]: I0310 19:13:59.924594 4861 generic.go:334] "Generic (PLEG): container finished" podID="04bbfc10-7f55-45a5-8a53-70e994a09bc9" containerID="43df18a6de05237d4ac2005dc34fa3948f76b3a87364c4a8cea9cbd0359fd444" exitCode=0 Mar 10 19:13:59 crc kubenswrapper[4861]: I0310 19:13:59.924886 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" 
event={"ID":"04bbfc10-7f55-45a5-8a53-70e994a09bc9","Type":"ContainerDied","Data":"43df18a6de05237d4ac2005dc34fa3948f76b3a87364c4a8cea9cbd0359fd444"} Mar 10 19:13:59 crc kubenswrapper[4861]: I0310 19:13:59.929764 4861 generic.go:334] "Generic (PLEG): container finished" podID="1dff60e3-9ca8-461b-8d7e-018b626677e8" containerID="3b70421bd87d6bdf27fe679226a122192d55a1ff889b9eda172d225605e8cb57" exitCode=143 Mar 10 19:13:59 crc kubenswrapper[4861]: I0310 19:13:59.929894 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-7d79dbd48c-d74zl" event={"ID":"1dff60e3-9ca8-461b-8d7e-018b626677e8","Type":"ContainerDied","Data":"3b70421bd87d6bdf27fe679226a122192d55a1ff889b9eda172d225605e8cb57"} Mar 10 19:13:59 crc kubenswrapper[4861]: I0310 19:13:59.951852 4861 generic.go:334] "Generic (PLEG): container finished" podID="1b1af633-31f5-4658-bda4-fc9c010d6280" containerID="d0530b4bb88089a565eb73c29ff174432d92837c87c92e1b218d1e31d36d1ebd" exitCode=0 Mar 10 19:13:59 crc kubenswrapper[4861]: I0310 19:13:59.952173 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"1b1af633-31f5-4658-bda4-fc9c010d6280","Type":"ContainerDied","Data":"d0530b4bb88089a565eb73c29ff174432d92837c87c92e1b218d1e31d36d1ebd"} Mar 10 19:13:59 crc kubenswrapper[4861]: I0310 19:13:59.964812 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/97db979f-75cb-4e7e-9dc6-0c65f39fef8e-etc-swift\") pod \"97db979f-75cb-4e7e-9dc6-0c65f39fef8e\" (UID: \"97db979f-75cb-4e7e-9dc6-0c65f39fef8e\") " Mar 10 19:13:59 crc kubenswrapper[4861]: I0310 19:13:59.964922 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/97db979f-75cb-4e7e-9dc6-0c65f39fef8e-combined-ca-bundle\") pod \"97db979f-75cb-4e7e-9dc6-0c65f39fef8e\" (UID: \"97db979f-75cb-4e7e-9dc6-0c65f39fef8e\") " Mar 10 
19:13:59 crc kubenswrapper[4861]: I0310 19:13:59.964942 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/97db979f-75cb-4e7e-9dc6-0c65f39fef8e-log-httpd\") pod \"97db979f-75cb-4e7e-9dc6-0c65f39fef8e\" (UID: \"97db979f-75cb-4e7e-9dc6-0c65f39fef8e\") " Mar 10 19:13:59 crc kubenswrapper[4861]: I0310 19:13:59.965029 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mlzmq\" (UniqueName: \"kubernetes.io/projected/97db979f-75cb-4e7e-9dc6-0c65f39fef8e-kube-api-access-mlzmq\") pod \"97db979f-75cb-4e7e-9dc6-0c65f39fef8e\" (UID: \"97db979f-75cb-4e7e-9dc6-0c65f39fef8e\") " Mar 10 19:13:59 crc kubenswrapper[4861]: I0310 19:13:59.965060 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/97db979f-75cb-4e7e-9dc6-0c65f39fef8e-internal-tls-certs\") pod \"97db979f-75cb-4e7e-9dc6-0c65f39fef8e\" (UID: \"97db979f-75cb-4e7e-9dc6-0c65f39fef8e\") " Mar 10 19:13:59 crc kubenswrapper[4861]: I0310 19:13:59.965720 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/97db979f-75cb-4e7e-9dc6-0c65f39fef8e-public-tls-certs\") pod \"97db979f-75cb-4e7e-9dc6-0c65f39fef8e\" (UID: \"97db979f-75cb-4e7e-9dc6-0c65f39fef8e\") " Mar 10 19:13:59 crc kubenswrapper[4861]: I0310 19:13:59.965790 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/97db979f-75cb-4e7e-9dc6-0c65f39fef8e-config-data\") pod \"97db979f-75cb-4e7e-9dc6-0c65f39fef8e\" (UID: \"97db979f-75cb-4e7e-9dc6-0c65f39fef8e\") " Mar 10 19:13:59 crc kubenswrapper[4861]: I0310 19:13:59.965815 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/97db979f-75cb-4e7e-9dc6-0c65f39fef8e-run-httpd\") pod \"97db979f-75cb-4e7e-9dc6-0c65f39fef8e\" (UID: \"97db979f-75cb-4e7e-9dc6-0c65f39fef8e\") " Mar 10 19:13:59 crc kubenswrapper[4861]: I0310 19:13:59.966911 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/97db979f-75cb-4e7e-9dc6-0c65f39fef8e-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "97db979f-75cb-4e7e-9dc6-0c65f39fef8e" (UID: "97db979f-75cb-4e7e-9dc6-0c65f39fef8e"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 10 19:13:59 crc kubenswrapper[4861]: I0310 19:13:59.974349 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/97db979f-75cb-4e7e-9dc6-0c65f39fef8e-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "97db979f-75cb-4e7e-9dc6-0c65f39fef8e" (UID: "97db979f-75cb-4e7e-9dc6-0c65f39fef8e"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 10 19:13:59 crc kubenswrapper[4861]: I0310 19:13:59.986815 4861 generic.go:334] "Generic (PLEG): container finished" podID="2f72ec66-0d64-4a5f-b1c6-17d62a735065" containerID="ab5fdb8f04b86479e7de65efd45e1b3abeeeef9346b04a04ab27c54b7f1b81fc" exitCode=0 Mar 10 19:13:59 crc kubenswrapper[4861]: I0310 19:13:59.986904 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-cw7x8" event={"ID":"2f72ec66-0d64-4a5f-b1c6-17d62a735065","Type":"ContainerDied","Data":"ab5fdb8f04b86479e7de65efd45e1b3abeeeef9346b04a04ab27c54b7f1b81fc"} Mar 10 19:13:59 crc kubenswrapper[4861]: I0310 19:13:59.998111 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/97db979f-75cb-4e7e-9dc6-0c65f39fef8e-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "97db979f-75cb-4e7e-9dc6-0c65f39fef8e" (UID: "97db979f-75cb-4e7e-9dc6-0c65f39fef8e"). InnerVolumeSpecName "etc-swift". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 10 19:14:00 crc kubenswrapper[4861]: I0310 19:14:00.005404 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_533092ca-8a4b-4005-909c-32736cde1a1e/ovsdbserver-nb/0.log" Mar 10 19:14:00 crc kubenswrapper[4861]: I0310 19:14:00.005487 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"533092ca-8a4b-4005-909c-32736cde1a1e","Type":"ContainerDied","Data":"9249c9f0d5bf5899851668ab241bf4ebb4bd98e751d012356261ed81f1b846b2"} Mar 10 19:14:00 crc kubenswrapper[4861]: I0310 19:14:00.005565 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Mar 10 19:14:00 crc kubenswrapper[4861]: I0310 19:14:00.010719 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/97db979f-75cb-4e7e-9dc6-0c65f39fef8e-kube-api-access-mlzmq" (OuterVolumeSpecName: "kube-api-access-mlzmq") pod "97db979f-75cb-4e7e-9dc6-0c65f39fef8e" (UID: "97db979f-75cb-4e7e-9dc6-0c65f39fef8e"). InnerVolumeSpecName "kube-api-access-mlzmq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 10 19:14:00 crc kubenswrapper[4861]: I0310 19:14:00.012564 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-f866-account-create-update-fl8b6" event={"ID":"dbb40fdc-8e7e-4a1e-b7a0-5456081ba068","Type":"ContainerStarted","Data":"1c636fbe64e078dd81aa51e09738dd68b20fbfa77c0b920f49c0af409fd60677"} Mar 10 19:14:00 crc kubenswrapper[4861]: I0310 19:14:00.049063 4861 generic.go:334] "Generic (PLEG): container finished" podID="930df8f4-7ebf-4425-976f-4f52654586bb" containerID="6eb92aeb41c6749ec876f80e99358ba49b19a270c11ada403487be3ca0ef76c4" exitCode=143 Mar 10 19:14:00 crc kubenswrapper[4861]: I0310 19:14:00.049149 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"930df8f4-7ebf-4425-976f-4f52654586bb","Type":"ContainerDied","Data":"6eb92aeb41c6749ec876f80e99358ba49b19a270c11ada403487be3ca0ef76c4"} Mar 10 19:14:00 crc kubenswrapper[4861]: I0310 19:14:00.050535 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/97db979f-75cb-4e7e-9dc6-0c65f39fef8e-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "97db979f-75cb-4e7e-9dc6-0c65f39fef8e" (UID: "97db979f-75cb-4e7e-9dc6-0c65f39fef8e"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 10 19:14:00 crc kubenswrapper[4861]: I0310 19:14:00.062423 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/97db979f-75cb-4e7e-9dc6-0c65f39fef8e-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "97db979f-75cb-4e7e-9dc6-0c65f39fef8e" (UID: "97db979f-75cb-4e7e-9dc6-0c65f39fef8e"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 10 19:14:00 crc kubenswrapper[4861]: I0310 19:14:00.064313 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/97db979f-75cb-4e7e-9dc6-0c65f39fef8e-config-data" (OuterVolumeSpecName: "config-data") pod "97db979f-75cb-4e7e-9dc6-0c65f39fef8e" (UID: "97db979f-75cb-4e7e-9dc6-0c65f39fef8e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 10 19:14:00 crc kubenswrapper[4861]: I0310 19:14:00.071215 4861 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/97db979f-75cb-4e7e-9dc6-0c65f39fef8e-log-httpd\") on node \"crc\" DevicePath \"\"" Mar 10 19:14:00 crc kubenswrapper[4861]: I0310 19:14:00.071247 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mlzmq\" (UniqueName: \"kubernetes.io/projected/97db979f-75cb-4e7e-9dc6-0c65f39fef8e-kube-api-access-mlzmq\") on node \"crc\" DevicePath \"\"" Mar 10 19:14:00 crc kubenswrapper[4861]: I0310 19:14:00.071257 4861 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/97db979f-75cb-4e7e-9dc6-0c65f39fef8e-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 10 19:14:00 crc kubenswrapper[4861]: I0310 19:14:00.071265 4861 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/97db979f-75cb-4e7e-9dc6-0c65f39fef8e-public-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 10 19:14:00 crc kubenswrapper[4861]: I0310 19:14:00.071273 4861 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/97db979f-75cb-4e7e-9dc6-0c65f39fef8e-config-data\") on node \"crc\" DevicePath \"\"" Mar 10 19:14:00 crc kubenswrapper[4861]: I0310 19:14:00.071281 4861 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/97db979f-75cb-4e7e-9dc6-0c65f39fef8e-run-httpd\") on node \"crc\" DevicePath \"\"" Mar 10 19:14:00 crc kubenswrapper[4861]: I0310 19:14:00.071289 4861 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/97db979f-75cb-4e7e-9dc6-0c65f39fef8e-etc-swift\") on node \"crc\" DevicePath \"\"" Mar 10 19:14:00 crc kubenswrapper[4861]: I0310 19:14:00.075993 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-52e5-account-create-update-q8wl5" event={"ID":"1d0a3228-ad14-4cd8-8c5a-969bad245ecf","Type":"ContainerStarted","Data":"320e3767547e4f1aa557264cd5b241fc9fa22e42f2fa0b34406667a76dcf22a3"} Mar 10 19:14:00 crc kubenswrapper[4861]: I0310 19:14:00.090679 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/97db979f-75cb-4e7e-9dc6-0c65f39fef8e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "97db979f-75cb-4e7e-9dc6-0c65f39fef8e" (UID: "97db979f-75cb-4e7e-9dc6-0c65f39fef8e"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 10 19:14:00 crc kubenswrapper[4861]: I0310 19:14:00.104538 4861 generic.go:334] "Generic (PLEG): container finished" podID="0fd5b5c7-5272-4d6a-b6ab-94ae5b644a3b" containerID="e7ad0e828f858766d2e6ad1bdb4f2618535e12c3dbd09fac481532edeab03184" exitCode=143 Mar 10 19:14:00 crc kubenswrapper[4861]: I0310 19:14:00.104639 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-7d66d9f78-7w6cc" event={"ID":"0fd5b5c7-5272-4d6a-b6ab-94ae5b644a3b","Type":"ContainerDied","Data":"e7ad0e828f858766d2e6ad1bdb4f2618535e12c3dbd09fac481532edeab03184"} Mar 10 19:14:00 crc kubenswrapper[4861]: I0310 19:14:00.139065 4861 generic.go:334] "Generic (PLEG): container finished" podID="97db979f-75cb-4e7e-9dc6-0c65f39fef8e" containerID="a06874ab7d0cf39ee7cdf29289ca00ec76f54115d05d5a197cc6e53230b06010" exitCode=0 Mar 10 19:14:00 crc kubenswrapper[4861]: I0310 19:14:00.139098 4861 generic.go:334] "Generic (PLEG): container finished" podID="97db979f-75cb-4e7e-9dc6-0c65f39fef8e" containerID="985fbcd6585b2dfed67bfbcca47fd62b176667adcf86aa98e78a20b0b5c2fedd" exitCode=0 Mar 10 19:14:00 crc kubenswrapper[4861]: I0310 19:14:00.139182 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-6d6c7bd6d5-klrmq" event={"ID":"97db979f-75cb-4e7e-9dc6-0c65f39fef8e","Type":"ContainerDied","Data":"a06874ab7d0cf39ee7cdf29289ca00ec76f54115d05d5a197cc6e53230b06010"} Mar 10 19:14:00 crc kubenswrapper[4861]: I0310 19:14:00.139224 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-6d6c7bd6d5-klrmq" event={"ID":"97db979f-75cb-4e7e-9dc6-0c65f39fef8e","Type":"ContainerDied","Data":"985fbcd6585b2dfed67bfbcca47fd62b176667adcf86aa98e78a20b0b5c2fedd"} Mar 10 19:14:00 crc kubenswrapper[4861]: I0310 19:14:00.139238 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-6d6c7bd6d5-klrmq" 
event={"ID":"97db979f-75cb-4e7e-9dc6-0c65f39fef8e","Type":"ContainerDied","Data":"18012cb40d9deff9ef05b17c1356be10d9edaff1e34afacc9b7fa1a0adf864d8"} Mar 10 19:14:00 crc kubenswrapper[4861]: I0310 19:14:00.139317 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-6d6c7bd6d5-klrmq" Mar 10 19:14:00 crc kubenswrapper[4861]: I0310 19:14:00.150758 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29552834-dlwsb"] Mar 10 19:14:00 crc kubenswrapper[4861]: I0310 19:14:00.174679 4861 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/97db979f-75cb-4e7e-9dc6-0c65f39fef8e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 10 19:14:00 crc kubenswrapper[4861]: I0310 19:14:00.187463 4861 generic.go:334] "Generic (PLEG): container finished" podID="ca9800e2-ceed-4197-90f1-97d14c918e45" containerID="6d5f161e1ebe942db8f22888f7d4ee605ae096c419576acb0dffd5c4a5831534" exitCode=143 Mar 10 19:14:00 crc kubenswrapper[4861]: E0310 19:14:00.190462 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb836e5b-f1a1-4d7a-8de0-03cddd650c4a" containerName="ovsdbserver-sb" Mar 10 19:14:00 crc kubenswrapper[4861]: I0310 19:14:00.190572 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb836e5b-f1a1-4d7a-8de0-03cddd650c4a" containerName="ovsdbserver-sb" Mar 10 19:14:00 crc kubenswrapper[4861]: E0310 19:14:00.190667 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="187a3484-7a9d-499a-91d0-1867ed682d05" containerName="init" Mar 10 19:14:00 crc kubenswrapper[4861]: I0310 19:14:00.190761 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="187a3484-7a9d-499a-91d0-1867ed682d05" containerName="init" Mar 10 19:14:00 crc kubenswrapper[4861]: E0310 19:14:00.190858 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d0c73a13-57f7-43aa-8e0a-ba36a3195653" 
containerName="openstack-network-exporter" Mar 10 19:14:00 crc kubenswrapper[4861]: I0310 19:14:00.190961 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="d0c73a13-57f7-43aa-8e0a-ba36a3195653" containerName="openstack-network-exporter" Mar 10 19:14:00 crc kubenswrapper[4861]: E0310 19:14:00.191135 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="97db979f-75cb-4e7e-9dc6-0c65f39fef8e" containerName="proxy-httpd" Mar 10 19:14:00 crc kubenswrapper[4861]: I0310 19:14:00.192403 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="97db979f-75cb-4e7e-9dc6-0c65f39fef8e" containerName="proxy-httpd" Mar 10 19:14:00 crc kubenswrapper[4861]: E0310 19:14:00.192526 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="97db979f-75cb-4e7e-9dc6-0c65f39fef8e" containerName="proxy-server" Mar 10 19:14:00 crc kubenswrapper[4861]: I0310 19:14:00.192733 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="97db979f-75cb-4e7e-9dc6-0c65f39fef8e" containerName="proxy-server" Mar 10 19:14:00 crc kubenswrapper[4861]: E0310 19:14:00.192832 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="533092ca-8a4b-4005-909c-32736cde1a1e" containerName="ovsdbserver-nb" Mar 10 19:14:00 crc kubenswrapper[4861]: I0310 19:14:00.192914 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="533092ca-8a4b-4005-909c-32736cde1a1e" containerName="ovsdbserver-nb" Mar 10 19:14:00 crc kubenswrapper[4861]: E0310 19:14:00.192982 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="533092ca-8a4b-4005-909c-32736cde1a1e" containerName="openstack-network-exporter" Mar 10 19:14:00 crc kubenswrapper[4861]: I0310 19:14:00.193043 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="533092ca-8a4b-4005-909c-32736cde1a1e" containerName="openstack-network-exporter" Mar 10 19:14:00 crc kubenswrapper[4861]: E0310 19:14:00.193114 4861 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="187a3484-7a9d-499a-91d0-1867ed682d05" containerName="dnsmasq-dns" Mar 10 19:14:00 crc kubenswrapper[4861]: I0310 19:14:00.193180 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="187a3484-7a9d-499a-91d0-1867ed682d05" containerName="dnsmasq-dns" Mar 10 19:14:00 crc kubenswrapper[4861]: E0310 19:14:00.193278 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb836e5b-f1a1-4d7a-8de0-03cddd650c4a" containerName="openstack-network-exporter" Mar 10 19:14:00 crc kubenswrapper[4861]: I0310 19:14:00.193367 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb836e5b-f1a1-4d7a-8de0-03cddd650c4a" containerName="openstack-network-exporter" Mar 10 19:14:00 crc kubenswrapper[4861]: I0310 19:14:00.194021 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="d0c73a13-57f7-43aa-8e0a-ba36a3195653" containerName="openstack-network-exporter" Mar 10 19:14:00 crc kubenswrapper[4861]: I0310 19:14:00.194129 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="bb836e5b-f1a1-4d7a-8de0-03cddd650c4a" containerName="openstack-network-exporter" Mar 10 19:14:00 crc kubenswrapper[4861]: I0310 19:14:00.194219 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="187a3484-7a9d-499a-91d0-1867ed682d05" containerName="dnsmasq-dns" Mar 10 19:14:00 crc kubenswrapper[4861]: I0310 19:14:00.194577 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="bb836e5b-f1a1-4d7a-8de0-03cddd650c4a" containerName="ovsdbserver-sb" Mar 10 19:14:00 crc kubenswrapper[4861]: I0310 19:14:00.194681 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="533092ca-8a4b-4005-909c-32736cde1a1e" containerName="openstack-network-exporter" Mar 10 19:14:00 crc kubenswrapper[4861]: I0310 19:14:00.194826 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="97db979f-75cb-4e7e-9dc6-0c65f39fef8e" containerName="proxy-httpd" Mar 10 19:14:00 crc kubenswrapper[4861]: I0310 19:14:00.194989 4861 
memory_manager.go:354] "RemoveStaleState removing state" podUID="533092ca-8a4b-4005-909c-32736cde1a1e" containerName="ovsdbserver-nb" Mar 10 19:14:00 crc kubenswrapper[4861]: I0310 19:14:00.195090 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="97db979f-75cb-4e7e-9dc6-0c65f39fef8e" containerName="proxy-server" Mar 10 19:14:00 crc kubenswrapper[4861]: I0310 19:14:00.198677 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29552834-dlwsb"] Mar 10 19:14:00 crc kubenswrapper[4861]: I0310 19:14:00.199377 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cf7a-account-create-update-wf5cp" event={"ID":"b7c0d25e-1650-4ce9-8cb7-71d8d9dceba1","Type":"ContainerStarted","Data":"6009b6226d39d809fee813aa918572ea9d751f27e3158eab4518e67a055ce789"} Mar 10 19:14:00 crc kubenswrapper[4861]: I0310 19:14:00.199645 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"ca9800e2-ceed-4197-90f1-97d14c918e45","Type":"ContainerDied","Data":"6d5f161e1ebe942db8f22888f7d4ee605ae096c419576acb0dffd5c4a5831534"} Mar 10 19:14:00 crc kubenswrapper[4861]: I0310 19:14:00.202114 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5d84cf8948-mg4jb" event={"ID":"79b1b96d-52e3-4a16-8fc1-d09188b5ebc1","Type":"ContainerDied","Data":"4b1f746cd725920aeee99b23e4c08bf7d8c523580d3e57266ad014c8ed8e2ed0"} Mar 10 19:14:00 crc kubenswrapper[4861]: I0310 19:14:00.201801 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29552834-dlwsb" Mar 10 19:14:00 crc kubenswrapper[4861]: I0310 19:14:00.190757 4861 generic.go:334] "Generic (PLEG): container finished" podID="79b1b96d-52e3-4a16-8fc1-d09188b5ebc1" containerID="4b1f746cd725920aeee99b23e4c08bf7d8c523580d3e57266ad014c8ed8e2ed0" exitCode=143 Mar 10 19:14:00 crc kubenswrapper[4861]: I0310 19:14:00.206059 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-gfbj2" Mar 10 19:14:00 crc kubenswrapper[4861]: I0310 19:14:00.206246 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 10 19:14:00 crc kubenswrapper[4861]: I0310 19:14:00.206452 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 10 19:14:00 crc kubenswrapper[4861]: E0310 19:14:00.209811 4861 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="a6e23144072f6812cabc16305a9cf3ff58abd4877ddf98f7f123b16f5863c1e7" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Mar 10 19:14:00 crc kubenswrapper[4861]: E0310 19:14:00.212793 4861 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="a6e23144072f6812cabc16305a9cf3ff58abd4877ddf98f7f123b16f5863c1e7" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Mar 10 19:14:00 crc kubenswrapper[4861]: I0310 19:14:00.235923 4861 generic.go:334] "Generic (PLEG): container finished" podID="692615cd-3dd2-4970-9d35-63073e2403ba" containerID="8c0527061a420c2c42f4d25de783f4827f7ddac4b9a8e5486e1239c416673684" exitCode=0 Mar 10 19:14:00 crc kubenswrapper[4861]: I0310 19:14:00.235990 4861 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/neutron-69987f456f-bjbjk" event={"ID":"692615cd-3dd2-4970-9d35-63073e2403ba","Type":"ContainerDied","Data":"8c0527061a420c2c42f4d25de783f4827f7ddac4b9a8e5486e1239c416673684"} Mar 10 19:14:00 crc kubenswrapper[4861]: E0310 19:14:00.244844 4861 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="a6e23144072f6812cabc16305a9cf3ff58abd4877ddf98f7f123b16f5863c1e7" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Mar 10 19:14:00 crc kubenswrapper[4861]: E0310 19:14:00.244890 4861 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="36bd8cd3-7b2c-45fb-b171-aa2884df4e98" containerName="nova-scheduler-scheduler" Mar 10 19:14:00 crc kubenswrapper[4861]: I0310 19:14:00.248949 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-e6bf-account-create-update-rgnpm" event={"ID":"87dcb8b8-2ebd-44e9-a15f-e995495d8b32","Type":"ContainerStarted","Data":"f4683c8018949c3dddaea89d49d373b5d35be145a9710fb149b4d07ae31d9703"} Mar 10 19:14:00 crc kubenswrapper[4861]: I0310 19:14:00.259259 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7749c44969-9z2z2" Mar 10 19:14:00 crc kubenswrapper[4861]: I0310 19:14:00.259331 4861 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." 
pod="openstack/root-account-create-update-lrz47" secret="" err="secret \"galera-openstack-cell1-dockercfg-2xqvk\" not found" Mar 10 19:14:00 crc kubenswrapper[4861]: I0310 19:14:00.259641 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7749c44969-9z2z2" event={"ID":"187a3484-7a9d-499a-91d0-1867ed682d05","Type":"ContainerDied","Data":"3375b778531da4bc5dd9e00f00bc2e67e75082b6e8c434746d0eb70335e35a49"} Mar 10 19:14:00 crc kubenswrapper[4861]: E0310 19:14:00.280521 4861 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 10 19:14:00 crc kubenswrapper[4861]: container &Container{Name:mariadb-account-create-update,Image:quay.io/podified-antelope-centos9/openstack-mariadb@sha256:763d1f1e8a1cf877c151c59609960fd2fa29e7e50001f8818122a2d51878befa,Command:[/bin/sh -c #!/bin/bash Mar 10 19:14:00 crc kubenswrapper[4861]: Mar 10 19:14:00 crc kubenswrapper[4861]: MYSQL_REMOTE_HOST="" source /var/lib/operator-scripts/mysql_root_auth.sh Mar 10 19:14:00 crc kubenswrapper[4861]: Mar 10 19:14:00 crc kubenswrapper[4861]: export DatabasePassword=${DatabasePassword:?"Please specify a DatabasePassword variable."} Mar 10 19:14:00 crc kubenswrapper[4861]: Mar 10 19:14:00 crc kubenswrapper[4861]: MYSQL_CMD="mysql -h -u root -P 3306" Mar 10 19:14:00 crc kubenswrapper[4861]: Mar 10 19:14:00 crc kubenswrapper[4861]: if [ -n "" ]; then Mar 10 19:14:00 crc kubenswrapper[4861]: GRANT_DATABASE="" Mar 10 19:14:00 crc kubenswrapper[4861]: else Mar 10 19:14:00 crc kubenswrapper[4861]: GRANT_DATABASE="*" Mar 10 19:14:00 crc kubenswrapper[4861]: fi Mar 10 19:14:00 crc kubenswrapper[4861]: Mar 10 19:14:00 crc kubenswrapper[4861]: # going for maximum compatibility here: Mar 10 19:14:00 crc kubenswrapper[4861]: # 1. MySQL 8 no longer allows implicit create user when GRANT is used Mar 10 19:14:00 crc kubenswrapper[4861]: # 2. MariaDB has "CREATE OR REPLACE", but MySQL does not Mar 10 19:14:00 crc kubenswrapper[4861]: # 3. 
create user with CREATE but then do all password and TLS with ALTER to Mar 10 19:14:00 crc kubenswrapper[4861]: # support updates Mar 10 19:14:00 crc kubenswrapper[4861]: Mar 10 19:14:00 crc kubenswrapper[4861]: $MYSQL_CMD < logger="UnhandledError" Mar 10 19:14:00 crc kubenswrapper[4861]: E0310 19:14:00.283041 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mariadb-account-create-update\" with CreateContainerConfigError: \"secret \\\"openstack-cell1-mariadb-root-db-secret\\\" not found\"" pod="openstack/root-account-create-update-lrz47" podUID="ee15a5af-fb3a-45fc-bfa1-b9eb45418a32" Mar 10 19:14:00 crc kubenswrapper[4861]: I0310 19:14:00.347461 4861 scope.go:117] "RemoveContainer" containerID="d0df70b068b7e2c89545406ff08c2cfd24ffaf8b8337d1abdd32af448ab278bc" Mar 10 19:14:00 crc kubenswrapper[4861]: I0310 19:14:00.382956 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Mar 10 19:14:00 crc kubenswrapper[4861]: I0310 19:14:00.383076 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d99vp\" (UniqueName: \"kubernetes.io/projected/91e5ef26-1440-4ffb-84d9-2bc5e0bed45c-kube-api-access-d99vp\") pod \"auto-csr-approver-29552834-dlwsb\" (UID: \"91e5ef26-1440-4ffb-84d9-2bc5e0bed45c\") " pod="openshift-infra/auto-csr-approver-29552834-dlwsb" Mar 10 19:14:00 crc kubenswrapper[4861]: I0310 19:14:00.390466 4861 scope.go:117] "RemoveContainer" containerID="94acf945686639ad254fe41f715e4c432f0b2ea1aeb6417fba3df413dac91d1b" Mar 10 19:14:00 crc kubenswrapper[4861]: I0310 19:14:00.423978 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-cell1-galera-0" Mar 10 19:14:00 crc kubenswrapper[4861]: I0310 19:14:00.433784 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7749c44969-9z2z2"] Mar 10 19:14:00 crc kubenswrapper[4861]: I0310 19:14:00.441257 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7749c44969-9z2z2"] Mar 10 19:14:00 crc kubenswrapper[4861]: I0310 19:14:00.458127 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovsdbserver-nb-0"] Mar 10 19:14:00 crc kubenswrapper[4861]: I0310 19:14:00.467523 4861 scope.go:117] "RemoveContainer" containerID="94acf945686639ad254fe41f715e4c432f0b2ea1aeb6417fba3df413dac91d1b" Mar 10 19:14:00 crc kubenswrapper[4861]: E0310 19:14:00.469818 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"94acf945686639ad254fe41f715e4c432f0b2ea1aeb6417fba3df413dac91d1b\": container with ID starting with 94acf945686639ad254fe41f715e4c432f0b2ea1aeb6417fba3df413dac91d1b not found: ID does not exist" containerID="94acf945686639ad254fe41f715e4c432f0b2ea1aeb6417fba3df413dac91d1b" Mar 10 19:14:00 crc kubenswrapper[4861]: I0310 19:14:00.469844 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"94acf945686639ad254fe41f715e4c432f0b2ea1aeb6417fba3df413dac91d1b"} err="failed to get container status \"94acf945686639ad254fe41f715e4c432f0b2ea1aeb6417fba3df413dac91d1b\": rpc error: code = NotFound desc = could not find container \"94acf945686639ad254fe41f715e4c432f0b2ea1aeb6417fba3df413dac91d1b\": container with ID starting with 94acf945686639ad254fe41f715e4c432f0b2ea1aeb6417fba3df413dac91d1b not found: ID does not exist" Mar 10 19:14:00 crc kubenswrapper[4861]: I0310 19:14:00.469866 4861 scope.go:117] "RemoveContainer" containerID="563ee7a1f5b83d7d42165f459dccecfc60f2f551d3f0e59ace85417509a8ec48" Mar 10 19:14:00 crc 
kubenswrapper[4861]: I0310 19:14:00.475869 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-7890-account-create-update-z8mnl" Mar 10 19:14:00 crc kubenswrapper[4861]: I0310 19:14:00.487838 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovsdbserver-nb-0"] Mar 10 19:14:00 crc kubenswrapper[4861]: I0310 19:14:00.497305 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8p5tq\" (UniqueName: \"kubernetes.io/projected/da655568-6f44-40e6-af1d-278b701fc52e-kube-api-access-8p5tq\") pod \"da655568-6f44-40e6-af1d-278b701fc52e\" (UID: \"da655568-6f44-40e6-af1d-278b701fc52e\") " Mar 10 19:14:00 crc kubenswrapper[4861]: I0310 19:14:00.497342 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/1b1af633-31f5-4658-bda4-fc9c010d6280-nova-novncproxy-tls-certs\") pod \"1b1af633-31f5-4658-bda4-fc9c010d6280\" (UID: \"1b1af633-31f5-4658-bda4-fc9c010d6280\") " Mar 10 19:14:00 crc kubenswrapper[4861]: I0310 19:14:00.497373 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2683e959-ecff-478e-aa0a-acf18f482d39-operator-scripts\") pod \"2683e959-ecff-478e-aa0a-acf18f482d39\" (UID: \"2683e959-ecff-478e-aa0a-acf18f482d39\") " Mar 10 19:14:00 crc kubenswrapper[4861]: I0310 19:14:00.497413 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mgpqk\" (UniqueName: \"kubernetes.io/projected/1b1af633-31f5-4658-bda4-fc9c010d6280-kube-api-access-mgpqk\") pod \"1b1af633-31f5-4658-bda4-fc9c010d6280\" (UID: \"1b1af633-31f5-4658-bda4-fc9c010d6280\") " Mar 10 19:14:00 crc kubenswrapper[4861]: I0310 19:14:00.497434 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-default\" (UniqueName: 
\"kubernetes.io/configmap/2683e959-ecff-478e-aa0a-acf18f482d39-config-data-default\") pod \"2683e959-ecff-478e-aa0a-acf18f482d39\" (UID: \"2683e959-ecff-478e-aa0a-acf18f482d39\") " Mar 10 19:14:00 crc kubenswrapper[4861]: I0310 19:14:00.497657 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/2683e959-ecff-478e-aa0a-acf18f482d39-config-data-generated\") pod \"2683e959-ecff-478e-aa0a-acf18f482d39\" (UID: \"2683e959-ecff-478e-aa0a-acf18f482d39\") " Mar 10 19:14:00 crc kubenswrapper[4861]: I0310 19:14:00.497677 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tcwmd\" (UniqueName: \"kubernetes.io/projected/2683e959-ecff-478e-aa0a-acf18f482d39-kube-api-access-tcwmd\") pod \"2683e959-ecff-478e-aa0a-acf18f482d39\" (UID: \"2683e959-ecff-478e-aa0a-acf18f482d39\") " Mar 10 19:14:00 crc kubenswrapper[4861]: I0310 19:14:00.497723 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mysql-db\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"2683e959-ecff-478e-aa0a-acf18f482d39\" (UID: \"2683e959-ecff-478e-aa0a-acf18f482d39\") " Mar 10 19:14:00 crc kubenswrapper[4861]: I0310 19:14:00.497746 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2683e959-ecff-478e-aa0a-acf18f482d39-combined-ca-bundle\") pod \"2683e959-ecff-478e-aa0a-acf18f482d39\" (UID: \"2683e959-ecff-478e-aa0a-acf18f482d39\") " Mar 10 19:14:00 crc kubenswrapper[4861]: I0310 19:14:00.497769 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/1b1af633-31f5-4658-bda4-fc9c010d6280-vencrypt-tls-certs\") pod \"1b1af633-31f5-4658-bda4-fc9c010d6280\" (UID: \"1b1af633-31f5-4658-bda4-fc9c010d6280\") " Mar 10 19:14:00 crc kubenswrapper[4861]: 
I0310 19:14:00.497790 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/2683e959-ecff-478e-aa0a-acf18f482d39-galera-tls-certs\") pod \"2683e959-ecff-478e-aa0a-acf18f482d39\" (UID: \"2683e959-ecff-478e-aa0a-acf18f482d39\") " Mar 10 19:14:00 crc kubenswrapper[4861]: I0310 19:14:00.497810 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1b1af633-31f5-4658-bda4-fc9c010d6280-combined-ca-bundle\") pod \"1b1af633-31f5-4658-bda4-fc9c010d6280\" (UID: \"1b1af633-31f5-4658-bda4-fc9c010d6280\") " Mar 10 19:14:00 crc kubenswrapper[4861]: I0310 19:14:00.497853 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/da655568-6f44-40e6-af1d-278b701fc52e-operator-scripts\") pod \"da655568-6f44-40e6-af1d-278b701fc52e\" (UID: \"da655568-6f44-40e6-af1d-278b701fc52e\") " Mar 10 19:14:00 crc kubenswrapper[4861]: I0310 19:14:00.497883 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/2683e959-ecff-478e-aa0a-acf18f482d39-kolla-config\") pod \"2683e959-ecff-478e-aa0a-acf18f482d39\" (UID: \"2683e959-ecff-478e-aa0a-acf18f482d39\") " Mar 10 19:14:00 crc kubenswrapper[4861]: I0310 19:14:00.497928 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1b1af633-31f5-4658-bda4-fc9c010d6280-config-data\") pod \"1b1af633-31f5-4658-bda4-fc9c010d6280\" (UID: \"1b1af633-31f5-4658-bda4-fc9c010d6280\") " Mar 10 19:14:00 crc kubenswrapper[4861]: I0310 19:14:00.498280 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d99vp\" (UniqueName: \"kubernetes.io/projected/91e5ef26-1440-4ffb-84d9-2bc5e0bed45c-kube-api-access-d99vp\") 
pod \"auto-csr-approver-29552834-dlwsb\" (UID: \"91e5ef26-1440-4ffb-84d9-2bc5e0bed45c\") " pod="openshift-infra/auto-csr-approver-29552834-dlwsb" Mar 10 19:14:00 crc kubenswrapper[4861]: I0310 19:14:00.510879 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/da655568-6f44-40e6-af1d-278b701fc52e-kube-api-access-8p5tq" (OuterVolumeSpecName: "kube-api-access-8p5tq") pod "da655568-6f44-40e6-af1d-278b701fc52e" (UID: "da655568-6f44-40e6-af1d-278b701fc52e"). InnerVolumeSpecName "kube-api-access-8p5tq". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 10 19:14:00 crc kubenswrapper[4861]: I0310 19:14:00.517642 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2683e959-ecff-478e-aa0a-acf18f482d39-config-data-default" (OuterVolumeSpecName: "config-data-default") pod "2683e959-ecff-478e-aa0a-acf18f482d39" (UID: "2683e959-ecff-478e-aa0a-acf18f482d39"). InnerVolumeSpecName "config-data-default". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 10 19:14:00 crc kubenswrapper[4861]: I0310 19:14:00.519360 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-proxy-6d6c7bd6d5-klrmq"] Mar 10 19:14:00 crc kubenswrapper[4861]: I0310 19:14:00.527473 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2683e959-ecff-478e-aa0a-acf18f482d39-config-data-generated" (OuterVolumeSpecName: "config-data-generated") pod "2683e959-ecff-478e-aa0a-acf18f482d39" (UID: "2683e959-ecff-478e-aa0a-acf18f482d39"). InnerVolumeSpecName "config-data-generated". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 10 19:14:00 crc kubenswrapper[4861]: I0310 19:14:00.527876 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2683e959-ecff-478e-aa0a-acf18f482d39-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "2683e959-ecff-478e-aa0a-acf18f482d39" (UID: "2683e959-ecff-478e-aa0a-acf18f482d39"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 10 19:14:00 crc kubenswrapper[4861]: I0310 19:14:00.528631 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/da655568-6f44-40e6-af1d-278b701fc52e-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "da655568-6f44-40e6-af1d-278b701fc52e" (UID: "da655568-6f44-40e6-af1d-278b701fc52e"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 10 19:14:00 crc kubenswrapper[4861]: E0310 19:14:00.529085 4861 configmap.go:193] Couldn't get configMap openstack/openstack-cell1-scripts: configmap "openstack-cell1-scripts" not found Mar 10 19:14:00 crc kubenswrapper[4861]: E0310 19:14:00.529167 4861 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ee15a5af-fb3a-45fc-bfa1-b9eb45418a32-operator-scripts podName:ee15a5af-fb3a-45fc-bfa1-b9eb45418a32 nodeName:}" failed. No retries permitted until 2026-03-10 19:14:02.529142635 +0000 UTC m=+1586.292578585 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/ee15a5af-fb3a-45fc-bfa1-b9eb45418a32-operator-scripts") pod "root-account-create-update-lrz47" (UID: "ee15a5af-fb3a-45fc-bfa1-b9eb45418a32") : configmap "openstack-cell1-scripts" not found Mar 10 19:14:00 crc kubenswrapper[4861]: I0310 19:14:00.530018 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2683e959-ecff-478e-aa0a-acf18f482d39-kolla-config" (OuterVolumeSpecName: "kolla-config") pod "2683e959-ecff-478e-aa0a-acf18f482d39" (UID: "2683e959-ecff-478e-aa0a-acf18f482d39"). InnerVolumeSpecName "kolla-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 10 19:14:00 crc kubenswrapper[4861]: I0310 19:14:00.537654 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2683e959-ecff-478e-aa0a-acf18f482d39-kube-api-access-tcwmd" (OuterVolumeSpecName: "kube-api-access-tcwmd") pod "2683e959-ecff-478e-aa0a-acf18f482d39" (UID: "2683e959-ecff-478e-aa0a-acf18f482d39"). InnerVolumeSpecName "kube-api-access-tcwmd". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 10 19:14:00 crc kubenswrapper[4861]: I0310 19:14:00.544784 4861 scope.go:117] "RemoveContainer" containerID="699e5be84dcd8a39bd7ffd4f5ee267f814286d3ca759b64a5eb0a24bee7e3fdd" Mar 10 19:14:00 crc kubenswrapper[4861]: I0310 19:14:00.550033 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/swift-proxy-6d6c7bd6d5-klrmq"] Mar 10 19:14:00 crc kubenswrapper[4861]: I0310 19:14:00.557807 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1b1af633-31f5-4658-bda4-fc9c010d6280-kube-api-access-mgpqk" (OuterVolumeSpecName: "kube-api-access-mgpqk") pod "1b1af633-31f5-4658-bda4-fc9c010d6280" (UID: "1b1af633-31f5-4658-bda4-fc9c010d6280"). InnerVolumeSpecName "kube-api-access-mgpqk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 10 19:14:00 crc kubenswrapper[4861]: I0310 19:14:00.571685 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d99vp\" (UniqueName: \"kubernetes.io/projected/91e5ef26-1440-4ffb-84d9-2bc5e0bed45c-kube-api-access-d99vp\") pod \"auto-csr-approver-29552834-dlwsb\" (UID: \"91e5ef26-1440-4ffb-84d9-2bc5e0bed45c\") " pod="openshift-infra/auto-csr-approver-29552834-dlwsb" Mar 10 19:14:00 crc kubenswrapper[4861]: I0310 19:14:00.573401 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovsdbserver-sb-0"] Mar 10 19:14:00 crc kubenswrapper[4861]: I0310 19:14:00.573586 4861 scope.go:117] "RemoveContainer" containerID="a06874ab7d0cf39ee7cdf29289ca00ec76f54115d05d5a197cc6e53230b06010" Mar 10 19:14:00 crc kubenswrapper[4861]: I0310 19:14:00.575011 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage01-crc" (OuterVolumeSpecName: "mysql-db") pod "2683e959-ecff-478e-aa0a-acf18f482d39" (UID: "2683e959-ecff-478e-aa0a-acf18f482d39"). InnerVolumeSpecName "local-storage01-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Mar 10 19:14:00 crc kubenswrapper[4861]: I0310 19:14:00.593889 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovsdbserver-sb-0"] Mar 10 19:14:00 crc kubenswrapper[4861]: I0310 19:14:00.601331 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mgpqk\" (UniqueName: \"kubernetes.io/projected/1b1af633-31f5-4658-bda4-fc9c010d6280-kube-api-access-mgpqk\") on node \"crc\" DevicePath \"\"" Mar 10 19:14:00 crc kubenswrapper[4861]: I0310 19:14:00.601356 4861 reconciler_common.go:293] "Volume detached for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/2683e959-ecff-478e-aa0a-acf18f482d39-config-data-default\") on node \"crc\" DevicePath \"\"" Mar 10 19:14:00 crc kubenswrapper[4861]: I0310 19:14:00.601364 4861 reconciler_common.go:293] "Volume detached for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/2683e959-ecff-478e-aa0a-acf18f482d39-config-data-generated\") on node \"crc\" DevicePath \"\"" Mar 10 19:14:00 crc kubenswrapper[4861]: I0310 19:14:00.601375 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tcwmd\" (UniqueName: \"kubernetes.io/projected/2683e959-ecff-478e-aa0a-acf18f482d39-kube-api-access-tcwmd\") on node \"crc\" DevicePath \"\"" Mar 10 19:14:00 crc kubenswrapper[4861]: I0310 19:14:00.601395 4861 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" " Mar 10 19:14:00 crc kubenswrapper[4861]: I0310 19:14:00.601405 4861 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/da655568-6f44-40e6-af1d-278b701fc52e-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 10 19:14:00 crc kubenswrapper[4861]: I0310 19:14:00.601414 4861 reconciler_common.go:293] "Volume detached for volume \"kolla-config\" 
(UniqueName: \"kubernetes.io/configmap/2683e959-ecff-478e-aa0a-acf18f482d39-kolla-config\") on node \"crc\" DevicePath \"\"" Mar 10 19:14:00 crc kubenswrapper[4861]: I0310 19:14:00.601424 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8p5tq\" (UniqueName: \"kubernetes.io/projected/da655568-6f44-40e6-af1d-278b701fc52e-kube-api-access-8p5tq\") on node \"crc\" DevicePath \"\"" Mar 10 19:14:00 crc kubenswrapper[4861]: I0310 19:14:00.601432 4861 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2683e959-ecff-478e-aa0a-acf18f482d39-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 10 19:14:00 crc kubenswrapper[4861]: I0310 19:14:00.620693 4861 scope.go:117] "RemoveContainer" containerID="985fbcd6585b2dfed67bfbcca47fd62b176667adcf86aa98e78a20b0b5c2fedd" Mar 10 19:14:00 crc kubenswrapper[4861]: I0310 19:14:00.642223 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2683e959-ecff-478e-aa0a-acf18f482d39-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2683e959-ecff-478e-aa0a-acf18f482d39" (UID: "2683e959-ecff-478e-aa0a-acf18f482d39"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 10 19:14:00 crc kubenswrapper[4861]: I0310 19:14:00.646558 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2683e959-ecff-478e-aa0a-acf18f482d39-galera-tls-certs" (OuterVolumeSpecName: "galera-tls-certs") pod "2683e959-ecff-478e-aa0a-acf18f482d39" (UID: "2683e959-ecff-478e-aa0a-acf18f482d39"). InnerVolumeSpecName "galera-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 10 19:14:00 crc kubenswrapper[4861]: I0310 19:14:00.648511 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1b1af633-31f5-4658-bda4-fc9c010d6280-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1b1af633-31f5-4658-bda4-fc9c010d6280" (UID: "1b1af633-31f5-4658-bda4-fc9c010d6280"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 10 19:14:00 crc kubenswrapper[4861]: I0310 19:14:00.658935 4861 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage01-crc" (UniqueName: "kubernetes.io/local-volume/local-storage01-crc") on node "crc" Mar 10 19:14:00 crc kubenswrapper[4861]: I0310 19:14:00.685005 4861 scope.go:117] "RemoveContainer" containerID="a06874ab7d0cf39ee7cdf29289ca00ec76f54115d05d5a197cc6e53230b06010" Mar 10 19:14:00 crc kubenswrapper[4861]: I0310 19:14:00.687944 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1b1af633-31f5-4658-bda4-fc9c010d6280-nova-novncproxy-tls-certs" (OuterVolumeSpecName: "nova-novncproxy-tls-certs") pod "1b1af633-31f5-4658-bda4-fc9c010d6280" (UID: "1b1af633-31f5-4658-bda4-fc9c010d6280"). InnerVolumeSpecName "nova-novncproxy-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 10 19:14:00 crc kubenswrapper[4861]: I0310 19:14:00.692087 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1b1af633-31f5-4658-bda4-fc9c010d6280-config-data" (OuterVolumeSpecName: "config-data") pod "1b1af633-31f5-4658-bda4-fc9c010d6280" (UID: "1b1af633-31f5-4658-bda4-fc9c010d6280"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 10 19:14:00 crc kubenswrapper[4861]: E0310 19:14:00.693372 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a06874ab7d0cf39ee7cdf29289ca00ec76f54115d05d5a197cc6e53230b06010\": container with ID starting with a06874ab7d0cf39ee7cdf29289ca00ec76f54115d05d5a197cc6e53230b06010 not found: ID does not exist" containerID="a06874ab7d0cf39ee7cdf29289ca00ec76f54115d05d5a197cc6e53230b06010" Mar 10 19:14:00 crc kubenswrapper[4861]: I0310 19:14:00.693409 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a06874ab7d0cf39ee7cdf29289ca00ec76f54115d05d5a197cc6e53230b06010"} err="failed to get container status \"a06874ab7d0cf39ee7cdf29289ca00ec76f54115d05d5a197cc6e53230b06010\": rpc error: code = NotFound desc = could not find container \"a06874ab7d0cf39ee7cdf29289ca00ec76f54115d05d5a197cc6e53230b06010\": container with ID starting with a06874ab7d0cf39ee7cdf29289ca00ec76f54115d05d5a197cc6e53230b06010 not found: ID does not exist" Mar 10 19:14:00 crc kubenswrapper[4861]: I0310 19:14:00.693434 4861 scope.go:117] "RemoveContainer" containerID="985fbcd6585b2dfed67bfbcca47fd62b176667adcf86aa98e78a20b0b5c2fedd" Mar 10 19:14:00 crc kubenswrapper[4861]: E0310 19:14:00.694014 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"985fbcd6585b2dfed67bfbcca47fd62b176667adcf86aa98e78a20b0b5c2fedd\": container with ID starting with 985fbcd6585b2dfed67bfbcca47fd62b176667adcf86aa98e78a20b0b5c2fedd not found: ID does not exist" containerID="985fbcd6585b2dfed67bfbcca47fd62b176667adcf86aa98e78a20b0b5c2fedd" Mar 10 19:14:00 crc kubenswrapper[4861]: I0310 19:14:00.694053 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"985fbcd6585b2dfed67bfbcca47fd62b176667adcf86aa98e78a20b0b5c2fedd"} err="failed 
to get container status \"985fbcd6585b2dfed67bfbcca47fd62b176667adcf86aa98e78a20b0b5c2fedd\": rpc error: code = NotFound desc = could not find container \"985fbcd6585b2dfed67bfbcca47fd62b176667adcf86aa98e78a20b0b5c2fedd\": container with ID starting with 985fbcd6585b2dfed67bfbcca47fd62b176667adcf86aa98e78a20b0b5c2fedd not found: ID does not exist" Mar 10 19:14:00 crc kubenswrapper[4861]: I0310 19:14:00.694065 4861 scope.go:117] "RemoveContainer" containerID="a06874ab7d0cf39ee7cdf29289ca00ec76f54115d05d5a197cc6e53230b06010" Mar 10 19:14:00 crc kubenswrapper[4861]: I0310 19:14:00.694278 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1b1af633-31f5-4658-bda4-fc9c010d6280-vencrypt-tls-certs" (OuterVolumeSpecName: "vencrypt-tls-certs") pod "1b1af633-31f5-4658-bda4-fc9c010d6280" (UID: "1b1af633-31f5-4658-bda4-fc9c010d6280"). InnerVolumeSpecName "vencrypt-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 10 19:14:00 crc kubenswrapper[4861]: I0310 19:14:00.694671 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a06874ab7d0cf39ee7cdf29289ca00ec76f54115d05d5a197cc6e53230b06010"} err="failed to get container status \"a06874ab7d0cf39ee7cdf29289ca00ec76f54115d05d5a197cc6e53230b06010\": rpc error: code = NotFound desc = could not find container \"a06874ab7d0cf39ee7cdf29289ca00ec76f54115d05d5a197cc6e53230b06010\": container with ID starting with a06874ab7d0cf39ee7cdf29289ca00ec76f54115d05d5a197cc6e53230b06010 not found: ID does not exist" Mar 10 19:14:00 crc kubenswrapper[4861]: I0310 19:14:00.694732 4861 scope.go:117] "RemoveContainer" containerID="985fbcd6585b2dfed67bfbcca47fd62b176667adcf86aa98e78a20b0b5c2fedd" Mar 10 19:14:00 crc kubenswrapper[4861]: I0310 19:14:00.695205 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"985fbcd6585b2dfed67bfbcca47fd62b176667adcf86aa98e78a20b0b5c2fedd"} err="failed to get 
container status \"985fbcd6585b2dfed67bfbcca47fd62b176667adcf86aa98e78a20b0b5c2fedd\": rpc error: code = NotFound desc = could not find container \"985fbcd6585b2dfed67bfbcca47fd62b176667adcf86aa98e78a20b0b5c2fedd\": container with ID starting with 985fbcd6585b2dfed67bfbcca47fd62b176667adcf86aa98e78a20b0b5c2fedd not found: ID does not exist" Mar 10 19:14:00 crc kubenswrapper[4861]: I0310 19:14:00.695262 4861 scope.go:117] "RemoveContainer" containerID="ca5fa990ae2a8c88edbef03bdd1903c7407d516f5028387bb035e3115a89eb99" Mar 10 19:14:00 crc kubenswrapper[4861]: I0310 19:14:00.703317 4861 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1b1af633-31f5-4658-bda4-fc9c010d6280-config-data\") on node \"crc\" DevicePath \"\"" Mar 10 19:14:00 crc kubenswrapper[4861]: I0310 19:14:00.703347 4861 reconciler_common.go:293] "Volume detached for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/1b1af633-31f5-4658-bda4-fc9c010d6280-nova-novncproxy-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 10 19:14:00 crc kubenswrapper[4861]: I0310 19:14:00.703358 4861 reconciler_common.go:293] "Volume detached for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" DevicePath \"\"" Mar 10 19:14:00 crc kubenswrapper[4861]: I0310 19:14:00.703367 4861 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2683e959-ecff-478e-aa0a-acf18f482d39-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 10 19:14:00 crc kubenswrapper[4861]: I0310 19:14:00.703375 4861 reconciler_common.go:293] "Volume detached for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/2683e959-ecff-478e-aa0a-acf18f482d39-galera-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 10 19:14:00 crc kubenswrapper[4861]: I0310 19:14:00.703384 4861 reconciler_common.go:293] "Volume detached for volume \"vencrypt-tls-certs\" 
(UniqueName: \"kubernetes.io/secret/1b1af633-31f5-4658-bda4-fc9c010d6280-vencrypt-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 10 19:14:00 crc kubenswrapper[4861]: I0310 19:14:00.703392 4861 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1b1af633-31f5-4658-bda4-fc9c010d6280-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 10 19:14:00 crc kubenswrapper[4861]: E0310 19:14:00.723409 4861 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="3ba3330672986fc68b7b9918ccf2611ea231c5cd8a95c24b12a274707371affc" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Mar 10 19:14:00 crc kubenswrapper[4861]: I0310 19:14:00.723870 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-f866-account-create-update-fl8b6" Mar 10 19:14:00 crc kubenswrapper[4861]: E0310 19:14:00.724568 4861 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="3ba3330672986fc68b7b9918ccf2611ea231c5cd8a95c24b12a274707371affc" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Mar 10 19:14:00 crc kubenswrapper[4861]: I0310 19:14:00.727303 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29552834-dlwsb" Mar 10 19:14:00 crc kubenswrapper[4861]: E0310 19:14:00.728694 4861 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="3ba3330672986fc68b7b9918ccf2611ea231c5cd8a95c24b12a274707371affc" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Mar 10 19:14:00 crc kubenswrapper[4861]: E0310 19:14:00.728736 4861 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-cell1-conductor-0" podUID="d523d4ef-a5fe-47c3-b174-b2aefc766755" containerName="nova-cell1-conductor-conductor" Mar 10 19:14:00 crc kubenswrapper[4861]: I0310 19:14:00.732057 4861 scope.go:117] "RemoveContainer" containerID="a2e6bbdd67e19fe250227c0eadee02f0b10d11543800ab23539751037ecb1973" Mar 10 19:14:00 crc kubenswrapper[4861]: I0310 19:14:00.736350 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-7d79dbd48c-d74zl" Mar 10 19:14:00 crc kubenswrapper[4861]: I0310 19:14:00.750835 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cf7a-account-create-update-wf5cp" Mar 10 19:14:00 crc kubenswrapper[4861]: I0310 19:14:00.770343 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-52e5-account-create-update-q8wl5" Mar 10 19:14:00 crc kubenswrapper[4861]: I0310 19:14:00.906935 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1d0a3228-ad14-4cd8-8c5a-969bad245ecf-operator-scripts\") pod \"1d0a3228-ad14-4cd8-8c5a-969bad245ecf\" (UID: \"1d0a3228-ad14-4cd8-8c5a-969bad245ecf\") " Mar 10 19:14:00 crc kubenswrapper[4861]: I0310 19:14:00.906984 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9m9bt\" (UniqueName: \"kubernetes.io/projected/b7c0d25e-1650-4ce9-8cb7-71d8d9dceba1-kube-api-access-9m9bt\") pod \"b7c0d25e-1650-4ce9-8cb7-71d8d9dceba1\" (UID: \"b7c0d25e-1650-4ce9-8cb7-71d8d9dceba1\") " Mar 10 19:14:00 crc kubenswrapper[4861]: I0310 19:14:00.907003 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fpltx\" (UniqueName: \"kubernetes.io/projected/1dff60e3-9ca8-461b-8d7e-018b626677e8-kube-api-access-fpltx\") pod \"1dff60e3-9ca8-461b-8d7e-018b626677e8\" (UID: \"1dff60e3-9ca8-461b-8d7e-018b626677e8\") " Mar 10 19:14:00 crc kubenswrapper[4861]: I0310 19:14:00.907749 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b7c0d25e-1650-4ce9-8cb7-71d8d9dceba1-operator-scripts\") pod \"b7c0d25e-1650-4ce9-8cb7-71d8d9dceba1\" (UID: \"b7c0d25e-1650-4ce9-8cb7-71d8d9dceba1\") " Mar 10 19:14:00 crc kubenswrapper[4861]: I0310 19:14:00.907799 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1dff60e3-9ca8-461b-8d7e-018b626677e8-logs\") pod \"1dff60e3-9ca8-461b-8d7e-018b626677e8\" (UID: \"1dff60e3-9ca8-461b-8d7e-018b626677e8\") " Mar 10 19:14:00 crc kubenswrapper[4861]: I0310 19:14:00.907827 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1dff60e3-9ca8-461b-8d7e-018b626677e8-config-data\") pod \"1dff60e3-9ca8-461b-8d7e-018b626677e8\" (UID: \"1dff60e3-9ca8-461b-8d7e-018b626677e8\") " Mar 10 19:14:00 crc kubenswrapper[4861]: I0310 19:14:00.907890 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1dff60e3-9ca8-461b-8d7e-018b626677e8-combined-ca-bundle\") pod \"1dff60e3-9ca8-461b-8d7e-018b626677e8\" (UID: \"1dff60e3-9ca8-461b-8d7e-018b626677e8\") " Mar 10 19:14:00 crc kubenswrapper[4861]: I0310 19:14:00.907918 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6vs7x\" (UniqueName: \"kubernetes.io/projected/1d0a3228-ad14-4cd8-8c5a-969bad245ecf-kube-api-access-6vs7x\") pod \"1d0a3228-ad14-4cd8-8c5a-969bad245ecf\" (UID: \"1d0a3228-ad14-4cd8-8c5a-969bad245ecf\") " Mar 10 19:14:00 crc kubenswrapper[4861]: I0310 19:14:00.907970 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dbb40fdc-8e7e-4a1e-b7a0-5456081ba068-operator-scripts\") pod \"dbb40fdc-8e7e-4a1e-b7a0-5456081ba068\" (UID: \"dbb40fdc-8e7e-4a1e-b7a0-5456081ba068\") " Mar 10 19:14:00 crc kubenswrapper[4861]: I0310 19:14:00.907994 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4lhkd\" (UniqueName: \"kubernetes.io/projected/dbb40fdc-8e7e-4a1e-b7a0-5456081ba068-kube-api-access-4lhkd\") pod \"dbb40fdc-8e7e-4a1e-b7a0-5456081ba068\" (UID: \"dbb40fdc-8e7e-4a1e-b7a0-5456081ba068\") " Mar 10 19:14:00 crc kubenswrapper[4861]: I0310 19:14:00.908020 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1dff60e3-9ca8-461b-8d7e-018b626677e8-config-data-custom\") pod \"1dff60e3-9ca8-461b-8d7e-018b626677e8\" (UID: 
\"1dff60e3-9ca8-461b-8d7e-018b626677e8\") " Mar 10 19:14:00 crc kubenswrapper[4861]: I0310 19:14:00.908395 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1d0a3228-ad14-4cd8-8c5a-969bad245ecf-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "1d0a3228-ad14-4cd8-8c5a-969bad245ecf" (UID: "1d0a3228-ad14-4cd8-8c5a-969bad245ecf"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 10 19:14:00 crc kubenswrapper[4861]: I0310 19:14:00.911848 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dbb40fdc-8e7e-4a1e-b7a0-5456081ba068-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "dbb40fdc-8e7e-4a1e-b7a0-5456081ba068" (UID: "dbb40fdc-8e7e-4a1e-b7a0-5456081ba068"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 10 19:14:00 crc kubenswrapper[4861]: I0310 19:14:00.912807 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b7c0d25e-1650-4ce9-8cb7-71d8d9dceba1-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "b7c0d25e-1650-4ce9-8cb7-71d8d9dceba1" (UID: "b7c0d25e-1650-4ce9-8cb7-71d8d9dceba1"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 10 19:14:00 crc kubenswrapper[4861]: I0310 19:14:00.912907 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1dff60e3-9ca8-461b-8d7e-018b626677e8-kube-api-access-fpltx" (OuterVolumeSpecName: "kube-api-access-fpltx") pod "1dff60e3-9ca8-461b-8d7e-018b626677e8" (UID: "1dff60e3-9ca8-461b-8d7e-018b626677e8"). InnerVolumeSpecName "kube-api-access-fpltx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 10 19:14:00 crc kubenswrapper[4861]: I0310 19:14:00.913072 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1dff60e3-9ca8-461b-8d7e-018b626677e8-logs" (OuterVolumeSpecName: "logs") pod "1dff60e3-9ca8-461b-8d7e-018b626677e8" (UID: "1dff60e3-9ca8-461b-8d7e-018b626677e8"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 10 19:14:00 crc kubenswrapper[4861]: I0310 19:14:00.915329 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1dff60e3-9ca8-461b-8d7e-018b626677e8-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "1dff60e3-9ca8-461b-8d7e-018b626677e8" (UID: "1dff60e3-9ca8-461b-8d7e-018b626677e8"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 10 19:14:00 crc kubenswrapper[4861]: I0310 19:14:00.916879 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d0a3228-ad14-4cd8-8c5a-969bad245ecf-kube-api-access-6vs7x" (OuterVolumeSpecName: "kube-api-access-6vs7x") pod "1d0a3228-ad14-4cd8-8c5a-969bad245ecf" (UID: "1d0a3228-ad14-4cd8-8c5a-969bad245ecf"). InnerVolumeSpecName "kube-api-access-6vs7x". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 10 19:14:00 crc kubenswrapper[4861]: I0310 19:14:00.918466 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dbb40fdc-8e7e-4a1e-b7a0-5456081ba068-kube-api-access-4lhkd" (OuterVolumeSpecName: "kube-api-access-4lhkd") pod "dbb40fdc-8e7e-4a1e-b7a0-5456081ba068" (UID: "dbb40fdc-8e7e-4a1e-b7a0-5456081ba068"). InnerVolumeSpecName "kube-api-access-4lhkd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 10 19:14:00 crc kubenswrapper[4861]: I0310 19:14:00.918544 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b7c0d25e-1650-4ce9-8cb7-71d8d9dceba1-kube-api-access-9m9bt" (OuterVolumeSpecName: "kube-api-access-9m9bt") pod "b7c0d25e-1650-4ce9-8cb7-71d8d9dceba1" (UID: "b7c0d25e-1650-4ce9-8cb7-71d8d9dceba1"). InnerVolumeSpecName "kube-api-access-9m9bt". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 10 19:14:00 crc kubenswrapper[4861]: I0310 19:14:00.968408 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1dff60e3-9ca8-461b-8d7e-018b626677e8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1dff60e3-9ca8-461b-8d7e-018b626677e8" (UID: "1dff60e3-9ca8-461b-8d7e-018b626677e8"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 10 19:14:01 crc kubenswrapper[4861]: I0310 19:14:00.989811 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1dff60e3-9ca8-461b-8d7e-018b626677e8-config-data" (OuterVolumeSpecName: "config-data") pod "1dff60e3-9ca8-461b-8d7e-018b626677e8" (UID: "1dff60e3-9ca8-461b-8d7e-018b626677e8"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 10 19:14:01 crc kubenswrapper[4861]: I0310 19:14:01.009411 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1626ff9d-fded-4fbc-aa2c-f6f984bcf4ea" path="/var/lib/kubelet/pods/1626ff9d-fded-4fbc-aa2c-f6f984bcf4ea/volumes" Mar 10 19:14:01 crc kubenswrapper[4861]: I0310 19:14:01.010055 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="187a3484-7a9d-499a-91d0-1867ed682d05" path="/var/lib/kubelet/pods/187a3484-7a9d-499a-91d0-1867ed682d05/volumes" Mar 10 19:14:01 crc kubenswrapper[4861]: I0310 19:14:01.010679 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="533092ca-8a4b-4005-909c-32736cde1a1e" path="/var/lib/kubelet/pods/533092ca-8a4b-4005-909c-32736cde1a1e/volumes" Mar 10 19:14:01 crc kubenswrapper[4861]: I0310 19:14:01.011694 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9007f85d-dd41-49ed-9a6f-c2b09b26fad2" path="/var/lib/kubelet/pods/9007f85d-dd41-49ed-9a6f-c2b09b26fad2/volumes" Mar 10 19:14:01 crc kubenswrapper[4861]: I0310 19:14:01.012207 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="97db979f-75cb-4e7e-9dc6-0c65f39fef8e" path="/var/lib/kubelet/pods/97db979f-75cb-4e7e-9dc6-0c65f39fef8e/volumes" Mar 10 19:14:01 crc kubenswrapper[4861]: I0310 19:14:01.012840 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bb836e5b-f1a1-4d7a-8de0-03cddd650c4a" path="/var/lib/kubelet/pods/bb836e5b-f1a1-4d7a-8de0-03cddd650c4a/volumes" Mar 10 19:14:01 crc kubenswrapper[4861]: E0310 19:14:01.014319 4861 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 0904b4b9e3cd9ce225079b0b912d86c870f80dd82c4739bd66db6fc72350b2da is running failed: container process not found" containerID="0904b4b9e3cd9ce225079b0b912d86c870f80dd82c4739bd66db6fc72350b2da" 
cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Mar 10 19:14:01 crc kubenswrapper[4861]: E0310 19:14:01.015778 4861 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 0904b4b9e3cd9ce225079b0b912d86c870f80dd82c4739bd66db6fc72350b2da is running failed: container process not found" containerID="0904b4b9e3cd9ce225079b0b912d86c870f80dd82c4739bd66db6fc72350b2da" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Mar 10 19:14:01 crc kubenswrapper[4861]: E0310 19:14:01.017753 4861 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 0904b4b9e3cd9ce225079b0b912d86c870f80dd82c4739bd66db6fc72350b2da is running failed: container process not found" containerID="0904b4b9e3cd9ce225079b0b912d86c870f80dd82c4739bd66db6fc72350b2da" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Mar 10 19:14:01 crc kubenswrapper[4861]: E0310 19:14:01.017785 4861 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 0904b4b9e3cd9ce225079b0b912d86c870f80dd82c4739bd66db6fc72350b2da is running failed: container process not found" probeType="Readiness" pod="openstack/nova-cell0-conductor-0" podUID="9f3ba611-9f83-40a6-9282-d4e3b0ccfbfd" containerName="nova-cell0-conductor-conductor" Mar 10 19:14:01 crc kubenswrapper[4861]: I0310 19:14:01.017925 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4lhkd\" (UniqueName: \"kubernetes.io/projected/dbb40fdc-8e7e-4a1e-b7a0-5456081ba068-kube-api-access-4lhkd\") on node \"crc\" DevicePath \"\"" Mar 10 19:14:01 crc kubenswrapper[4861]: I0310 19:14:01.017946 4861 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1dff60e3-9ca8-461b-8d7e-018b626677e8-config-data-custom\") on node \"crc\" DevicePath \"\"" Mar 10 
19:14:01 crc kubenswrapper[4861]: I0310 19:14:01.017956 4861 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1d0a3228-ad14-4cd8-8c5a-969bad245ecf-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 10 19:14:01 crc kubenswrapper[4861]: I0310 19:14:01.017964 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9m9bt\" (UniqueName: \"kubernetes.io/projected/b7c0d25e-1650-4ce9-8cb7-71d8d9dceba1-kube-api-access-9m9bt\") on node \"crc\" DevicePath \"\"" Mar 10 19:14:01 crc kubenswrapper[4861]: I0310 19:14:01.017972 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fpltx\" (UniqueName: \"kubernetes.io/projected/1dff60e3-9ca8-461b-8d7e-018b626677e8-kube-api-access-fpltx\") on node \"crc\" DevicePath \"\"" Mar 10 19:14:01 crc kubenswrapper[4861]: I0310 19:14:01.017982 4861 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b7c0d25e-1650-4ce9-8cb7-71d8d9dceba1-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 10 19:14:01 crc kubenswrapper[4861]: I0310 19:14:01.017990 4861 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1dff60e3-9ca8-461b-8d7e-018b626677e8-logs\") on node \"crc\" DevicePath \"\"" Mar 10 19:14:01 crc kubenswrapper[4861]: I0310 19:14:01.017998 4861 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1dff60e3-9ca8-461b-8d7e-018b626677e8-config-data\") on node \"crc\" DevicePath \"\"" Mar 10 19:14:01 crc kubenswrapper[4861]: I0310 19:14:01.018006 4861 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1dff60e3-9ca8-461b-8d7e-018b626677e8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 10 19:14:01 crc kubenswrapper[4861]: I0310 19:14:01.018014 4861 reconciler_common.go:293] "Volume detached for 
volume \"kube-api-access-6vs7x\" (UniqueName: \"kubernetes.io/projected/1d0a3228-ad14-4cd8-8c5a-969bad245ecf-kube-api-access-6vs7x\") on node \"crc\" DevicePath \"\"" Mar 10 19:14:01 crc kubenswrapper[4861]: I0310 19:14:01.018022 4861 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dbb40fdc-8e7e-4a1e-b7a0-5456081ba068-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 10 19:14:01 crc kubenswrapper[4861]: I0310 19:14:01.019121 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d0c73a13-57f7-43aa-8e0a-ba36a3195653" path="/var/lib/kubelet/pods/d0c73a13-57f7-43aa-8e0a-ba36a3195653/volumes" Mar 10 19:14:01 crc kubenswrapper[4861]: I0310 19:14:01.019841 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e61c59b6-f849-406f-8680-cb83de220b46" path="/var/lib/kubelet/pods/e61c59b6-f849-406f-8680-cb83de220b46/volumes" Mar 10 19:14:01 crc kubenswrapper[4861]: E0310 19:14:01.120223 4861 configmap.go:193] Couldn't get configMap openstack/rabbitmq-cell1-config-data: configmap "rabbitmq-cell1-config-data" not found Mar 10 19:14:01 crc kubenswrapper[4861]: E0310 19:14:01.120614 4861 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9fa4a97d-682a-40eb-93e0-5f5167ddb0a0-config-data podName:9fa4a97d-682a-40eb-93e0-5f5167ddb0a0 nodeName:}" failed. No retries permitted until 2026-03-10 19:14:05.120595927 +0000 UTC m=+1588.884031887 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/9fa4a97d-682a-40eb-93e0-5f5167ddb0a0-config-data") pod "rabbitmq-cell1-server-0" (UID: "9fa4a97d-682a-40eb-93e0-5f5167ddb0a0") : configmap "rabbitmq-cell1-config-data" not found Mar 10 19:14:01 crc kubenswrapper[4861]: I0310 19:14:01.177504 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Mar 10 19:14:01 crc kubenswrapper[4861]: I0310 19:14:01.220559 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ab499a55-1919-491f-8dc6-12344757201d-config-data-custom\") pod \"ab499a55-1919-491f-8dc6-12344757201d\" (UID: \"ab499a55-1919-491f-8dc6-12344757201d\") " Mar 10 19:14:01 crc kubenswrapper[4861]: I0310 19:14:01.220610 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ab499a55-1919-491f-8dc6-12344757201d-scripts\") pod \"ab499a55-1919-491f-8dc6-12344757201d\" (UID: \"ab499a55-1919-491f-8dc6-12344757201d\") " Mar 10 19:14:01 crc kubenswrapper[4861]: I0310 19:14:01.220674 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5npx2\" (UniqueName: \"kubernetes.io/projected/ab499a55-1919-491f-8dc6-12344757201d-kube-api-access-5npx2\") pod \"ab499a55-1919-491f-8dc6-12344757201d\" (UID: \"ab499a55-1919-491f-8dc6-12344757201d\") " Mar 10 19:14:01 crc kubenswrapper[4861]: I0310 19:14:01.220700 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab499a55-1919-491f-8dc6-12344757201d-combined-ca-bundle\") pod \"ab499a55-1919-491f-8dc6-12344757201d\" (UID: \"ab499a55-1919-491f-8dc6-12344757201d\") " Mar 10 19:14:01 crc kubenswrapper[4861]: I0310 19:14:01.220747 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ab499a55-1919-491f-8dc6-12344757201d-config-data\") pod \"ab499a55-1919-491f-8dc6-12344757201d\" (UID: \"ab499a55-1919-491f-8dc6-12344757201d\") " Mar 10 19:14:01 crc kubenswrapper[4861]: I0310 19:14:01.220782 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" 
(UniqueName: \"kubernetes.io/host-path/ab499a55-1919-491f-8dc6-12344757201d-etc-machine-id\") pod \"ab499a55-1919-491f-8dc6-12344757201d\" (UID: \"ab499a55-1919-491f-8dc6-12344757201d\") " Mar 10 19:14:01 crc kubenswrapper[4861]: I0310 19:14:01.221104 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ab499a55-1919-491f-8dc6-12344757201d-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "ab499a55-1919-491f-8dc6-12344757201d" (UID: "ab499a55-1919-491f-8dc6-12344757201d"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 10 19:14:01 crc kubenswrapper[4861]: I0310 19:14:01.233679 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-7d66d9f78-7w6cc" Mar 10 19:14:01 crc kubenswrapper[4861]: I0310 19:14:01.233699 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ab499a55-1919-491f-8dc6-12344757201d-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "ab499a55-1919-491f-8dc6-12344757201d" (UID: "ab499a55-1919-491f-8dc6-12344757201d"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 10 19:14:01 crc kubenswrapper[4861]: I0310 19:14:01.242861 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ab499a55-1919-491f-8dc6-12344757201d-scripts" (OuterVolumeSpecName: "scripts") pod "ab499a55-1919-491f-8dc6-12344757201d" (UID: "ab499a55-1919-491f-8dc6-12344757201d"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 10 19:14:01 crc kubenswrapper[4861]: I0310 19:14:01.251911 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-e6bf-account-create-update-rgnpm" Mar 10 19:14:01 crc kubenswrapper[4861]: I0310 19:14:01.258457 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Mar 10 19:14:01 crc kubenswrapper[4861]: I0310 19:14:01.261104 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ab499a55-1919-491f-8dc6-12344757201d-kube-api-access-5npx2" (OuterVolumeSpecName: "kube-api-access-5npx2") pod "ab499a55-1919-491f-8dc6-12344757201d" (UID: "ab499a55-1919-491f-8dc6-12344757201d"). InnerVolumeSpecName "kube-api-access-5npx2". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 10 19:14:01 crc kubenswrapper[4861]: I0310 19:14:01.261167 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-495qf"] Mar 10 19:14:01 crc kubenswrapper[4861]: E0310 19:14:01.261535 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1b1af633-31f5-4658-bda4-fc9c010d6280" containerName="nova-cell1-novncproxy-novncproxy" Mar 10 19:14:01 crc kubenswrapper[4861]: I0310 19:14:01.261551 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="1b1af633-31f5-4658-bda4-fc9c010d6280" containerName="nova-cell1-novncproxy-novncproxy" Mar 10 19:14:01 crc kubenswrapper[4861]: E0310 19:14:01.261563 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2683e959-ecff-478e-aa0a-acf18f482d39" containerName="mysql-bootstrap" Mar 10 19:14:01 crc kubenswrapper[4861]: I0310 19:14:01.261569 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="2683e959-ecff-478e-aa0a-acf18f482d39" containerName="mysql-bootstrap" Mar 10 19:14:01 crc kubenswrapper[4861]: E0310 19:14:01.261591 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1dff60e3-9ca8-461b-8d7e-018b626677e8" containerName="barbican-worker-log" Mar 10 19:14:01 crc kubenswrapper[4861]: I0310 19:14:01.261597 4861 
state_mem.go:107] "Deleted CPUSet assignment" podUID="1dff60e3-9ca8-461b-8d7e-018b626677e8" containerName="barbican-worker-log" Mar 10 19:14:01 crc kubenswrapper[4861]: E0310 19:14:01.261606 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1dff60e3-9ca8-461b-8d7e-018b626677e8" containerName="barbican-worker" Mar 10 19:14:01 crc kubenswrapper[4861]: I0310 19:14:01.261613 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="1dff60e3-9ca8-461b-8d7e-018b626677e8" containerName="barbican-worker" Mar 10 19:14:01 crc kubenswrapper[4861]: E0310 19:14:01.261621 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ab499a55-1919-491f-8dc6-12344757201d" containerName="probe" Mar 10 19:14:01 crc kubenswrapper[4861]: I0310 19:14:01.261627 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="ab499a55-1919-491f-8dc6-12344757201d" containerName="probe" Mar 10 19:14:01 crc kubenswrapper[4861]: E0310 19:14:01.261680 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0fd5b5c7-5272-4d6a-b6ab-94ae5b644a3b" containerName="barbican-keystone-listener" Mar 10 19:14:01 crc kubenswrapper[4861]: I0310 19:14:01.261687 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="0fd5b5c7-5272-4d6a-b6ab-94ae5b644a3b" containerName="barbican-keystone-listener" Mar 10 19:14:01 crc kubenswrapper[4861]: E0310 19:14:01.261696 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2683e959-ecff-478e-aa0a-acf18f482d39" containerName="galera" Mar 10 19:14:01 crc kubenswrapper[4861]: I0310 19:14:01.261713 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="2683e959-ecff-478e-aa0a-acf18f482d39" containerName="galera" Mar 10 19:14:01 crc kubenswrapper[4861]: E0310 19:14:01.261727 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0fd5b5c7-5272-4d6a-b6ab-94ae5b644a3b" containerName="barbican-keystone-listener-log" Mar 10 19:14:01 crc kubenswrapper[4861]: I0310 19:14:01.261733 4861 
state_mem.go:107] "Deleted CPUSet assignment" podUID="0fd5b5c7-5272-4d6a-b6ab-94ae5b644a3b" containerName="barbican-keystone-listener-log" Mar 10 19:14:01 crc kubenswrapper[4861]: E0310 19:14:01.261744 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ab499a55-1919-491f-8dc6-12344757201d" containerName="cinder-scheduler" Mar 10 19:14:01 crc kubenswrapper[4861]: I0310 19:14:01.261750 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="ab499a55-1919-491f-8dc6-12344757201d" containerName="cinder-scheduler" Mar 10 19:14:01 crc kubenswrapper[4861]: E0310 19:14:01.261758 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9f3ba611-9f83-40a6-9282-d4e3b0ccfbfd" containerName="nova-cell0-conductor-conductor" Mar 10 19:14:01 crc kubenswrapper[4861]: I0310 19:14:01.261763 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="9f3ba611-9f83-40a6-9282-d4e3b0ccfbfd" containerName="nova-cell0-conductor-conductor" Mar 10 19:14:01 crc kubenswrapper[4861]: I0310 19:14:01.261944 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="0fd5b5c7-5272-4d6a-b6ab-94ae5b644a3b" containerName="barbican-keystone-listener-log" Mar 10 19:14:01 crc kubenswrapper[4861]: I0310 19:14:01.261955 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="0fd5b5c7-5272-4d6a-b6ab-94ae5b644a3b" containerName="barbican-keystone-listener" Mar 10 19:14:01 crc kubenswrapper[4861]: I0310 19:14:01.261969 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="1dff60e3-9ca8-461b-8d7e-018b626677e8" containerName="barbican-worker-log" Mar 10 19:14:01 crc kubenswrapper[4861]: I0310 19:14:01.261982 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="9f3ba611-9f83-40a6-9282-d4e3b0ccfbfd" containerName="nova-cell0-conductor-conductor" Mar 10 19:14:01 crc kubenswrapper[4861]: I0310 19:14:01.261995 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="ab499a55-1919-491f-8dc6-12344757201d" 
containerName="cinder-scheduler" Mar 10 19:14:01 crc kubenswrapper[4861]: I0310 19:14:01.262005 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="1dff60e3-9ca8-461b-8d7e-018b626677e8" containerName="barbican-worker" Mar 10 19:14:01 crc kubenswrapper[4861]: I0310 19:14:01.262013 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="ab499a55-1919-491f-8dc6-12344757201d" containerName="probe" Mar 10 19:14:01 crc kubenswrapper[4861]: I0310 19:14:01.262021 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="2683e959-ecff-478e-aa0a-acf18f482d39" containerName="galera" Mar 10 19:14:01 crc kubenswrapper[4861]: I0310 19:14:01.262028 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="1b1af633-31f5-4658-bda4-fc9c010d6280" containerName="nova-cell1-novncproxy-novncproxy" Mar 10 19:14:01 crc kubenswrapper[4861]: I0310 19:14:01.262576 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-495qf" Mar 10 19:14:01 crc kubenswrapper[4861]: I0310 19:14:01.268565 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret" Mar 10 19:14:01 crc kubenswrapper[4861]: I0310 19:14:01.287585 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-495qf"] Mar 10 19:14:01 crc kubenswrapper[4861]: I0310 19:14:01.288091 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"1b1af633-31f5-4658-bda4-fc9c010d6280","Type":"ContainerDied","Data":"78ebc06f82f87acf408c5782b175e02038202680597f4e968740be0ad4b62432"} Mar 10 19:14:01 crc kubenswrapper[4861]: I0310 19:14:01.288140 4861 scope.go:117] "RemoveContainer" containerID="d0530b4bb88089a565eb73c29ff174432d92837c87c92e1b218d1e31d36d1ebd" Mar 10 19:14:01 crc kubenswrapper[4861]: I0310 19:14:01.288249 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Mar 10 19:14:01 crc kubenswrapper[4861]: I0310 19:14:01.296045 4861 generic.go:334] "Generic (PLEG): container finished" podID="0fd5b5c7-5272-4d6a-b6ab-94ae5b644a3b" containerID="25d62dcf2f9ee02dabc824de069bb687d2e04cbd1aeea376d842ffcff19fc683" exitCode=0 Mar 10 19:14:01 crc kubenswrapper[4861]: I0310 19:14:01.296100 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-7d66d9f78-7w6cc" event={"ID":"0fd5b5c7-5272-4d6a-b6ab-94ae5b644a3b","Type":"ContainerDied","Data":"25d62dcf2f9ee02dabc824de069bb687d2e04cbd1aeea376d842ffcff19fc683"} Mar 10 19:14:01 crc kubenswrapper[4861]: I0310 19:14:01.296124 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-7d66d9f78-7w6cc" event={"ID":"0fd5b5c7-5272-4d6a-b6ab-94ae5b644a3b","Type":"ContainerDied","Data":"64e9ce47a0560f1ca8e8e249135ed7ec90f15608963f6de1fa05e5a5119e978e"} Mar 10 19:14:01 crc kubenswrapper[4861]: I0310 19:14:01.296185 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-7d66d9f78-7w6cc" Mar 10 19:14:01 crc kubenswrapper[4861]: I0310 19:14:01.313474 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-e6bf-account-create-update-rgnpm" Mar 10 19:14:01 crc kubenswrapper[4861]: I0310 19:14:01.313823 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-e6bf-account-create-update-rgnpm" event={"ID":"87dcb8b8-2ebd-44e9-a15f-e995495d8b32","Type":"ContainerDied","Data":"f4683c8018949c3dddaea89d49d373b5d35be145a9710fb149b4d07ae31d9703"} Mar 10 19:14:01 crc kubenswrapper[4861]: I0310 19:14:01.323882 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0fd5b5c7-5272-4d6a-b6ab-94ae5b644a3b-combined-ca-bundle\") pod \"0fd5b5c7-5272-4d6a-b6ab-94ae5b644a3b\" (UID: \"0fd5b5c7-5272-4d6a-b6ab-94ae5b644a3b\") " Mar 10 19:14:01 crc kubenswrapper[4861]: I0310 19:14:01.323957 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-svk56\" (UniqueName: \"kubernetes.io/projected/87dcb8b8-2ebd-44e9-a15f-e995495d8b32-kube-api-access-svk56\") pod \"87dcb8b8-2ebd-44e9-a15f-e995495d8b32\" (UID: \"87dcb8b8-2ebd-44e9-a15f-e995495d8b32\") " Mar 10 19:14:01 crc kubenswrapper[4861]: I0310 19:14:01.323989 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9f3ba611-9f83-40a6-9282-d4e3b0ccfbfd-config-data\") pod \"9f3ba611-9f83-40a6-9282-d4e3b0ccfbfd\" (UID: \"9f3ba611-9f83-40a6-9282-d4e3b0ccfbfd\") " Mar 10 19:14:01 crc kubenswrapper[4861]: I0310 19:14:01.324097 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n6j4d\" (UniqueName: \"kubernetes.io/projected/0fd5b5c7-5272-4d6a-b6ab-94ae5b644a3b-kube-api-access-n6j4d\") pod \"0fd5b5c7-5272-4d6a-b6ab-94ae5b644a3b\" (UID: \"0fd5b5c7-5272-4d6a-b6ab-94ae5b644a3b\") " Mar 10 19:14:01 crc kubenswrapper[4861]: I0310 19:14:01.324119 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"logs\" (UniqueName: \"kubernetes.io/empty-dir/0fd5b5c7-5272-4d6a-b6ab-94ae5b644a3b-logs\") pod \"0fd5b5c7-5272-4d6a-b6ab-94ae5b644a3b\" (UID: \"0fd5b5c7-5272-4d6a-b6ab-94ae5b644a3b\") " Mar 10 19:14:01 crc kubenswrapper[4861]: I0310 19:14:01.324151 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/87dcb8b8-2ebd-44e9-a15f-e995495d8b32-operator-scripts\") pod \"87dcb8b8-2ebd-44e9-a15f-e995495d8b32\" (UID: \"87dcb8b8-2ebd-44e9-a15f-e995495d8b32\") " Mar 10 19:14:01 crc kubenswrapper[4861]: I0310 19:14:01.324218 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0fd5b5c7-5272-4d6a-b6ab-94ae5b644a3b-config-data-custom\") pod \"0fd5b5c7-5272-4d6a-b6ab-94ae5b644a3b\" (UID: \"0fd5b5c7-5272-4d6a-b6ab-94ae5b644a3b\") " Mar 10 19:14:01 crc kubenswrapper[4861]: I0310 19:14:01.324261 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9f3ba611-9f83-40a6-9282-d4e3b0ccfbfd-combined-ca-bundle\") pod \"9f3ba611-9f83-40a6-9282-d4e3b0ccfbfd\" (UID: \"9f3ba611-9f83-40a6-9282-d4e3b0ccfbfd\") " Mar 10 19:14:01 crc kubenswrapper[4861]: I0310 19:14:01.324282 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxrfj\" (UniqueName: \"kubernetes.io/projected/9f3ba611-9f83-40a6-9282-d4e3b0ccfbfd-kube-api-access-wxrfj\") pod \"9f3ba611-9f83-40a6-9282-d4e3b0ccfbfd\" (UID: \"9f3ba611-9f83-40a6-9282-d4e3b0ccfbfd\") " Mar 10 19:14:01 crc kubenswrapper[4861]: I0310 19:14:01.324304 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0fd5b5c7-5272-4d6a-b6ab-94ae5b644a3b-config-data\") pod \"0fd5b5c7-5272-4d6a-b6ab-94ae5b644a3b\" (UID: \"0fd5b5c7-5272-4d6a-b6ab-94ae5b644a3b\") " Mar 10 19:14:01 crc 
kubenswrapper[4861]: I0310 19:14:01.324627 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cc36f34f-6f91-4076-bc3e-71e2fc0e797e-operator-scripts\") pod \"root-account-create-update-495qf\" (UID: \"cc36f34f-6f91-4076-bc3e-71e2fc0e797e\") " pod="openstack/root-account-create-update-495qf" Mar 10 19:14:01 crc kubenswrapper[4861]: I0310 19:14:01.324753 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rnktv\" (UniqueName: \"kubernetes.io/projected/cc36f34f-6f91-4076-bc3e-71e2fc0e797e-kube-api-access-rnktv\") pod \"root-account-create-update-495qf\" (UID: \"cc36f34f-6f91-4076-bc3e-71e2fc0e797e\") " pod="openstack/root-account-create-update-495qf" Mar 10 19:14:01 crc kubenswrapper[4861]: I0310 19:14:01.324811 4861 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ab499a55-1919-491f-8dc6-12344757201d-config-data-custom\") on node \"crc\" DevicePath \"\"" Mar 10 19:14:01 crc kubenswrapper[4861]: I0310 19:14:01.324821 4861 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ab499a55-1919-491f-8dc6-12344757201d-scripts\") on node \"crc\" DevicePath \"\"" Mar 10 19:14:01 crc kubenswrapper[4861]: I0310 19:14:01.324831 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5npx2\" (UniqueName: \"kubernetes.io/projected/ab499a55-1919-491f-8dc6-12344757201d-kube-api-access-5npx2\") on node \"crc\" DevicePath \"\"" Mar 10 19:14:01 crc kubenswrapper[4861]: I0310 19:14:01.324856 4861 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ab499a55-1919-491f-8dc6-12344757201d-etc-machine-id\") on node \"crc\" DevicePath \"\"" Mar 10 19:14:01 crc kubenswrapper[4861]: I0310 19:14:01.324944 4861 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0fd5b5c7-5272-4d6a-b6ab-94ae5b644a3b-logs" (OuterVolumeSpecName: "logs") pod "0fd5b5c7-5272-4d6a-b6ab-94ae5b644a3b" (UID: "0fd5b5c7-5272-4d6a-b6ab-94ae5b644a3b"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 10 19:14:01 crc kubenswrapper[4861]: I0310 19:14:01.326512 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87dcb8b8-2ebd-44e9-a15f-e995495d8b32-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "87dcb8b8-2ebd-44e9-a15f-e995495d8b32" (UID: "87dcb8b8-2ebd-44e9-a15f-e995495d8b32"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 10 19:14:01 crc kubenswrapper[4861]: I0310 19:14:01.329992 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87dcb8b8-2ebd-44e9-a15f-e995495d8b32-kube-api-access-svk56" (OuterVolumeSpecName: "kube-api-access-svk56") pod "87dcb8b8-2ebd-44e9-a15f-e995495d8b32" (UID: "87dcb8b8-2ebd-44e9-a15f-e995495d8b32"). InnerVolumeSpecName "kube-api-access-svk56". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 10 19:14:01 crc kubenswrapper[4861]: I0310 19:14:01.342360 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7890-account-create-update-z8mnl" event={"ID":"da655568-6f44-40e6-af1d-278b701fc52e","Type":"ContainerDied","Data":"a76d45a57ba1ac1a9a9565b178962b7513eb596808c1595a75dee40501156883"} Mar 10 19:14:01 crc kubenswrapper[4861]: I0310 19:14:01.342436 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-7890-account-create-update-z8mnl" Mar 10 19:14:01 crc kubenswrapper[4861]: I0310 19:14:01.343217 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Mar 10 19:14:01 crc kubenswrapper[4861]: I0310 19:14:01.345961 4861 scope.go:117] "RemoveContainer" containerID="25d62dcf2f9ee02dabc824de069bb687d2e04cbd1aeea376d842ffcff19fc683" Mar 10 19:14:01 crc kubenswrapper[4861]: I0310 19:14:01.347323 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0fd5b5c7-5272-4d6a-b6ab-94ae5b644a3b-kube-api-access-n6j4d" (OuterVolumeSpecName: "kube-api-access-n6j4d") pod "0fd5b5c7-5272-4d6a-b6ab-94ae5b644a3b" (UID: "0fd5b5c7-5272-4d6a-b6ab-94ae5b644a3b"). InnerVolumeSpecName "kube-api-access-n6j4d". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 10 19:14:01 crc kubenswrapper[4861]: I0310 19:14:01.347409 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9f3ba611-9f83-40a6-9282-d4e3b0ccfbfd-kube-api-access-wxrfj" (OuterVolumeSpecName: "kube-api-access-wxrfj") pod "9f3ba611-9f83-40a6-9282-d4e3b0ccfbfd" (UID: "9f3ba611-9f83-40a6-9282-d4e3b0ccfbfd"). InnerVolumeSpecName "kube-api-access-wxrfj". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 10 19:14:01 crc kubenswrapper[4861]: I0310 19:14:01.347445 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0fd5b5c7-5272-4d6a-b6ab-94ae5b644a3b-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "0fd5b5c7-5272-4d6a-b6ab-94ae5b644a3b" (UID: "0fd5b5c7-5272-4d6a-b6ab-94ae5b644a3b"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 10 19:14:01 crc kubenswrapper[4861]: I0310 19:14:01.362896 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cf7a-account-create-update-wf5cp" event={"ID":"b7c0d25e-1650-4ce9-8cb7-71d8d9dceba1","Type":"ContainerDied","Data":"6009b6226d39d809fee813aa918572ea9d751f27e3158eab4518e67a055ce789"} Mar 10 19:14:01 crc kubenswrapper[4861]: I0310 19:14:01.362973 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cf7a-account-create-update-wf5cp" Mar 10 19:14:01 crc kubenswrapper[4861]: I0310 19:14:01.364902 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Mar 10 19:14:01 crc kubenswrapper[4861]: I0310 19:14:01.365736 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ab499a55-1919-491f-8dc6-12344757201d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ab499a55-1919-491f-8dc6-12344757201d" (UID: "ab499a55-1919-491f-8dc6-12344757201d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 10 19:14:01 crc kubenswrapper[4861]: I0310 19:14:01.381153 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9f3ba611-9f83-40a6-9282-d4e3b0ccfbfd-config-data" (OuterVolumeSpecName: "config-data") pod "9f3ba611-9f83-40a6-9282-d4e3b0ccfbfd" (UID: "9f3ba611-9f83-40a6-9282-d4e3b0ccfbfd"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 10 19:14:01 crc kubenswrapper[4861]: I0310 19:14:01.388252 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Mar 10 19:14:01 crc kubenswrapper[4861]: I0310 19:14:01.395045 4861 scope.go:117] "RemoveContainer" containerID="e7ad0e828f858766d2e6ad1bdb4f2618535e12c3dbd09fac481532edeab03184" Mar 10 19:14:01 crc kubenswrapper[4861]: I0310 19:14:01.395301 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"2683e959-ecff-478e-aa0a-acf18f482d39","Type":"ContainerDied","Data":"0ac2105ab841fa2f06009a7ad42a27505a728c3e994a1f87d8941b356bea3825"} Mar 10 19:14:01 crc kubenswrapper[4861]: I0310 19:14:01.396889 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Mar 10 19:14:01 crc kubenswrapper[4861]: I0310 19:14:01.427920 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d523d4ef-a5fe-47c3-b174-b2aefc766755-combined-ca-bundle\") pod \"d523d4ef-a5fe-47c3-b174-b2aefc766755\" (UID: \"d523d4ef-a5fe-47c3-b174-b2aefc766755\") " Mar 10 19:14:01 crc kubenswrapper[4861]: I0310 19:14:01.428063 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d523d4ef-a5fe-47c3-b174-b2aefc766755-config-data\") pod \"d523d4ef-a5fe-47c3-b174-b2aefc766755\" (UID: \"d523d4ef-a5fe-47c3-b174-b2aefc766755\") " Mar 10 19:14:01 crc kubenswrapper[4861]: I0310 19:14:01.428126 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dgqcr\" (UniqueName: \"kubernetes.io/projected/d523d4ef-a5fe-47c3-b174-b2aefc766755-kube-api-access-dgqcr\") pod \"d523d4ef-a5fe-47c3-b174-b2aefc766755\" (UID: \"d523d4ef-a5fe-47c3-b174-b2aefc766755\") " Mar 10 19:14:01 crc kubenswrapper[4861]: 
I0310 19:14:01.428444 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rnktv\" (UniqueName: \"kubernetes.io/projected/cc36f34f-6f91-4076-bc3e-71e2fc0e797e-kube-api-access-rnktv\") pod \"root-account-create-update-495qf\" (UID: \"cc36f34f-6f91-4076-bc3e-71e2fc0e797e\") " pod="openstack/root-account-create-update-495qf" Mar 10 19:14:01 crc kubenswrapper[4861]: I0310 19:14:01.428530 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cc36f34f-6f91-4076-bc3e-71e2fc0e797e-operator-scripts\") pod \"root-account-create-update-495qf\" (UID: \"cc36f34f-6f91-4076-bc3e-71e2fc0e797e\") " pod="openstack/root-account-create-update-495qf" Mar 10 19:14:01 crc kubenswrapper[4861]: I0310 19:14:01.428641 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxrfj\" (UniqueName: \"kubernetes.io/projected/9f3ba611-9f83-40a6-9282-d4e3b0ccfbfd-kube-api-access-wxrfj\") on node \"crc\" DevicePath \"\"" Mar 10 19:14:01 crc kubenswrapper[4861]: I0310 19:14:01.428653 4861 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab499a55-1919-491f-8dc6-12344757201d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 10 19:14:01 crc kubenswrapper[4861]: I0310 19:14:01.428662 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-svk56\" (UniqueName: \"kubernetes.io/projected/87dcb8b8-2ebd-44e9-a15f-e995495d8b32-kube-api-access-svk56\") on node \"crc\" DevicePath \"\"" Mar 10 19:14:01 crc kubenswrapper[4861]: I0310 19:14:01.428670 4861 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9f3ba611-9f83-40a6-9282-d4e3b0ccfbfd-config-data\") on node \"crc\" DevicePath \"\"" Mar 10 19:14:01 crc kubenswrapper[4861]: I0310 19:14:01.428679 4861 reconciler_common.go:293] "Volume detached for volume 
\"kube-api-access-n6j4d\" (UniqueName: \"kubernetes.io/projected/0fd5b5c7-5272-4d6a-b6ab-94ae5b644a3b-kube-api-access-n6j4d\") on node \"crc\" DevicePath \"\"" Mar 10 19:14:01 crc kubenswrapper[4861]: I0310 19:14:01.428687 4861 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0fd5b5c7-5272-4d6a-b6ab-94ae5b644a3b-logs\") on node \"crc\" DevicePath \"\"" Mar 10 19:14:01 crc kubenswrapper[4861]: I0310 19:14:01.428696 4861 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/87dcb8b8-2ebd-44e9-a15f-e995495d8b32-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 10 19:14:01 crc kubenswrapper[4861]: I0310 19:14:01.428719 4861 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0fd5b5c7-5272-4d6a-b6ab-94ae5b644a3b-config-data-custom\") on node \"crc\" DevicePath \"\"" Mar 10 19:14:01 crc kubenswrapper[4861]: I0310 19:14:01.429338 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cc36f34f-6f91-4076-bc3e-71e2fc0e797e-operator-scripts\") pod \"root-account-create-update-495qf\" (UID: \"cc36f34f-6f91-4076-bc3e-71e2fc0e797e\") " pod="openstack/root-account-create-update-495qf" Mar 10 19:14:01 crc kubenswrapper[4861]: I0310 19:14:01.429836 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ab499a55-1919-491f-8dc6-12344757201d-config-data" (OuterVolumeSpecName: "config-data") pod "ab499a55-1919-491f-8dc6-12344757201d" (UID: "ab499a55-1919-491f-8dc6-12344757201d"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 10 19:14:01 crc kubenswrapper[4861]: I0310 19:14:01.444001 4861 generic.go:334] "Generic (PLEG): container finished" podID="1dff60e3-9ca8-461b-8d7e-018b626677e8" containerID="089b96e97216568cdb4fbb8c2c133afa10fecad9fb2b8caed329a4989628a54c" exitCode=0 Mar 10 19:14:01 crc kubenswrapper[4861]: I0310 19:14:01.444094 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-7d79dbd48c-d74zl" event={"ID":"1dff60e3-9ca8-461b-8d7e-018b626677e8","Type":"ContainerDied","Data":"089b96e97216568cdb4fbb8c2c133afa10fecad9fb2b8caed329a4989628a54c"} Mar 10 19:14:01 crc kubenswrapper[4861]: I0310 19:14:01.444122 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-7d79dbd48c-d74zl" event={"ID":"1dff60e3-9ca8-461b-8d7e-018b626677e8","Type":"ContainerDied","Data":"3c4835bc7fa99120f1d9a9c1c3388848a1d6b09cfcf3007463ca41c1efd036af"} Mar 10 19:14:01 crc kubenswrapper[4861]: I0310 19:14:01.444190 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-7d79dbd48c-d74zl" Mar 10 19:14:01 crc kubenswrapper[4861]: I0310 19:14:01.444215 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d523d4ef-a5fe-47c3-b174-b2aefc766755-kube-api-access-dgqcr" (OuterVolumeSpecName: "kube-api-access-dgqcr") pod "d523d4ef-a5fe-47c3-b174-b2aefc766755" (UID: "d523d4ef-a5fe-47c3-b174-b2aefc766755"). InnerVolumeSpecName "kube-api-access-dgqcr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 10 19:14:01 crc kubenswrapper[4861]: I0310 19:14:01.460352 4861 scope.go:117] "RemoveContainer" containerID="25d62dcf2f9ee02dabc824de069bb687d2e04cbd1aeea376d842ffcff19fc683" Mar 10 19:14:01 crc kubenswrapper[4861]: E0310 19:14:01.473080 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"25d62dcf2f9ee02dabc824de069bb687d2e04cbd1aeea376d842ffcff19fc683\": container with ID starting with 25d62dcf2f9ee02dabc824de069bb687d2e04cbd1aeea376d842ffcff19fc683 not found: ID does not exist" containerID="25d62dcf2f9ee02dabc824de069bb687d2e04cbd1aeea376d842ffcff19fc683" Mar 10 19:14:01 crc kubenswrapper[4861]: I0310 19:14:01.473134 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"25d62dcf2f9ee02dabc824de069bb687d2e04cbd1aeea376d842ffcff19fc683"} err="failed to get container status \"25d62dcf2f9ee02dabc824de069bb687d2e04cbd1aeea376d842ffcff19fc683\": rpc error: code = NotFound desc = could not find container \"25d62dcf2f9ee02dabc824de069bb687d2e04cbd1aeea376d842ffcff19fc683\": container with ID starting with 25d62dcf2f9ee02dabc824de069bb687d2e04cbd1aeea376d842ffcff19fc683 not found: ID does not exist" Mar 10 19:14:01 crc kubenswrapper[4861]: I0310 19:14:01.473157 4861 scope.go:117] "RemoveContainer" containerID="e7ad0e828f858766d2e6ad1bdb4f2618535e12c3dbd09fac481532edeab03184" Mar 10 19:14:01 crc kubenswrapper[4861]: I0310 19:14:01.481733 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rnktv\" (UniqueName: \"kubernetes.io/projected/cc36f34f-6f91-4076-bc3e-71e2fc0e797e-kube-api-access-rnktv\") pod \"root-account-create-update-495qf\" (UID: \"cc36f34f-6f91-4076-bc3e-71e2fc0e797e\") " pod="openstack/root-account-create-update-495qf" Mar 10 19:14:01 crc kubenswrapper[4861]: I0310 19:14:01.490358 4861 generic.go:334] "Generic (PLEG): container finished" 
podID="d523d4ef-a5fe-47c3-b174-b2aefc766755" containerID="3ba3330672986fc68b7b9918ccf2611ea231c5cd8a95c24b12a274707371affc" exitCode=0 Mar 10 19:14:01 crc kubenswrapper[4861]: I0310 19:14:01.490446 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"d523d4ef-a5fe-47c3-b174-b2aefc766755","Type":"ContainerDied","Data":"3ba3330672986fc68b7b9918ccf2611ea231c5cd8a95c24b12a274707371affc"} Mar 10 19:14:01 crc kubenswrapper[4861]: E0310 19:14:01.490504 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e7ad0e828f858766d2e6ad1bdb4f2618535e12c3dbd09fac481532edeab03184\": container with ID starting with e7ad0e828f858766d2e6ad1bdb4f2618535e12c3dbd09fac481532edeab03184 not found: ID does not exist" containerID="e7ad0e828f858766d2e6ad1bdb4f2618535e12c3dbd09fac481532edeab03184" Mar 10 19:14:01 crc kubenswrapper[4861]: I0310 19:14:01.490527 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e7ad0e828f858766d2e6ad1bdb4f2618535e12c3dbd09fac481532edeab03184"} err="failed to get container status \"e7ad0e828f858766d2e6ad1bdb4f2618535e12c3dbd09fac481532edeab03184\": rpc error: code = NotFound desc = could not find container \"e7ad0e828f858766d2e6ad1bdb4f2618535e12c3dbd09fac481532edeab03184\": container with ID starting with e7ad0e828f858766d2e6ad1bdb4f2618535e12c3dbd09fac481532edeab03184 not found: ID does not exist" Mar 10 19:14:01 crc kubenswrapper[4861]: I0310 19:14:01.490544 4861 scope.go:117] "RemoveContainer" containerID="2a8c0c46e768dafa2c342d24605ecb36c119cc70e7586a4c3f59339b29b329fe" Mar 10 19:14:01 crc kubenswrapper[4861]: I0310 19:14:01.490623 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Mar 10 19:14:01 crc kubenswrapper[4861]: I0310 19:14:01.501649 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9f3ba611-9f83-40a6-9282-d4e3b0ccfbfd-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9f3ba611-9f83-40a6-9282-d4e3b0ccfbfd" (UID: "9f3ba611-9f83-40a6-9282-d4e3b0ccfbfd"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 10 19:14:01 crc kubenswrapper[4861]: I0310 19:14:01.523029 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d523d4ef-a5fe-47c3-b174-b2aefc766755-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d523d4ef-a5fe-47c3-b174-b2aefc766755" (UID: "d523d4ef-a5fe-47c3-b174-b2aefc766755"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 10 19:14:01 crc kubenswrapper[4861]: I0310 19:14:01.532300 4861 generic.go:334] "Generic (PLEG): container finished" podID="9f3ba611-9f83-40a6-9282-d4e3b0ccfbfd" containerID="0904b4b9e3cd9ce225079b0b912d86c870f80dd82c4739bd66db6fc72350b2da" exitCode=0 Mar 10 19:14:01 crc kubenswrapper[4861]: I0310 19:14:01.532394 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"9f3ba611-9f83-40a6-9282-d4e3b0ccfbfd","Type":"ContainerDied","Data":"0904b4b9e3cd9ce225079b0b912d86c870f80dd82c4739bd66db6fc72350b2da"} Mar 10 19:14:01 crc kubenswrapper[4861]: I0310 19:14:01.532421 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"9f3ba611-9f83-40a6-9282-d4e3b0ccfbfd","Type":"ContainerDied","Data":"b6a7d3b9252abad782b9dfca604f83f73a1b5ecf3b150762169a8607e013cca5"} Mar 10 19:14:01 crc kubenswrapper[4861]: I0310 19:14:01.532485 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Mar 10 19:14:01 crc kubenswrapper[4861]: I0310 19:14:01.539500 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Mar 10 19:14:01 crc kubenswrapper[4861]: I0310 19:14:01.539795 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="a31600f9-88a4-4ecb-8da3-84c966bf4a63" containerName="ceilometer-central-agent" containerID="cri-o://6c09f38909e1c2592c7435cdb6107fe20a357f1ed362d43e8283dd3143d20885" gracePeriod=30 Mar 10 19:14:01 crc kubenswrapper[4861]: I0310 19:14:01.539926 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="a31600f9-88a4-4ecb-8da3-84c966bf4a63" containerName="proxy-httpd" containerID="cri-o://ae3d77023ba112fc99fe18d4758acbc6d9046be7d58409111c17e57dab6ae470" gracePeriod=30 Mar 10 19:14:01 crc kubenswrapper[4861]: I0310 19:14:01.539963 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="a31600f9-88a4-4ecb-8da3-84c966bf4a63" containerName="sg-core" containerID="cri-o://4fdb51d57efd04be1d17c090ac4d170f60279b658b8137c8e7edb527c392d4ce" gracePeriod=30 Mar 10 19:14:01 crc kubenswrapper[4861]: I0310 19:14:01.539995 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="a31600f9-88a4-4ecb-8da3-84c966bf4a63" containerName="ceilometer-notification-agent" containerID="cri-o://20d0f039122c6a0d6f5c8792e4c4a4a041ef8e480ba4dd21d575218242541879" gracePeriod=30 Mar 10 19:14:01 crc kubenswrapper[4861]: I0310 19:14:01.555489 4861 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9f3ba611-9f83-40a6-9282-d4e3b0ccfbfd-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 10 19:14:01 crc kubenswrapper[4861]: I0310 19:14:01.563546 4861 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d523d4ef-a5fe-47c3-b174-b2aefc766755-config-data" (OuterVolumeSpecName: "config-data") pod "d523d4ef-a5fe-47c3-b174-b2aefc766755" (UID: "d523d4ef-a5fe-47c3-b174-b2aefc766755"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 10 19:14:01 crc kubenswrapper[4861]: I0310 19:14:01.568010 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dgqcr\" (UniqueName: \"kubernetes.io/projected/d523d4ef-a5fe-47c3-b174-b2aefc766755-kube-api-access-dgqcr\") on node \"crc\" DevicePath \"\"" Mar 10 19:14:01 crc kubenswrapper[4861]: I0310 19:14:01.568040 4861 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ab499a55-1919-491f-8dc6-12344757201d-config-data\") on node \"crc\" DevicePath \"\"" Mar 10 19:14:01 crc kubenswrapper[4861]: I0310 19:14:01.568051 4861 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d523d4ef-a5fe-47c3-b174-b2aefc766755-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 10 19:14:01 crc kubenswrapper[4861]: I0310 19:14:01.572096 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-52e5-account-create-update-q8wl5" event={"ID":"1d0a3228-ad14-4cd8-8c5a-969bad245ecf","Type":"ContainerDied","Data":"320e3767547e4f1aa557264cd5b241fc9fa22e42f2fa0b34406667a76dcf22a3"} Mar 10 19:14:01 crc kubenswrapper[4861]: I0310 19:14:01.572203 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-52e5-account-create-update-q8wl5" Mar 10 19:14:01 crc kubenswrapper[4861]: I0310 19:14:01.591283 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-495qf" Mar 10 19:14:01 crc kubenswrapper[4861]: I0310 19:14:01.591616 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0fd5b5c7-5272-4d6a-b6ab-94ae5b644a3b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0fd5b5c7-5272-4d6a-b6ab-94ae5b644a3b" (UID: "0fd5b5c7-5272-4d6a-b6ab-94ae5b644a3b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 10 19:14:01 crc kubenswrapper[4861]: E0310 19:14:01.591910 4861 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of ab5fdb8f04b86479e7de65efd45e1b3abeeeef9346b04a04ab27c54b7f1b81fc is running failed: container process not found" containerID="ab5fdb8f04b86479e7de65efd45e1b3abeeeef9346b04a04ab27c54b7f1b81fc" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Mar 10 19:14:01 crc kubenswrapper[4861]: E0310 19:14:01.592268 4861 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="3b23a95a1a463eb82f2475fdcea1a6070b6a64b1c3a03f9d3000d377b6e7bebb" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Mar 10 19:14:01 crc kubenswrapper[4861]: E0310 19:14:01.601719 4861 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of ab5fdb8f04b86479e7de65efd45e1b3abeeeef9346b04a04ab27c54b7f1b81fc is running failed: container process not found" containerID="ab5fdb8f04b86479e7de65efd45e1b3abeeeef9346b04a04ab27c54b7f1b81fc" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Mar 10 19:14:01 crc kubenswrapper[4861]: E0310 19:14:01.626689 4861 log.go:32] "ExecSync cmd from runtime service failed" 
err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="3b23a95a1a463eb82f2475fdcea1a6070b6a64b1c3a03f9d3000d377b6e7bebb" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Mar 10 19:14:01 crc kubenswrapper[4861]: E0310 19:14:01.626732 4861 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of ab5fdb8f04b86479e7de65efd45e1b3abeeeef9346b04a04ab27c54b7f1b81fc is running failed: container process not found" containerID="ab5fdb8f04b86479e7de65efd45e1b3abeeeef9346b04a04ab27c54b7f1b81fc" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Mar 10 19:14:01 crc kubenswrapper[4861]: E0310 19:14:01.626862 4861 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of ab5fdb8f04b86479e7de65efd45e1b3abeeeef9346b04a04ab27c54b7f1b81fc is running failed: container process not found" probeType="Readiness" pod="openstack/ovn-controller-ovs-cw7x8" podUID="2f72ec66-0d64-4a5f-b1c6-17d62a735065" containerName="ovsdb-server" Mar 10 19:14:01 crc kubenswrapper[4861]: I0310 19:14:01.639639 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-f866-account-create-update-fl8b6" event={"ID":"dbb40fdc-8e7e-4a1e-b7a0-5456081ba068","Type":"ContainerDied","Data":"1c636fbe64e078dd81aa51e09738dd68b20fbfa77c0b920f49c0af409fd60677"} Mar 10 19:14:01 crc kubenswrapper[4861]: I0310 19:14:01.640470 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-f866-account-create-update-fl8b6" Mar 10 19:14:01 crc kubenswrapper[4861]: I0310 19:14:01.643775 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0fd5b5c7-5272-4d6a-b6ab-94ae5b644a3b-config-data" (OuterVolumeSpecName: "config-data") pod "0fd5b5c7-5272-4d6a-b6ab-94ae5b644a3b" (UID: "0fd5b5c7-5272-4d6a-b6ab-94ae5b644a3b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 10 19:14:01 crc kubenswrapper[4861]: E0310 19:14:01.652389 4861 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="3b23a95a1a463eb82f2475fdcea1a6070b6a64b1c3a03f9d3000d377b6e7bebb" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Mar 10 19:14:01 crc kubenswrapper[4861]: E0310 19:14:01.652471 4861 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-controller-ovs-cw7x8" podUID="2f72ec66-0d64-4a5f-b1c6-17d62a735065" containerName="ovs-vswitchd" Mar 10 19:14:01 crc kubenswrapper[4861]: I0310 19:14:01.689951 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-7890-account-create-update-z8mnl"] Mar 10 19:14:01 crc kubenswrapper[4861]: I0310 19:14:01.695451 4861 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 10 19:14:01 crc kubenswrapper[4861]: I0310 19:14:01.712736 4861 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-zvlgw" podUID="726cec08-5661-4b62-8a44-028b015119e4" containerName="ovn-controller" probeResult="failure" output=< Mar 10 19:14:01 crc kubenswrapper[4861]: ERROR - Failed to get connection status from ovn-controller, 
ovn-appctl exit status: 0 Mar 10 19:14:01 crc kubenswrapper[4861]: > Mar 10 19:14:01 crc kubenswrapper[4861]: I0310 19:14:01.726932 4861 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d523d4ef-a5fe-47c3-b174-b2aefc766755-config-data\") on node \"crc\" DevicePath \"\"" Mar 10 19:14:01 crc kubenswrapper[4861]: I0310 19:14:01.726951 4861 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0fd5b5c7-5272-4d6a-b6ab-94ae5b644a3b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 10 19:14:01 crc kubenswrapper[4861]: I0310 19:14:01.745853 4861 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0fd5b5c7-5272-4d6a-b6ab-94ae5b644a3b-config-data\") on node \"crc\" DevicePath \"\"" Mar 10 19:14:01 crc kubenswrapper[4861]: I0310 19:14:01.748596 4861 generic.go:334] "Generic (PLEG): container finished" podID="ab499a55-1919-491f-8dc6-12344757201d" containerID="001bd034047bd1fb56f256a9f728c8c20197f9cb2c630b0bf6f2b961fe0c48ec" exitCode=0 Mar 10 19:14:01 crc kubenswrapper[4861]: I0310 19:14:01.748931 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Mar 10 19:14:01 crc kubenswrapper[4861]: I0310 19:14:01.749021 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"ab499a55-1919-491f-8dc6-12344757201d","Type":"ContainerDied","Data":"001bd034047bd1fb56f256a9f728c8c20197f9cb2c630b0bf6f2b961fe0c48ec"} Mar 10 19:14:01 crc kubenswrapper[4861]: I0310 19:14:01.755590 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"ab499a55-1919-491f-8dc6-12344757201d","Type":"ContainerDied","Data":"0dcdea6b1679c0252481963ecf23fffd0c90847e454359d0dea47ee1537c3968"} Mar 10 19:14:01 crc kubenswrapper[4861]: I0310 19:14:01.778879 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-7890-account-create-update-z8mnl"] Mar 10 19:14:01 crc kubenswrapper[4861]: I0310 19:14:01.793300 4861 scope.go:117] "RemoveContainer" containerID="4e50a9575b6dbc384ba88128df76db7281b44b74dee1550f25daab4f4ad79abc" Mar 10 19:14:01 crc kubenswrapper[4861]: I0310 19:14:01.833305 4861 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/cinder-api-0" podUID="8d86917a-2e89-4e29-a1f2-673b0afbf27a" containerName="cinder-api" probeResult="failure" output="Get \"https://10.217.0.173:8776/healthcheck\": read tcp 10.217.0.2:56156->10.217.0.173:8776: read: connection reset by peer" Mar 10 19:14:01 crc kubenswrapper[4861]: I0310 19:14:01.838338 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Mar 10 19:14:01 crc kubenswrapper[4861]: I0310 19:14:01.844115 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="7cd47c3f-4d27-4dae-bac5-a6f81ce01cd2" containerName="kube-state-metrics" containerID="cri-o://df187ab1e346e2781a98eda8c445f3108069bc4bf78574552f5c61fddde8cf7f" gracePeriod=30 Mar 10 19:14:01 crc kubenswrapper[4861]: I0310 19:14:01.846082 4861 kubelet.go:2437] 
"SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cf7a-account-create-update-wf5cp"] Mar 10 19:14:01 crc kubenswrapper[4861]: I0310 19:14:01.858677 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-cf7a-account-create-update-wf5cp"] Mar 10 19:14:01 crc kubenswrapper[4861]: I0310 19:14:01.874597 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstack-cell1-galera-0"] Mar 10 19:14:01 crc kubenswrapper[4861]: I0310 19:14:01.894876 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/openstack-cell1-galera-0"] Mar 10 19:14:01 crc kubenswrapper[4861]: I0310 19:14:01.909750 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29552834-dlwsb"] Mar 10 19:14:01 crc kubenswrapper[4861]: I0310 19:14:01.925433 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-worker-7d79dbd48c-d74zl"] Mar 10 19:14:01 crc kubenswrapper[4861]: I0310 19:14:01.941863 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-worker-7d79dbd48c-d74zl"] Mar 10 19:14:01 crc kubenswrapper[4861]: I0310 19:14:01.961999 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-6638-account-create-update-kc2pp"] Mar 10 19:14:01 crc kubenswrapper[4861]: I0310 19:14:01.981469 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/memcached-0"] Mar 10 19:14:01 crc kubenswrapper[4861]: I0310 19:14:01.982959 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/memcached-0" podUID="41fdd1f2-c0e5-4dbe-aa18-bd6dd1a4754d" containerName="memcached" containerID="cri-o://e208585919e167041ae926b5358edd36d959771ed35af5c54fe2d53fe8efa41a" gracePeriod=30 Mar 10 19:14:01 crc kubenswrapper[4861]: I0310 19:14:01.993593 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-6638-account-create-update-kc2pp"] Mar 10 19:14:02 crc kubenswrapper[4861]: I0310 
19:14:02.001000 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-6638-account-create-update-fqljq"] Mar 10 19:14:02 crc kubenswrapper[4861]: E0310 19:14:02.001403 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d523d4ef-a5fe-47c3-b174-b2aefc766755" containerName="nova-cell1-conductor-conductor" Mar 10 19:14:02 crc kubenswrapper[4861]: I0310 19:14:02.001419 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="d523d4ef-a5fe-47c3-b174-b2aefc766755" containerName="nova-cell1-conductor-conductor" Mar 10 19:14:02 crc kubenswrapper[4861]: I0310 19:14:02.001577 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="d523d4ef-a5fe-47c3-b174-b2aefc766755" containerName="nova-cell1-conductor-conductor" Mar 10 19:14:02 crc kubenswrapper[4861]: I0310 19:14:02.002169 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-6638-account-create-update-fqljq" Mar 10 19:14:02 crc kubenswrapper[4861]: I0310 19:14:02.003644 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret" Mar 10 19:14:02 crc kubenswrapper[4861]: I0310 19:14:02.012586 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-52e5-account-create-update-q8wl5"] Mar 10 19:14:02 crc kubenswrapper[4861]: I0310 19:14:02.022406 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-75ff4ff987-k4jks"] Mar 10 19:14:02 crc kubenswrapper[4861]: I0310 19:14:02.022658 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/keystone-75ff4ff987-k4jks" podUID="fb082653-4ce1-4696-b6fb-e6af12109812" containerName="keystone-api" containerID="cri-o://f76d7260b09a019f1b1946fc0f28f65bac1ccd091168cb03b683df54b231ae29" gracePeriod=30 Mar 10 19:14:02 crc kubenswrapper[4861]: I0310 19:14:02.033224 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openstack/nova-api-52e5-account-create-update-q8wl5"] Mar 10 19:14:02 crc kubenswrapper[4861]: I0310 19:14:02.043036 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-6638-account-create-update-fqljq"] Mar 10 19:14:02 crc kubenswrapper[4861]: I0310 19:14:02.054841 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-6tt6m"] Mar 10 19:14:02 crc kubenswrapper[4861]: I0310 19:14:02.055881 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mppg7\" (UniqueName: \"kubernetes.io/projected/c6c9fd69-da59-4e5d-9630-32a42c1dc30e-kube-api-access-mppg7\") pod \"keystone-6638-account-create-update-fqljq\" (UID: \"c6c9fd69-da59-4e5d-9630-32a42c1dc30e\") " pod="openstack/keystone-6638-account-create-update-fqljq" Mar 10 19:14:02 crc kubenswrapper[4861]: I0310 19:14:02.055948 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c6c9fd69-da59-4e5d-9630-32a42c1dc30e-operator-scripts\") pod \"keystone-6638-account-create-update-fqljq\" (UID: \"c6c9fd69-da59-4e5d-9630-32a42c1dc30e\") " pod="openstack/keystone-6638-account-create-update-fqljq" Mar 10 19:14:02 crc kubenswrapper[4861]: I0310 19:14:02.078880 4861 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="ca9800e2-ceed-4197-90f1-97d14c918e45" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.216:8775/\": read tcp 10.217.0.2:57264->10.217.0.216:8775: read: connection reset by peer" Mar 10 19:14:02 crc kubenswrapper[4861]: I0310 19:14:02.079142 4861 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="ca9800e2-ceed-4197-90f1-97d14c918e45" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.216:8775/\": read tcp 
10.217.0.2:57254->10.217.0.216:8775: read: connection reset by peer" Mar 10 19:14:02 crc kubenswrapper[4861]: I0310 19:14:02.088249 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-6tt6m"] Mar 10 19:14:02 crc kubenswrapper[4861]: I0310 19:14:02.094582 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-2tvlz"] Mar 10 19:14:02 crc kubenswrapper[4861]: I0310 19:14:02.100979 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-2tvlz"] Mar 10 19:14:02 crc kubenswrapper[4861]: I0310 19:14:02.108143 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-0"] Mar 10 19:14:02 crc kubenswrapper[4861]: I0310 19:14:02.115004 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-0"] Mar 10 19:14:02 crc kubenswrapper[4861]: I0310 19:14:02.118177 4861 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="a31600f9-88a4-4ecb-8da3-84c966bf4a63" containerName="proxy-httpd" probeResult="failure" output="Get \"https://10.217.0.211:3000/\": dial tcp 10.217.0.211:3000: connect: connection refused" Mar 10 19:14:02 crc kubenswrapper[4861]: I0310 19:14:02.122970 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstack-galera-0"] Mar 10 19:14:02 crc kubenswrapper[4861]: I0310 19:14:02.133286 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-6l9h7"] Mar 10 19:14:02 crc kubenswrapper[4861]: I0310 19:14:02.143812 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-6l9h7"] Mar 10 19:14:02 crc kubenswrapper[4861]: I0310 19:14:02.157718 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mppg7\" (UniqueName: \"kubernetes.io/projected/c6c9fd69-da59-4e5d-9630-32a42c1dc30e-kube-api-access-mppg7\") pod 
\"keystone-6638-account-create-update-fqljq\" (UID: \"c6c9fd69-da59-4e5d-9630-32a42c1dc30e\") " pod="openstack/keystone-6638-account-create-update-fqljq" Mar 10 19:14:02 crc kubenswrapper[4861]: I0310 19:14:02.157783 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c6c9fd69-da59-4e5d-9630-32a42c1dc30e-operator-scripts\") pod \"keystone-6638-account-create-update-fqljq\" (UID: \"c6c9fd69-da59-4e5d-9630-32a42c1dc30e\") " pod="openstack/keystone-6638-account-create-update-fqljq" Mar 10 19:14:02 crc kubenswrapper[4861]: E0310 19:14:02.157976 4861 configmap.go:193] Couldn't get configMap openstack/openstack-scripts: configmap "openstack-scripts" not found Mar 10 19:14:02 crc kubenswrapper[4861]: E0310 19:14:02.158026 4861 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c6c9fd69-da59-4e5d-9630-32a42c1dc30e-operator-scripts podName:c6c9fd69-da59-4e5d-9630-32a42c1dc30e nodeName:}" failed. No retries permitted until 2026-03-10 19:14:02.65801251 +0000 UTC m=+1586.421448470 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/c6c9fd69-da59-4e5d-9630-32a42c1dc30e-operator-scripts") pod "keystone-6638-account-create-update-fqljq" (UID: "c6c9fd69-da59-4e5d-9630-32a42c1dc30e") : configmap "openstack-scripts" not found Mar 10 19:14:02 crc kubenswrapper[4861]: I0310 19:14:02.166428 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-6638-account-create-update-fqljq"] Mar 10 19:14:02 crc kubenswrapper[4861]: E0310 19:14:02.167918 4861 projected.go:194] Error preparing data for projected volume kube-api-access-mppg7 for pod openstack/keystone-6638-account-create-update-fqljq: failed to fetch token: serviceaccounts "galera-openstack" not found Mar 10 19:14:02 crc kubenswrapper[4861]: E0310 19:14:02.167976 4861 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c6c9fd69-da59-4e5d-9630-32a42c1dc30e-kube-api-access-mppg7 podName:c6c9fd69-da59-4e5d-9630-32a42c1dc30e nodeName:}" failed. No retries permitted until 2026-03-10 19:14:02.667960685 +0000 UTC m=+1586.431396645 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-mppg7" (UniqueName: "kubernetes.io/projected/c6c9fd69-da59-4e5d-9630-32a42c1dc30e-kube-api-access-mppg7") pod "keystone-6638-account-create-update-fqljq" (UID: "c6c9fd69-da59-4e5d-9630-32a42c1dc30e") : failed to fetch token: serviceaccounts "galera-openstack" not found Mar 10 19:14:02 crc kubenswrapper[4861]: I0310 19:14:02.182660 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-f866-account-create-update-fl8b6"] Mar 10 19:14:02 crc kubenswrapper[4861]: I0310 19:14:02.195180 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-f866-account-create-update-fl8b6"] Mar 10 19:14:02 crc kubenswrapper[4861]: I0310 19:14:02.207146 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-495qf"] Mar 10 19:14:02 crc kubenswrapper[4861]: I0310 19:14:02.219858 4861 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-5d84cf8948-mg4jb" podUID="79b1b96d-52e3-4a16-8fc1-d09188b5ebc1" containerName="barbican-api-log" probeResult="failure" output="Get \"https://10.217.0.168:9311/healthcheck\": read tcp 10.217.0.2:46206->10.217.0.168:9311: read: connection reset by peer" Mar 10 19:14:02 crc kubenswrapper[4861]: I0310 19:14:02.219879 4861 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-5d84cf8948-mg4jb" podUID="79b1b96d-52e3-4a16-8fc1-d09188b5ebc1" containerName="barbican-api" probeResult="failure" output="Get \"https://10.217.0.168:9311/healthcheck\": read tcp 10.217.0.2:46208->10.217.0.168:9311: read: connection reset by peer" Mar 10 19:14:02 crc kubenswrapper[4861]: I0310 19:14:02.221565 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-e6bf-account-create-update-rgnpm"] Mar 10 19:14:02 crc kubenswrapper[4861]: I0310 19:14:02.226225 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openstack/barbican-e6bf-account-create-update-rgnpm"] Mar 10 19:14:02 crc kubenswrapper[4861]: I0310 19:14:02.309746 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/openstack-galera-0" podUID="a0b9c71c-6fe7-4eb9-9f95-87fce0f70caa" containerName="galera" containerID="cri-o://c6f99dc7de3f814aa99b28dc2642760e573b793380b5e139e7ad48b602edda7e" gracePeriod=30 Mar 10 19:14:02 crc kubenswrapper[4861]: I0310 19:14:02.351266 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Mar 10 19:14:02 crc kubenswrapper[4861]: I0310 19:14:02.364008 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"] Mar 10 19:14:02 crc kubenswrapper[4861]: E0310 19:14:02.369304 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[kube-api-access-mppg7 operator-scripts], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openstack/keystone-6638-account-create-update-fqljq" podUID="c6c9fd69-da59-4e5d-9630-32a42c1dc30e" Mar 10 19:14:02 crc kubenswrapper[4861]: I0310 19:14:02.385976 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-0"] Mar 10 19:14:02 crc kubenswrapper[4861]: I0310 19:14:02.387754 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-64c5bb5d74-nmtmm" Mar 10 19:14:02 crc kubenswrapper[4861]: I0310 19:14:02.391584 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-lrz47" Mar 10 19:14:02 crc kubenswrapper[4861]: I0310 19:14:02.401457 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Mar 10 19:14:02 crc kubenswrapper[4861]: I0310 19:14:02.402959 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-0"] Mar 10 19:14:02 crc kubenswrapper[4861]: I0310 19:14:02.429122 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-keystone-listener-7d66d9f78-7w6cc"] Mar 10 19:14:02 crc kubenswrapper[4861]: I0310 19:14:02.436965 4861 scope.go:117] "RemoveContainer" containerID="089b96e97216568cdb4fbb8c2c133afa10fecad9fb2b8caed329a4989628a54c" Mar 10 19:14:02 crc kubenswrapper[4861]: I0310 19:14:02.463456 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mq4bp\" (UniqueName: \"kubernetes.io/projected/ee15a5af-fb3a-45fc-bfa1-b9eb45418a32-kube-api-access-mq4bp\") pod \"ee15a5af-fb3a-45fc-bfa1-b9eb45418a32\" (UID: \"ee15a5af-fb3a-45fc-bfa1-b9eb45418a32\") " Mar 10 19:14:02 crc kubenswrapper[4861]: I0310 19:14:02.463508 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9140f7c5-893a-4128-85aa-2db96537b483-internal-tls-certs\") pod \"9140f7c5-893a-4128-85aa-2db96537b483\" (UID: \"9140f7c5-893a-4128-85aa-2db96537b483\") " Mar 10 19:14:02 crc kubenswrapper[4861]: I0310 19:14:02.463578 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ef3a31e3-d3ba-4f5c-950a-1355bb61f657-config-data\") pod \"ef3a31e3-d3ba-4f5c-950a-1355bb61f657\" (UID: \"ef3a31e3-d3ba-4f5c-950a-1355bb61f657\") " Mar 10 19:14:02 crc kubenswrapper[4861]: I0310 19:14:02.463637 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ee15a5af-fb3a-45fc-bfa1-b9eb45418a32-operator-scripts\") pod \"ee15a5af-fb3a-45fc-bfa1-b9eb45418a32\" (UID: 
\"ee15a5af-fb3a-45fc-bfa1-b9eb45418a32\") " Mar 10 19:14:02 crc kubenswrapper[4861]: I0310 19:14:02.463661 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ef3a31e3-d3ba-4f5c-950a-1355bb61f657-public-tls-certs\") pod \"ef3a31e3-d3ba-4f5c-950a-1355bb61f657\" (UID: \"ef3a31e3-d3ba-4f5c-950a-1355bb61f657\") " Mar 10 19:14:02 crc kubenswrapper[4861]: I0310 19:14:02.463686 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ef3a31e3-d3ba-4f5c-950a-1355bb61f657-scripts\") pod \"ef3a31e3-d3ba-4f5c-950a-1355bb61f657\" (UID: \"ef3a31e3-d3ba-4f5c-950a-1355bb61f657\") " Mar 10 19:14:02 crc kubenswrapper[4861]: I0310 19:14:02.463747 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ef3a31e3-d3ba-4f5c-950a-1355bb61f657-logs\") pod \"ef3a31e3-d3ba-4f5c-950a-1355bb61f657\" (UID: \"ef3a31e3-d3ba-4f5c-950a-1355bb61f657\") " Mar 10 19:14:02 crc kubenswrapper[4861]: I0310 19:14:02.463767 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zbc9q\" (UniqueName: \"kubernetes.io/projected/ef3a31e3-d3ba-4f5c-950a-1355bb61f657-kube-api-access-zbc9q\") pod \"ef3a31e3-d3ba-4f5c-950a-1355bb61f657\" (UID: \"ef3a31e3-d3ba-4f5c-950a-1355bb61f657\") " Mar 10 19:14:02 crc kubenswrapper[4861]: I0310 19:14:02.463831 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ef3a31e3-d3ba-4f5c-950a-1355bb61f657-httpd-run\") pod \"ef3a31e3-d3ba-4f5c-950a-1355bb61f657\" (UID: \"ef3a31e3-d3ba-4f5c-950a-1355bb61f657\") " Mar 10 19:14:02 crc kubenswrapper[4861]: I0310 19:14:02.463856 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/9140f7c5-893a-4128-85aa-2db96537b483-combined-ca-bundle\") pod \"9140f7c5-893a-4128-85aa-2db96537b483\" (UID: \"9140f7c5-893a-4128-85aa-2db96537b483\") " Mar 10 19:14:02 crc kubenswrapper[4861]: I0310 19:14:02.463885 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9140f7c5-893a-4128-85aa-2db96537b483-logs\") pod \"9140f7c5-893a-4128-85aa-2db96537b483\" (UID: \"9140f7c5-893a-4128-85aa-2db96537b483\") " Mar 10 19:14:02 crc kubenswrapper[4861]: I0310 19:14:02.463905 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ef3a31e3-d3ba-4f5c-950a-1355bb61f657-combined-ca-bundle\") pod \"ef3a31e3-d3ba-4f5c-950a-1355bb61f657\" (UID: \"ef3a31e3-d3ba-4f5c-950a-1355bb61f657\") " Mar 10 19:14:02 crc kubenswrapper[4861]: I0310 19:14:02.463943 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9140f7c5-893a-4128-85aa-2db96537b483-config-data\") pod \"9140f7c5-893a-4128-85aa-2db96537b483\" (UID: \"9140f7c5-893a-4128-85aa-2db96537b483\") " Mar 10 19:14:02 crc kubenswrapper[4861]: I0310 19:14:02.464022 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xwfv6\" (UniqueName: \"kubernetes.io/projected/9140f7c5-893a-4128-85aa-2db96537b483-kube-api-access-xwfv6\") pod \"9140f7c5-893a-4128-85aa-2db96537b483\" (UID: \"9140f7c5-893a-4128-85aa-2db96537b483\") " Mar 10 19:14:02 crc kubenswrapper[4861]: I0310 19:14:02.464052 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9140f7c5-893a-4128-85aa-2db96537b483-scripts\") pod \"9140f7c5-893a-4128-85aa-2db96537b483\" (UID: \"9140f7c5-893a-4128-85aa-2db96537b483\") " Mar 10 19:14:02 crc kubenswrapper[4861]: I0310 19:14:02.464080 4861 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"ef3a31e3-d3ba-4f5c-950a-1355bb61f657\" (UID: \"ef3a31e3-d3ba-4f5c-950a-1355bb61f657\") " Mar 10 19:14:02 crc kubenswrapper[4861]: I0310 19:14:02.464122 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9140f7c5-893a-4128-85aa-2db96537b483-public-tls-certs\") pod \"9140f7c5-893a-4128-85aa-2db96537b483\" (UID: \"9140f7c5-893a-4128-85aa-2db96537b483\") " Mar 10 19:14:02 crc kubenswrapper[4861]: I0310 19:14:02.470338 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-keystone-listener-7d66d9f78-7w6cc"] Mar 10 19:14:02 crc kubenswrapper[4861]: I0310 19:14:02.475242 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9140f7c5-893a-4128-85aa-2db96537b483-logs" (OuterVolumeSpecName: "logs") pod "9140f7c5-893a-4128-85aa-2db96537b483" (UID: "9140f7c5-893a-4128-85aa-2db96537b483"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 10 19:14:02 crc kubenswrapper[4861]: I0310 19:14:02.477229 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ee15a5af-fb3a-45fc-bfa1-b9eb45418a32-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "ee15a5af-fb3a-45fc-bfa1-b9eb45418a32" (UID: "ee15a5af-fb3a-45fc-bfa1-b9eb45418a32"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 10 19:14:02 crc kubenswrapper[4861]: I0310 19:14:02.478435 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ef3a31e3-d3ba-4f5c-950a-1355bb61f657-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "ef3a31e3-d3ba-4f5c-950a-1355bb61f657" (UID: "ef3a31e3-d3ba-4f5c-950a-1355bb61f657"). 
InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 10 19:14:02 crc kubenswrapper[4861]: I0310 19:14:02.479244 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ef3a31e3-d3ba-4f5c-950a-1355bb61f657-logs" (OuterVolumeSpecName: "logs") pod "ef3a31e3-d3ba-4f5c-950a-1355bb61f657" (UID: "ef3a31e3-d3ba-4f5c-950a-1355bb61f657"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 10 19:14:02 crc kubenswrapper[4861]: I0310 19:14:02.482084 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ee15a5af-fb3a-45fc-bfa1-b9eb45418a32-kube-api-access-mq4bp" (OuterVolumeSpecName: "kube-api-access-mq4bp") pod "ee15a5af-fb3a-45fc-bfa1-b9eb45418a32" (UID: "ee15a5af-fb3a-45fc-bfa1-b9eb45418a32"). InnerVolumeSpecName "kube-api-access-mq4bp". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 10 19:14:02 crc kubenswrapper[4861]: I0310 19:14:02.483394 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9140f7c5-893a-4128-85aa-2db96537b483-scripts" (OuterVolumeSpecName: "scripts") pod "9140f7c5-893a-4128-85aa-2db96537b483" (UID: "9140f7c5-893a-4128-85aa-2db96537b483"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 10 19:14:02 crc kubenswrapper[4861]: I0310 19:14:02.485513 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ef3a31e3-d3ba-4f5c-950a-1355bb61f657-scripts" (OuterVolumeSpecName: "scripts") pod "ef3a31e3-d3ba-4f5c-950a-1355bb61f657" (UID: "ef3a31e3-d3ba-4f5c-950a-1355bb61f657"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 10 19:14:02 crc kubenswrapper[4861]: I0310 19:14:02.492621 4861 scope.go:117] "RemoveContainer" containerID="3b70421bd87d6bdf27fe679226a122192d55a1ff889b9eda172d225605e8cb57" Mar 10 19:14:02 crc kubenswrapper[4861]: I0310 19:14:02.530760 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage09-crc" (OuterVolumeSpecName: "glance") pod "ef3a31e3-d3ba-4f5c-950a-1355bb61f657" (UID: "ef3a31e3-d3ba-4f5c-950a-1355bb61f657"). InnerVolumeSpecName "local-storage09-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Mar 10 19:14:02 crc kubenswrapper[4861]: I0310 19:14:02.530904 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9140f7c5-893a-4128-85aa-2db96537b483-kube-api-access-xwfv6" (OuterVolumeSpecName: "kube-api-access-xwfv6") pod "9140f7c5-893a-4128-85aa-2db96537b483" (UID: "9140f7c5-893a-4128-85aa-2db96537b483"). InnerVolumeSpecName "kube-api-access-xwfv6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 10 19:14:02 crc kubenswrapper[4861]: I0310 19:14:02.539099 4861 scope.go:117] "RemoveContainer" containerID="089b96e97216568cdb4fbb8c2c133afa10fecad9fb2b8caed329a4989628a54c" Mar 10 19:14:02 crc kubenswrapper[4861]: E0310 19:14:02.543795 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"089b96e97216568cdb4fbb8c2c133afa10fecad9fb2b8caed329a4989628a54c\": container with ID starting with 089b96e97216568cdb4fbb8c2c133afa10fecad9fb2b8caed329a4989628a54c not found: ID does not exist" containerID="089b96e97216568cdb4fbb8c2c133afa10fecad9fb2b8caed329a4989628a54c" Mar 10 19:14:02 crc kubenswrapper[4861]: I0310 19:14:02.543843 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"089b96e97216568cdb4fbb8c2c133afa10fecad9fb2b8caed329a4989628a54c"} err="failed to get container status \"089b96e97216568cdb4fbb8c2c133afa10fecad9fb2b8caed329a4989628a54c\": rpc error: code = NotFound desc = could not find container \"089b96e97216568cdb4fbb8c2c133afa10fecad9fb2b8caed329a4989628a54c\": container with ID starting with 089b96e97216568cdb4fbb8c2c133afa10fecad9fb2b8caed329a4989628a54c not found: ID does not exist" Mar 10 19:14:02 crc kubenswrapper[4861]: I0310 19:14:02.543870 4861 scope.go:117] "RemoveContainer" containerID="3b70421bd87d6bdf27fe679226a122192d55a1ff889b9eda172d225605e8cb57" Mar 10 19:14:02 crc kubenswrapper[4861]: E0310 19:14:02.544206 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3b70421bd87d6bdf27fe679226a122192d55a1ff889b9eda172d225605e8cb57\": container with ID starting with 3b70421bd87d6bdf27fe679226a122192d55a1ff889b9eda172d225605e8cb57 not found: ID does not exist" containerID="3b70421bd87d6bdf27fe679226a122192d55a1ff889b9eda172d225605e8cb57" Mar 10 19:14:02 crc kubenswrapper[4861]: I0310 19:14:02.544227 
4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3b70421bd87d6bdf27fe679226a122192d55a1ff889b9eda172d225605e8cb57"} err="failed to get container status \"3b70421bd87d6bdf27fe679226a122192d55a1ff889b9eda172d225605e8cb57\": rpc error: code = NotFound desc = could not find container \"3b70421bd87d6bdf27fe679226a122192d55a1ff889b9eda172d225605e8cb57\": container with ID starting with 3b70421bd87d6bdf27fe679226a122192d55a1ff889b9eda172d225605e8cb57 not found: ID does not exist" Mar 10 19:14:02 crc kubenswrapper[4861]: I0310 19:14:02.544240 4861 scope.go:117] "RemoveContainer" containerID="3ba3330672986fc68b7b9918ccf2611ea231c5cd8a95c24b12a274707371affc" Mar 10 19:14:02 crc kubenswrapper[4861]: I0310 19:14:02.548185 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ef3a31e3-d3ba-4f5c-950a-1355bb61f657-kube-api-access-zbc9q" (OuterVolumeSpecName: "kube-api-access-zbc9q") pod "ef3a31e3-d3ba-4f5c-950a-1355bb61f657" (UID: "ef3a31e3-d3ba-4f5c-950a-1355bb61f657"). InnerVolumeSpecName "kube-api-access-zbc9q". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 10 19:14:02 crc kubenswrapper[4861]: I0310 19:14:02.554977 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ef3a31e3-d3ba-4f5c-950a-1355bb61f657-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ef3a31e3-d3ba-4f5c-950a-1355bb61f657" (UID: "ef3a31e3-d3ba-4f5c-950a-1355bb61f657"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 10 19:14:02 crc kubenswrapper[4861]: I0310 19:14:02.570845 4861 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ef3a31e3-d3ba-4f5c-950a-1355bb61f657-httpd-run\") on node \"crc\" DevicePath \"\"" Mar 10 19:14:02 crc kubenswrapper[4861]: I0310 19:14:02.571096 4861 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9140f7c5-893a-4128-85aa-2db96537b483-logs\") on node \"crc\" DevicePath \"\"" Mar 10 19:14:02 crc kubenswrapper[4861]: I0310 19:14:02.571248 4861 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ef3a31e3-d3ba-4f5c-950a-1355bb61f657-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 10 19:14:02 crc kubenswrapper[4861]: I0310 19:14:02.571367 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xwfv6\" (UniqueName: \"kubernetes.io/projected/9140f7c5-893a-4128-85aa-2db96537b483-kube-api-access-xwfv6\") on node \"crc\" DevicePath \"\"" Mar 10 19:14:02 crc kubenswrapper[4861]: I0310 19:14:02.571489 4861 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9140f7c5-893a-4128-85aa-2db96537b483-scripts\") on node \"crc\" DevicePath \"\"" Mar 10 19:14:02 crc kubenswrapper[4861]: I0310 19:14:02.571680 4861 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" " Mar 10 19:14:02 crc kubenswrapper[4861]: I0310 19:14:02.571864 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mq4bp\" (UniqueName: \"kubernetes.io/projected/ee15a5af-fb3a-45fc-bfa1-b9eb45418a32-kube-api-access-mq4bp\") on node \"crc\" DevicePath \"\"" Mar 10 19:14:02 crc kubenswrapper[4861]: I0310 19:14:02.571991 4861 reconciler_common.go:293] 
"Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ee15a5af-fb3a-45fc-bfa1-b9eb45418a32-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 10 19:14:02 crc kubenswrapper[4861]: I0310 19:14:02.572172 4861 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ef3a31e3-d3ba-4f5c-950a-1355bb61f657-scripts\") on node \"crc\" DevicePath \"\"" Mar 10 19:14:02 crc kubenswrapper[4861]: I0310 19:14:02.572294 4861 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ef3a31e3-d3ba-4f5c-950a-1355bb61f657-logs\") on node \"crc\" DevicePath \"\"" Mar 10 19:14:02 crc kubenswrapper[4861]: I0310 19:14:02.572359 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zbc9q\" (UniqueName: \"kubernetes.io/projected/ef3a31e3-d3ba-4f5c-950a-1355bb61f657-kube-api-access-zbc9q\") on node \"crc\" DevicePath \"\"" Mar 10 19:14:02 crc kubenswrapper[4861]: I0310 19:14:02.570908 4861 scope.go:117] "RemoveContainer" containerID="0904b4b9e3cd9ce225079b0b912d86c870f80dd82c4739bd66db6fc72350b2da" Mar 10 19:14:02 crc kubenswrapper[4861]: I0310 19:14:02.572842 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ef3a31e3-d3ba-4f5c-950a-1355bb61f657-config-data" (OuterVolumeSpecName: "config-data") pod "ef3a31e3-d3ba-4f5c-950a-1355bb61f657" (UID: "ef3a31e3-d3ba-4f5c-950a-1355bb61f657"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 10 19:14:02 crc kubenswrapper[4861]: I0310 19:14:02.579330 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9140f7c5-893a-4128-85aa-2db96537b483-config-data" (OuterVolumeSpecName: "config-data") pod "9140f7c5-893a-4128-85aa-2db96537b483" (UID: "9140f7c5-893a-4128-85aa-2db96537b483"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 10 19:14:02 crc kubenswrapper[4861]: I0310 19:14:02.599300 4861 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage09-crc" (UniqueName: "kubernetes.io/local-volume/local-storage09-crc") on node "crc" Mar 10 19:14:02 crc kubenswrapper[4861]: I0310 19:14:02.615285 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Mar 10 19:14:02 crc kubenswrapper[4861]: I0310 19:14:02.617600 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ef3a31e3-d3ba-4f5c-950a-1355bb61f657-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "ef3a31e3-d3ba-4f5c-950a-1355bb61f657" (UID: "ef3a31e3-d3ba-4f5c-950a-1355bb61f657"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 10 19:14:02 crc kubenswrapper[4861]: I0310 19:14:02.624979 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9140f7c5-893a-4128-85aa-2db96537b483-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9140f7c5-893a-4128-85aa-2db96537b483" (UID: "9140f7c5-893a-4128-85aa-2db96537b483"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 10 19:14:02 crc kubenswrapper[4861]: I0310 19:14:02.633011 4861 scope.go:117] "RemoveContainer" containerID="0904b4b9e3cd9ce225079b0b912d86c870f80dd82c4739bd66db6fc72350b2da" Mar 10 19:14:02 crc kubenswrapper[4861]: E0310 19:14:02.635210 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0904b4b9e3cd9ce225079b0b912d86c870f80dd82c4739bd66db6fc72350b2da\": container with ID starting with 0904b4b9e3cd9ce225079b0b912d86c870f80dd82c4739bd66db6fc72350b2da not found: ID does not exist" containerID="0904b4b9e3cd9ce225079b0b912d86c870f80dd82c4739bd66db6fc72350b2da" Mar 10 19:14:02 crc kubenswrapper[4861]: I0310 19:14:02.635254 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0904b4b9e3cd9ce225079b0b912d86c870f80dd82c4739bd66db6fc72350b2da"} err="failed to get container status \"0904b4b9e3cd9ce225079b0b912d86c870f80dd82c4739bd66db6fc72350b2da\": rpc error: code = NotFound desc = could not find container \"0904b4b9e3cd9ce225079b0b912d86c870f80dd82c4739bd66db6fc72350b2da\": container with ID starting with 0904b4b9e3cd9ce225079b0b912d86c870f80dd82c4739bd66db6fc72350b2da not found: ID does not exist" Mar 10 19:14:02 crc kubenswrapper[4861]: I0310 19:14:02.635282 4861 scope.go:117] "RemoveContainer" containerID="a8146baaa87f96997feb8adc35ad2ceeace3b69ee66ea36c6996074e20d85886" Mar 10 19:14:02 crc kubenswrapper[4861]: I0310 19:14:02.638432 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Mar 10 19:14:02 crc kubenswrapper[4861]: I0310 19:14:02.641483 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Mar 10 19:14:02 crc kubenswrapper[4861]: I0310 19:14:02.659281 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9140f7c5-893a-4128-85aa-2db96537b483-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "9140f7c5-893a-4128-85aa-2db96537b483" (UID: "9140f7c5-893a-4128-85aa-2db96537b483"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 10 19:14:02 crc kubenswrapper[4861]: I0310 19:14:02.674823 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8d86917a-2e89-4e29-a1f2-673b0afbf27a-internal-tls-certs\") pod \"8d86917a-2e89-4e29-a1f2-673b0afbf27a\" (UID: \"8d86917a-2e89-4e29-a1f2-673b0afbf27a\") " Mar 10 19:14:02 crc kubenswrapper[4861]: I0310 19:14:02.674892 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6gdgf\" (UniqueName: \"kubernetes.io/projected/8d86917a-2e89-4e29-a1f2-673b0afbf27a-kube-api-access-6gdgf\") pod \"8d86917a-2e89-4e29-a1f2-673b0afbf27a\" (UID: \"8d86917a-2e89-4e29-a1f2-673b0afbf27a\") " Mar 10 19:14:02 crc kubenswrapper[4861]: I0310 19:14:02.674923 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8d86917a-2e89-4e29-a1f2-673b0afbf27a-public-tls-certs\") pod \"8d86917a-2e89-4e29-a1f2-673b0afbf27a\" (UID: \"8d86917a-2e89-4e29-a1f2-673b0afbf27a\") " Mar 10 19:14:02 crc kubenswrapper[4861]: I0310 19:14:02.674944 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8d86917a-2e89-4e29-a1f2-673b0afbf27a-config-data-custom\") pod \"8d86917a-2e89-4e29-a1f2-673b0afbf27a\" (UID: \"8d86917a-2e89-4e29-a1f2-673b0afbf27a\") " Mar 10 19:14:02 crc 
kubenswrapper[4861]: I0310 19:14:02.674983 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/509298b8-3d6b-4182-b989-c25c4791ce6b-httpd-run\") pod \"509298b8-3d6b-4182-b989-c25c4791ce6b\" (UID: \"509298b8-3d6b-4182-b989-c25c4791ce6b\") " Mar 10 19:14:02 crc kubenswrapper[4861]: I0310 19:14:02.675003 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8d86917a-2e89-4e29-a1f2-673b0afbf27a-logs\") pod \"8d86917a-2e89-4e29-a1f2-673b0afbf27a\" (UID: \"8d86917a-2e89-4e29-a1f2-673b0afbf27a\") " Mar 10 19:14:02 crc kubenswrapper[4861]: I0310 19:14:02.675024 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8d86917a-2e89-4e29-a1f2-673b0afbf27a-scripts\") pod \"8d86917a-2e89-4e29-a1f2-673b0afbf27a\" (UID: \"8d86917a-2e89-4e29-a1f2-673b0afbf27a\") " Mar 10 19:14:02 crc kubenswrapper[4861]: I0310 19:14:02.675053 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"509298b8-3d6b-4182-b989-c25c4791ce6b\" (UID: \"509298b8-3d6b-4182-b989-c25c4791ce6b\") " Mar 10 19:14:02 crc kubenswrapper[4861]: I0310 19:14:02.675087 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/7cd47c3f-4d27-4dae-bac5-a6f81ce01cd2-kube-state-metrics-tls-config\") pod \"7cd47c3f-4d27-4dae-bac5-a6f81ce01cd2\" (UID: \"7cd47c3f-4d27-4dae-bac5-a6f81ce01cd2\") " Mar 10 19:14:02 crc kubenswrapper[4861]: I0310 19:14:02.675104 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/509298b8-3d6b-4182-b989-c25c4791ce6b-logs\") pod \"509298b8-3d6b-4182-b989-c25c4791ce6b\" (UID: 
\"509298b8-3d6b-4182-b989-c25c4791ce6b\") " Mar 10 19:14:02 crc kubenswrapper[4861]: I0310 19:14:02.675134 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7cd47c3f-4d27-4dae-bac5-a6f81ce01cd2-combined-ca-bundle\") pod \"7cd47c3f-4d27-4dae-bac5-a6f81ce01cd2\" (UID: \"7cd47c3f-4d27-4dae-bac5-a6f81ce01cd2\") " Mar 10 19:14:02 crc kubenswrapper[4861]: I0310 19:14:02.675154 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8d86917a-2e89-4e29-a1f2-673b0afbf27a-config-data\") pod \"8d86917a-2e89-4e29-a1f2-673b0afbf27a\" (UID: \"8d86917a-2e89-4e29-a1f2-673b0afbf27a\") " Mar 10 19:14:02 crc kubenswrapper[4861]: I0310 19:14:02.675169 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pbk2f\" (UniqueName: \"kubernetes.io/projected/7cd47c3f-4d27-4dae-bac5-a6f81ce01cd2-kube-api-access-pbk2f\") pod \"7cd47c3f-4d27-4dae-bac5-a6f81ce01cd2\" (UID: \"7cd47c3f-4d27-4dae-bac5-a6f81ce01cd2\") " Mar 10 19:14:02 crc kubenswrapper[4861]: I0310 19:14:02.675196 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/8d86917a-2e89-4e29-a1f2-673b0afbf27a-etc-machine-id\") pod \"8d86917a-2e89-4e29-a1f2-673b0afbf27a\" (UID: \"8d86917a-2e89-4e29-a1f2-673b0afbf27a\") " Mar 10 19:14:02 crc kubenswrapper[4861]: I0310 19:14:02.675217 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/509298b8-3d6b-4182-b989-c25c4791ce6b-scripts\") pod \"509298b8-3d6b-4182-b989-c25c4791ce6b\" (UID: \"509298b8-3d6b-4182-b989-c25c4791ce6b\") " Mar 10 19:14:02 crc kubenswrapper[4861]: I0310 19:14:02.675258 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6wwh9\" (UniqueName: 
\"kubernetes.io/projected/509298b8-3d6b-4182-b989-c25c4791ce6b-kube-api-access-6wwh9\") pod \"509298b8-3d6b-4182-b989-c25c4791ce6b\" (UID: \"509298b8-3d6b-4182-b989-c25c4791ce6b\") " Mar 10 19:14:02 crc kubenswrapper[4861]: I0310 19:14:02.675275 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/509298b8-3d6b-4182-b989-c25c4791ce6b-internal-tls-certs\") pod \"509298b8-3d6b-4182-b989-c25c4791ce6b\" (UID: \"509298b8-3d6b-4182-b989-c25c4791ce6b\") " Mar 10 19:14:02 crc kubenswrapper[4861]: I0310 19:14:02.675291 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8d86917a-2e89-4e29-a1f2-673b0afbf27a-combined-ca-bundle\") pod \"8d86917a-2e89-4e29-a1f2-673b0afbf27a\" (UID: \"8d86917a-2e89-4e29-a1f2-673b0afbf27a\") " Mar 10 19:14:02 crc kubenswrapper[4861]: I0310 19:14:02.675316 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/7cd47c3f-4d27-4dae-bac5-a6f81ce01cd2-kube-state-metrics-tls-certs\") pod \"7cd47c3f-4d27-4dae-bac5-a6f81ce01cd2\" (UID: \"7cd47c3f-4d27-4dae-bac5-a6f81ce01cd2\") " Mar 10 19:14:02 crc kubenswrapper[4861]: I0310 19:14:02.675330 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/509298b8-3d6b-4182-b989-c25c4791ce6b-config-data\") pod \"509298b8-3d6b-4182-b989-c25c4791ce6b\" (UID: \"509298b8-3d6b-4182-b989-c25c4791ce6b\") " Mar 10 19:14:02 crc kubenswrapper[4861]: I0310 19:14:02.675356 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/509298b8-3d6b-4182-b989-c25c4791ce6b-combined-ca-bundle\") pod \"509298b8-3d6b-4182-b989-c25c4791ce6b\" (UID: \"509298b8-3d6b-4182-b989-c25c4791ce6b\") " Mar 10 19:14:02 
crc kubenswrapper[4861]: I0310 19:14:02.675505 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c6c9fd69-da59-4e5d-9630-32a42c1dc30e-operator-scripts\") pod \"keystone-6638-account-create-update-fqljq\" (UID: \"c6c9fd69-da59-4e5d-9630-32a42c1dc30e\") " pod="openstack/keystone-6638-account-create-update-fqljq" Mar 10 19:14:02 crc kubenswrapper[4861]: I0310 19:14:02.675627 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mppg7\" (UniqueName: \"kubernetes.io/projected/c6c9fd69-da59-4e5d-9630-32a42c1dc30e-kube-api-access-mppg7\") pod \"keystone-6638-account-create-update-fqljq\" (UID: \"c6c9fd69-da59-4e5d-9630-32a42c1dc30e\") " pod="openstack/keystone-6638-account-create-update-fqljq" Mar 10 19:14:02 crc kubenswrapper[4861]: I0310 19:14:02.677207 4861 reconciler_common.go:293] "Volume detached for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" DevicePath \"\"" Mar 10 19:14:02 crc kubenswrapper[4861]: I0310 19:14:02.677226 4861 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9140f7c5-893a-4128-85aa-2db96537b483-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 10 19:14:02 crc kubenswrapper[4861]: I0310 19:14:02.677237 4861 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ef3a31e3-d3ba-4f5c-950a-1355bb61f657-config-data\") on node \"crc\" DevicePath \"\"" Mar 10 19:14:02 crc kubenswrapper[4861]: I0310 19:14:02.677245 4861 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ef3a31e3-d3ba-4f5c-950a-1355bb61f657-public-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 10 19:14:02 crc kubenswrapper[4861]: I0310 19:14:02.677254 4861 reconciler_common.go:293] "Volume detached for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9140f7c5-893a-4128-85aa-2db96537b483-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 10 19:14:02 crc kubenswrapper[4861]: I0310 19:14:02.677262 4861 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9140f7c5-893a-4128-85aa-2db96537b483-config-data\") on node \"crc\" DevicePath \"\"" Mar 10 19:14:02 crc kubenswrapper[4861]: I0310 19:14:02.683638 4861 scope.go:117] "RemoveContainer" containerID="001bd034047bd1fb56f256a9f728c8c20197f9cb2c630b0bf6f2b961fe0c48ec" Mar 10 19:14:02 crc kubenswrapper[4861]: E0310 19:14:02.684007 4861 projected.go:194] Error preparing data for projected volume kube-api-access-mppg7 for pod openstack/keystone-6638-account-create-update-fqljq: failed to fetch token: serviceaccounts "galera-openstack" not found Mar 10 19:14:02 crc kubenswrapper[4861]: E0310 19:14:02.684059 4861 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c6c9fd69-da59-4e5d-9630-32a42c1dc30e-kube-api-access-mppg7 podName:c6c9fd69-da59-4e5d-9630-32a42c1dc30e nodeName:}" failed. No retries permitted until 2026-03-10 19:14:03.684041385 +0000 UTC m=+1587.447477345 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-mppg7" (UniqueName: "kubernetes.io/projected/c6c9fd69-da59-4e5d-9630-32a42c1dc30e-kube-api-access-mppg7") pod "keystone-6638-account-create-update-fqljq" (UID: "c6c9fd69-da59-4e5d-9630-32a42c1dc30e") : failed to fetch token: serviceaccounts "galera-openstack" not found Mar 10 19:14:02 crc kubenswrapper[4861]: I0310 19:14:02.685170 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8d86917a-2e89-4e29-a1f2-673b0afbf27a-logs" (OuterVolumeSpecName: "logs") pod "8d86917a-2e89-4e29-a1f2-673b0afbf27a" (UID: "8d86917a-2e89-4e29-a1f2-673b0afbf27a"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 10 19:14:02 crc kubenswrapper[4861]: I0310 19:14:02.685466 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/509298b8-3d6b-4182-b989-c25c4791ce6b-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "509298b8-3d6b-4182-b989-c25c4791ce6b" (UID: "509298b8-3d6b-4182-b989-c25c4791ce6b"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 10 19:14:02 crc kubenswrapper[4861]: I0310 19:14:02.689279 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8d86917a-2e89-4e29-a1f2-673b0afbf27a-kube-api-access-6gdgf" (OuterVolumeSpecName: "kube-api-access-6gdgf") pod "8d86917a-2e89-4e29-a1f2-673b0afbf27a" (UID: "8d86917a-2e89-4e29-a1f2-673b0afbf27a"). InnerVolumeSpecName "kube-api-access-6gdgf". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 10 19:14:02 crc kubenswrapper[4861]: I0310 19:14:02.689686 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/509298b8-3d6b-4182-b989-c25c4791ce6b-logs" (OuterVolumeSpecName: "logs") pod "509298b8-3d6b-4182-b989-c25c4791ce6b" (UID: "509298b8-3d6b-4182-b989-c25c4791ce6b"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 10 19:14:02 crc kubenswrapper[4861]: I0310 19:14:02.694137 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8d86917a-2e89-4e29-a1f2-673b0afbf27a-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "8d86917a-2e89-4e29-a1f2-673b0afbf27a" (UID: "8d86917a-2e89-4e29-a1f2-673b0afbf27a"). InnerVolumeSpecName "etc-machine-id". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 10 19:14:02 crc kubenswrapper[4861]: E0310 19:14:02.694333 4861 configmap.go:193] Couldn't get configMap openstack/openstack-scripts: configmap "openstack-scripts" not found Mar 10 19:14:02 crc kubenswrapper[4861]: E0310 19:14:02.694657 4861 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c6c9fd69-da59-4e5d-9630-32a42c1dc30e-operator-scripts podName:c6c9fd69-da59-4e5d-9630-32a42c1dc30e nodeName:}" failed. No retries permitted until 2026-03-10 19:14:03.694637497 +0000 UTC m=+1587.458073457 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/c6c9fd69-da59-4e5d-9630-32a42c1dc30e-operator-scripts") pod "keystone-6638-account-create-update-fqljq" (UID: "c6c9fd69-da59-4e5d-9630-32a42c1dc30e") : configmap "openstack-scripts" not found Mar 10 19:14:02 crc kubenswrapper[4861]: I0310 19:14:02.730522 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/509298b8-3d6b-4182-b989-c25c4791ce6b-scripts" (OuterVolumeSpecName: "scripts") pod "509298b8-3d6b-4182-b989-c25c4791ce6b" (UID: "509298b8-3d6b-4182-b989-c25c4791ce6b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 10 19:14:02 crc kubenswrapper[4861]: I0310 19:14:02.730836 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8d86917a-2e89-4e29-a1f2-673b0afbf27a-scripts" (OuterVolumeSpecName: "scripts") pod "8d86917a-2e89-4e29-a1f2-673b0afbf27a" (UID: "8d86917a-2e89-4e29-a1f2-673b0afbf27a"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 10 19:14:02 crc kubenswrapper[4861]: I0310 19:14:02.731671 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/509298b8-3d6b-4182-b989-c25c4791ce6b-kube-api-access-6wwh9" (OuterVolumeSpecName: "kube-api-access-6wwh9") pod "509298b8-3d6b-4182-b989-c25c4791ce6b" (UID: "509298b8-3d6b-4182-b989-c25c4791ce6b"). InnerVolumeSpecName "kube-api-access-6wwh9". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 10 19:14:02 crc kubenswrapper[4861]: I0310 19:14:02.735008 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8d86917a-2e89-4e29-a1f2-673b0afbf27a-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "8d86917a-2e89-4e29-a1f2-673b0afbf27a" (UID: "8d86917a-2e89-4e29-a1f2-673b0afbf27a"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 10 19:14:02 crc kubenswrapper[4861]: I0310 19:14:02.735851 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7cd47c3f-4d27-4dae-bac5-a6f81ce01cd2-kube-api-access-pbk2f" (OuterVolumeSpecName: "kube-api-access-pbk2f") pod "7cd47c3f-4d27-4dae-bac5-a6f81ce01cd2" (UID: "7cd47c3f-4d27-4dae-bac5-a6f81ce01cd2"). InnerVolumeSpecName "kube-api-access-pbk2f". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 10 19:14:02 crc kubenswrapper[4861]: I0310 19:14:02.741458 4861 scope.go:117] "RemoveContainer" containerID="a8146baaa87f96997feb8adc35ad2ceeace3b69ee66ea36c6996074e20d85886" Mar 10 19:14:02 crc kubenswrapper[4861]: E0310 19:14:02.742102 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a8146baaa87f96997feb8adc35ad2ceeace3b69ee66ea36c6996074e20d85886\": container with ID starting with a8146baaa87f96997feb8adc35ad2ceeace3b69ee66ea36c6996074e20d85886 not found: ID does not exist" containerID="a8146baaa87f96997feb8adc35ad2ceeace3b69ee66ea36c6996074e20d85886" Mar 10 19:14:02 crc kubenswrapper[4861]: I0310 19:14:02.742142 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a8146baaa87f96997feb8adc35ad2ceeace3b69ee66ea36c6996074e20d85886"} err="failed to get container status \"a8146baaa87f96997feb8adc35ad2ceeace3b69ee66ea36c6996074e20d85886\": rpc error: code = NotFound desc = could not find container \"a8146baaa87f96997feb8adc35ad2ceeace3b69ee66ea36c6996074e20d85886\": container with ID starting with a8146baaa87f96997feb8adc35ad2ceeace3b69ee66ea36c6996074e20d85886 not found: ID does not exist" Mar 10 19:14:02 crc kubenswrapper[4861]: I0310 19:14:02.742169 4861 scope.go:117] "RemoveContainer" containerID="001bd034047bd1fb56f256a9f728c8c20197f9cb2c630b0bf6f2b961fe0c48ec" Mar 10 19:14:02 crc kubenswrapper[4861]: E0310 19:14:02.742473 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"001bd034047bd1fb56f256a9f728c8c20197f9cb2c630b0bf6f2b961fe0c48ec\": container with ID starting with 001bd034047bd1fb56f256a9f728c8c20197f9cb2c630b0bf6f2b961fe0c48ec not found: ID does not exist" containerID="001bd034047bd1fb56f256a9f728c8c20197f9cb2c630b0bf6f2b961fe0c48ec" Mar 10 19:14:02 crc kubenswrapper[4861]: I0310 19:14:02.742502 
4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"001bd034047bd1fb56f256a9f728c8c20197f9cb2c630b0bf6f2b961fe0c48ec"} err="failed to get container status \"001bd034047bd1fb56f256a9f728c8c20197f9cb2c630b0bf6f2b961fe0c48ec\": rpc error: code = NotFound desc = could not find container \"001bd034047bd1fb56f256a9f728c8c20197f9cb2c630b0bf6f2b961fe0c48ec\": container with ID starting with 001bd034047bd1fb56f256a9f728c8c20197f9cb2c630b0bf6f2b961fe0c48ec not found: ID does not exist" Mar 10 19:14:02 crc kubenswrapper[4861]: I0310 19:14:02.756209 4861 generic.go:334] "Generic (PLEG): container finished" podID="7cd47c3f-4d27-4dae-bac5-a6f81ce01cd2" containerID="df187ab1e346e2781a98eda8c445f3108069bc4bf78574552f5c61fddde8cf7f" exitCode=2 Mar 10 19:14:02 crc kubenswrapper[4861]: I0310 19:14:02.756281 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"7cd47c3f-4d27-4dae-bac5-a6f81ce01cd2","Type":"ContainerDied","Data":"df187ab1e346e2781a98eda8c445f3108069bc4bf78574552f5c61fddde8cf7f"} Mar 10 19:14:02 crc kubenswrapper[4861]: I0310 19:14:02.756326 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"7cd47c3f-4d27-4dae-bac5-a6f81ce01cd2","Type":"ContainerDied","Data":"167c7d0b8e951a07b4abc02208342afe802d76dcc658ea89add07c866ef620cd"} Mar 10 19:14:02 crc kubenswrapper[4861]: I0310 19:14:02.756344 4861 scope.go:117] "RemoveContainer" containerID="df187ab1e346e2781a98eda8c445f3108069bc4bf78574552f5c61fddde8cf7f" Mar 10 19:14:02 crc kubenswrapper[4861]: I0310 19:14:02.756450 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Mar 10 19:14:02 crc kubenswrapper[4861]: I0310 19:14:02.758988 4861 generic.go:334] "Generic (PLEG): container finished" podID="79b1b96d-52e3-4a16-8fc1-d09188b5ebc1" containerID="93c0c7d20bdf5ba3484b74d8194ad89a2b4dad5778c0dc5d0876a0f04995cc86" exitCode=0 Mar 10 19:14:02 crc kubenswrapper[4861]: I0310 19:14:02.759052 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5d84cf8948-mg4jb" event={"ID":"79b1b96d-52e3-4a16-8fc1-d09188b5ebc1","Type":"ContainerDied","Data":"93c0c7d20bdf5ba3484b74d8194ad89a2b4dad5778c0dc5d0876a0f04995cc86"} Mar 10 19:14:02 crc kubenswrapper[4861]: I0310 19:14:02.760462 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29552834-dlwsb" event={"ID":"91e5ef26-1440-4ffb-84d9-2bc5e0bed45c","Type":"ContainerStarted","Data":"c755581050b6ae9a6f0cd7fb58f35e2adb255f4bd2231c7f0edfbebc13880511"} Mar 10 19:14:02 crc kubenswrapper[4861]: I0310 19:14:02.763041 4861 generic.go:334] "Generic (PLEG): container finished" podID="a31600f9-88a4-4ecb-8da3-84c966bf4a63" containerID="ae3d77023ba112fc99fe18d4758acbc6d9046be7d58409111c17e57dab6ae470" exitCode=0 Mar 10 19:14:02 crc kubenswrapper[4861]: I0310 19:14:02.763064 4861 generic.go:334] "Generic (PLEG): container finished" podID="a31600f9-88a4-4ecb-8da3-84c966bf4a63" containerID="4fdb51d57efd04be1d17c090ac4d170f60279b658b8137c8e7edb527c392d4ce" exitCode=2 Mar 10 19:14:02 crc kubenswrapper[4861]: I0310 19:14:02.763074 4861 generic.go:334] "Generic (PLEG): container finished" podID="a31600f9-88a4-4ecb-8da3-84c966bf4a63" containerID="20d0f039122c6a0d6f5c8792e4c4a4a041ef8e480ba4dd21d575218242541879" exitCode=0 Mar 10 19:14:02 crc kubenswrapper[4861]: I0310 19:14:02.763081 4861 generic.go:334] "Generic (PLEG): container finished" podID="a31600f9-88a4-4ecb-8da3-84c966bf4a63" containerID="6c09f38909e1c2592c7435cdb6107fe20a357f1ed362d43e8283dd3143d20885" exitCode=0 Mar 10 19:14:02 crc 
kubenswrapper[4861]: I0310 19:14:02.763076 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a31600f9-88a4-4ecb-8da3-84c966bf4a63","Type":"ContainerDied","Data":"ae3d77023ba112fc99fe18d4758acbc6d9046be7d58409111c17e57dab6ae470"} Mar 10 19:14:02 crc kubenswrapper[4861]: I0310 19:14:02.763143 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a31600f9-88a4-4ecb-8da3-84c966bf4a63","Type":"ContainerDied","Data":"4fdb51d57efd04be1d17c090ac4d170f60279b658b8137c8e7edb527c392d4ce"} Mar 10 19:14:02 crc kubenswrapper[4861]: I0310 19:14:02.763157 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a31600f9-88a4-4ecb-8da3-84c966bf4a63","Type":"ContainerDied","Data":"20d0f039122c6a0d6f5c8792e4c4a4a041ef8e480ba4dd21d575218242541879"} Mar 10 19:14:02 crc kubenswrapper[4861]: I0310 19:14:02.763166 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a31600f9-88a4-4ecb-8da3-84c966bf4a63","Type":"ContainerDied","Data":"6c09f38909e1c2592c7435cdb6107fe20a357f1ed362d43e8283dd3143d20885"} Mar 10 19:14:02 crc kubenswrapper[4861]: I0310 19:14:02.764225 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage10-crc" (OuterVolumeSpecName: "glance") pod "509298b8-3d6b-4182-b989-c25c4791ce6b" (UID: "509298b8-3d6b-4182-b989-c25c4791ce6b"). InnerVolumeSpecName "local-storage10-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Mar 10 19:14:02 crc kubenswrapper[4861]: I0310 19:14:02.764693 4861 generic.go:334] "Generic (PLEG): container finished" podID="ca9800e2-ceed-4197-90f1-97d14c918e45" containerID="2979a50f5f190d0793c2fc18e07aa06ac371e089c3304abe6dc08d185e172174" exitCode=0 Mar 10 19:14:02 crc kubenswrapper[4861]: I0310 19:14:02.764801 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"ca9800e2-ceed-4197-90f1-97d14c918e45","Type":"ContainerDied","Data":"2979a50f5f190d0793c2fc18e07aa06ac371e089c3304abe6dc08d185e172174"} Mar 10 19:14:02 crc kubenswrapper[4861]: I0310 19:14:02.769157 4861 generic.go:334] "Generic (PLEG): container finished" podID="9140f7c5-893a-4128-85aa-2db96537b483" containerID="e98ecd9f074cea32b652b4dc4bff3f04e0cf6f8c51cb075afe090dd79b3971a4" exitCode=0 Mar 10 19:14:02 crc kubenswrapper[4861]: I0310 19:14:02.769188 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-64c5bb5d74-nmtmm" event={"ID":"9140f7c5-893a-4128-85aa-2db96537b483","Type":"ContainerDied","Data":"e98ecd9f074cea32b652b4dc4bff3f04e0cf6f8c51cb075afe090dd79b3971a4"} Mar 10 19:14:02 crc kubenswrapper[4861]: I0310 19:14:02.769585 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-64c5bb5d74-nmtmm" event={"ID":"9140f7c5-893a-4128-85aa-2db96537b483","Type":"ContainerDied","Data":"1c412a58ce00ee085bd86380d453656709c52d05ec9b8481474604be4d509245"} Mar 10 19:14:02 crc kubenswrapper[4861]: I0310 19:14:02.769227 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-64c5bb5d74-nmtmm" Mar 10 19:14:02 crc kubenswrapper[4861]: I0310 19:14:02.771633 4861 generic.go:334] "Generic (PLEG): container finished" podID="509298b8-3d6b-4182-b989-c25c4791ce6b" containerID="439de4fa63cd6df8ebf550bbedc971f32ef2892cd74ec239cbd5feceb0c8a97b" exitCode=0 Mar 10 19:14:02 crc kubenswrapper[4861]: I0310 19:14:02.771891 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Mar 10 19:14:02 crc kubenswrapper[4861]: I0310 19:14:02.772249 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"509298b8-3d6b-4182-b989-c25c4791ce6b","Type":"ContainerDied","Data":"439de4fa63cd6df8ebf550bbedc971f32ef2892cd74ec239cbd5feceb0c8a97b"} Mar 10 19:14:02 crc kubenswrapper[4861]: I0310 19:14:02.772322 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"509298b8-3d6b-4182-b989-c25c4791ce6b","Type":"ContainerDied","Data":"643322ba32f068134ebcbe3ed48fde9698ce8f32188f5c30981008358a74a46e"} Mar 10 19:14:02 crc kubenswrapper[4861]: I0310 19:14:02.773603 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7cd47c3f-4d27-4dae-bac5-a6f81ce01cd2-kube-state-metrics-tls-config" (OuterVolumeSpecName: "kube-state-metrics-tls-config") pod "7cd47c3f-4d27-4dae-bac5-a6f81ce01cd2" (UID: "7cd47c3f-4d27-4dae-bac5-a6f81ce01cd2"). InnerVolumeSpecName "kube-state-metrics-tls-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 10 19:14:02 crc kubenswrapper[4861]: I0310 19:14:02.774483 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9140f7c5-893a-4128-85aa-2db96537b483-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "9140f7c5-893a-4128-85aa-2db96537b483" (UID: "9140f7c5-893a-4128-85aa-2db96537b483"). 
InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 10 19:14:02 crc kubenswrapper[4861]: I0310 19:14:02.775516 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/509298b8-3d6b-4182-b989-c25c4791ce6b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "509298b8-3d6b-4182-b989-c25c4791ce6b" (UID: "509298b8-3d6b-4182-b989-c25c4791ce6b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 10 19:14:02 crc kubenswrapper[4861]: I0310 19:14:02.778169 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pbk2f\" (UniqueName: \"kubernetes.io/projected/7cd47c3f-4d27-4dae-bac5-a6f81ce01cd2-kube-api-access-pbk2f\") on node \"crc\" DevicePath \"\"" Mar 10 19:14:02 crc kubenswrapper[4861]: I0310 19:14:02.778248 4861 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/8d86917a-2e89-4e29-a1f2-673b0afbf27a-etc-machine-id\") on node \"crc\" DevicePath \"\"" Mar 10 19:14:02 crc kubenswrapper[4861]: I0310 19:14:02.778311 4861 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/509298b8-3d6b-4182-b989-c25c4791ce6b-scripts\") on node \"crc\" DevicePath \"\"" Mar 10 19:14:02 crc kubenswrapper[4861]: I0310 19:14:02.778368 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6wwh9\" (UniqueName: \"kubernetes.io/projected/509298b8-3d6b-4182-b989-c25c4791ce6b-kube-api-access-6wwh9\") on node \"crc\" DevicePath \"\"" Mar 10 19:14:02 crc kubenswrapper[4861]: I0310 19:14:02.778419 4861 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/509298b8-3d6b-4182-b989-c25c4791ce6b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 10 19:14:02 crc kubenswrapper[4861]: I0310 19:14:02.778481 4861 reconciler_common.go:293] "Volume 
detached for volume \"kube-api-access-6gdgf\" (UniqueName: \"kubernetes.io/projected/8d86917a-2e89-4e29-a1f2-673b0afbf27a-kube-api-access-6gdgf\") on node \"crc\" DevicePath \"\"" Mar 10 19:14:02 crc kubenswrapper[4861]: I0310 19:14:02.778532 4861 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8d86917a-2e89-4e29-a1f2-673b0afbf27a-config-data-custom\") on node \"crc\" DevicePath \"\"" Mar 10 19:14:02 crc kubenswrapper[4861]: I0310 19:14:02.778587 4861 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/509298b8-3d6b-4182-b989-c25c4791ce6b-httpd-run\") on node \"crc\" DevicePath \"\"" Mar 10 19:14:02 crc kubenswrapper[4861]: I0310 19:14:02.778639 4861 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8d86917a-2e89-4e29-a1f2-673b0afbf27a-logs\") on node \"crc\" DevicePath \"\"" Mar 10 19:14:02 crc kubenswrapper[4861]: I0310 19:14:02.778722 4861 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8d86917a-2e89-4e29-a1f2-673b0afbf27a-scripts\") on node \"crc\" DevicePath \"\"" Mar 10 19:14:02 crc kubenswrapper[4861]: I0310 19:14:02.778784 4861 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9140f7c5-893a-4128-85aa-2db96537b483-public-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 10 19:14:02 crc kubenswrapper[4861]: I0310 19:14:02.778854 4861 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" " Mar 10 19:14:02 crc kubenswrapper[4861]: I0310 19:14:02.779059 4861 reconciler_common.go:293] "Volume detached for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/7cd47c3f-4d27-4dae-bac5-a6f81ce01cd2-kube-state-metrics-tls-config\") on 
node \"crc\" DevicePath \"\"" Mar 10 19:14:02 crc kubenswrapper[4861]: I0310 19:14:02.779131 4861 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/509298b8-3d6b-4182-b989-c25c4791ce6b-logs\") on node \"crc\" DevicePath \"\"" Mar 10 19:14:02 crc kubenswrapper[4861]: I0310 19:14:02.780452 4861 generic.go:334] "Generic (PLEG): container finished" podID="41fdd1f2-c0e5-4dbe-aa18-bd6dd1a4754d" containerID="e208585919e167041ae926b5358edd36d959771ed35af5c54fe2d53fe8efa41a" exitCode=0 Mar 10 19:14:02 crc kubenswrapper[4861]: I0310 19:14:02.780493 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"41fdd1f2-c0e5-4dbe-aa18-bd6dd1a4754d","Type":"ContainerDied","Data":"e208585919e167041ae926b5358edd36d959771ed35af5c54fe2d53fe8efa41a"} Mar 10 19:14:02 crc kubenswrapper[4861]: I0310 19:14:02.781846 4861 generic.go:334] "Generic (PLEG): container finished" podID="930df8f4-7ebf-4425-976f-4f52654586bb" containerID="6b7111851cf4f58c8e796e6b95e77ca3422cc025ae5304753172c6a1675a0fd1" exitCode=0 Mar 10 19:14:02 crc kubenswrapper[4861]: I0310 19:14:02.781893 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"930df8f4-7ebf-4425-976f-4f52654586bb","Type":"ContainerDied","Data":"6b7111851cf4f58c8e796e6b95e77ca3422cc025ae5304753172c6a1675a0fd1"} Mar 10 19:14:02 crc kubenswrapper[4861]: I0310 19:14:02.783048 4861 generic.go:334] "Generic (PLEG): container finished" podID="8d86917a-2e89-4e29-a1f2-673b0afbf27a" containerID="74e14b5c097a70640577953cf9b0014e196f222e4e500727664862cc2d4d4729" exitCode=0 Mar 10 19:14:02 crc kubenswrapper[4861]: I0310 19:14:02.783085 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"8d86917a-2e89-4e29-a1f2-673b0afbf27a","Type":"ContainerDied","Data":"74e14b5c097a70640577953cf9b0014e196f222e4e500727664862cc2d4d4729"} Mar 10 19:14:02 crc kubenswrapper[4861]: I0310 19:14:02.783101 4861 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"8d86917a-2e89-4e29-a1f2-673b0afbf27a","Type":"ContainerDied","Data":"d881139b2fd2ac6b219cce8335c0620d09b01dbc221ba8d72cb33b9d343c1ec7"} Mar 10 19:14:02 crc kubenswrapper[4861]: I0310 19:14:02.783158 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Mar 10 19:14:02 crc kubenswrapper[4861]: I0310 19:14:02.785991 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-lrz47" event={"ID":"ee15a5af-fb3a-45fc-bfa1-b9eb45418a32","Type":"ContainerDied","Data":"0f34857216274060cf1bce993781900bce824c3dc69d0253f4e5af38eadacf60"} Mar 10 19:14:02 crc kubenswrapper[4861]: I0310 19:14:02.786060 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-lrz47" Mar 10 19:14:02 crc kubenswrapper[4861]: I0310 19:14:02.800928 4861 generic.go:334] "Generic (PLEG): container finished" podID="ef3a31e3-d3ba-4f5c-950a-1355bb61f657" containerID="0a745593b95faccead341e781c555d776ec808c970f5a9b91a652ebcebc1c8a2" exitCode=0 Mar 10 19:14:02 crc kubenswrapper[4861]: I0310 19:14:02.801010 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"ef3a31e3-d3ba-4f5c-950a-1355bb61f657","Type":"ContainerDied","Data":"0a745593b95faccead341e781c555d776ec808c970f5a9b91a652ebcebc1c8a2"} Mar 10 19:14:02 crc kubenswrapper[4861]: I0310 19:14:02.801052 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"ef3a31e3-d3ba-4f5c-950a-1355bb61f657","Type":"ContainerDied","Data":"42a4b209bcffc36e875f5d3604fbcdee8bae055f5c7543873ac0a25a5c12ee9f"} Mar 10 19:14:02 crc kubenswrapper[4861]: I0310 19:14:02.801094 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Mar 10 19:14:02 crc kubenswrapper[4861]: I0310 19:14:02.802328 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8d86917a-2e89-4e29-a1f2-673b0afbf27a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8d86917a-2e89-4e29-a1f2-673b0afbf27a" (UID: "8d86917a-2e89-4e29-a1f2-673b0afbf27a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 10 19:14:02 crc kubenswrapper[4861]: I0310 19:14:02.813374 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-6638-account-create-update-fqljq" Mar 10 19:14:02 crc kubenswrapper[4861]: I0310 19:14:02.817588 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Mar 10 19:14:02 crc kubenswrapper[4861]: I0310 19:14:02.842703 4861 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage10-crc" (UniqueName: "kubernetes.io/local-volume/local-storage10-crc") on node "crc" Mar 10 19:14:02 crc kubenswrapper[4861]: I0310 19:14:02.845866 4861 scope.go:117] "RemoveContainer" containerID="df187ab1e346e2781a98eda8c445f3108069bc4bf78574552f5c61fddde8cf7f" Mar 10 19:14:02 crc kubenswrapper[4861]: E0310 19:14:02.853612 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"df187ab1e346e2781a98eda8c445f3108069bc4bf78574552f5c61fddde8cf7f\": container with ID starting with df187ab1e346e2781a98eda8c445f3108069bc4bf78574552f5c61fddde8cf7f not found: ID does not exist" containerID="df187ab1e346e2781a98eda8c445f3108069bc4bf78574552f5c61fddde8cf7f" Mar 10 19:14:02 crc kubenswrapper[4861]: I0310 19:14:02.853662 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"df187ab1e346e2781a98eda8c445f3108069bc4bf78574552f5c61fddde8cf7f"} 
err="failed to get container status \"df187ab1e346e2781a98eda8c445f3108069bc4bf78574552f5c61fddde8cf7f\": rpc error: code = NotFound desc = could not find container \"df187ab1e346e2781a98eda8c445f3108069bc4bf78574552f5c61fddde8cf7f\": container with ID starting with df187ab1e346e2781a98eda8c445f3108069bc4bf78574552f5c61fddde8cf7f not found: ID does not exist" Mar 10 19:14:02 crc kubenswrapper[4861]: I0310 19:14:02.853684 4861 scope.go:117] "RemoveContainer" containerID="e98ecd9f074cea32b652b4dc4bff3f04e0cf6f8c51cb075afe090dd79b3971a4" Mar 10 19:14:02 crc kubenswrapper[4861]: I0310 19:14:02.865238 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8d86917a-2e89-4e29-a1f2-673b0afbf27a-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "8d86917a-2e89-4e29-a1f2-673b0afbf27a" (UID: "8d86917a-2e89-4e29-a1f2-673b0afbf27a"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 10 19:14:02 crc kubenswrapper[4861]: I0310 19:14:02.867229 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7cd47c3f-4d27-4dae-bac5-a6f81ce01cd2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7cd47c3f-4d27-4dae-bac5-a6f81ce01cd2" (UID: "7cd47c3f-4d27-4dae-bac5-a6f81ce01cd2"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 10 19:14:02 crc kubenswrapper[4861]: I0310 19:14:02.879554 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8d86917a-2e89-4e29-a1f2-673b0afbf27a-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "8d86917a-2e89-4e29-a1f2-673b0afbf27a" (UID: "8d86917a-2e89-4e29-a1f2-673b0afbf27a"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 10 19:14:02 crc kubenswrapper[4861]: I0310 19:14:02.879842 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/930df8f4-7ebf-4425-976f-4f52654586bb-public-tls-certs\") pod \"930df8f4-7ebf-4425-976f-4f52654586bb\" (UID: \"930df8f4-7ebf-4425-976f-4f52654586bb\") " Mar 10 19:14:02 crc kubenswrapper[4861]: I0310 19:14:02.880042 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8d86917a-2e89-4e29-a1f2-673b0afbf27a-public-tls-certs\") pod \"8d86917a-2e89-4e29-a1f2-673b0afbf27a\" (UID: \"8d86917a-2e89-4e29-a1f2-673b0afbf27a\") " Mar 10 19:14:02 crc kubenswrapper[4861]: I0310 19:14:02.880081 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/930df8f4-7ebf-4425-976f-4f52654586bb-logs\") pod \"930df8f4-7ebf-4425-976f-4f52654586bb\" (UID: \"930df8f4-7ebf-4425-976f-4f52654586bb\") " Mar 10 19:14:02 crc kubenswrapper[4861]: I0310 19:14:02.880112 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dcmsh\" (UniqueName: \"kubernetes.io/projected/930df8f4-7ebf-4425-976f-4f52654586bb-kube-api-access-dcmsh\") pod \"930df8f4-7ebf-4425-976f-4f52654586bb\" (UID: \"930df8f4-7ebf-4425-976f-4f52654586bb\") " Mar 10 19:14:02 crc kubenswrapper[4861]: I0310 19:14:02.880138 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/930df8f4-7ebf-4425-976f-4f52654586bb-internal-tls-certs\") pod \"930df8f4-7ebf-4425-976f-4f52654586bb\" (UID: \"930df8f4-7ebf-4425-976f-4f52654586bb\") " Mar 10 19:14:02 crc kubenswrapper[4861]: I0310 19:14:02.880197 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/930df8f4-7ebf-4425-976f-4f52654586bb-combined-ca-bundle\") pod \"930df8f4-7ebf-4425-976f-4f52654586bb\" (UID: \"930df8f4-7ebf-4425-976f-4f52654586bb\") " Mar 10 19:14:02 crc kubenswrapper[4861]: I0310 19:14:02.880212 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/930df8f4-7ebf-4425-976f-4f52654586bb-config-data\") pod \"930df8f4-7ebf-4425-976f-4f52654586bb\" (UID: \"930df8f4-7ebf-4425-976f-4f52654586bb\") " Mar 10 19:14:02 crc kubenswrapper[4861]: W0310 19:14:02.880380 4861 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/8d86917a-2e89-4e29-a1f2-673b0afbf27a/volumes/kubernetes.io~secret/public-tls-certs Mar 10 19:14:02 crc kubenswrapper[4861]: I0310 19:14:02.880514 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8d86917a-2e89-4e29-a1f2-673b0afbf27a-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "8d86917a-2e89-4e29-a1f2-673b0afbf27a" (UID: "8d86917a-2e89-4e29-a1f2-673b0afbf27a"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 10 19:14:02 crc kubenswrapper[4861]: I0310 19:14:02.880539 4861 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8d86917a-2e89-4e29-a1f2-673b0afbf27a-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 10 19:14:02 crc kubenswrapper[4861]: I0310 19:14:02.880665 4861 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8d86917a-2e89-4e29-a1f2-673b0afbf27a-public-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 10 19:14:02 crc kubenswrapper[4861]: I0310 19:14:02.880969 4861 reconciler_common.go:293] "Volume detached for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" DevicePath \"\"" Mar 10 19:14:02 crc kubenswrapper[4861]: I0310 19:14:02.881286 4861 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7cd47c3f-4d27-4dae-bac5-a6f81ce01cd2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 10 19:14:02 crc kubenswrapper[4861]: I0310 19:14:02.881507 4861 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8d86917a-2e89-4e29-a1f2-673b0afbf27a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 10 19:14:02 crc kubenswrapper[4861]: I0310 19:14:02.881471 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/930df8f4-7ebf-4425-976f-4f52654586bb-logs" (OuterVolumeSpecName: "logs") pod "930df8f4-7ebf-4425-976f-4f52654586bb" (UID: "930df8f4-7ebf-4425-976f-4f52654586bb"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 10 19:14:02 crc kubenswrapper[4861]: I0310 19:14:02.886368 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/930df8f4-7ebf-4425-976f-4f52654586bb-kube-api-access-dcmsh" (OuterVolumeSpecName: "kube-api-access-dcmsh") pod "930df8f4-7ebf-4425-976f-4f52654586bb" (UID: "930df8f4-7ebf-4425-976f-4f52654586bb"). InnerVolumeSpecName "kube-api-access-dcmsh". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 10 19:14:02 crc kubenswrapper[4861]: I0310 19:14:02.891559 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8d86917a-2e89-4e29-a1f2-673b0afbf27a-config-data" (OuterVolumeSpecName: "config-data") pod "8d86917a-2e89-4e29-a1f2-673b0afbf27a" (UID: "8d86917a-2e89-4e29-a1f2-673b0afbf27a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 10 19:14:02 crc kubenswrapper[4861]: I0310 19:14:02.897727 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-lrz47"] Mar 10 19:14:02 crc kubenswrapper[4861]: I0310 19:14:02.905619 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7cd47c3f-4d27-4dae-bac5-a6f81ce01cd2-kube-state-metrics-tls-certs" (OuterVolumeSpecName: "kube-state-metrics-tls-certs") pod "7cd47c3f-4d27-4dae-bac5-a6f81ce01cd2" (UID: "7cd47c3f-4d27-4dae-bac5-a6f81ce01cd2"). InnerVolumeSpecName "kube-state-metrics-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 10 19:14:02 crc kubenswrapper[4861]: I0310 19:14:02.909457 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/930df8f4-7ebf-4425-976f-4f52654586bb-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "930df8f4-7ebf-4425-976f-4f52654586bb" (UID: "930df8f4-7ebf-4425-976f-4f52654586bb"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 10 19:14:02 crc kubenswrapper[4861]: I0310 19:14:02.911009 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-lrz47"] Mar 10 19:14:02 crc kubenswrapper[4861]: I0310 19:14:02.930074 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/930df8f4-7ebf-4425-976f-4f52654586bb-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "930df8f4-7ebf-4425-976f-4f52654586bb" (UID: "930df8f4-7ebf-4425-976f-4f52654586bb"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 10 19:14:02 crc kubenswrapper[4861]: I0310 19:14:02.971522 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/930df8f4-7ebf-4425-976f-4f52654586bb-config-data" (OuterVolumeSpecName: "config-data") pod "930df8f4-7ebf-4425-976f-4f52654586bb" (UID: "930df8f4-7ebf-4425-976f-4f52654586bb"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 10 19:14:02 crc kubenswrapper[4861]: I0310 19:14:02.974376 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/930df8f4-7ebf-4425-976f-4f52654586bb-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "930df8f4-7ebf-4425-976f-4f52654586bb" (UID: "930df8f4-7ebf-4425-976f-4f52654586bb"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 10 19:14:02 crc kubenswrapper[4861]: I0310 19:14:02.976313 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0fd5b5c7-5272-4d6a-b6ab-94ae5b644a3b" path="/var/lib/kubelet/pods/0fd5b5c7-5272-4d6a-b6ab-94ae5b644a3b/volumes" Mar 10 19:14:02 crc kubenswrapper[4861]: I0310 19:14:02.977039 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1b1af633-31f5-4658-bda4-fc9c010d6280" path="/var/lib/kubelet/pods/1b1af633-31f5-4658-bda4-fc9c010d6280/volumes" Mar 10 19:14:02 crc kubenswrapper[4861]: I0310 19:14:02.977630 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d0a3228-ad14-4cd8-8c5a-969bad245ecf" path="/var/lib/kubelet/pods/1d0a3228-ad14-4cd8-8c5a-969bad245ecf/volumes" Mar 10 19:14:02 crc kubenswrapper[4861]: I0310 19:14:02.978081 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1dff60e3-9ca8-461b-8d7e-018b626677e8" path="/var/lib/kubelet/pods/1dff60e3-9ca8-461b-8d7e-018b626677e8/volumes" Mar 10 19:14:02 crc kubenswrapper[4861]: I0310 19:14:02.980645 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2683e959-ecff-478e-aa0a-acf18f482d39" path="/var/lib/kubelet/pods/2683e959-ecff-478e-aa0a-acf18f482d39/volumes" Mar 10 19:14:02 crc kubenswrapper[4861]: I0310 19:14:02.984859 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7dd3c011-e602-441e-9682-213853dbb095" path="/var/lib/kubelet/pods/7dd3c011-e602-441e-9682-213853dbb095/volumes" Mar 10 19:14:02 crc kubenswrapper[4861]: I0310 19:14:02.985358 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7fc483f5-15c5-4a67-b7e1-adab3b97cec7" path="/var/lib/kubelet/pods/7fc483f5-15c5-4a67-b7e1-adab3b97cec7/volumes" Mar 10 19:14:02 crc kubenswrapper[4861]: I0310 19:14:02.985867 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87dcb8b8-2ebd-44e9-a15f-e995495d8b32" 
path="/var/lib/kubelet/pods/87dcb8b8-2ebd-44e9-a15f-e995495d8b32/volumes" Mar 10 19:14:02 crc kubenswrapper[4861]: I0310 19:14:02.986228 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9f3ba611-9f83-40a6-9282-d4e3b0ccfbfd" path="/var/lib/kubelet/pods/9f3ba611-9f83-40a6-9282-d4e3b0ccfbfd/volumes" Mar 10 19:14:02 crc kubenswrapper[4861]: I0310 19:14:02.986644 4861 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/930df8f4-7ebf-4425-976f-4f52654586bb-logs\") on node \"crc\" DevicePath \"\"" Mar 10 19:14:02 crc kubenswrapper[4861]: I0310 19:14:02.986668 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dcmsh\" (UniqueName: \"kubernetes.io/projected/930df8f4-7ebf-4425-976f-4f52654586bb-kube-api-access-dcmsh\") on node \"crc\" DevicePath \"\"" Mar 10 19:14:02 crc kubenswrapper[4861]: I0310 19:14:02.986678 4861 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/930df8f4-7ebf-4425-976f-4f52654586bb-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 10 19:14:02 crc kubenswrapper[4861]: I0310 19:14:02.986686 4861 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/930df8f4-7ebf-4425-976f-4f52654586bb-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 10 19:14:02 crc kubenswrapper[4861]: I0310 19:14:02.986694 4861 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/930df8f4-7ebf-4425-976f-4f52654586bb-config-data\") on node \"crc\" DevicePath \"\"" Mar 10 19:14:02 crc kubenswrapper[4861]: I0310 19:14:02.986715 4861 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8d86917a-2e89-4e29-a1f2-673b0afbf27a-config-data\") on node \"crc\" DevicePath \"\"" Mar 10 19:14:02 crc kubenswrapper[4861]: I0310 19:14:02.986724 4861 
reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/930df8f4-7ebf-4425-976f-4f52654586bb-public-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 10 19:14:02 crc kubenswrapper[4861]: I0310 19:14:02.986733 4861 reconciler_common.go:293] "Volume detached for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/7cd47c3f-4d27-4dae-bac5-a6f81ce01cd2-kube-state-metrics-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 10 19:14:02 crc kubenswrapper[4861]: I0310 19:14:02.987290 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ab499a55-1919-491f-8dc6-12344757201d" path="/var/lib/kubelet/pods/ab499a55-1919-491f-8dc6-12344757201d/volumes" Mar 10 19:14:02 crc kubenswrapper[4861]: I0310 19:14:02.989405 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b7c0d25e-1650-4ce9-8cb7-71d8d9dceba1" path="/var/lib/kubelet/pods/b7c0d25e-1650-4ce9-8cb7-71d8d9dceba1/volumes" Mar 10 19:14:02 crc kubenswrapper[4861]: I0310 19:14:02.989398 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/509298b8-3d6b-4182-b989-c25c4791ce6b-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "509298b8-3d6b-4182-b989-c25c4791ce6b" (UID: "509298b8-3d6b-4182-b989-c25c4791ce6b"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 10 19:14:02 crc kubenswrapper[4861]: I0310 19:14:02.989749 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d523d4ef-a5fe-47c3-b174-b2aefc766755" path="/var/lib/kubelet/pods/d523d4ef-a5fe-47c3-b174-b2aefc766755/volumes" Mar 10 19:14:03 crc kubenswrapper[4861]: I0310 19:14:02.990205 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="da655568-6f44-40e6-af1d-278b701fc52e" path="/var/lib/kubelet/pods/da655568-6f44-40e6-af1d-278b701fc52e/volumes" Mar 10 19:14:03 crc kubenswrapper[4861]: I0310 19:14:02.990991 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dbb40fdc-8e7e-4a1e-b7a0-5456081ba068" path="/var/lib/kubelet/pods/dbb40fdc-8e7e-4a1e-b7a0-5456081ba068/volumes" Mar 10 19:14:03 crc kubenswrapper[4861]: I0310 19:14:02.991338 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ec186926-88f6-4c2f-b44d-e44d62d9d02d" path="/var/lib/kubelet/pods/ec186926-88f6-4c2f-b44d-e44d62d9d02d/volumes" Mar 10 19:14:03 crc kubenswrapper[4861]: I0310 19:14:02.991930 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ee15a5af-fb3a-45fc-bfa1-b9eb45418a32" path="/var/lib/kubelet/pods/ee15a5af-fb3a-45fc-bfa1-b9eb45418a32/volumes" Mar 10 19:14:03 crc kubenswrapper[4861]: I0310 19:14:02.992276 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ef81bd1a-d4db-4b18-a755-5fc31d09e4dd" path="/var/lib/kubelet/pods/ef81bd1a-d4db-4b18-a755-5fc31d09e4dd/volumes" Mar 10 19:14:03 crc kubenswrapper[4861]: I0310 19:14:03.011379 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/509298b8-3d6b-4182-b989-c25c4791ce6b-config-data" (OuterVolumeSpecName: "config-data") pod "509298b8-3d6b-4182-b989-c25c4791ce6b" (UID: "509298b8-3d6b-4182-b989-c25c4791ce6b"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 10 19:14:03 crc kubenswrapper[4861]: I0310 19:14:03.087768 4861 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/509298b8-3d6b-4182-b989-c25c4791ce6b-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 10 19:14:03 crc kubenswrapper[4861]: I0310 19:14:03.087792 4861 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/509298b8-3d6b-4182-b989-c25c4791ce6b-config-data\") on node \"crc\" DevicePath \"\"" Mar 10 19:14:03 crc kubenswrapper[4861]: E0310 19:14:03.109517 4861 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 10 19:14:03 crc kubenswrapper[4861]: container &Container{Name:mariadb-account-create-update,Image:quay.io/podified-antelope-centos9/openstack-mariadb@sha256:763d1f1e8a1cf877c151c59609960fd2fa29e7e50001f8818122a2d51878befa,Command:[/bin/sh -c #!/bin/bash Mar 10 19:14:03 crc kubenswrapper[4861]: Mar 10 19:14:03 crc kubenswrapper[4861]: MYSQL_REMOTE_HOST="" source /var/lib/operator-scripts/mysql_root_auth.sh Mar 10 19:14:03 crc kubenswrapper[4861]: Mar 10 19:14:03 crc kubenswrapper[4861]: export DatabasePassword=${DatabasePassword:?"Please specify a DatabasePassword variable."} Mar 10 19:14:03 crc kubenswrapper[4861]: Mar 10 19:14:03 crc kubenswrapper[4861]: MYSQL_CMD="mysql -h -u root -P 3306" Mar 10 19:14:03 crc kubenswrapper[4861]: Mar 10 19:14:03 crc kubenswrapper[4861]: if [ -n "" ]; then Mar 10 19:14:03 crc kubenswrapper[4861]: GRANT_DATABASE="" Mar 10 19:14:03 crc kubenswrapper[4861]: else Mar 10 19:14:03 crc kubenswrapper[4861]: GRANT_DATABASE="*" Mar 10 19:14:03 crc kubenswrapper[4861]: fi Mar 10 19:14:03 crc kubenswrapper[4861]: Mar 10 19:14:03 crc kubenswrapper[4861]: # going for maximum compatibility here: Mar 10 19:14:03 crc kubenswrapper[4861]: # 1. 
MySQL 8 no longer allows implicit create user when GRANT is used Mar 10 19:14:03 crc kubenswrapper[4861]: # 2. MariaDB has "CREATE OR REPLACE", but MySQL does not Mar 10 19:14:03 crc kubenswrapper[4861]: # 3. create user with CREATE but then do all password and TLS with ALTER to Mar 10 19:14:03 crc kubenswrapper[4861]: # support updates Mar 10 19:14:03 crc kubenswrapper[4861]: Mar 10 19:14:03 crc kubenswrapper[4861]: $MYSQL_CMD < logger="UnhandledError" Mar 10 19:14:03 crc kubenswrapper[4861]: E0310 19:14:03.110809 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mariadb-account-create-update\" with CreateContainerConfigError: \"secret \\\"openstack-mariadb-root-db-secret\\\" not found\"" pod="openstack/root-account-create-update-495qf" podUID="cc36f34f-6f91-4076-bc3e-71e2fc0e797e" Mar 10 19:14:03 crc kubenswrapper[4861]: I0310 19:14:03.195208 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Mar 10 19:14:03 crc kubenswrapper[4861]: I0310 19:14:03.195782 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Mar 10 19:14:03 crc kubenswrapper[4861]: I0310 19:14:03.195818 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-495qf"] Mar 10 19:14:03 crc kubenswrapper[4861]: I0310 19:14:03.196535 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Mar 10 19:14:03 crc kubenswrapper[4861]: I0310 19:14:03.207607 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-6638-account-create-update-fqljq" Mar 10 19:14:03 crc kubenswrapper[4861]: I0310 19:14:03.230886 4861 scope.go:117] "RemoveContainer" containerID="22089e443eba78df73bea89d1cfc14591cb3869b317c834f46d0e6f03d2fcf87" Mar 10 19:14:03 crc kubenswrapper[4861]: I0310 19:14:03.239701 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Mar 10 19:14:03 crc kubenswrapper[4861]: I0310 19:14:03.254435 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Mar 10 19:14:03 crc kubenswrapper[4861]: I0310 19:14:03.266560 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-5d84cf8948-mg4jb" Mar 10 19:14:03 crc kubenswrapper[4861]: I0310 19:14:03.290186 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-72f42\" (UniqueName: \"kubernetes.io/projected/79b1b96d-52e3-4a16-8fc1-d09188b5ebc1-kube-api-access-72f42\") pod \"79b1b96d-52e3-4a16-8fc1-d09188b5ebc1\" (UID: \"79b1b96d-52e3-4a16-8fc1-d09188b5ebc1\") " Mar 10 19:14:03 crc kubenswrapper[4861]: I0310 19:14:03.290248 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a31600f9-88a4-4ecb-8da3-84c966bf4a63-log-httpd\") pod \"a31600f9-88a4-4ecb-8da3-84c966bf4a63\" (UID: \"a31600f9-88a4-4ecb-8da3-84c966bf4a63\") " Mar 10 19:14:03 crc kubenswrapper[4861]: I0310 19:14:03.290387 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/79b1b96d-52e3-4a16-8fc1-d09188b5ebc1-combined-ca-bundle\") pod \"79b1b96d-52e3-4a16-8fc1-d09188b5ebc1\" (UID: \"79b1b96d-52e3-4a16-8fc1-d09188b5ebc1\") " Mar 10 19:14:03 crc kubenswrapper[4861]: I0310 19:14:03.290421 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a31600f9-88a4-4ecb-8da3-84c966bf4a63-scripts\") pod \"a31600f9-88a4-4ecb-8da3-84c966bf4a63\" (UID: \"a31600f9-88a4-4ecb-8da3-84c966bf4a63\") " Mar 10 19:14:03 crc kubenswrapper[4861]: I0310 19:14:03.290456 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a31600f9-88a4-4ecb-8da3-84c966bf4a63-config-data\") pod \"a31600f9-88a4-4ecb-8da3-84c966bf4a63\" (UID: \"a31600f9-88a4-4ecb-8da3-84c966bf4a63\") " Mar 10 19:14:03 crc kubenswrapper[4861]: I0310 19:14:03.290516 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/79b1b96d-52e3-4a16-8fc1-d09188b5ebc1-config-data\") pod \"79b1b96d-52e3-4a16-8fc1-d09188b5ebc1\" (UID: \"79b1b96d-52e3-4a16-8fc1-d09188b5ebc1\") " Mar 10 19:14:03 crc kubenswrapper[4861]: I0310 19:14:03.290612 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hqlhj\" (UniqueName: \"kubernetes.io/projected/a31600f9-88a4-4ecb-8da3-84c966bf4a63-kube-api-access-hqlhj\") pod \"a31600f9-88a4-4ecb-8da3-84c966bf4a63\" (UID: \"a31600f9-88a4-4ecb-8da3-84c966bf4a63\") " Mar 10 19:14:03 crc kubenswrapper[4861]: I0310 19:14:03.291825 4861 scope.go:117] "RemoveContainer" containerID="e98ecd9f074cea32b652b4dc4bff3f04e0cf6f8c51cb075afe090dd79b3971a4" Mar 10 19:14:03 crc kubenswrapper[4861]: I0310 19:14:03.295840 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ca9800e2-ceed-4197-90f1-97d14c918e45-config-data\") pod \"ca9800e2-ceed-4197-90f1-97d14c918e45\" (UID: \"ca9800e2-ceed-4197-90f1-97d14c918e45\") " Mar 10 19:14:03 crc kubenswrapper[4861]: I0310 19:14:03.295974 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/79b1b96d-52e3-4a16-8fc1-d09188b5ebc1-logs\") pod \"79b1b96d-52e3-4a16-8fc1-d09188b5ebc1\" (UID: \"79b1b96d-52e3-4a16-8fc1-d09188b5ebc1\") " Mar 10 19:14:03 crc kubenswrapper[4861]: I0310 19:14:03.296024 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a31600f9-88a4-4ecb-8da3-84c966bf4a63-sg-core-conf-yaml\") pod \"a31600f9-88a4-4ecb-8da3-84c966bf4a63\" (UID: \"a31600f9-88a4-4ecb-8da3-84c966bf4a63\") " Mar 10 19:14:03 crc kubenswrapper[4861]: I0310 19:14:03.296090 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ca9800e2-ceed-4197-90f1-97d14c918e45-logs\") pod \"ca9800e2-ceed-4197-90f1-97d14c918e45\" (UID: \"ca9800e2-ceed-4197-90f1-97d14c918e45\") " Mar 10 19:14:03 crc kubenswrapper[4861]: I0310 19:14:03.296165 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/79b1b96d-52e3-4a16-8fc1-d09188b5ebc1-config-data-custom\") pod \"79b1b96d-52e3-4a16-8fc1-d09188b5ebc1\" (UID: \"79b1b96d-52e3-4a16-8fc1-d09188b5ebc1\") " Mar 10 19:14:03 crc kubenswrapper[4861]: I0310 19:14:03.296237 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a31600f9-88a4-4ecb-8da3-84c966bf4a63-combined-ca-bundle\") pod \"a31600f9-88a4-4ecb-8da3-84c966bf4a63\" (UID: \"a31600f9-88a4-4ecb-8da3-84c966bf4a63\") " Mar 10 19:14:03 crc kubenswrapper[4861]: I0310 19:14:03.296291 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/79b1b96d-52e3-4a16-8fc1-d09188b5ebc1-internal-tls-certs\") pod \"79b1b96d-52e3-4a16-8fc1-d09188b5ebc1\" (UID: \"79b1b96d-52e3-4a16-8fc1-d09188b5ebc1\") " Mar 10 19:14:03 crc kubenswrapper[4861]: I0310 19:14:03.296355 
4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a31600f9-88a4-4ecb-8da3-84c966bf4a63-run-httpd\") pod \"a31600f9-88a4-4ecb-8da3-84c966bf4a63\" (UID: \"a31600f9-88a4-4ecb-8da3-84c966bf4a63\") " Mar 10 19:14:03 crc kubenswrapper[4861]: I0310 19:14:03.296397 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ca9800e2-ceed-4197-90f1-97d14c918e45-combined-ca-bundle\") pod \"ca9800e2-ceed-4197-90f1-97d14c918e45\" (UID: \"ca9800e2-ceed-4197-90f1-97d14c918e45\") " Mar 10 19:14:03 crc kubenswrapper[4861]: I0310 19:14:03.296433 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v4rgh\" (UniqueName: \"kubernetes.io/projected/ca9800e2-ceed-4197-90f1-97d14c918e45-kube-api-access-v4rgh\") pod \"ca9800e2-ceed-4197-90f1-97d14c918e45\" (UID: \"ca9800e2-ceed-4197-90f1-97d14c918e45\") " Mar 10 19:14:03 crc kubenswrapper[4861]: I0310 19:14:03.296460 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/a31600f9-88a4-4ecb-8da3-84c966bf4a63-ceilometer-tls-certs\") pod \"a31600f9-88a4-4ecb-8da3-84c966bf4a63\" (UID: \"a31600f9-88a4-4ecb-8da3-84c966bf4a63\") " Mar 10 19:14:03 crc kubenswrapper[4861]: I0310 19:14:03.296491 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/ca9800e2-ceed-4197-90f1-97d14c918e45-nova-metadata-tls-certs\") pod \"ca9800e2-ceed-4197-90f1-97d14c918e45\" (UID: \"ca9800e2-ceed-4197-90f1-97d14c918e45\") " Mar 10 19:14:03 crc kubenswrapper[4861]: I0310 19:14:03.296510 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/79b1b96d-52e3-4a16-8fc1-d09188b5ebc1-public-tls-certs\") 
pod \"79b1b96d-52e3-4a16-8fc1-d09188b5ebc1\" (UID: \"79b1b96d-52e3-4a16-8fc1-d09188b5ebc1\") " Mar 10 19:14:03 crc kubenswrapper[4861]: I0310 19:14:03.295856 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a31600f9-88a4-4ecb-8da3-84c966bf4a63-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "a31600f9-88a4-4ecb-8da3-84c966bf4a63" (UID: "a31600f9-88a4-4ecb-8da3-84c966bf4a63"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 10 19:14:03 crc kubenswrapper[4861]: I0310 19:14:03.297483 4861 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a31600f9-88a4-4ecb-8da3-84c966bf4a63-log-httpd\") on node \"crc\" DevicePath \"\"" Mar 10 19:14:03 crc kubenswrapper[4861]: I0310 19:14:03.298880 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/79b1b96d-52e3-4a16-8fc1-d09188b5ebc1-logs" (OuterVolumeSpecName: "logs") pod "79b1b96d-52e3-4a16-8fc1-d09188b5ebc1" (UID: "79b1b96d-52e3-4a16-8fc1-d09188b5ebc1"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 10 19:14:03 crc kubenswrapper[4861]: I0310 19:14:03.304398 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Mar 10 19:14:03 crc kubenswrapper[4861]: E0310 19:14:03.312880 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e98ecd9f074cea32b652b4dc4bff3f04e0cf6f8c51cb075afe090dd79b3971a4\": container with ID starting with e98ecd9f074cea32b652b4dc4bff3f04e0cf6f8c51cb075afe090dd79b3971a4 not found: ID does not exist" containerID="e98ecd9f074cea32b652b4dc4bff3f04e0cf6f8c51cb075afe090dd79b3971a4" Mar 10 19:14:03 crc kubenswrapper[4861]: I0310 19:14:03.312930 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e98ecd9f074cea32b652b4dc4bff3f04e0cf6f8c51cb075afe090dd79b3971a4"} err="failed to get container status \"e98ecd9f074cea32b652b4dc4bff3f04e0cf6f8c51cb075afe090dd79b3971a4\": rpc error: code = NotFound desc = could not find container \"e98ecd9f074cea32b652b4dc4bff3f04e0cf6f8c51cb075afe090dd79b3971a4\": container with ID starting with e98ecd9f074cea32b652b4dc4bff3f04e0cf6f8c51cb075afe090dd79b3971a4 not found: ID does not exist" Mar 10 19:14:03 crc kubenswrapper[4861]: I0310 19:14:03.312956 4861 scope.go:117] "RemoveContainer" containerID="22089e443eba78df73bea89d1cfc14591cb3869b317c834f46d0e6f03d2fcf87" Mar 10 19:14:03 crc kubenswrapper[4861]: I0310 19:14:03.314107 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ca9800e2-ceed-4197-90f1-97d14c918e45-logs" (OuterVolumeSpecName: "logs") pod "ca9800e2-ceed-4197-90f1-97d14c918e45" (UID: "ca9800e2-ceed-4197-90f1-97d14c918e45"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 10 19:14:03 crc kubenswrapper[4861]: E0310 19:14:03.314197 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"22089e443eba78df73bea89d1cfc14591cb3869b317c834f46d0e6f03d2fcf87\": container with ID starting with 22089e443eba78df73bea89d1cfc14591cb3869b317c834f46d0e6f03d2fcf87 not found: ID does not exist" containerID="22089e443eba78df73bea89d1cfc14591cb3869b317c834f46d0e6f03d2fcf87" Mar 10 19:14:03 crc kubenswrapper[4861]: I0310 19:14:03.314220 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"22089e443eba78df73bea89d1cfc14591cb3869b317c834f46d0e6f03d2fcf87"} err="failed to get container status \"22089e443eba78df73bea89d1cfc14591cb3869b317c834f46d0e6f03d2fcf87\": rpc error: code = NotFound desc = could not find container \"22089e443eba78df73bea89d1cfc14591cb3869b317c834f46d0e6f03d2fcf87\": container with ID starting with 22089e443eba78df73bea89d1cfc14591cb3869b317c834f46d0e6f03d2fcf87 not found: ID does not exist" Mar 10 19:14:03 crc kubenswrapper[4861]: I0310 19:14:03.314235 4861 scope.go:117] "RemoveContainer" containerID="439de4fa63cd6df8ebf550bbedc971f32ef2892cd74ec239cbd5feceb0c8a97b" Mar 10 19:14:03 crc kubenswrapper[4861]: I0310 19:14:03.318887 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a31600f9-88a4-4ecb-8da3-84c966bf4a63-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "a31600f9-88a4-4ecb-8da3-84c966bf4a63" (UID: "a31600f9-88a4-4ecb-8da3-84c966bf4a63"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 10 19:14:03 crc kubenswrapper[4861]: I0310 19:14:03.324109 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31600f9-88a4-4ecb-8da3-84c966bf4a63-scripts" (OuterVolumeSpecName: "scripts") pod "a31600f9-88a4-4ecb-8da3-84c966bf4a63" (UID: "a31600f9-88a4-4ecb-8da3-84c966bf4a63"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 10 19:14:03 crc kubenswrapper[4861]: I0310 19:14:03.327074 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/79b1b96d-52e3-4a16-8fc1-d09188b5ebc1-kube-api-access-72f42" (OuterVolumeSpecName: "kube-api-access-72f42") pod "79b1b96d-52e3-4a16-8fc1-d09188b5ebc1" (UID: "79b1b96d-52e3-4a16-8fc1-d09188b5ebc1"). InnerVolumeSpecName "kube-api-access-72f42". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 10 19:14:03 crc kubenswrapper[4861]: I0310 19:14:03.343480 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Mar 10 19:14:03 crc kubenswrapper[4861]: I0310 19:14:03.349789 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/79b1b96d-52e3-4a16-8fc1-d09188b5ebc1-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "79b1b96d-52e3-4a16-8fc1-d09188b5ebc1" (UID: "79b1b96d-52e3-4a16-8fc1-d09188b5ebc1"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 10 19:14:03 crc kubenswrapper[4861]: I0310 19:14:03.350070 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31600f9-88a4-4ecb-8da3-84c966bf4a63-kube-api-access-hqlhj" (OuterVolumeSpecName: "kube-api-access-hqlhj") pod "a31600f9-88a4-4ecb-8da3-84c966bf4a63" (UID: "a31600f9-88a4-4ecb-8da3-84c966bf4a63"). InnerVolumeSpecName "kube-api-access-hqlhj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 10 19:14:03 crc kubenswrapper[4861]: I0310 19:14:03.350888 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ca9800e2-ceed-4197-90f1-97d14c918e45-kube-api-access-v4rgh" (OuterVolumeSpecName: "kube-api-access-v4rgh") pod "ca9800e2-ceed-4197-90f1-97d14c918e45" (UID: "ca9800e2-ceed-4197-90f1-97d14c918e45"). InnerVolumeSpecName "kube-api-access-v4rgh". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 10 19:14:03 crc kubenswrapper[4861]: I0310 19:14:03.359319 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"] Mar 10 19:14:03 crc kubenswrapper[4861]: I0310 19:14:03.372136 4861 scope.go:117] "RemoveContainer" containerID="ad8cd63983d0c3e17993f8a5dd489855bb0d46375866f9a2ddec97fd0f615caf" Mar 10 19:14:03 crc kubenswrapper[4861]: I0310 19:14:03.379291 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-64c5bb5d74-nmtmm"] Mar 10 19:14:03 crc kubenswrapper[4861]: I0310 19:14:03.398623 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v4rgh\" (UniqueName: \"kubernetes.io/projected/ca9800e2-ceed-4197-90f1-97d14c918e45-kube-api-access-v4rgh\") on node \"crc\" DevicePath \"\"" Mar 10 19:14:03 crc kubenswrapper[4861]: I0310 19:14:03.398654 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-72f42\" (UniqueName: \"kubernetes.io/projected/79b1b96d-52e3-4a16-8fc1-d09188b5ebc1-kube-api-access-72f42\") on node \"crc\" DevicePath \"\"" Mar 10 19:14:03 crc kubenswrapper[4861]: I0310 19:14:03.398664 4861 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a31600f9-88a4-4ecb-8da3-84c966bf4a63-scripts\") on node \"crc\" DevicePath \"\"" Mar 10 19:14:03 crc kubenswrapper[4861]: I0310 19:14:03.398674 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hqlhj\" (UniqueName: 
\"kubernetes.io/projected/a31600f9-88a4-4ecb-8da3-84c966bf4a63-kube-api-access-hqlhj\") on node \"crc\" DevicePath \"\"" Mar 10 19:14:03 crc kubenswrapper[4861]: I0310 19:14:03.398682 4861 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/79b1b96d-52e3-4a16-8fc1-d09188b5ebc1-logs\") on node \"crc\" DevicePath \"\"" Mar 10 19:14:03 crc kubenswrapper[4861]: I0310 19:14:03.398690 4861 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ca9800e2-ceed-4197-90f1-97d14c918e45-logs\") on node \"crc\" DevicePath \"\"" Mar 10 19:14:03 crc kubenswrapper[4861]: I0310 19:14:03.398698 4861 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/79b1b96d-52e3-4a16-8fc1-d09188b5ebc1-config-data-custom\") on node \"crc\" DevicePath \"\"" Mar 10 19:14:03 crc kubenswrapper[4861]: I0310 19:14:03.398721 4861 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a31600f9-88a4-4ecb-8da3-84c966bf4a63-run-httpd\") on node \"crc\" DevicePath \"\"" Mar 10 19:14:03 crc kubenswrapper[4861]: I0310 19:14:03.399385 4861 scope.go:117] "RemoveContainer" containerID="439de4fa63cd6df8ebf550bbedc971f32ef2892cd74ec239cbd5feceb0c8a97b" Mar 10 19:14:03 crc kubenswrapper[4861]: E0310 19:14:03.400252 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"439de4fa63cd6df8ebf550bbedc971f32ef2892cd74ec239cbd5feceb0c8a97b\": container with ID starting with 439de4fa63cd6df8ebf550bbedc971f32ef2892cd74ec239cbd5feceb0c8a97b not found: ID does not exist" containerID="439de4fa63cd6df8ebf550bbedc971f32ef2892cd74ec239cbd5feceb0c8a97b" Mar 10 19:14:03 crc kubenswrapper[4861]: I0310 19:14:03.400294 4861 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"439de4fa63cd6df8ebf550bbedc971f32ef2892cd74ec239cbd5feceb0c8a97b"} err="failed to get container status \"439de4fa63cd6df8ebf550bbedc971f32ef2892cd74ec239cbd5feceb0c8a97b\": rpc error: code = NotFound desc = could not find container \"439de4fa63cd6df8ebf550bbedc971f32ef2892cd74ec239cbd5feceb0c8a97b\": container with ID starting with 439de4fa63cd6df8ebf550bbedc971f32ef2892cd74ec239cbd5feceb0c8a97b not found: ID does not exist" Mar 10 19:14:03 crc kubenswrapper[4861]: I0310 19:14:03.400321 4861 scope.go:117] "RemoveContainer" containerID="ad8cd63983d0c3e17993f8a5dd489855bb0d46375866f9a2ddec97fd0f615caf" Mar 10 19:14:03 crc kubenswrapper[4861]: E0310 19:14:03.406867 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ad8cd63983d0c3e17993f8a5dd489855bb0d46375866f9a2ddec97fd0f615caf\": container with ID starting with ad8cd63983d0c3e17993f8a5dd489855bb0d46375866f9a2ddec97fd0f615caf not found: ID does not exist" containerID="ad8cd63983d0c3e17993f8a5dd489855bb0d46375866f9a2ddec97fd0f615caf" Mar 10 19:14:03 crc kubenswrapper[4861]: I0310 19:14:03.406930 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ad8cd63983d0c3e17993f8a5dd489855bb0d46375866f9a2ddec97fd0f615caf"} err="failed to get container status \"ad8cd63983d0c3e17993f8a5dd489855bb0d46375866f9a2ddec97fd0f615caf\": rpc error: code = NotFound desc = could not find container \"ad8cd63983d0c3e17993f8a5dd489855bb0d46375866f9a2ddec97fd0f615caf\": container with ID starting with ad8cd63983d0c3e17993f8a5dd489855bb0d46375866f9a2ddec97fd0f615caf not found: ID does not exist" Mar 10 19:14:03 crc kubenswrapper[4861]: I0310 19:14:03.406950 4861 scope.go:117] "RemoveContainer" containerID="74e14b5c097a70640577953cf9b0014e196f222e4e500727664862cc2d4d4729" Mar 10 19:14:03 crc kubenswrapper[4861]: I0310 19:14:03.411647 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openstack/placement-64c5bb5d74-nmtmm"] Mar 10 19:14:03 crc kubenswrapper[4861]: I0310 19:14:03.424038 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ca9800e2-ceed-4197-90f1-97d14c918e45-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ca9800e2-ceed-4197-90f1-97d14c918e45" (UID: "ca9800e2-ceed-4197-90f1-97d14c918e45"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 10 19:14:03 crc kubenswrapper[4861]: I0310 19:14:03.434186 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31600f9-88a4-4ecb-8da3-84c966bf4a63-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "a31600f9-88a4-4ecb-8da3-84c966bf4a63" (UID: "a31600f9-88a4-4ecb-8da3-84c966bf4a63"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 10 19:14:03 crc kubenswrapper[4861]: I0310 19:14:03.445727 4861 scope.go:117] "RemoveContainer" containerID="0d0a47976272498f87196b55501f579fb4bfa276b2027267e2ac56b2aa3530a8" Mar 10 19:14:03 crc kubenswrapper[4861]: I0310 19:14:03.447169 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ca9800e2-ceed-4197-90f1-97d14c918e45-config-data" (OuterVolumeSpecName: "config-data") pod "ca9800e2-ceed-4197-90f1-97d14c918e45" (UID: "ca9800e2-ceed-4197-90f1-97d14c918e45"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 10 19:14:03 crc kubenswrapper[4861]: I0310 19:14:03.472060 4861 scope.go:117] "RemoveContainer" containerID="74e14b5c097a70640577953cf9b0014e196f222e4e500727664862cc2d4d4729" Mar 10 19:14:03 crc kubenswrapper[4861]: E0310 19:14:03.472546 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"74e14b5c097a70640577953cf9b0014e196f222e4e500727664862cc2d4d4729\": container with ID starting with 74e14b5c097a70640577953cf9b0014e196f222e4e500727664862cc2d4d4729 not found: ID does not exist" containerID="74e14b5c097a70640577953cf9b0014e196f222e4e500727664862cc2d4d4729" Mar 10 19:14:03 crc kubenswrapper[4861]: I0310 19:14:03.472580 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"74e14b5c097a70640577953cf9b0014e196f222e4e500727664862cc2d4d4729"} err="failed to get container status \"74e14b5c097a70640577953cf9b0014e196f222e4e500727664862cc2d4d4729\": rpc error: code = NotFound desc = could not find container \"74e14b5c097a70640577953cf9b0014e196f222e4e500727664862cc2d4d4729\": container with ID starting with 74e14b5c097a70640577953cf9b0014e196f222e4e500727664862cc2d4d4729 not found: ID does not exist" Mar 10 19:14:03 crc kubenswrapper[4861]: I0310 19:14:03.472621 4861 scope.go:117] "RemoveContainer" containerID="0d0a47976272498f87196b55501f579fb4bfa276b2027267e2ac56b2aa3530a8" Mar 10 19:14:03 crc kubenswrapper[4861]: E0310 19:14:03.473020 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0d0a47976272498f87196b55501f579fb4bfa276b2027267e2ac56b2aa3530a8\": container with ID starting with 0d0a47976272498f87196b55501f579fb4bfa276b2027267e2ac56b2aa3530a8 not found: ID does not exist" containerID="0d0a47976272498f87196b55501f579fb4bfa276b2027267e2ac56b2aa3530a8" Mar 10 19:14:03 crc kubenswrapper[4861]: I0310 19:14:03.473060 
4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0d0a47976272498f87196b55501f579fb4bfa276b2027267e2ac56b2aa3530a8"} err="failed to get container status \"0d0a47976272498f87196b55501f579fb4bfa276b2027267e2ac56b2aa3530a8\": rpc error: code = NotFound desc = could not find container \"0d0a47976272498f87196b55501f579fb4bfa276b2027267e2ac56b2aa3530a8\": container with ID starting with 0d0a47976272498f87196b55501f579fb4bfa276b2027267e2ac56b2aa3530a8 not found: ID does not exist" Mar 10 19:14:03 crc kubenswrapper[4861]: I0310 19:14:03.473085 4861 scope.go:117] "RemoveContainer" containerID="0a745593b95faccead341e781c555d776ec808c970f5a9b91a652ebcebc1c8a2" Mar 10 19:14:03 crc kubenswrapper[4861]: I0310 19:14:03.485319 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Mar 10 19:14:03 crc kubenswrapper[4861]: I0310 19:14:03.487093 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/79b1b96d-52e3-4a16-8fc1-d09188b5ebc1-config-data" (OuterVolumeSpecName: "config-data") pod "79b1b96d-52e3-4a16-8fc1-d09188b5ebc1" (UID: "79b1b96d-52e3-4a16-8fc1-d09188b5ebc1"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 10 19:14:03 crc kubenswrapper[4861]: I0310 19:14:03.501098 4861 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/79b1b96d-52e3-4a16-8fc1-d09188b5ebc1-config-data\") on node \"crc\" DevicePath \"\"" Mar 10 19:14:03 crc kubenswrapper[4861]: I0310 19:14:03.501148 4861 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ca9800e2-ceed-4197-90f1-97d14c918e45-config-data\") on node \"crc\" DevicePath \"\"" Mar 10 19:14:03 crc kubenswrapper[4861]: I0310 19:14:03.501158 4861 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a31600f9-88a4-4ecb-8da3-84c966bf4a63-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Mar 10 19:14:03 crc kubenswrapper[4861]: I0310 19:14:03.501170 4861 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ca9800e2-ceed-4197-90f1-97d14c918e45-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 10 19:14:03 crc kubenswrapper[4861]: I0310 19:14:03.512580 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_8c1ba054-6941-4e52-b792-250287f25d92/ovn-northd/0.log" Mar 10 19:14:03 crc kubenswrapper[4861]: I0310 19:14:03.512645 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Mar 10 19:14:03 crc kubenswrapper[4861]: I0310 19:14:03.528543 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31600f9-88a4-4ecb-8da3-84c966bf4a63-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "a31600f9-88a4-4ecb-8da3-84c966bf4a63" (UID: "a31600f9-88a4-4ecb-8da3-84c966bf4a63"). InnerVolumeSpecName "ceilometer-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 10 19:14:03 crc kubenswrapper[4861]: I0310 19:14:03.539716 4861 scope.go:117] "RemoveContainer" containerID="90b5b8cd5ef23b88335ebf01c5c45b9491d2c08feb5b8b48a1120b2d60cb9a44" Mar 10 19:14:03 crc kubenswrapper[4861]: I0310 19:14:03.550653 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/79b1b96d-52e3-4a16-8fc1-d09188b5ebc1-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "79b1b96d-52e3-4a16-8fc1-d09188b5ebc1" (UID: "79b1b96d-52e3-4a16-8fc1-d09188b5ebc1"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 10 19:14:03 crc kubenswrapper[4861]: I0310 19:14:03.551454 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31600f9-88a4-4ecb-8da3-84c966bf4a63-config-data" (OuterVolumeSpecName: "config-data") pod "a31600f9-88a4-4ecb-8da3-84c966bf4a63" (UID: "a31600f9-88a4-4ecb-8da3-84c966bf4a63"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 10 19:14:03 crc kubenswrapper[4861]: I0310 19:14:03.558422 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/79b1b96d-52e3-4a16-8fc1-d09188b5ebc1-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "79b1b96d-52e3-4a16-8fc1-d09188b5ebc1" (UID: "79b1b96d-52e3-4a16-8fc1-d09188b5ebc1"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 10 19:14:03 crc kubenswrapper[4861]: I0310 19:14:03.568393 4861 scope.go:117] "RemoveContainer" containerID="0a745593b95faccead341e781c555d776ec808c970f5a9b91a652ebcebc1c8a2" Mar 10 19:14:03 crc kubenswrapper[4861]: I0310 19:14:03.568465 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ca9800e2-ceed-4197-90f1-97d14c918e45-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "ca9800e2-ceed-4197-90f1-97d14c918e45" (UID: "ca9800e2-ceed-4197-90f1-97d14c918e45"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 10 19:14:03 crc kubenswrapper[4861]: E0310 19:14:03.568850 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0a745593b95faccead341e781c555d776ec808c970f5a9b91a652ebcebc1c8a2\": container with ID starting with 0a745593b95faccead341e781c555d776ec808c970f5a9b91a652ebcebc1c8a2 not found: ID does not exist" containerID="0a745593b95faccead341e781c555d776ec808c970f5a9b91a652ebcebc1c8a2" Mar 10 19:14:03 crc kubenswrapper[4861]: I0310 19:14:03.568898 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0a745593b95faccead341e781c555d776ec808c970f5a9b91a652ebcebc1c8a2"} err="failed to get container status \"0a745593b95faccead341e781c555d776ec808c970f5a9b91a652ebcebc1c8a2\": rpc error: code = NotFound desc = could not find container \"0a745593b95faccead341e781c555d776ec808c970f5a9b91a652ebcebc1c8a2\": container with ID starting with 0a745593b95faccead341e781c555d776ec808c970f5a9b91a652ebcebc1c8a2 not found: ID does not exist" Mar 10 19:14:03 crc kubenswrapper[4861]: I0310 19:14:03.568920 4861 scope.go:117] "RemoveContainer" containerID="90b5b8cd5ef23b88335ebf01c5c45b9491d2c08feb5b8b48a1120b2d60cb9a44" Mar 10 19:14:03 crc kubenswrapper[4861]: E0310 19:14:03.569160 4861 
log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"90b5b8cd5ef23b88335ebf01c5c45b9491d2c08feb5b8b48a1120b2d60cb9a44\": container with ID starting with 90b5b8cd5ef23b88335ebf01c5c45b9491d2c08feb5b8b48a1120b2d60cb9a44 not found: ID does not exist" containerID="90b5b8cd5ef23b88335ebf01c5c45b9491d2c08feb5b8b48a1120b2d60cb9a44" Mar 10 19:14:03 crc kubenswrapper[4861]: I0310 19:14:03.569200 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"90b5b8cd5ef23b88335ebf01c5c45b9491d2c08feb5b8b48a1120b2d60cb9a44"} err="failed to get container status \"90b5b8cd5ef23b88335ebf01c5c45b9491d2c08feb5b8b48a1120b2d60cb9a44\": rpc error: code = NotFound desc = could not find container \"90b5b8cd5ef23b88335ebf01c5c45b9491d2c08feb5b8b48a1120b2d60cb9a44\": container with ID starting with 90b5b8cd5ef23b88335ebf01c5c45b9491d2c08feb5b8b48a1120b2d60cb9a44 not found: ID does not exist" Mar 10 19:14:03 crc kubenswrapper[4861]: I0310 19:14:03.574605 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31600f9-88a4-4ecb-8da3-84c966bf4a63-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a31600f9-88a4-4ecb-8da3-84c966bf4a63" (UID: "a31600f9-88a4-4ecb-8da3-84c966bf4a63"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 10 19:14:03 crc kubenswrapper[4861]: I0310 19:14:03.591878 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/79b1b96d-52e3-4a16-8fc1-d09188b5ebc1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "79b1b96d-52e3-4a16-8fc1-d09188b5ebc1" (UID: "79b1b96d-52e3-4a16-8fc1-d09188b5ebc1"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 10 19:14:03 crc kubenswrapper[4861]: I0310 19:14:03.601872 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/8c1ba054-6941-4e52-b792-250287f25d92-metrics-certs-tls-certs\") pod \"8c1ba054-6941-4e52-b792-250287f25d92\" (UID: \"8c1ba054-6941-4e52-b792-250287f25d92\") " Mar 10 19:14:03 crc kubenswrapper[4861]: I0310 19:14:03.601919 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/8c1ba054-6941-4e52-b792-250287f25d92-ovn-northd-tls-certs\") pod \"8c1ba054-6941-4e52-b792-250287f25d92\" (UID: \"8c1ba054-6941-4e52-b792-250287f25d92\") " Mar 10 19:14:03 crc kubenswrapper[4861]: I0310 19:14:03.602001 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8c1ba054-6941-4e52-b792-250287f25d92-combined-ca-bundle\") pod \"8c1ba054-6941-4e52-b792-250287f25d92\" (UID: \"8c1ba054-6941-4e52-b792-250287f25d92\") " Mar 10 19:14:03 crc kubenswrapper[4861]: I0310 19:14:03.602030 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/41fdd1f2-c0e5-4dbe-aa18-bd6dd1a4754d-config-data\") pod \"41fdd1f2-c0e5-4dbe-aa18-bd6dd1a4754d\" (UID: \"41fdd1f2-c0e5-4dbe-aa18-bd6dd1a4754d\") " Mar 10 19:14:03 crc kubenswrapper[4861]: I0310 19:14:03.602071 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/41fdd1f2-c0e5-4dbe-aa18-bd6dd1a4754d-memcached-tls-certs\") pod \"41fdd1f2-c0e5-4dbe-aa18-bd6dd1a4754d\" (UID: \"41fdd1f2-c0e5-4dbe-aa18-bd6dd1a4754d\") " Mar 10 19:14:03 crc kubenswrapper[4861]: I0310 19:14:03.602098 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/8c1ba054-6941-4e52-b792-250287f25d92-ovn-rundir\") pod \"8c1ba054-6941-4e52-b792-250287f25d92\" (UID: \"8c1ba054-6941-4e52-b792-250287f25d92\") " Mar 10 19:14:03 crc kubenswrapper[4861]: I0310 19:14:03.602152 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8c1ba054-6941-4e52-b792-250287f25d92-scripts\") pod \"8c1ba054-6941-4e52-b792-250287f25d92\" (UID: \"8c1ba054-6941-4e52-b792-250287f25d92\") " Mar 10 19:14:03 crc kubenswrapper[4861]: I0310 19:14:03.602198 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/41fdd1f2-c0e5-4dbe-aa18-bd6dd1a4754d-kolla-config\") pod \"41fdd1f2-c0e5-4dbe-aa18-bd6dd1a4754d\" (UID: \"41fdd1f2-c0e5-4dbe-aa18-bd6dd1a4754d\") " Mar 10 19:14:03 crc kubenswrapper[4861]: I0310 19:14:03.602237 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8c1ba054-6941-4e52-b792-250287f25d92-config\") pod \"8c1ba054-6941-4e52-b792-250287f25d92\" (UID: \"8c1ba054-6941-4e52-b792-250287f25d92\") " Mar 10 19:14:03 crc kubenswrapper[4861]: I0310 19:14:03.602269 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fjqgx\" (UniqueName: \"kubernetes.io/projected/41fdd1f2-c0e5-4dbe-aa18-bd6dd1a4754d-kube-api-access-fjqgx\") pod \"41fdd1f2-c0e5-4dbe-aa18-bd6dd1a4754d\" (UID: \"41fdd1f2-c0e5-4dbe-aa18-bd6dd1a4754d\") " Mar 10 19:14:03 crc kubenswrapper[4861]: I0310 19:14:03.602303 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41fdd1f2-c0e5-4dbe-aa18-bd6dd1a4754d-combined-ca-bundle\") pod \"41fdd1f2-c0e5-4dbe-aa18-bd6dd1a4754d\" (UID: \"41fdd1f2-c0e5-4dbe-aa18-bd6dd1a4754d\") " Mar 10 19:14:03 crc kubenswrapper[4861]: 
I0310 19:14:03.602326 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-75gwj\" (UniqueName: \"kubernetes.io/projected/8c1ba054-6941-4e52-b792-250287f25d92-kube-api-access-75gwj\") pod \"8c1ba054-6941-4e52-b792-250287f25d92\" (UID: \"8c1ba054-6941-4e52-b792-250287f25d92\") " Mar 10 19:14:03 crc kubenswrapper[4861]: I0310 19:14:03.602659 4861 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a31600f9-88a4-4ecb-8da3-84c966bf4a63-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 10 19:14:03 crc kubenswrapper[4861]: I0310 19:14:03.602677 4861 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/79b1b96d-52e3-4a16-8fc1-d09188b5ebc1-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 10 19:14:03 crc kubenswrapper[4861]: I0310 19:14:03.602686 4861 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/a31600f9-88a4-4ecb-8da3-84c966bf4a63-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 10 19:14:03 crc kubenswrapper[4861]: I0310 19:14:03.604281 4861 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/ca9800e2-ceed-4197-90f1-97d14c918e45-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 10 19:14:03 crc kubenswrapper[4861]: I0310 19:14:03.604293 4861 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/79b1b96d-52e3-4a16-8fc1-d09188b5ebc1-public-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 10 19:14:03 crc kubenswrapper[4861]: I0310 19:14:03.604302 4861 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/79b1b96d-52e3-4a16-8fc1-d09188b5ebc1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 10 19:14:03 crc 
kubenswrapper[4861]: I0310 19:14:03.604312 4861 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a31600f9-88a4-4ecb-8da3-84c966bf4a63-config-data\") on node \"crc\" DevicePath \"\"" Mar 10 19:14:03 crc kubenswrapper[4861]: I0310 19:14:03.603841 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/41fdd1f2-c0e5-4dbe-aa18-bd6dd1a4754d-config-data" (OuterVolumeSpecName: "config-data") pod "41fdd1f2-c0e5-4dbe-aa18-bd6dd1a4754d" (UID: "41fdd1f2-c0e5-4dbe-aa18-bd6dd1a4754d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 10 19:14:03 crc kubenswrapper[4861]: I0310 19:14:03.603910 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/41fdd1f2-c0e5-4dbe-aa18-bd6dd1a4754d-kolla-config" (OuterVolumeSpecName: "kolla-config") pod "41fdd1f2-c0e5-4dbe-aa18-bd6dd1a4754d" (UID: "41fdd1f2-c0e5-4dbe-aa18-bd6dd1a4754d"). InnerVolumeSpecName "kolla-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 10 19:14:03 crc kubenswrapper[4861]: I0310 19:14:03.603978 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8c1ba054-6941-4e52-b792-250287f25d92-config" (OuterVolumeSpecName: "config") pod "8c1ba054-6941-4e52-b792-250287f25d92" (UID: "8c1ba054-6941-4e52-b792-250287f25d92"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 10 19:14:03 crc kubenswrapper[4861]: I0310 19:14:03.604231 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8c1ba054-6941-4e52-b792-250287f25d92-ovn-rundir" (OuterVolumeSpecName: "ovn-rundir") pod "8c1ba054-6941-4e52-b792-250287f25d92" (UID: "8c1ba054-6941-4e52-b792-250287f25d92"). InnerVolumeSpecName "ovn-rundir". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 10 19:14:03 crc kubenswrapper[4861]: I0310 19:14:03.606686 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8c1ba054-6941-4e52-b792-250287f25d92-scripts" (OuterVolumeSpecName: "scripts") pod "8c1ba054-6941-4e52-b792-250287f25d92" (UID: "8c1ba054-6941-4e52-b792-250287f25d92"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 10 19:14:03 crc kubenswrapper[4861]: I0310 19:14:03.622005 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8c1ba054-6941-4e52-b792-250287f25d92-kube-api-access-75gwj" (OuterVolumeSpecName: "kube-api-access-75gwj") pod "8c1ba054-6941-4e52-b792-250287f25d92" (UID: "8c1ba054-6941-4e52-b792-250287f25d92"). InnerVolumeSpecName "kube-api-access-75gwj". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 10 19:14:03 crc kubenswrapper[4861]: I0310 19:14:03.634883 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/41fdd1f2-c0e5-4dbe-aa18-bd6dd1a4754d-kube-api-access-fjqgx" (OuterVolumeSpecName: "kube-api-access-fjqgx") pod "41fdd1f2-c0e5-4dbe-aa18-bd6dd1a4754d" (UID: "41fdd1f2-c0e5-4dbe-aa18-bd6dd1a4754d"). InnerVolumeSpecName "kube-api-access-fjqgx". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 10 19:14:03 crc kubenswrapper[4861]: I0310 19:14:03.670459 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8c1ba054-6941-4e52-b792-250287f25d92-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8c1ba054-6941-4e52-b792-250287f25d92" (UID: "8c1ba054-6941-4e52-b792-250287f25d92"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 10 19:14:03 crc kubenswrapper[4861]: I0310 19:14:03.674353 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/41fdd1f2-c0e5-4dbe-aa18-bd6dd1a4754d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "41fdd1f2-c0e5-4dbe-aa18-bd6dd1a4754d" (UID: "41fdd1f2-c0e5-4dbe-aa18-bd6dd1a4754d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 10 19:14:03 crc kubenswrapper[4861]: I0310 19:14:03.694799 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/41fdd1f2-c0e5-4dbe-aa18-bd6dd1a4754d-memcached-tls-certs" (OuterVolumeSpecName: "memcached-tls-certs") pod "41fdd1f2-c0e5-4dbe-aa18-bd6dd1a4754d" (UID: "41fdd1f2-c0e5-4dbe-aa18-bd6dd1a4754d"). InnerVolumeSpecName "memcached-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 10 19:14:03 crc kubenswrapper[4861]: I0310 19:14:03.706923 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mppg7\" (UniqueName: \"kubernetes.io/projected/c6c9fd69-da59-4e5d-9630-32a42c1dc30e-kube-api-access-mppg7\") pod \"keystone-6638-account-create-update-fqljq\" (UID: \"c6c9fd69-da59-4e5d-9630-32a42c1dc30e\") " pod="openstack/keystone-6638-account-create-update-fqljq" Mar 10 19:14:03 crc kubenswrapper[4861]: I0310 19:14:03.706978 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c6c9fd69-da59-4e5d-9630-32a42c1dc30e-operator-scripts\") pod \"keystone-6638-account-create-update-fqljq\" (UID: \"c6c9fd69-da59-4e5d-9630-32a42c1dc30e\") " pod="openstack/keystone-6638-account-create-update-fqljq" Mar 10 19:14:03 crc kubenswrapper[4861]: I0310 19:14:03.707102 4861 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/8c1ba054-6941-4e52-b792-250287f25d92-config\") on node \"crc\" DevicePath \"\"" Mar 10 19:14:03 crc kubenswrapper[4861]: I0310 19:14:03.707117 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fjqgx\" (UniqueName: \"kubernetes.io/projected/41fdd1f2-c0e5-4dbe-aa18-bd6dd1a4754d-kube-api-access-fjqgx\") on node \"crc\" DevicePath \"\"" Mar 10 19:14:03 crc kubenswrapper[4861]: I0310 19:14:03.707127 4861 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41fdd1f2-c0e5-4dbe-aa18-bd6dd1a4754d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 10 19:14:03 crc kubenswrapper[4861]: I0310 19:14:03.707137 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-75gwj\" (UniqueName: \"kubernetes.io/projected/8c1ba054-6941-4e52-b792-250287f25d92-kube-api-access-75gwj\") on node \"crc\" DevicePath \"\"" Mar 10 19:14:03 crc kubenswrapper[4861]: I0310 19:14:03.707146 4861 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8c1ba054-6941-4e52-b792-250287f25d92-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 10 19:14:03 crc kubenswrapper[4861]: I0310 19:14:03.707155 4861 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/41fdd1f2-c0e5-4dbe-aa18-bd6dd1a4754d-config-data\") on node \"crc\" DevicePath \"\"" Mar 10 19:14:03 crc kubenswrapper[4861]: I0310 19:14:03.707165 4861 reconciler_common.go:293] "Volume detached for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/41fdd1f2-c0e5-4dbe-aa18-bd6dd1a4754d-memcached-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 10 19:14:03 crc kubenswrapper[4861]: I0310 19:14:03.707173 4861 reconciler_common.go:293] "Volume detached for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/8c1ba054-6941-4e52-b792-250287f25d92-ovn-rundir\") on node \"crc\" 
DevicePath \"\"" Mar 10 19:14:03 crc kubenswrapper[4861]: I0310 19:14:03.707181 4861 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8c1ba054-6941-4e52-b792-250287f25d92-scripts\") on node \"crc\" DevicePath \"\"" Mar 10 19:14:03 crc kubenswrapper[4861]: I0310 19:14:03.707189 4861 reconciler_common.go:293] "Volume detached for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/41fdd1f2-c0e5-4dbe-aa18-bd6dd1a4754d-kolla-config\") on node \"crc\" DevicePath \"\"" Mar 10 19:14:03 crc kubenswrapper[4861]: E0310 19:14:03.707264 4861 configmap.go:193] Couldn't get configMap openstack/openstack-scripts: configmap "openstack-scripts" not found Mar 10 19:14:03 crc kubenswrapper[4861]: E0310 19:14:03.707307 4861 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c6c9fd69-da59-4e5d-9630-32a42c1dc30e-operator-scripts podName:c6c9fd69-da59-4e5d-9630-32a42c1dc30e nodeName:}" failed. No retries permitted until 2026-03-10 19:14:05.707294087 +0000 UTC m=+1589.470730047 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/c6c9fd69-da59-4e5d-9630-32a42c1dc30e-operator-scripts") pod "keystone-6638-account-create-update-fqljq" (UID: "c6c9fd69-da59-4e5d-9630-32a42c1dc30e") : configmap "openstack-scripts" not found Mar 10 19:14:03 crc kubenswrapper[4861]: I0310 19:14:03.709640 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8c1ba054-6941-4e52-b792-250287f25d92-ovn-northd-tls-certs" (OuterVolumeSpecName: "ovn-northd-tls-certs") pod "8c1ba054-6941-4e52-b792-250287f25d92" (UID: "8c1ba054-6941-4e52-b792-250287f25d92"). InnerVolumeSpecName "ovn-northd-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 10 19:14:03 crc kubenswrapper[4861]: E0310 19:14:03.709696 4861 projected.go:194] Error preparing data for projected volume kube-api-access-mppg7 for pod openstack/keystone-6638-account-create-update-fqljq: failed to fetch token: serviceaccounts "galera-openstack" not found Mar 10 19:14:03 crc kubenswrapper[4861]: E0310 19:14:03.709755 4861 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c6c9fd69-da59-4e5d-9630-32a42c1dc30e-kube-api-access-mppg7 podName:c6c9fd69-da59-4e5d-9630-32a42c1dc30e nodeName:}" failed. No retries permitted until 2026-03-10 19:14:05.709743294 +0000 UTC m=+1589.473179254 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-mppg7" (UniqueName: "kubernetes.io/projected/c6c9fd69-da59-4e5d-9630-32a42c1dc30e-kube-api-access-mppg7") pod "keystone-6638-account-create-update-fqljq" (UID: "c6c9fd69-da59-4e5d-9630-32a42c1dc30e") : failed to fetch token: serviceaccounts "galera-openstack" not found Mar 10 19:14:03 crc kubenswrapper[4861]: I0310 19:14:03.712820 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8c1ba054-6941-4e52-b792-250287f25d92-metrics-certs-tls-certs" (OuterVolumeSpecName: "metrics-certs-tls-certs") pod "8c1ba054-6941-4e52-b792-250287f25d92" (UID: "8c1ba054-6941-4e52-b792-250287f25d92"). InnerVolumeSpecName "metrics-certs-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 10 19:14:03 crc kubenswrapper[4861]: I0310 19:14:03.808172 4861 reconciler_common.go:293] "Volume detached for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/8c1ba054-6941-4e52-b792-250287f25d92-metrics-certs-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 10 19:14:03 crc kubenswrapper[4861]: I0310 19:14:03.808198 4861 reconciler_common.go:293] "Volume detached for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/8c1ba054-6941-4e52-b792-250287f25d92-ovn-northd-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 10 19:14:03 crc kubenswrapper[4861]: I0310 19:14:03.837016 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"930df8f4-7ebf-4425-976f-4f52654586bb","Type":"ContainerDied","Data":"31f981a27b194e4e5e581abce58d2e5a9ac328e448f106ef98ce5902d35a30b9"} Mar 10 19:14:03 crc kubenswrapper[4861]: I0310 19:14:03.837081 4861 scope.go:117] "RemoveContainer" containerID="6b7111851cf4f58c8e796e6b95e77ca3422cc025ae5304753172c6a1675a0fd1" Mar 10 19:14:03 crc kubenswrapper[4861]: I0310 19:14:03.837233 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Mar 10 19:14:03 crc kubenswrapper[4861]: I0310 19:14:03.847831 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a31600f9-88a4-4ecb-8da3-84c966bf4a63","Type":"ContainerDied","Data":"61ed28983ea4172cf73482e670880b767c839c4554b7f2f840bf3c904372879c"} Mar 10 19:14:03 crc kubenswrapper[4861]: I0310 19:14:03.847898 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Mar 10 19:14:03 crc kubenswrapper[4861]: I0310 19:14:03.850756 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-495qf" event={"ID":"cc36f34f-6f91-4076-bc3e-71e2fc0e797e","Type":"ContainerStarted","Data":"1458fb57f7890646222625589f0d8e6369b4545bc2a878e4426093d6a8d13900"} Mar 10 19:14:03 crc kubenswrapper[4861]: I0310 19:14:03.859764 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_8c1ba054-6941-4e52-b792-250287f25d92/ovn-northd/0.log" Mar 10 19:14:03 crc kubenswrapper[4861]: I0310 19:14:03.859829 4861 generic.go:334] "Generic (PLEG): container finished" podID="8c1ba054-6941-4e52-b792-250287f25d92" containerID="454e9ba506fcf1b8f98215867245437014efc66d7c6d48c37e44b356c6a592d8" exitCode=139 Mar 10 19:14:03 crc kubenswrapper[4861]: I0310 19:14:03.859938 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"8c1ba054-6941-4e52-b792-250287f25d92","Type":"ContainerDied","Data":"454e9ba506fcf1b8f98215867245437014efc66d7c6d48c37e44b356c6a592d8"} Mar 10 19:14:03 crc kubenswrapper[4861]: I0310 19:14:03.859990 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"8c1ba054-6941-4e52-b792-250287f25d92","Type":"ContainerDied","Data":"2cc47d55fa9b32dc5f059325991ac722d74e10227789869c3cdcd5c79c8737e8"} Mar 10 19:14:03 crc kubenswrapper[4861]: I0310 19:14:03.859953 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-northd-0" Mar 10 19:14:03 crc kubenswrapper[4861]: I0310 19:14:03.926499 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"41fdd1f2-c0e5-4dbe-aa18-bd6dd1a4754d","Type":"ContainerDied","Data":"2ee5d55f504e4dc95112ec49dcf7ce204b8fe01a76fea22f469ef08561767e35"} Mar 10 19:14:03 crc kubenswrapper[4861]: I0310 19:14:03.926616 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Mar 10 19:14:03 crc kubenswrapper[4861]: I0310 19:14:03.934683 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Mar 10 19:14:03 crc kubenswrapper[4861]: I0310 19:14:03.940600 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Mar 10 19:14:03 crc kubenswrapper[4861]: I0310 19:14:03.963776 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Mar 10 19:14:03 crc kubenswrapper[4861]: I0310 19:14:03.971038 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Mar 10 19:14:03 crc kubenswrapper[4861]: I0310 19:14:03.975558 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-northd-0"] Mar 10 19:14:03 crc kubenswrapper[4861]: I0310 19:14:03.976027 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"ca9800e2-ceed-4197-90f1-97d14c918e45","Type":"ContainerDied","Data":"b40bb1f19fd687f7b78f82784aff6102abc02b389ec7d9052a7fd91d08f29c67"} Mar 10 19:14:03 crc kubenswrapper[4861]: I0310 19:14:03.976122 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Mar 10 19:14:03 crc kubenswrapper[4861]: I0310 19:14:03.980305 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-northd-0"] Mar 10 19:14:03 crc kubenswrapper[4861]: I0310 19:14:03.982716 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5d84cf8948-mg4jb" event={"ID":"79b1b96d-52e3-4a16-8fc1-d09188b5ebc1","Type":"ContainerDied","Data":"61337d83a0befc3df80253c2d54f15f756dbf9bc38f9e6b7beddf7d2c9e3e4d6"} Mar 10 19:14:03 crc kubenswrapper[4861]: I0310 19:14:03.982819 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-5d84cf8948-mg4jb" Mar 10 19:14:03 crc kubenswrapper[4861]: I0310 19:14:03.984541 4861 scope.go:117] "RemoveContainer" containerID="6eb92aeb41c6749ec876f80e99358ba49b19a270c11ada403487be3ca0ef76c4" Mar 10 19:14:03 crc kubenswrapper[4861]: I0310 19:14:03.986901 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29552834-dlwsb" event={"ID":"91e5ef26-1440-4ffb-84d9-2bc5e0bed45c","Type":"ContainerStarted","Data":"451f09d0b8ba6e81751ef4eea6bf40678dba1a95fba42d8522240189199186fe"} Mar 10 19:14:03 crc kubenswrapper[4861]: I0310 19:14:03.988412 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-6638-account-create-update-fqljq" Mar 10 19:14:04 crc kubenswrapper[4861]: I0310 19:14:04.023249 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/memcached-0"] Mar 10 19:14:04 crc kubenswrapper[4861]: I0310 19:14:04.053939 4861 scope.go:117] "RemoveContainer" containerID="ae3d77023ba112fc99fe18d4758acbc6d9046be7d58409111c17e57dab6ae470" Mar 10 19:14:04 crc kubenswrapper[4861]: I0310 19:14:04.055478 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/memcached-0"] Mar 10 19:14:04 crc kubenswrapper[4861]: I0310 19:14:04.069635 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29552834-dlwsb" podStartSLOduration=2.472412427 podStartE2EDuration="4.069611947s" podCreationTimestamp="2026-03-10 19:14:00 +0000 UTC" firstStartedPulling="2026-03-10 19:14:01.694857094 +0000 UTC m=+1585.458293054" lastFinishedPulling="2026-03-10 19:14:03.292056614 +0000 UTC m=+1587.055492574" observedRunningTime="2026-03-10 19:14:04.023139344 +0000 UTC m=+1587.786575304" watchObservedRunningTime="2026-03-10 19:14:04.069611947 +0000 UTC m=+1587.833047907" Mar 10 19:14:04 crc kubenswrapper[4861]: I0310 19:14:04.084962 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Mar 10 19:14:04 crc kubenswrapper[4861]: I0310 19:14:04.095878 4861 scope.go:117] "RemoveContainer" containerID="4fdb51d57efd04be1d17c090ac4d170f60279b658b8137c8e7edb527c392d4ce" Mar 10 19:14:04 crc kubenswrapper[4861]: I0310 19:14:04.098400 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Mar 10 19:14:04 crc kubenswrapper[4861]: I0310 19:14:04.105404 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-5d84cf8948-mg4jb"] Mar 10 19:14:04 crc kubenswrapper[4861]: I0310 19:14:04.110633 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openstack/barbican-api-5d84cf8948-mg4jb"] Mar 10 19:14:04 crc kubenswrapper[4861]: E0310 19:14:04.112392 4861 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod41fdd1f2_c0e5_4dbe_aa18_bd6dd1a4754d.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc6c9fd69_da59_4e5d_9630_32a42c1dc30e.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda31600f9_88a4_4ecb_8da3_84c966bf4a63.slice/crio-61ed28983ea4172cf73482e670880b767c839c4554b7f2f840bf3c904372879c\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podca9800e2_ceed_4197_90f1_97d14c918e45.slice\": RecentStats: unable to find data in memory cache]" Mar 10 19:14:04 crc kubenswrapper[4861]: I0310 19:14:04.128651 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-6638-account-create-update-fqljq"] Mar 10 19:14:04 crc kubenswrapper[4861]: I0310 19:14:04.155969 4861 scope.go:117] "RemoveContainer" containerID="20d0f039122c6a0d6f5c8792e4c4a4a041ef8e480ba4dd21d575218242541879" Mar 10 19:14:04 crc kubenswrapper[4861]: I0310 19:14:04.156227 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-6638-account-create-update-fqljq"] Mar 10 19:14:04 crc kubenswrapper[4861]: I0310 19:14:04.173976 4861 scope.go:117] "RemoveContainer" containerID="6c09f38909e1c2592c7435cdb6107fe20a357f1ed362d43e8283dd3143d20885" Mar 10 19:14:04 crc kubenswrapper[4861]: I0310 19:14:04.224412 4861 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c6c9fd69-da59-4e5d-9630-32a42c1dc30e-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 10 19:14:04 crc kubenswrapper[4861]: I0310 19:14:04.224440 
4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mppg7\" (UniqueName: \"kubernetes.io/projected/c6c9fd69-da59-4e5d-9630-32a42c1dc30e-kube-api-access-mppg7\") on node \"crc\" DevicePath \"\"" Mar 10 19:14:04 crc kubenswrapper[4861]: I0310 19:14:04.228840 4861 scope.go:117] "RemoveContainer" containerID="68b9f900ee27424079091d09f2809c5e2d49434a357e98e6fdeb385cd8222d86" Mar 10 19:14:04 crc kubenswrapper[4861]: I0310 19:14:04.251563 4861 scope.go:117] "RemoveContainer" containerID="454e9ba506fcf1b8f98215867245437014efc66d7c6d48c37e44b356c6a592d8" Mar 10 19:14:04 crc kubenswrapper[4861]: I0310 19:14:04.269612 4861 scope.go:117] "RemoveContainer" containerID="68b9f900ee27424079091d09f2809c5e2d49434a357e98e6fdeb385cd8222d86" Mar 10 19:14:04 crc kubenswrapper[4861]: E0310 19:14:04.270073 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"68b9f900ee27424079091d09f2809c5e2d49434a357e98e6fdeb385cd8222d86\": container with ID starting with 68b9f900ee27424079091d09f2809c5e2d49434a357e98e6fdeb385cd8222d86 not found: ID does not exist" containerID="68b9f900ee27424079091d09f2809c5e2d49434a357e98e6fdeb385cd8222d86" Mar 10 19:14:04 crc kubenswrapper[4861]: I0310 19:14:04.270102 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"68b9f900ee27424079091d09f2809c5e2d49434a357e98e6fdeb385cd8222d86"} err="failed to get container status \"68b9f900ee27424079091d09f2809c5e2d49434a357e98e6fdeb385cd8222d86\": rpc error: code = NotFound desc = could not find container \"68b9f900ee27424079091d09f2809c5e2d49434a357e98e6fdeb385cd8222d86\": container with ID starting with 68b9f900ee27424079091d09f2809c5e2d49434a357e98e6fdeb385cd8222d86 not found: ID does not exist" Mar 10 19:14:04 crc kubenswrapper[4861]: I0310 19:14:04.270122 4861 scope.go:117] "RemoveContainer" containerID="454e9ba506fcf1b8f98215867245437014efc66d7c6d48c37e44b356c6a592d8" Mar 10 
19:14:04 crc kubenswrapper[4861]: E0310 19:14:04.270449 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"454e9ba506fcf1b8f98215867245437014efc66d7c6d48c37e44b356c6a592d8\": container with ID starting with 454e9ba506fcf1b8f98215867245437014efc66d7c6d48c37e44b356c6a592d8 not found: ID does not exist" containerID="454e9ba506fcf1b8f98215867245437014efc66d7c6d48c37e44b356c6a592d8" Mar 10 19:14:04 crc kubenswrapper[4861]: I0310 19:14:04.270468 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"454e9ba506fcf1b8f98215867245437014efc66d7c6d48c37e44b356c6a592d8"} err="failed to get container status \"454e9ba506fcf1b8f98215867245437014efc66d7c6d48c37e44b356c6a592d8\": rpc error: code = NotFound desc = could not find container \"454e9ba506fcf1b8f98215867245437014efc66d7c6d48c37e44b356c6a592d8\": container with ID starting with 454e9ba506fcf1b8f98215867245437014efc66d7c6d48c37e44b356c6a592d8 not found: ID does not exist" Mar 10 19:14:04 crc kubenswrapper[4861]: I0310 19:14:04.270482 4861 scope.go:117] "RemoveContainer" containerID="e208585919e167041ae926b5358edd36d959771ed35af5c54fe2d53fe8efa41a" Mar 10 19:14:04 crc kubenswrapper[4861]: I0310 19:14:04.316448 4861 scope.go:117] "RemoveContainer" containerID="2979a50f5f190d0793c2fc18e07aa06ac371e089c3304abe6dc08d185e172174" Mar 10 19:14:04 crc kubenswrapper[4861]: I0310 19:14:04.343135 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-495qf" Mar 10 19:14:04 crc kubenswrapper[4861]: I0310 19:14:04.353649 4861 scope.go:117] "RemoveContainer" containerID="6d5f161e1ebe942db8f22888f7d4ee605ae096c419576acb0dffd5c4a5831534" Mar 10 19:14:04 crc kubenswrapper[4861]: I0310 19:14:04.376611 4861 scope.go:117] "RemoveContainer" containerID="93c0c7d20bdf5ba3484b74d8194ad89a2b4dad5778c0dc5d0876a0f04995cc86" Mar 10 19:14:04 crc kubenswrapper[4861]: I0310 19:14:04.398418 4861 scope.go:117] "RemoveContainer" containerID="4b1f746cd725920aeee99b23e4c08bf7d8c523580d3e57266ad014c8ed8e2ed0" Mar 10 19:14:04 crc kubenswrapper[4861]: I0310 19:14:04.432312 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cc36f34f-6f91-4076-bc3e-71e2fc0e797e-operator-scripts\") pod \"cc36f34f-6f91-4076-bc3e-71e2fc0e797e\" (UID: \"cc36f34f-6f91-4076-bc3e-71e2fc0e797e\") " Mar 10 19:14:04 crc kubenswrapper[4861]: I0310 19:14:04.432636 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnktv\" (UniqueName: \"kubernetes.io/projected/cc36f34f-6f91-4076-bc3e-71e2fc0e797e-kube-api-access-rnktv\") pod \"cc36f34f-6f91-4076-bc3e-71e2fc0e797e\" (UID: \"cc36f34f-6f91-4076-bc3e-71e2fc0e797e\") " Mar 10 19:14:04 crc kubenswrapper[4861]: I0310 19:14:04.433388 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cc36f34f-6f91-4076-bc3e-71e2fc0e797e-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "cc36f34f-6f91-4076-bc3e-71e2fc0e797e" (UID: "cc36f34f-6f91-4076-bc3e-71e2fc0e797e"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 10 19:14:04 crc kubenswrapper[4861]: I0310 19:14:04.440404 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cc36f34f-6f91-4076-bc3e-71e2fc0e797e-kube-api-access-rnktv" (OuterVolumeSpecName: "kube-api-access-rnktv") pod "cc36f34f-6f91-4076-bc3e-71e2fc0e797e" (UID: "cc36f34f-6f91-4076-bc3e-71e2fc0e797e"). InnerVolumeSpecName "kube-api-access-rnktv". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 10 19:14:04 crc kubenswrapper[4861]: I0310 19:14:04.534719 4861 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cc36f34f-6f91-4076-bc3e-71e2fc0e797e-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 10 19:14:04 crc kubenswrapper[4861]: I0310 19:14:04.534756 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnktv\" (UniqueName: \"kubernetes.io/projected/cc36f34f-6f91-4076-bc3e-71e2fc0e797e-kube-api-access-rnktv\") on node \"crc\" DevicePath \"\"" Mar 10 19:14:04 crc kubenswrapper[4861]: I0310 19:14:04.617452 4861 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/swift-proxy-6d6c7bd6d5-klrmq" podUID="97db979f-75cb-4e7e-9dc6-0c65f39fef8e" containerName="proxy-httpd" probeResult="failure" output="Get \"https://10.217.0.177:8080/healthcheck\": context deadline exceeded" Mar 10 19:14:04 crc kubenswrapper[4861]: I0310 19:14:04.617797 4861 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/swift-proxy-6d6c7bd6d5-klrmq" podUID="97db979f-75cb-4e7e-9dc6-0c65f39fef8e" containerName="proxy-server" probeResult="failure" output="Get \"https://10.217.0.177:8080/healthcheck\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 10 19:14:04 crc kubenswrapper[4861]: I0310 19:14:04.708939 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-galera-0" Mar 10 19:14:04 crc kubenswrapper[4861]: I0310 19:14:04.842214 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/a0b9c71c-6fe7-4eb9-9f95-87fce0f70caa-galera-tls-certs\") pod \"a0b9c71c-6fe7-4eb9-9f95-87fce0f70caa\" (UID: \"a0b9c71c-6fe7-4eb9-9f95-87fce0f70caa\") " Mar 10 19:14:04 crc kubenswrapper[4861]: I0310 19:14:04.842375 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xsvk9\" (UniqueName: \"kubernetes.io/projected/a0b9c71c-6fe7-4eb9-9f95-87fce0f70caa-kube-api-access-xsvk9\") pod \"a0b9c71c-6fe7-4eb9-9f95-87fce0f70caa\" (UID: \"a0b9c71c-6fe7-4eb9-9f95-87fce0f70caa\") " Mar 10 19:14:04 crc kubenswrapper[4861]: I0310 19:14:04.842501 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a0b9c71c-6fe7-4eb9-9f95-87fce0f70caa-combined-ca-bundle\") pod \"a0b9c71c-6fe7-4eb9-9f95-87fce0f70caa\" (UID: \"a0b9c71c-6fe7-4eb9-9f95-87fce0f70caa\") " Mar 10 19:14:04 crc kubenswrapper[4861]: I0310 19:14:04.842539 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mysql-db\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"a0b9c71c-6fe7-4eb9-9f95-87fce0f70caa\" (UID: \"a0b9c71c-6fe7-4eb9-9f95-87fce0f70caa\") " Mar 10 19:14:04 crc kubenswrapper[4861]: I0310 19:14:04.842575 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a0b9c71c-6fe7-4eb9-9f95-87fce0f70caa-operator-scripts\") pod \"a0b9c71c-6fe7-4eb9-9f95-87fce0f70caa\" (UID: \"a0b9c71c-6fe7-4eb9-9f95-87fce0f70caa\") " Mar 10 19:14:04 crc kubenswrapper[4861]: I0310 19:14:04.842624 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-generated\" 
(UniqueName: \"kubernetes.io/empty-dir/a0b9c71c-6fe7-4eb9-9f95-87fce0f70caa-config-data-generated\") pod \"a0b9c71c-6fe7-4eb9-9f95-87fce0f70caa\" (UID: \"a0b9c71c-6fe7-4eb9-9f95-87fce0f70caa\") " Mar 10 19:14:04 crc kubenswrapper[4861]: I0310 19:14:04.842685 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/a0b9c71c-6fe7-4eb9-9f95-87fce0f70caa-kolla-config\") pod \"a0b9c71c-6fe7-4eb9-9f95-87fce0f70caa\" (UID: \"a0b9c71c-6fe7-4eb9-9f95-87fce0f70caa\") " Mar 10 19:14:04 crc kubenswrapper[4861]: I0310 19:14:04.842811 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/a0b9c71c-6fe7-4eb9-9f95-87fce0f70caa-config-data-default\") pod \"a0b9c71c-6fe7-4eb9-9f95-87fce0f70caa\" (UID: \"a0b9c71c-6fe7-4eb9-9f95-87fce0f70caa\") " Mar 10 19:14:04 crc kubenswrapper[4861]: I0310 19:14:04.844099 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a0b9c71c-6fe7-4eb9-9f95-87fce0f70caa-config-data-default" (OuterVolumeSpecName: "config-data-default") pod "a0b9c71c-6fe7-4eb9-9f95-87fce0f70caa" (UID: "a0b9c71c-6fe7-4eb9-9f95-87fce0f70caa"). InnerVolumeSpecName "config-data-default". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 10 19:14:04 crc kubenswrapper[4861]: I0310 19:14:04.844254 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a0b9c71c-6fe7-4eb9-9f95-87fce0f70caa-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "a0b9c71c-6fe7-4eb9-9f95-87fce0f70caa" (UID: "a0b9c71c-6fe7-4eb9-9f95-87fce0f70caa"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 10 19:14:04 crc kubenswrapper[4861]: I0310 19:14:04.844701 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a0b9c71c-6fe7-4eb9-9f95-87fce0f70caa-kolla-config" (OuterVolumeSpecName: "kolla-config") pod "a0b9c71c-6fe7-4eb9-9f95-87fce0f70caa" (UID: "a0b9c71c-6fe7-4eb9-9f95-87fce0f70caa"). InnerVolumeSpecName "kolla-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 10 19:14:04 crc kubenswrapper[4861]: I0310 19:14:04.844773 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a0b9c71c-6fe7-4eb9-9f95-87fce0f70caa-config-data-generated" (OuterVolumeSpecName: "config-data-generated") pod "a0b9c71c-6fe7-4eb9-9f95-87fce0f70caa" (UID: "a0b9c71c-6fe7-4eb9-9f95-87fce0f70caa"). InnerVolumeSpecName "config-data-generated". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 10 19:14:04 crc kubenswrapper[4861]: I0310 19:14:04.852631 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0b9c71c-6fe7-4eb9-9f95-87fce0f70caa-kube-api-access-xsvk9" (OuterVolumeSpecName: "kube-api-access-xsvk9") pod "a0b9c71c-6fe7-4eb9-9f95-87fce0f70caa" (UID: "a0b9c71c-6fe7-4eb9-9f95-87fce0f70caa"). InnerVolumeSpecName "kube-api-access-xsvk9". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 10 19:14:04 crc kubenswrapper[4861]: I0310 19:14:04.874905 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage11-crc" (OuterVolumeSpecName: "mysql-db") pod "a0b9c71c-6fe7-4eb9-9f95-87fce0f70caa" (UID: "a0b9c71c-6fe7-4eb9-9f95-87fce0f70caa"). InnerVolumeSpecName "local-storage11-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Mar 10 19:14:04 crc kubenswrapper[4861]: I0310 19:14:04.879947 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0b9c71c-6fe7-4eb9-9f95-87fce0f70caa-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a0b9c71c-6fe7-4eb9-9f95-87fce0f70caa" (UID: "a0b9c71c-6fe7-4eb9-9f95-87fce0f70caa"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 10 19:14:04 crc kubenswrapper[4861]: I0310 19:14:04.896928 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0b9c71c-6fe7-4eb9-9f95-87fce0f70caa-galera-tls-certs" (OuterVolumeSpecName: "galera-tls-certs") pod "a0b9c71c-6fe7-4eb9-9f95-87fce0f70caa" (UID: "a0b9c71c-6fe7-4eb9-9f95-87fce0f70caa"). InnerVolumeSpecName "galera-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 10 19:14:04 crc kubenswrapper[4861]: I0310 19:14:04.948337 4861 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" " Mar 10 19:14:04 crc kubenswrapper[4861]: I0310 19:14:04.948384 4861 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a0b9c71c-6fe7-4eb9-9f95-87fce0f70caa-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 10 19:14:04 crc kubenswrapper[4861]: I0310 19:14:04.948407 4861 reconciler_common.go:293] "Volume detached for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/a0b9c71c-6fe7-4eb9-9f95-87fce0f70caa-config-data-generated\") on node \"crc\" DevicePath \"\"" Mar 10 19:14:04 crc kubenswrapper[4861]: I0310 19:14:04.948421 4861 reconciler_common.go:293] "Volume detached for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/a0b9c71c-6fe7-4eb9-9f95-87fce0f70caa-kolla-config\") on node 
\"crc\" DevicePath \"\"" Mar 10 19:14:04 crc kubenswrapper[4861]: I0310 19:14:04.948433 4861 reconciler_common.go:293] "Volume detached for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/a0b9c71c-6fe7-4eb9-9f95-87fce0f70caa-config-data-default\") on node \"crc\" DevicePath \"\"" Mar 10 19:14:04 crc kubenswrapper[4861]: I0310 19:14:04.948445 4861 reconciler_common.go:293] "Volume detached for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/a0b9c71c-6fe7-4eb9-9f95-87fce0f70caa-galera-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 10 19:14:04 crc kubenswrapper[4861]: I0310 19:14:04.948457 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xsvk9\" (UniqueName: \"kubernetes.io/projected/a0b9c71c-6fe7-4eb9-9f95-87fce0f70caa-kube-api-access-xsvk9\") on node \"crc\" DevicePath \"\"" Mar 10 19:14:04 crc kubenswrapper[4861]: I0310 19:14:04.948469 4861 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a0b9c71c-6fe7-4eb9-9f95-87fce0f70caa-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 10 19:14:04 crc kubenswrapper[4861]: I0310 19:14:04.966403 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="41fdd1f2-c0e5-4dbe-aa18-bd6dd1a4754d" path="/var/lib/kubelet/pods/41fdd1f2-c0e5-4dbe-aa18-bd6dd1a4754d/volumes" Mar 10 19:14:04 crc kubenswrapper[4861]: I0310 19:14:04.967174 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="509298b8-3d6b-4182-b989-c25c4791ce6b" path="/var/lib/kubelet/pods/509298b8-3d6b-4182-b989-c25c4791ce6b/volumes" Mar 10 19:14:04 crc kubenswrapper[4861]: I0310 19:14:04.968001 4861 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage11-crc" (UniqueName: "kubernetes.io/local-volume/local-storage11-crc") on node "crc" Mar 10 19:14:04 crc kubenswrapper[4861]: I0310 19:14:04.971960 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="79b1b96d-52e3-4a16-8fc1-d09188b5ebc1" path="/var/lib/kubelet/pods/79b1b96d-52e3-4a16-8fc1-d09188b5ebc1/volumes" Mar 10 19:14:04 crc kubenswrapper[4861]: I0310 19:14:04.972664 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7cd47c3f-4d27-4dae-bac5-a6f81ce01cd2" path="/var/lib/kubelet/pods/7cd47c3f-4d27-4dae-bac5-a6f81ce01cd2/volumes" Mar 10 19:14:04 crc kubenswrapper[4861]: I0310 19:14:04.973957 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8c1ba054-6941-4e52-b792-250287f25d92" path="/var/lib/kubelet/pods/8c1ba054-6941-4e52-b792-250287f25d92/volumes" Mar 10 19:14:04 crc kubenswrapper[4861]: I0310 19:14:04.974748 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9140f7c5-893a-4128-85aa-2db96537b483" path="/var/lib/kubelet/pods/9140f7c5-893a-4128-85aa-2db96537b483/volumes" Mar 10 19:14:04 crc kubenswrapper[4861]: I0310 19:14:04.975465 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="930df8f4-7ebf-4425-976f-4f52654586bb" path="/var/lib/kubelet/pods/930df8f4-7ebf-4425-976f-4f52654586bb/volumes" Mar 10 19:14:04 crc kubenswrapper[4861]: I0310 19:14:04.976771 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31600f9-88a4-4ecb-8da3-84c966bf4a63" path="/var/lib/kubelet/pods/a31600f9-88a4-4ecb-8da3-84c966bf4a63/volumes" Mar 10 19:14:04 crc kubenswrapper[4861]: I0310 19:14:04.977473 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c6c9fd69-da59-4e5d-9630-32a42c1dc30e" path="/var/lib/kubelet/pods/c6c9fd69-da59-4e5d-9630-32a42c1dc30e/volumes" Mar 10 19:14:04 crc kubenswrapper[4861]: I0310 19:14:04.977896 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ca9800e2-ceed-4197-90f1-97d14c918e45" path="/var/lib/kubelet/pods/ca9800e2-ceed-4197-90f1-97d14c918e45/volumes" Mar 10 19:14:04 crc kubenswrapper[4861]: I0310 19:14:04.979331 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="ef3a31e3-d3ba-4f5c-950a-1355bb61f657" path="/var/lib/kubelet/pods/ef3a31e3-d3ba-4f5c-950a-1355bb61f657/volumes" Mar 10 19:14:05 crc kubenswrapper[4861]: I0310 19:14:05.014883 4861 generic.go:334] "Generic (PLEG): container finished" podID="0ba95f55-3cea-4f0b-8f09-c6b4027789f8" containerID="32235165193d6a0485898ee070ca8f7382c523ae5f2c706e831442bec3b9e031" exitCode=0 Mar 10 19:14:05 crc kubenswrapper[4861]: I0310 19:14:05.014968 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"0ba95f55-3cea-4f0b-8f09-c6b4027789f8","Type":"ContainerDied","Data":"32235165193d6a0485898ee070ca8f7382c523ae5f2c706e831442bec3b9e031"} Mar 10 19:14:05 crc kubenswrapper[4861]: I0310 19:14:05.043883 4861 generic.go:334] "Generic (PLEG): container finished" podID="91e5ef26-1440-4ffb-84d9-2bc5e0bed45c" containerID="451f09d0b8ba6e81751ef4eea6bf40678dba1a95fba42d8522240189199186fe" exitCode=0 Mar 10 19:14:05 crc kubenswrapper[4861]: I0310 19:14:05.043973 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29552834-dlwsb" event={"ID":"91e5ef26-1440-4ffb-84d9-2bc5e0bed45c","Type":"ContainerDied","Data":"451f09d0b8ba6e81751ef4eea6bf40678dba1a95fba42d8522240189199186fe"} Mar 10 19:14:05 crc kubenswrapper[4861]: I0310 19:14:05.045773 4861 generic.go:334] "Generic (PLEG): container finished" podID="a0b9c71c-6fe7-4eb9-9f95-87fce0f70caa" containerID="c6f99dc7de3f814aa99b28dc2642760e573b793380b5e139e7ad48b602edda7e" exitCode=0 Mar 10 19:14:05 crc kubenswrapper[4861]: I0310 19:14:05.045874 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"a0b9c71c-6fe7-4eb9-9f95-87fce0f70caa","Type":"ContainerDied","Data":"c6f99dc7de3f814aa99b28dc2642760e573b793380b5e139e7ad48b602edda7e"} Mar 10 19:14:05 crc kubenswrapper[4861]: I0310 19:14:05.045908 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" 
event={"ID":"a0b9c71c-6fe7-4eb9-9f95-87fce0f70caa","Type":"ContainerDied","Data":"d57a1cf8c3616e6222bf4788727bf8702b35c72a689146bc27443a42321c57b5"} Mar 10 19:14:05 crc kubenswrapper[4861]: I0310 19:14:05.045930 4861 scope.go:117] "RemoveContainer" containerID="c6f99dc7de3f814aa99b28dc2642760e573b793380b5e139e7ad48b602edda7e" Mar 10 19:14:05 crc kubenswrapper[4861]: I0310 19:14:05.046188 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Mar 10 19:14:05 crc kubenswrapper[4861]: I0310 19:14:05.049400 4861 reconciler_common.go:293] "Volume detached for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" DevicePath \"\"" Mar 10 19:14:05 crc kubenswrapper[4861]: I0310 19:14:05.051415 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-495qf" event={"ID":"cc36f34f-6f91-4076-bc3e-71e2fc0e797e","Type":"ContainerDied","Data":"1458fb57f7890646222625589f0d8e6369b4545bc2a878e4426093d6a8d13900"} Mar 10 19:14:05 crc kubenswrapper[4861]: I0310 19:14:05.051590 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-495qf" Mar 10 19:14:05 crc kubenswrapper[4861]: I0310 19:14:05.091577 4861 scope.go:117] "RemoveContainer" containerID="e73a58f4d035911e32cee3d7d02558931bf510fafad2ff8b6f525e7198721cb1" Mar 10 19:14:05 crc kubenswrapper[4861]: I0310 19:14:05.101749 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-495qf"] Mar 10 19:14:05 crc kubenswrapper[4861]: I0310 19:14:05.116315 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-495qf"] Mar 10 19:14:05 crc kubenswrapper[4861]: I0310 19:14:05.123154 4861 scope.go:117] "RemoveContainer" containerID="c6f99dc7de3f814aa99b28dc2642760e573b793380b5e139e7ad48b602edda7e" Mar 10 19:14:05 crc kubenswrapper[4861]: E0310 19:14:05.126197 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c6f99dc7de3f814aa99b28dc2642760e573b793380b5e139e7ad48b602edda7e\": container with ID starting with c6f99dc7de3f814aa99b28dc2642760e573b793380b5e139e7ad48b602edda7e not found: ID does not exist" containerID="c6f99dc7de3f814aa99b28dc2642760e573b793380b5e139e7ad48b602edda7e" Mar 10 19:14:05 crc kubenswrapper[4861]: I0310 19:14:05.126464 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c6f99dc7de3f814aa99b28dc2642760e573b793380b5e139e7ad48b602edda7e"} err="failed to get container status \"c6f99dc7de3f814aa99b28dc2642760e573b793380b5e139e7ad48b602edda7e\": rpc error: code = NotFound desc = could not find container \"c6f99dc7de3f814aa99b28dc2642760e573b793380b5e139e7ad48b602edda7e\": container with ID starting with c6f99dc7de3f814aa99b28dc2642760e573b793380b5e139e7ad48b602edda7e not found: ID does not exist" Mar 10 19:14:05 crc kubenswrapper[4861]: I0310 19:14:05.126614 4861 scope.go:117] "RemoveContainer" 
containerID="e73a58f4d035911e32cee3d7d02558931bf510fafad2ff8b6f525e7198721cb1" Mar 10 19:14:05 crc kubenswrapper[4861]: E0310 19:14:05.127096 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e73a58f4d035911e32cee3d7d02558931bf510fafad2ff8b6f525e7198721cb1\": container with ID starting with e73a58f4d035911e32cee3d7d02558931bf510fafad2ff8b6f525e7198721cb1 not found: ID does not exist" containerID="e73a58f4d035911e32cee3d7d02558931bf510fafad2ff8b6f525e7198721cb1" Mar 10 19:14:05 crc kubenswrapper[4861]: I0310 19:14:05.127145 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e73a58f4d035911e32cee3d7d02558931bf510fafad2ff8b6f525e7198721cb1"} err="failed to get container status \"e73a58f4d035911e32cee3d7d02558931bf510fafad2ff8b6f525e7198721cb1\": rpc error: code = NotFound desc = could not find container \"e73a58f4d035911e32cee3d7d02558931bf510fafad2ff8b6f525e7198721cb1\": container with ID starting with e73a58f4d035911e32cee3d7d02558931bf510fafad2ff8b6f525e7198721cb1 not found: ID does not exist" Mar 10 19:14:05 crc kubenswrapper[4861]: I0310 19:14:05.128584 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstack-galera-0"] Mar 10 19:14:05 crc kubenswrapper[4861]: I0310 19:14:05.138878 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/openstack-galera-0"] Mar 10 19:14:05 crc kubenswrapper[4861]: E0310 19:14:05.150848 4861 configmap.go:193] Couldn't get configMap openstack/rabbitmq-cell1-config-data: configmap "rabbitmq-cell1-config-data" not found Mar 10 19:14:05 crc kubenswrapper[4861]: E0310 19:14:05.150994 4861 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9fa4a97d-682a-40eb-93e0-5f5167ddb0a0-config-data podName:9fa4a97d-682a-40eb-93e0-5f5167ddb0a0 nodeName:}" failed. 
No retries permitted until 2026-03-10 19:14:13.150978565 +0000 UTC m=+1596.914414525 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/9fa4a97d-682a-40eb-93e0-5f5167ddb0a0-config-data") pod "rabbitmq-cell1-server-0" (UID: "9fa4a97d-682a-40eb-93e0-5f5167ddb0a0") : configmap "rabbitmq-cell1-config-data" not found Mar 10 19:14:05 crc kubenswrapper[4861]: E0310 19:14:05.208829 4861 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="a6e23144072f6812cabc16305a9cf3ff58abd4877ddf98f7f123b16f5863c1e7" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Mar 10 19:14:05 crc kubenswrapper[4861]: E0310 19:14:05.213135 4861 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="a6e23144072f6812cabc16305a9cf3ff58abd4877ddf98f7f123b16f5863c1e7" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Mar 10 19:14:05 crc kubenswrapper[4861]: E0310 19:14:05.221035 4861 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="a6e23144072f6812cabc16305a9cf3ff58abd4877ddf98f7f123b16f5863c1e7" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Mar 10 19:14:05 crc kubenswrapper[4861]: E0310 19:14:05.221086 4861 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="36bd8cd3-7b2c-45fb-b171-aa2884df4e98" containerName="nova-scheduler-scheduler" Mar 10 19:14:05 crc kubenswrapper[4861]: I0310 
19:14:05.319503 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Mar 10 19:14:05 crc kubenswrapper[4861]: E0310 19:14:05.454178 4861 handlers.go:78] "Exec lifecycle hook for Container in Pod failed" err=< Mar 10 19:14:05 crc kubenswrapper[4861]: command '/usr/share/ovn/scripts/ovn-ctl stop_controller' exited with 137: 2026-03-10T19:13:59Z|00001|fatal_signal|WARN|terminating with signal 14 (Alarm clock) Mar 10 19:14:05 crc kubenswrapper[4861]: /etc/init.d/functions: line 589: 407 Alarm clock "$@" Mar 10 19:14:05 crc kubenswrapper[4861]: > execCommand=["/usr/share/ovn/scripts/ovn-ctl","stop_controller"] containerName="ovn-controller" pod="openstack/ovn-controller-zvlgw" message=< Mar 10 19:14:05 crc kubenswrapper[4861]: Exiting ovn-controller (1) [FAILED] Mar 10 19:14:05 crc kubenswrapper[4861]: Killing ovn-controller (1) [ OK ] Mar 10 19:14:05 crc kubenswrapper[4861]: 2026-03-10T19:13:59Z|00001|fatal_signal|WARN|terminating with signal 14 (Alarm clock) Mar 10 19:14:05 crc kubenswrapper[4861]: /etc/init.d/functions: line 589: 407 Alarm clock "$@" Mar 10 19:14:05 crc kubenswrapper[4861]: > Mar 10 19:14:05 crc kubenswrapper[4861]: E0310 19:14:05.454211 4861 kuberuntime_container.go:691] "PreStop hook failed" err=< Mar 10 19:14:05 crc kubenswrapper[4861]: command '/usr/share/ovn/scripts/ovn-ctl stop_controller' exited with 137: 2026-03-10T19:13:59Z|00001|fatal_signal|WARN|terminating with signal 14 (Alarm clock) Mar 10 19:14:05 crc kubenswrapper[4861]: /etc/init.d/functions: line 589: 407 Alarm clock "$@" Mar 10 19:14:05 crc kubenswrapper[4861]: > pod="openstack/ovn-controller-zvlgw" podUID="726cec08-5661-4b62-8a44-028b015119e4" containerName="ovn-controller" containerID="cri-o://0a5e5aa6cf14c251a6b1ce301e42515d655ffa02b6bf032ecb113664146e1930" Mar 10 19:14:05 crc kubenswrapper[4861]: I0310 19:14:05.454248 4861 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openstack/ovn-controller-zvlgw" podUID="726cec08-5661-4b62-8a44-028b015119e4" containerName="ovn-controller" containerID="cri-o://0a5e5aa6cf14c251a6b1ce301e42515d655ffa02b6bf032ecb113664146e1930" gracePeriod=23 Mar 10 19:14:05 crc kubenswrapper[4861]: I0310 19:14:05.455978 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/0ba95f55-3cea-4f0b-8f09-c6b4027789f8-server-conf\") pod \"0ba95f55-3cea-4f0b-8f09-c6b4027789f8\" (UID: \"0ba95f55-3cea-4f0b-8f09-c6b4027789f8\") " Mar 10 19:14:05 crc kubenswrapper[4861]: I0310 19:14:05.456139 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/0ba95f55-3cea-4f0b-8f09-c6b4027789f8-rabbitmq-confd\") pod \"0ba95f55-3cea-4f0b-8f09-c6b4027789f8\" (UID: \"0ba95f55-3cea-4f0b-8f09-c6b4027789f8\") " Mar 10 19:14:05 crc kubenswrapper[4861]: I0310 19:14:05.456207 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/0ba95f55-3cea-4f0b-8f09-c6b4027789f8-erlang-cookie-secret\") pod \"0ba95f55-3cea-4f0b-8f09-c6b4027789f8\" (UID: \"0ba95f55-3cea-4f0b-8f09-c6b4027789f8\") " Mar 10 19:14:05 crc kubenswrapper[4861]: I0310 19:14:05.456243 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lhrlg\" (UniqueName: \"kubernetes.io/projected/0ba95f55-3cea-4f0b-8f09-c6b4027789f8-kube-api-access-lhrlg\") pod \"0ba95f55-3cea-4f0b-8f09-c6b4027789f8\" (UID: \"0ba95f55-3cea-4f0b-8f09-c6b4027789f8\") " Mar 10 19:14:05 crc kubenswrapper[4861]: I0310 19:14:05.456291 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/0ba95f55-3cea-4f0b-8f09-c6b4027789f8-rabbitmq-plugins\") pod \"0ba95f55-3cea-4f0b-8f09-c6b4027789f8\" (UID: 
\"0ba95f55-3cea-4f0b-8f09-c6b4027789f8\") " Mar 10 19:14:05 crc kubenswrapper[4861]: I0310 19:14:05.456338 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/0ba95f55-3cea-4f0b-8f09-c6b4027789f8-rabbitmq-erlang-cookie\") pod \"0ba95f55-3cea-4f0b-8f09-c6b4027789f8\" (UID: \"0ba95f55-3cea-4f0b-8f09-c6b4027789f8\") " Mar 10 19:14:05 crc kubenswrapper[4861]: I0310 19:14:05.456354 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/0ba95f55-3cea-4f0b-8f09-c6b4027789f8-config-data\") pod \"0ba95f55-3cea-4f0b-8f09-c6b4027789f8\" (UID: \"0ba95f55-3cea-4f0b-8f09-c6b4027789f8\") " Mar 10 19:14:05 crc kubenswrapper[4861]: I0310 19:14:05.456379 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/0ba95f55-3cea-4f0b-8f09-c6b4027789f8-plugins-conf\") pod \"0ba95f55-3cea-4f0b-8f09-c6b4027789f8\" (UID: \"0ba95f55-3cea-4f0b-8f09-c6b4027789f8\") " Mar 10 19:14:05 crc kubenswrapper[4861]: I0310 19:14:05.456402 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/0ba95f55-3cea-4f0b-8f09-c6b4027789f8-pod-info\") pod \"0ba95f55-3cea-4f0b-8f09-c6b4027789f8\" (UID: \"0ba95f55-3cea-4f0b-8f09-c6b4027789f8\") " Mar 10 19:14:05 crc kubenswrapper[4861]: I0310 19:14:05.456442 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/0ba95f55-3cea-4f0b-8f09-c6b4027789f8-rabbitmq-tls\") pod \"0ba95f55-3cea-4f0b-8f09-c6b4027789f8\" (UID: \"0ba95f55-3cea-4f0b-8f09-c6b4027789f8\") " Mar 10 19:14:05 crc kubenswrapper[4861]: I0310 19:14:05.456475 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage02-crc\") pod \"0ba95f55-3cea-4f0b-8f09-c6b4027789f8\" (UID: \"0ba95f55-3cea-4f0b-8f09-c6b4027789f8\") " Mar 10 19:14:05 crc kubenswrapper[4861]: I0310 19:14:05.457957 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0ba95f55-3cea-4f0b-8f09-c6b4027789f8-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "0ba95f55-3cea-4f0b-8f09-c6b4027789f8" (UID: "0ba95f55-3cea-4f0b-8f09-c6b4027789f8"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 10 19:14:05 crc kubenswrapper[4861]: I0310 19:14:05.458309 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0ba95f55-3cea-4f0b-8f09-c6b4027789f8-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "0ba95f55-3cea-4f0b-8f09-c6b4027789f8" (UID: "0ba95f55-3cea-4f0b-8f09-c6b4027789f8"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 10 19:14:05 crc kubenswrapper[4861]: I0310 19:14:05.459091 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0ba95f55-3cea-4f0b-8f09-c6b4027789f8-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "0ba95f55-3cea-4f0b-8f09-c6b4027789f8" (UID: "0ba95f55-3cea-4f0b-8f09-c6b4027789f8"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 10 19:14:05 crc kubenswrapper[4861]: I0310 19:14:05.462485 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage02-crc" (OuterVolumeSpecName: "persistence") pod "0ba95f55-3cea-4f0b-8f09-c6b4027789f8" (UID: "0ba95f55-3cea-4f0b-8f09-c6b4027789f8"). InnerVolumeSpecName "local-storage02-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Mar 10 19:14:05 crc kubenswrapper[4861]: I0310 19:14:05.468205 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0ba95f55-3cea-4f0b-8f09-c6b4027789f8-kube-api-access-lhrlg" (OuterVolumeSpecName: "kube-api-access-lhrlg") pod "0ba95f55-3cea-4f0b-8f09-c6b4027789f8" (UID: "0ba95f55-3cea-4f0b-8f09-c6b4027789f8"). InnerVolumeSpecName "kube-api-access-lhrlg". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 10 19:14:05 crc kubenswrapper[4861]: I0310 19:14:05.469630 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/0ba95f55-3cea-4f0b-8f09-c6b4027789f8-pod-info" (OuterVolumeSpecName: "pod-info") pod "0ba95f55-3cea-4f0b-8f09-c6b4027789f8" (UID: "0ba95f55-3cea-4f0b-8f09-c6b4027789f8"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Mar 10 19:14:05 crc kubenswrapper[4861]: I0310 19:14:05.490697 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0ba95f55-3cea-4f0b-8f09-c6b4027789f8-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "0ba95f55-3cea-4f0b-8f09-c6b4027789f8" (UID: "0ba95f55-3cea-4f0b-8f09-c6b4027789f8"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 10 19:14:05 crc kubenswrapper[4861]: I0310 19:14:05.498722 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0ba95f55-3cea-4f0b-8f09-c6b4027789f8-server-conf" (OuterVolumeSpecName: "server-conf") pod "0ba95f55-3cea-4f0b-8f09-c6b4027789f8" (UID: "0ba95f55-3cea-4f0b-8f09-c6b4027789f8"). InnerVolumeSpecName "server-conf". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 10 19:14:05 crc kubenswrapper[4861]: I0310 19:14:05.502837 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0ba95f55-3cea-4f0b-8f09-c6b4027789f8-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "0ba95f55-3cea-4f0b-8f09-c6b4027789f8" (UID: "0ba95f55-3cea-4f0b-8f09-c6b4027789f8"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 10 19:14:05 crc kubenswrapper[4861]: I0310 19:14:05.522454 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0ba95f55-3cea-4f0b-8f09-c6b4027789f8-config-data" (OuterVolumeSpecName: "config-data") pod "0ba95f55-3cea-4f0b-8f09-c6b4027789f8" (UID: "0ba95f55-3cea-4f0b-8f09-c6b4027789f8"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 10 19:14:05 crc kubenswrapper[4861]: I0310 19:14:05.555125 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-75ff4ff987-k4jks" Mar 10 19:14:05 crc kubenswrapper[4861]: I0310 19:14:05.559591 4861 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/0ba95f55-3cea-4f0b-8f09-c6b4027789f8-pod-info\") on node \"crc\" DevicePath \"\"" Mar 10 19:14:05 crc kubenswrapper[4861]: I0310 19:14:05.559614 4861 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/0ba95f55-3cea-4f0b-8f09-c6b4027789f8-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Mar 10 19:14:05 crc kubenswrapper[4861]: I0310 19:14:05.559642 4861 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" " Mar 10 19:14:05 crc kubenswrapper[4861]: I0310 19:14:05.559673 4861 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/0ba95f55-3cea-4f0b-8f09-c6b4027789f8-server-conf\") on node \"crc\" DevicePath \"\"" Mar 10 19:14:05 crc kubenswrapper[4861]: I0310 19:14:05.559683 4861 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/0ba95f55-3cea-4f0b-8f09-c6b4027789f8-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Mar 10 19:14:05 crc kubenswrapper[4861]: I0310 19:14:05.559693 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lhrlg\" (UniqueName: \"kubernetes.io/projected/0ba95f55-3cea-4f0b-8f09-c6b4027789f8-kube-api-access-lhrlg\") on node \"crc\" DevicePath \"\"" Mar 10 19:14:05 crc kubenswrapper[4861]: I0310 19:14:05.559701 4861 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/0ba95f55-3cea-4f0b-8f09-c6b4027789f8-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Mar 10 19:14:05 crc kubenswrapper[4861]: I0310 19:14:05.559721 
4861 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/0ba95f55-3cea-4f0b-8f09-c6b4027789f8-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Mar 10 19:14:05 crc kubenswrapper[4861]: I0310 19:14:05.559729 4861 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/0ba95f55-3cea-4f0b-8f09-c6b4027789f8-config-data\") on node \"crc\" DevicePath \"\"" Mar 10 19:14:05 crc kubenswrapper[4861]: I0310 19:14:05.559736 4861 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/0ba95f55-3cea-4f0b-8f09-c6b4027789f8-plugins-conf\") on node \"crc\" DevicePath \"\"" Mar 10 19:14:05 crc kubenswrapper[4861]: I0310 19:14:05.576534 4861 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage02-crc" (UniqueName: "kubernetes.io/local-volume/local-storage02-crc") on node "crc" Mar 10 19:14:05 crc kubenswrapper[4861]: I0310 19:14:05.587608 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0ba95f55-3cea-4f0b-8f09-c6b4027789f8-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "0ba95f55-3cea-4f0b-8f09-c6b4027789f8" (UID: "0ba95f55-3cea-4f0b-8f09-c6b4027789f8"). InnerVolumeSpecName "rabbitmq-confd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 10 19:14:05 crc kubenswrapper[4861]: I0310 19:14:05.660543 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/fb082653-4ce1-4696-b6fb-e6af12109812-public-tls-certs\") pod \"fb082653-4ce1-4696-b6fb-e6af12109812\" (UID: \"fb082653-4ce1-4696-b6fb-e6af12109812\") " Mar 10 19:14:05 crc kubenswrapper[4861]: I0310 19:14:05.660647 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fb082653-4ce1-4696-b6fb-e6af12109812-config-data\") pod \"fb082653-4ce1-4696-b6fb-e6af12109812\" (UID: \"fb082653-4ce1-4696-b6fb-e6af12109812\") " Mar 10 19:14:05 crc kubenswrapper[4861]: I0310 19:14:05.660677 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gzgl6\" (UniqueName: \"kubernetes.io/projected/fb082653-4ce1-4696-b6fb-e6af12109812-kube-api-access-gzgl6\") pod \"fb082653-4ce1-4696-b6fb-e6af12109812\" (UID: \"fb082653-4ce1-4696-b6fb-e6af12109812\") " Mar 10 19:14:05 crc kubenswrapper[4861]: I0310 19:14:05.660831 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fb082653-4ce1-4696-b6fb-e6af12109812-scripts\") pod \"fb082653-4ce1-4696-b6fb-e6af12109812\" (UID: \"fb082653-4ce1-4696-b6fb-e6af12109812\") " Mar 10 19:14:05 crc kubenswrapper[4861]: I0310 19:14:05.660857 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fb082653-4ce1-4696-b6fb-e6af12109812-combined-ca-bundle\") pod \"fb082653-4ce1-4696-b6fb-e6af12109812\" (UID: \"fb082653-4ce1-4696-b6fb-e6af12109812\") " Mar 10 19:14:05 crc kubenswrapper[4861]: I0310 19:14:05.660904 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: 
\"kubernetes.io/secret/fb082653-4ce1-4696-b6fb-e6af12109812-credential-keys\") pod \"fb082653-4ce1-4696-b6fb-e6af12109812\" (UID: \"fb082653-4ce1-4696-b6fb-e6af12109812\") " Mar 10 19:14:05 crc kubenswrapper[4861]: I0310 19:14:05.660925 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/fb082653-4ce1-4696-b6fb-e6af12109812-internal-tls-certs\") pod \"fb082653-4ce1-4696-b6fb-e6af12109812\" (UID: \"fb082653-4ce1-4696-b6fb-e6af12109812\") " Mar 10 19:14:05 crc kubenswrapper[4861]: I0310 19:14:05.660941 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/fb082653-4ce1-4696-b6fb-e6af12109812-fernet-keys\") pod \"fb082653-4ce1-4696-b6fb-e6af12109812\" (UID: \"fb082653-4ce1-4696-b6fb-e6af12109812\") " Mar 10 19:14:05 crc kubenswrapper[4861]: I0310 19:14:05.661241 4861 reconciler_common.go:293] "Volume detached for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" DevicePath \"\"" Mar 10 19:14:05 crc kubenswrapper[4861]: I0310 19:14:05.661257 4861 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/0ba95f55-3cea-4f0b-8f09-c6b4027789f8-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Mar 10 19:14:05 crc kubenswrapper[4861]: I0310 19:14:05.663904 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fb082653-4ce1-4696-b6fb-e6af12109812-scripts" (OuterVolumeSpecName: "scripts") pod "fb082653-4ce1-4696-b6fb-e6af12109812" (UID: "fb082653-4ce1-4696-b6fb-e6af12109812"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 10 19:14:05 crc kubenswrapper[4861]: I0310 19:14:05.664966 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fb082653-4ce1-4696-b6fb-e6af12109812-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "fb082653-4ce1-4696-b6fb-e6af12109812" (UID: "fb082653-4ce1-4696-b6fb-e6af12109812"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 10 19:14:05 crc kubenswrapper[4861]: I0310 19:14:05.665010 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fb082653-4ce1-4696-b6fb-e6af12109812-kube-api-access-gzgl6" (OuterVolumeSpecName: "kube-api-access-gzgl6") pod "fb082653-4ce1-4696-b6fb-e6af12109812" (UID: "fb082653-4ce1-4696-b6fb-e6af12109812"). InnerVolumeSpecName "kube-api-access-gzgl6". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 10 19:14:05 crc kubenswrapper[4861]: I0310 19:14:05.668107 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fb082653-4ce1-4696-b6fb-e6af12109812-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "fb082653-4ce1-4696-b6fb-e6af12109812" (UID: "fb082653-4ce1-4696-b6fb-e6af12109812"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 10 19:14:05 crc kubenswrapper[4861]: I0310 19:14:05.685379 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fb082653-4ce1-4696-b6fb-e6af12109812-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fb082653-4ce1-4696-b6fb-e6af12109812" (UID: "fb082653-4ce1-4696-b6fb-e6af12109812"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 10 19:14:05 crc kubenswrapper[4861]: I0310 19:14:05.700953 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fb082653-4ce1-4696-b6fb-e6af12109812-config-data" (OuterVolumeSpecName: "config-data") pod "fb082653-4ce1-4696-b6fb-e6af12109812" (UID: "fb082653-4ce1-4696-b6fb-e6af12109812"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 10 19:14:05 crc kubenswrapper[4861]: I0310 19:14:05.714327 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fb082653-4ce1-4696-b6fb-e6af12109812-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "fb082653-4ce1-4696-b6fb-e6af12109812" (UID: "fb082653-4ce1-4696-b6fb-e6af12109812"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 10 19:14:05 crc kubenswrapper[4861]: I0310 19:14:05.742863 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fb082653-4ce1-4696-b6fb-e6af12109812-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "fb082653-4ce1-4696-b6fb-e6af12109812" (UID: "fb082653-4ce1-4696-b6fb-e6af12109812"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 10 19:14:05 crc kubenswrapper[4861]: I0310 19:14:05.762342 4861 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fb082653-4ce1-4696-b6fb-e6af12109812-config-data\") on node \"crc\" DevicePath \"\"" Mar 10 19:14:05 crc kubenswrapper[4861]: I0310 19:14:05.762390 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gzgl6\" (UniqueName: \"kubernetes.io/projected/fb082653-4ce1-4696-b6fb-e6af12109812-kube-api-access-gzgl6\") on node \"crc\" DevicePath \"\"" Mar 10 19:14:05 crc kubenswrapper[4861]: I0310 19:14:05.762406 4861 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fb082653-4ce1-4696-b6fb-e6af12109812-scripts\") on node \"crc\" DevicePath \"\"" Mar 10 19:14:05 crc kubenswrapper[4861]: I0310 19:14:05.762419 4861 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fb082653-4ce1-4696-b6fb-e6af12109812-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 10 19:14:05 crc kubenswrapper[4861]: I0310 19:14:05.762432 4861 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/fb082653-4ce1-4696-b6fb-e6af12109812-credential-keys\") on node \"crc\" DevicePath \"\"" Mar 10 19:14:05 crc kubenswrapper[4861]: I0310 19:14:05.762443 4861 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/fb082653-4ce1-4696-b6fb-e6af12109812-fernet-keys\") on node \"crc\" DevicePath \"\"" Mar 10 19:14:05 crc kubenswrapper[4861]: I0310 19:14:05.762456 4861 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/fb082653-4ce1-4696-b6fb-e6af12109812-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 10 19:14:05 crc kubenswrapper[4861]: I0310 19:14:05.762468 4861 
reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/fb082653-4ce1-4696-b6fb-e6af12109812-public-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 10 19:14:05 crc kubenswrapper[4861]: I0310 19:14:05.782400 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Mar 10 19:14:05 crc kubenswrapper[4861]: I0310 19:14:05.863694 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/9fa4a97d-682a-40eb-93e0-5f5167ddb0a0-erlang-cookie-secret\") pod \"9fa4a97d-682a-40eb-93e0-5f5167ddb0a0\" (UID: \"9fa4a97d-682a-40eb-93e0-5f5167ddb0a0\") " Mar 10 19:14:05 crc kubenswrapper[4861]: I0310 19:14:05.863821 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"9fa4a97d-682a-40eb-93e0-5f5167ddb0a0\" (UID: \"9fa4a97d-682a-40eb-93e0-5f5167ddb0a0\") " Mar 10 19:14:05 crc kubenswrapper[4861]: I0310 19:14:05.863851 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/9fa4a97d-682a-40eb-93e0-5f5167ddb0a0-rabbitmq-confd\") pod \"9fa4a97d-682a-40eb-93e0-5f5167ddb0a0\" (UID: \"9fa4a97d-682a-40eb-93e0-5f5167ddb0a0\") " Mar 10 19:14:05 crc kubenswrapper[4861]: I0310 19:14:05.863889 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7q5nk\" (UniqueName: \"kubernetes.io/projected/9fa4a97d-682a-40eb-93e0-5f5167ddb0a0-kube-api-access-7q5nk\") pod \"9fa4a97d-682a-40eb-93e0-5f5167ddb0a0\" (UID: \"9fa4a97d-682a-40eb-93e0-5f5167ddb0a0\") " Mar 10 19:14:05 crc kubenswrapper[4861]: I0310 19:14:05.863913 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: 
\"kubernetes.io/empty-dir/9fa4a97d-682a-40eb-93e0-5f5167ddb0a0-rabbitmq-plugins\") pod \"9fa4a97d-682a-40eb-93e0-5f5167ddb0a0\" (UID: \"9fa4a97d-682a-40eb-93e0-5f5167ddb0a0\") " Mar 10 19:14:05 crc kubenswrapper[4861]: I0310 19:14:05.863943 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/9fa4a97d-682a-40eb-93e0-5f5167ddb0a0-rabbitmq-tls\") pod \"9fa4a97d-682a-40eb-93e0-5f5167ddb0a0\" (UID: \"9fa4a97d-682a-40eb-93e0-5f5167ddb0a0\") " Mar 10 19:14:05 crc kubenswrapper[4861]: I0310 19:14:05.863970 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/9fa4a97d-682a-40eb-93e0-5f5167ddb0a0-server-conf\") pod \"9fa4a97d-682a-40eb-93e0-5f5167ddb0a0\" (UID: \"9fa4a97d-682a-40eb-93e0-5f5167ddb0a0\") " Mar 10 19:14:05 crc kubenswrapper[4861]: I0310 19:14:05.864063 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/9fa4a97d-682a-40eb-93e0-5f5167ddb0a0-pod-info\") pod \"9fa4a97d-682a-40eb-93e0-5f5167ddb0a0\" (UID: \"9fa4a97d-682a-40eb-93e0-5f5167ddb0a0\") " Mar 10 19:14:05 crc kubenswrapper[4861]: I0310 19:14:05.864087 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/9fa4a97d-682a-40eb-93e0-5f5167ddb0a0-plugins-conf\") pod \"9fa4a97d-682a-40eb-93e0-5f5167ddb0a0\" (UID: \"9fa4a97d-682a-40eb-93e0-5f5167ddb0a0\") " Mar 10 19:14:05 crc kubenswrapper[4861]: I0310 19:14:05.864112 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9fa4a97d-682a-40eb-93e0-5f5167ddb0a0-config-data\") pod \"9fa4a97d-682a-40eb-93e0-5f5167ddb0a0\" (UID: \"9fa4a97d-682a-40eb-93e0-5f5167ddb0a0\") " Mar 10 19:14:05 crc kubenswrapper[4861]: I0310 19:14:05.864136 4861 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/9fa4a97d-682a-40eb-93e0-5f5167ddb0a0-rabbitmq-erlang-cookie\") pod \"9fa4a97d-682a-40eb-93e0-5f5167ddb0a0\" (UID: \"9fa4a97d-682a-40eb-93e0-5f5167ddb0a0\") " Mar 10 19:14:05 crc kubenswrapper[4861]: I0310 19:14:05.864999 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9fa4a97d-682a-40eb-93e0-5f5167ddb0a0-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "9fa4a97d-682a-40eb-93e0-5f5167ddb0a0" (UID: "9fa4a97d-682a-40eb-93e0-5f5167ddb0a0"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 10 19:14:05 crc kubenswrapper[4861]: I0310 19:14:05.865622 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9fa4a97d-682a-40eb-93e0-5f5167ddb0a0-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "9fa4a97d-682a-40eb-93e0-5f5167ddb0a0" (UID: "9fa4a97d-682a-40eb-93e0-5f5167ddb0a0"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 10 19:14:05 crc kubenswrapper[4861]: I0310 19:14:05.869904 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9fa4a97d-682a-40eb-93e0-5f5167ddb0a0-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "9fa4a97d-682a-40eb-93e0-5f5167ddb0a0" (UID: "9fa4a97d-682a-40eb-93e0-5f5167ddb0a0"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 10 19:14:05 crc kubenswrapper[4861]: I0310 19:14:05.870013 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/9fa4a97d-682a-40eb-93e0-5f5167ddb0a0-pod-info" (OuterVolumeSpecName: "pod-info") pod "9fa4a97d-682a-40eb-93e0-5f5167ddb0a0" (UID: "9fa4a97d-682a-40eb-93e0-5f5167ddb0a0"). 
InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Mar 10 19:14:05 crc kubenswrapper[4861]: I0310 19:14:05.870092 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9fa4a97d-682a-40eb-93e0-5f5167ddb0a0-kube-api-access-7q5nk" (OuterVolumeSpecName: "kube-api-access-7q5nk") pod "9fa4a97d-682a-40eb-93e0-5f5167ddb0a0" (UID: "9fa4a97d-682a-40eb-93e0-5f5167ddb0a0"). InnerVolumeSpecName "kube-api-access-7q5nk". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 10 19:14:05 crc kubenswrapper[4861]: I0310 19:14:05.870368 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9fa4a97d-682a-40eb-93e0-5f5167ddb0a0-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "9fa4a97d-682a-40eb-93e0-5f5167ddb0a0" (UID: "9fa4a97d-682a-40eb-93e0-5f5167ddb0a0"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 10 19:14:05 crc kubenswrapper[4861]: I0310 19:14:05.870827 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9fa4a97d-682a-40eb-93e0-5f5167ddb0a0-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "9fa4a97d-682a-40eb-93e0-5f5167ddb0a0" (UID: "9fa4a97d-682a-40eb-93e0-5f5167ddb0a0"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 10 19:14:05 crc kubenswrapper[4861]: I0310 19:14:05.880083 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9fa4a97d-682a-40eb-93e0-5f5167ddb0a0-config-data" (OuterVolumeSpecName: "config-data") pod "9fa4a97d-682a-40eb-93e0-5f5167ddb0a0" (UID: "9fa4a97d-682a-40eb-93e0-5f5167ddb0a0"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 10 19:14:05 crc kubenswrapper[4861]: I0310 19:14:05.881940 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage08-crc" (OuterVolumeSpecName: "persistence") pod "9fa4a97d-682a-40eb-93e0-5f5167ddb0a0" (UID: "9fa4a97d-682a-40eb-93e0-5f5167ddb0a0"). InnerVolumeSpecName "local-storage08-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Mar 10 19:14:05 crc kubenswrapper[4861]: I0310 19:14:05.914280 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9fa4a97d-682a-40eb-93e0-5f5167ddb0a0-server-conf" (OuterVolumeSpecName: "server-conf") pod "9fa4a97d-682a-40eb-93e0-5f5167ddb0a0" (UID: "9fa4a97d-682a-40eb-93e0-5f5167ddb0a0"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 10 19:14:05 crc kubenswrapper[4861]: I0310 19:14:05.945183 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9fa4a97d-682a-40eb-93e0-5f5167ddb0a0-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "9fa4a97d-682a-40eb-93e0-5f5167ddb0a0" (UID: "9fa4a97d-682a-40eb-93e0-5f5167ddb0a0"). InnerVolumeSpecName "rabbitmq-confd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 10 19:14:05 crc kubenswrapper[4861]: I0310 19:14:05.965230 4861 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/9fa4a97d-682a-40eb-93e0-5f5167ddb0a0-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Mar 10 19:14:05 crc kubenswrapper[4861]: I0310 19:14:05.965268 4861 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/9fa4a97d-682a-40eb-93e0-5f5167ddb0a0-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Mar 10 19:14:05 crc kubenswrapper[4861]: I0310 19:14:05.965310 4861 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" " Mar 10 19:14:05 crc kubenswrapper[4861]: I0310 19:14:05.965325 4861 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/9fa4a97d-682a-40eb-93e0-5f5167ddb0a0-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Mar 10 19:14:05 crc kubenswrapper[4861]: I0310 19:14:05.965337 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7q5nk\" (UniqueName: \"kubernetes.io/projected/9fa4a97d-682a-40eb-93e0-5f5167ddb0a0-kube-api-access-7q5nk\") on node \"crc\" DevicePath \"\"" Mar 10 19:14:05 crc kubenswrapper[4861]: I0310 19:14:05.965348 4861 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/9fa4a97d-682a-40eb-93e0-5f5167ddb0a0-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Mar 10 19:14:05 crc kubenswrapper[4861]: I0310 19:14:05.965359 4861 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/9fa4a97d-682a-40eb-93e0-5f5167ddb0a0-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Mar 10 19:14:05 crc kubenswrapper[4861]: 
I0310 19:14:05.965369 4861 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/9fa4a97d-682a-40eb-93e0-5f5167ddb0a0-server-conf\") on node \"crc\" DevicePath \"\"" Mar 10 19:14:05 crc kubenswrapper[4861]: I0310 19:14:05.965380 4861 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/9fa4a97d-682a-40eb-93e0-5f5167ddb0a0-pod-info\") on node \"crc\" DevicePath \"\"" Mar 10 19:14:05 crc kubenswrapper[4861]: I0310 19:14:05.965390 4861 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/9fa4a97d-682a-40eb-93e0-5f5167ddb0a0-plugins-conf\") on node \"crc\" DevicePath \"\"" Mar 10 19:14:05 crc kubenswrapper[4861]: I0310 19:14:05.965401 4861 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9fa4a97d-682a-40eb-93e0-5f5167ddb0a0-config-data\") on node \"crc\" DevicePath \"\"" Mar 10 19:14:05 crc kubenswrapper[4861]: I0310 19:14:05.981337 4861 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage08-crc" (UniqueName: "kubernetes.io/local-volume/local-storage08-crc") on node "crc" Mar 10 19:14:05 crc kubenswrapper[4861]: I0310 19:14:05.984343 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-zvlgw_726cec08-5661-4b62-8a44-028b015119e4/ovn-controller/0.log" Mar 10 19:14:05 crc kubenswrapper[4861]: I0310 19:14:05.984420 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-zvlgw" Mar 10 19:14:06 crc kubenswrapper[4861]: I0310 19:14:06.062575 4861 generic.go:334] "Generic (PLEG): container finished" podID="9fa4a97d-682a-40eb-93e0-5f5167ddb0a0" containerID="66a047f0fd7c331f85d8ce9716247a29e27e73b178644e553271a5aa131cdbac" exitCode=0 Mar 10 19:14:06 crc kubenswrapper[4861]: I0310 19:14:06.062614 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"9fa4a97d-682a-40eb-93e0-5f5167ddb0a0","Type":"ContainerDied","Data":"66a047f0fd7c331f85d8ce9716247a29e27e73b178644e553271a5aa131cdbac"} Mar 10 19:14:06 crc kubenswrapper[4861]: I0310 19:14:06.062673 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"9fa4a97d-682a-40eb-93e0-5f5167ddb0a0","Type":"ContainerDied","Data":"549b58ce7d4f3883bb5009154ba5810253e897aaa8b7405cbf0c188e6b32d565"} Mar 10 19:14:06 crc kubenswrapper[4861]: I0310 19:14:06.062692 4861 scope.go:117] "RemoveContainer" containerID="66a047f0fd7c331f85d8ce9716247a29e27e73b178644e553271a5aa131cdbac" Mar 10 19:14:06 crc kubenswrapper[4861]: I0310 19:14:06.062700 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Mar 10 19:14:06 crc kubenswrapper[4861]: I0310 19:14:06.065518 4861 generic.go:334] "Generic (PLEG): container finished" podID="fb082653-4ce1-4696-b6fb-e6af12109812" containerID="f76d7260b09a019f1b1946fc0f28f65bac1ccd091168cb03b683df54b231ae29" exitCode=0 Mar 10 19:14:06 crc kubenswrapper[4861]: I0310 19:14:06.065550 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-75ff4ff987-k4jks" Mar 10 19:14:06 crc kubenswrapper[4861]: I0310 19:14:06.065581 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-75ff4ff987-k4jks" event={"ID":"fb082653-4ce1-4696-b6fb-e6af12109812","Type":"ContainerDied","Data":"f76d7260b09a019f1b1946fc0f28f65bac1ccd091168cb03b683df54b231ae29"} Mar 10 19:14:06 crc kubenswrapper[4861]: I0310 19:14:06.065598 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-75ff4ff987-k4jks" event={"ID":"fb082653-4ce1-4696-b6fb-e6af12109812","Type":"ContainerDied","Data":"5d3daf0ab59775482559c39acb798c5253548a77d6f5e862ff7ae676a64f9ff5"} Mar 10 19:14:06 crc kubenswrapper[4861]: I0310 19:14:06.066925 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/726cec08-5661-4b62-8a44-028b015119e4-var-log-ovn\") pod \"726cec08-5661-4b62-8a44-028b015119e4\" (UID: \"726cec08-5661-4b62-8a44-028b015119e4\") " Mar 10 19:14:06 crc kubenswrapper[4861]: I0310 19:14:06.067009 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/726cec08-5661-4b62-8a44-028b015119e4-scripts\") pod \"726cec08-5661-4b62-8a44-028b015119e4\" (UID: \"726cec08-5661-4b62-8a44-028b015119e4\") " Mar 10 19:14:06 crc kubenswrapper[4861]: I0310 19:14:06.067042 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/726cec08-5661-4b62-8a44-028b015119e4-ovn-controller-tls-certs\") pod \"726cec08-5661-4b62-8a44-028b015119e4\" (UID: \"726cec08-5661-4b62-8a44-028b015119e4\") " Mar 10 19:14:06 crc kubenswrapper[4861]: I0310 19:14:06.067078 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/726cec08-5661-4b62-8a44-028b015119e4-var-run-ovn\") 
pod \"726cec08-5661-4b62-8a44-028b015119e4\" (UID: \"726cec08-5661-4b62-8a44-028b015119e4\") " Mar 10 19:14:06 crc kubenswrapper[4861]: I0310 19:14:06.067115 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/726cec08-5661-4b62-8a44-028b015119e4-var-run\") pod \"726cec08-5661-4b62-8a44-028b015119e4\" (UID: \"726cec08-5661-4b62-8a44-028b015119e4\") " Mar 10 19:14:06 crc kubenswrapper[4861]: I0310 19:14:06.067177 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/726cec08-5661-4b62-8a44-028b015119e4-combined-ca-bundle\") pod \"726cec08-5661-4b62-8a44-028b015119e4\" (UID: \"726cec08-5661-4b62-8a44-028b015119e4\") " Mar 10 19:14:06 crc kubenswrapper[4861]: I0310 19:14:06.067215 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6kwpj\" (UniqueName: \"kubernetes.io/projected/726cec08-5661-4b62-8a44-028b015119e4-kube-api-access-6kwpj\") pod \"726cec08-5661-4b62-8a44-028b015119e4\" (UID: \"726cec08-5661-4b62-8a44-028b015119e4\") " Mar 10 19:14:06 crc kubenswrapper[4861]: I0310 19:14:06.067231 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/726cec08-5661-4b62-8a44-028b015119e4-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "726cec08-5661-4b62-8a44-028b015119e4" (UID: "726cec08-5661-4b62-8a44-028b015119e4"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 10 19:14:06 crc kubenswrapper[4861]: I0310 19:14:06.067280 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/726cec08-5661-4b62-8a44-028b015119e4-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "726cec08-5661-4b62-8a44-028b015119e4" (UID: "726cec08-5661-4b62-8a44-028b015119e4"). InnerVolumeSpecName "var-log-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 10 19:14:06 crc kubenswrapper[4861]: I0310 19:14:06.067326 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/726cec08-5661-4b62-8a44-028b015119e4-var-run" (OuterVolumeSpecName: "var-run") pod "726cec08-5661-4b62-8a44-028b015119e4" (UID: "726cec08-5661-4b62-8a44-028b015119e4"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 10 19:14:06 crc kubenswrapper[4861]: I0310 19:14:06.067895 4861 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/726cec08-5661-4b62-8a44-028b015119e4-var-run-ovn\") on node \"crc\" DevicePath \"\"" Mar 10 19:14:06 crc kubenswrapper[4861]: I0310 19:14:06.067909 4861 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/726cec08-5661-4b62-8a44-028b015119e4-var-run\") on node \"crc\" DevicePath \"\"" Mar 10 19:14:06 crc kubenswrapper[4861]: I0310 19:14:06.067918 4861 reconciler_common.go:293] "Volume detached for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" DevicePath \"\"" Mar 10 19:14:06 crc kubenswrapper[4861]: I0310 19:14:06.067927 4861 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/726cec08-5661-4b62-8a44-028b015119e4-var-log-ovn\") on node \"crc\" DevicePath \"\"" Mar 10 19:14:06 crc kubenswrapper[4861]: I0310 19:14:06.068197 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/726cec08-5661-4b62-8a44-028b015119e4-scripts" (OuterVolumeSpecName: "scripts") pod "726cec08-5661-4b62-8a44-028b015119e4" (UID: "726cec08-5661-4b62-8a44-028b015119e4"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 10 19:14:06 crc kubenswrapper[4861]: I0310 19:14:06.070877 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"0ba95f55-3cea-4f0b-8f09-c6b4027789f8","Type":"ContainerDied","Data":"efa6be39b016a0cada00444a802197641418fc51b79619b8bd800a40a8fb12ec"} Mar 10 19:14:06 crc kubenswrapper[4861]: I0310 19:14:06.070971 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Mar 10 19:14:06 crc kubenswrapper[4861]: I0310 19:14:06.071486 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/726cec08-5661-4b62-8a44-028b015119e4-kube-api-access-6kwpj" (OuterVolumeSpecName: "kube-api-access-6kwpj") pod "726cec08-5661-4b62-8a44-028b015119e4" (UID: "726cec08-5661-4b62-8a44-028b015119e4"). InnerVolumeSpecName "kube-api-access-6kwpj". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 10 19:14:06 crc kubenswrapper[4861]: I0310 19:14:06.073200 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-zvlgw_726cec08-5661-4b62-8a44-028b015119e4/ovn-controller/0.log" Mar 10 19:14:06 crc kubenswrapper[4861]: I0310 19:14:06.073235 4861 generic.go:334] "Generic (PLEG): container finished" podID="726cec08-5661-4b62-8a44-028b015119e4" containerID="0a5e5aa6cf14c251a6b1ce301e42515d655ffa02b6bf032ecb113664146e1930" exitCode=139 Mar 10 19:14:06 crc kubenswrapper[4861]: I0310 19:14:06.073383 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-zvlgw" Mar 10 19:14:06 crc kubenswrapper[4861]: I0310 19:14:06.073575 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-zvlgw" event={"ID":"726cec08-5661-4b62-8a44-028b015119e4","Type":"ContainerDied","Data":"0a5e5aa6cf14c251a6b1ce301e42515d655ffa02b6bf032ecb113664146e1930"} Mar 10 19:14:06 crc kubenswrapper[4861]: I0310 19:14:06.073602 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-zvlgw" event={"ID":"726cec08-5661-4b62-8a44-028b015119e4","Type":"ContainerDied","Data":"8eaff57e28da9ce732afb2c632b2cda4683b7db5fc5eb375d5d59e8e4a189f40"} Mar 10 19:14:06 crc kubenswrapper[4861]: I0310 19:14:06.086373 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/726cec08-5661-4b62-8a44-028b015119e4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "726cec08-5661-4b62-8a44-028b015119e4" (UID: "726cec08-5661-4b62-8a44-028b015119e4"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 10 19:14:06 crc kubenswrapper[4861]: I0310 19:14:06.101151 4861 scope.go:117] "RemoveContainer" containerID="95f08c47746d5695816808d7eca3f02837aedc052d446257d2df3f9e5efa4899" Mar 10 19:14:06 crc kubenswrapper[4861]: I0310 19:14:06.142681 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/726cec08-5661-4b62-8a44-028b015119e4-ovn-controller-tls-certs" (OuterVolumeSpecName: "ovn-controller-tls-certs") pod "726cec08-5661-4b62-8a44-028b015119e4" (UID: "726cec08-5661-4b62-8a44-028b015119e4"). InnerVolumeSpecName "ovn-controller-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 10 19:14:06 crc kubenswrapper[4861]: I0310 19:14:06.174278 4861 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/726cec08-5661-4b62-8a44-028b015119e4-scripts\") on node \"crc\" DevicePath \"\"" Mar 10 19:14:06 crc kubenswrapper[4861]: I0310 19:14:06.174326 4861 reconciler_common.go:293] "Volume detached for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/726cec08-5661-4b62-8a44-028b015119e4-ovn-controller-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 10 19:14:06 crc kubenswrapper[4861]: I0310 19:14:06.174345 4861 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/726cec08-5661-4b62-8a44-028b015119e4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 10 19:14:06 crc kubenswrapper[4861]: I0310 19:14:06.174365 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6kwpj\" (UniqueName: \"kubernetes.io/projected/726cec08-5661-4b62-8a44-028b015119e4-kube-api-access-6kwpj\") on node \"crc\" DevicePath \"\"" Mar 10 19:14:06 crc kubenswrapper[4861]: I0310 19:14:06.176818 4861 scope.go:117] "RemoveContainer" containerID="66a047f0fd7c331f85d8ce9716247a29e27e73b178644e553271a5aa131cdbac" Mar 10 19:14:06 crc kubenswrapper[4861]: E0310 19:14:06.180586 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"66a047f0fd7c331f85d8ce9716247a29e27e73b178644e553271a5aa131cdbac\": container with ID starting with 66a047f0fd7c331f85d8ce9716247a29e27e73b178644e553271a5aa131cdbac not found: ID does not exist" containerID="66a047f0fd7c331f85d8ce9716247a29e27e73b178644e553271a5aa131cdbac" Mar 10 19:14:06 crc kubenswrapper[4861]: I0310 19:14:06.180646 4861 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"66a047f0fd7c331f85d8ce9716247a29e27e73b178644e553271a5aa131cdbac"} err="failed to get container status \"66a047f0fd7c331f85d8ce9716247a29e27e73b178644e553271a5aa131cdbac\": rpc error: code = NotFound desc = could not find container \"66a047f0fd7c331f85d8ce9716247a29e27e73b178644e553271a5aa131cdbac\": container with ID starting with 66a047f0fd7c331f85d8ce9716247a29e27e73b178644e553271a5aa131cdbac not found: ID does not exist" Mar 10 19:14:06 crc kubenswrapper[4861]: I0310 19:14:06.180682 4861 scope.go:117] "RemoveContainer" containerID="95f08c47746d5695816808d7eca3f02837aedc052d446257d2df3f9e5efa4899" Mar 10 19:14:06 crc kubenswrapper[4861]: E0310 19:14:06.181183 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"95f08c47746d5695816808d7eca3f02837aedc052d446257d2df3f9e5efa4899\": container with ID starting with 95f08c47746d5695816808d7eca3f02837aedc052d446257d2df3f9e5efa4899 not found: ID does not exist" containerID="95f08c47746d5695816808d7eca3f02837aedc052d446257d2df3f9e5efa4899" Mar 10 19:14:06 crc kubenswrapper[4861]: I0310 19:14:06.181229 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"95f08c47746d5695816808d7eca3f02837aedc052d446257d2df3f9e5efa4899"} err="failed to get container status \"95f08c47746d5695816808d7eca3f02837aedc052d446257d2df3f9e5efa4899\": rpc error: code = NotFound desc = could not find container \"95f08c47746d5695816808d7eca3f02837aedc052d446257d2df3f9e5efa4899\": container with ID starting with 95f08c47746d5695816808d7eca3f02837aedc052d446257d2df3f9e5efa4899 not found: ID does not exist" Mar 10 19:14:06 crc kubenswrapper[4861]: I0310 19:14:06.181256 4861 scope.go:117] "RemoveContainer" containerID="f76d7260b09a019f1b1946fc0f28f65bac1ccd091168cb03b683df54b231ae29" Mar 10 19:14:06 crc kubenswrapper[4861]: I0310 19:14:06.204469 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/keystone-75ff4ff987-k4jks"] Mar 10 19:14:06 crc kubenswrapper[4861]: I0310 19:14:06.232277 4861 scope.go:117] "RemoveContainer" containerID="f76d7260b09a019f1b1946fc0f28f65bac1ccd091168cb03b683df54b231ae29" Mar 10 19:14:06 crc kubenswrapper[4861]: E0310 19:14:06.235101 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f76d7260b09a019f1b1946fc0f28f65bac1ccd091168cb03b683df54b231ae29\": container with ID starting with f76d7260b09a019f1b1946fc0f28f65bac1ccd091168cb03b683df54b231ae29 not found: ID does not exist" containerID="f76d7260b09a019f1b1946fc0f28f65bac1ccd091168cb03b683df54b231ae29" Mar 10 19:14:06 crc kubenswrapper[4861]: I0310 19:14:06.235148 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f76d7260b09a019f1b1946fc0f28f65bac1ccd091168cb03b683df54b231ae29"} err="failed to get container status \"f76d7260b09a019f1b1946fc0f28f65bac1ccd091168cb03b683df54b231ae29\": rpc error: code = NotFound desc = could not find container \"f76d7260b09a019f1b1946fc0f28f65bac1ccd091168cb03b683df54b231ae29\": container with ID starting with f76d7260b09a019f1b1946fc0f28f65bac1ccd091168cb03b683df54b231ae29 not found: ID does not exist" Mar 10 19:14:06 crc kubenswrapper[4861]: I0310 19:14:06.235171 4861 scope.go:117] "RemoveContainer" containerID="32235165193d6a0485898ee070ca8f7382c523ae5f2c706e831442bec3b9e031" Mar 10 19:14:06 crc kubenswrapper[4861]: I0310 19:14:06.238251 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-75ff4ff987-k4jks"] Mar 10 19:14:06 crc kubenswrapper[4861]: I0310 19:14:06.250110 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Mar 10 19:14:06 crc kubenswrapper[4861]: I0310 19:14:06.260630 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Mar 10 19:14:06 crc kubenswrapper[4861]: I0310 
19:14:06.270971 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Mar 10 19:14:06 crc kubenswrapper[4861]: I0310 19:14:06.275421 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-0"] Mar 10 19:14:06 crc kubenswrapper[4861]: I0310 19:14:06.275856 4861 scope.go:117] "RemoveContainer" containerID="6d99d0616ecc46c18b64d18128318acd7044610e3f4e749ae04d9ed4478404a9" Mar 10 19:14:06 crc kubenswrapper[4861]: I0310 19:14:06.369335 4861 scope.go:117] "RemoveContainer" containerID="0a5e5aa6cf14c251a6b1ce301e42515d655ffa02b6bf032ecb113664146e1930" Mar 10 19:14:06 crc kubenswrapper[4861]: I0310 19:14:06.407613 4861 scope.go:117] "RemoveContainer" containerID="0a5e5aa6cf14c251a6b1ce301e42515d655ffa02b6bf032ecb113664146e1930" Mar 10 19:14:06 crc kubenswrapper[4861]: I0310 19:14:06.407956 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29552834-dlwsb" Mar 10 19:14:06 crc kubenswrapper[4861]: I0310 19:14:06.409381 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-zvlgw"] Mar 10 19:14:06 crc kubenswrapper[4861]: I0310 19:14:06.413365 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-zvlgw"] Mar 10 19:14:06 crc kubenswrapper[4861]: E0310 19:14:06.414386 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0a5e5aa6cf14c251a6b1ce301e42515d655ffa02b6bf032ecb113664146e1930\": container with ID starting with 0a5e5aa6cf14c251a6b1ce301e42515d655ffa02b6bf032ecb113664146e1930 not found: ID does not exist" containerID="0a5e5aa6cf14c251a6b1ce301e42515d655ffa02b6bf032ecb113664146e1930" Mar 10 19:14:06 crc kubenswrapper[4861]: I0310 19:14:06.414416 4861 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"0a5e5aa6cf14c251a6b1ce301e42515d655ffa02b6bf032ecb113664146e1930"} err="failed to get container status \"0a5e5aa6cf14c251a6b1ce301e42515d655ffa02b6bf032ecb113664146e1930\": rpc error: code = NotFound desc = could not find container \"0a5e5aa6cf14c251a6b1ce301e42515d655ffa02b6bf032ecb113664146e1930\": container with ID starting with 0a5e5aa6cf14c251a6b1ce301e42515d655ffa02b6bf032ecb113664146e1930 not found: ID does not exist" Mar 10 19:14:06 crc kubenswrapper[4861]: I0310 19:14:06.477178 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d99vp\" (UniqueName: \"kubernetes.io/projected/91e5ef26-1440-4ffb-84d9-2bc5e0bed45c-kube-api-access-d99vp\") pod \"91e5ef26-1440-4ffb-84d9-2bc5e0bed45c\" (UID: \"91e5ef26-1440-4ffb-84d9-2bc5e0bed45c\") " Mar 10 19:14:06 crc kubenswrapper[4861]: I0310 19:14:06.484357 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/91e5ef26-1440-4ffb-84d9-2bc5e0bed45c-kube-api-access-d99vp" (OuterVolumeSpecName: "kube-api-access-d99vp") pod "91e5ef26-1440-4ffb-84d9-2bc5e0bed45c" (UID: "91e5ef26-1440-4ffb-84d9-2bc5e0bed45c"). InnerVolumeSpecName "kube-api-access-d99vp". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 10 19:14:06 crc kubenswrapper[4861]: I0310 19:14:06.563538 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Mar 10 19:14:06 crc kubenswrapper[4861]: I0310 19:14:06.578635 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d99vp\" (UniqueName: \"kubernetes.io/projected/91e5ef26-1440-4ffb-84d9-2bc5e0bed45c-kube-api-access-d99vp\") on node \"crc\" DevicePath \"\"" Mar 10 19:14:06 crc kubenswrapper[4861]: E0310 19:14:06.583682 4861 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of ab5fdb8f04b86479e7de65efd45e1b3abeeeef9346b04a04ab27c54b7f1b81fc is running failed: container process not found" containerID="ab5fdb8f04b86479e7de65efd45e1b3abeeeef9346b04a04ab27c54b7f1b81fc" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Mar 10 19:14:06 crc kubenswrapper[4861]: E0310 19:14:06.584388 4861 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of ab5fdb8f04b86479e7de65efd45e1b3abeeeef9346b04a04ab27c54b7f1b81fc is running failed: container process not found" containerID="ab5fdb8f04b86479e7de65efd45e1b3abeeeef9346b04a04ab27c54b7f1b81fc" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Mar 10 19:14:06 crc kubenswrapper[4861]: E0310 19:14:06.584734 4861 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="3b23a95a1a463eb82f2475fdcea1a6070b6a64b1c3a03f9d3000d377b6e7bebb" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Mar 10 19:14:06 crc kubenswrapper[4861]: E0310 19:14:06.584795 4861 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of ab5fdb8f04b86479e7de65efd45e1b3abeeeef9346b04a04ab27c54b7f1b81fc is running failed: 
container process not found" containerID="ab5fdb8f04b86479e7de65efd45e1b3abeeeef9346b04a04ab27c54b7f1b81fc" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Mar 10 19:14:06 crc kubenswrapper[4861]: E0310 19:14:06.584828 4861 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of ab5fdb8f04b86479e7de65efd45e1b3abeeeef9346b04a04ab27c54b7f1b81fc is running failed: container process not found" probeType="Readiness" pod="openstack/ovn-controller-ovs-cw7x8" podUID="2f72ec66-0d64-4a5f-b1c6-17d62a735065" containerName="ovsdb-server" Mar 10 19:14:06 crc kubenswrapper[4861]: E0310 19:14:06.586904 4861 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="3b23a95a1a463eb82f2475fdcea1a6070b6a64b1c3a03f9d3000d377b6e7bebb" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Mar 10 19:14:06 crc kubenswrapper[4861]: E0310 19:14:06.588115 4861 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="3b23a95a1a463eb82f2475fdcea1a6070b6a64b1c3a03f9d3000d377b6e7bebb" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Mar 10 19:14:06 crc kubenswrapper[4861]: E0310 19:14:06.588164 4861 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-controller-ovs-cw7x8" podUID="2f72ec66-0d64-4a5f-b1c6-17d62a735065" containerName="ovs-vswitchd" Mar 10 19:14:06 crc kubenswrapper[4861]: I0310 19:14:06.679642 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/36bd8cd3-7b2c-45fb-b171-aa2884df4e98-combined-ca-bundle\") pod \"36bd8cd3-7b2c-45fb-b171-aa2884df4e98\" (UID: \"36bd8cd3-7b2c-45fb-b171-aa2884df4e98\") " Mar 10 19:14:06 crc kubenswrapper[4861]: I0310 19:14:06.679756 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/36bd8cd3-7b2c-45fb-b171-aa2884df4e98-config-data\") pod \"36bd8cd3-7b2c-45fb-b171-aa2884df4e98\" (UID: \"36bd8cd3-7b2c-45fb-b171-aa2884df4e98\") " Mar 10 19:14:06 crc kubenswrapper[4861]: I0310 19:14:06.679926 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qsk4s\" (UniqueName: \"kubernetes.io/projected/36bd8cd3-7b2c-45fb-b171-aa2884df4e98-kube-api-access-qsk4s\") pod \"36bd8cd3-7b2c-45fb-b171-aa2884df4e98\" (UID: \"36bd8cd3-7b2c-45fb-b171-aa2884df4e98\") " Mar 10 19:14:06 crc kubenswrapper[4861]: I0310 19:14:06.684219 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/36bd8cd3-7b2c-45fb-b171-aa2884df4e98-kube-api-access-qsk4s" (OuterVolumeSpecName: "kube-api-access-qsk4s") pod "36bd8cd3-7b2c-45fb-b171-aa2884df4e98" (UID: "36bd8cd3-7b2c-45fb-b171-aa2884df4e98"). InnerVolumeSpecName "kube-api-access-qsk4s". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 10 19:14:06 crc kubenswrapper[4861]: I0310 19:14:06.699475 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/36bd8cd3-7b2c-45fb-b171-aa2884df4e98-config-data" (OuterVolumeSpecName: "config-data") pod "36bd8cd3-7b2c-45fb-b171-aa2884df4e98" (UID: "36bd8cd3-7b2c-45fb-b171-aa2884df4e98"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 10 19:14:06 crc kubenswrapper[4861]: I0310 19:14:06.712946 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/36bd8cd3-7b2c-45fb-b171-aa2884df4e98-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "36bd8cd3-7b2c-45fb-b171-aa2884df4e98" (UID: "36bd8cd3-7b2c-45fb-b171-aa2884df4e98"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 10 19:14:06 crc kubenswrapper[4861]: I0310 19:14:06.781573 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qsk4s\" (UniqueName: \"kubernetes.io/projected/36bd8cd3-7b2c-45fb-b171-aa2884df4e98-kube-api-access-qsk4s\") on node \"crc\" DevicePath \"\"" Mar 10 19:14:06 crc kubenswrapper[4861]: I0310 19:14:06.781614 4861 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/36bd8cd3-7b2c-45fb-b171-aa2884df4e98-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 10 19:14:06 crc kubenswrapper[4861]: I0310 19:14:06.781628 4861 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/36bd8cd3-7b2c-45fb-b171-aa2884df4e98-config-data\") on node \"crc\" DevicePath \"\"" Mar 10 19:14:06 crc kubenswrapper[4861]: I0310 19:14:06.971240 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0ba95f55-3cea-4f0b-8f09-c6b4027789f8" path="/var/lib/kubelet/pods/0ba95f55-3cea-4f0b-8f09-c6b4027789f8/volumes" Mar 10 19:14:06 crc kubenswrapper[4861]: I0310 19:14:06.971999 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="726cec08-5661-4b62-8a44-028b015119e4" path="/var/lib/kubelet/pods/726cec08-5661-4b62-8a44-028b015119e4/volumes" Mar 10 19:14:06 crc kubenswrapper[4861]: I0310 19:14:06.973254 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9fa4a97d-682a-40eb-93e0-5f5167ddb0a0" 
path="/var/lib/kubelet/pods/9fa4a97d-682a-40eb-93e0-5f5167ddb0a0/volumes" Mar 10 19:14:06 crc kubenswrapper[4861]: I0310 19:14:06.973948 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0b9c71c-6fe7-4eb9-9f95-87fce0f70caa" path="/var/lib/kubelet/pods/a0b9c71c-6fe7-4eb9-9f95-87fce0f70caa/volumes" Mar 10 19:14:06 crc kubenswrapper[4861]: I0310 19:14:06.974499 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cc36f34f-6f91-4076-bc3e-71e2fc0e797e" path="/var/lib/kubelet/pods/cc36f34f-6f91-4076-bc3e-71e2fc0e797e/volumes" Mar 10 19:14:06 crc kubenswrapper[4861]: I0310 19:14:06.974889 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fb082653-4ce1-4696-b6fb-e6af12109812" path="/var/lib/kubelet/pods/fb082653-4ce1-4696-b6fb-e6af12109812/volumes" Mar 10 19:14:07 crc kubenswrapper[4861]: I0310 19:14:07.088600 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29552828-8qbl9"] Mar 10 19:14:07 crc kubenswrapper[4861]: I0310 19:14:07.090512 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29552834-dlwsb" Mar 10 19:14:07 crc kubenswrapper[4861]: I0310 19:14:07.090868 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29552834-dlwsb" event={"ID":"91e5ef26-1440-4ffb-84d9-2bc5e0bed45c","Type":"ContainerDied","Data":"c755581050b6ae9a6f0cd7fb58f35e2adb255f4bd2231c7f0edfbebc13880511"} Mar 10 19:14:07 crc kubenswrapper[4861]: I0310 19:14:07.090928 4861 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c755581050b6ae9a6f0cd7fb58f35e2adb255f4bd2231c7f0edfbebc13880511" Mar 10 19:14:07 crc kubenswrapper[4861]: I0310 19:14:07.092298 4861 generic.go:334] "Generic (PLEG): container finished" podID="36bd8cd3-7b2c-45fb-b171-aa2884df4e98" containerID="a6e23144072f6812cabc16305a9cf3ff58abd4877ddf98f7f123b16f5863c1e7" exitCode=0 Mar 10 19:14:07 crc kubenswrapper[4861]: I0310 19:14:07.092395 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"36bd8cd3-7b2c-45fb-b171-aa2884df4e98","Type":"ContainerDied","Data":"a6e23144072f6812cabc16305a9cf3ff58abd4877ddf98f7f123b16f5863c1e7"} Mar 10 19:14:07 crc kubenswrapper[4861]: I0310 19:14:07.092465 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"36bd8cd3-7b2c-45fb-b171-aa2884df4e98","Type":"ContainerDied","Data":"f9182397e118c17ba61bc1aedc5bdb71216d24446fc514c6ad177ac1a336b884"} Mar 10 19:14:07 crc kubenswrapper[4861]: I0310 19:14:07.092533 4861 scope.go:117] "RemoveContainer" containerID="a6e23144072f6812cabc16305a9cf3ff58abd4877ddf98f7f123b16f5863c1e7" Mar 10 19:14:07 crc kubenswrapper[4861]: I0310 19:14:07.092645 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Mar 10 19:14:07 crc kubenswrapper[4861]: I0310 19:14:07.099247 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29552828-8qbl9"] Mar 10 19:14:07 crc kubenswrapper[4861]: I0310 19:14:07.126952 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Mar 10 19:14:07 crc kubenswrapper[4861]: I0310 19:14:07.130622 4861 scope.go:117] "RemoveContainer" containerID="a6e23144072f6812cabc16305a9cf3ff58abd4877ddf98f7f123b16f5863c1e7" Mar 10 19:14:07 crc kubenswrapper[4861]: E0310 19:14:07.131380 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a6e23144072f6812cabc16305a9cf3ff58abd4877ddf98f7f123b16f5863c1e7\": container with ID starting with a6e23144072f6812cabc16305a9cf3ff58abd4877ddf98f7f123b16f5863c1e7 not found: ID does not exist" containerID="a6e23144072f6812cabc16305a9cf3ff58abd4877ddf98f7f123b16f5863c1e7" Mar 10 19:14:07 crc kubenswrapper[4861]: I0310 19:14:07.131429 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a6e23144072f6812cabc16305a9cf3ff58abd4877ddf98f7f123b16f5863c1e7"} err="failed to get container status \"a6e23144072f6812cabc16305a9cf3ff58abd4877ddf98f7f123b16f5863c1e7\": rpc error: code = NotFound desc = could not find container \"a6e23144072f6812cabc16305a9cf3ff58abd4877ddf98f7f123b16f5863c1e7\": container with ID starting with a6e23144072f6812cabc16305a9cf3ff58abd4877ddf98f7f123b16f5863c1e7 not found: ID does not exist" Mar 10 19:14:07 crc kubenswrapper[4861]: I0310 19:14:07.134403 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Mar 10 19:14:08 crc kubenswrapper[4861]: I0310 19:14:08.972804 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="36bd8cd3-7b2c-45fb-b171-aa2884df4e98" 
path="/var/lib/kubelet/pods/36bd8cd3-7b2c-45fb-b171-aa2884df4e98/volumes" Mar 10 19:14:08 crc kubenswrapper[4861]: I0310 19:14:08.974208 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4dfe3138-3c9e-401e-8b34-7c0990829691" path="/var/lib/kubelet/pods/4dfe3138-3c9e-401e-8b34-7c0990829691/volumes" Mar 10 19:14:11 crc kubenswrapper[4861]: E0310 19:14:11.583331 4861 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of ab5fdb8f04b86479e7de65efd45e1b3abeeeef9346b04a04ab27c54b7f1b81fc is running failed: container process not found" containerID="ab5fdb8f04b86479e7de65efd45e1b3abeeeef9346b04a04ab27c54b7f1b81fc" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Mar 10 19:14:11 crc kubenswrapper[4861]: E0310 19:14:11.583990 4861 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of ab5fdb8f04b86479e7de65efd45e1b3abeeeef9346b04a04ab27c54b7f1b81fc is running failed: container process not found" containerID="ab5fdb8f04b86479e7de65efd45e1b3abeeeef9346b04a04ab27c54b7f1b81fc" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Mar 10 19:14:11 crc kubenswrapper[4861]: E0310 19:14:11.584494 4861 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of ab5fdb8f04b86479e7de65efd45e1b3abeeeef9346b04a04ab27c54b7f1b81fc is running failed: container process not found" containerID="ab5fdb8f04b86479e7de65efd45e1b3abeeeef9346b04a04ab27c54b7f1b81fc" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Mar 10 19:14:11 crc kubenswrapper[4861]: E0310 19:14:11.584544 4861 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 
ab5fdb8f04b86479e7de65efd45e1b3abeeeef9346b04a04ab27c54b7f1b81fc is running failed: container process not found" probeType="Readiness" pod="openstack/ovn-controller-ovs-cw7x8" podUID="2f72ec66-0d64-4a5f-b1c6-17d62a735065" containerName="ovsdb-server" Mar 10 19:14:11 crc kubenswrapper[4861]: E0310 19:14:11.585114 4861 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="3b23a95a1a463eb82f2475fdcea1a6070b6a64b1c3a03f9d3000d377b6e7bebb" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Mar 10 19:14:11 crc kubenswrapper[4861]: E0310 19:14:11.586484 4861 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="3b23a95a1a463eb82f2475fdcea1a6070b6a64b1c3a03f9d3000d377b6e7bebb" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Mar 10 19:14:11 crc kubenswrapper[4861]: E0310 19:14:11.587834 4861 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="3b23a95a1a463eb82f2475fdcea1a6070b6a64b1c3a03f9d3000d377b6e7bebb" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Mar 10 19:14:11 crc kubenswrapper[4861]: E0310 19:14:11.587899 4861 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-controller-ovs-cw7x8" podUID="2f72ec66-0d64-4a5f-b1c6-17d62a735065" containerName="ovs-vswitchd" Mar 10 19:14:16 crc kubenswrapper[4861]: I0310 19:14:16.232079 4861 generic.go:334] "Generic (PLEG): container finished" 
podID="692615cd-3dd2-4970-9d35-63073e2403ba" containerID="c33900d456694805e616c7853b9af4d757f2fbf528a6cc4897258f4c25f58c83" exitCode=0 Mar 10 19:14:16 crc kubenswrapper[4861]: I0310 19:14:16.232219 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-69987f456f-bjbjk" event={"ID":"692615cd-3dd2-4970-9d35-63073e2403ba","Type":"ContainerDied","Data":"c33900d456694805e616c7853b9af4d757f2fbf528a6cc4897258f4c25f58c83"} Mar 10 19:14:16 crc kubenswrapper[4861]: I0310 19:14:16.499509 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-69987f456f-bjbjk" Mar 10 19:14:16 crc kubenswrapper[4861]: E0310 19:14:16.584286 4861 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of ab5fdb8f04b86479e7de65efd45e1b3abeeeef9346b04a04ab27c54b7f1b81fc is running failed: container process not found" containerID="ab5fdb8f04b86479e7de65efd45e1b3abeeeef9346b04a04ab27c54b7f1b81fc" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Mar 10 19:14:16 crc kubenswrapper[4861]: E0310 19:14:16.585570 4861 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of ab5fdb8f04b86479e7de65efd45e1b3abeeeef9346b04a04ab27c54b7f1b81fc is running failed: container process not found" containerID="ab5fdb8f04b86479e7de65efd45e1b3abeeeef9346b04a04ab27c54b7f1b81fc" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Mar 10 19:14:16 crc kubenswrapper[4861]: E0310 19:14:16.586499 4861 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of ab5fdb8f04b86479e7de65efd45e1b3abeeeef9346b04a04ab27c54b7f1b81fc is running failed: container process not found" containerID="ab5fdb8f04b86479e7de65efd45e1b3abeeeef9346b04a04ab27c54b7f1b81fc" 
cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Mar 10 19:14:16 crc kubenswrapper[4861]: E0310 19:14:16.586597 4861 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of ab5fdb8f04b86479e7de65efd45e1b3abeeeef9346b04a04ab27c54b7f1b81fc is running failed: container process not found" probeType="Readiness" pod="openstack/ovn-controller-ovs-cw7x8" podUID="2f72ec66-0d64-4a5f-b1c6-17d62a735065" containerName="ovsdb-server" Mar 10 19:14:16 crc kubenswrapper[4861]: E0310 19:14:16.587173 4861 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="3b23a95a1a463eb82f2475fdcea1a6070b6a64b1c3a03f9d3000d377b6e7bebb" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Mar 10 19:14:16 crc kubenswrapper[4861]: E0310 19:14:16.590801 4861 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="3b23a95a1a463eb82f2475fdcea1a6070b6a64b1c3a03f9d3000d377b6e7bebb" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Mar 10 19:14:16 crc kubenswrapper[4861]: E0310 19:14:16.594665 4861 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="3b23a95a1a463eb82f2475fdcea1a6070b6a64b1c3a03f9d3000d377b6e7bebb" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Mar 10 19:14:16 crc kubenswrapper[4861]: E0310 19:14:16.594755 4861 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" 
pod="openstack/ovn-controller-ovs-cw7x8" podUID="2f72ec66-0d64-4a5f-b1c6-17d62a735065" containerName="ovs-vswitchd" Mar 10 19:14:16 crc kubenswrapper[4861]: I0310 19:14:16.681051 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9648m\" (UniqueName: \"kubernetes.io/projected/692615cd-3dd2-4970-9d35-63073e2403ba-kube-api-access-9648m\") pod \"692615cd-3dd2-4970-9d35-63073e2403ba\" (UID: \"692615cd-3dd2-4970-9d35-63073e2403ba\") " Mar 10 19:14:16 crc kubenswrapper[4861]: I0310 19:14:16.681188 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/692615cd-3dd2-4970-9d35-63073e2403ba-internal-tls-certs\") pod \"692615cd-3dd2-4970-9d35-63073e2403ba\" (UID: \"692615cd-3dd2-4970-9d35-63073e2403ba\") " Mar 10 19:14:16 crc kubenswrapper[4861]: I0310 19:14:16.681372 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/692615cd-3dd2-4970-9d35-63073e2403ba-combined-ca-bundle\") pod \"692615cd-3dd2-4970-9d35-63073e2403ba\" (UID: \"692615cd-3dd2-4970-9d35-63073e2403ba\") " Mar 10 19:14:16 crc kubenswrapper[4861]: I0310 19:14:16.681434 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/692615cd-3dd2-4970-9d35-63073e2403ba-config\") pod \"692615cd-3dd2-4970-9d35-63073e2403ba\" (UID: \"692615cd-3dd2-4970-9d35-63073e2403ba\") " Mar 10 19:14:16 crc kubenswrapper[4861]: I0310 19:14:16.681546 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/692615cd-3dd2-4970-9d35-63073e2403ba-httpd-config\") pod \"692615cd-3dd2-4970-9d35-63073e2403ba\" (UID: \"692615cd-3dd2-4970-9d35-63073e2403ba\") " Mar 10 19:14:16 crc kubenswrapper[4861]: I0310 19:14:16.681605 4861 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/692615cd-3dd2-4970-9d35-63073e2403ba-public-tls-certs\") pod \"692615cd-3dd2-4970-9d35-63073e2403ba\" (UID: \"692615cd-3dd2-4970-9d35-63073e2403ba\") " Mar 10 19:14:16 crc kubenswrapper[4861]: I0310 19:14:16.681654 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/692615cd-3dd2-4970-9d35-63073e2403ba-ovndb-tls-certs\") pod \"692615cd-3dd2-4970-9d35-63073e2403ba\" (UID: \"692615cd-3dd2-4970-9d35-63073e2403ba\") " Mar 10 19:14:16 crc kubenswrapper[4861]: I0310 19:14:16.688569 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/692615cd-3dd2-4970-9d35-63073e2403ba-kube-api-access-9648m" (OuterVolumeSpecName: "kube-api-access-9648m") pod "692615cd-3dd2-4970-9d35-63073e2403ba" (UID: "692615cd-3dd2-4970-9d35-63073e2403ba"). InnerVolumeSpecName "kube-api-access-9648m". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 10 19:14:16 crc kubenswrapper[4861]: I0310 19:14:16.689799 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/692615cd-3dd2-4970-9d35-63073e2403ba-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "692615cd-3dd2-4970-9d35-63073e2403ba" (UID: "692615cd-3dd2-4970-9d35-63073e2403ba"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 10 19:14:16 crc kubenswrapper[4861]: I0310 19:14:16.741567 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/692615cd-3dd2-4970-9d35-63073e2403ba-config" (OuterVolumeSpecName: "config") pod "692615cd-3dd2-4970-9d35-63073e2403ba" (UID: "692615cd-3dd2-4970-9d35-63073e2403ba"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 10 19:14:16 crc kubenswrapper[4861]: I0310 19:14:16.751731 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/692615cd-3dd2-4970-9d35-63073e2403ba-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "692615cd-3dd2-4970-9d35-63073e2403ba" (UID: "692615cd-3dd2-4970-9d35-63073e2403ba"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 10 19:14:16 crc kubenswrapper[4861]: I0310 19:14:16.754608 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/692615cd-3dd2-4970-9d35-63073e2403ba-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "692615cd-3dd2-4970-9d35-63073e2403ba" (UID: "692615cd-3dd2-4970-9d35-63073e2403ba"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 10 19:14:16 crc kubenswrapper[4861]: I0310 19:14:16.758979 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/692615cd-3dd2-4970-9d35-63073e2403ba-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "692615cd-3dd2-4970-9d35-63073e2403ba" (UID: "692615cd-3dd2-4970-9d35-63073e2403ba"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 10 19:14:16 crc kubenswrapper[4861]: I0310 19:14:16.767410 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/692615cd-3dd2-4970-9d35-63073e2403ba-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "692615cd-3dd2-4970-9d35-63073e2403ba" (UID: "692615cd-3dd2-4970-9d35-63073e2403ba"). InnerVolumeSpecName "ovndb-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 10 19:14:16 crc kubenswrapper[4861]: I0310 19:14:16.785937 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9648m\" (UniqueName: \"kubernetes.io/projected/692615cd-3dd2-4970-9d35-63073e2403ba-kube-api-access-9648m\") on node \"crc\" DevicePath \"\"" Mar 10 19:14:16 crc kubenswrapper[4861]: I0310 19:14:16.785995 4861 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/692615cd-3dd2-4970-9d35-63073e2403ba-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 10 19:14:16 crc kubenswrapper[4861]: I0310 19:14:16.786015 4861 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/692615cd-3dd2-4970-9d35-63073e2403ba-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 10 19:14:16 crc kubenswrapper[4861]: I0310 19:14:16.786033 4861 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/692615cd-3dd2-4970-9d35-63073e2403ba-config\") on node \"crc\" DevicePath \"\"" Mar 10 19:14:16 crc kubenswrapper[4861]: I0310 19:14:16.786050 4861 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/692615cd-3dd2-4970-9d35-63073e2403ba-httpd-config\") on node \"crc\" DevicePath \"\"" Mar 10 19:14:16 crc kubenswrapper[4861]: I0310 19:14:16.786068 4861 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/692615cd-3dd2-4970-9d35-63073e2403ba-public-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 10 19:14:16 crc kubenswrapper[4861]: I0310 19:14:16.786083 4861 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/692615cd-3dd2-4970-9d35-63073e2403ba-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 10 19:14:17 crc kubenswrapper[4861]: I0310 19:14:17.249090 4861 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-69987f456f-bjbjk" event={"ID":"692615cd-3dd2-4970-9d35-63073e2403ba","Type":"ContainerDied","Data":"6e062d912ba3bd3d7621ccdddf5f34b7f49c98dbafbe2c1e56e53fce35c8525d"} Mar 10 19:14:17 crc kubenswrapper[4861]: I0310 19:14:17.249163 4861 scope.go:117] "RemoveContainer" containerID="8c0527061a420c2c42f4d25de783f4827f7ddac4b9a8e5486e1239c416673684" Mar 10 19:14:17 crc kubenswrapper[4861]: I0310 19:14:17.249919 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-69987f456f-bjbjk" Mar 10 19:14:17 crc kubenswrapper[4861]: I0310 19:14:17.276039 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-69987f456f-bjbjk"] Mar 10 19:14:17 crc kubenswrapper[4861]: I0310 19:14:17.284865 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-69987f456f-bjbjk"] Mar 10 19:14:17 crc kubenswrapper[4861]: I0310 19:14:17.286040 4861 scope.go:117] "RemoveContainer" containerID="c33900d456694805e616c7853b9af4d757f2fbf528a6cc4897258f4c25f58c83" Mar 10 19:14:18 crc kubenswrapper[4861]: I0310 19:14:18.975608 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="692615cd-3dd2-4970-9d35-63073e2403ba" path="/var/lib/kubelet/pods/692615cd-3dd2-4970-9d35-63073e2403ba/volumes" Mar 10 19:14:21 crc kubenswrapper[4861]: E0310 19:14:21.583287 4861 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of ab5fdb8f04b86479e7de65efd45e1b3abeeeef9346b04a04ab27c54b7f1b81fc is running failed: container process not found" containerID="ab5fdb8f04b86479e7de65efd45e1b3abeeeef9346b04a04ab27c54b7f1b81fc" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Mar 10 19:14:21 crc kubenswrapper[4861]: E0310 19:14:21.584674 4861 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = 
container is not created or running: checking if PID of ab5fdb8f04b86479e7de65efd45e1b3abeeeef9346b04a04ab27c54b7f1b81fc is running failed: container process not found" containerID="ab5fdb8f04b86479e7de65efd45e1b3abeeeef9346b04a04ab27c54b7f1b81fc" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Mar 10 19:14:21 crc kubenswrapper[4861]: E0310 19:14:21.585602 4861 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of ab5fdb8f04b86479e7de65efd45e1b3abeeeef9346b04a04ab27c54b7f1b81fc is running failed: container process not found" containerID="ab5fdb8f04b86479e7de65efd45e1b3abeeeef9346b04a04ab27c54b7f1b81fc" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Mar 10 19:14:21 crc kubenswrapper[4861]: E0310 19:14:21.585661 4861 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of ab5fdb8f04b86479e7de65efd45e1b3abeeeef9346b04a04ab27c54b7f1b81fc is running failed: container process not found" probeType="Readiness" pod="openstack/ovn-controller-ovs-cw7x8" podUID="2f72ec66-0d64-4a5f-b1c6-17d62a735065" containerName="ovsdb-server" Mar 10 19:14:21 crc kubenswrapper[4861]: E0310 19:14:21.586961 4861 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="3b23a95a1a463eb82f2475fdcea1a6070b6a64b1c3a03f9d3000d377b6e7bebb" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Mar 10 19:14:21 crc kubenswrapper[4861]: E0310 19:14:21.588621 4861 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="3b23a95a1a463eb82f2475fdcea1a6070b6a64b1c3a03f9d3000d377b6e7bebb" 
cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Mar 10 19:14:21 crc kubenswrapper[4861]: E0310 19:14:21.590993 4861 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="3b23a95a1a463eb82f2475fdcea1a6070b6a64b1c3a03f9d3000d377b6e7bebb" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Mar 10 19:14:21 crc kubenswrapper[4861]: E0310 19:14:21.591048 4861 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-controller-ovs-cw7x8" podUID="2f72ec66-0d64-4a5f-b1c6-17d62a735065" containerName="ovs-vswitchd" Mar 10 19:14:26 crc kubenswrapper[4861]: E0310 19:14:26.584599 4861 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of ab5fdb8f04b86479e7de65efd45e1b3abeeeef9346b04a04ab27c54b7f1b81fc is running failed: container process not found" containerID="ab5fdb8f04b86479e7de65efd45e1b3abeeeef9346b04a04ab27c54b7f1b81fc" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Mar 10 19:14:26 crc kubenswrapper[4861]: E0310 19:14:26.586248 4861 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of ab5fdb8f04b86479e7de65efd45e1b3abeeeef9346b04a04ab27c54b7f1b81fc is running failed: container process not found" containerID="ab5fdb8f04b86479e7de65efd45e1b3abeeeef9346b04a04ab27c54b7f1b81fc" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Mar 10 19:14:26 crc kubenswrapper[4861]: E0310 19:14:26.586743 4861 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: 
checking if PID of ab5fdb8f04b86479e7de65efd45e1b3abeeeef9346b04a04ab27c54b7f1b81fc is running failed: container process not found" containerID="ab5fdb8f04b86479e7de65efd45e1b3abeeeef9346b04a04ab27c54b7f1b81fc" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Mar 10 19:14:26 crc kubenswrapper[4861]: E0310 19:14:26.586827 4861 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of ab5fdb8f04b86479e7de65efd45e1b3abeeeef9346b04a04ab27c54b7f1b81fc is running failed: container process not found" probeType="Readiness" pod="openstack/ovn-controller-ovs-cw7x8" podUID="2f72ec66-0d64-4a5f-b1c6-17d62a735065" containerName="ovsdb-server" Mar 10 19:14:26 crc kubenswrapper[4861]: E0310 19:14:26.588608 4861 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="3b23a95a1a463eb82f2475fdcea1a6070b6a64b1c3a03f9d3000d377b6e7bebb" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Mar 10 19:14:26 crc kubenswrapper[4861]: E0310 19:14:26.591049 4861 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="3b23a95a1a463eb82f2475fdcea1a6070b6a64b1c3a03f9d3000d377b6e7bebb" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Mar 10 19:14:26 crc kubenswrapper[4861]: E0310 19:14:26.593700 4861 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="3b23a95a1a463eb82f2475fdcea1a6070b6a64b1c3a03f9d3000d377b6e7bebb" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Mar 10 19:14:26 crc kubenswrapper[4861]: E0310 19:14:26.593805 4861 
prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-controller-ovs-cw7x8" podUID="2f72ec66-0d64-4a5f-b1c6-17d62a735065" containerName="ovs-vswitchd" Mar 10 19:14:28 crc kubenswrapper[4861]: I0310 19:14:28.403182 4861 generic.go:334] "Generic (PLEG): container finished" podID="04bbfc10-7f55-45a5-8a53-70e994a09bc9" containerID="2bd4ad2d926f4ab721bec00675970f1a24d0671a8e5f1570ec65b6917857aedb" exitCode=137 Mar 10 19:14:28 crc kubenswrapper[4861]: I0310 19:14:28.403251 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"04bbfc10-7f55-45a5-8a53-70e994a09bc9","Type":"ContainerDied","Data":"2bd4ad2d926f4ab721bec00675970f1a24d0671a8e5f1570ec65b6917857aedb"} Mar 10 19:14:28 crc kubenswrapper[4861]: I0310 19:14:28.405827 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-cw7x8_2f72ec66-0d64-4a5f-b1c6-17d62a735065/ovs-vswitchd/0.log" Mar 10 19:14:28 crc kubenswrapper[4861]: I0310 19:14:28.406562 4861 generic.go:334] "Generic (PLEG): container finished" podID="2f72ec66-0d64-4a5f-b1c6-17d62a735065" containerID="3b23a95a1a463eb82f2475fdcea1a6070b6a64b1c3a03f9d3000d377b6e7bebb" exitCode=137 Mar 10 19:14:28 crc kubenswrapper[4861]: I0310 19:14:28.406598 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-cw7x8" event={"ID":"2f72ec66-0d64-4a5f-b1c6-17d62a735065","Type":"ContainerDied","Data":"3b23a95a1a463eb82f2475fdcea1a6070b6a64b1c3a03f9d3000d377b6e7bebb"} Mar 10 19:14:28 crc kubenswrapper[4861]: I0310 19:14:28.479142 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-cw7x8_2f72ec66-0d64-4a5f-b1c6-17d62a735065/ovs-vswitchd/0.log" Mar 10 19:14:28 crc kubenswrapper[4861]: I0310 19:14:28.480351 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-ovs-cw7x8" Mar 10 19:14:28 crc kubenswrapper[4861]: I0310 19:14:28.481570 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0" Mar 10 19:14:28 crc kubenswrapper[4861]: I0310 19:14:28.591466 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/2f72ec66-0d64-4a5f-b1c6-17d62a735065-var-run\") pod \"2f72ec66-0d64-4a5f-b1c6-17d62a735065\" (UID: \"2f72ec66-0d64-4a5f-b1c6-17d62a735065\") " Mar 10 19:14:28 crc kubenswrapper[4861]: I0310 19:14:28.591537 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swift\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"04bbfc10-7f55-45a5-8a53-70e994a09bc9\" (UID: \"04bbfc10-7f55-45a5-8a53-70e994a09bc9\") " Mar 10 19:14:28 crc kubenswrapper[4861]: I0310 19:14:28.591569 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nsfvl\" (UniqueName: \"kubernetes.io/projected/2f72ec66-0d64-4a5f-b1c6-17d62a735065-kube-api-access-nsfvl\") pod \"2f72ec66-0d64-4a5f-b1c6-17d62a735065\" (UID: \"2f72ec66-0d64-4a5f-b1c6-17d62a735065\") " Mar 10 19:14:28 crc kubenswrapper[4861]: I0310 19:14:28.591632 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04bbfc10-7f55-45a5-8a53-70e994a09bc9-combined-ca-bundle\") pod \"04bbfc10-7f55-45a5-8a53-70e994a09bc9\" (UID: \"04bbfc10-7f55-45a5-8a53-70e994a09bc9\") " Mar 10 19:14:28 crc kubenswrapper[4861]: I0310 19:14:28.591663 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/04bbfc10-7f55-45a5-8a53-70e994a09bc9-lock\") pod \"04bbfc10-7f55-45a5-8a53-70e994a09bc9\" (UID: \"04bbfc10-7f55-45a5-8a53-70e994a09bc9\") " Mar 10 19:14:28 crc 
kubenswrapper[4861]: I0310 19:14:28.591683 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/2f72ec66-0d64-4a5f-b1c6-17d62a735065-var-log\") pod \"2f72ec66-0d64-4a5f-b1c6-17d62a735065\" (UID: \"2f72ec66-0d64-4a5f-b1c6-17d62a735065\") " Mar 10 19:14:28 crc kubenswrapper[4861]: I0310 19:14:28.591729 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/2f72ec66-0d64-4a5f-b1c6-17d62a735065-var-lib\") pod \"2f72ec66-0d64-4a5f-b1c6-17d62a735065\" (UID: \"2f72ec66-0d64-4a5f-b1c6-17d62a735065\") " Mar 10 19:14:28 crc kubenswrapper[4861]: I0310 19:14:28.591772 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/04bbfc10-7f55-45a5-8a53-70e994a09bc9-etc-swift\") pod \"04bbfc10-7f55-45a5-8a53-70e994a09bc9\" (UID: \"04bbfc10-7f55-45a5-8a53-70e994a09bc9\") " Mar 10 19:14:28 crc kubenswrapper[4861]: I0310 19:14:28.591812 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/2f72ec66-0d64-4a5f-b1c6-17d62a735065-etc-ovs\") pod \"2f72ec66-0d64-4a5f-b1c6-17d62a735065\" (UID: \"2f72ec66-0d64-4a5f-b1c6-17d62a735065\") " Mar 10 19:14:28 crc kubenswrapper[4861]: I0310 19:14:28.591837 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/04bbfc10-7f55-45a5-8a53-70e994a09bc9-cache\") pod \"04bbfc10-7f55-45a5-8a53-70e994a09bc9\" (UID: \"04bbfc10-7f55-45a5-8a53-70e994a09bc9\") " Mar 10 19:14:28 crc kubenswrapper[4861]: I0310 19:14:28.591872 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2f72ec66-0d64-4a5f-b1c6-17d62a735065-scripts\") pod \"2f72ec66-0d64-4a5f-b1c6-17d62a735065\" (UID: 
\"2f72ec66-0d64-4a5f-b1c6-17d62a735065\") " Mar 10 19:14:28 crc kubenswrapper[4861]: I0310 19:14:28.591939 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9mz6p\" (UniqueName: \"kubernetes.io/projected/04bbfc10-7f55-45a5-8a53-70e994a09bc9-kube-api-access-9mz6p\") pod \"04bbfc10-7f55-45a5-8a53-70e994a09bc9\" (UID: \"04bbfc10-7f55-45a5-8a53-70e994a09bc9\") " Mar 10 19:14:28 crc kubenswrapper[4861]: I0310 19:14:28.591970 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2f72ec66-0d64-4a5f-b1c6-17d62a735065-var-lib" (OuterVolumeSpecName: "var-lib") pod "2f72ec66-0d64-4a5f-b1c6-17d62a735065" (UID: "2f72ec66-0d64-4a5f-b1c6-17d62a735065"). InnerVolumeSpecName "var-lib". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 10 19:14:28 crc kubenswrapper[4861]: I0310 19:14:28.591975 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2f72ec66-0d64-4a5f-b1c6-17d62a735065-var-run" (OuterVolumeSpecName: "var-run") pod "2f72ec66-0d64-4a5f-b1c6-17d62a735065" (UID: "2f72ec66-0d64-4a5f-b1c6-17d62a735065"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 10 19:14:28 crc kubenswrapper[4861]: I0310 19:14:28.592056 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2f72ec66-0d64-4a5f-b1c6-17d62a735065-var-log" (OuterVolumeSpecName: "var-log") pod "2f72ec66-0d64-4a5f-b1c6-17d62a735065" (UID: "2f72ec66-0d64-4a5f-b1c6-17d62a735065"). InnerVolumeSpecName "var-log". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 10 19:14:28 crc kubenswrapper[4861]: I0310 19:14:28.592545 4861 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/2f72ec66-0d64-4a5f-b1c6-17d62a735065-var-log\") on node \"crc\" DevicePath \"\"" Mar 10 19:14:28 crc kubenswrapper[4861]: I0310 19:14:28.592585 4861 reconciler_common.go:293] "Volume detached for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/2f72ec66-0d64-4a5f-b1c6-17d62a735065-var-lib\") on node \"crc\" DevicePath \"\"" Mar 10 19:14:28 crc kubenswrapper[4861]: I0310 19:14:28.592601 4861 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/2f72ec66-0d64-4a5f-b1c6-17d62a735065-var-run\") on node \"crc\" DevicePath \"\"" Mar 10 19:14:28 crc kubenswrapper[4861]: I0310 19:14:28.592554 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/04bbfc10-7f55-45a5-8a53-70e994a09bc9-lock" (OuterVolumeSpecName: "lock") pod "04bbfc10-7f55-45a5-8a53-70e994a09bc9" (UID: "04bbfc10-7f55-45a5-8a53-70e994a09bc9"). InnerVolumeSpecName "lock". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 10 19:14:28 crc kubenswrapper[4861]: I0310 19:14:28.592993 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/04bbfc10-7f55-45a5-8a53-70e994a09bc9-cache" (OuterVolumeSpecName: "cache") pod "04bbfc10-7f55-45a5-8a53-70e994a09bc9" (UID: "04bbfc10-7f55-45a5-8a53-70e994a09bc9"). InnerVolumeSpecName "cache". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 10 19:14:28 crc kubenswrapper[4861]: I0310 19:14:28.593065 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2f72ec66-0d64-4a5f-b1c6-17d62a735065-etc-ovs" (OuterVolumeSpecName: "etc-ovs") pod "2f72ec66-0d64-4a5f-b1c6-17d62a735065" (UID: "2f72ec66-0d64-4a5f-b1c6-17d62a735065"). 
InnerVolumeSpecName "etc-ovs". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 10 19:14:28 crc kubenswrapper[4861]: I0310 19:14:28.593403 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2f72ec66-0d64-4a5f-b1c6-17d62a735065-scripts" (OuterVolumeSpecName: "scripts") pod "2f72ec66-0d64-4a5f-b1c6-17d62a735065" (UID: "2f72ec66-0d64-4a5f-b1c6-17d62a735065"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 10 19:14:28 crc kubenswrapper[4861]: I0310 19:14:28.599929 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage05-crc" (OuterVolumeSpecName: "swift") pod "04bbfc10-7f55-45a5-8a53-70e994a09bc9" (UID: "04bbfc10-7f55-45a5-8a53-70e994a09bc9"). InnerVolumeSpecName "local-storage05-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Mar 10 19:14:28 crc kubenswrapper[4861]: I0310 19:14:28.599957 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/04bbfc10-7f55-45a5-8a53-70e994a09bc9-kube-api-access-9mz6p" (OuterVolumeSpecName: "kube-api-access-9mz6p") pod "04bbfc10-7f55-45a5-8a53-70e994a09bc9" (UID: "04bbfc10-7f55-45a5-8a53-70e994a09bc9"). InnerVolumeSpecName "kube-api-access-9mz6p". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 10 19:14:28 crc kubenswrapper[4861]: I0310 19:14:28.599998 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2f72ec66-0d64-4a5f-b1c6-17d62a735065-kube-api-access-nsfvl" (OuterVolumeSpecName: "kube-api-access-nsfvl") pod "2f72ec66-0d64-4a5f-b1c6-17d62a735065" (UID: "2f72ec66-0d64-4a5f-b1c6-17d62a735065"). InnerVolumeSpecName "kube-api-access-nsfvl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 10 19:14:28 crc kubenswrapper[4861]: I0310 19:14:28.604489 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/04bbfc10-7f55-45a5-8a53-70e994a09bc9-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "04bbfc10-7f55-45a5-8a53-70e994a09bc9" (UID: "04bbfc10-7f55-45a5-8a53-70e994a09bc9"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 10 19:14:28 crc kubenswrapper[4861]: I0310 19:14:28.694491 4861 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2f72ec66-0d64-4a5f-b1c6-17d62a735065-scripts\") on node \"crc\" DevicePath \"\"" Mar 10 19:14:28 crc kubenswrapper[4861]: I0310 19:14:28.694542 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9mz6p\" (UniqueName: \"kubernetes.io/projected/04bbfc10-7f55-45a5-8a53-70e994a09bc9-kube-api-access-9mz6p\") on node \"crc\" DevicePath \"\"" Mar 10 19:14:28 crc kubenswrapper[4861]: I0310 19:14:28.694636 4861 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" " Mar 10 19:14:28 crc kubenswrapper[4861]: I0310 19:14:28.694693 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nsfvl\" (UniqueName: \"kubernetes.io/projected/2f72ec66-0d64-4a5f-b1c6-17d62a735065-kube-api-access-nsfvl\") on node \"crc\" DevicePath \"\"" Mar 10 19:14:28 crc kubenswrapper[4861]: I0310 19:14:28.694737 4861 reconciler_common.go:293] "Volume detached for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/04bbfc10-7f55-45a5-8a53-70e994a09bc9-lock\") on node \"crc\" DevicePath \"\"" Mar 10 19:14:28 crc kubenswrapper[4861]: I0310 19:14:28.694757 4861 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: 
\"kubernetes.io/projected/04bbfc10-7f55-45a5-8a53-70e994a09bc9-etc-swift\") on node \"crc\" DevicePath \"\"" Mar 10 19:14:28 crc kubenswrapper[4861]: I0310 19:14:28.694776 4861 reconciler_common.go:293] "Volume detached for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/2f72ec66-0d64-4a5f-b1c6-17d62a735065-etc-ovs\") on node \"crc\" DevicePath \"\"" Mar 10 19:14:28 crc kubenswrapper[4861]: I0310 19:14:28.694792 4861 reconciler_common.go:293] "Volume detached for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/04bbfc10-7f55-45a5-8a53-70e994a09bc9-cache\") on node \"crc\" DevicePath \"\"" Mar 10 19:14:28 crc kubenswrapper[4861]: I0310 19:14:28.721021 4861 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage05-crc" (UniqueName: "kubernetes.io/local-volume/local-storage05-crc") on node "crc" Mar 10 19:14:28 crc kubenswrapper[4861]: I0310 19:14:28.796423 4861 reconciler_common.go:293] "Volume detached for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" DevicePath \"\"" Mar 10 19:14:29 crc kubenswrapper[4861]: I0310 19:14:29.044029 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/04bbfc10-7f55-45a5-8a53-70e994a09bc9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "04bbfc10-7f55-45a5-8a53-70e994a09bc9" (UID: "04bbfc10-7f55-45a5-8a53-70e994a09bc9"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 10 19:14:29 crc kubenswrapper[4861]: I0310 19:14:29.102109 4861 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04bbfc10-7f55-45a5-8a53-70e994a09bc9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 10 19:14:29 crc kubenswrapper[4861]: I0310 19:14:29.414559 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-cw7x8_2f72ec66-0d64-4a5f-b1c6-17d62a735065/ovs-vswitchd/0.log" Mar 10 19:14:29 crc kubenswrapper[4861]: I0310 19:14:29.415406 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-cw7x8" event={"ID":"2f72ec66-0d64-4a5f-b1c6-17d62a735065","Type":"ContainerDied","Data":"a2ee8caa35388a3dd3f25affde057556c4b2eaa817152c08ef7ee1c73dd4be03"} Mar 10 19:14:29 crc kubenswrapper[4861]: I0310 19:14:29.415458 4861 scope.go:117] "RemoveContainer" containerID="3b23a95a1a463eb82f2475fdcea1a6070b6a64b1c3a03f9d3000d377b6e7bebb" Mar 10 19:14:29 crc kubenswrapper[4861]: I0310 19:14:29.415633 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-cw7x8" Mar 10 19:14:29 crc kubenswrapper[4861]: I0310 19:14:29.438188 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"04bbfc10-7f55-45a5-8a53-70e994a09bc9","Type":"ContainerDied","Data":"d51f9d05fd5eeb22e0ba24dfab305bb12907b0352462df401de8b75db408e3c4"} Mar 10 19:14:29 crc kubenswrapper[4861]: I0310 19:14:29.438390 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-storage-0" Mar 10 19:14:29 crc kubenswrapper[4861]: I0310 19:14:29.442108 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-ovs-cw7x8"] Mar 10 19:14:29 crc kubenswrapper[4861]: I0310 19:14:29.447158 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-ovs-cw7x8"] Mar 10 19:14:29 crc kubenswrapper[4861]: I0310 19:14:29.448665 4861 scope.go:117] "RemoveContainer" containerID="ab5fdb8f04b86479e7de65efd45e1b3abeeeef9346b04a04ab27c54b7f1b81fc" Mar 10 19:14:29 crc kubenswrapper[4861]: I0310 19:14:29.478303 4861 scope.go:117] "RemoveContainer" containerID="594bf60bf37e70def6a72f43c087341154de5553af17dbc6db5e3efddc38f18f" Mar 10 19:14:29 crc kubenswrapper[4861]: I0310 19:14:29.489217 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-storage-0"] Mar 10 19:14:29 crc kubenswrapper[4861]: I0310 19:14:29.499617 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/swift-storage-0"] Mar 10 19:14:29 crc kubenswrapper[4861]: I0310 19:14:29.519185 4861 scope.go:117] "RemoveContainer" containerID="11a99a6a7e0db5891a44034177a04d1985fa20e0f2d3290918a14ddd4959f420" Mar 10 19:14:29 crc kubenswrapper[4861]: I0310 19:14:29.540214 4861 scope.go:117] "RemoveContainer" containerID="55fdaa1d03e36a25dbe23991f3c3f1f316d93446c624fccb0f36484e914ef862" Mar 10 19:14:29 crc kubenswrapper[4861]: I0310 19:14:29.565482 4861 scope.go:117] "RemoveContainer" containerID="2454ccf10166b5528edb6a3e86ffa92bbe7f052583e5e86477cc4d4a7bbd47cb" Mar 10 19:14:29 crc kubenswrapper[4861]: I0310 19:14:29.594417 4861 scope.go:117] "RemoveContainer" containerID="63a9bdbdc82026e9332fbae1efef4878f557997e81cdaf15c41418eb885ff288" Mar 10 19:14:29 crc kubenswrapper[4861]: I0310 19:14:29.619157 4861 scope.go:117] "RemoveContainer" containerID="2bd4ad2d926f4ab721bec00675970f1a24d0671a8e5f1570ec65b6917857aedb" Mar 10 19:14:29 crc kubenswrapper[4861]: I0310 19:14:29.651641 4861 
scope.go:117] "RemoveContainer" containerID="73f6971b51316dd9e12239ff145daa5bc5b88b45caa9a90726d3eeb744b9fcb6" Mar 10 19:14:29 crc kubenswrapper[4861]: I0310 19:14:29.681231 4861 scope.go:117] "RemoveContainer" containerID="bef7cce63ec465afa0014e811d86aceb3dd6ecfcc6a1a0b0c73a257f27213042" Mar 10 19:14:29 crc kubenswrapper[4861]: I0310 19:14:29.713386 4861 scope.go:117] "RemoveContainer" containerID="66a2a9c7ab44445d4eeb595e1526f88c8cdbc26a0ff3fdd9ff021d8d32a4a982" Mar 10 19:14:29 crc kubenswrapper[4861]: I0310 19:14:29.741662 4861 scope.go:117] "RemoveContainer" containerID="f9e781e035bf250fe3ee13abaf159d7c55149ddf165cabef696cdc1f4ec625ad" Mar 10 19:14:29 crc kubenswrapper[4861]: I0310 19:14:29.768910 4861 scope.go:117] "RemoveContainer" containerID="5da060e6ca296ed684fc7e74fb2eb9b5f8999f47393abd75eb45df33aa0e5f1d" Mar 10 19:14:29 crc kubenswrapper[4861]: I0310 19:14:29.799801 4861 scope.go:117] "RemoveContainer" containerID="43df18a6de05237d4ac2005dc34fa3948f76b3a87364c4a8cea9cbd0359fd444" Mar 10 19:14:29 crc kubenswrapper[4861]: I0310 19:14:29.836442 4861 scope.go:117] "RemoveContainer" containerID="ab8c0e84aed350a4ab30debf28ea1e963b8e3e21aaeb08b63288cae676be3729" Mar 10 19:14:29 crc kubenswrapper[4861]: I0310 19:14:29.895627 4861 scope.go:117] "RemoveContainer" containerID="97b623396baf8ceeff4e0ed2a9e396771f9bf11b23a843a8a998619ba053f542" Mar 10 19:14:29 crc kubenswrapper[4861]: I0310 19:14:29.926737 4861 scope.go:117] "RemoveContainer" containerID="d2ff3b6b89cfd7c1dc305fb22ed53d7ed15f44f949992d564c643552f9f8274d" Mar 10 19:14:29 crc kubenswrapper[4861]: I0310 19:14:29.955890 4861 scope.go:117] "RemoveContainer" containerID="b3ade23dd33552771452468d0c1f332e38b2c8795245f8a42e5b5c4e2ed70338" Mar 10 19:14:30 crc kubenswrapper[4861]: I0310 19:14:30.974117 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="04bbfc10-7f55-45a5-8a53-70e994a09bc9" path="/var/lib/kubelet/pods/04bbfc10-7f55-45a5-8a53-70e994a09bc9/volumes" Mar 10 19:14:30 crc 
kubenswrapper[4861]: I0310 19:14:30.978819 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2f72ec66-0d64-4a5f-b1c6-17d62a735065" path="/var/lib/kubelet/pods/2f72ec66-0d64-4a5f-b1c6-17d62a735065/volumes" Mar 10 19:14:33 crc kubenswrapper[4861]: I0310 19:14:33.201595 4861 pod_container_manager_linux.go:210] "Failed to delete cgroup paths" cgroupName=["kubepods","besteffort","pod9140f7c5-893a-4128-85aa-2db96537b483"] err="unable to destroy cgroup paths for cgroup [kubepods besteffort pod9140f7c5-893a-4128-85aa-2db96537b483] : Timed out while waiting for systemd to remove kubepods-besteffort-pod9140f7c5_893a_4128_85aa_2db96537b483.slice" Mar 10 19:14:33 crc kubenswrapper[4861]: I0310 19:14:33.235080 4861 pod_container_manager_linux.go:210] "Failed to delete cgroup paths" cgroupName=["kubepods","besteffort","pod8d86917a-2e89-4e29-a1f2-673b0afbf27a"] err="unable to destroy cgroup paths for cgroup [kubepods besteffort pod8d86917a-2e89-4e29-a1f2-673b0afbf27a] : Timed out while waiting for systemd to remove kubepods-besteffort-pod8d86917a_2e89_4e29_a1f2_673b0afbf27a.slice" Mar 10 19:14:33 crc kubenswrapper[4861]: E0310 19:14:33.235145 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to delete cgroup paths for [kubepods besteffort pod8d86917a-2e89-4e29-a1f2-673b0afbf27a] : unable to destroy cgroup paths for cgroup [kubepods besteffort pod8d86917a-2e89-4e29-a1f2-673b0afbf27a] : Timed out while waiting for systemd to remove kubepods-besteffort-pod8d86917a_2e89_4e29_a1f2_673b0afbf27a.slice" pod="openstack/cinder-api-0" podUID="8d86917a-2e89-4e29-a1f2-673b0afbf27a" Mar 10 19:14:33 crc kubenswrapper[4861]: I0310 19:14:33.496751 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Mar 10 19:14:33 crc kubenswrapper[4861]: I0310 19:14:33.532541 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Mar 10 19:14:33 crc kubenswrapper[4861]: I0310 19:14:33.539580 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Mar 10 19:14:34 crc kubenswrapper[4861]: I0310 19:14:34.975211 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8d86917a-2e89-4e29-a1f2-673b0afbf27a" path="/var/lib/kubelet/pods/8d86917a-2e89-4e29-a1f2-673b0afbf27a/volumes" Mar 10 19:14:36 crc kubenswrapper[4861]: I0310 19:14:36.869596 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-6vlfr"] Mar 10 19:14:36 crc kubenswrapper[4861]: E0310 19:14:36.870184 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="36bd8cd3-7b2c-45fb-b171-aa2884df4e98" containerName="nova-scheduler-scheduler" Mar 10 19:14:36 crc kubenswrapper[4861]: I0310 19:14:36.870199 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="36bd8cd3-7b2c-45fb-b171-aa2884df4e98" containerName="nova-scheduler-scheduler" Mar 10 19:14:36 crc kubenswrapper[4861]: E0310 19:14:36.870219 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9140f7c5-893a-4128-85aa-2db96537b483" containerName="placement-log" Mar 10 19:14:36 crc kubenswrapper[4861]: I0310 19:14:36.870228 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="9140f7c5-893a-4128-85aa-2db96537b483" containerName="placement-log" Mar 10 19:14:36 crc kubenswrapper[4861]: E0310 19:14:36.870242 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="509298b8-3d6b-4182-b989-c25c4791ce6b" containerName="glance-log" Mar 10 19:14:36 crc kubenswrapper[4861]: I0310 19:14:36.870251 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="509298b8-3d6b-4182-b989-c25c4791ce6b" containerName="glance-log" Mar 10 19:14:36 crc 
kubenswrapper[4861]: E0310 19:14:36.870267 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="04bbfc10-7f55-45a5-8a53-70e994a09bc9" containerName="object-expirer" Mar 10 19:14:36 crc kubenswrapper[4861]: I0310 19:14:36.870277 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="04bbfc10-7f55-45a5-8a53-70e994a09bc9" containerName="object-expirer" Mar 10 19:14:36 crc kubenswrapper[4861]: E0310 19:14:36.870288 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="04bbfc10-7f55-45a5-8a53-70e994a09bc9" containerName="container-server" Mar 10 19:14:36 crc kubenswrapper[4861]: I0310 19:14:36.870296 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="04bbfc10-7f55-45a5-8a53-70e994a09bc9" containerName="container-server" Mar 10 19:14:36 crc kubenswrapper[4861]: E0310 19:14:36.870305 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="04bbfc10-7f55-45a5-8a53-70e994a09bc9" containerName="account-auditor" Mar 10 19:14:36 crc kubenswrapper[4861]: I0310 19:14:36.870313 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="04bbfc10-7f55-45a5-8a53-70e994a09bc9" containerName="account-auditor" Mar 10 19:14:36 crc kubenswrapper[4861]: E0310 19:14:36.870329 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="04bbfc10-7f55-45a5-8a53-70e994a09bc9" containerName="container-replicator" Mar 10 19:14:36 crc kubenswrapper[4861]: I0310 19:14:36.870337 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="04bbfc10-7f55-45a5-8a53-70e994a09bc9" containerName="container-replicator" Mar 10 19:14:36 crc kubenswrapper[4861]: E0310 19:14:36.870349 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="41fdd1f2-c0e5-4dbe-aa18-bd6dd1a4754d" containerName="memcached" Mar 10 19:14:36 crc kubenswrapper[4861]: I0310 19:14:36.870359 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="41fdd1f2-c0e5-4dbe-aa18-bd6dd1a4754d" containerName="memcached" Mar 10 19:14:36 crc 
kubenswrapper[4861]: E0310 19:14:36.870368 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2f72ec66-0d64-4a5f-b1c6-17d62a735065" containerName="ovs-vswitchd" Mar 10 19:14:36 crc kubenswrapper[4861]: I0310 19:14:36.870376 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="2f72ec66-0d64-4a5f-b1c6-17d62a735065" containerName="ovs-vswitchd" Mar 10 19:14:36 crc kubenswrapper[4861]: E0310 19:14:36.870385 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a31600f9-88a4-4ecb-8da3-84c966bf4a63" containerName="sg-core" Mar 10 19:14:36 crc kubenswrapper[4861]: I0310 19:14:36.870393 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="a31600f9-88a4-4ecb-8da3-84c966bf4a63" containerName="sg-core" Mar 10 19:14:36 crc kubenswrapper[4861]: E0310 19:14:36.870405 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="692615cd-3dd2-4970-9d35-63073e2403ba" containerName="neutron-api" Mar 10 19:14:36 crc kubenswrapper[4861]: I0310 19:14:36.870413 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="692615cd-3dd2-4970-9d35-63073e2403ba" containerName="neutron-api" Mar 10 19:14:36 crc kubenswrapper[4861]: E0310 19:14:36.870422 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="04bbfc10-7f55-45a5-8a53-70e994a09bc9" containerName="container-updater" Mar 10 19:14:36 crc kubenswrapper[4861]: I0310 19:14:36.870431 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="04bbfc10-7f55-45a5-8a53-70e994a09bc9" containerName="container-updater" Mar 10 19:14:36 crc kubenswrapper[4861]: E0310 19:14:36.870440 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="930df8f4-7ebf-4425-976f-4f52654586bb" containerName="nova-api-log" Mar 10 19:14:36 crc kubenswrapper[4861]: I0310 19:14:36.870447 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="930df8f4-7ebf-4425-976f-4f52654586bb" containerName="nova-api-log" Mar 10 19:14:36 crc kubenswrapper[4861]: E0310 19:14:36.870459 
4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="04bbfc10-7f55-45a5-8a53-70e994a09bc9" containerName="account-replicator" Mar 10 19:14:36 crc kubenswrapper[4861]: I0310 19:14:36.870467 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="04bbfc10-7f55-45a5-8a53-70e994a09bc9" containerName="account-replicator" Mar 10 19:14:36 crc kubenswrapper[4861]: E0310 19:14:36.870483 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="04bbfc10-7f55-45a5-8a53-70e994a09bc9" containerName="account-server" Mar 10 19:14:36 crc kubenswrapper[4861]: I0310 19:14:36.870491 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="04bbfc10-7f55-45a5-8a53-70e994a09bc9" containerName="account-server" Mar 10 19:14:36 crc kubenswrapper[4861]: E0310 19:14:36.870501 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="04bbfc10-7f55-45a5-8a53-70e994a09bc9" containerName="container-auditor" Mar 10 19:14:36 crc kubenswrapper[4861]: I0310 19:14:36.870511 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="04bbfc10-7f55-45a5-8a53-70e994a09bc9" containerName="container-auditor" Mar 10 19:14:36 crc kubenswrapper[4861]: E0310 19:14:36.870524 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="04bbfc10-7f55-45a5-8a53-70e994a09bc9" containerName="object-server" Mar 10 19:14:36 crc kubenswrapper[4861]: I0310 19:14:36.870532 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="04bbfc10-7f55-45a5-8a53-70e994a09bc9" containerName="object-server" Mar 10 19:14:36 crc kubenswrapper[4861]: E0310 19:14:36.870544 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a31600f9-88a4-4ecb-8da3-84c966bf4a63" containerName="proxy-httpd" Mar 10 19:14:36 crc kubenswrapper[4861]: I0310 19:14:36.870552 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="a31600f9-88a4-4ecb-8da3-84c966bf4a63" containerName="proxy-httpd" Mar 10 19:14:36 crc kubenswrapper[4861]: E0310 19:14:36.870562 4861 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9fa4a97d-682a-40eb-93e0-5f5167ddb0a0" containerName="setup-container" Mar 10 19:14:36 crc kubenswrapper[4861]: I0310 19:14:36.870569 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="9fa4a97d-682a-40eb-93e0-5f5167ddb0a0" containerName="setup-container" Mar 10 19:14:36 crc kubenswrapper[4861]: E0310 19:14:36.870579 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0ba95f55-3cea-4f0b-8f09-c6b4027789f8" containerName="rabbitmq" Mar 10 19:14:36 crc kubenswrapper[4861]: I0310 19:14:36.870587 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="0ba95f55-3cea-4f0b-8f09-c6b4027789f8" containerName="rabbitmq" Mar 10 19:14:36 crc kubenswrapper[4861]: E0310 19:14:36.870604 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2f72ec66-0d64-4a5f-b1c6-17d62a735065" containerName="ovsdb-server-init" Mar 10 19:14:36 crc kubenswrapper[4861]: I0310 19:14:36.870612 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="2f72ec66-0d64-4a5f-b1c6-17d62a735065" containerName="ovsdb-server-init" Mar 10 19:14:36 crc kubenswrapper[4861]: E0310 19:14:36.870624 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="04bbfc10-7f55-45a5-8a53-70e994a09bc9" containerName="account-reaper" Mar 10 19:14:36 crc kubenswrapper[4861]: I0310 19:14:36.870632 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="04bbfc10-7f55-45a5-8a53-70e994a09bc9" containerName="account-reaper" Mar 10 19:14:36 crc kubenswrapper[4861]: E0310 19:14:36.870643 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="930df8f4-7ebf-4425-976f-4f52654586bb" containerName="nova-api-api" Mar 10 19:14:36 crc kubenswrapper[4861]: I0310 19:14:36.870651 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="930df8f4-7ebf-4425-976f-4f52654586bb" containerName="nova-api-api" Mar 10 19:14:36 crc kubenswrapper[4861]: E0310 19:14:36.870663 4861 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="04bbfc10-7f55-45a5-8a53-70e994a09bc9" containerName="object-replicator" Mar 10 19:14:36 crc kubenswrapper[4861]: I0310 19:14:36.870671 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="04bbfc10-7f55-45a5-8a53-70e994a09bc9" containerName="object-replicator" Mar 10 19:14:36 crc kubenswrapper[4861]: E0310 19:14:36.870682 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ef3a31e3-d3ba-4f5c-950a-1355bb61f657" containerName="glance-httpd" Mar 10 19:14:36 crc kubenswrapper[4861]: I0310 19:14:36.870690 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="ef3a31e3-d3ba-4f5c-950a-1355bb61f657" containerName="glance-httpd" Mar 10 19:14:36 crc kubenswrapper[4861]: E0310 19:14:36.870701 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7cd47c3f-4d27-4dae-bac5-a6f81ce01cd2" containerName="kube-state-metrics" Mar 10 19:14:36 crc kubenswrapper[4861]: I0310 19:14:36.870729 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="7cd47c3f-4d27-4dae-bac5-a6f81ce01cd2" containerName="kube-state-metrics" Mar 10 19:14:36 crc kubenswrapper[4861]: E0310 19:14:36.870743 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="79b1b96d-52e3-4a16-8fc1-d09188b5ebc1" containerName="barbican-api" Mar 10 19:14:36 crc kubenswrapper[4861]: I0310 19:14:36.870750 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="79b1b96d-52e3-4a16-8fc1-d09188b5ebc1" containerName="barbican-api" Mar 10 19:14:36 crc kubenswrapper[4861]: E0310 19:14:36.870762 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8d86917a-2e89-4e29-a1f2-673b0afbf27a" containerName="cinder-api" Mar 10 19:14:36 crc kubenswrapper[4861]: I0310 19:14:36.870770 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="8d86917a-2e89-4e29-a1f2-673b0afbf27a" containerName="cinder-api" Mar 10 19:14:36 crc kubenswrapper[4861]: E0310 19:14:36.870785 4861 cpu_manager.go:410] "RemoveStaleState: 
removing container" podUID="9140f7c5-893a-4128-85aa-2db96537b483" containerName="placement-api" Mar 10 19:14:36 crc kubenswrapper[4861]: I0310 19:14:36.870795 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="9140f7c5-893a-4128-85aa-2db96537b483" containerName="placement-api" Mar 10 19:14:36 crc kubenswrapper[4861]: E0310 19:14:36.870808 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="04bbfc10-7f55-45a5-8a53-70e994a09bc9" containerName="object-auditor" Mar 10 19:14:36 crc kubenswrapper[4861]: I0310 19:14:36.870816 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="04bbfc10-7f55-45a5-8a53-70e994a09bc9" containerName="object-auditor" Mar 10 19:14:36 crc kubenswrapper[4861]: E0310 19:14:36.870829 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="79b1b96d-52e3-4a16-8fc1-d09188b5ebc1" containerName="barbican-api-log" Mar 10 19:14:36 crc kubenswrapper[4861]: I0310 19:14:36.870838 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="79b1b96d-52e3-4a16-8fc1-d09188b5ebc1" containerName="barbican-api-log" Mar 10 19:14:36 crc kubenswrapper[4861]: E0310 19:14:36.870847 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="04bbfc10-7f55-45a5-8a53-70e994a09bc9" containerName="object-updater" Mar 10 19:14:36 crc kubenswrapper[4861]: I0310 19:14:36.870855 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="04bbfc10-7f55-45a5-8a53-70e994a09bc9" containerName="object-updater" Mar 10 19:14:36 crc kubenswrapper[4861]: E0310 19:14:36.870866 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9fa4a97d-682a-40eb-93e0-5f5167ddb0a0" containerName="rabbitmq" Mar 10 19:14:36 crc kubenswrapper[4861]: I0310 19:14:36.870873 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="9fa4a97d-682a-40eb-93e0-5f5167ddb0a0" containerName="rabbitmq" Mar 10 19:14:36 crc kubenswrapper[4861]: E0310 19:14:36.870887 4861 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="fb082653-4ce1-4696-b6fb-e6af12109812" containerName="keystone-api" Mar 10 19:14:36 crc kubenswrapper[4861]: I0310 19:14:36.870896 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="fb082653-4ce1-4696-b6fb-e6af12109812" containerName="keystone-api" Mar 10 19:14:36 crc kubenswrapper[4861]: E0310 19:14:36.870909 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8c1ba054-6941-4e52-b792-250287f25d92" containerName="ovn-northd" Mar 10 19:14:36 crc kubenswrapper[4861]: I0310 19:14:36.870917 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="8c1ba054-6941-4e52-b792-250287f25d92" containerName="ovn-northd" Mar 10 19:14:36 crc kubenswrapper[4861]: E0310 19:14:36.870929 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8d86917a-2e89-4e29-a1f2-673b0afbf27a" containerName="cinder-api-log" Mar 10 19:14:36 crc kubenswrapper[4861]: I0310 19:14:36.870937 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="8d86917a-2e89-4e29-a1f2-673b0afbf27a" containerName="cinder-api-log" Mar 10 19:14:36 crc kubenswrapper[4861]: E0310 19:14:36.870951 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ca9800e2-ceed-4197-90f1-97d14c918e45" containerName="nova-metadata-log" Mar 10 19:14:36 crc kubenswrapper[4861]: I0310 19:14:36.870959 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="ca9800e2-ceed-4197-90f1-97d14c918e45" containerName="nova-metadata-log" Mar 10 19:14:36 crc kubenswrapper[4861]: E0310 19:14:36.870972 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a0b9c71c-6fe7-4eb9-9f95-87fce0f70caa" containerName="mysql-bootstrap" Mar 10 19:14:36 crc kubenswrapper[4861]: I0310 19:14:36.870979 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="a0b9c71c-6fe7-4eb9-9f95-87fce0f70caa" containerName="mysql-bootstrap" Mar 10 19:14:36 crc kubenswrapper[4861]: E0310 19:14:36.870991 4861 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="04bbfc10-7f55-45a5-8a53-70e994a09bc9" containerName="rsync" Mar 10 19:14:36 crc kubenswrapper[4861]: I0310 19:14:36.870998 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="04bbfc10-7f55-45a5-8a53-70e994a09bc9" containerName="rsync" Mar 10 19:14:36 crc kubenswrapper[4861]: E0310 19:14:36.871016 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2f72ec66-0d64-4a5f-b1c6-17d62a735065" containerName="ovsdb-server" Mar 10 19:14:36 crc kubenswrapper[4861]: I0310 19:14:36.871024 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="2f72ec66-0d64-4a5f-b1c6-17d62a735065" containerName="ovsdb-server" Mar 10 19:14:36 crc kubenswrapper[4861]: E0310 19:14:36.871035 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="509298b8-3d6b-4182-b989-c25c4791ce6b" containerName="glance-httpd" Mar 10 19:14:36 crc kubenswrapper[4861]: I0310 19:14:36.871044 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="509298b8-3d6b-4182-b989-c25c4791ce6b" containerName="glance-httpd" Mar 10 19:14:36 crc kubenswrapper[4861]: E0310 19:14:36.871059 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ca9800e2-ceed-4197-90f1-97d14c918e45" containerName="nova-metadata-metadata" Mar 10 19:14:36 crc kubenswrapper[4861]: I0310 19:14:36.871067 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="ca9800e2-ceed-4197-90f1-97d14c918e45" containerName="nova-metadata-metadata" Mar 10 19:14:36 crc kubenswrapper[4861]: E0310 19:14:36.871077 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a0b9c71c-6fe7-4eb9-9f95-87fce0f70caa" containerName="galera" Mar 10 19:14:36 crc kubenswrapper[4861]: I0310 19:14:36.871085 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="a0b9c71c-6fe7-4eb9-9f95-87fce0f70caa" containerName="galera" Mar 10 19:14:36 crc kubenswrapper[4861]: E0310 19:14:36.871099 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ef3a31e3-d3ba-4f5c-950a-1355bb61f657" 
containerName="glance-log" Mar 10 19:14:36 crc kubenswrapper[4861]: I0310 19:14:36.871106 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="ef3a31e3-d3ba-4f5c-950a-1355bb61f657" containerName="glance-log" Mar 10 19:14:36 crc kubenswrapper[4861]: E0310 19:14:36.871121 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="91e5ef26-1440-4ffb-84d9-2bc5e0bed45c" containerName="oc" Mar 10 19:14:36 crc kubenswrapper[4861]: I0310 19:14:36.871129 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="91e5ef26-1440-4ffb-84d9-2bc5e0bed45c" containerName="oc" Mar 10 19:14:36 crc kubenswrapper[4861]: E0310 19:14:36.871140 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0ba95f55-3cea-4f0b-8f09-c6b4027789f8" containerName="setup-container" Mar 10 19:14:36 crc kubenswrapper[4861]: I0310 19:14:36.871148 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="0ba95f55-3cea-4f0b-8f09-c6b4027789f8" containerName="setup-container" Mar 10 19:14:36 crc kubenswrapper[4861]: E0310 19:14:36.871158 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="692615cd-3dd2-4970-9d35-63073e2403ba" containerName="neutron-httpd" Mar 10 19:14:36 crc kubenswrapper[4861]: I0310 19:14:36.871167 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="692615cd-3dd2-4970-9d35-63073e2403ba" containerName="neutron-httpd" Mar 10 19:14:36 crc kubenswrapper[4861]: E0310 19:14:36.871182 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a31600f9-88a4-4ecb-8da3-84c966bf4a63" containerName="ceilometer-notification-agent" Mar 10 19:14:36 crc kubenswrapper[4861]: I0310 19:14:36.871190 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="a31600f9-88a4-4ecb-8da3-84c966bf4a63" containerName="ceilometer-notification-agent" Mar 10 19:14:36 crc kubenswrapper[4861]: E0310 19:14:36.871201 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="726cec08-5661-4b62-8a44-028b015119e4" 
containerName="ovn-controller" Mar 10 19:14:36 crc kubenswrapper[4861]: I0310 19:14:36.871209 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="726cec08-5661-4b62-8a44-028b015119e4" containerName="ovn-controller" Mar 10 19:14:36 crc kubenswrapper[4861]: E0310 19:14:36.871220 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8c1ba054-6941-4e52-b792-250287f25d92" containerName="openstack-network-exporter" Mar 10 19:14:36 crc kubenswrapper[4861]: I0310 19:14:36.871229 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="8c1ba054-6941-4e52-b792-250287f25d92" containerName="openstack-network-exporter" Mar 10 19:14:36 crc kubenswrapper[4861]: E0310 19:14:36.871239 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="04bbfc10-7f55-45a5-8a53-70e994a09bc9" containerName="swift-recon-cron" Mar 10 19:14:36 crc kubenswrapper[4861]: I0310 19:14:36.871247 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="04bbfc10-7f55-45a5-8a53-70e994a09bc9" containerName="swift-recon-cron" Mar 10 19:14:36 crc kubenswrapper[4861]: E0310 19:14:36.871258 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a31600f9-88a4-4ecb-8da3-84c966bf4a63" containerName="ceilometer-central-agent" Mar 10 19:14:36 crc kubenswrapper[4861]: I0310 19:14:36.871266 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="a31600f9-88a4-4ecb-8da3-84c966bf4a63" containerName="ceilometer-central-agent" Mar 10 19:14:36 crc kubenswrapper[4861]: I0310 19:14:36.871415 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="ca9800e2-ceed-4197-90f1-97d14c918e45" containerName="nova-metadata-log" Mar 10 19:14:36 crc kubenswrapper[4861]: I0310 19:14:36.871431 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="a0b9c71c-6fe7-4eb9-9f95-87fce0f70caa" containerName="galera" Mar 10 19:14:36 crc kubenswrapper[4861]: I0310 19:14:36.871440 4861 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="04bbfc10-7f55-45a5-8a53-70e994a09bc9" containerName="object-server" Mar 10 19:14:36 crc kubenswrapper[4861]: I0310 19:14:36.871450 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="2f72ec66-0d64-4a5f-b1c6-17d62a735065" containerName="ovsdb-server" Mar 10 19:14:36 crc kubenswrapper[4861]: I0310 19:14:36.871461 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="36bd8cd3-7b2c-45fb-b171-aa2884df4e98" containerName="nova-scheduler-scheduler" Mar 10 19:14:36 crc kubenswrapper[4861]: I0310 19:14:36.871472 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="04bbfc10-7f55-45a5-8a53-70e994a09bc9" containerName="account-server" Mar 10 19:14:36 crc kubenswrapper[4861]: I0310 19:14:36.871484 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="a31600f9-88a4-4ecb-8da3-84c966bf4a63" containerName="proxy-httpd" Mar 10 19:14:36 crc kubenswrapper[4861]: I0310 19:14:36.871493 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="930df8f4-7ebf-4425-976f-4f52654586bb" containerName="nova-api-log" Mar 10 19:14:36 crc kubenswrapper[4861]: I0310 19:14:36.871506 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="79b1b96d-52e3-4a16-8fc1-d09188b5ebc1" containerName="barbican-api" Mar 10 19:14:36 crc kubenswrapper[4861]: I0310 19:14:36.871518 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="8d86917a-2e89-4e29-a1f2-673b0afbf27a" containerName="cinder-api" Mar 10 19:14:36 crc kubenswrapper[4861]: I0310 19:14:36.871531 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="ef3a31e3-d3ba-4f5c-950a-1355bb61f657" containerName="glance-httpd" Mar 10 19:14:36 crc kubenswrapper[4861]: I0310 19:14:36.871541 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="ca9800e2-ceed-4197-90f1-97d14c918e45" containerName="nova-metadata-metadata" Mar 10 19:14:36 crc kubenswrapper[4861]: I0310 19:14:36.871555 4861 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="930df8f4-7ebf-4425-976f-4f52654586bb" containerName="nova-api-api" Mar 10 19:14:36 crc kubenswrapper[4861]: I0310 19:14:36.871567 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="04bbfc10-7f55-45a5-8a53-70e994a09bc9" containerName="account-auditor" Mar 10 19:14:36 crc kubenswrapper[4861]: I0310 19:14:36.871576 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="8c1ba054-6941-4e52-b792-250287f25d92" containerName="openstack-network-exporter" Mar 10 19:14:36 crc kubenswrapper[4861]: I0310 19:14:36.871584 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="9fa4a97d-682a-40eb-93e0-5f5167ddb0a0" containerName="rabbitmq" Mar 10 19:14:36 crc kubenswrapper[4861]: I0310 19:14:36.871601 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="04bbfc10-7f55-45a5-8a53-70e994a09bc9" containerName="container-replicator" Mar 10 19:14:36 crc kubenswrapper[4861]: I0310 19:14:36.871614 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="7cd47c3f-4d27-4dae-bac5-a6f81ce01cd2" containerName="kube-state-metrics" Mar 10 19:14:36 crc kubenswrapper[4861]: I0310 19:14:36.871625 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="9140f7c5-893a-4128-85aa-2db96537b483" containerName="placement-api" Mar 10 19:14:36 crc kubenswrapper[4861]: I0310 19:14:36.871634 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="509298b8-3d6b-4182-b989-c25c4791ce6b" containerName="glance-log" Mar 10 19:14:36 crc kubenswrapper[4861]: I0310 19:14:36.871646 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="8d86917a-2e89-4e29-a1f2-673b0afbf27a" containerName="cinder-api-log" Mar 10 19:14:36 crc kubenswrapper[4861]: I0310 19:14:36.871659 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="04bbfc10-7f55-45a5-8a53-70e994a09bc9" containerName="object-replicator" Mar 10 19:14:36 crc kubenswrapper[4861]: I0310 
19:14:36.871667 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="a31600f9-88a4-4ecb-8da3-84c966bf4a63" containerName="ceilometer-central-agent" Mar 10 19:14:36 crc kubenswrapper[4861]: I0310 19:14:36.871681 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="79b1b96d-52e3-4a16-8fc1-d09188b5ebc1" containerName="barbican-api-log" Mar 10 19:14:36 crc kubenswrapper[4861]: I0310 19:14:36.871690 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="04bbfc10-7f55-45a5-8a53-70e994a09bc9" containerName="container-auditor" Mar 10 19:14:36 crc kubenswrapper[4861]: I0310 19:14:36.871702 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="04bbfc10-7f55-45a5-8a53-70e994a09bc9" containerName="object-expirer" Mar 10 19:14:36 crc kubenswrapper[4861]: I0310 19:14:36.871735 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="a31600f9-88a4-4ecb-8da3-84c966bf4a63" containerName="ceilometer-notification-agent" Mar 10 19:14:36 crc kubenswrapper[4861]: I0310 19:14:36.871748 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="91e5ef26-1440-4ffb-84d9-2bc5e0bed45c" containerName="oc" Mar 10 19:14:36 crc kubenswrapper[4861]: I0310 19:14:36.871761 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="04bbfc10-7f55-45a5-8a53-70e994a09bc9" containerName="rsync" Mar 10 19:14:36 crc kubenswrapper[4861]: I0310 19:14:36.871775 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="04bbfc10-7f55-45a5-8a53-70e994a09bc9" containerName="account-reaper" Mar 10 19:14:36 crc kubenswrapper[4861]: I0310 19:14:36.871789 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="04bbfc10-7f55-45a5-8a53-70e994a09bc9" containerName="object-updater" Mar 10 19:14:36 crc kubenswrapper[4861]: I0310 19:14:36.871801 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="ef3a31e3-d3ba-4f5c-950a-1355bb61f657" containerName="glance-log" Mar 10 19:14:36 crc 
kubenswrapper[4861]: I0310 19:14:36.871812 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="fb082653-4ce1-4696-b6fb-e6af12109812" containerName="keystone-api" Mar 10 19:14:36 crc kubenswrapper[4861]: I0310 19:14:36.871823 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="04bbfc10-7f55-45a5-8a53-70e994a09bc9" containerName="account-replicator" Mar 10 19:14:36 crc kubenswrapper[4861]: I0310 19:14:36.871833 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="04bbfc10-7f55-45a5-8a53-70e994a09bc9" containerName="container-updater" Mar 10 19:14:36 crc kubenswrapper[4861]: I0310 19:14:36.871844 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="9140f7c5-893a-4128-85aa-2db96537b483" containerName="placement-log" Mar 10 19:14:36 crc kubenswrapper[4861]: I0310 19:14:36.871855 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="2f72ec66-0d64-4a5f-b1c6-17d62a735065" containerName="ovs-vswitchd" Mar 10 19:14:36 crc kubenswrapper[4861]: I0310 19:14:36.871866 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="04bbfc10-7f55-45a5-8a53-70e994a09bc9" containerName="object-auditor" Mar 10 19:14:36 crc kubenswrapper[4861]: I0310 19:14:36.871877 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="692615cd-3dd2-4970-9d35-63073e2403ba" containerName="neutron-api" Mar 10 19:14:36 crc kubenswrapper[4861]: I0310 19:14:36.871887 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="509298b8-3d6b-4182-b989-c25c4791ce6b" containerName="glance-httpd" Mar 10 19:14:36 crc kubenswrapper[4861]: I0310 19:14:36.871903 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="04bbfc10-7f55-45a5-8a53-70e994a09bc9" containerName="container-server" Mar 10 19:14:36 crc kubenswrapper[4861]: I0310 19:14:36.871914 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="8c1ba054-6941-4e52-b792-250287f25d92" containerName="ovn-northd" Mar 
10 19:14:36 crc kubenswrapper[4861]: I0310 19:14:36.871925 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="0ba95f55-3cea-4f0b-8f09-c6b4027789f8" containerName="rabbitmq" Mar 10 19:14:36 crc kubenswrapper[4861]: I0310 19:14:36.871940 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="41fdd1f2-c0e5-4dbe-aa18-bd6dd1a4754d" containerName="memcached" Mar 10 19:14:36 crc kubenswrapper[4861]: I0310 19:14:36.871953 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="a31600f9-88a4-4ecb-8da3-84c966bf4a63" containerName="sg-core" Mar 10 19:14:36 crc kubenswrapper[4861]: I0310 19:14:36.871963 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="04bbfc10-7f55-45a5-8a53-70e994a09bc9" containerName="swift-recon-cron" Mar 10 19:14:36 crc kubenswrapper[4861]: I0310 19:14:36.871974 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="726cec08-5661-4b62-8a44-028b015119e4" containerName="ovn-controller" Mar 10 19:14:36 crc kubenswrapper[4861]: I0310 19:14:36.871988 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="692615cd-3dd2-4970-9d35-63073e2403ba" containerName="neutron-httpd" Mar 10 19:14:36 crc kubenswrapper[4861]: I0310 19:14:36.873177 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6vlfr" Mar 10 19:14:36 crc kubenswrapper[4861]: I0310 19:14:36.889858 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-6vlfr"] Mar 10 19:14:37 crc kubenswrapper[4861]: I0310 19:14:37.028404 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8b686e9f-336a-4780-bccd-f84cb2dc6f95-utilities\") pod \"redhat-marketplace-6vlfr\" (UID: \"8b686e9f-336a-4780-bccd-f84cb2dc6f95\") " pod="openshift-marketplace/redhat-marketplace-6vlfr" Mar 10 19:14:37 crc kubenswrapper[4861]: I0310 19:14:37.028459 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l8rbw\" (UniqueName: \"kubernetes.io/projected/8b686e9f-336a-4780-bccd-f84cb2dc6f95-kube-api-access-l8rbw\") pod \"redhat-marketplace-6vlfr\" (UID: \"8b686e9f-336a-4780-bccd-f84cb2dc6f95\") " pod="openshift-marketplace/redhat-marketplace-6vlfr" Mar 10 19:14:37 crc kubenswrapper[4861]: I0310 19:14:37.028625 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8b686e9f-336a-4780-bccd-f84cb2dc6f95-catalog-content\") pod \"redhat-marketplace-6vlfr\" (UID: \"8b686e9f-336a-4780-bccd-f84cb2dc6f95\") " pod="openshift-marketplace/redhat-marketplace-6vlfr" Mar 10 19:14:37 crc kubenswrapper[4861]: I0310 19:14:37.130340 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8b686e9f-336a-4780-bccd-f84cb2dc6f95-catalog-content\") pod \"redhat-marketplace-6vlfr\" (UID: \"8b686e9f-336a-4780-bccd-f84cb2dc6f95\") " pod="openshift-marketplace/redhat-marketplace-6vlfr" Mar 10 19:14:37 crc kubenswrapper[4861]: I0310 19:14:37.131031 4861 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8b686e9f-336a-4780-bccd-f84cb2dc6f95-catalog-content\") pod \"redhat-marketplace-6vlfr\" (UID: \"8b686e9f-336a-4780-bccd-f84cb2dc6f95\") " pod="openshift-marketplace/redhat-marketplace-6vlfr" Mar 10 19:14:37 crc kubenswrapper[4861]: I0310 19:14:37.131309 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8b686e9f-336a-4780-bccd-f84cb2dc6f95-utilities\") pod \"redhat-marketplace-6vlfr\" (UID: \"8b686e9f-336a-4780-bccd-f84cb2dc6f95\") " pod="openshift-marketplace/redhat-marketplace-6vlfr" Mar 10 19:14:37 crc kubenswrapper[4861]: I0310 19:14:37.131660 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l8rbw\" (UniqueName: \"kubernetes.io/projected/8b686e9f-336a-4780-bccd-f84cb2dc6f95-kube-api-access-l8rbw\") pod \"redhat-marketplace-6vlfr\" (UID: \"8b686e9f-336a-4780-bccd-f84cb2dc6f95\") " pod="openshift-marketplace/redhat-marketplace-6vlfr" Mar 10 19:14:37 crc kubenswrapper[4861]: I0310 19:14:37.131878 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8b686e9f-336a-4780-bccd-f84cb2dc6f95-utilities\") pod \"redhat-marketplace-6vlfr\" (UID: \"8b686e9f-336a-4780-bccd-f84cb2dc6f95\") " pod="openshift-marketplace/redhat-marketplace-6vlfr" Mar 10 19:14:37 crc kubenswrapper[4861]: I0310 19:14:37.166626 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l8rbw\" (UniqueName: \"kubernetes.io/projected/8b686e9f-336a-4780-bccd-f84cb2dc6f95-kube-api-access-l8rbw\") pod \"redhat-marketplace-6vlfr\" (UID: \"8b686e9f-336a-4780-bccd-f84cb2dc6f95\") " pod="openshift-marketplace/redhat-marketplace-6vlfr" Mar 10 19:14:37 crc kubenswrapper[4861]: I0310 19:14:37.187782 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6vlfr" Mar 10 19:14:37 crc kubenswrapper[4861]: I0310 19:14:37.521764 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-6vlfr"] Mar 10 19:14:37 crc kubenswrapper[4861]: W0310 19:14:37.531910 4861 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8b686e9f_336a_4780_bccd_f84cb2dc6f95.slice/crio-523eda261fcd9bddf799f3db3eb271640320243704e27302fc7ec5d7a5ce4fcc WatchSource:0}: Error finding container 523eda261fcd9bddf799f3db3eb271640320243704e27302fc7ec5d7a5ce4fcc: Status 404 returned error can't find the container with id 523eda261fcd9bddf799f3db3eb271640320243704e27302fc7ec5d7a5ce4fcc Mar 10 19:14:37 crc kubenswrapper[4861]: I0310 19:14:37.562765 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6vlfr" event={"ID":"8b686e9f-336a-4780-bccd-f84cb2dc6f95","Type":"ContainerStarted","Data":"523eda261fcd9bddf799f3db3eb271640320243704e27302fc7ec5d7a5ce4fcc"} Mar 10 19:14:38 crc kubenswrapper[4861]: I0310 19:14:38.576974 4861 generic.go:334] "Generic (PLEG): container finished" podID="8b686e9f-336a-4780-bccd-f84cb2dc6f95" containerID="0f6fc763e79df23b082c1a7bbcda9d511f47b61ed1f69719645f30221f5e0baa" exitCode=0 Mar 10 19:14:38 crc kubenswrapper[4861]: I0310 19:14:38.577091 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6vlfr" event={"ID":"8b686e9f-336a-4780-bccd-f84cb2dc6f95","Type":"ContainerDied","Data":"0f6fc763e79df23b082c1a7bbcda9d511f47b61ed1f69719645f30221f5e0baa"} Mar 10 19:14:39 crc kubenswrapper[4861]: I0310 19:14:39.590362 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6vlfr" 
event={"ID":"8b686e9f-336a-4780-bccd-f84cb2dc6f95","Type":"ContainerStarted","Data":"d37bd76ddcffdbb995c4b0c24d1754c55623fafd13474127f555afd752ca8ddb"} Mar 10 19:14:40 crc kubenswrapper[4861]: I0310 19:14:40.605860 4861 generic.go:334] "Generic (PLEG): container finished" podID="8b686e9f-336a-4780-bccd-f84cb2dc6f95" containerID="d37bd76ddcffdbb995c4b0c24d1754c55623fafd13474127f555afd752ca8ddb" exitCode=0 Mar 10 19:14:40 crc kubenswrapper[4861]: I0310 19:14:40.605947 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6vlfr" event={"ID":"8b686e9f-336a-4780-bccd-f84cb2dc6f95","Type":"ContainerDied","Data":"d37bd76ddcffdbb995c4b0c24d1754c55623fafd13474127f555afd752ca8ddb"} Mar 10 19:14:41 crc kubenswrapper[4861]: I0310 19:14:41.618159 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6vlfr" event={"ID":"8b686e9f-336a-4780-bccd-f84cb2dc6f95","Type":"ContainerStarted","Data":"eb0948600904033672bd393e75bac17db272eba26a9ca81265e49c908478d728"} Mar 10 19:14:41 crc kubenswrapper[4861]: I0310 19:14:41.641766 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-6vlfr" podStartSLOduration=3.175782908 podStartE2EDuration="5.641738182s" podCreationTimestamp="2026-03-10 19:14:36 +0000 UTC" firstStartedPulling="2026-03-10 19:14:38.579802512 +0000 UTC m=+1622.343238502" lastFinishedPulling="2026-03-10 19:14:41.045757776 +0000 UTC m=+1624.809193776" observedRunningTime="2026-03-10 19:14:41.637806624 +0000 UTC m=+1625.401242674" watchObservedRunningTime="2026-03-10 19:14:41.641738182 +0000 UTC m=+1625.405174162" Mar 10 19:14:47 crc kubenswrapper[4861]: I0310 19:14:47.188787 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-6vlfr" Mar 10 19:14:47 crc kubenswrapper[4861]: I0310 19:14:47.190650 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="unhealthy" pod="openshift-marketplace/redhat-marketplace-6vlfr" Mar 10 19:14:47 crc kubenswrapper[4861]: I0310 19:14:47.294464 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-6vlfr" Mar 10 19:14:47 crc kubenswrapper[4861]: I0310 19:14:47.761227 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-6vlfr" Mar 10 19:14:47 crc kubenswrapper[4861]: I0310 19:14:47.836302 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-6vlfr"] Mar 10 19:14:49 crc kubenswrapper[4861]: I0310 19:14:49.711473 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-6vlfr" podUID="8b686e9f-336a-4780-bccd-f84cb2dc6f95" containerName="registry-server" containerID="cri-o://eb0948600904033672bd393e75bac17db272eba26a9ca81265e49c908478d728" gracePeriod=2 Mar 10 19:14:50 crc kubenswrapper[4861]: I0310 19:14:50.110558 4861 scope.go:117] "RemoveContainer" containerID="204dd70fb98d293a2d4953f3f39ed6ce4896a345296694989ea6431bab405965" Mar 10 19:14:50 crc kubenswrapper[4861]: I0310 19:14:50.248966 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6vlfr" Mar 10 19:14:50 crc kubenswrapper[4861]: I0310 19:14:50.356400 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l8rbw\" (UniqueName: \"kubernetes.io/projected/8b686e9f-336a-4780-bccd-f84cb2dc6f95-kube-api-access-l8rbw\") pod \"8b686e9f-336a-4780-bccd-f84cb2dc6f95\" (UID: \"8b686e9f-336a-4780-bccd-f84cb2dc6f95\") " Mar 10 19:14:50 crc kubenswrapper[4861]: I0310 19:14:50.356934 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8b686e9f-336a-4780-bccd-f84cb2dc6f95-utilities\") pod \"8b686e9f-336a-4780-bccd-f84cb2dc6f95\" (UID: \"8b686e9f-336a-4780-bccd-f84cb2dc6f95\") " Mar 10 19:14:50 crc kubenswrapper[4861]: I0310 19:14:50.357095 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8b686e9f-336a-4780-bccd-f84cb2dc6f95-catalog-content\") pod \"8b686e9f-336a-4780-bccd-f84cb2dc6f95\" (UID: \"8b686e9f-336a-4780-bccd-f84cb2dc6f95\") " Mar 10 19:14:50 crc kubenswrapper[4861]: I0310 19:14:50.358466 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8b686e9f-336a-4780-bccd-f84cb2dc6f95-utilities" (OuterVolumeSpecName: "utilities") pod "8b686e9f-336a-4780-bccd-f84cb2dc6f95" (UID: "8b686e9f-336a-4780-bccd-f84cb2dc6f95"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 10 19:14:50 crc kubenswrapper[4861]: I0310 19:14:50.363786 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8b686e9f-336a-4780-bccd-f84cb2dc6f95-kube-api-access-l8rbw" (OuterVolumeSpecName: "kube-api-access-l8rbw") pod "8b686e9f-336a-4780-bccd-f84cb2dc6f95" (UID: "8b686e9f-336a-4780-bccd-f84cb2dc6f95"). InnerVolumeSpecName "kube-api-access-l8rbw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 10 19:14:50 crc kubenswrapper[4861]: I0310 19:14:50.459336 4861 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8b686e9f-336a-4780-bccd-f84cb2dc6f95-utilities\") on node \"crc\" DevicePath \"\"" Mar 10 19:14:50 crc kubenswrapper[4861]: I0310 19:14:50.459451 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l8rbw\" (UniqueName: \"kubernetes.io/projected/8b686e9f-336a-4780-bccd-f84cb2dc6f95-kube-api-access-l8rbw\") on node \"crc\" DevicePath \"\"" Mar 10 19:14:50 crc kubenswrapper[4861]: I0310 19:14:50.489106 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8b686e9f-336a-4780-bccd-f84cb2dc6f95-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8b686e9f-336a-4780-bccd-f84cb2dc6f95" (UID: "8b686e9f-336a-4780-bccd-f84cb2dc6f95"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 10 19:14:50 crc kubenswrapper[4861]: I0310 19:14:50.561224 4861 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8b686e9f-336a-4780-bccd-f84cb2dc6f95-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 10 19:14:50 crc kubenswrapper[4861]: I0310 19:14:50.726589 4861 generic.go:334] "Generic (PLEG): container finished" podID="8b686e9f-336a-4780-bccd-f84cb2dc6f95" containerID="eb0948600904033672bd393e75bac17db272eba26a9ca81265e49c908478d728" exitCode=0 Mar 10 19:14:50 crc kubenswrapper[4861]: I0310 19:14:50.726654 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6vlfr" event={"ID":"8b686e9f-336a-4780-bccd-f84cb2dc6f95","Type":"ContainerDied","Data":"eb0948600904033672bd393e75bac17db272eba26a9ca81265e49c908478d728"} Mar 10 19:14:50 crc kubenswrapper[4861]: I0310 19:14:50.726688 4861 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6vlfr" Mar 10 19:14:50 crc kubenswrapper[4861]: I0310 19:14:50.726696 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6vlfr" event={"ID":"8b686e9f-336a-4780-bccd-f84cb2dc6f95","Type":"ContainerDied","Data":"523eda261fcd9bddf799f3db3eb271640320243704e27302fc7ec5d7a5ce4fcc"} Mar 10 19:14:50 crc kubenswrapper[4861]: I0310 19:14:50.726728 4861 scope.go:117] "RemoveContainer" containerID="eb0948600904033672bd393e75bac17db272eba26a9ca81265e49c908478d728" Mar 10 19:14:50 crc kubenswrapper[4861]: I0310 19:14:50.780428 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-6vlfr"] Mar 10 19:14:50 crc kubenswrapper[4861]: I0310 19:14:50.789895 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-6vlfr"] Mar 10 19:14:50 crc kubenswrapper[4861]: I0310 19:14:50.790000 4861 scope.go:117] "RemoveContainer" containerID="d37bd76ddcffdbb995c4b0c24d1754c55623fafd13474127f555afd752ca8ddb" Mar 10 19:14:50 crc kubenswrapper[4861]: I0310 19:14:50.851119 4861 scope.go:117] "RemoveContainer" containerID="0f6fc763e79df23b082c1a7bbcda9d511f47b61ed1f69719645f30221f5e0baa" Mar 10 19:14:50 crc kubenswrapper[4861]: I0310 19:14:50.918870 4861 scope.go:117] "RemoveContainer" containerID="eb0948600904033672bd393e75bac17db272eba26a9ca81265e49c908478d728" Mar 10 19:14:50 crc kubenswrapper[4861]: E0310 19:14:50.919399 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"eb0948600904033672bd393e75bac17db272eba26a9ca81265e49c908478d728\": container with ID starting with eb0948600904033672bd393e75bac17db272eba26a9ca81265e49c908478d728 not found: ID does not exist" containerID="eb0948600904033672bd393e75bac17db272eba26a9ca81265e49c908478d728" Mar 10 19:14:50 crc kubenswrapper[4861]: I0310 19:14:50.919436 4861 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eb0948600904033672bd393e75bac17db272eba26a9ca81265e49c908478d728"} err="failed to get container status \"eb0948600904033672bd393e75bac17db272eba26a9ca81265e49c908478d728\": rpc error: code = NotFound desc = could not find container \"eb0948600904033672bd393e75bac17db272eba26a9ca81265e49c908478d728\": container with ID starting with eb0948600904033672bd393e75bac17db272eba26a9ca81265e49c908478d728 not found: ID does not exist" Mar 10 19:14:50 crc kubenswrapper[4861]: I0310 19:14:50.919462 4861 scope.go:117] "RemoveContainer" containerID="d37bd76ddcffdbb995c4b0c24d1754c55623fafd13474127f555afd752ca8ddb" Mar 10 19:14:50 crc kubenswrapper[4861]: E0310 19:14:50.919874 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d37bd76ddcffdbb995c4b0c24d1754c55623fafd13474127f555afd752ca8ddb\": container with ID starting with d37bd76ddcffdbb995c4b0c24d1754c55623fafd13474127f555afd752ca8ddb not found: ID does not exist" containerID="d37bd76ddcffdbb995c4b0c24d1754c55623fafd13474127f555afd752ca8ddb" Mar 10 19:14:50 crc kubenswrapper[4861]: I0310 19:14:50.919923 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d37bd76ddcffdbb995c4b0c24d1754c55623fafd13474127f555afd752ca8ddb"} err="failed to get container status \"d37bd76ddcffdbb995c4b0c24d1754c55623fafd13474127f555afd752ca8ddb\": rpc error: code = NotFound desc = could not find container \"d37bd76ddcffdbb995c4b0c24d1754c55623fafd13474127f555afd752ca8ddb\": container with ID starting with d37bd76ddcffdbb995c4b0c24d1754c55623fafd13474127f555afd752ca8ddb not found: ID does not exist" Mar 10 19:14:50 crc kubenswrapper[4861]: I0310 19:14:50.919978 4861 scope.go:117] "RemoveContainer" containerID="0f6fc763e79df23b082c1a7bbcda9d511f47b61ed1f69719645f30221f5e0baa" Mar 10 19:14:50 crc kubenswrapper[4861]: E0310 
19:14:50.920446 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0f6fc763e79df23b082c1a7bbcda9d511f47b61ed1f69719645f30221f5e0baa\": container with ID starting with 0f6fc763e79df23b082c1a7bbcda9d511f47b61ed1f69719645f30221f5e0baa not found: ID does not exist" containerID="0f6fc763e79df23b082c1a7bbcda9d511f47b61ed1f69719645f30221f5e0baa" Mar 10 19:14:50 crc kubenswrapper[4861]: I0310 19:14:50.920518 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0f6fc763e79df23b082c1a7bbcda9d511f47b61ed1f69719645f30221f5e0baa"} err="failed to get container status \"0f6fc763e79df23b082c1a7bbcda9d511f47b61ed1f69719645f30221f5e0baa\": rpc error: code = NotFound desc = could not find container \"0f6fc763e79df23b082c1a7bbcda9d511f47b61ed1f69719645f30221f5e0baa\": container with ID starting with 0f6fc763e79df23b082c1a7bbcda9d511f47b61ed1f69719645f30221f5e0baa not found: ID does not exist" Mar 10 19:14:51 crc kubenswrapper[4861]: I0310 19:14:51.012431 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8b686e9f-336a-4780-bccd-f84cb2dc6f95" path="/var/lib/kubelet/pods/8b686e9f-336a-4780-bccd-f84cb2dc6f95/volumes" Mar 10 19:15:00 crc kubenswrapper[4861]: I0310 19:15:00.159458 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29552835-9kqxw"] Mar 10 19:15:00 crc kubenswrapper[4861]: E0310 19:15:00.160497 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8b686e9f-336a-4780-bccd-f84cb2dc6f95" containerName="extract-content" Mar 10 19:15:00 crc kubenswrapper[4861]: I0310 19:15:00.160521 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="8b686e9f-336a-4780-bccd-f84cb2dc6f95" containerName="extract-content" Mar 10 19:15:00 crc kubenswrapper[4861]: E0310 19:15:00.160547 4861 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="8b686e9f-336a-4780-bccd-f84cb2dc6f95" containerName="registry-server" Mar 10 19:15:00 crc kubenswrapper[4861]: I0310 19:15:00.160559 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="8b686e9f-336a-4780-bccd-f84cb2dc6f95" containerName="registry-server" Mar 10 19:15:00 crc kubenswrapper[4861]: E0310 19:15:00.160580 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8b686e9f-336a-4780-bccd-f84cb2dc6f95" containerName="extract-utilities" Mar 10 19:15:00 crc kubenswrapper[4861]: I0310 19:15:00.160594 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="8b686e9f-336a-4780-bccd-f84cb2dc6f95" containerName="extract-utilities" Mar 10 19:15:00 crc kubenswrapper[4861]: I0310 19:15:00.160916 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="8b686e9f-336a-4780-bccd-f84cb2dc6f95" containerName="registry-server" Mar 10 19:15:00 crc kubenswrapper[4861]: I0310 19:15:00.161593 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29552835-9kqxw" Mar 10 19:15:00 crc kubenswrapper[4861]: I0310 19:15:00.165732 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Mar 10 19:15:00 crc kubenswrapper[4861]: I0310 19:15:00.168597 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Mar 10 19:15:00 crc kubenswrapper[4861]: I0310 19:15:00.170857 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29552835-9kqxw"] Mar 10 19:15:00 crc kubenswrapper[4861]: I0310 19:15:00.219182 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tzd8c\" (UniqueName: \"kubernetes.io/projected/521110b4-5c02-417b-b8bd-19258d60721a-kube-api-access-tzd8c\") pod 
\"collect-profiles-29552835-9kqxw\" (UID: \"521110b4-5c02-417b-b8bd-19258d60721a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29552835-9kqxw" Mar 10 19:15:00 crc kubenswrapper[4861]: I0310 19:15:00.219344 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/521110b4-5c02-417b-b8bd-19258d60721a-config-volume\") pod \"collect-profiles-29552835-9kqxw\" (UID: \"521110b4-5c02-417b-b8bd-19258d60721a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29552835-9kqxw" Mar 10 19:15:00 crc kubenswrapper[4861]: I0310 19:15:00.219449 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/521110b4-5c02-417b-b8bd-19258d60721a-secret-volume\") pod \"collect-profiles-29552835-9kqxw\" (UID: \"521110b4-5c02-417b-b8bd-19258d60721a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29552835-9kqxw" Mar 10 19:15:00 crc kubenswrapper[4861]: I0310 19:15:00.321195 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/521110b4-5c02-417b-b8bd-19258d60721a-secret-volume\") pod \"collect-profiles-29552835-9kqxw\" (UID: \"521110b4-5c02-417b-b8bd-19258d60721a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29552835-9kqxw" Mar 10 19:15:00 crc kubenswrapper[4861]: I0310 19:15:00.321356 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tzd8c\" (UniqueName: \"kubernetes.io/projected/521110b4-5c02-417b-b8bd-19258d60721a-kube-api-access-tzd8c\") pod \"collect-profiles-29552835-9kqxw\" (UID: \"521110b4-5c02-417b-b8bd-19258d60721a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29552835-9kqxw" Mar 10 19:15:00 crc kubenswrapper[4861]: I0310 19:15:00.321445 4861 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/521110b4-5c02-417b-b8bd-19258d60721a-config-volume\") pod \"collect-profiles-29552835-9kqxw\" (UID: \"521110b4-5c02-417b-b8bd-19258d60721a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29552835-9kqxw" Mar 10 19:15:00 crc kubenswrapper[4861]: I0310 19:15:00.322578 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/521110b4-5c02-417b-b8bd-19258d60721a-config-volume\") pod \"collect-profiles-29552835-9kqxw\" (UID: \"521110b4-5c02-417b-b8bd-19258d60721a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29552835-9kqxw" Mar 10 19:15:00 crc kubenswrapper[4861]: I0310 19:15:00.329886 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/521110b4-5c02-417b-b8bd-19258d60721a-secret-volume\") pod \"collect-profiles-29552835-9kqxw\" (UID: \"521110b4-5c02-417b-b8bd-19258d60721a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29552835-9kqxw" Mar 10 19:15:00 crc kubenswrapper[4861]: I0310 19:15:00.341188 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tzd8c\" (UniqueName: \"kubernetes.io/projected/521110b4-5c02-417b-b8bd-19258d60721a-kube-api-access-tzd8c\") pod \"collect-profiles-29552835-9kqxw\" (UID: \"521110b4-5c02-417b-b8bd-19258d60721a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29552835-9kqxw" Mar 10 19:15:00 crc kubenswrapper[4861]: I0310 19:15:00.491377 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29552835-9kqxw" Mar 10 19:15:00 crc kubenswrapper[4861]: I0310 19:15:00.975345 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29552835-9kqxw"] Mar 10 19:15:01 crc kubenswrapper[4861]: I0310 19:15:01.842787 4861 generic.go:334] "Generic (PLEG): container finished" podID="521110b4-5c02-417b-b8bd-19258d60721a" containerID="2ad6ead639225a06cd58adc72e687680a0246472ce173a4d7e75b0635551d57b" exitCode=0 Mar 10 19:15:01 crc kubenswrapper[4861]: I0310 19:15:01.842868 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29552835-9kqxw" event={"ID":"521110b4-5c02-417b-b8bd-19258d60721a","Type":"ContainerDied","Data":"2ad6ead639225a06cd58adc72e687680a0246472ce173a4d7e75b0635551d57b"} Mar 10 19:15:01 crc kubenswrapper[4861]: I0310 19:15:01.843147 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29552835-9kqxw" event={"ID":"521110b4-5c02-417b-b8bd-19258d60721a","Type":"ContainerStarted","Data":"e03ce1f210a19a95c48e4bd7ff0947b9f44af3a2a2c62d5504c2ece74ffed41d"} Mar 10 19:15:03 crc kubenswrapper[4861]: I0310 19:15:03.216448 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29552835-9kqxw" Mar 10 19:15:03 crc kubenswrapper[4861]: I0310 19:15:03.264844 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tzd8c\" (UniqueName: \"kubernetes.io/projected/521110b4-5c02-417b-b8bd-19258d60721a-kube-api-access-tzd8c\") pod \"521110b4-5c02-417b-b8bd-19258d60721a\" (UID: \"521110b4-5c02-417b-b8bd-19258d60721a\") " Mar 10 19:15:03 crc kubenswrapper[4861]: I0310 19:15:03.265072 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/521110b4-5c02-417b-b8bd-19258d60721a-secret-volume\") pod \"521110b4-5c02-417b-b8bd-19258d60721a\" (UID: \"521110b4-5c02-417b-b8bd-19258d60721a\") " Mar 10 19:15:03 crc kubenswrapper[4861]: I0310 19:15:03.265148 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/521110b4-5c02-417b-b8bd-19258d60721a-config-volume\") pod \"521110b4-5c02-417b-b8bd-19258d60721a\" (UID: \"521110b4-5c02-417b-b8bd-19258d60721a\") " Mar 10 19:15:03 crc kubenswrapper[4861]: I0310 19:15:03.266522 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/521110b4-5c02-417b-b8bd-19258d60721a-config-volume" (OuterVolumeSpecName: "config-volume") pod "521110b4-5c02-417b-b8bd-19258d60721a" (UID: "521110b4-5c02-417b-b8bd-19258d60721a"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 10 19:15:03 crc kubenswrapper[4861]: I0310 19:15:03.273151 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/521110b4-5c02-417b-b8bd-19258d60721a-kube-api-access-tzd8c" (OuterVolumeSpecName: "kube-api-access-tzd8c") pod "521110b4-5c02-417b-b8bd-19258d60721a" (UID: "521110b4-5c02-417b-b8bd-19258d60721a"). 
InnerVolumeSpecName "kube-api-access-tzd8c". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 10 19:15:03 crc kubenswrapper[4861]: I0310 19:15:03.275617 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/521110b4-5c02-417b-b8bd-19258d60721a-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "521110b4-5c02-417b-b8bd-19258d60721a" (UID: "521110b4-5c02-417b-b8bd-19258d60721a"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 10 19:15:03 crc kubenswrapper[4861]: I0310 19:15:03.366431 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tzd8c\" (UniqueName: \"kubernetes.io/projected/521110b4-5c02-417b-b8bd-19258d60721a-kube-api-access-tzd8c\") on node \"crc\" DevicePath \"\"" Mar 10 19:15:03 crc kubenswrapper[4861]: I0310 19:15:03.366483 4861 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/521110b4-5c02-417b-b8bd-19258d60721a-secret-volume\") on node \"crc\" DevicePath \"\"" Mar 10 19:15:03 crc kubenswrapper[4861]: I0310 19:15:03.366518 4861 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/521110b4-5c02-417b-b8bd-19258d60721a-config-volume\") on node \"crc\" DevicePath \"\"" Mar 10 19:15:03 crc kubenswrapper[4861]: I0310 19:15:03.867631 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29552835-9kqxw" event={"ID":"521110b4-5c02-417b-b8bd-19258d60721a","Type":"ContainerDied","Data":"e03ce1f210a19a95c48e4bd7ff0947b9f44af3a2a2c62d5504c2ece74ffed41d"} Mar 10 19:15:03 crc kubenswrapper[4861]: I0310 19:15:03.867692 4861 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e03ce1f210a19a95c48e4bd7ff0947b9f44af3a2a2c62d5504c2ece74ffed41d" Mar 10 19:15:03 crc kubenswrapper[4861]: I0310 19:15:03.867763 4861 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29552835-9kqxw" Mar 10 19:15:51 crc kubenswrapper[4861]: I0310 19:15:51.074072 4861 scope.go:117] "RemoveContainer" containerID="d0f2796a31ad17b58880cd786e5deccbce67bb62aee3d065d561b2ecb1de5145" Mar 10 19:15:51 crc kubenswrapper[4861]: I0310 19:15:51.115158 4861 scope.go:117] "RemoveContainer" containerID="71ebac9dc4f5ed0bb97bc92230d256314e35d04d7a598c26018aa0d732b96161" Mar 10 19:15:51 crc kubenswrapper[4861]: I0310 19:15:51.156745 4861 scope.go:117] "RemoveContainer" containerID="8489d2057b4b67b210b16c9a8105ade751f1d08e6e38f44ca2aeeea6b87c0618" Mar 10 19:15:51 crc kubenswrapper[4861]: I0310 19:15:51.193042 4861 scope.go:117] "RemoveContainer" containerID="f45dc011d5769ebd55f59d263575ef76c5b9b2f8cb40538674f36bf984d56643" Mar 10 19:15:51 crc kubenswrapper[4861]: I0310 19:15:51.220198 4861 scope.go:117] "RemoveContainer" containerID="b9c33f292294bee8955347aea33194f1e9cfcaee85d9226a6d22f5e1cc7c48e1" Mar 10 19:15:51 crc kubenswrapper[4861]: I0310 19:15:51.248910 4861 scope.go:117] "RemoveContainer" containerID="abf5b871d4a2a457c99ae258708caf4c2792ff3f5f74cbe1f6603b5e8ac728b8" Mar 10 19:15:51 crc kubenswrapper[4861]: I0310 19:15:51.277744 4861 scope.go:117] "RemoveContainer" containerID="048bd7dd923085ffacce2a66e4d63d31afb2850a9c661487d041fc5c2b7edd59" Mar 10 19:15:51 crc kubenswrapper[4861]: I0310 19:15:51.303101 4861 scope.go:117] "RemoveContainer" containerID="bc9707b6901732303cb7926eee46191715b22462440f7e5d13b04d9979f6ad15" Mar 10 19:15:51 crc kubenswrapper[4861]: I0310 19:15:51.319762 4861 scope.go:117] "RemoveContainer" containerID="979a346e65c8ce0258d18338e42cfc894cd1f93119f4e9b0471bb23704ff0183" Mar 10 19:15:51 crc kubenswrapper[4861]: I0310 19:15:51.340253 4861 scope.go:117] "RemoveContainer" containerID="960d469098c306d03a004474f14e0fa70ddd0b37ac373fac70d6eddda456c35f" Mar 10 19:15:51 crc kubenswrapper[4861]: I0310 19:15:51.374556 4861 
scope.go:117] "RemoveContainer" containerID="a975e4595b3b7dd38d3d3df2c54181679f5a2f3131f2fe87bf6bb8bc23d88904" Mar 10 19:15:51 crc kubenswrapper[4861]: I0310 19:15:51.399841 4861 scope.go:117] "RemoveContainer" containerID="ee002433f78bbf5cdee3b5fdcbfa95ad2359ab94c9a3617972159ab8bb34f811" Mar 10 19:15:51 crc kubenswrapper[4861]: I0310 19:15:51.426517 4861 scope.go:117] "RemoveContainer" containerID="2db2d992b0ae0a470655ee33ae92a70c36f104d929da6f63364353103bc2cf93" Mar 10 19:15:51 crc kubenswrapper[4861]: I0310 19:15:51.451063 4861 scope.go:117] "RemoveContainer" containerID="69bafd7dad1f5a2b95d84e1f5051045a4ea3a3973fecf9fd65590141943352de" Mar 10 19:15:51 crc kubenswrapper[4861]: I0310 19:15:51.474392 4861 scope.go:117] "RemoveContainer" containerID="b8ab7c40d9ea1021c05dabfee1dbe230b90ce92bb3c83b4fcb3e6a5f1f04b3dc" Mar 10 19:15:51 crc kubenswrapper[4861]: I0310 19:15:51.503256 4861 scope.go:117] "RemoveContainer" containerID="16ca45cd16cc1584ad8cef9a1b1b8da9c99f78ab005875f8da178f12e538625d" Mar 10 19:15:51 crc kubenswrapper[4861]: I0310 19:15:51.532478 4861 scope.go:117] "RemoveContainer" containerID="63cfc8310694fae8a67d0927f3abb52f73bbeca623136d9dda1e0c74522bd5e6" Mar 10 19:16:00 crc kubenswrapper[4861]: I0310 19:16:00.162100 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29552836-ldssf"] Mar 10 19:16:00 crc kubenswrapper[4861]: E0310 19:16:00.163259 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="521110b4-5c02-417b-b8bd-19258d60721a" containerName="collect-profiles" Mar 10 19:16:00 crc kubenswrapper[4861]: I0310 19:16:00.163289 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="521110b4-5c02-417b-b8bd-19258d60721a" containerName="collect-profiles" Mar 10 19:16:00 crc kubenswrapper[4861]: I0310 19:16:00.163668 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="521110b4-5c02-417b-b8bd-19258d60721a" containerName="collect-profiles" Mar 10 19:16:00 crc kubenswrapper[4861]: 
I0310 19:16:00.164659 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29552836-ldssf" Mar 10 19:16:00 crc kubenswrapper[4861]: I0310 19:16:00.169799 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 10 19:16:00 crc kubenswrapper[4861]: I0310 19:16:00.170094 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 10 19:16:00 crc kubenswrapper[4861]: I0310 19:16:00.170227 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-gfbj2" Mar 10 19:16:00 crc kubenswrapper[4861]: I0310 19:16:00.174756 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29552836-ldssf"] Mar 10 19:16:00 crc kubenswrapper[4861]: I0310 19:16:00.324342 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-52wg4\" (UniqueName: \"kubernetes.io/projected/3897ce4d-cb11-41c9-a417-954b090c49d8-kube-api-access-52wg4\") pod \"auto-csr-approver-29552836-ldssf\" (UID: \"3897ce4d-cb11-41c9-a417-954b090c49d8\") " pod="openshift-infra/auto-csr-approver-29552836-ldssf" Mar 10 19:16:00 crc kubenswrapper[4861]: I0310 19:16:00.426740 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-52wg4\" (UniqueName: \"kubernetes.io/projected/3897ce4d-cb11-41c9-a417-954b090c49d8-kube-api-access-52wg4\") pod \"auto-csr-approver-29552836-ldssf\" (UID: \"3897ce4d-cb11-41c9-a417-954b090c49d8\") " pod="openshift-infra/auto-csr-approver-29552836-ldssf" Mar 10 19:16:00 crc kubenswrapper[4861]: I0310 19:16:00.454786 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-52wg4\" (UniqueName: \"kubernetes.io/projected/3897ce4d-cb11-41c9-a417-954b090c49d8-kube-api-access-52wg4\") pod 
\"auto-csr-approver-29552836-ldssf\" (UID: \"3897ce4d-cb11-41c9-a417-954b090c49d8\") " pod="openshift-infra/auto-csr-approver-29552836-ldssf" Mar 10 19:16:00 crc kubenswrapper[4861]: I0310 19:16:00.494829 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29552836-ldssf" Mar 10 19:16:00 crc kubenswrapper[4861]: I0310 19:16:00.952385 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29552836-ldssf"] Mar 10 19:16:01 crc kubenswrapper[4861]: I0310 19:16:01.514256 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29552836-ldssf" event={"ID":"3897ce4d-cb11-41c9-a417-954b090c49d8","Type":"ContainerStarted","Data":"be14e83511d589882f365635d88f2ecb2c2f2902fcd0458566612ca9c7480d85"} Mar 10 19:16:02 crc kubenswrapper[4861]: I0310 19:16:02.525448 4861 generic.go:334] "Generic (PLEG): container finished" podID="3897ce4d-cb11-41c9-a417-954b090c49d8" containerID="8bf7c3623a0c4efb8b1ec2dd3aff93191a484c532e18becc483c44b072ce97b6" exitCode=0 Mar 10 19:16:02 crc kubenswrapper[4861]: I0310 19:16:02.525541 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29552836-ldssf" event={"ID":"3897ce4d-cb11-41c9-a417-954b090c49d8","Type":"ContainerDied","Data":"8bf7c3623a0c4efb8b1ec2dd3aff93191a484c532e18becc483c44b072ce97b6"} Mar 10 19:16:03 crc kubenswrapper[4861]: I0310 19:16:03.950422 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29552836-ldssf" Mar 10 19:16:04 crc kubenswrapper[4861]: I0310 19:16:04.084502 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-52wg4\" (UniqueName: \"kubernetes.io/projected/3897ce4d-cb11-41c9-a417-954b090c49d8-kube-api-access-52wg4\") pod \"3897ce4d-cb11-41c9-a417-954b090c49d8\" (UID: \"3897ce4d-cb11-41c9-a417-954b090c49d8\") " Mar 10 19:16:04 crc kubenswrapper[4861]: I0310 19:16:04.094060 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3897ce4d-cb11-41c9-a417-954b090c49d8-kube-api-access-52wg4" (OuterVolumeSpecName: "kube-api-access-52wg4") pod "3897ce4d-cb11-41c9-a417-954b090c49d8" (UID: "3897ce4d-cb11-41c9-a417-954b090c49d8"). InnerVolumeSpecName "kube-api-access-52wg4". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 10 19:16:04 crc kubenswrapper[4861]: I0310 19:16:04.187600 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-52wg4\" (UniqueName: \"kubernetes.io/projected/3897ce4d-cb11-41c9-a417-954b090c49d8-kube-api-access-52wg4\") on node \"crc\" DevicePath \"\"" Mar 10 19:16:04 crc kubenswrapper[4861]: I0310 19:16:04.547464 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29552836-ldssf" event={"ID":"3897ce4d-cb11-41c9-a417-954b090c49d8","Type":"ContainerDied","Data":"be14e83511d589882f365635d88f2ecb2c2f2902fcd0458566612ca9c7480d85"} Mar 10 19:16:04 crc kubenswrapper[4861]: I0310 19:16:04.547519 4861 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="be14e83511d589882f365635d88f2ecb2c2f2902fcd0458566612ca9c7480d85" Mar 10 19:16:04 crc kubenswrapper[4861]: I0310 19:16:04.548061 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29552836-ldssf" Mar 10 19:16:05 crc kubenswrapper[4861]: I0310 19:16:05.063688 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29552830-7hhcb"] Mar 10 19:16:05 crc kubenswrapper[4861]: I0310 19:16:05.071900 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29552830-7hhcb"] Mar 10 19:16:06 crc kubenswrapper[4861]: I0310 19:16:06.973522 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4d9397cb-c907-4d9f-ae4c-3f05de941f27" path="/var/lib/kubelet/pods/4d9397cb-c907-4d9f-ae4c-3f05de941f27/volumes" Mar 10 19:16:21 crc kubenswrapper[4861]: I0310 19:16:21.992334 4861 patch_prober.go:28] interesting pod/machine-config-daemon-qttbr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 10 19:16:21 crc kubenswrapper[4861]: I0310 19:16:21.993075 4861 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qttbr" podUID="771189c2-452d-4204-a0b7-abfe9ba62bd0" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 10 19:16:51 crc kubenswrapper[4861]: I0310 19:16:51.881986 4861 scope.go:117] "RemoveContainer" containerID="43f123c0581adc24eaa058b8a1d394322a8ea88fb61d0307abdf6ca7abcab134" Mar 10 19:16:51 crc kubenswrapper[4861]: I0310 19:16:51.915332 4861 scope.go:117] "RemoveContainer" containerID="f7918ba51ca439a75a3ced40d27c3fb45f087302e7b2a88425b3296924759c81" Mar 10 19:16:51 crc kubenswrapper[4861]: I0310 19:16:51.978178 4861 scope.go:117] "RemoveContainer" containerID="eed081b92c9466dcc219501d89f20fbe1648f53ae2509508a4d353b88a2bc628" Mar 10 19:16:51 crc 
kubenswrapper[4861]: I0310 19:16:51.992372 4861 patch_prober.go:28] interesting pod/machine-config-daemon-qttbr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 10 19:16:51 crc kubenswrapper[4861]: I0310 19:16:51.992472 4861 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qttbr" podUID="771189c2-452d-4204-a0b7-abfe9ba62bd0" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 10 19:16:52 crc kubenswrapper[4861]: I0310 19:16:52.056540 4861 scope.go:117] "RemoveContainer" containerID="7803ce28300aa191ada57c67092ad96a7b0054358e22b987a4af6e101f44b55e" Mar 10 19:16:52 crc kubenswrapper[4861]: I0310 19:16:52.086986 4861 scope.go:117] "RemoveContainer" containerID="9b2c6159ab007b38b614fd4c4579cf1344df9f222609b84392777278da63caa0" Mar 10 19:16:52 crc kubenswrapper[4861]: I0310 19:16:52.176916 4861 scope.go:117] "RemoveContainer" containerID="87fb93e566a663de0b618099d7590f0336c0350139f6d4b367758d4d661aeb71" Mar 10 19:16:52 crc kubenswrapper[4861]: I0310 19:16:52.230383 4861 scope.go:117] "RemoveContainer" containerID="cb2314e50a6b4f63e3306cb1d3eefcb1635b36a5b5a0d73ed50ada3f1b695028" Mar 10 19:16:52 crc kubenswrapper[4861]: I0310 19:16:52.302331 4861 scope.go:117] "RemoveContainer" containerID="48b25e23776ff4d9e34f3bdc68c1cd16379a049d6d13bef32e35403bf088e484" Mar 10 19:16:52 crc kubenswrapper[4861]: I0310 19:16:52.334684 4861 scope.go:117] "RemoveContainer" containerID="ec276b3d208faaad4a1ba96c2620fedc6e5b7e5ca9dca86369c390bbfb6ddfdf" Mar 10 19:16:52 crc kubenswrapper[4861]: I0310 19:16:52.372139 4861 scope.go:117] "RemoveContainer" containerID="058c0eed05761c6e3e065155019887bcb381e6a66a552252397daedb1770da87" Mar 10 19:16:52 crc 
kubenswrapper[4861]: I0310 19:16:52.412004 4861 scope.go:117] "RemoveContainer" containerID="06d0f6cf0f2d406fc184a0557e435f68af1044052fead949e95e45b22d9abb5e" Mar 10 19:17:21 crc kubenswrapper[4861]: I0310 19:17:21.992210 4861 patch_prober.go:28] interesting pod/machine-config-daemon-qttbr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 10 19:17:21 crc kubenswrapper[4861]: I0310 19:17:21.992956 4861 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qttbr" podUID="771189c2-452d-4204-a0b7-abfe9ba62bd0" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 10 19:17:21 crc kubenswrapper[4861]: I0310 19:17:21.993027 4861 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-qttbr" Mar 10 19:17:21 crc kubenswrapper[4861]: I0310 19:17:21.994091 4861 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"bb8a532e73b13a25b9b100c2f6a1c525fcee26bf3c9ef709c2f4ca6e8ba75c53"} pod="openshift-machine-config-operator/machine-config-daemon-qttbr" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 10 19:17:21 crc kubenswrapper[4861]: I0310 19:17:21.994193 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-qttbr" podUID="771189c2-452d-4204-a0b7-abfe9ba62bd0" containerName="machine-config-daemon" containerID="cri-o://bb8a532e73b13a25b9b100c2f6a1c525fcee26bf3c9ef709c2f4ca6e8ba75c53" gracePeriod=600 Mar 10 19:17:22 crc kubenswrapper[4861]: E0310 
19:17:22.127755 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qttbr_openshift-machine-config-operator(771189c2-452d-4204-a0b7-abfe9ba62bd0)\"" pod="openshift-machine-config-operator/machine-config-daemon-qttbr" podUID="771189c2-452d-4204-a0b7-abfe9ba62bd0" Mar 10 19:17:22 crc kubenswrapper[4861]: I0310 19:17:22.914654 4861 generic.go:334] "Generic (PLEG): container finished" podID="771189c2-452d-4204-a0b7-abfe9ba62bd0" containerID="bb8a532e73b13a25b9b100c2f6a1c525fcee26bf3c9ef709c2f4ca6e8ba75c53" exitCode=0 Mar 10 19:17:22 crc kubenswrapper[4861]: I0310 19:17:22.914743 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qttbr" event={"ID":"771189c2-452d-4204-a0b7-abfe9ba62bd0","Type":"ContainerDied","Data":"bb8a532e73b13a25b9b100c2f6a1c525fcee26bf3c9ef709c2f4ca6e8ba75c53"} Mar 10 19:17:22 crc kubenswrapper[4861]: I0310 19:17:22.914801 4861 scope.go:117] "RemoveContainer" containerID="af91ea5d3fbd1ec239d0d9d5246031cccff13cb031bdcbb0edc5d3cf4aa77e7d" Mar 10 19:17:22 crc kubenswrapper[4861]: I0310 19:17:22.915495 4861 scope.go:117] "RemoveContainer" containerID="bb8a532e73b13a25b9b100c2f6a1c525fcee26bf3c9ef709c2f4ca6e8ba75c53" Mar 10 19:17:22 crc kubenswrapper[4861]: E0310 19:17:22.916087 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qttbr_openshift-machine-config-operator(771189c2-452d-4204-a0b7-abfe9ba62bd0)\"" pod="openshift-machine-config-operator/machine-config-daemon-qttbr" podUID="771189c2-452d-4204-a0b7-abfe9ba62bd0" Mar 10 19:17:37 crc kubenswrapper[4861]: I0310 19:17:37.958485 4861 scope.go:117] "RemoveContainer" 
containerID="bb8a532e73b13a25b9b100c2f6a1c525fcee26bf3c9ef709c2f4ca6e8ba75c53" Mar 10 19:17:37 crc kubenswrapper[4861]: E0310 19:17:37.959817 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qttbr_openshift-machine-config-operator(771189c2-452d-4204-a0b7-abfe9ba62bd0)\"" pod="openshift-machine-config-operator/machine-config-daemon-qttbr" podUID="771189c2-452d-4204-a0b7-abfe9ba62bd0" Mar 10 19:17:52 crc kubenswrapper[4861]: I0310 19:17:52.692869 4861 scope.go:117] "RemoveContainer" containerID="6d987ece4dd051bd98e8361217e86ae2fa65c0960145192eb7173b18775b49a5" Mar 10 19:17:52 crc kubenswrapper[4861]: I0310 19:17:52.732022 4861 scope.go:117] "RemoveContainer" containerID="5cc8655b0aee62791274d55957ff7a54178e2f1a83cdb5b1207033c268396b15" Mar 10 19:17:52 crc kubenswrapper[4861]: I0310 19:17:52.750807 4861 scope.go:117] "RemoveContainer" containerID="69864ee6a4aa226b8bfa33bff3965bcd0f4bc2c9e7031e9ccbe3edb3aaf4f16c" Mar 10 19:17:52 crc kubenswrapper[4861]: I0310 19:17:52.773297 4861 scope.go:117] "RemoveContainer" containerID="689681124f00ab24f83edf4e74e95e43f454ab1641998fc54dafd3be3c70f0ae" Mar 10 19:17:52 crc kubenswrapper[4861]: I0310 19:17:52.796682 4861 scope.go:117] "RemoveContainer" containerID="c8036046abc5482ee5c300c86709379ab79593dc384da644048703e8766aed37" Mar 10 19:17:52 crc kubenswrapper[4861]: I0310 19:17:52.824091 4861 scope.go:117] "RemoveContainer" containerID="cf206f80b2a9fbb6741e9b611d3178739258c140f0f5e9fd2dbecf804b0f2642" Mar 10 19:17:52 crc kubenswrapper[4861]: I0310 19:17:52.849643 4861 scope.go:117] "RemoveContainer" containerID="6db1199a270e11a4fd456859f2f886658787019bc3921d6921358e8c10d83292" Mar 10 19:17:52 crc kubenswrapper[4861]: I0310 19:17:52.958555 4861 scope.go:117] "RemoveContainer" 
containerID="bb8a532e73b13a25b9b100c2f6a1c525fcee26bf3c9ef709c2f4ca6e8ba75c53" Mar 10 19:17:52 crc kubenswrapper[4861]: E0310 19:17:52.958872 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qttbr_openshift-machine-config-operator(771189c2-452d-4204-a0b7-abfe9ba62bd0)\"" pod="openshift-machine-config-operator/machine-config-daemon-qttbr" podUID="771189c2-452d-4204-a0b7-abfe9ba62bd0" Mar 10 19:18:00 crc kubenswrapper[4861]: I0310 19:18:00.155340 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29552838-5l2vc"] Mar 10 19:18:00 crc kubenswrapper[4861]: E0310 19:18:00.157203 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3897ce4d-cb11-41c9-a417-954b090c49d8" containerName="oc" Mar 10 19:18:00 crc kubenswrapper[4861]: I0310 19:18:00.157308 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="3897ce4d-cb11-41c9-a417-954b090c49d8" containerName="oc" Mar 10 19:18:00 crc kubenswrapper[4861]: I0310 19:18:00.157586 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="3897ce4d-cb11-41c9-a417-954b090c49d8" containerName="oc" Mar 10 19:18:00 crc kubenswrapper[4861]: I0310 19:18:00.158184 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29552838-5l2vc" Mar 10 19:18:00 crc kubenswrapper[4861]: I0310 19:18:00.161053 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 10 19:18:00 crc kubenswrapper[4861]: I0310 19:18:00.161089 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 10 19:18:00 crc kubenswrapper[4861]: I0310 19:18:00.161365 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-gfbj2" Mar 10 19:18:00 crc kubenswrapper[4861]: I0310 19:18:00.194904 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29552838-5l2vc"] Mar 10 19:18:00 crc kubenswrapper[4861]: I0310 19:18:00.212557 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2rq94\" (UniqueName: \"kubernetes.io/projected/3f2ff452-be6e-427e-84e8-87e7107f87a2-kube-api-access-2rq94\") pod \"auto-csr-approver-29552838-5l2vc\" (UID: \"3f2ff452-be6e-427e-84e8-87e7107f87a2\") " pod="openshift-infra/auto-csr-approver-29552838-5l2vc" Mar 10 19:18:00 crc kubenswrapper[4861]: I0310 19:18:00.313764 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2rq94\" (UniqueName: \"kubernetes.io/projected/3f2ff452-be6e-427e-84e8-87e7107f87a2-kube-api-access-2rq94\") pod \"auto-csr-approver-29552838-5l2vc\" (UID: \"3f2ff452-be6e-427e-84e8-87e7107f87a2\") " pod="openshift-infra/auto-csr-approver-29552838-5l2vc" Mar 10 19:18:00 crc kubenswrapper[4861]: I0310 19:18:00.346448 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2rq94\" (UniqueName: \"kubernetes.io/projected/3f2ff452-be6e-427e-84e8-87e7107f87a2-kube-api-access-2rq94\") pod \"auto-csr-approver-29552838-5l2vc\" (UID: \"3f2ff452-be6e-427e-84e8-87e7107f87a2\") " 
pod="openshift-infra/auto-csr-approver-29552838-5l2vc" Mar 10 19:18:00 crc kubenswrapper[4861]: I0310 19:18:00.499547 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29552838-5l2vc" Mar 10 19:18:00 crc kubenswrapper[4861]: I0310 19:18:00.996864 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29552838-5l2vc"] Mar 10 19:18:01 crc kubenswrapper[4861]: I0310 19:18:01.301549 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29552838-5l2vc" event={"ID":"3f2ff452-be6e-427e-84e8-87e7107f87a2","Type":"ContainerStarted","Data":"987cec89a853ed9165889a8fbc850403581ce3434824501b51f7e679436d9e6e"} Mar 10 19:18:03 crc kubenswrapper[4861]: I0310 19:18:03.326342 4861 generic.go:334] "Generic (PLEG): container finished" podID="3f2ff452-be6e-427e-84e8-87e7107f87a2" containerID="58cfd3b4243bdeeca17f93579b19eef5333d1cea780b9b326c5f70bda9a525ee" exitCode=0 Mar 10 19:18:03 crc kubenswrapper[4861]: I0310 19:18:03.326470 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29552838-5l2vc" event={"ID":"3f2ff452-be6e-427e-84e8-87e7107f87a2","Type":"ContainerDied","Data":"58cfd3b4243bdeeca17f93579b19eef5333d1cea780b9b326c5f70bda9a525ee"} Mar 10 19:18:04 crc kubenswrapper[4861]: I0310 19:18:04.649947 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29552838-5l2vc" Mar 10 19:18:04 crc kubenswrapper[4861]: I0310 19:18:04.795423 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2rq94\" (UniqueName: \"kubernetes.io/projected/3f2ff452-be6e-427e-84e8-87e7107f87a2-kube-api-access-2rq94\") pod \"3f2ff452-be6e-427e-84e8-87e7107f87a2\" (UID: \"3f2ff452-be6e-427e-84e8-87e7107f87a2\") " Mar 10 19:18:04 crc kubenswrapper[4861]: I0310 19:18:04.804013 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3f2ff452-be6e-427e-84e8-87e7107f87a2-kube-api-access-2rq94" (OuterVolumeSpecName: "kube-api-access-2rq94") pod "3f2ff452-be6e-427e-84e8-87e7107f87a2" (UID: "3f2ff452-be6e-427e-84e8-87e7107f87a2"). InnerVolumeSpecName "kube-api-access-2rq94". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 10 19:18:04 crc kubenswrapper[4861]: I0310 19:18:04.897792 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2rq94\" (UniqueName: \"kubernetes.io/projected/3f2ff452-be6e-427e-84e8-87e7107f87a2-kube-api-access-2rq94\") on node \"crc\" DevicePath \"\"" Mar 10 19:18:05 crc kubenswrapper[4861]: I0310 19:18:05.345166 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29552838-5l2vc" event={"ID":"3f2ff452-be6e-427e-84e8-87e7107f87a2","Type":"ContainerDied","Data":"987cec89a853ed9165889a8fbc850403581ce3434824501b51f7e679436d9e6e"} Mar 10 19:18:05 crc kubenswrapper[4861]: I0310 19:18:05.345215 4861 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="987cec89a853ed9165889a8fbc850403581ce3434824501b51f7e679436d9e6e" Mar 10 19:18:05 crc kubenswrapper[4861]: I0310 19:18:05.345287 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29552838-5l2vc" Mar 10 19:18:05 crc kubenswrapper[4861]: I0310 19:18:05.745972 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29552832-b64zr"] Mar 10 19:18:05 crc kubenswrapper[4861]: I0310 19:18:05.753149 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29552832-b64zr"] Mar 10 19:18:06 crc kubenswrapper[4861]: I0310 19:18:06.966563 4861 scope.go:117] "RemoveContainer" containerID="bb8a532e73b13a25b9b100c2f6a1c525fcee26bf3c9ef709c2f4ca6e8ba75c53" Mar 10 19:18:06 crc kubenswrapper[4861]: E0310 19:18:06.967047 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qttbr_openshift-machine-config-operator(771189c2-452d-4204-a0b7-abfe9ba62bd0)\"" pod="openshift-machine-config-operator/machine-config-daemon-qttbr" podUID="771189c2-452d-4204-a0b7-abfe9ba62bd0" Mar 10 19:18:06 crc kubenswrapper[4861]: I0310 19:18:06.971289 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="85716ed5-50f3-4f75-9d6c-236dcf24e46d" path="/var/lib/kubelet/pods/85716ed5-50f3-4f75-9d6c-236dcf24e46d/volumes" Mar 10 19:18:21 crc kubenswrapper[4861]: I0310 19:18:21.959405 4861 scope.go:117] "RemoveContainer" containerID="bb8a532e73b13a25b9b100c2f6a1c525fcee26bf3c9ef709c2f4ca6e8ba75c53" Mar 10 19:18:21 crc kubenswrapper[4861]: E0310 19:18:21.960398 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qttbr_openshift-machine-config-operator(771189c2-452d-4204-a0b7-abfe9ba62bd0)\"" pod="openshift-machine-config-operator/machine-config-daemon-qttbr" 
podUID="771189c2-452d-4204-a0b7-abfe9ba62bd0" Mar 10 19:18:33 crc kubenswrapper[4861]: I0310 19:18:33.958881 4861 scope.go:117] "RemoveContainer" containerID="bb8a532e73b13a25b9b100c2f6a1c525fcee26bf3c9ef709c2f4ca6e8ba75c53" Mar 10 19:18:33 crc kubenswrapper[4861]: E0310 19:18:33.959844 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qttbr_openshift-machine-config-operator(771189c2-452d-4204-a0b7-abfe9ba62bd0)\"" pod="openshift-machine-config-operator/machine-config-daemon-qttbr" podUID="771189c2-452d-4204-a0b7-abfe9ba62bd0" Mar 10 19:18:47 crc kubenswrapper[4861]: I0310 19:18:47.959295 4861 scope.go:117] "RemoveContainer" containerID="bb8a532e73b13a25b9b100c2f6a1c525fcee26bf3c9ef709c2f4ca6e8ba75c53" Mar 10 19:18:47 crc kubenswrapper[4861]: E0310 19:18:47.960312 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qttbr_openshift-machine-config-operator(771189c2-452d-4204-a0b7-abfe9ba62bd0)\"" pod="openshift-machine-config-operator/machine-config-daemon-qttbr" podUID="771189c2-452d-4204-a0b7-abfe9ba62bd0" Mar 10 19:18:52 crc kubenswrapper[4861]: I0310 19:18:52.962285 4861 scope.go:117] "RemoveContainer" containerID="49fe551f811518da53606cf89dd2db72b7f22f467a852d6c1dc6cddda9fb1eec" Mar 10 19:18:52 crc kubenswrapper[4861]: I0310 19:18:52.991651 4861 scope.go:117] "RemoveContainer" containerID="28badc523489d1234ded439c2f68701ee34d03e427f5b474008ca60684fe6e7c" Mar 10 19:18:53 crc kubenswrapper[4861]: I0310 19:18:53.050430 4861 scope.go:117] "RemoveContainer" containerID="04ce0706c6df9f9f618c249d98f5c441eb8d9452970f2233d8201342509548ad" Mar 10 19:18:53 crc kubenswrapper[4861]: I0310 19:18:53.083497 4861 
scope.go:117] "RemoveContainer" containerID="83605c45bb5cb59c7e5876bd4f566208ed4c6a08abfc7535626e8b49a51feb43" Mar 10 19:18:53 crc kubenswrapper[4861]: I0310 19:18:53.138671 4861 scope.go:117] "RemoveContainer" containerID="2696f61078f0a1a74b63a575d712dd26e91f63e6a86f67da6deadc2dfdfe94cc" Mar 10 19:18:53 crc kubenswrapper[4861]: I0310 19:18:53.165785 4861 scope.go:117] "RemoveContainer" containerID="2873c36f54704a23a3c98319164da99274e57bcb8f5d04bfec19c8a4567f332d" Mar 10 19:18:53 crc kubenswrapper[4861]: I0310 19:18:53.192476 4861 scope.go:117] "RemoveContainer" containerID="2ec7ad64a891a24b56291d63645fd2ca313e41e4fb9cb01ef30f750e11fe9c88" Mar 10 19:19:02 crc kubenswrapper[4861]: I0310 19:19:02.958321 4861 scope.go:117] "RemoveContainer" containerID="bb8a532e73b13a25b9b100c2f6a1c525fcee26bf3c9ef709c2f4ca6e8ba75c53" Mar 10 19:19:02 crc kubenswrapper[4861]: E0310 19:19:02.961337 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qttbr_openshift-machine-config-operator(771189c2-452d-4204-a0b7-abfe9ba62bd0)\"" pod="openshift-machine-config-operator/machine-config-daemon-qttbr" podUID="771189c2-452d-4204-a0b7-abfe9ba62bd0" Mar 10 19:19:16 crc kubenswrapper[4861]: I0310 19:19:16.965573 4861 scope.go:117] "RemoveContainer" containerID="bb8a532e73b13a25b9b100c2f6a1c525fcee26bf3c9ef709c2f4ca6e8ba75c53" Mar 10 19:19:16 crc kubenswrapper[4861]: E0310 19:19:16.966655 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qttbr_openshift-machine-config-operator(771189c2-452d-4204-a0b7-abfe9ba62bd0)\"" pod="openshift-machine-config-operator/machine-config-daemon-qttbr" podUID="771189c2-452d-4204-a0b7-abfe9ba62bd0" Mar 10 
19:19:28 crc kubenswrapper[4861]: I0310 19:19:28.958337 4861 scope.go:117] "RemoveContainer" containerID="bb8a532e73b13a25b9b100c2f6a1c525fcee26bf3c9ef709c2f4ca6e8ba75c53" Mar 10 19:19:28 crc kubenswrapper[4861]: E0310 19:19:28.959201 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qttbr_openshift-machine-config-operator(771189c2-452d-4204-a0b7-abfe9ba62bd0)\"" pod="openshift-machine-config-operator/machine-config-daemon-qttbr" podUID="771189c2-452d-4204-a0b7-abfe9ba62bd0" Mar 10 19:19:41 crc kubenswrapper[4861]: I0310 19:19:41.958438 4861 scope.go:117] "RemoveContainer" containerID="bb8a532e73b13a25b9b100c2f6a1c525fcee26bf3c9ef709c2f4ca6e8ba75c53" Mar 10 19:19:41 crc kubenswrapper[4861]: E0310 19:19:41.959489 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qttbr_openshift-machine-config-operator(771189c2-452d-4204-a0b7-abfe9ba62bd0)\"" pod="openshift-machine-config-operator/machine-config-daemon-qttbr" podUID="771189c2-452d-4204-a0b7-abfe9ba62bd0" Mar 10 19:19:52 crc kubenswrapper[4861]: I0310 19:19:52.292010 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-fb5tv"] Mar 10 19:19:52 crc kubenswrapper[4861]: E0310 19:19:52.293027 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f2ff452-be6e-427e-84e8-87e7107f87a2" containerName="oc" Mar 10 19:19:52 crc kubenswrapper[4861]: I0310 19:19:52.293048 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f2ff452-be6e-427e-84e8-87e7107f87a2" containerName="oc" Mar 10 19:19:52 crc kubenswrapper[4861]: I0310 19:19:52.293323 4861 memory_manager.go:354] "RemoveStaleState removing 
state" podUID="3f2ff452-be6e-427e-84e8-87e7107f87a2" containerName="oc" Mar 10 19:19:52 crc kubenswrapper[4861]: I0310 19:19:52.298557 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-fb5tv" Mar 10 19:19:52 crc kubenswrapper[4861]: I0310 19:19:52.340784 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-fb5tv"] Mar 10 19:19:52 crc kubenswrapper[4861]: I0310 19:19:52.425189 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d3f8cadf-49f7-4191-a9f5-522d2fcacc2d-utilities\") pod \"community-operators-fb5tv\" (UID: \"d3f8cadf-49f7-4191-a9f5-522d2fcacc2d\") " pod="openshift-marketplace/community-operators-fb5tv" Mar 10 19:19:52 crc kubenswrapper[4861]: I0310 19:19:52.425363 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d3f8cadf-49f7-4191-a9f5-522d2fcacc2d-catalog-content\") pod \"community-operators-fb5tv\" (UID: \"d3f8cadf-49f7-4191-a9f5-522d2fcacc2d\") " pod="openshift-marketplace/community-operators-fb5tv" Mar 10 19:19:52 crc kubenswrapper[4861]: I0310 19:19:52.425594 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9w5l7\" (UniqueName: \"kubernetes.io/projected/d3f8cadf-49f7-4191-a9f5-522d2fcacc2d-kube-api-access-9w5l7\") pod \"community-operators-fb5tv\" (UID: \"d3f8cadf-49f7-4191-a9f5-522d2fcacc2d\") " pod="openshift-marketplace/community-operators-fb5tv" Mar 10 19:19:52 crc kubenswrapper[4861]: I0310 19:19:52.526814 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d3f8cadf-49f7-4191-a9f5-522d2fcacc2d-utilities\") pod \"community-operators-fb5tv\" (UID: 
\"d3f8cadf-49f7-4191-a9f5-522d2fcacc2d\") " pod="openshift-marketplace/community-operators-fb5tv" Mar 10 19:19:52 crc kubenswrapper[4861]: I0310 19:19:52.526873 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d3f8cadf-49f7-4191-a9f5-522d2fcacc2d-catalog-content\") pod \"community-operators-fb5tv\" (UID: \"d3f8cadf-49f7-4191-a9f5-522d2fcacc2d\") " pod="openshift-marketplace/community-operators-fb5tv" Mar 10 19:19:52 crc kubenswrapper[4861]: I0310 19:19:52.526924 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9w5l7\" (UniqueName: \"kubernetes.io/projected/d3f8cadf-49f7-4191-a9f5-522d2fcacc2d-kube-api-access-9w5l7\") pod \"community-operators-fb5tv\" (UID: \"d3f8cadf-49f7-4191-a9f5-522d2fcacc2d\") " pod="openshift-marketplace/community-operators-fb5tv" Mar 10 19:19:52 crc kubenswrapper[4861]: I0310 19:19:52.527385 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d3f8cadf-49f7-4191-a9f5-522d2fcacc2d-utilities\") pod \"community-operators-fb5tv\" (UID: \"d3f8cadf-49f7-4191-a9f5-522d2fcacc2d\") " pod="openshift-marketplace/community-operators-fb5tv" Mar 10 19:19:52 crc kubenswrapper[4861]: I0310 19:19:52.527607 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d3f8cadf-49f7-4191-a9f5-522d2fcacc2d-catalog-content\") pod \"community-operators-fb5tv\" (UID: \"d3f8cadf-49f7-4191-a9f5-522d2fcacc2d\") " pod="openshift-marketplace/community-operators-fb5tv" Mar 10 19:19:52 crc kubenswrapper[4861]: I0310 19:19:52.546485 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9w5l7\" (UniqueName: \"kubernetes.io/projected/d3f8cadf-49f7-4191-a9f5-522d2fcacc2d-kube-api-access-9w5l7\") pod \"community-operators-fb5tv\" (UID: 
\"d3f8cadf-49f7-4191-a9f5-522d2fcacc2d\") " pod="openshift-marketplace/community-operators-fb5tv" Mar 10 19:19:52 crc kubenswrapper[4861]: I0310 19:19:52.658406 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-fb5tv" Mar 10 19:19:53 crc kubenswrapper[4861]: I0310 19:19:53.148502 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-fb5tv"] Mar 10 19:19:53 crc kubenswrapper[4861]: W0310 19:19:53.158773 4861 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd3f8cadf_49f7_4191_a9f5_522d2fcacc2d.slice/crio-c063ab246684949908b05ba13de580b7bca94adae8e5be547d4ed1388ce628ee WatchSource:0}: Error finding container c063ab246684949908b05ba13de580b7bca94adae8e5be547d4ed1388ce628ee: Status 404 returned error can't find the container with id c063ab246684949908b05ba13de580b7bca94adae8e5be547d4ed1388ce628ee Mar 10 19:19:53 crc kubenswrapper[4861]: I0310 19:19:53.314980 4861 scope.go:117] "RemoveContainer" containerID="eab553f654758da04924379d977d326b7bdacacc8303db5c759adcd672af64ab" Mar 10 19:19:53 crc kubenswrapper[4861]: I0310 19:19:53.410498 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fb5tv" event={"ID":"d3f8cadf-49f7-4191-a9f5-522d2fcacc2d","Type":"ContainerStarted","Data":"8cf466e691f3db23e9ac1efd013f0474fef1136fb49f1b0adca931dd9ae783e6"} Mar 10 19:19:53 crc kubenswrapper[4861]: I0310 19:19:53.410732 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fb5tv" event={"ID":"d3f8cadf-49f7-4191-a9f5-522d2fcacc2d","Type":"ContainerStarted","Data":"c063ab246684949908b05ba13de580b7bca94adae8e5be547d4ed1388ce628ee"} Mar 10 19:19:53 crc kubenswrapper[4861]: I0310 19:19:53.412018 4861 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 
10 19:19:54 crc kubenswrapper[4861]: I0310 19:19:54.423122 4861 generic.go:334] "Generic (PLEG): container finished" podID="d3f8cadf-49f7-4191-a9f5-522d2fcacc2d" containerID="8cf466e691f3db23e9ac1efd013f0474fef1136fb49f1b0adca931dd9ae783e6" exitCode=0 Mar 10 19:19:54 crc kubenswrapper[4861]: I0310 19:19:54.423184 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fb5tv" event={"ID":"d3f8cadf-49f7-4191-a9f5-522d2fcacc2d","Type":"ContainerDied","Data":"8cf466e691f3db23e9ac1efd013f0474fef1136fb49f1b0adca931dd9ae783e6"} Mar 10 19:19:54 crc kubenswrapper[4861]: I0310 19:19:54.423225 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fb5tv" event={"ID":"d3f8cadf-49f7-4191-a9f5-522d2fcacc2d","Type":"ContainerStarted","Data":"4bb88696dadf318e2cba393132105489e5f3cbc9d9e898fbc462573e31c2cb75"} Mar 10 19:19:55 crc kubenswrapper[4861]: I0310 19:19:55.437981 4861 generic.go:334] "Generic (PLEG): container finished" podID="d3f8cadf-49f7-4191-a9f5-522d2fcacc2d" containerID="4bb88696dadf318e2cba393132105489e5f3cbc9d9e898fbc462573e31c2cb75" exitCode=0 Mar 10 19:19:55 crc kubenswrapper[4861]: I0310 19:19:55.438148 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fb5tv" event={"ID":"d3f8cadf-49f7-4191-a9f5-522d2fcacc2d","Type":"ContainerDied","Data":"4bb88696dadf318e2cba393132105489e5f3cbc9d9e898fbc462573e31c2cb75"} Mar 10 19:19:55 crc kubenswrapper[4861]: I0310 19:19:55.958794 4861 scope.go:117] "RemoveContainer" containerID="bb8a532e73b13a25b9b100c2f6a1c525fcee26bf3c9ef709c2f4ca6e8ba75c53" Mar 10 19:19:55 crc kubenswrapper[4861]: E0310 19:19:55.959174 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-qttbr_openshift-machine-config-operator(771189c2-452d-4204-a0b7-abfe9ba62bd0)\"" pod="openshift-machine-config-operator/machine-config-daemon-qttbr" podUID="771189c2-452d-4204-a0b7-abfe9ba62bd0" Mar 10 19:19:57 crc kubenswrapper[4861]: I0310 19:19:57.487925 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fb5tv" event={"ID":"d3f8cadf-49f7-4191-a9f5-522d2fcacc2d","Type":"ContainerStarted","Data":"3f116c2f411d9944f8d61e1b68a9243f60668a6d45a6c9c2ef59e9a221ac2672"} Mar 10 19:19:57 crc kubenswrapper[4861]: I0310 19:19:57.510906 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-fb5tv" podStartSLOduration=2.6159092790000003 podStartE2EDuration="5.510887161s" podCreationTimestamp="2026-03-10 19:19:52 +0000 UTC" firstStartedPulling="2026-03-10 19:19:53.411807964 +0000 UTC m=+1937.175243924" lastFinishedPulling="2026-03-10 19:19:56.306785806 +0000 UTC m=+1940.070221806" observedRunningTime="2026-03-10 19:19:57.509080651 +0000 UTC m=+1941.272516631" watchObservedRunningTime="2026-03-10 19:19:57.510887161 +0000 UTC m=+1941.274323121" Mar 10 19:20:00 crc kubenswrapper[4861]: I0310 19:20:00.156952 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29552840-lrsfm"] Mar 10 19:20:00 crc kubenswrapper[4861]: I0310 19:20:00.159295 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29552840-lrsfm" Mar 10 19:20:00 crc kubenswrapper[4861]: I0310 19:20:00.165184 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-gfbj2" Mar 10 19:20:00 crc kubenswrapper[4861]: I0310 19:20:00.165916 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 10 19:20:00 crc kubenswrapper[4861]: I0310 19:20:00.166312 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 10 19:20:00 crc kubenswrapper[4861]: I0310 19:20:00.166620 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29552840-lrsfm"] Mar 10 19:20:00 crc kubenswrapper[4861]: I0310 19:20:00.258116 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v6sfj\" (UniqueName: \"kubernetes.io/projected/0b5e8bdf-e700-4d70-9054-f91863ac0eae-kube-api-access-v6sfj\") pod \"auto-csr-approver-29552840-lrsfm\" (UID: \"0b5e8bdf-e700-4d70-9054-f91863ac0eae\") " pod="openshift-infra/auto-csr-approver-29552840-lrsfm" Mar 10 19:20:00 crc kubenswrapper[4861]: I0310 19:20:00.359427 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v6sfj\" (UniqueName: \"kubernetes.io/projected/0b5e8bdf-e700-4d70-9054-f91863ac0eae-kube-api-access-v6sfj\") pod \"auto-csr-approver-29552840-lrsfm\" (UID: \"0b5e8bdf-e700-4d70-9054-f91863ac0eae\") " pod="openshift-infra/auto-csr-approver-29552840-lrsfm" Mar 10 19:20:00 crc kubenswrapper[4861]: I0310 19:20:00.392560 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v6sfj\" (UniqueName: \"kubernetes.io/projected/0b5e8bdf-e700-4d70-9054-f91863ac0eae-kube-api-access-v6sfj\") pod \"auto-csr-approver-29552840-lrsfm\" (UID: \"0b5e8bdf-e700-4d70-9054-f91863ac0eae\") " 
pod="openshift-infra/auto-csr-approver-29552840-lrsfm" Mar 10 19:20:00 crc kubenswrapper[4861]: I0310 19:20:00.494647 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29552840-lrsfm" Mar 10 19:20:00 crc kubenswrapper[4861]: I0310 19:20:00.972228 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29552840-lrsfm"] Mar 10 19:20:00 crc kubenswrapper[4861]: W0310 19:20:00.972490 4861 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0b5e8bdf_e700_4d70_9054_f91863ac0eae.slice/crio-50e186f330e798bf3c2836d307d33718fd68fa631d8236bc9b84e9e4bc3a4eee WatchSource:0}: Error finding container 50e186f330e798bf3c2836d307d33718fd68fa631d8236bc9b84e9e4bc3a4eee: Status 404 returned error can't find the container with id 50e186f330e798bf3c2836d307d33718fd68fa631d8236bc9b84e9e4bc3a4eee Mar 10 19:20:01 crc kubenswrapper[4861]: I0310 19:20:01.528458 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29552840-lrsfm" event={"ID":"0b5e8bdf-e700-4d70-9054-f91863ac0eae","Type":"ContainerStarted","Data":"50e186f330e798bf3c2836d307d33718fd68fa631d8236bc9b84e9e4bc3a4eee"} Mar 10 19:20:02 crc kubenswrapper[4861]: I0310 19:20:02.539390 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29552840-lrsfm" event={"ID":"0b5e8bdf-e700-4d70-9054-f91863ac0eae","Type":"ContainerStarted","Data":"664ca4596dba71c5845a9cb615748f525990fb04d5d37fc0149f009f2fcbacd8"} Mar 10 19:20:02 crc kubenswrapper[4861]: I0310 19:20:02.579543 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29552840-lrsfm" podStartSLOduration=1.3885359290000001 podStartE2EDuration="2.579519383s" podCreationTimestamp="2026-03-10 19:20:00 +0000 UTC" firstStartedPulling="2026-03-10 19:20:00.975178895 +0000 UTC 
m=+1944.738614855" lastFinishedPulling="2026-03-10 19:20:02.166162309 +0000 UTC m=+1945.929598309" observedRunningTime="2026-03-10 19:20:02.571084931 +0000 UTC m=+1946.334520911" watchObservedRunningTime="2026-03-10 19:20:02.579519383 +0000 UTC m=+1946.342955363" Mar 10 19:20:02 crc kubenswrapper[4861]: I0310 19:20:02.659066 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-fb5tv" Mar 10 19:20:02 crc kubenswrapper[4861]: I0310 19:20:02.659124 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-fb5tv" Mar 10 19:20:02 crc kubenswrapper[4861]: I0310 19:20:02.742976 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-fb5tv" Mar 10 19:20:03 crc kubenswrapper[4861]: I0310 19:20:03.549146 4861 generic.go:334] "Generic (PLEG): container finished" podID="0b5e8bdf-e700-4d70-9054-f91863ac0eae" containerID="664ca4596dba71c5845a9cb615748f525990fb04d5d37fc0149f009f2fcbacd8" exitCode=0 Mar 10 19:20:03 crc kubenswrapper[4861]: I0310 19:20:03.549390 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29552840-lrsfm" event={"ID":"0b5e8bdf-e700-4d70-9054-f91863ac0eae","Type":"ContainerDied","Data":"664ca4596dba71c5845a9cb615748f525990fb04d5d37fc0149f009f2fcbacd8"} Mar 10 19:20:03 crc kubenswrapper[4861]: I0310 19:20:03.603595 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-fb5tv" Mar 10 19:20:03 crc kubenswrapper[4861]: I0310 19:20:03.667391 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-fb5tv"] Mar 10 19:20:04 crc kubenswrapper[4861]: I0310 19:20:04.876095 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29552840-lrsfm" Mar 10 19:20:04 crc kubenswrapper[4861]: I0310 19:20:04.937988 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v6sfj\" (UniqueName: \"kubernetes.io/projected/0b5e8bdf-e700-4d70-9054-f91863ac0eae-kube-api-access-v6sfj\") pod \"0b5e8bdf-e700-4d70-9054-f91863ac0eae\" (UID: \"0b5e8bdf-e700-4d70-9054-f91863ac0eae\") " Mar 10 19:20:04 crc kubenswrapper[4861]: I0310 19:20:04.943437 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b5e8bdf-e700-4d70-9054-f91863ac0eae-kube-api-access-v6sfj" (OuterVolumeSpecName: "kube-api-access-v6sfj") pod "0b5e8bdf-e700-4d70-9054-f91863ac0eae" (UID: "0b5e8bdf-e700-4d70-9054-f91863ac0eae"). InnerVolumeSpecName "kube-api-access-v6sfj". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 10 19:20:05 crc kubenswrapper[4861]: I0310 19:20:05.040333 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v6sfj\" (UniqueName: \"kubernetes.io/projected/0b5e8bdf-e700-4d70-9054-f91863ac0eae-kube-api-access-v6sfj\") on node \"crc\" DevicePath \"\"" Mar 10 19:20:05 crc kubenswrapper[4861]: I0310 19:20:05.567415 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29552840-lrsfm" event={"ID":"0b5e8bdf-e700-4d70-9054-f91863ac0eae","Type":"ContainerDied","Data":"50e186f330e798bf3c2836d307d33718fd68fa631d8236bc9b84e9e4bc3a4eee"} Mar 10 19:20:05 crc kubenswrapper[4861]: I0310 19:20:05.567441 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29552840-lrsfm" Mar 10 19:20:05 crc kubenswrapper[4861]: I0310 19:20:05.567457 4861 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="50e186f330e798bf3c2836d307d33718fd68fa631d8236bc9b84e9e4bc3a4eee" Mar 10 19:20:05 crc kubenswrapper[4861]: I0310 19:20:05.567777 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-fb5tv" podUID="d3f8cadf-49f7-4191-a9f5-522d2fcacc2d" containerName="registry-server" containerID="cri-o://3f116c2f411d9944f8d61e1b68a9243f60668a6d45a6c9c2ef59e9a221ac2672" gracePeriod=2 Mar 10 19:20:05 crc kubenswrapper[4861]: I0310 19:20:05.981761 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29552834-dlwsb"] Mar 10 19:20:05 crc kubenswrapper[4861]: I0310 19:20:05.988257 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29552834-dlwsb"] Mar 10 19:20:06 crc kubenswrapper[4861]: I0310 19:20:06.052482 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-fb5tv" Mar 10 19:20:06 crc kubenswrapper[4861]: I0310 19:20:06.158028 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d3f8cadf-49f7-4191-a9f5-522d2fcacc2d-catalog-content\") pod \"d3f8cadf-49f7-4191-a9f5-522d2fcacc2d\" (UID: \"d3f8cadf-49f7-4191-a9f5-522d2fcacc2d\") " Mar 10 19:20:06 crc kubenswrapper[4861]: I0310 19:20:06.158244 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9w5l7\" (UniqueName: \"kubernetes.io/projected/d3f8cadf-49f7-4191-a9f5-522d2fcacc2d-kube-api-access-9w5l7\") pod \"d3f8cadf-49f7-4191-a9f5-522d2fcacc2d\" (UID: \"d3f8cadf-49f7-4191-a9f5-522d2fcacc2d\") " Mar 10 19:20:06 crc kubenswrapper[4861]: I0310 19:20:06.158328 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d3f8cadf-49f7-4191-a9f5-522d2fcacc2d-utilities\") pod \"d3f8cadf-49f7-4191-a9f5-522d2fcacc2d\" (UID: \"d3f8cadf-49f7-4191-a9f5-522d2fcacc2d\") " Mar 10 19:20:06 crc kubenswrapper[4861]: I0310 19:20:06.159436 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d3f8cadf-49f7-4191-a9f5-522d2fcacc2d-utilities" (OuterVolumeSpecName: "utilities") pod "d3f8cadf-49f7-4191-a9f5-522d2fcacc2d" (UID: "d3f8cadf-49f7-4191-a9f5-522d2fcacc2d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 10 19:20:06 crc kubenswrapper[4861]: I0310 19:20:06.164389 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d3f8cadf-49f7-4191-a9f5-522d2fcacc2d-kube-api-access-9w5l7" (OuterVolumeSpecName: "kube-api-access-9w5l7") pod "d3f8cadf-49f7-4191-a9f5-522d2fcacc2d" (UID: "d3f8cadf-49f7-4191-a9f5-522d2fcacc2d"). InnerVolumeSpecName "kube-api-access-9w5l7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 10 19:20:06 crc kubenswrapper[4861]: I0310 19:20:06.221472 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d3f8cadf-49f7-4191-a9f5-522d2fcacc2d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d3f8cadf-49f7-4191-a9f5-522d2fcacc2d" (UID: "d3f8cadf-49f7-4191-a9f5-522d2fcacc2d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 10 19:20:06 crc kubenswrapper[4861]: I0310 19:20:06.259551 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9w5l7\" (UniqueName: \"kubernetes.io/projected/d3f8cadf-49f7-4191-a9f5-522d2fcacc2d-kube-api-access-9w5l7\") on node \"crc\" DevicePath \"\"" Mar 10 19:20:06 crc kubenswrapper[4861]: I0310 19:20:06.259810 4861 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d3f8cadf-49f7-4191-a9f5-522d2fcacc2d-utilities\") on node \"crc\" DevicePath \"\"" Mar 10 19:20:06 crc kubenswrapper[4861]: I0310 19:20:06.259895 4861 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d3f8cadf-49f7-4191-a9f5-522d2fcacc2d-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 10 19:20:06 crc kubenswrapper[4861]: I0310 19:20:06.580032 4861 generic.go:334] "Generic (PLEG): container finished" podID="d3f8cadf-49f7-4191-a9f5-522d2fcacc2d" containerID="3f116c2f411d9944f8d61e1b68a9243f60668a6d45a6c9c2ef59e9a221ac2672" exitCode=0 Mar 10 19:20:06 crc kubenswrapper[4861]: I0310 19:20:06.580161 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-fb5tv" Mar 10 19:20:06 crc kubenswrapper[4861]: I0310 19:20:06.580159 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fb5tv" event={"ID":"d3f8cadf-49f7-4191-a9f5-522d2fcacc2d","Type":"ContainerDied","Data":"3f116c2f411d9944f8d61e1b68a9243f60668a6d45a6c9c2ef59e9a221ac2672"} Mar 10 19:20:06 crc kubenswrapper[4861]: I0310 19:20:06.580546 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fb5tv" event={"ID":"d3f8cadf-49f7-4191-a9f5-522d2fcacc2d","Type":"ContainerDied","Data":"c063ab246684949908b05ba13de580b7bca94adae8e5be547d4ed1388ce628ee"} Mar 10 19:20:06 crc kubenswrapper[4861]: I0310 19:20:06.580594 4861 scope.go:117] "RemoveContainer" containerID="3f116c2f411d9944f8d61e1b68a9243f60668a6d45a6c9c2ef59e9a221ac2672" Mar 10 19:20:06 crc kubenswrapper[4861]: I0310 19:20:06.621644 4861 scope.go:117] "RemoveContainer" containerID="4bb88696dadf318e2cba393132105489e5f3cbc9d9e898fbc462573e31c2cb75" Mar 10 19:20:06 crc kubenswrapper[4861]: I0310 19:20:06.626462 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-fb5tv"] Mar 10 19:20:06 crc kubenswrapper[4861]: I0310 19:20:06.640488 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-fb5tv"] Mar 10 19:20:06 crc kubenswrapper[4861]: I0310 19:20:06.654103 4861 scope.go:117] "RemoveContainer" containerID="8cf466e691f3db23e9ac1efd013f0474fef1136fb49f1b0adca931dd9ae783e6" Mar 10 19:20:06 crc kubenswrapper[4861]: I0310 19:20:06.680091 4861 scope.go:117] "RemoveContainer" containerID="3f116c2f411d9944f8d61e1b68a9243f60668a6d45a6c9c2ef59e9a221ac2672" Mar 10 19:20:06 crc kubenswrapper[4861]: E0310 19:20:06.680932 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"3f116c2f411d9944f8d61e1b68a9243f60668a6d45a6c9c2ef59e9a221ac2672\": container with ID starting with 3f116c2f411d9944f8d61e1b68a9243f60668a6d45a6c9c2ef59e9a221ac2672 not found: ID does not exist" containerID="3f116c2f411d9944f8d61e1b68a9243f60668a6d45a6c9c2ef59e9a221ac2672" Mar 10 19:20:06 crc kubenswrapper[4861]: I0310 19:20:06.681102 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3f116c2f411d9944f8d61e1b68a9243f60668a6d45a6c9c2ef59e9a221ac2672"} err="failed to get container status \"3f116c2f411d9944f8d61e1b68a9243f60668a6d45a6c9c2ef59e9a221ac2672\": rpc error: code = NotFound desc = could not find container \"3f116c2f411d9944f8d61e1b68a9243f60668a6d45a6c9c2ef59e9a221ac2672\": container with ID starting with 3f116c2f411d9944f8d61e1b68a9243f60668a6d45a6c9c2ef59e9a221ac2672 not found: ID does not exist" Mar 10 19:20:06 crc kubenswrapper[4861]: I0310 19:20:06.681267 4861 scope.go:117] "RemoveContainer" containerID="4bb88696dadf318e2cba393132105489e5f3cbc9d9e898fbc462573e31c2cb75" Mar 10 19:20:06 crc kubenswrapper[4861]: E0310 19:20:06.682015 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4bb88696dadf318e2cba393132105489e5f3cbc9d9e898fbc462573e31c2cb75\": container with ID starting with 4bb88696dadf318e2cba393132105489e5f3cbc9d9e898fbc462573e31c2cb75 not found: ID does not exist" containerID="4bb88696dadf318e2cba393132105489e5f3cbc9d9e898fbc462573e31c2cb75" Mar 10 19:20:06 crc kubenswrapper[4861]: I0310 19:20:06.682059 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4bb88696dadf318e2cba393132105489e5f3cbc9d9e898fbc462573e31c2cb75"} err="failed to get container status \"4bb88696dadf318e2cba393132105489e5f3cbc9d9e898fbc462573e31c2cb75\": rpc error: code = NotFound desc = could not find container \"4bb88696dadf318e2cba393132105489e5f3cbc9d9e898fbc462573e31c2cb75\": container with ID 
starting with 4bb88696dadf318e2cba393132105489e5f3cbc9d9e898fbc462573e31c2cb75 not found: ID does not exist" Mar 10 19:20:06 crc kubenswrapper[4861]: I0310 19:20:06.682088 4861 scope.go:117] "RemoveContainer" containerID="8cf466e691f3db23e9ac1efd013f0474fef1136fb49f1b0adca931dd9ae783e6" Mar 10 19:20:06 crc kubenswrapper[4861]: E0310 19:20:06.682526 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8cf466e691f3db23e9ac1efd013f0474fef1136fb49f1b0adca931dd9ae783e6\": container with ID starting with 8cf466e691f3db23e9ac1efd013f0474fef1136fb49f1b0adca931dd9ae783e6 not found: ID does not exist" containerID="8cf466e691f3db23e9ac1efd013f0474fef1136fb49f1b0adca931dd9ae783e6" Mar 10 19:20:06 crc kubenswrapper[4861]: I0310 19:20:06.682590 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8cf466e691f3db23e9ac1efd013f0474fef1136fb49f1b0adca931dd9ae783e6"} err="failed to get container status \"8cf466e691f3db23e9ac1efd013f0474fef1136fb49f1b0adca931dd9ae783e6\": rpc error: code = NotFound desc = could not find container \"8cf466e691f3db23e9ac1efd013f0474fef1136fb49f1b0adca931dd9ae783e6\": container with ID starting with 8cf466e691f3db23e9ac1efd013f0474fef1136fb49f1b0adca931dd9ae783e6 not found: ID does not exist" Mar 10 19:20:06 crc kubenswrapper[4861]: I0310 19:20:06.972228 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="91e5ef26-1440-4ffb-84d9-2bc5e0bed45c" path="/var/lib/kubelet/pods/91e5ef26-1440-4ffb-84d9-2bc5e0bed45c/volumes" Mar 10 19:20:06 crc kubenswrapper[4861]: I0310 19:20:06.973858 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d3f8cadf-49f7-4191-a9f5-522d2fcacc2d" path="/var/lib/kubelet/pods/d3f8cadf-49f7-4191-a9f5-522d2fcacc2d/volumes" Mar 10 19:20:09 crc kubenswrapper[4861]: I0310 19:20:09.958911 4861 scope.go:117] "RemoveContainer" 
containerID="bb8a532e73b13a25b9b100c2f6a1c525fcee26bf3c9ef709c2f4ca6e8ba75c53" Mar 10 19:20:09 crc kubenswrapper[4861]: E0310 19:20:09.959587 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qttbr_openshift-machine-config-operator(771189c2-452d-4204-a0b7-abfe9ba62bd0)\"" pod="openshift-machine-config-operator/machine-config-daemon-qttbr" podUID="771189c2-452d-4204-a0b7-abfe9ba62bd0" Mar 10 19:20:21 crc kubenswrapper[4861]: I0310 19:20:21.958335 4861 scope.go:117] "RemoveContainer" containerID="bb8a532e73b13a25b9b100c2f6a1c525fcee26bf3c9ef709c2f4ca6e8ba75c53" Mar 10 19:20:21 crc kubenswrapper[4861]: E0310 19:20:21.960166 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qttbr_openshift-machine-config-operator(771189c2-452d-4204-a0b7-abfe9ba62bd0)\"" pod="openshift-machine-config-operator/machine-config-daemon-qttbr" podUID="771189c2-452d-4204-a0b7-abfe9ba62bd0" Mar 10 19:20:32 crc kubenswrapper[4861]: I0310 19:20:32.959190 4861 scope.go:117] "RemoveContainer" containerID="bb8a532e73b13a25b9b100c2f6a1c525fcee26bf3c9ef709c2f4ca6e8ba75c53" Mar 10 19:20:32 crc kubenswrapper[4861]: E0310 19:20:32.960265 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qttbr_openshift-machine-config-operator(771189c2-452d-4204-a0b7-abfe9ba62bd0)\"" pod="openshift-machine-config-operator/machine-config-daemon-qttbr" podUID="771189c2-452d-4204-a0b7-abfe9ba62bd0" Mar 10 19:20:47 crc kubenswrapper[4861]: I0310 19:20:47.959184 4861 scope.go:117] 
"RemoveContainer" containerID="bb8a532e73b13a25b9b100c2f6a1c525fcee26bf3c9ef709c2f4ca6e8ba75c53" Mar 10 19:20:47 crc kubenswrapper[4861]: E0310 19:20:47.960478 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qttbr_openshift-machine-config-operator(771189c2-452d-4204-a0b7-abfe9ba62bd0)\"" pod="openshift-machine-config-operator/machine-config-daemon-qttbr" podUID="771189c2-452d-4204-a0b7-abfe9ba62bd0" Mar 10 19:20:53 crc kubenswrapper[4861]: I0310 19:20:53.398393 4861 scope.go:117] "RemoveContainer" containerID="451f09d0b8ba6e81751ef4eea6bf40678dba1a95fba42d8522240189199186fe" Mar 10 19:21:01 crc kubenswrapper[4861]: I0310 19:21:01.958992 4861 scope.go:117] "RemoveContainer" containerID="bb8a532e73b13a25b9b100c2f6a1c525fcee26bf3c9ef709c2f4ca6e8ba75c53" Mar 10 19:21:01 crc kubenswrapper[4861]: E0310 19:21:01.962088 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qttbr_openshift-machine-config-operator(771189c2-452d-4204-a0b7-abfe9ba62bd0)\"" pod="openshift-machine-config-operator/machine-config-daemon-qttbr" podUID="771189c2-452d-4204-a0b7-abfe9ba62bd0" Mar 10 19:21:16 crc kubenswrapper[4861]: I0310 19:21:16.963756 4861 scope.go:117] "RemoveContainer" containerID="bb8a532e73b13a25b9b100c2f6a1c525fcee26bf3c9ef709c2f4ca6e8ba75c53" Mar 10 19:21:16 crc kubenswrapper[4861]: E0310 19:21:16.964923 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qttbr_openshift-machine-config-operator(771189c2-452d-4204-a0b7-abfe9ba62bd0)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-qttbr" podUID="771189c2-452d-4204-a0b7-abfe9ba62bd0" Mar 10 19:21:28 crc kubenswrapper[4861]: I0310 19:21:28.958610 4861 scope.go:117] "RemoveContainer" containerID="bb8a532e73b13a25b9b100c2f6a1c525fcee26bf3c9ef709c2f4ca6e8ba75c53" Mar 10 19:21:28 crc kubenswrapper[4861]: E0310 19:21:28.959638 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qttbr_openshift-machine-config-operator(771189c2-452d-4204-a0b7-abfe9ba62bd0)\"" pod="openshift-machine-config-operator/machine-config-daemon-qttbr" podUID="771189c2-452d-4204-a0b7-abfe9ba62bd0" Mar 10 19:21:40 crc kubenswrapper[4861]: I0310 19:21:40.960420 4861 scope.go:117] "RemoveContainer" containerID="bb8a532e73b13a25b9b100c2f6a1c525fcee26bf3c9ef709c2f4ca6e8ba75c53" Mar 10 19:21:40 crc kubenswrapper[4861]: E0310 19:21:40.961387 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qttbr_openshift-machine-config-operator(771189c2-452d-4204-a0b7-abfe9ba62bd0)\"" pod="openshift-machine-config-operator/machine-config-daemon-qttbr" podUID="771189c2-452d-4204-a0b7-abfe9ba62bd0" Mar 10 19:21:53 crc kubenswrapper[4861]: I0310 19:21:53.958922 4861 scope.go:117] "RemoveContainer" containerID="bb8a532e73b13a25b9b100c2f6a1c525fcee26bf3c9ef709c2f4ca6e8ba75c53" Mar 10 19:21:53 crc kubenswrapper[4861]: E0310 19:21:53.960097 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-qttbr_openshift-machine-config-operator(771189c2-452d-4204-a0b7-abfe9ba62bd0)\"" pod="openshift-machine-config-operator/machine-config-daemon-qttbr" podUID="771189c2-452d-4204-a0b7-abfe9ba62bd0" Mar 10 19:22:00 crc kubenswrapper[4861]: I0310 19:22:00.169769 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29552842-hsc2l"] Mar 10 19:22:00 crc kubenswrapper[4861]: E0310 19:22:00.171431 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d3f8cadf-49f7-4191-a9f5-522d2fcacc2d" containerName="extract-utilities" Mar 10 19:22:00 crc kubenswrapper[4861]: I0310 19:22:00.171455 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="d3f8cadf-49f7-4191-a9f5-522d2fcacc2d" containerName="extract-utilities" Mar 10 19:22:00 crc kubenswrapper[4861]: E0310 19:22:00.171532 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0b5e8bdf-e700-4d70-9054-f91863ac0eae" containerName="oc" Mar 10 19:22:00 crc kubenswrapper[4861]: I0310 19:22:00.171546 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="0b5e8bdf-e700-4d70-9054-f91863ac0eae" containerName="oc" Mar 10 19:22:00 crc kubenswrapper[4861]: E0310 19:22:00.171567 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d3f8cadf-49f7-4191-a9f5-522d2fcacc2d" containerName="extract-content" Mar 10 19:22:00 crc kubenswrapper[4861]: I0310 19:22:00.171579 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="d3f8cadf-49f7-4191-a9f5-522d2fcacc2d" containerName="extract-content" Mar 10 19:22:00 crc kubenswrapper[4861]: E0310 19:22:00.171598 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d3f8cadf-49f7-4191-a9f5-522d2fcacc2d" containerName="registry-server" Mar 10 19:22:00 crc kubenswrapper[4861]: I0310 19:22:00.171612 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="d3f8cadf-49f7-4191-a9f5-522d2fcacc2d" containerName="registry-server" Mar 10 19:22:00 crc kubenswrapper[4861]: 
I0310 19:22:00.171874 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="d3f8cadf-49f7-4191-a9f5-522d2fcacc2d" containerName="registry-server" Mar 10 19:22:00 crc kubenswrapper[4861]: I0310 19:22:00.171911 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="0b5e8bdf-e700-4d70-9054-f91863ac0eae" containerName="oc" Mar 10 19:22:00 crc kubenswrapper[4861]: I0310 19:22:00.172572 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29552842-hsc2l" Mar 10 19:22:00 crc kubenswrapper[4861]: I0310 19:22:00.175321 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-gfbj2" Mar 10 19:22:00 crc kubenswrapper[4861]: I0310 19:22:00.175700 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 10 19:22:00 crc kubenswrapper[4861]: I0310 19:22:00.179171 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 10 19:22:00 crc kubenswrapper[4861]: I0310 19:22:00.216208 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29552842-hsc2l"] Mar 10 19:22:00 crc kubenswrapper[4861]: I0310 19:22:00.319277 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tlrkc\" (UniqueName: \"kubernetes.io/projected/f3849f7d-1618-43e8-b5c8-903e96bece99-kube-api-access-tlrkc\") pod \"auto-csr-approver-29552842-hsc2l\" (UID: \"f3849f7d-1618-43e8-b5c8-903e96bece99\") " pod="openshift-infra/auto-csr-approver-29552842-hsc2l" Mar 10 19:22:00 crc kubenswrapper[4861]: I0310 19:22:00.421554 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tlrkc\" (UniqueName: \"kubernetes.io/projected/f3849f7d-1618-43e8-b5c8-903e96bece99-kube-api-access-tlrkc\") pod \"auto-csr-approver-29552842-hsc2l\" 
(UID: \"f3849f7d-1618-43e8-b5c8-903e96bece99\") " pod="openshift-infra/auto-csr-approver-29552842-hsc2l" Mar 10 19:22:00 crc kubenswrapper[4861]: I0310 19:22:00.451994 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tlrkc\" (UniqueName: \"kubernetes.io/projected/f3849f7d-1618-43e8-b5c8-903e96bece99-kube-api-access-tlrkc\") pod \"auto-csr-approver-29552842-hsc2l\" (UID: \"f3849f7d-1618-43e8-b5c8-903e96bece99\") " pod="openshift-infra/auto-csr-approver-29552842-hsc2l" Mar 10 19:22:00 crc kubenswrapper[4861]: I0310 19:22:00.539933 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29552842-hsc2l" Mar 10 19:22:00 crc kubenswrapper[4861]: I0310 19:22:00.798210 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29552842-hsc2l"] Mar 10 19:22:01 crc kubenswrapper[4861]: I0310 19:22:01.730612 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29552842-hsc2l" event={"ID":"f3849f7d-1618-43e8-b5c8-903e96bece99","Type":"ContainerStarted","Data":"ff72ca028ac64a676a83e3e2edf35564417d5e9ce31400b1dd7f77ba17262b84"} Mar 10 19:22:02 crc kubenswrapper[4861]: I0310 19:22:02.741056 4861 generic.go:334] "Generic (PLEG): container finished" podID="f3849f7d-1618-43e8-b5c8-903e96bece99" containerID="572bc5c02f5f758406e7180080fa4e3d6ed8f8aa6c3631fe7b07b80d8c588238" exitCode=0 Mar 10 19:22:02 crc kubenswrapper[4861]: I0310 19:22:02.741176 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29552842-hsc2l" event={"ID":"f3849f7d-1618-43e8-b5c8-903e96bece99","Type":"ContainerDied","Data":"572bc5c02f5f758406e7180080fa4e3d6ed8f8aa6c3631fe7b07b80d8c588238"} Mar 10 19:22:04 crc kubenswrapper[4861]: I0310 19:22:04.168680 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29552842-hsc2l" Mar 10 19:22:04 crc kubenswrapper[4861]: I0310 19:22:04.298930 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tlrkc\" (UniqueName: \"kubernetes.io/projected/f3849f7d-1618-43e8-b5c8-903e96bece99-kube-api-access-tlrkc\") pod \"f3849f7d-1618-43e8-b5c8-903e96bece99\" (UID: \"f3849f7d-1618-43e8-b5c8-903e96bece99\") " Mar 10 19:22:04 crc kubenswrapper[4861]: I0310 19:22:04.307913 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f3849f7d-1618-43e8-b5c8-903e96bece99-kube-api-access-tlrkc" (OuterVolumeSpecName: "kube-api-access-tlrkc") pod "f3849f7d-1618-43e8-b5c8-903e96bece99" (UID: "f3849f7d-1618-43e8-b5c8-903e96bece99"). InnerVolumeSpecName "kube-api-access-tlrkc". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 10 19:22:04 crc kubenswrapper[4861]: I0310 19:22:04.400595 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tlrkc\" (UniqueName: \"kubernetes.io/projected/f3849f7d-1618-43e8-b5c8-903e96bece99-kube-api-access-tlrkc\") on node \"crc\" DevicePath \"\"" Mar 10 19:22:04 crc kubenswrapper[4861]: I0310 19:22:04.766112 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29552842-hsc2l" event={"ID":"f3849f7d-1618-43e8-b5c8-903e96bece99","Type":"ContainerDied","Data":"ff72ca028ac64a676a83e3e2edf35564417d5e9ce31400b1dd7f77ba17262b84"} Mar 10 19:22:04 crc kubenswrapper[4861]: I0310 19:22:04.766525 4861 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ff72ca028ac64a676a83e3e2edf35564417d5e9ce31400b1dd7f77ba17262b84" Mar 10 19:22:04 crc kubenswrapper[4861]: I0310 19:22:04.766665 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29552842-hsc2l" Mar 10 19:22:05 crc kubenswrapper[4861]: I0310 19:22:05.271278 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29552836-ldssf"] Mar 10 19:22:05 crc kubenswrapper[4861]: I0310 19:22:05.280599 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29552836-ldssf"] Mar 10 19:22:06 crc kubenswrapper[4861]: I0310 19:22:06.972767 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3897ce4d-cb11-41c9-a417-954b090c49d8" path="/var/lib/kubelet/pods/3897ce4d-cb11-41c9-a417-954b090c49d8/volumes" Mar 10 19:22:08 crc kubenswrapper[4861]: I0310 19:22:08.959484 4861 scope.go:117] "RemoveContainer" containerID="bb8a532e73b13a25b9b100c2f6a1c525fcee26bf3c9ef709c2f4ca6e8ba75c53" Mar 10 19:22:08 crc kubenswrapper[4861]: E0310 19:22:08.960192 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qttbr_openshift-machine-config-operator(771189c2-452d-4204-a0b7-abfe9ba62bd0)\"" pod="openshift-machine-config-operator/machine-config-daemon-qttbr" podUID="771189c2-452d-4204-a0b7-abfe9ba62bd0" Mar 10 19:22:22 crc kubenswrapper[4861]: I0310 19:22:22.958937 4861 scope.go:117] "RemoveContainer" containerID="bb8a532e73b13a25b9b100c2f6a1c525fcee26bf3c9ef709c2f4ca6e8ba75c53" Mar 10 19:22:23 crc kubenswrapper[4861]: I0310 19:22:23.969030 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qttbr" event={"ID":"771189c2-452d-4204-a0b7-abfe9ba62bd0","Type":"ContainerStarted","Data":"9e8ec457e3a6bb6b7db7ecb4612b2a3c3581fafcd471645016dbd898fb33f4d9"} Mar 10 19:22:53 crc kubenswrapper[4861]: I0310 19:22:53.536670 4861 scope.go:117] "RemoveContainer" 
containerID="8bf7c3623a0c4efb8b1ec2dd3aff93191a484c532e18becc483c44b072ce97b6" Mar 10 19:23:43 crc kubenswrapper[4861]: I0310 19:23:43.856348 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-zt2g4"] Mar 10 19:23:43 crc kubenswrapper[4861]: E0310 19:23:43.858336 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f3849f7d-1618-43e8-b5c8-903e96bece99" containerName="oc" Mar 10 19:23:43 crc kubenswrapper[4861]: I0310 19:23:43.858393 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="f3849f7d-1618-43e8-b5c8-903e96bece99" containerName="oc" Mar 10 19:23:43 crc kubenswrapper[4861]: I0310 19:23:43.858684 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="f3849f7d-1618-43e8-b5c8-903e96bece99" containerName="oc" Mar 10 19:23:43 crc kubenswrapper[4861]: I0310 19:23:43.860553 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-zt2g4" Mar 10 19:23:43 crc kubenswrapper[4861]: I0310 19:23:43.874830 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-zt2g4"] Mar 10 19:23:43 crc kubenswrapper[4861]: I0310 19:23:43.992441 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/37467c27-de55-4d13-81db-a1fde78affec-utilities\") pod \"redhat-operators-zt2g4\" (UID: \"37467c27-de55-4d13-81db-a1fde78affec\") " pod="openshift-marketplace/redhat-operators-zt2g4" Mar 10 19:23:43 crc kubenswrapper[4861]: I0310 19:23:43.992991 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/37467c27-de55-4d13-81db-a1fde78affec-catalog-content\") pod \"redhat-operators-zt2g4\" (UID: \"37467c27-de55-4d13-81db-a1fde78affec\") " pod="openshift-marketplace/redhat-operators-zt2g4" Mar 10 19:23:43 crc 
kubenswrapper[4861]: I0310 19:23:43.993129 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m56rq\" (UniqueName: \"kubernetes.io/projected/37467c27-de55-4d13-81db-a1fde78affec-kube-api-access-m56rq\") pod \"redhat-operators-zt2g4\" (UID: \"37467c27-de55-4d13-81db-a1fde78affec\") " pod="openshift-marketplace/redhat-operators-zt2g4" Mar 10 19:23:44 crc kubenswrapper[4861]: I0310 19:23:44.094133 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/37467c27-de55-4d13-81db-a1fde78affec-utilities\") pod \"redhat-operators-zt2g4\" (UID: \"37467c27-de55-4d13-81db-a1fde78affec\") " pod="openshift-marketplace/redhat-operators-zt2g4" Mar 10 19:23:44 crc kubenswrapper[4861]: I0310 19:23:44.094224 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/37467c27-de55-4d13-81db-a1fde78affec-catalog-content\") pod \"redhat-operators-zt2g4\" (UID: \"37467c27-de55-4d13-81db-a1fde78affec\") " pod="openshift-marketplace/redhat-operators-zt2g4" Mar 10 19:23:44 crc kubenswrapper[4861]: I0310 19:23:44.094276 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m56rq\" (UniqueName: \"kubernetes.io/projected/37467c27-de55-4d13-81db-a1fde78affec-kube-api-access-m56rq\") pod \"redhat-operators-zt2g4\" (UID: \"37467c27-de55-4d13-81db-a1fde78affec\") " pod="openshift-marketplace/redhat-operators-zt2g4" Mar 10 19:23:44 crc kubenswrapper[4861]: I0310 19:23:44.095375 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/37467c27-de55-4d13-81db-a1fde78affec-utilities\") pod \"redhat-operators-zt2g4\" (UID: \"37467c27-de55-4d13-81db-a1fde78affec\") " pod="openshift-marketplace/redhat-operators-zt2g4" Mar 10 19:23:44 crc kubenswrapper[4861]: I0310 
19:23:44.095419 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/37467c27-de55-4d13-81db-a1fde78affec-catalog-content\") pod \"redhat-operators-zt2g4\" (UID: \"37467c27-de55-4d13-81db-a1fde78affec\") " pod="openshift-marketplace/redhat-operators-zt2g4" Mar 10 19:23:44 crc kubenswrapper[4861]: I0310 19:23:44.119447 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m56rq\" (UniqueName: \"kubernetes.io/projected/37467c27-de55-4d13-81db-a1fde78affec-kube-api-access-m56rq\") pod \"redhat-operators-zt2g4\" (UID: \"37467c27-de55-4d13-81db-a1fde78affec\") " pod="openshift-marketplace/redhat-operators-zt2g4" Mar 10 19:23:44 crc kubenswrapper[4861]: I0310 19:23:44.229544 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-zt2g4" Mar 10 19:23:44 crc kubenswrapper[4861]: I0310 19:23:44.711658 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-zt2g4"] Mar 10 19:23:44 crc kubenswrapper[4861]: I0310 19:23:44.853266 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zt2g4" event={"ID":"37467c27-de55-4d13-81db-a1fde78affec","Type":"ContainerStarted","Data":"a6c423e3b0151c2b8596cba4853d6e3d8fd5b1ba77d46c18b3fa8556c12b8ba0"} Mar 10 19:23:45 crc kubenswrapper[4861]: I0310 19:23:45.860952 4861 generic.go:334] "Generic (PLEG): container finished" podID="37467c27-de55-4d13-81db-a1fde78affec" containerID="4c55d9353f28f4d797fb8ed9e31185adf6bbe084208380418668e8b31073c563" exitCode=0 Mar 10 19:23:45 crc kubenswrapper[4861]: I0310 19:23:45.860992 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zt2g4" event={"ID":"37467c27-de55-4d13-81db-a1fde78affec","Type":"ContainerDied","Data":"4c55d9353f28f4d797fb8ed9e31185adf6bbe084208380418668e8b31073c563"} Mar 10 
19:23:46 crc kubenswrapper[4861]: I0310 19:23:46.877179 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zt2g4" event={"ID":"37467c27-de55-4d13-81db-a1fde78affec","Type":"ContainerStarted","Data":"a590410b55da377cf411bb2040f7d386d83093eb19d57c1600d8193bea0b11ea"} Mar 10 19:23:47 crc kubenswrapper[4861]: I0310 19:23:47.890696 4861 generic.go:334] "Generic (PLEG): container finished" podID="37467c27-de55-4d13-81db-a1fde78affec" containerID="a590410b55da377cf411bb2040f7d386d83093eb19d57c1600d8193bea0b11ea" exitCode=0 Mar 10 19:23:47 crc kubenswrapper[4861]: I0310 19:23:47.890826 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zt2g4" event={"ID":"37467c27-de55-4d13-81db-a1fde78affec","Type":"ContainerDied","Data":"a590410b55da377cf411bb2040f7d386d83093eb19d57c1600d8193bea0b11ea"} Mar 10 19:23:48 crc kubenswrapper[4861]: I0310 19:23:48.901087 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zt2g4" event={"ID":"37467c27-de55-4d13-81db-a1fde78affec","Type":"ContainerStarted","Data":"fe23e99e1514026e9d82bb18cb8cf986c81331f6ca8d94d41cea323991f0a39f"} Mar 10 19:23:48 crc kubenswrapper[4861]: I0310 19:23:48.920041 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-zt2g4" podStartSLOduration=3.382442498 podStartE2EDuration="5.920026008s" podCreationTimestamp="2026-03-10 19:23:43 +0000 UTC" firstStartedPulling="2026-03-10 19:23:45.863355301 +0000 UTC m=+2169.626791261" lastFinishedPulling="2026-03-10 19:23:48.400938781 +0000 UTC m=+2172.164374771" observedRunningTime="2026-03-10 19:23:48.917191044 +0000 UTC m=+2172.680627014" watchObservedRunningTime="2026-03-10 19:23:48.920026008 +0000 UTC m=+2172.683461968" Mar 10 19:23:54 crc kubenswrapper[4861]: I0310 19:23:54.230042 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-marketplace/redhat-operators-zt2g4" Mar 10 19:23:54 crc kubenswrapper[4861]: I0310 19:23:54.230858 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-zt2g4" Mar 10 19:23:55 crc kubenswrapper[4861]: I0310 19:23:55.298746 4861 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-zt2g4" podUID="37467c27-de55-4d13-81db-a1fde78affec" containerName="registry-server" probeResult="failure" output=< Mar 10 19:23:55 crc kubenswrapper[4861]: timeout: failed to connect service ":50051" within 1s Mar 10 19:23:55 crc kubenswrapper[4861]: > Mar 10 19:24:00 crc kubenswrapper[4861]: I0310 19:24:00.162293 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29552844-m2p9c"] Mar 10 19:24:00 crc kubenswrapper[4861]: I0310 19:24:00.164363 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29552844-m2p9c" Mar 10 19:24:00 crc kubenswrapper[4861]: I0310 19:24:00.167281 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 10 19:24:00 crc kubenswrapper[4861]: I0310 19:24:00.167982 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-gfbj2" Mar 10 19:24:00 crc kubenswrapper[4861]: I0310 19:24:00.167983 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 10 19:24:00 crc kubenswrapper[4861]: I0310 19:24:00.170860 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29552844-m2p9c"] Mar 10 19:24:00 crc kubenswrapper[4861]: I0310 19:24:00.253661 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-74wrk\" (UniqueName: 
\"kubernetes.io/projected/466be0e2-4900-4a79-ba16-62f30de67914-kube-api-access-74wrk\") pod \"auto-csr-approver-29552844-m2p9c\" (UID: \"466be0e2-4900-4a79-ba16-62f30de67914\") " pod="openshift-infra/auto-csr-approver-29552844-m2p9c" Mar 10 19:24:00 crc kubenswrapper[4861]: I0310 19:24:00.355846 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-74wrk\" (UniqueName: \"kubernetes.io/projected/466be0e2-4900-4a79-ba16-62f30de67914-kube-api-access-74wrk\") pod \"auto-csr-approver-29552844-m2p9c\" (UID: \"466be0e2-4900-4a79-ba16-62f30de67914\") " pod="openshift-infra/auto-csr-approver-29552844-m2p9c" Mar 10 19:24:00 crc kubenswrapper[4861]: I0310 19:24:00.385620 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-74wrk\" (UniqueName: \"kubernetes.io/projected/466be0e2-4900-4a79-ba16-62f30de67914-kube-api-access-74wrk\") pod \"auto-csr-approver-29552844-m2p9c\" (UID: \"466be0e2-4900-4a79-ba16-62f30de67914\") " pod="openshift-infra/auto-csr-approver-29552844-m2p9c" Mar 10 19:24:00 crc kubenswrapper[4861]: I0310 19:24:00.483784 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29552844-m2p9c" Mar 10 19:24:00 crc kubenswrapper[4861]: I0310 19:24:00.737618 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29552844-m2p9c"] Mar 10 19:24:00 crc kubenswrapper[4861]: W0310 19:24:00.765665 4861 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod466be0e2_4900_4a79_ba16_62f30de67914.slice/crio-18768f68b5a71b0878b42af0306174ee7ec02d52b5aea955d99ad0878221b2eb WatchSource:0}: Error finding container 18768f68b5a71b0878b42af0306174ee7ec02d52b5aea955d99ad0878221b2eb: Status 404 returned error can't find the container with id 18768f68b5a71b0878b42af0306174ee7ec02d52b5aea955d99ad0878221b2eb Mar 10 19:24:01 crc kubenswrapper[4861]: I0310 19:24:01.015990 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29552844-m2p9c" event={"ID":"466be0e2-4900-4a79-ba16-62f30de67914","Type":"ContainerStarted","Data":"18768f68b5a71b0878b42af0306174ee7ec02d52b5aea955d99ad0878221b2eb"} Mar 10 19:24:02 crc kubenswrapper[4861]: I0310 19:24:02.027823 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29552844-m2p9c" event={"ID":"466be0e2-4900-4a79-ba16-62f30de67914","Type":"ContainerStarted","Data":"12b926404431693794fb2c81c0dbddf60eefa319737c22f7478b418122b8decc"} Mar 10 19:24:02 crc kubenswrapper[4861]: I0310 19:24:02.046236 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29552844-m2p9c" podStartSLOduration=1.1536084309999999 podStartE2EDuration="2.046220338s" podCreationTimestamp="2026-03-10 19:24:00 +0000 UTC" firstStartedPulling="2026-03-10 19:24:00.770051159 +0000 UTC m=+2184.533487159" lastFinishedPulling="2026-03-10 19:24:01.662663076 +0000 UTC m=+2185.426099066" observedRunningTime="2026-03-10 19:24:02.043561968 +0000 UTC 
m=+2185.806997968" watchObservedRunningTime="2026-03-10 19:24:02.046220338 +0000 UTC m=+2185.809656298" Mar 10 19:24:03 crc kubenswrapper[4861]: I0310 19:24:03.038144 4861 generic.go:334] "Generic (PLEG): container finished" podID="466be0e2-4900-4a79-ba16-62f30de67914" containerID="12b926404431693794fb2c81c0dbddf60eefa319737c22f7478b418122b8decc" exitCode=0 Mar 10 19:24:03 crc kubenswrapper[4861]: I0310 19:24:03.038206 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29552844-m2p9c" event={"ID":"466be0e2-4900-4a79-ba16-62f30de67914","Type":"ContainerDied","Data":"12b926404431693794fb2c81c0dbddf60eefa319737c22f7478b418122b8decc"} Mar 10 19:24:04 crc kubenswrapper[4861]: I0310 19:24:04.329927 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-zt2g4" Mar 10 19:24:04 crc kubenswrapper[4861]: I0310 19:24:04.438061 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-zt2g4" Mar 10 19:24:04 crc kubenswrapper[4861]: I0310 19:24:04.504149 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29552844-m2p9c" Mar 10 19:24:04 crc kubenswrapper[4861]: I0310 19:24:04.583295 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-zt2g4"] Mar 10 19:24:04 crc kubenswrapper[4861]: I0310 19:24:04.642351 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-74wrk\" (UniqueName: \"kubernetes.io/projected/466be0e2-4900-4a79-ba16-62f30de67914-kube-api-access-74wrk\") pod \"466be0e2-4900-4a79-ba16-62f30de67914\" (UID: \"466be0e2-4900-4a79-ba16-62f30de67914\") " Mar 10 19:24:04 crc kubenswrapper[4861]: I0310 19:24:04.650916 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/466be0e2-4900-4a79-ba16-62f30de67914-kube-api-access-74wrk" (OuterVolumeSpecName: "kube-api-access-74wrk") pod "466be0e2-4900-4a79-ba16-62f30de67914" (UID: "466be0e2-4900-4a79-ba16-62f30de67914"). InnerVolumeSpecName "kube-api-access-74wrk". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 10 19:24:04 crc kubenswrapper[4861]: I0310 19:24:04.745299 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-74wrk\" (UniqueName: \"kubernetes.io/projected/466be0e2-4900-4a79-ba16-62f30de67914-kube-api-access-74wrk\") on node \"crc\" DevicePath \"\"" Mar 10 19:24:05 crc kubenswrapper[4861]: I0310 19:24:05.065108 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29552844-m2p9c" Mar 10 19:24:05 crc kubenswrapper[4861]: I0310 19:24:05.065508 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29552844-m2p9c" event={"ID":"466be0e2-4900-4a79-ba16-62f30de67914","Type":"ContainerDied","Data":"18768f68b5a71b0878b42af0306174ee7ec02d52b5aea955d99ad0878221b2eb"} Mar 10 19:24:05 crc kubenswrapper[4861]: I0310 19:24:05.065621 4861 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="18768f68b5a71b0878b42af0306174ee7ec02d52b5aea955d99ad0878221b2eb" Mar 10 19:24:05 crc kubenswrapper[4861]: I0310 19:24:05.142622 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29552838-5l2vc"] Mar 10 19:24:05 crc kubenswrapper[4861]: I0310 19:24:05.153264 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29552838-5l2vc"] Mar 10 19:24:06 crc kubenswrapper[4861]: I0310 19:24:06.077425 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-zt2g4" podUID="37467c27-de55-4d13-81db-a1fde78affec" containerName="registry-server" containerID="cri-o://fe23e99e1514026e9d82bb18cb8cf986c81331f6ca8d94d41cea323991f0a39f" gracePeriod=2 Mar 10 19:24:06 crc kubenswrapper[4861]: I0310 19:24:06.577631 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-zt2g4" Mar 10 19:24:06 crc kubenswrapper[4861]: I0310 19:24:06.674830 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m56rq\" (UniqueName: \"kubernetes.io/projected/37467c27-de55-4d13-81db-a1fde78affec-kube-api-access-m56rq\") pod \"37467c27-de55-4d13-81db-a1fde78affec\" (UID: \"37467c27-de55-4d13-81db-a1fde78affec\") " Mar 10 19:24:06 crc kubenswrapper[4861]: I0310 19:24:06.674964 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/37467c27-de55-4d13-81db-a1fde78affec-utilities\") pod \"37467c27-de55-4d13-81db-a1fde78affec\" (UID: \"37467c27-de55-4d13-81db-a1fde78affec\") " Mar 10 19:24:06 crc kubenswrapper[4861]: I0310 19:24:06.675062 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/37467c27-de55-4d13-81db-a1fde78affec-catalog-content\") pod \"37467c27-de55-4d13-81db-a1fde78affec\" (UID: \"37467c27-de55-4d13-81db-a1fde78affec\") " Mar 10 19:24:06 crc kubenswrapper[4861]: I0310 19:24:06.676549 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/37467c27-de55-4d13-81db-a1fde78affec-utilities" (OuterVolumeSpecName: "utilities") pod "37467c27-de55-4d13-81db-a1fde78affec" (UID: "37467c27-de55-4d13-81db-a1fde78affec"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 10 19:24:06 crc kubenswrapper[4861]: I0310 19:24:06.680897 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/37467c27-de55-4d13-81db-a1fde78affec-kube-api-access-m56rq" (OuterVolumeSpecName: "kube-api-access-m56rq") pod "37467c27-de55-4d13-81db-a1fde78affec" (UID: "37467c27-de55-4d13-81db-a1fde78affec"). InnerVolumeSpecName "kube-api-access-m56rq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 10 19:24:06 crc kubenswrapper[4861]: I0310 19:24:06.776488 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m56rq\" (UniqueName: \"kubernetes.io/projected/37467c27-de55-4d13-81db-a1fde78affec-kube-api-access-m56rq\") on node \"crc\" DevicePath \"\"" Mar 10 19:24:06 crc kubenswrapper[4861]: I0310 19:24:06.776531 4861 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/37467c27-de55-4d13-81db-a1fde78affec-utilities\") on node \"crc\" DevicePath \"\"" Mar 10 19:24:06 crc kubenswrapper[4861]: I0310 19:24:06.849002 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/37467c27-de55-4d13-81db-a1fde78affec-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "37467c27-de55-4d13-81db-a1fde78affec" (UID: "37467c27-de55-4d13-81db-a1fde78affec"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 10 19:24:06 crc kubenswrapper[4861]: I0310 19:24:06.878338 4861 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/37467c27-de55-4d13-81db-a1fde78affec-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 10 19:24:06 crc kubenswrapper[4861]: I0310 19:24:06.970631 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3f2ff452-be6e-427e-84e8-87e7107f87a2" path="/var/lib/kubelet/pods/3f2ff452-be6e-427e-84e8-87e7107f87a2/volumes" Mar 10 19:24:07 crc kubenswrapper[4861]: I0310 19:24:07.088874 4861 generic.go:334] "Generic (PLEG): container finished" podID="37467c27-de55-4d13-81db-a1fde78affec" containerID="fe23e99e1514026e9d82bb18cb8cf986c81331f6ca8d94d41cea323991f0a39f" exitCode=0 Mar 10 19:24:07 crc kubenswrapper[4861]: I0310 19:24:07.088917 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zt2g4" 
event={"ID":"37467c27-de55-4d13-81db-a1fde78affec","Type":"ContainerDied","Data":"fe23e99e1514026e9d82bb18cb8cf986c81331f6ca8d94d41cea323991f0a39f"} Mar 10 19:24:07 crc kubenswrapper[4861]: I0310 19:24:07.088944 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zt2g4" event={"ID":"37467c27-de55-4d13-81db-a1fde78affec","Type":"ContainerDied","Data":"a6c423e3b0151c2b8596cba4853d6e3d8fd5b1ba77d46c18b3fa8556c12b8ba0"} Mar 10 19:24:07 crc kubenswrapper[4861]: I0310 19:24:07.088960 4861 scope.go:117] "RemoveContainer" containerID="fe23e99e1514026e9d82bb18cb8cf986c81331f6ca8d94d41cea323991f0a39f" Mar 10 19:24:07 crc kubenswrapper[4861]: I0310 19:24:07.090919 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-zt2g4" Mar 10 19:24:07 crc kubenswrapper[4861]: I0310 19:24:07.115826 4861 scope.go:117] "RemoveContainer" containerID="a590410b55da377cf411bb2040f7d386d83093eb19d57c1600d8193bea0b11ea" Mar 10 19:24:07 crc kubenswrapper[4861]: I0310 19:24:07.123073 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-zt2g4"] Mar 10 19:24:07 crc kubenswrapper[4861]: I0310 19:24:07.135823 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-zt2g4"] Mar 10 19:24:07 crc kubenswrapper[4861]: I0310 19:24:07.141778 4861 scope.go:117] "RemoveContainer" containerID="4c55d9353f28f4d797fb8ed9e31185adf6bbe084208380418668e8b31073c563" Mar 10 19:24:07 crc kubenswrapper[4861]: I0310 19:24:07.170300 4861 scope.go:117] "RemoveContainer" containerID="fe23e99e1514026e9d82bb18cb8cf986c81331f6ca8d94d41cea323991f0a39f" Mar 10 19:24:07 crc kubenswrapper[4861]: E0310 19:24:07.170872 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fe23e99e1514026e9d82bb18cb8cf986c81331f6ca8d94d41cea323991f0a39f\": container with ID 
starting with fe23e99e1514026e9d82bb18cb8cf986c81331f6ca8d94d41cea323991f0a39f not found: ID does not exist" containerID="fe23e99e1514026e9d82bb18cb8cf986c81331f6ca8d94d41cea323991f0a39f" Mar 10 19:24:07 crc kubenswrapper[4861]: I0310 19:24:07.170935 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fe23e99e1514026e9d82bb18cb8cf986c81331f6ca8d94d41cea323991f0a39f"} err="failed to get container status \"fe23e99e1514026e9d82bb18cb8cf986c81331f6ca8d94d41cea323991f0a39f\": rpc error: code = NotFound desc = could not find container \"fe23e99e1514026e9d82bb18cb8cf986c81331f6ca8d94d41cea323991f0a39f\": container with ID starting with fe23e99e1514026e9d82bb18cb8cf986c81331f6ca8d94d41cea323991f0a39f not found: ID does not exist" Mar 10 19:24:07 crc kubenswrapper[4861]: I0310 19:24:07.170978 4861 scope.go:117] "RemoveContainer" containerID="a590410b55da377cf411bb2040f7d386d83093eb19d57c1600d8193bea0b11ea" Mar 10 19:24:07 crc kubenswrapper[4861]: E0310 19:24:07.171534 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a590410b55da377cf411bb2040f7d386d83093eb19d57c1600d8193bea0b11ea\": container with ID starting with a590410b55da377cf411bb2040f7d386d83093eb19d57c1600d8193bea0b11ea not found: ID does not exist" containerID="a590410b55da377cf411bb2040f7d386d83093eb19d57c1600d8193bea0b11ea" Mar 10 19:24:07 crc kubenswrapper[4861]: I0310 19:24:07.171653 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a590410b55da377cf411bb2040f7d386d83093eb19d57c1600d8193bea0b11ea"} err="failed to get container status \"a590410b55da377cf411bb2040f7d386d83093eb19d57c1600d8193bea0b11ea\": rpc error: code = NotFound desc = could not find container \"a590410b55da377cf411bb2040f7d386d83093eb19d57c1600d8193bea0b11ea\": container with ID starting with a590410b55da377cf411bb2040f7d386d83093eb19d57c1600d8193bea0b11ea not found: 
ID does not exist" Mar 10 19:24:07 crc kubenswrapper[4861]: I0310 19:24:07.171682 4861 scope.go:117] "RemoveContainer" containerID="4c55d9353f28f4d797fb8ed9e31185adf6bbe084208380418668e8b31073c563" Mar 10 19:24:07 crc kubenswrapper[4861]: E0310 19:24:07.172407 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4c55d9353f28f4d797fb8ed9e31185adf6bbe084208380418668e8b31073c563\": container with ID starting with 4c55d9353f28f4d797fb8ed9e31185adf6bbe084208380418668e8b31073c563 not found: ID does not exist" containerID="4c55d9353f28f4d797fb8ed9e31185adf6bbe084208380418668e8b31073c563" Mar 10 19:24:07 crc kubenswrapper[4861]: I0310 19:24:07.172457 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4c55d9353f28f4d797fb8ed9e31185adf6bbe084208380418668e8b31073c563"} err="failed to get container status \"4c55d9353f28f4d797fb8ed9e31185adf6bbe084208380418668e8b31073c563\": rpc error: code = NotFound desc = could not find container \"4c55d9353f28f4d797fb8ed9e31185adf6bbe084208380418668e8b31073c563\": container with ID starting with 4c55d9353f28f4d797fb8ed9e31185adf6bbe084208380418668e8b31073c563 not found: ID does not exist" Mar 10 19:24:08 crc kubenswrapper[4861]: I0310 19:24:08.972424 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="37467c27-de55-4d13-81db-a1fde78affec" path="/var/lib/kubelet/pods/37467c27-de55-4d13-81db-a1fde78affec/volumes" Mar 10 19:24:51 crc kubenswrapper[4861]: I0310 19:24:51.991844 4861 patch_prober.go:28] interesting pod/machine-config-daemon-qttbr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 10 19:24:51 crc kubenswrapper[4861]: I0310 19:24:51.992430 4861 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-qttbr" podUID="771189c2-452d-4204-a0b7-abfe9ba62bd0" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 10 19:24:53 crc kubenswrapper[4861]: I0310 19:24:53.640405 4861 scope.go:117] "RemoveContainer" containerID="58cfd3b4243bdeeca17f93579b19eef5333d1cea780b9b326c5f70bda9a525ee" Mar 10 19:25:21 crc kubenswrapper[4861]: I0310 19:25:21.992110 4861 patch_prober.go:28] interesting pod/machine-config-daemon-qttbr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 10 19:25:21 crc kubenswrapper[4861]: I0310 19:25:21.992840 4861 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qttbr" podUID="771189c2-452d-4204-a0b7-abfe9ba62bd0" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 10 19:25:47 crc kubenswrapper[4861]: I0310 19:25:47.441897 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-w6r8m"] Mar 10 19:25:47 crc kubenswrapper[4861]: E0310 19:25:47.446117 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="466be0e2-4900-4a79-ba16-62f30de67914" containerName="oc" Mar 10 19:25:47 crc kubenswrapper[4861]: I0310 19:25:47.446316 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="466be0e2-4900-4a79-ba16-62f30de67914" containerName="oc" Mar 10 19:25:47 crc kubenswrapper[4861]: E0310 19:25:47.446518 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="37467c27-de55-4d13-81db-a1fde78affec" containerName="extract-content" Mar 10 19:25:47 crc kubenswrapper[4861]: I0310 
19:25:47.446644 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="37467c27-de55-4d13-81db-a1fde78affec" containerName="extract-content" Mar 10 19:25:47 crc kubenswrapper[4861]: E0310 19:25:47.446800 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="37467c27-de55-4d13-81db-a1fde78affec" containerName="registry-server" Mar 10 19:25:47 crc kubenswrapper[4861]: I0310 19:25:47.446928 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="37467c27-de55-4d13-81db-a1fde78affec" containerName="registry-server" Mar 10 19:25:47 crc kubenswrapper[4861]: E0310 19:25:47.447322 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="37467c27-de55-4d13-81db-a1fde78affec" containerName="extract-utilities" Mar 10 19:25:47 crc kubenswrapper[4861]: I0310 19:25:47.447445 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="37467c27-de55-4d13-81db-a1fde78affec" containerName="extract-utilities" Mar 10 19:25:47 crc kubenswrapper[4861]: I0310 19:25:47.448011 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="466be0e2-4900-4a79-ba16-62f30de67914" containerName="oc" Mar 10 19:25:47 crc kubenswrapper[4861]: I0310 19:25:47.448273 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="37467c27-de55-4d13-81db-a1fde78affec" containerName="registry-server" Mar 10 19:25:47 crc kubenswrapper[4861]: I0310 19:25:47.450406 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-w6r8m" Mar 10 19:25:47 crc kubenswrapper[4861]: I0310 19:25:47.456336 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-w6r8m"] Mar 10 19:25:47 crc kubenswrapper[4861]: I0310 19:25:47.539555 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6593768e-24f6-47c2-ba9f-6a200a05931a-utilities\") pod \"redhat-marketplace-w6r8m\" (UID: \"6593768e-24f6-47c2-ba9f-6a200a05931a\") " pod="openshift-marketplace/redhat-marketplace-w6r8m" Mar 10 19:25:47 crc kubenswrapper[4861]: I0310 19:25:47.539637 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6593768e-24f6-47c2-ba9f-6a200a05931a-catalog-content\") pod \"redhat-marketplace-w6r8m\" (UID: \"6593768e-24f6-47c2-ba9f-6a200a05931a\") " pod="openshift-marketplace/redhat-marketplace-w6r8m" Mar 10 19:25:47 crc kubenswrapper[4861]: I0310 19:25:47.539955 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gl8tz\" (UniqueName: \"kubernetes.io/projected/6593768e-24f6-47c2-ba9f-6a200a05931a-kube-api-access-gl8tz\") pod \"redhat-marketplace-w6r8m\" (UID: \"6593768e-24f6-47c2-ba9f-6a200a05931a\") " pod="openshift-marketplace/redhat-marketplace-w6r8m" Mar 10 19:25:47 crc kubenswrapper[4861]: I0310 19:25:47.642126 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gl8tz\" (UniqueName: \"kubernetes.io/projected/6593768e-24f6-47c2-ba9f-6a200a05931a-kube-api-access-gl8tz\") pod \"redhat-marketplace-w6r8m\" (UID: \"6593768e-24f6-47c2-ba9f-6a200a05931a\") " pod="openshift-marketplace/redhat-marketplace-w6r8m" Mar 10 19:25:47 crc kubenswrapper[4861]: I0310 19:25:47.642270 4861 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6593768e-24f6-47c2-ba9f-6a200a05931a-utilities\") pod \"redhat-marketplace-w6r8m\" (UID: \"6593768e-24f6-47c2-ba9f-6a200a05931a\") " pod="openshift-marketplace/redhat-marketplace-w6r8m" Mar 10 19:25:47 crc kubenswrapper[4861]: I0310 19:25:47.642312 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6593768e-24f6-47c2-ba9f-6a200a05931a-catalog-content\") pod \"redhat-marketplace-w6r8m\" (UID: \"6593768e-24f6-47c2-ba9f-6a200a05931a\") " pod="openshift-marketplace/redhat-marketplace-w6r8m" Mar 10 19:25:47 crc kubenswrapper[4861]: I0310 19:25:47.642951 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6593768e-24f6-47c2-ba9f-6a200a05931a-catalog-content\") pod \"redhat-marketplace-w6r8m\" (UID: \"6593768e-24f6-47c2-ba9f-6a200a05931a\") " pod="openshift-marketplace/redhat-marketplace-w6r8m" Mar 10 19:25:47 crc kubenswrapper[4861]: I0310 19:25:47.643305 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6593768e-24f6-47c2-ba9f-6a200a05931a-utilities\") pod \"redhat-marketplace-w6r8m\" (UID: \"6593768e-24f6-47c2-ba9f-6a200a05931a\") " pod="openshift-marketplace/redhat-marketplace-w6r8m" Mar 10 19:25:47 crc kubenswrapper[4861]: I0310 19:25:47.684672 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gl8tz\" (UniqueName: \"kubernetes.io/projected/6593768e-24f6-47c2-ba9f-6a200a05931a-kube-api-access-gl8tz\") pod \"redhat-marketplace-w6r8m\" (UID: \"6593768e-24f6-47c2-ba9f-6a200a05931a\") " pod="openshift-marketplace/redhat-marketplace-w6r8m" Mar 10 19:25:47 crc kubenswrapper[4861]: I0310 19:25:47.830193 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-w6r8m" Mar 10 19:25:48 crc kubenswrapper[4861]: I0310 19:25:48.338140 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-w6r8m"] Mar 10 19:25:49 crc kubenswrapper[4861]: I0310 19:25:49.042564 4861 generic.go:334] "Generic (PLEG): container finished" podID="6593768e-24f6-47c2-ba9f-6a200a05931a" containerID="743c56707fb29fc7a03490b85cb9b43c3cb7147ea4f9e3d3407a426c98d34c39" exitCode=0 Mar 10 19:25:49 crc kubenswrapper[4861]: I0310 19:25:49.042646 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-w6r8m" event={"ID":"6593768e-24f6-47c2-ba9f-6a200a05931a","Type":"ContainerDied","Data":"743c56707fb29fc7a03490b85cb9b43c3cb7147ea4f9e3d3407a426c98d34c39"} Mar 10 19:25:49 crc kubenswrapper[4861]: I0310 19:25:49.043035 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-w6r8m" event={"ID":"6593768e-24f6-47c2-ba9f-6a200a05931a","Type":"ContainerStarted","Data":"174995745fd7843be383f6b6f5899cbb81e37ec5debaf656dc44e5cb223dbf5a"} Mar 10 19:25:49 crc kubenswrapper[4861]: I0310 19:25:49.046836 4861 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 10 19:25:49 crc kubenswrapper[4861]: I0310 19:25:49.833114 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-txwrx"] Mar 10 19:25:49 crc kubenswrapper[4861]: I0310 19:25:49.835419 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-txwrx" Mar 10 19:25:49 crc kubenswrapper[4861]: I0310 19:25:49.899899 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-txwrx"] Mar 10 19:25:49 crc kubenswrapper[4861]: I0310 19:25:49.985784 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mfp2g\" (UniqueName: \"kubernetes.io/projected/cc3075bb-f853-4253-b49b-c98b2ddf55e2-kube-api-access-mfp2g\") pod \"certified-operators-txwrx\" (UID: \"cc3075bb-f853-4253-b49b-c98b2ddf55e2\") " pod="openshift-marketplace/certified-operators-txwrx" Mar 10 19:25:49 crc kubenswrapper[4861]: I0310 19:25:49.986162 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cc3075bb-f853-4253-b49b-c98b2ddf55e2-utilities\") pod \"certified-operators-txwrx\" (UID: \"cc3075bb-f853-4253-b49b-c98b2ddf55e2\") " pod="openshift-marketplace/certified-operators-txwrx" Mar 10 19:25:49 crc kubenswrapper[4861]: I0310 19:25:49.986227 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cc3075bb-f853-4253-b49b-c98b2ddf55e2-catalog-content\") pod \"certified-operators-txwrx\" (UID: \"cc3075bb-f853-4253-b49b-c98b2ddf55e2\") " pod="openshift-marketplace/certified-operators-txwrx" Mar 10 19:25:50 crc kubenswrapper[4861]: I0310 19:25:50.053559 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-w6r8m" event={"ID":"6593768e-24f6-47c2-ba9f-6a200a05931a","Type":"ContainerStarted","Data":"cbda4390185c7dd87c20da37e22b24286963f5c5de7fba7c71cff1f73ff41ab2"} Mar 10 19:25:50 crc kubenswrapper[4861]: I0310 19:25:50.087678 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/cc3075bb-f853-4253-b49b-c98b2ddf55e2-catalog-content\") pod \"certified-operators-txwrx\" (UID: \"cc3075bb-f853-4253-b49b-c98b2ddf55e2\") " pod="openshift-marketplace/certified-operators-txwrx" Mar 10 19:25:50 crc kubenswrapper[4861]: I0310 19:25:50.088902 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cc3075bb-f853-4253-b49b-c98b2ddf55e2-catalog-content\") pod \"certified-operators-txwrx\" (UID: \"cc3075bb-f853-4253-b49b-c98b2ddf55e2\") " pod="openshift-marketplace/certified-operators-txwrx" Mar 10 19:25:50 crc kubenswrapper[4861]: I0310 19:25:50.091354 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mfp2g\" (UniqueName: \"kubernetes.io/projected/cc3075bb-f853-4253-b49b-c98b2ddf55e2-kube-api-access-mfp2g\") pod \"certified-operators-txwrx\" (UID: \"cc3075bb-f853-4253-b49b-c98b2ddf55e2\") " pod="openshift-marketplace/certified-operators-txwrx" Mar 10 19:25:50 crc kubenswrapper[4861]: I0310 19:25:50.092443 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cc3075bb-f853-4253-b49b-c98b2ddf55e2-utilities\") pod \"certified-operators-txwrx\" (UID: \"cc3075bb-f853-4253-b49b-c98b2ddf55e2\") " pod="openshift-marketplace/certified-operators-txwrx" Mar 10 19:25:50 crc kubenswrapper[4861]: I0310 19:25:50.093191 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cc3075bb-f853-4253-b49b-c98b2ddf55e2-utilities\") pod \"certified-operators-txwrx\" (UID: \"cc3075bb-f853-4253-b49b-c98b2ddf55e2\") " pod="openshift-marketplace/certified-operators-txwrx" Mar 10 19:25:50 crc kubenswrapper[4861]: I0310 19:25:50.114924 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mfp2g\" (UniqueName: 
\"kubernetes.io/projected/cc3075bb-f853-4253-b49b-c98b2ddf55e2-kube-api-access-mfp2g\") pod \"certified-operators-txwrx\" (UID: \"cc3075bb-f853-4253-b49b-c98b2ddf55e2\") " pod="openshift-marketplace/certified-operators-txwrx" Mar 10 19:25:50 crc kubenswrapper[4861]: I0310 19:25:50.191499 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-txwrx" Mar 10 19:25:50 crc kubenswrapper[4861]: I0310 19:25:50.661006 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-txwrx"] Mar 10 19:25:51 crc kubenswrapper[4861]: I0310 19:25:51.065507 4861 generic.go:334] "Generic (PLEG): container finished" podID="cc3075bb-f853-4253-b49b-c98b2ddf55e2" containerID="2ee03bcfbb5e4a21d8b2cb6be5363b405ae693c51c80f0022e63336d94aebfaf" exitCode=0 Mar 10 19:25:51 crc kubenswrapper[4861]: I0310 19:25:51.065623 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-txwrx" event={"ID":"cc3075bb-f853-4253-b49b-c98b2ddf55e2","Type":"ContainerDied","Data":"2ee03bcfbb5e4a21d8b2cb6be5363b405ae693c51c80f0022e63336d94aebfaf"} Mar 10 19:25:51 crc kubenswrapper[4861]: I0310 19:25:51.065700 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-txwrx" event={"ID":"cc3075bb-f853-4253-b49b-c98b2ddf55e2","Type":"ContainerStarted","Data":"8a1e37624c628ac8d458d1d08aa5ac1232534433b1f2dbe565b37585f8ff6bd2"} Mar 10 19:25:51 crc kubenswrapper[4861]: I0310 19:25:51.069860 4861 generic.go:334] "Generic (PLEG): container finished" podID="6593768e-24f6-47c2-ba9f-6a200a05931a" containerID="cbda4390185c7dd87c20da37e22b24286963f5c5de7fba7c71cff1f73ff41ab2" exitCode=0 Mar 10 19:25:51 crc kubenswrapper[4861]: I0310 19:25:51.069924 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-w6r8m" 
event={"ID":"6593768e-24f6-47c2-ba9f-6a200a05931a","Type":"ContainerDied","Data":"cbda4390185c7dd87c20da37e22b24286963f5c5de7fba7c71cff1f73ff41ab2"} Mar 10 19:25:51 crc kubenswrapper[4861]: I0310 19:25:51.992113 4861 patch_prober.go:28] interesting pod/machine-config-daemon-qttbr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 10 19:25:51 crc kubenswrapper[4861]: I0310 19:25:51.992471 4861 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qttbr" podUID="771189c2-452d-4204-a0b7-abfe9ba62bd0" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 10 19:25:51 crc kubenswrapper[4861]: I0310 19:25:51.992527 4861 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-qttbr" Mar 10 19:25:51 crc kubenswrapper[4861]: I0310 19:25:51.993426 4861 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"9e8ec457e3a6bb6b7db7ecb4612b2a3c3581fafcd471645016dbd898fb33f4d9"} pod="openshift-machine-config-operator/machine-config-daemon-qttbr" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 10 19:25:51 crc kubenswrapper[4861]: I0310 19:25:51.993544 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-qttbr" podUID="771189c2-452d-4204-a0b7-abfe9ba62bd0" containerName="machine-config-daemon" containerID="cri-o://9e8ec457e3a6bb6b7db7ecb4612b2a3c3581fafcd471645016dbd898fb33f4d9" gracePeriod=600 Mar 10 19:25:52 crc kubenswrapper[4861]: I0310 19:25:52.081506 4861 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-txwrx" event={"ID":"cc3075bb-f853-4253-b49b-c98b2ddf55e2","Type":"ContainerStarted","Data":"7cb9f154fb02c4196488a080463946091bebdfba19a312549a4d80d0acce0f39"} Mar 10 19:25:52 crc kubenswrapper[4861]: I0310 19:25:52.086682 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-w6r8m" event={"ID":"6593768e-24f6-47c2-ba9f-6a200a05931a","Type":"ContainerStarted","Data":"aa03cfe21a859aebbf59ed4062822af3a01592490bda8b9ecc20d42045cb4eea"} Mar 10 19:25:53 crc kubenswrapper[4861]: I0310 19:25:53.099251 4861 generic.go:334] "Generic (PLEG): container finished" podID="cc3075bb-f853-4253-b49b-c98b2ddf55e2" containerID="7cb9f154fb02c4196488a080463946091bebdfba19a312549a4d80d0acce0f39" exitCode=0 Mar 10 19:25:53 crc kubenswrapper[4861]: I0310 19:25:53.099613 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-txwrx" event={"ID":"cc3075bb-f853-4253-b49b-c98b2ddf55e2","Type":"ContainerDied","Data":"7cb9f154fb02c4196488a080463946091bebdfba19a312549a4d80d0acce0f39"} Mar 10 19:25:53 crc kubenswrapper[4861]: I0310 19:25:53.109429 4861 generic.go:334] "Generic (PLEG): container finished" podID="771189c2-452d-4204-a0b7-abfe9ba62bd0" containerID="9e8ec457e3a6bb6b7db7ecb4612b2a3c3581fafcd471645016dbd898fb33f4d9" exitCode=0 Mar 10 19:25:53 crc kubenswrapper[4861]: I0310 19:25:53.109503 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qttbr" event={"ID":"771189c2-452d-4204-a0b7-abfe9ba62bd0","Type":"ContainerDied","Data":"9e8ec457e3a6bb6b7db7ecb4612b2a3c3581fafcd471645016dbd898fb33f4d9"} Mar 10 19:25:53 crc kubenswrapper[4861]: I0310 19:25:53.109563 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qttbr" 
event={"ID":"771189c2-452d-4204-a0b7-abfe9ba62bd0","Type":"ContainerStarted","Data":"fd0a159c2c146400d4580f41b7ac863e4f7fe4e1d0fac7f0b219dc9bba5fbf48"} Mar 10 19:25:53 crc kubenswrapper[4861]: I0310 19:25:53.109630 4861 scope.go:117] "RemoveContainer" containerID="bb8a532e73b13a25b9b100c2f6a1c525fcee26bf3c9ef709c2f4ca6e8ba75c53" Mar 10 19:25:53 crc kubenswrapper[4861]: I0310 19:25:53.138234 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-w6r8m" podStartSLOduration=3.629854603 podStartE2EDuration="6.138206609s" podCreationTimestamp="2026-03-10 19:25:47 +0000 UTC" firstStartedPulling="2026-03-10 19:25:49.046105644 +0000 UTC m=+2292.809541634" lastFinishedPulling="2026-03-10 19:25:51.55445765 +0000 UTC m=+2295.317893640" observedRunningTime="2026-03-10 19:25:52.148060793 +0000 UTC m=+2295.911496783" watchObservedRunningTime="2026-03-10 19:25:53.138206609 +0000 UTC m=+2296.901642609" Mar 10 19:25:54 crc kubenswrapper[4861]: I0310 19:25:54.142412 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-txwrx" event={"ID":"cc3075bb-f853-4253-b49b-c98b2ddf55e2","Type":"ContainerStarted","Data":"dd22f60080f5ea7ee6457ac79d38c05503954e129cf89fa8d56653c7a9c04fc7"} Mar 10 19:25:54 crc kubenswrapper[4861]: I0310 19:25:54.172735 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-txwrx" podStartSLOduration=2.416367422 podStartE2EDuration="5.172703565s" podCreationTimestamp="2026-03-10 19:25:49 +0000 UTC" firstStartedPulling="2026-03-10 19:25:51.06746641 +0000 UTC m=+2294.830902400" lastFinishedPulling="2026-03-10 19:25:53.823802543 +0000 UTC m=+2297.587238543" observedRunningTime="2026-03-10 19:25:54.165542078 +0000 UTC m=+2297.928978048" watchObservedRunningTime="2026-03-10 19:25:54.172703565 +0000 UTC m=+2297.936139535" Mar 10 19:25:57 crc kubenswrapper[4861]: I0310 19:25:57.830804 4861 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-w6r8m" Mar 10 19:25:57 crc kubenswrapper[4861]: I0310 19:25:57.832074 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-w6r8m" Mar 10 19:25:57 crc kubenswrapper[4861]: I0310 19:25:57.904740 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-w6r8m" Mar 10 19:25:58 crc kubenswrapper[4861]: I0310 19:25:58.257041 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-w6r8m" Mar 10 19:25:59 crc kubenswrapper[4861]: I0310 19:25:59.424083 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-w6r8m"] Mar 10 19:26:00 crc kubenswrapper[4861]: I0310 19:26:00.166605 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29552846-dspw8"] Mar 10 19:26:00 crc kubenswrapper[4861]: I0310 19:26:00.168636 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29552846-dspw8" Mar 10 19:26:00 crc kubenswrapper[4861]: I0310 19:26:00.174783 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-gfbj2" Mar 10 19:26:00 crc kubenswrapper[4861]: I0310 19:26:00.174861 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 10 19:26:00 crc kubenswrapper[4861]: I0310 19:26:00.175183 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 10 19:26:00 crc kubenswrapper[4861]: I0310 19:26:00.184547 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29552846-dspw8"] Mar 10 19:26:00 crc kubenswrapper[4861]: I0310 19:26:00.192217 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-txwrx" Mar 10 19:26:00 crc kubenswrapper[4861]: I0310 19:26:00.192753 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-txwrx" Mar 10 19:26:00 crc kubenswrapper[4861]: I0310 19:26:00.195943 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f4c4p\" (UniqueName: \"kubernetes.io/projected/942e4526-20e6-4232-849a-20a982e545ab-kube-api-access-f4c4p\") pod \"auto-csr-approver-29552846-dspw8\" (UID: \"942e4526-20e6-4232-849a-20a982e545ab\") " pod="openshift-infra/auto-csr-approver-29552846-dspw8" Mar 10 19:26:00 crc kubenswrapper[4861]: I0310 19:26:00.203894 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-w6r8m" podUID="6593768e-24f6-47c2-ba9f-6a200a05931a" containerName="registry-server" containerID="cri-o://aa03cfe21a859aebbf59ed4062822af3a01592490bda8b9ecc20d42045cb4eea" gracePeriod=2 Mar 10 19:26:00 crc 
kubenswrapper[4861]: I0310 19:26:00.262635 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-txwrx" Mar 10 19:26:00 crc kubenswrapper[4861]: I0310 19:26:00.298185 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f4c4p\" (UniqueName: \"kubernetes.io/projected/942e4526-20e6-4232-849a-20a982e545ab-kube-api-access-f4c4p\") pod \"auto-csr-approver-29552846-dspw8\" (UID: \"942e4526-20e6-4232-849a-20a982e545ab\") " pod="openshift-infra/auto-csr-approver-29552846-dspw8" Mar 10 19:26:00 crc kubenswrapper[4861]: I0310 19:26:00.327824 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f4c4p\" (UniqueName: \"kubernetes.io/projected/942e4526-20e6-4232-849a-20a982e545ab-kube-api-access-f4c4p\") pod \"auto-csr-approver-29552846-dspw8\" (UID: \"942e4526-20e6-4232-849a-20a982e545ab\") " pod="openshift-infra/auto-csr-approver-29552846-dspw8" Mar 10 19:26:00 crc kubenswrapper[4861]: I0310 19:26:00.518564 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29552846-dspw8" Mar 10 19:26:00 crc kubenswrapper[4861]: I0310 19:26:00.654247 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-w6r8m" Mar 10 19:26:00 crc kubenswrapper[4861]: I0310 19:26:00.772465 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29552846-dspw8"] Mar 10 19:26:00 crc kubenswrapper[4861]: I0310 19:26:00.808056 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gl8tz\" (UniqueName: \"kubernetes.io/projected/6593768e-24f6-47c2-ba9f-6a200a05931a-kube-api-access-gl8tz\") pod \"6593768e-24f6-47c2-ba9f-6a200a05931a\" (UID: \"6593768e-24f6-47c2-ba9f-6a200a05931a\") " Mar 10 19:26:00 crc kubenswrapper[4861]: I0310 19:26:00.808155 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6593768e-24f6-47c2-ba9f-6a200a05931a-utilities\") pod \"6593768e-24f6-47c2-ba9f-6a200a05931a\" (UID: \"6593768e-24f6-47c2-ba9f-6a200a05931a\") " Mar 10 19:26:00 crc kubenswrapper[4861]: I0310 19:26:00.809011 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6593768e-24f6-47c2-ba9f-6a200a05931a-utilities" (OuterVolumeSpecName: "utilities") pod "6593768e-24f6-47c2-ba9f-6a200a05931a" (UID: "6593768e-24f6-47c2-ba9f-6a200a05931a"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 10 19:26:00 crc kubenswrapper[4861]: I0310 19:26:00.809152 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6593768e-24f6-47c2-ba9f-6a200a05931a-catalog-content\") pod \"6593768e-24f6-47c2-ba9f-6a200a05931a\" (UID: \"6593768e-24f6-47c2-ba9f-6a200a05931a\") " Mar 10 19:26:00 crc kubenswrapper[4861]: I0310 19:26:00.809537 4861 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6593768e-24f6-47c2-ba9f-6a200a05931a-utilities\") on node \"crc\" DevicePath \"\"" Mar 10 19:26:00 crc kubenswrapper[4861]: I0310 19:26:00.816864 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6593768e-24f6-47c2-ba9f-6a200a05931a-kube-api-access-gl8tz" (OuterVolumeSpecName: "kube-api-access-gl8tz") pod "6593768e-24f6-47c2-ba9f-6a200a05931a" (UID: "6593768e-24f6-47c2-ba9f-6a200a05931a"). InnerVolumeSpecName "kube-api-access-gl8tz". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 10 19:26:00 crc kubenswrapper[4861]: I0310 19:26:00.834109 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6593768e-24f6-47c2-ba9f-6a200a05931a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6593768e-24f6-47c2-ba9f-6a200a05931a" (UID: "6593768e-24f6-47c2-ba9f-6a200a05931a"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 10 19:26:00 crc kubenswrapper[4861]: I0310 19:26:00.910433 4861 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6593768e-24f6-47c2-ba9f-6a200a05931a-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 10 19:26:00 crc kubenswrapper[4861]: I0310 19:26:00.910473 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gl8tz\" (UniqueName: \"kubernetes.io/projected/6593768e-24f6-47c2-ba9f-6a200a05931a-kube-api-access-gl8tz\") on node \"crc\" DevicePath \"\"" Mar 10 19:26:01 crc kubenswrapper[4861]: I0310 19:26:01.212079 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29552846-dspw8" event={"ID":"942e4526-20e6-4232-849a-20a982e545ab","Type":"ContainerStarted","Data":"8c5684841d8437e9ff1492a61a6789324eeb82f1007d1529b7211e0ad00124f0"} Mar 10 19:26:01 crc kubenswrapper[4861]: I0310 19:26:01.215156 4861 generic.go:334] "Generic (PLEG): container finished" podID="6593768e-24f6-47c2-ba9f-6a200a05931a" containerID="aa03cfe21a859aebbf59ed4062822af3a01592490bda8b9ecc20d42045cb4eea" exitCode=0 Mar 10 19:26:01 crc kubenswrapper[4861]: I0310 19:26:01.215191 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-w6r8m" event={"ID":"6593768e-24f6-47c2-ba9f-6a200a05931a","Type":"ContainerDied","Data":"aa03cfe21a859aebbf59ed4062822af3a01592490bda8b9ecc20d42045cb4eea"} Mar 10 19:26:01 crc kubenswrapper[4861]: I0310 19:26:01.215226 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-w6r8m" event={"ID":"6593768e-24f6-47c2-ba9f-6a200a05931a","Type":"ContainerDied","Data":"174995745fd7843be383f6b6f5899cbb81e37ec5debaf656dc44e5cb223dbf5a"} Mar 10 19:26:01 crc kubenswrapper[4861]: I0310 19:26:01.215250 4861 scope.go:117] "RemoveContainer" 
containerID="aa03cfe21a859aebbf59ed4062822af3a01592490bda8b9ecc20d42045cb4eea" Mar 10 19:26:01 crc kubenswrapper[4861]: I0310 19:26:01.215258 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-w6r8m" Mar 10 19:26:01 crc kubenswrapper[4861]: I0310 19:26:01.243654 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-w6r8m"] Mar 10 19:26:01 crc kubenswrapper[4861]: I0310 19:26:01.251289 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-w6r8m"] Mar 10 19:26:01 crc kubenswrapper[4861]: I0310 19:26:01.254159 4861 scope.go:117] "RemoveContainer" containerID="cbda4390185c7dd87c20da37e22b24286963f5c5de7fba7c71cff1f73ff41ab2" Mar 10 19:26:01 crc kubenswrapper[4861]: I0310 19:26:01.278420 4861 scope.go:117] "RemoveContainer" containerID="743c56707fb29fc7a03490b85cb9b43c3cb7147ea4f9e3d3407a426c98d34c39" Mar 10 19:26:01 crc kubenswrapper[4861]: I0310 19:26:01.308088 4861 scope.go:117] "RemoveContainer" containerID="aa03cfe21a859aebbf59ed4062822af3a01592490bda8b9ecc20d42045cb4eea" Mar 10 19:26:01 crc kubenswrapper[4861]: I0310 19:26:01.308364 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-txwrx" Mar 10 19:26:01 crc kubenswrapper[4861]: E0310 19:26:01.308423 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"aa03cfe21a859aebbf59ed4062822af3a01592490bda8b9ecc20d42045cb4eea\": container with ID starting with aa03cfe21a859aebbf59ed4062822af3a01592490bda8b9ecc20d42045cb4eea not found: ID does not exist" containerID="aa03cfe21a859aebbf59ed4062822af3a01592490bda8b9ecc20d42045cb4eea" Mar 10 19:26:01 crc kubenswrapper[4861]: I0310 19:26:01.308465 4861 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"aa03cfe21a859aebbf59ed4062822af3a01592490bda8b9ecc20d42045cb4eea"} err="failed to get container status \"aa03cfe21a859aebbf59ed4062822af3a01592490bda8b9ecc20d42045cb4eea\": rpc error: code = NotFound desc = could not find container \"aa03cfe21a859aebbf59ed4062822af3a01592490bda8b9ecc20d42045cb4eea\": container with ID starting with aa03cfe21a859aebbf59ed4062822af3a01592490bda8b9ecc20d42045cb4eea not found: ID does not exist" Mar 10 19:26:01 crc kubenswrapper[4861]: I0310 19:26:01.308494 4861 scope.go:117] "RemoveContainer" containerID="cbda4390185c7dd87c20da37e22b24286963f5c5de7fba7c71cff1f73ff41ab2" Mar 10 19:26:01 crc kubenswrapper[4861]: E0310 19:26:01.308813 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cbda4390185c7dd87c20da37e22b24286963f5c5de7fba7c71cff1f73ff41ab2\": container with ID starting with cbda4390185c7dd87c20da37e22b24286963f5c5de7fba7c71cff1f73ff41ab2 not found: ID does not exist" containerID="cbda4390185c7dd87c20da37e22b24286963f5c5de7fba7c71cff1f73ff41ab2" Mar 10 19:26:01 crc kubenswrapper[4861]: I0310 19:26:01.308841 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cbda4390185c7dd87c20da37e22b24286963f5c5de7fba7c71cff1f73ff41ab2"} err="failed to get container status \"cbda4390185c7dd87c20da37e22b24286963f5c5de7fba7c71cff1f73ff41ab2\": rpc error: code = NotFound desc = could not find container \"cbda4390185c7dd87c20da37e22b24286963f5c5de7fba7c71cff1f73ff41ab2\": container with ID starting with cbda4390185c7dd87c20da37e22b24286963f5c5de7fba7c71cff1f73ff41ab2 not found: ID does not exist" Mar 10 19:26:01 crc kubenswrapper[4861]: I0310 19:26:01.308859 4861 scope.go:117] "RemoveContainer" containerID="743c56707fb29fc7a03490b85cb9b43c3cb7147ea4f9e3d3407a426c98d34c39" Mar 10 19:26:01 crc kubenswrapper[4861]: E0310 19:26:01.310006 4861 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"743c56707fb29fc7a03490b85cb9b43c3cb7147ea4f9e3d3407a426c98d34c39\": container with ID starting with 743c56707fb29fc7a03490b85cb9b43c3cb7147ea4f9e3d3407a426c98d34c39 not found: ID does not exist" containerID="743c56707fb29fc7a03490b85cb9b43c3cb7147ea4f9e3d3407a426c98d34c39" Mar 10 19:26:01 crc kubenswrapper[4861]: I0310 19:26:01.310040 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"743c56707fb29fc7a03490b85cb9b43c3cb7147ea4f9e3d3407a426c98d34c39"} err="failed to get container status \"743c56707fb29fc7a03490b85cb9b43c3cb7147ea4f9e3d3407a426c98d34c39\": rpc error: code = NotFound desc = could not find container \"743c56707fb29fc7a03490b85cb9b43c3cb7147ea4f9e3d3407a426c98d34c39\": container with ID starting with 743c56707fb29fc7a03490b85cb9b43c3cb7147ea4f9e3d3407a426c98d34c39 not found: ID does not exist" Mar 10 19:26:02 crc kubenswrapper[4861]: I0310 19:26:02.229969 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29552846-dspw8" event={"ID":"942e4526-20e6-4232-849a-20a982e545ab","Type":"ContainerStarted","Data":"a006e0f5eff4008c7463e9b545eb4b2273ed323728102b2a34233eb444ee26bb"} Mar 10 19:26:02 crc kubenswrapper[4861]: I0310 19:26:02.263836 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29552846-dspw8" podStartSLOduration=1.326323111 podStartE2EDuration="2.263799285s" podCreationTimestamp="2026-03-10 19:26:00 +0000 UTC" firstStartedPulling="2026-03-10 19:26:00.778715912 +0000 UTC m=+2304.542151872" lastFinishedPulling="2026-03-10 19:26:01.716192056 +0000 UTC m=+2305.479628046" observedRunningTime="2026-03-10 19:26:02.252052607 +0000 UTC m=+2306.015488607" watchObservedRunningTime="2026-03-10 19:26:02.263799285 +0000 UTC m=+2306.027235285" Mar 10 19:26:02 crc kubenswrapper[4861]: I0310 19:26:02.620479 4861 kubelet.go:2437] "SyncLoop 
DELETE" source="api" pods=["openshift-marketplace/certified-operators-txwrx"] Mar 10 19:26:02 crc kubenswrapper[4861]: I0310 19:26:02.976049 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6593768e-24f6-47c2-ba9f-6a200a05931a" path="/var/lib/kubelet/pods/6593768e-24f6-47c2-ba9f-6a200a05931a/volumes" Mar 10 19:26:03 crc kubenswrapper[4861]: I0310 19:26:03.242645 4861 generic.go:334] "Generic (PLEG): container finished" podID="942e4526-20e6-4232-849a-20a982e545ab" containerID="a006e0f5eff4008c7463e9b545eb4b2273ed323728102b2a34233eb444ee26bb" exitCode=0 Mar 10 19:26:03 crc kubenswrapper[4861]: I0310 19:26:03.242797 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29552846-dspw8" event={"ID":"942e4526-20e6-4232-849a-20a982e545ab","Type":"ContainerDied","Data":"a006e0f5eff4008c7463e9b545eb4b2273ed323728102b2a34233eb444ee26bb"} Mar 10 19:26:03 crc kubenswrapper[4861]: I0310 19:26:03.243973 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-txwrx" podUID="cc3075bb-f853-4253-b49b-c98b2ddf55e2" containerName="registry-server" containerID="cri-o://dd22f60080f5ea7ee6457ac79d38c05503954e129cf89fa8d56653c7a9c04fc7" gracePeriod=2 Mar 10 19:26:03 crc kubenswrapper[4861]: I0310 19:26:03.792974 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-txwrx" Mar 10 19:26:03 crc kubenswrapper[4861]: I0310 19:26:03.857904 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mfp2g\" (UniqueName: \"kubernetes.io/projected/cc3075bb-f853-4253-b49b-c98b2ddf55e2-kube-api-access-mfp2g\") pod \"cc3075bb-f853-4253-b49b-c98b2ddf55e2\" (UID: \"cc3075bb-f853-4253-b49b-c98b2ddf55e2\") " Mar 10 19:26:03 crc kubenswrapper[4861]: I0310 19:26:03.858198 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cc3075bb-f853-4253-b49b-c98b2ddf55e2-utilities\") pod \"cc3075bb-f853-4253-b49b-c98b2ddf55e2\" (UID: \"cc3075bb-f853-4253-b49b-c98b2ddf55e2\") " Mar 10 19:26:03 crc kubenswrapper[4861]: I0310 19:26:03.858248 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cc3075bb-f853-4253-b49b-c98b2ddf55e2-catalog-content\") pod \"cc3075bb-f853-4253-b49b-c98b2ddf55e2\" (UID: \"cc3075bb-f853-4253-b49b-c98b2ddf55e2\") " Mar 10 19:26:03 crc kubenswrapper[4861]: I0310 19:26:03.859031 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cc3075bb-f853-4253-b49b-c98b2ddf55e2-utilities" (OuterVolumeSpecName: "utilities") pod "cc3075bb-f853-4253-b49b-c98b2ddf55e2" (UID: "cc3075bb-f853-4253-b49b-c98b2ddf55e2"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 10 19:26:03 crc kubenswrapper[4861]: I0310 19:26:03.867289 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cc3075bb-f853-4253-b49b-c98b2ddf55e2-kube-api-access-mfp2g" (OuterVolumeSpecName: "kube-api-access-mfp2g") pod "cc3075bb-f853-4253-b49b-c98b2ddf55e2" (UID: "cc3075bb-f853-4253-b49b-c98b2ddf55e2"). InnerVolumeSpecName "kube-api-access-mfp2g". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 10 19:26:03 crc kubenswrapper[4861]: I0310 19:26:03.936905 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cc3075bb-f853-4253-b49b-c98b2ddf55e2-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "cc3075bb-f853-4253-b49b-c98b2ddf55e2" (UID: "cc3075bb-f853-4253-b49b-c98b2ddf55e2"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 10 19:26:03 crc kubenswrapper[4861]: I0310 19:26:03.959982 4861 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cc3075bb-f853-4253-b49b-c98b2ddf55e2-utilities\") on node \"crc\" DevicePath \"\"" Mar 10 19:26:03 crc kubenswrapper[4861]: I0310 19:26:03.960059 4861 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cc3075bb-f853-4253-b49b-c98b2ddf55e2-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 10 19:26:03 crc kubenswrapper[4861]: I0310 19:26:03.960081 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mfp2g\" (UniqueName: \"kubernetes.io/projected/cc3075bb-f853-4253-b49b-c98b2ddf55e2-kube-api-access-mfp2g\") on node \"crc\" DevicePath \"\"" Mar 10 19:26:04 crc kubenswrapper[4861]: I0310 19:26:04.256015 4861 generic.go:334] "Generic (PLEG): container finished" podID="cc3075bb-f853-4253-b49b-c98b2ddf55e2" containerID="dd22f60080f5ea7ee6457ac79d38c05503954e129cf89fa8d56653c7a9c04fc7" exitCode=0 Mar 10 19:26:04 crc kubenswrapper[4861]: I0310 19:26:04.256118 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-txwrx" Mar 10 19:26:04 crc kubenswrapper[4861]: I0310 19:26:04.256151 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-txwrx" event={"ID":"cc3075bb-f853-4253-b49b-c98b2ddf55e2","Type":"ContainerDied","Data":"dd22f60080f5ea7ee6457ac79d38c05503954e129cf89fa8d56653c7a9c04fc7"} Mar 10 19:26:04 crc kubenswrapper[4861]: I0310 19:26:04.256245 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-txwrx" event={"ID":"cc3075bb-f853-4253-b49b-c98b2ddf55e2","Type":"ContainerDied","Data":"8a1e37624c628ac8d458d1d08aa5ac1232534433b1f2dbe565b37585f8ff6bd2"} Mar 10 19:26:04 crc kubenswrapper[4861]: I0310 19:26:04.256277 4861 scope.go:117] "RemoveContainer" containerID="dd22f60080f5ea7ee6457ac79d38c05503954e129cf89fa8d56653c7a9c04fc7" Mar 10 19:26:04 crc kubenswrapper[4861]: I0310 19:26:04.291169 4861 scope.go:117] "RemoveContainer" containerID="7cb9f154fb02c4196488a080463946091bebdfba19a312549a4d80d0acce0f39" Mar 10 19:26:04 crc kubenswrapper[4861]: I0310 19:26:04.316796 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-txwrx"] Mar 10 19:26:04 crc kubenswrapper[4861]: I0310 19:26:04.325901 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-txwrx"] Mar 10 19:26:04 crc kubenswrapper[4861]: I0310 19:26:04.342644 4861 scope.go:117] "RemoveContainer" containerID="2ee03bcfbb5e4a21d8b2cb6be5363b405ae693c51c80f0022e63336d94aebfaf" Mar 10 19:26:04 crc kubenswrapper[4861]: I0310 19:26:04.375335 4861 scope.go:117] "RemoveContainer" containerID="dd22f60080f5ea7ee6457ac79d38c05503954e129cf89fa8d56653c7a9c04fc7" Mar 10 19:26:04 crc kubenswrapper[4861]: E0310 19:26:04.375908 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"dd22f60080f5ea7ee6457ac79d38c05503954e129cf89fa8d56653c7a9c04fc7\": container with ID starting with dd22f60080f5ea7ee6457ac79d38c05503954e129cf89fa8d56653c7a9c04fc7 not found: ID does not exist" containerID="dd22f60080f5ea7ee6457ac79d38c05503954e129cf89fa8d56653c7a9c04fc7" Mar 10 19:26:04 crc kubenswrapper[4861]: I0310 19:26:04.375966 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dd22f60080f5ea7ee6457ac79d38c05503954e129cf89fa8d56653c7a9c04fc7"} err="failed to get container status \"dd22f60080f5ea7ee6457ac79d38c05503954e129cf89fa8d56653c7a9c04fc7\": rpc error: code = NotFound desc = could not find container \"dd22f60080f5ea7ee6457ac79d38c05503954e129cf89fa8d56653c7a9c04fc7\": container with ID starting with dd22f60080f5ea7ee6457ac79d38c05503954e129cf89fa8d56653c7a9c04fc7 not found: ID does not exist" Mar 10 19:26:04 crc kubenswrapper[4861]: I0310 19:26:04.375997 4861 scope.go:117] "RemoveContainer" containerID="7cb9f154fb02c4196488a080463946091bebdfba19a312549a4d80d0acce0f39" Mar 10 19:26:04 crc kubenswrapper[4861]: E0310 19:26:04.376498 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7cb9f154fb02c4196488a080463946091bebdfba19a312549a4d80d0acce0f39\": container with ID starting with 7cb9f154fb02c4196488a080463946091bebdfba19a312549a4d80d0acce0f39 not found: ID does not exist" containerID="7cb9f154fb02c4196488a080463946091bebdfba19a312549a4d80d0acce0f39" Mar 10 19:26:04 crc kubenswrapper[4861]: I0310 19:26:04.376569 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7cb9f154fb02c4196488a080463946091bebdfba19a312549a4d80d0acce0f39"} err="failed to get container status \"7cb9f154fb02c4196488a080463946091bebdfba19a312549a4d80d0acce0f39\": rpc error: code = NotFound desc = could not find container \"7cb9f154fb02c4196488a080463946091bebdfba19a312549a4d80d0acce0f39\": container with ID 
starting with 7cb9f154fb02c4196488a080463946091bebdfba19a312549a4d80d0acce0f39 not found: ID does not exist" Mar 10 19:26:04 crc kubenswrapper[4861]: I0310 19:26:04.376612 4861 scope.go:117] "RemoveContainer" containerID="2ee03bcfbb5e4a21d8b2cb6be5363b405ae693c51c80f0022e63336d94aebfaf" Mar 10 19:26:04 crc kubenswrapper[4861]: E0310 19:26:04.377032 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2ee03bcfbb5e4a21d8b2cb6be5363b405ae693c51c80f0022e63336d94aebfaf\": container with ID starting with 2ee03bcfbb5e4a21d8b2cb6be5363b405ae693c51c80f0022e63336d94aebfaf not found: ID does not exist" containerID="2ee03bcfbb5e4a21d8b2cb6be5363b405ae693c51c80f0022e63336d94aebfaf" Mar 10 19:26:04 crc kubenswrapper[4861]: I0310 19:26:04.377082 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2ee03bcfbb5e4a21d8b2cb6be5363b405ae693c51c80f0022e63336d94aebfaf"} err="failed to get container status \"2ee03bcfbb5e4a21d8b2cb6be5363b405ae693c51c80f0022e63336d94aebfaf\": rpc error: code = NotFound desc = could not find container \"2ee03bcfbb5e4a21d8b2cb6be5363b405ae693c51c80f0022e63336d94aebfaf\": container with ID starting with 2ee03bcfbb5e4a21d8b2cb6be5363b405ae693c51c80f0022e63336d94aebfaf not found: ID does not exist" Mar 10 19:26:04 crc kubenswrapper[4861]: I0310 19:26:04.671533 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29552846-dspw8" Mar 10 19:26:04 crc kubenswrapper[4861]: I0310 19:26:04.773740 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f4c4p\" (UniqueName: \"kubernetes.io/projected/942e4526-20e6-4232-849a-20a982e545ab-kube-api-access-f4c4p\") pod \"942e4526-20e6-4232-849a-20a982e545ab\" (UID: \"942e4526-20e6-4232-849a-20a982e545ab\") " Mar 10 19:26:04 crc kubenswrapper[4861]: I0310 19:26:04.781250 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/942e4526-20e6-4232-849a-20a982e545ab-kube-api-access-f4c4p" (OuterVolumeSpecName: "kube-api-access-f4c4p") pod "942e4526-20e6-4232-849a-20a982e545ab" (UID: "942e4526-20e6-4232-849a-20a982e545ab"). InnerVolumeSpecName "kube-api-access-f4c4p". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 10 19:26:04 crc kubenswrapper[4861]: I0310 19:26:04.876045 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f4c4p\" (UniqueName: \"kubernetes.io/projected/942e4526-20e6-4232-849a-20a982e545ab-kube-api-access-f4c4p\") on node \"crc\" DevicePath \"\"" Mar 10 19:26:04 crc kubenswrapper[4861]: I0310 19:26:04.976189 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cc3075bb-f853-4253-b49b-c98b2ddf55e2" path="/var/lib/kubelet/pods/cc3075bb-f853-4253-b49b-c98b2ddf55e2/volumes" Mar 10 19:26:05 crc kubenswrapper[4861]: I0310 19:26:05.276794 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29552846-dspw8" event={"ID":"942e4526-20e6-4232-849a-20a982e545ab","Type":"ContainerDied","Data":"8c5684841d8437e9ff1492a61a6789324eeb82f1007d1529b7211e0ad00124f0"} Mar 10 19:26:05 crc kubenswrapper[4861]: I0310 19:26:05.276856 4861 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8c5684841d8437e9ff1492a61a6789324eeb82f1007d1529b7211e0ad00124f0" Mar 10 19:26:05 
crc kubenswrapper[4861]: I0310 19:26:05.276925 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29552846-dspw8" Mar 10 19:26:05 crc kubenswrapper[4861]: I0310 19:26:05.360391 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29552840-lrsfm"] Mar 10 19:26:05 crc kubenswrapper[4861]: I0310 19:26:05.371596 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29552840-lrsfm"] Mar 10 19:26:06 crc kubenswrapper[4861]: I0310 19:26:06.973614 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b5e8bdf-e700-4d70-9054-f91863ac0eae" path="/var/lib/kubelet/pods/0b5e8bdf-e700-4d70-9054-f91863ac0eae/volumes" Mar 10 19:26:53 crc kubenswrapper[4861]: I0310 19:26:53.788510 4861 scope.go:117] "RemoveContainer" containerID="664ca4596dba71c5845a9cb615748f525990fb04d5d37fc0149f009f2fcbacd8" Mar 10 19:28:00 crc kubenswrapper[4861]: I0310 19:28:00.157761 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29552848-2hj89"] Mar 10 19:28:00 crc kubenswrapper[4861]: E0310 19:28:00.159039 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6593768e-24f6-47c2-ba9f-6a200a05931a" containerName="extract-utilities" Mar 10 19:28:00 crc kubenswrapper[4861]: I0310 19:28:00.159064 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="6593768e-24f6-47c2-ba9f-6a200a05931a" containerName="extract-utilities" Mar 10 19:28:00 crc kubenswrapper[4861]: E0310 19:28:00.159091 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6593768e-24f6-47c2-ba9f-6a200a05931a" containerName="registry-server" Mar 10 19:28:00 crc kubenswrapper[4861]: I0310 19:28:00.159104 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="6593768e-24f6-47c2-ba9f-6a200a05931a" containerName="registry-server" Mar 10 19:28:00 crc kubenswrapper[4861]: E0310 19:28:00.159130 4861 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cc3075bb-f853-4253-b49b-c98b2ddf55e2" containerName="extract-utilities" Mar 10 19:28:00 crc kubenswrapper[4861]: I0310 19:28:00.159143 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="cc3075bb-f853-4253-b49b-c98b2ddf55e2" containerName="extract-utilities" Mar 10 19:28:00 crc kubenswrapper[4861]: E0310 19:28:00.159163 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cc3075bb-f853-4253-b49b-c98b2ddf55e2" containerName="registry-server" Mar 10 19:28:00 crc kubenswrapper[4861]: I0310 19:28:00.159175 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="cc3075bb-f853-4253-b49b-c98b2ddf55e2" containerName="registry-server" Mar 10 19:28:00 crc kubenswrapper[4861]: E0310 19:28:00.159330 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6593768e-24f6-47c2-ba9f-6a200a05931a" containerName="extract-content" Mar 10 19:28:00 crc kubenswrapper[4861]: I0310 19:28:00.159346 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="6593768e-24f6-47c2-ba9f-6a200a05931a" containerName="extract-content" Mar 10 19:28:00 crc kubenswrapper[4861]: E0310 19:28:00.159365 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cc3075bb-f853-4253-b49b-c98b2ddf55e2" containerName="extract-content" Mar 10 19:28:00 crc kubenswrapper[4861]: I0310 19:28:00.159377 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="cc3075bb-f853-4253-b49b-c98b2ddf55e2" containerName="extract-content" Mar 10 19:28:00 crc kubenswrapper[4861]: E0310 19:28:00.159409 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="942e4526-20e6-4232-849a-20a982e545ab" containerName="oc" Mar 10 19:28:00 crc kubenswrapper[4861]: I0310 19:28:00.159422 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="942e4526-20e6-4232-849a-20a982e545ab" containerName="oc" Mar 10 19:28:00 crc kubenswrapper[4861]: I0310 19:28:00.159664 4861 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="6593768e-24f6-47c2-ba9f-6a200a05931a" containerName="registry-server" Mar 10 19:28:00 crc kubenswrapper[4861]: I0310 19:28:00.159687 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="942e4526-20e6-4232-849a-20a982e545ab" containerName="oc" Mar 10 19:28:00 crc kubenswrapper[4861]: I0310 19:28:00.159748 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="cc3075bb-f853-4253-b49b-c98b2ddf55e2" containerName="registry-server" Mar 10 19:28:00 crc kubenswrapper[4861]: I0310 19:28:00.160448 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29552848-2hj89" Mar 10 19:28:00 crc kubenswrapper[4861]: I0310 19:28:00.163037 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-gfbj2" Mar 10 19:28:00 crc kubenswrapper[4861]: I0310 19:28:00.163418 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 10 19:28:00 crc kubenswrapper[4861]: I0310 19:28:00.163572 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 10 19:28:00 crc kubenswrapper[4861]: I0310 19:28:00.176781 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29552848-2hj89"] Mar 10 19:28:00 crc kubenswrapper[4861]: I0310 19:28:00.301862 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c9phd\" (UniqueName: \"kubernetes.io/projected/6f01252a-e9d2-4f92-a489-60509ea2cbfb-kube-api-access-c9phd\") pod \"auto-csr-approver-29552848-2hj89\" (UID: \"6f01252a-e9d2-4f92-a489-60509ea2cbfb\") " pod="openshift-infra/auto-csr-approver-29552848-2hj89" Mar 10 19:28:00 crc kubenswrapper[4861]: I0310 19:28:00.403251 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-c9phd\" (UniqueName: \"kubernetes.io/projected/6f01252a-e9d2-4f92-a489-60509ea2cbfb-kube-api-access-c9phd\") pod \"auto-csr-approver-29552848-2hj89\" (UID: \"6f01252a-e9d2-4f92-a489-60509ea2cbfb\") " pod="openshift-infra/auto-csr-approver-29552848-2hj89" Mar 10 19:28:00 crc kubenswrapper[4861]: I0310 19:28:00.436275 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c9phd\" (UniqueName: \"kubernetes.io/projected/6f01252a-e9d2-4f92-a489-60509ea2cbfb-kube-api-access-c9phd\") pod \"auto-csr-approver-29552848-2hj89\" (UID: \"6f01252a-e9d2-4f92-a489-60509ea2cbfb\") " pod="openshift-infra/auto-csr-approver-29552848-2hj89" Mar 10 19:28:00 crc kubenswrapper[4861]: I0310 19:28:00.525029 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29552848-2hj89" Mar 10 19:28:01 crc kubenswrapper[4861]: I0310 19:28:01.069208 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29552848-2hj89"] Mar 10 19:28:01 crc kubenswrapper[4861]: W0310 19:28:01.083982 4861 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6f01252a_e9d2_4f92_a489_60509ea2cbfb.slice/crio-c55c9f933daef5958ce3fcac3c7f226894d2df71a80f56ba6822af1da42f4d89 WatchSource:0}: Error finding container c55c9f933daef5958ce3fcac3c7f226894d2df71a80f56ba6822af1da42f4d89: Status 404 returned error can't find the container with id c55c9f933daef5958ce3fcac3c7f226894d2df71a80f56ba6822af1da42f4d89 Mar 10 19:28:01 crc kubenswrapper[4861]: I0310 19:28:01.794788 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29552848-2hj89" event={"ID":"6f01252a-e9d2-4f92-a489-60509ea2cbfb","Type":"ContainerStarted","Data":"c55c9f933daef5958ce3fcac3c7f226894d2df71a80f56ba6822af1da42f4d89"} Mar 10 19:28:02 crc kubenswrapper[4861]: I0310 19:28:02.805320 4861 generic.go:334] 
"Generic (PLEG): container finished" podID="6f01252a-e9d2-4f92-a489-60509ea2cbfb" containerID="dd72b4a742e08b86d86d2e0126a7f7e2f1d25cd5be1132983444e2033feb53df" exitCode=0 Mar 10 19:28:02 crc kubenswrapper[4861]: I0310 19:28:02.805374 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29552848-2hj89" event={"ID":"6f01252a-e9d2-4f92-a489-60509ea2cbfb","Type":"ContainerDied","Data":"dd72b4a742e08b86d86d2e0126a7f7e2f1d25cd5be1132983444e2033feb53df"} Mar 10 19:28:04 crc kubenswrapper[4861]: I0310 19:28:04.228382 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29552848-2hj89" Mar 10 19:28:04 crc kubenswrapper[4861]: I0310 19:28:04.365182 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c9phd\" (UniqueName: \"kubernetes.io/projected/6f01252a-e9d2-4f92-a489-60509ea2cbfb-kube-api-access-c9phd\") pod \"6f01252a-e9d2-4f92-a489-60509ea2cbfb\" (UID: \"6f01252a-e9d2-4f92-a489-60509ea2cbfb\") " Mar 10 19:28:04 crc kubenswrapper[4861]: I0310 19:28:04.376097 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6f01252a-e9d2-4f92-a489-60509ea2cbfb-kube-api-access-c9phd" (OuterVolumeSpecName: "kube-api-access-c9phd") pod "6f01252a-e9d2-4f92-a489-60509ea2cbfb" (UID: "6f01252a-e9d2-4f92-a489-60509ea2cbfb"). InnerVolumeSpecName "kube-api-access-c9phd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 10 19:28:04 crc kubenswrapper[4861]: I0310 19:28:04.467880 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c9phd\" (UniqueName: \"kubernetes.io/projected/6f01252a-e9d2-4f92-a489-60509ea2cbfb-kube-api-access-c9phd\") on node \"crc\" DevicePath \"\"" Mar 10 19:28:04 crc kubenswrapper[4861]: I0310 19:28:04.827184 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29552848-2hj89" event={"ID":"6f01252a-e9d2-4f92-a489-60509ea2cbfb","Type":"ContainerDied","Data":"c55c9f933daef5958ce3fcac3c7f226894d2df71a80f56ba6822af1da42f4d89"} Mar 10 19:28:04 crc kubenswrapper[4861]: I0310 19:28:04.827247 4861 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c55c9f933daef5958ce3fcac3c7f226894d2df71a80f56ba6822af1da42f4d89" Mar 10 19:28:04 crc kubenswrapper[4861]: I0310 19:28:04.827330 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29552848-2hj89" Mar 10 19:28:05 crc kubenswrapper[4861]: I0310 19:28:05.322552 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29552842-hsc2l"] Mar 10 19:28:05 crc kubenswrapper[4861]: I0310 19:28:05.331753 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29552842-hsc2l"] Mar 10 19:28:06 crc kubenswrapper[4861]: I0310 19:28:06.974604 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f3849f7d-1618-43e8-b5c8-903e96bece99" path="/var/lib/kubelet/pods/f3849f7d-1618-43e8-b5c8-903e96bece99/volumes" Mar 10 19:28:21 crc kubenswrapper[4861]: I0310 19:28:21.991975 4861 patch_prober.go:28] interesting pod/machine-config-daemon-qttbr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: 
connection refused" start-of-body= Mar 10 19:28:21 crc kubenswrapper[4861]: I0310 19:28:21.992746 4861 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qttbr" podUID="771189c2-452d-4204-a0b7-abfe9ba62bd0" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 10 19:28:51 crc kubenswrapper[4861]: I0310 19:28:51.991877 4861 patch_prober.go:28] interesting pod/machine-config-daemon-qttbr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 10 19:28:51 crc kubenswrapper[4861]: I0310 19:28:51.992762 4861 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qttbr" podUID="771189c2-452d-4204-a0b7-abfe9ba62bd0" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 10 19:28:53 crc kubenswrapper[4861]: I0310 19:28:53.951086 4861 scope.go:117] "RemoveContainer" containerID="572bc5c02f5f758406e7180080fa4e3d6ed8f8aa6c3631fe7b07b80d8c588238" Mar 10 19:29:21 crc kubenswrapper[4861]: I0310 19:29:21.992410 4861 patch_prober.go:28] interesting pod/machine-config-daemon-qttbr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 10 19:29:21 crc kubenswrapper[4861]: I0310 19:29:21.993008 4861 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qttbr" podUID="771189c2-452d-4204-a0b7-abfe9ba62bd0" containerName="machine-config-daemon" 
probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 10 19:29:21 crc kubenswrapper[4861]: I0310 19:29:21.993067 4861 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-qttbr" Mar 10 19:29:21 crc kubenswrapper[4861]: I0310 19:29:21.993940 4861 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"fd0a159c2c146400d4580f41b7ac863e4f7fe4e1d0fac7f0b219dc9bba5fbf48"} pod="openshift-machine-config-operator/machine-config-daemon-qttbr" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 10 19:29:21 crc kubenswrapper[4861]: I0310 19:29:21.994023 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-qttbr" podUID="771189c2-452d-4204-a0b7-abfe9ba62bd0" containerName="machine-config-daemon" containerID="cri-o://fd0a159c2c146400d4580f41b7ac863e4f7fe4e1d0fac7f0b219dc9bba5fbf48" gracePeriod=600 Mar 10 19:29:22 crc kubenswrapper[4861]: E0310 19:29:22.153357 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qttbr_openshift-machine-config-operator(771189c2-452d-4204-a0b7-abfe9ba62bd0)\"" pod="openshift-machine-config-operator/machine-config-daemon-qttbr" podUID="771189c2-452d-4204-a0b7-abfe9ba62bd0" Mar 10 19:29:22 crc kubenswrapper[4861]: I0310 19:29:22.711986 4861 generic.go:334] "Generic (PLEG): container finished" podID="771189c2-452d-4204-a0b7-abfe9ba62bd0" containerID="fd0a159c2c146400d4580f41b7ac863e4f7fe4e1d0fac7f0b219dc9bba5fbf48" exitCode=0 Mar 10 19:29:22 crc kubenswrapper[4861]: I0310 19:29:22.712130 4861 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qttbr" event={"ID":"771189c2-452d-4204-a0b7-abfe9ba62bd0","Type":"ContainerDied","Data":"fd0a159c2c146400d4580f41b7ac863e4f7fe4e1d0fac7f0b219dc9bba5fbf48"} Mar 10 19:29:22 crc kubenswrapper[4861]: I0310 19:29:22.712372 4861 scope.go:117] "RemoveContainer" containerID="9e8ec457e3a6bb6b7db7ecb4612b2a3c3581fafcd471645016dbd898fb33f4d9" Mar 10 19:29:22 crc kubenswrapper[4861]: I0310 19:29:22.713143 4861 scope.go:117] "RemoveContainer" containerID="fd0a159c2c146400d4580f41b7ac863e4f7fe4e1d0fac7f0b219dc9bba5fbf48" Mar 10 19:29:22 crc kubenswrapper[4861]: E0310 19:29:22.713489 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qttbr_openshift-machine-config-operator(771189c2-452d-4204-a0b7-abfe9ba62bd0)\"" pod="openshift-machine-config-operator/machine-config-daemon-qttbr" podUID="771189c2-452d-4204-a0b7-abfe9ba62bd0" Mar 10 19:29:33 crc kubenswrapper[4861]: I0310 19:29:33.958676 4861 scope.go:117] "RemoveContainer" containerID="fd0a159c2c146400d4580f41b7ac863e4f7fe4e1d0fac7f0b219dc9bba5fbf48" Mar 10 19:29:33 crc kubenswrapper[4861]: E0310 19:29:33.959452 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qttbr_openshift-machine-config-operator(771189c2-452d-4204-a0b7-abfe9ba62bd0)\"" pod="openshift-machine-config-operator/machine-config-daemon-qttbr" podUID="771189c2-452d-4204-a0b7-abfe9ba62bd0" Mar 10 19:29:47 crc kubenswrapper[4861]: I0310 19:29:47.958513 4861 scope.go:117] "RemoveContainer" containerID="fd0a159c2c146400d4580f41b7ac863e4f7fe4e1d0fac7f0b219dc9bba5fbf48" Mar 10 19:29:47 crc kubenswrapper[4861]: E0310 
19:29:47.959535 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qttbr_openshift-machine-config-operator(771189c2-452d-4204-a0b7-abfe9ba62bd0)\"" pod="openshift-machine-config-operator/machine-config-daemon-qttbr" podUID="771189c2-452d-4204-a0b7-abfe9ba62bd0" Mar 10 19:29:57 crc kubenswrapper[4861]: I0310 19:29:57.867871 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-8ff86"] Mar 10 19:29:57 crc kubenswrapper[4861]: E0310 19:29:57.869313 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6f01252a-e9d2-4f92-a489-60509ea2cbfb" containerName="oc" Mar 10 19:29:57 crc kubenswrapper[4861]: I0310 19:29:57.869344 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="6f01252a-e9d2-4f92-a489-60509ea2cbfb" containerName="oc" Mar 10 19:29:57 crc kubenswrapper[4861]: I0310 19:29:57.869612 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="6f01252a-e9d2-4f92-a489-60509ea2cbfb" containerName="oc" Mar 10 19:29:57 crc kubenswrapper[4861]: I0310 19:29:57.871406 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-8ff86" Mar 10 19:29:57 crc kubenswrapper[4861]: I0310 19:29:57.885153 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-8ff86"] Mar 10 19:29:57 crc kubenswrapper[4861]: I0310 19:29:57.904371 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/74905e72-7705-4b35-94da-8b57687ddd91-catalog-content\") pod \"community-operators-8ff86\" (UID: \"74905e72-7705-4b35-94da-8b57687ddd91\") " pod="openshift-marketplace/community-operators-8ff86" Mar 10 19:29:57 crc kubenswrapper[4861]: I0310 19:29:57.904576 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/74905e72-7705-4b35-94da-8b57687ddd91-utilities\") pod \"community-operators-8ff86\" (UID: \"74905e72-7705-4b35-94da-8b57687ddd91\") " pod="openshift-marketplace/community-operators-8ff86" Mar 10 19:29:57 crc kubenswrapper[4861]: I0310 19:29:57.904634 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lnd8r\" (UniqueName: \"kubernetes.io/projected/74905e72-7705-4b35-94da-8b57687ddd91-kube-api-access-lnd8r\") pod \"community-operators-8ff86\" (UID: \"74905e72-7705-4b35-94da-8b57687ddd91\") " pod="openshift-marketplace/community-operators-8ff86" Mar 10 19:29:58 crc kubenswrapper[4861]: I0310 19:29:58.006254 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/74905e72-7705-4b35-94da-8b57687ddd91-utilities\") pod \"community-operators-8ff86\" (UID: \"74905e72-7705-4b35-94da-8b57687ddd91\") " pod="openshift-marketplace/community-operators-8ff86" Mar 10 19:29:58 crc kubenswrapper[4861]: I0310 19:29:58.006332 4861 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-lnd8r\" (UniqueName: \"kubernetes.io/projected/74905e72-7705-4b35-94da-8b57687ddd91-kube-api-access-lnd8r\") pod \"community-operators-8ff86\" (UID: \"74905e72-7705-4b35-94da-8b57687ddd91\") " pod="openshift-marketplace/community-operators-8ff86" Mar 10 19:29:58 crc kubenswrapper[4861]: I0310 19:29:58.006472 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/74905e72-7705-4b35-94da-8b57687ddd91-catalog-content\") pod \"community-operators-8ff86\" (UID: \"74905e72-7705-4b35-94da-8b57687ddd91\") " pod="openshift-marketplace/community-operators-8ff86" Mar 10 19:29:58 crc kubenswrapper[4861]: I0310 19:29:58.007158 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/74905e72-7705-4b35-94da-8b57687ddd91-utilities\") pod \"community-operators-8ff86\" (UID: \"74905e72-7705-4b35-94da-8b57687ddd91\") " pod="openshift-marketplace/community-operators-8ff86" Mar 10 19:29:58 crc kubenswrapper[4861]: I0310 19:29:58.007278 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/74905e72-7705-4b35-94da-8b57687ddd91-catalog-content\") pod \"community-operators-8ff86\" (UID: \"74905e72-7705-4b35-94da-8b57687ddd91\") " pod="openshift-marketplace/community-operators-8ff86" Mar 10 19:29:58 crc kubenswrapper[4861]: I0310 19:29:58.033481 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lnd8r\" (UniqueName: \"kubernetes.io/projected/74905e72-7705-4b35-94da-8b57687ddd91-kube-api-access-lnd8r\") pod \"community-operators-8ff86\" (UID: \"74905e72-7705-4b35-94da-8b57687ddd91\") " pod="openshift-marketplace/community-operators-8ff86" Mar 10 19:29:58 crc kubenswrapper[4861]: I0310 19:29:58.191242 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-8ff86" Mar 10 19:29:58 crc kubenswrapper[4861]: I0310 19:29:58.663882 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-8ff86"] Mar 10 19:29:59 crc kubenswrapper[4861]: I0310 19:29:59.062248 4861 generic.go:334] "Generic (PLEG): container finished" podID="74905e72-7705-4b35-94da-8b57687ddd91" containerID="17cd08c74e3c7765f2bf7c129b3b18b05abb5bafa559f877ef70bb58bf61cda5" exitCode=0 Mar 10 19:29:59 crc kubenswrapper[4861]: I0310 19:29:59.062335 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8ff86" event={"ID":"74905e72-7705-4b35-94da-8b57687ddd91","Type":"ContainerDied","Data":"17cd08c74e3c7765f2bf7c129b3b18b05abb5bafa559f877ef70bb58bf61cda5"} Mar 10 19:29:59 crc kubenswrapper[4861]: I0310 19:29:59.062603 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8ff86" event={"ID":"74905e72-7705-4b35-94da-8b57687ddd91","Type":"ContainerStarted","Data":"dce5b9cf2e1af854bf21a1cd9e6ef6a4312487695ad945682b850beca6cc3655"} Mar 10 19:29:59 crc kubenswrapper[4861]: I0310 19:29:59.958483 4861 scope.go:117] "RemoveContainer" containerID="fd0a159c2c146400d4580f41b7ac863e4f7fe4e1d0fac7f0b219dc9bba5fbf48" Mar 10 19:29:59 crc kubenswrapper[4861]: E0310 19:29:59.958967 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qttbr_openshift-machine-config-operator(771189c2-452d-4204-a0b7-abfe9ba62bd0)\"" pod="openshift-machine-config-operator/machine-config-daemon-qttbr" podUID="771189c2-452d-4204-a0b7-abfe9ba62bd0" Mar 10 19:30:00 crc kubenswrapper[4861]: I0310 19:30:00.071739 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8ff86" 
event={"ID":"74905e72-7705-4b35-94da-8b57687ddd91","Type":"ContainerStarted","Data":"b706770856a4c708f2ce9447d6b78569934de70767dd1f2b32dd45596dee10d8"} Mar 10 19:30:00 crc kubenswrapper[4861]: I0310 19:30:00.154010 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29552850-k4hm4"] Mar 10 19:30:00 crc kubenswrapper[4861]: I0310 19:30:00.154933 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29552850-k4hm4" Mar 10 19:30:00 crc kubenswrapper[4861]: I0310 19:30:00.157240 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-gfbj2" Mar 10 19:30:00 crc kubenswrapper[4861]: I0310 19:30:00.160104 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29552850-k4hm4"] Mar 10 19:30:00 crc kubenswrapper[4861]: I0310 19:30:00.202407 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 10 19:30:00 crc kubenswrapper[4861]: I0310 19:30:00.202522 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 10 19:30:00 crc kubenswrapper[4861]: I0310 19:30:00.252175 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m948r\" (UniqueName: \"kubernetes.io/projected/1214934e-2f2b-4a1e-be24-448ed78bbb1a-kube-api-access-m948r\") pod \"auto-csr-approver-29552850-k4hm4\" (UID: \"1214934e-2f2b-4a1e-be24-448ed78bbb1a\") " pod="openshift-infra/auto-csr-approver-29552850-k4hm4" Mar 10 19:30:00 crc kubenswrapper[4861]: I0310 19:30:00.271695 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29552850-8nzzv"] Mar 10 19:30:00 crc kubenswrapper[4861]: I0310 19:30:00.273064 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29552850-8nzzv" Mar 10 19:30:00 crc kubenswrapper[4861]: I0310 19:30:00.275873 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Mar 10 19:30:00 crc kubenswrapper[4861]: I0310 19:30:00.275987 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Mar 10 19:30:00 crc kubenswrapper[4861]: I0310 19:30:00.283387 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29552850-8nzzv"] Mar 10 19:30:00 crc kubenswrapper[4861]: I0310 19:30:00.353684 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ab095e33-d713-4053-a55a-e7da9a9d1587-secret-volume\") pod \"collect-profiles-29552850-8nzzv\" (UID: \"ab095e33-d713-4053-a55a-e7da9a9d1587\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29552850-8nzzv" Mar 10 19:30:00 crc kubenswrapper[4861]: I0310 19:30:00.353769 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ab095e33-d713-4053-a55a-e7da9a9d1587-config-volume\") pod \"collect-profiles-29552850-8nzzv\" (UID: \"ab095e33-d713-4053-a55a-e7da9a9d1587\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29552850-8nzzv" Mar 10 19:30:00 crc kubenswrapper[4861]: I0310 19:30:00.353862 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ljbwl\" (UniqueName: \"kubernetes.io/projected/ab095e33-d713-4053-a55a-e7da9a9d1587-kube-api-access-ljbwl\") pod \"collect-profiles-29552850-8nzzv\" (UID: \"ab095e33-d713-4053-a55a-e7da9a9d1587\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29552850-8nzzv" Mar 10 19:30:00 crc kubenswrapper[4861]: I0310 19:30:00.353897 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m948r\" (UniqueName: \"kubernetes.io/projected/1214934e-2f2b-4a1e-be24-448ed78bbb1a-kube-api-access-m948r\") pod \"auto-csr-approver-29552850-k4hm4\" (UID: \"1214934e-2f2b-4a1e-be24-448ed78bbb1a\") " pod="openshift-infra/auto-csr-approver-29552850-k4hm4" Mar 10 19:30:00 crc kubenswrapper[4861]: I0310 19:30:00.378098 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m948r\" (UniqueName: \"kubernetes.io/projected/1214934e-2f2b-4a1e-be24-448ed78bbb1a-kube-api-access-m948r\") pod \"auto-csr-approver-29552850-k4hm4\" (UID: \"1214934e-2f2b-4a1e-be24-448ed78bbb1a\") " pod="openshift-infra/auto-csr-approver-29552850-k4hm4" Mar 10 19:30:00 crc kubenswrapper[4861]: I0310 19:30:00.455412 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ljbwl\" (UniqueName: \"kubernetes.io/projected/ab095e33-d713-4053-a55a-e7da9a9d1587-kube-api-access-ljbwl\") pod \"collect-profiles-29552850-8nzzv\" (UID: \"ab095e33-d713-4053-a55a-e7da9a9d1587\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29552850-8nzzv" Mar 10 19:30:00 crc kubenswrapper[4861]: I0310 19:30:00.455547 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ab095e33-d713-4053-a55a-e7da9a9d1587-secret-volume\") pod \"collect-profiles-29552850-8nzzv\" (UID: \"ab095e33-d713-4053-a55a-e7da9a9d1587\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29552850-8nzzv" Mar 10 19:30:00 crc kubenswrapper[4861]: I0310 19:30:00.455675 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/ab095e33-d713-4053-a55a-e7da9a9d1587-config-volume\") pod \"collect-profiles-29552850-8nzzv\" (UID: \"ab095e33-d713-4053-a55a-e7da9a9d1587\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29552850-8nzzv" Mar 10 19:30:00 crc kubenswrapper[4861]: I0310 19:30:00.457512 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ab095e33-d713-4053-a55a-e7da9a9d1587-config-volume\") pod \"collect-profiles-29552850-8nzzv\" (UID: \"ab095e33-d713-4053-a55a-e7da9a9d1587\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29552850-8nzzv" Mar 10 19:30:00 crc kubenswrapper[4861]: I0310 19:30:00.461202 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ab095e33-d713-4053-a55a-e7da9a9d1587-secret-volume\") pod \"collect-profiles-29552850-8nzzv\" (UID: \"ab095e33-d713-4053-a55a-e7da9a9d1587\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29552850-8nzzv" Mar 10 19:30:00 crc kubenswrapper[4861]: I0310 19:30:00.487142 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ljbwl\" (UniqueName: \"kubernetes.io/projected/ab095e33-d713-4053-a55a-e7da9a9d1587-kube-api-access-ljbwl\") pod \"collect-profiles-29552850-8nzzv\" (UID: \"ab095e33-d713-4053-a55a-e7da9a9d1587\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29552850-8nzzv" Mar 10 19:30:00 crc kubenswrapper[4861]: I0310 19:30:00.586625 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29552850-k4hm4" Mar 10 19:30:00 crc kubenswrapper[4861]: I0310 19:30:00.594566 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29552850-8nzzv" Mar 10 19:30:01 crc kubenswrapper[4861]: I0310 19:30:01.082029 4861 generic.go:334] "Generic (PLEG): container finished" podID="74905e72-7705-4b35-94da-8b57687ddd91" containerID="b706770856a4c708f2ce9447d6b78569934de70767dd1f2b32dd45596dee10d8" exitCode=0 Mar 10 19:30:01 crc kubenswrapper[4861]: I0310 19:30:01.082289 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8ff86" event={"ID":"74905e72-7705-4b35-94da-8b57687ddd91","Type":"ContainerDied","Data":"b706770856a4c708f2ce9447d6b78569934de70767dd1f2b32dd45596dee10d8"} Mar 10 19:30:01 crc kubenswrapper[4861]: I0310 19:30:01.127793 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29552850-k4hm4"] Mar 10 19:30:01 crc kubenswrapper[4861]: W0310 19:30:01.145549 4861 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podab095e33_d713_4053_a55a_e7da9a9d1587.slice/crio-ec09802a6d5f648b6c86eda6079cad1bbecacc6915ecfedcf537f04fa16eeca0 WatchSource:0}: Error finding container ec09802a6d5f648b6c86eda6079cad1bbecacc6915ecfedcf537f04fa16eeca0: Status 404 returned error can't find the container with id ec09802a6d5f648b6c86eda6079cad1bbecacc6915ecfedcf537f04fa16eeca0 Mar 10 19:30:01 crc kubenswrapper[4861]: I0310 19:30:01.157541 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29552850-8nzzv"] Mar 10 19:30:02 crc kubenswrapper[4861]: I0310 19:30:02.091702 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29552850-k4hm4" event={"ID":"1214934e-2f2b-4a1e-be24-448ed78bbb1a","Type":"ContainerStarted","Data":"f86b510d590fee60628174db84dc6cf866cf362c26fb3697cc328ef8dfd4ae59"} Mar 10 19:30:02 crc kubenswrapper[4861]: I0310 19:30:02.095253 4861 generic.go:334] 
"Generic (PLEG): container finished" podID="ab095e33-d713-4053-a55a-e7da9a9d1587" containerID="b54c280a869472f44fa9a43c1c742443d37a059381310a2c6315ca52d30d7667" exitCode=0 Mar 10 19:30:02 crc kubenswrapper[4861]: I0310 19:30:02.095317 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29552850-8nzzv" event={"ID":"ab095e33-d713-4053-a55a-e7da9a9d1587","Type":"ContainerDied","Data":"b54c280a869472f44fa9a43c1c742443d37a059381310a2c6315ca52d30d7667"} Mar 10 19:30:02 crc kubenswrapper[4861]: I0310 19:30:02.095396 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29552850-8nzzv" event={"ID":"ab095e33-d713-4053-a55a-e7da9a9d1587","Type":"ContainerStarted","Data":"ec09802a6d5f648b6c86eda6079cad1bbecacc6915ecfedcf537f04fa16eeca0"} Mar 10 19:30:02 crc kubenswrapper[4861]: I0310 19:30:02.099567 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8ff86" event={"ID":"74905e72-7705-4b35-94da-8b57687ddd91","Type":"ContainerStarted","Data":"5f7f57d8ca3d57643ec668c9affe97becad7791a3fc23dbd360e90469c2b1ce5"} Mar 10 19:30:02 crc kubenswrapper[4861]: I0310 19:30:02.152135 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-8ff86" podStartSLOduration=2.460126769 podStartE2EDuration="5.152107195s" podCreationTimestamp="2026-03-10 19:29:57 +0000 UTC" firstStartedPulling="2026-03-10 19:29:59.064326968 +0000 UTC m=+2542.827762958" lastFinishedPulling="2026-03-10 19:30:01.756307384 +0000 UTC m=+2545.519743384" observedRunningTime="2026-03-10 19:30:02.145119125 +0000 UTC m=+2545.908555125" watchObservedRunningTime="2026-03-10 19:30:02.152107195 +0000 UTC m=+2545.915543195" Mar 10 19:30:03 crc kubenswrapper[4861]: I0310 19:30:03.414339 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29552850-8nzzv" Mar 10 19:30:03 crc kubenswrapper[4861]: I0310 19:30:03.541530 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ab095e33-d713-4053-a55a-e7da9a9d1587-config-volume\") pod \"ab095e33-d713-4053-a55a-e7da9a9d1587\" (UID: \"ab095e33-d713-4053-a55a-e7da9a9d1587\") " Mar 10 19:30:03 crc kubenswrapper[4861]: I0310 19:30:03.541660 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ab095e33-d713-4053-a55a-e7da9a9d1587-secret-volume\") pod \"ab095e33-d713-4053-a55a-e7da9a9d1587\" (UID: \"ab095e33-d713-4053-a55a-e7da9a9d1587\") " Mar 10 19:30:03 crc kubenswrapper[4861]: I0310 19:30:03.541761 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ljbwl\" (UniqueName: \"kubernetes.io/projected/ab095e33-d713-4053-a55a-e7da9a9d1587-kube-api-access-ljbwl\") pod \"ab095e33-d713-4053-a55a-e7da9a9d1587\" (UID: \"ab095e33-d713-4053-a55a-e7da9a9d1587\") " Mar 10 19:30:03 crc kubenswrapper[4861]: I0310 19:30:03.542447 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ab095e33-d713-4053-a55a-e7da9a9d1587-config-volume" (OuterVolumeSpecName: "config-volume") pod "ab095e33-d713-4053-a55a-e7da9a9d1587" (UID: "ab095e33-d713-4053-a55a-e7da9a9d1587"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 10 19:30:03 crc kubenswrapper[4861]: I0310 19:30:03.546980 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ab095e33-d713-4053-a55a-e7da9a9d1587-kube-api-access-ljbwl" (OuterVolumeSpecName: "kube-api-access-ljbwl") pod "ab095e33-d713-4053-a55a-e7da9a9d1587" (UID: "ab095e33-d713-4053-a55a-e7da9a9d1587"). 
InnerVolumeSpecName "kube-api-access-ljbwl". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 10 19:30:03 crc kubenswrapper[4861]: I0310 19:30:03.547558 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ab095e33-d713-4053-a55a-e7da9a9d1587-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "ab095e33-d713-4053-a55a-e7da9a9d1587" (UID: "ab095e33-d713-4053-a55a-e7da9a9d1587"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 10 19:30:03 crc kubenswrapper[4861]: I0310 19:30:03.642867 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ljbwl\" (UniqueName: \"kubernetes.io/projected/ab095e33-d713-4053-a55a-e7da9a9d1587-kube-api-access-ljbwl\") on node \"crc\" DevicePath \"\"" Mar 10 19:30:03 crc kubenswrapper[4861]: I0310 19:30:03.642901 4861 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ab095e33-d713-4053-a55a-e7da9a9d1587-config-volume\") on node \"crc\" DevicePath \"\"" Mar 10 19:30:03 crc kubenswrapper[4861]: I0310 19:30:03.642914 4861 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ab095e33-d713-4053-a55a-e7da9a9d1587-secret-volume\") on node \"crc\" DevicePath \"\"" Mar 10 19:30:04 crc kubenswrapper[4861]: I0310 19:30:04.143565 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29552850-8nzzv" Mar 10 19:30:04 crc kubenswrapper[4861]: I0310 19:30:04.143564 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29552850-8nzzv" event={"ID":"ab095e33-d713-4053-a55a-e7da9a9d1587","Type":"ContainerDied","Data":"ec09802a6d5f648b6c86eda6079cad1bbecacc6915ecfedcf537f04fa16eeca0"} Mar 10 19:30:04 crc kubenswrapper[4861]: I0310 19:30:04.143778 4861 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ec09802a6d5f648b6c86eda6079cad1bbecacc6915ecfedcf537f04fa16eeca0" Mar 10 19:30:04 crc kubenswrapper[4861]: I0310 19:30:04.146225 4861 generic.go:334] "Generic (PLEG): container finished" podID="1214934e-2f2b-4a1e-be24-448ed78bbb1a" containerID="43393281e71df51937a953c595800fbb10742aee2b0dca908db48bb954d216de" exitCode=0 Mar 10 19:30:04 crc kubenswrapper[4861]: I0310 19:30:04.146299 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29552850-k4hm4" event={"ID":"1214934e-2f2b-4a1e-be24-448ed78bbb1a","Type":"ContainerDied","Data":"43393281e71df51937a953c595800fbb10742aee2b0dca908db48bb954d216de"} Mar 10 19:30:04 crc kubenswrapper[4861]: I0310 19:30:04.478850 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29552805-vbm4k"] Mar 10 19:30:04 crc kubenswrapper[4861]: I0310 19:30:04.483425 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29552805-vbm4k"] Mar 10 19:30:04 crc kubenswrapper[4861]: I0310 19:30:04.973027 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e835c42f-7b8a-45a3-a153-ffab9b5386e0" path="/var/lib/kubelet/pods/e835c42f-7b8a-45a3-a153-ffab9b5386e0/volumes" Mar 10 19:30:05 crc kubenswrapper[4861]: I0310 19:30:05.496299 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29552850-k4hm4" Mar 10 19:30:05 crc kubenswrapper[4861]: I0310 19:30:05.671526 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m948r\" (UniqueName: \"kubernetes.io/projected/1214934e-2f2b-4a1e-be24-448ed78bbb1a-kube-api-access-m948r\") pod \"1214934e-2f2b-4a1e-be24-448ed78bbb1a\" (UID: \"1214934e-2f2b-4a1e-be24-448ed78bbb1a\") " Mar 10 19:30:05 crc kubenswrapper[4861]: I0310 19:30:05.680986 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1214934e-2f2b-4a1e-be24-448ed78bbb1a-kube-api-access-m948r" (OuterVolumeSpecName: "kube-api-access-m948r") pod "1214934e-2f2b-4a1e-be24-448ed78bbb1a" (UID: "1214934e-2f2b-4a1e-be24-448ed78bbb1a"). InnerVolumeSpecName "kube-api-access-m948r". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 10 19:30:05 crc kubenswrapper[4861]: I0310 19:30:05.774349 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m948r\" (UniqueName: \"kubernetes.io/projected/1214934e-2f2b-4a1e-be24-448ed78bbb1a-kube-api-access-m948r\") on node \"crc\" DevicePath \"\"" Mar 10 19:30:06 crc kubenswrapper[4861]: I0310 19:30:06.166453 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29552850-k4hm4" event={"ID":"1214934e-2f2b-4a1e-be24-448ed78bbb1a","Type":"ContainerDied","Data":"f86b510d590fee60628174db84dc6cf866cf362c26fb3697cc328ef8dfd4ae59"} Mar 10 19:30:06 crc kubenswrapper[4861]: I0310 19:30:06.166850 4861 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f86b510d590fee60628174db84dc6cf866cf362c26fb3697cc328ef8dfd4ae59" Mar 10 19:30:06 crc kubenswrapper[4861]: I0310 19:30:06.166570 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29552850-k4hm4" Mar 10 19:30:06 crc kubenswrapper[4861]: I0310 19:30:06.569066 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29552844-m2p9c"] Mar 10 19:30:06 crc kubenswrapper[4861]: I0310 19:30:06.578526 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29552844-m2p9c"] Mar 10 19:30:06 crc kubenswrapper[4861]: I0310 19:30:06.983544 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="466be0e2-4900-4a79-ba16-62f30de67914" path="/var/lib/kubelet/pods/466be0e2-4900-4a79-ba16-62f30de67914/volumes" Mar 10 19:30:08 crc kubenswrapper[4861]: I0310 19:30:08.191512 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-8ff86" Mar 10 19:30:08 crc kubenswrapper[4861]: I0310 19:30:08.191563 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-8ff86" Mar 10 19:30:08 crc kubenswrapper[4861]: I0310 19:30:08.267539 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-8ff86" Mar 10 19:30:09 crc kubenswrapper[4861]: I0310 19:30:09.252683 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-8ff86" Mar 10 19:30:09 crc kubenswrapper[4861]: I0310 19:30:09.317177 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-8ff86"] Mar 10 19:30:11 crc kubenswrapper[4861]: I0310 19:30:11.215006 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-8ff86" podUID="74905e72-7705-4b35-94da-8b57687ddd91" containerName="registry-server" containerID="cri-o://5f7f57d8ca3d57643ec668c9affe97becad7791a3fc23dbd360e90469c2b1ce5" gracePeriod=2 Mar 10 19:30:11 
crc kubenswrapper[4861]: I0310 19:30:11.958487 4861 scope.go:117] "RemoveContainer" containerID="fd0a159c2c146400d4580f41b7ac863e4f7fe4e1d0fac7f0b219dc9bba5fbf48" Mar 10 19:30:11 crc kubenswrapper[4861]: E0310 19:30:11.959074 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qttbr_openshift-machine-config-operator(771189c2-452d-4204-a0b7-abfe9ba62bd0)\"" pod="openshift-machine-config-operator/machine-config-daemon-qttbr" podUID="771189c2-452d-4204-a0b7-abfe9ba62bd0" Mar 10 19:30:12 crc kubenswrapper[4861]: I0310 19:30:12.229108 4861 generic.go:334] "Generic (PLEG): container finished" podID="74905e72-7705-4b35-94da-8b57687ddd91" containerID="5f7f57d8ca3d57643ec668c9affe97becad7791a3fc23dbd360e90469c2b1ce5" exitCode=0 Mar 10 19:30:12 crc kubenswrapper[4861]: I0310 19:30:12.229173 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8ff86" event={"ID":"74905e72-7705-4b35-94da-8b57687ddd91","Type":"ContainerDied","Data":"5f7f57d8ca3d57643ec668c9affe97becad7791a3fc23dbd360e90469c2b1ce5"} Mar 10 19:30:12 crc kubenswrapper[4861]: I0310 19:30:12.559138 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-8ff86" Mar 10 19:30:12 crc kubenswrapper[4861]: I0310 19:30:12.692767 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lnd8r\" (UniqueName: \"kubernetes.io/projected/74905e72-7705-4b35-94da-8b57687ddd91-kube-api-access-lnd8r\") pod \"74905e72-7705-4b35-94da-8b57687ddd91\" (UID: \"74905e72-7705-4b35-94da-8b57687ddd91\") " Mar 10 19:30:12 crc kubenswrapper[4861]: I0310 19:30:12.692914 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/74905e72-7705-4b35-94da-8b57687ddd91-utilities\") pod \"74905e72-7705-4b35-94da-8b57687ddd91\" (UID: \"74905e72-7705-4b35-94da-8b57687ddd91\") " Mar 10 19:30:12 crc kubenswrapper[4861]: I0310 19:30:12.693039 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/74905e72-7705-4b35-94da-8b57687ddd91-catalog-content\") pod \"74905e72-7705-4b35-94da-8b57687ddd91\" (UID: \"74905e72-7705-4b35-94da-8b57687ddd91\") " Mar 10 19:30:12 crc kubenswrapper[4861]: I0310 19:30:12.694187 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/74905e72-7705-4b35-94da-8b57687ddd91-utilities" (OuterVolumeSpecName: "utilities") pod "74905e72-7705-4b35-94da-8b57687ddd91" (UID: "74905e72-7705-4b35-94da-8b57687ddd91"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 10 19:30:12 crc kubenswrapper[4861]: I0310 19:30:12.700769 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/74905e72-7705-4b35-94da-8b57687ddd91-kube-api-access-lnd8r" (OuterVolumeSpecName: "kube-api-access-lnd8r") pod "74905e72-7705-4b35-94da-8b57687ddd91" (UID: "74905e72-7705-4b35-94da-8b57687ddd91"). InnerVolumeSpecName "kube-api-access-lnd8r". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 10 19:30:12 crc kubenswrapper[4861]: I0310 19:30:12.796821 4861 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/74905e72-7705-4b35-94da-8b57687ddd91-utilities\") on node \"crc\" DevicePath \"\"" Mar 10 19:30:12 crc kubenswrapper[4861]: I0310 19:30:12.797001 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lnd8r\" (UniqueName: \"kubernetes.io/projected/74905e72-7705-4b35-94da-8b57687ddd91-kube-api-access-lnd8r\") on node \"crc\" DevicePath \"\"" Mar 10 19:30:12 crc kubenswrapper[4861]: I0310 19:30:12.806583 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/74905e72-7705-4b35-94da-8b57687ddd91-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "74905e72-7705-4b35-94da-8b57687ddd91" (UID: "74905e72-7705-4b35-94da-8b57687ddd91"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 10 19:30:12 crc kubenswrapper[4861]: I0310 19:30:12.898854 4861 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/74905e72-7705-4b35-94da-8b57687ddd91-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 10 19:30:13 crc kubenswrapper[4861]: I0310 19:30:13.242925 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8ff86" event={"ID":"74905e72-7705-4b35-94da-8b57687ddd91","Type":"ContainerDied","Data":"dce5b9cf2e1af854bf21a1cd9e6ef6a4312487695ad945682b850beca6cc3655"} Mar 10 19:30:13 crc kubenswrapper[4861]: I0310 19:30:13.242997 4861 scope.go:117] "RemoveContainer" containerID="5f7f57d8ca3d57643ec668c9affe97becad7791a3fc23dbd360e90469c2b1ce5" Mar 10 19:30:13 crc kubenswrapper[4861]: I0310 19:30:13.243110 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-8ff86" Mar 10 19:30:13 crc kubenswrapper[4861]: I0310 19:30:13.281377 4861 scope.go:117] "RemoveContainer" containerID="b706770856a4c708f2ce9447d6b78569934de70767dd1f2b32dd45596dee10d8" Mar 10 19:30:13 crc kubenswrapper[4861]: I0310 19:30:13.284438 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-8ff86"] Mar 10 19:30:13 crc kubenswrapper[4861]: I0310 19:30:13.298632 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-8ff86"] Mar 10 19:30:13 crc kubenswrapper[4861]: I0310 19:30:13.315172 4861 scope.go:117] "RemoveContainer" containerID="17cd08c74e3c7765f2bf7c129b3b18b05abb5bafa559f877ef70bb58bf61cda5" Mar 10 19:30:14 crc kubenswrapper[4861]: I0310 19:30:14.973447 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="74905e72-7705-4b35-94da-8b57687ddd91" path="/var/lib/kubelet/pods/74905e72-7705-4b35-94da-8b57687ddd91/volumes" Mar 10 19:30:22 crc kubenswrapper[4861]: I0310 19:30:22.962493 4861 scope.go:117] "RemoveContainer" containerID="fd0a159c2c146400d4580f41b7ac863e4f7fe4e1d0fac7f0b219dc9bba5fbf48" Mar 10 19:30:22 crc kubenswrapper[4861]: E0310 19:30:22.969049 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qttbr_openshift-machine-config-operator(771189c2-452d-4204-a0b7-abfe9ba62bd0)\"" pod="openshift-machine-config-operator/machine-config-daemon-qttbr" podUID="771189c2-452d-4204-a0b7-abfe9ba62bd0" Mar 10 19:30:35 crc kubenswrapper[4861]: I0310 19:30:35.958690 4861 scope.go:117] "RemoveContainer" containerID="fd0a159c2c146400d4580f41b7ac863e4f7fe4e1d0fac7f0b219dc9bba5fbf48" Mar 10 19:30:35 crc kubenswrapper[4861]: E0310 19:30:35.959628 4861 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qttbr_openshift-machine-config-operator(771189c2-452d-4204-a0b7-abfe9ba62bd0)\"" pod="openshift-machine-config-operator/machine-config-daemon-qttbr" podUID="771189c2-452d-4204-a0b7-abfe9ba62bd0" Mar 10 19:30:50 crc kubenswrapper[4861]: I0310 19:30:50.958574 4861 scope.go:117] "RemoveContainer" containerID="fd0a159c2c146400d4580f41b7ac863e4f7fe4e1d0fac7f0b219dc9bba5fbf48" Mar 10 19:30:50 crc kubenswrapper[4861]: E0310 19:30:50.959641 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qttbr_openshift-machine-config-operator(771189c2-452d-4204-a0b7-abfe9ba62bd0)\"" pod="openshift-machine-config-operator/machine-config-daemon-qttbr" podUID="771189c2-452d-4204-a0b7-abfe9ba62bd0" Mar 10 19:30:54 crc kubenswrapper[4861]: I0310 19:30:54.069000 4861 scope.go:117] "RemoveContainer" containerID="61ca3e1060d6dfb28752f8138422c83504a8608695be1459552183b0e5aa08e4" Mar 10 19:30:54 crc kubenswrapper[4861]: I0310 19:30:54.104578 4861 scope.go:117] "RemoveContainer" containerID="12b926404431693794fb2c81c0dbddf60eefa319737c22f7478b418122b8decc" Mar 10 19:31:05 crc kubenswrapper[4861]: I0310 19:31:05.957732 4861 scope.go:117] "RemoveContainer" containerID="fd0a159c2c146400d4580f41b7ac863e4f7fe4e1d0fac7f0b219dc9bba5fbf48" Mar 10 19:31:05 crc kubenswrapper[4861]: E0310 19:31:05.958610 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qttbr_openshift-machine-config-operator(771189c2-452d-4204-a0b7-abfe9ba62bd0)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-qttbr" podUID="771189c2-452d-4204-a0b7-abfe9ba62bd0" Mar 10 19:31:19 crc kubenswrapper[4861]: I0310 19:31:19.958605 4861 scope.go:117] "RemoveContainer" containerID="fd0a159c2c146400d4580f41b7ac863e4f7fe4e1d0fac7f0b219dc9bba5fbf48" Mar 10 19:31:19 crc kubenswrapper[4861]: E0310 19:31:19.960236 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qttbr_openshift-machine-config-operator(771189c2-452d-4204-a0b7-abfe9ba62bd0)\"" pod="openshift-machine-config-operator/machine-config-daemon-qttbr" podUID="771189c2-452d-4204-a0b7-abfe9ba62bd0" Mar 10 19:31:31 crc kubenswrapper[4861]: I0310 19:31:31.958268 4861 scope.go:117] "RemoveContainer" containerID="fd0a159c2c146400d4580f41b7ac863e4f7fe4e1d0fac7f0b219dc9bba5fbf48" Mar 10 19:31:31 crc kubenswrapper[4861]: E0310 19:31:31.959286 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qttbr_openshift-machine-config-operator(771189c2-452d-4204-a0b7-abfe9ba62bd0)\"" pod="openshift-machine-config-operator/machine-config-daemon-qttbr" podUID="771189c2-452d-4204-a0b7-abfe9ba62bd0" Mar 10 19:31:43 crc kubenswrapper[4861]: I0310 19:31:43.958929 4861 scope.go:117] "RemoveContainer" containerID="fd0a159c2c146400d4580f41b7ac863e4f7fe4e1d0fac7f0b219dc9bba5fbf48" Mar 10 19:31:43 crc kubenswrapper[4861]: E0310 19:31:43.960301 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-qttbr_openshift-machine-config-operator(771189c2-452d-4204-a0b7-abfe9ba62bd0)\"" pod="openshift-machine-config-operator/machine-config-daemon-qttbr" podUID="771189c2-452d-4204-a0b7-abfe9ba62bd0" Mar 10 19:31:56 crc kubenswrapper[4861]: I0310 19:31:56.967357 4861 scope.go:117] "RemoveContainer" containerID="fd0a159c2c146400d4580f41b7ac863e4f7fe4e1d0fac7f0b219dc9bba5fbf48" Mar 10 19:31:56 crc kubenswrapper[4861]: E0310 19:31:56.968408 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qttbr_openshift-machine-config-operator(771189c2-452d-4204-a0b7-abfe9ba62bd0)\"" pod="openshift-machine-config-operator/machine-config-daemon-qttbr" podUID="771189c2-452d-4204-a0b7-abfe9ba62bd0" Mar 10 19:32:00 crc kubenswrapper[4861]: I0310 19:32:00.166895 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29552852-k74lk"] Mar 10 19:32:00 crc kubenswrapper[4861]: E0310 19:32:00.167581 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1214934e-2f2b-4a1e-be24-448ed78bbb1a" containerName="oc" Mar 10 19:32:00 crc kubenswrapper[4861]: I0310 19:32:00.167602 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="1214934e-2f2b-4a1e-be24-448ed78bbb1a" containerName="oc" Mar 10 19:32:00 crc kubenswrapper[4861]: E0310 19:32:00.167629 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="74905e72-7705-4b35-94da-8b57687ddd91" containerName="registry-server" Mar 10 19:32:00 crc kubenswrapper[4861]: I0310 19:32:00.167641 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="74905e72-7705-4b35-94da-8b57687ddd91" containerName="registry-server" Mar 10 19:32:00 crc kubenswrapper[4861]: E0310 19:32:00.167668 4861 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="74905e72-7705-4b35-94da-8b57687ddd91" containerName="extract-utilities" Mar 10 19:32:00 crc kubenswrapper[4861]: I0310 19:32:00.167682 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="74905e72-7705-4b35-94da-8b57687ddd91" containerName="extract-utilities" Mar 10 19:32:00 crc kubenswrapper[4861]: E0310 19:32:00.167697 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ab095e33-d713-4053-a55a-e7da9a9d1587" containerName="collect-profiles" Mar 10 19:32:00 crc kubenswrapper[4861]: I0310 19:32:00.167737 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="ab095e33-d713-4053-a55a-e7da9a9d1587" containerName="collect-profiles" Mar 10 19:32:00 crc kubenswrapper[4861]: E0310 19:32:00.167769 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="74905e72-7705-4b35-94da-8b57687ddd91" containerName="extract-content" Mar 10 19:32:00 crc kubenswrapper[4861]: I0310 19:32:00.167781 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="74905e72-7705-4b35-94da-8b57687ddd91" containerName="extract-content" Mar 10 19:32:00 crc kubenswrapper[4861]: I0310 19:32:00.168011 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="1214934e-2f2b-4a1e-be24-448ed78bbb1a" containerName="oc" Mar 10 19:32:00 crc kubenswrapper[4861]: I0310 19:32:00.168073 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="74905e72-7705-4b35-94da-8b57687ddd91" containerName="registry-server" Mar 10 19:32:00 crc kubenswrapper[4861]: I0310 19:32:00.168098 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="ab095e33-d713-4053-a55a-e7da9a9d1587" containerName="collect-profiles" Mar 10 19:32:00 crc kubenswrapper[4861]: I0310 19:32:00.168943 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29552852-k74lk" Mar 10 19:32:00 crc kubenswrapper[4861]: I0310 19:32:00.171292 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-gfbj2" Mar 10 19:32:00 crc kubenswrapper[4861]: I0310 19:32:00.171767 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 10 19:32:00 crc kubenswrapper[4861]: I0310 19:32:00.172005 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 10 19:32:00 crc kubenswrapper[4861]: I0310 19:32:00.182549 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29552852-k74lk"] Mar 10 19:32:00 crc kubenswrapper[4861]: I0310 19:32:00.288786 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ch9mc\" (UniqueName: \"kubernetes.io/projected/65e1294d-53ac-4c81-bdeb-2ffe10009b37-kube-api-access-ch9mc\") pod \"auto-csr-approver-29552852-k74lk\" (UID: \"65e1294d-53ac-4c81-bdeb-2ffe10009b37\") " pod="openshift-infra/auto-csr-approver-29552852-k74lk" Mar 10 19:32:00 crc kubenswrapper[4861]: I0310 19:32:00.390848 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ch9mc\" (UniqueName: \"kubernetes.io/projected/65e1294d-53ac-4c81-bdeb-2ffe10009b37-kube-api-access-ch9mc\") pod \"auto-csr-approver-29552852-k74lk\" (UID: \"65e1294d-53ac-4c81-bdeb-2ffe10009b37\") " pod="openshift-infra/auto-csr-approver-29552852-k74lk" Mar 10 19:32:00 crc kubenswrapper[4861]: I0310 19:32:00.422844 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ch9mc\" (UniqueName: \"kubernetes.io/projected/65e1294d-53ac-4c81-bdeb-2ffe10009b37-kube-api-access-ch9mc\") pod \"auto-csr-approver-29552852-k74lk\" (UID: \"65e1294d-53ac-4c81-bdeb-2ffe10009b37\") " 
pod="openshift-infra/auto-csr-approver-29552852-k74lk" Mar 10 19:32:00 crc kubenswrapper[4861]: I0310 19:32:00.498739 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29552852-k74lk" Mar 10 19:32:01 crc kubenswrapper[4861]: I0310 19:32:01.017507 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29552852-k74lk"] Mar 10 19:32:01 crc kubenswrapper[4861]: W0310 19:32:01.040498 4861 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod65e1294d_53ac_4c81_bdeb_2ffe10009b37.slice/crio-8b9198bcb00f5da954a9a4d4f3e78844553df3fbfdad156cc9a6c5badf15573f WatchSource:0}: Error finding container 8b9198bcb00f5da954a9a4d4f3e78844553df3fbfdad156cc9a6c5badf15573f: Status 404 returned error can't find the container with id 8b9198bcb00f5da954a9a4d4f3e78844553df3fbfdad156cc9a6c5badf15573f Mar 10 19:32:01 crc kubenswrapper[4861]: I0310 19:32:01.048357 4861 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 10 19:32:01 crc kubenswrapper[4861]: I0310 19:32:01.339357 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29552852-k74lk" event={"ID":"65e1294d-53ac-4c81-bdeb-2ffe10009b37","Type":"ContainerStarted","Data":"8b9198bcb00f5da954a9a4d4f3e78844553df3fbfdad156cc9a6c5badf15573f"} Mar 10 19:32:03 crc kubenswrapper[4861]: I0310 19:32:03.355780 4861 generic.go:334] "Generic (PLEG): container finished" podID="65e1294d-53ac-4c81-bdeb-2ffe10009b37" containerID="c7b205995f492dd09c36886e508838cbdc83de3afbef246dbaf76e8eac282bdf" exitCode=0 Mar 10 19:32:03 crc kubenswrapper[4861]: I0310 19:32:03.355882 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29552852-k74lk" 
event={"ID":"65e1294d-53ac-4c81-bdeb-2ffe10009b37","Type":"ContainerDied","Data":"c7b205995f492dd09c36886e508838cbdc83de3afbef246dbaf76e8eac282bdf"} Mar 10 19:32:04 crc kubenswrapper[4861]: I0310 19:32:04.753741 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29552852-k74lk" Mar 10 19:32:04 crc kubenswrapper[4861]: I0310 19:32:04.883238 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ch9mc\" (UniqueName: \"kubernetes.io/projected/65e1294d-53ac-4c81-bdeb-2ffe10009b37-kube-api-access-ch9mc\") pod \"65e1294d-53ac-4c81-bdeb-2ffe10009b37\" (UID: \"65e1294d-53ac-4c81-bdeb-2ffe10009b37\") " Mar 10 19:32:04 crc kubenswrapper[4861]: I0310 19:32:04.896341 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/65e1294d-53ac-4c81-bdeb-2ffe10009b37-kube-api-access-ch9mc" (OuterVolumeSpecName: "kube-api-access-ch9mc") pod "65e1294d-53ac-4c81-bdeb-2ffe10009b37" (UID: "65e1294d-53ac-4c81-bdeb-2ffe10009b37"). InnerVolumeSpecName "kube-api-access-ch9mc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 10 19:32:04 crc kubenswrapper[4861]: I0310 19:32:04.985861 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ch9mc\" (UniqueName: \"kubernetes.io/projected/65e1294d-53ac-4c81-bdeb-2ffe10009b37-kube-api-access-ch9mc\") on node \"crc\" DevicePath \"\"" Mar 10 19:32:05 crc kubenswrapper[4861]: I0310 19:32:05.378203 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29552852-k74lk" event={"ID":"65e1294d-53ac-4c81-bdeb-2ffe10009b37","Type":"ContainerDied","Data":"8b9198bcb00f5da954a9a4d4f3e78844553df3fbfdad156cc9a6c5badf15573f"} Mar 10 19:32:05 crc kubenswrapper[4861]: I0310 19:32:05.378763 4861 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8b9198bcb00f5da954a9a4d4f3e78844553df3fbfdad156cc9a6c5badf15573f" Mar 10 19:32:05 crc kubenswrapper[4861]: I0310 19:32:05.378294 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29552852-k74lk" Mar 10 19:32:05 crc kubenswrapper[4861]: I0310 19:32:05.853613 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29552846-dspw8"] Mar 10 19:32:05 crc kubenswrapper[4861]: I0310 19:32:05.859938 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29552846-dspw8"] Mar 10 19:32:06 crc kubenswrapper[4861]: I0310 19:32:06.974833 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="942e4526-20e6-4232-849a-20a982e545ab" path="/var/lib/kubelet/pods/942e4526-20e6-4232-849a-20a982e545ab/volumes" Mar 10 19:32:10 crc kubenswrapper[4861]: I0310 19:32:10.958023 4861 scope.go:117] "RemoveContainer" containerID="fd0a159c2c146400d4580f41b7ac863e4f7fe4e1d0fac7f0b219dc9bba5fbf48" Mar 10 19:32:10 crc kubenswrapper[4861]: E0310 19:32:10.958974 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qttbr_openshift-machine-config-operator(771189c2-452d-4204-a0b7-abfe9ba62bd0)\"" pod="openshift-machine-config-operator/machine-config-daemon-qttbr" podUID="771189c2-452d-4204-a0b7-abfe9ba62bd0" Mar 10 19:32:22 crc kubenswrapper[4861]: I0310 19:32:22.959291 4861 scope.go:117] "RemoveContainer" containerID="fd0a159c2c146400d4580f41b7ac863e4f7fe4e1d0fac7f0b219dc9bba5fbf48" Mar 10 19:32:22 crc kubenswrapper[4861]: E0310 19:32:22.960465 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qttbr_openshift-machine-config-operator(771189c2-452d-4204-a0b7-abfe9ba62bd0)\"" pod="openshift-machine-config-operator/machine-config-daemon-qttbr" podUID="771189c2-452d-4204-a0b7-abfe9ba62bd0" Mar 10 19:32:34 crc kubenswrapper[4861]: I0310 19:32:34.958557 4861 scope.go:117] "RemoveContainer" containerID="fd0a159c2c146400d4580f41b7ac863e4f7fe4e1d0fac7f0b219dc9bba5fbf48" Mar 10 19:32:34 crc kubenswrapper[4861]: E0310 19:32:34.959543 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qttbr_openshift-machine-config-operator(771189c2-452d-4204-a0b7-abfe9ba62bd0)\"" pod="openshift-machine-config-operator/machine-config-daemon-qttbr" podUID="771189c2-452d-4204-a0b7-abfe9ba62bd0" Mar 10 19:32:48 crc kubenswrapper[4861]: I0310 19:32:48.958579 4861 scope.go:117] "RemoveContainer" containerID="fd0a159c2c146400d4580f41b7ac863e4f7fe4e1d0fac7f0b219dc9bba5fbf48" Mar 10 19:32:48 crc kubenswrapper[4861]: E0310 19:32:48.959536 4861 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qttbr_openshift-machine-config-operator(771189c2-452d-4204-a0b7-abfe9ba62bd0)\"" pod="openshift-machine-config-operator/machine-config-daemon-qttbr" podUID="771189c2-452d-4204-a0b7-abfe9ba62bd0" Mar 10 19:32:54 crc kubenswrapper[4861]: I0310 19:32:54.248513 4861 scope.go:117] "RemoveContainer" containerID="a006e0f5eff4008c7463e9b545eb4b2273ed323728102b2a34233eb444ee26bb" Mar 10 19:33:01 crc kubenswrapper[4861]: I0310 19:33:01.958548 4861 scope.go:117] "RemoveContainer" containerID="fd0a159c2c146400d4580f41b7ac863e4f7fe4e1d0fac7f0b219dc9bba5fbf48" Mar 10 19:33:01 crc kubenswrapper[4861]: E0310 19:33:01.959311 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qttbr_openshift-machine-config-operator(771189c2-452d-4204-a0b7-abfe9ba62bd0)\"" pod="openshift-machine-config-operator/machine-config-daemon-qttbr" podUID="771189c2-452d-4204-a0b7-abfe9ba62bd0" Mar 10 19:33:13 crc kubenswrapper[4861]: I0310 19:33:13.958786 4861 scope.go:117] "RemoveContainer" containerID="fd0a159c2c146400d4580f41b7ac863e4f7fe4e1d0fac7f0b219dc9bba5fbf48" Mar 10 19:33:13 crc kubenswrapper[4861]: E0310 19:33:13.959808 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qttbr_openshift-machine-config-operator(771189c2-452d-4204-a0b7-abfe9ba62bd0)\"" pod="openshift-machine-config-operator/machine-config-daemon-qttbr" podUID="771189c2-452d-4204-a0b7-abfe9ba62bd0" Mar 10 19:33:26 crc kubenswrapper[4861]: I0310 19:33:26.966851 4861 scope.go:117] 
"RemoveContainer" containerID="fd0a159c2c146400d4580f41b7ac863e4f7fe4e1d0fac7f0b219dc9bba5fbf48" Mar 10 19:33:26 crc kubenswrapper[4861]: E0310 19:33:26.967807 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qttbr_openshift-machine-config-operator(771189c2-452d-4204-a0b7-abfe9ba62bd0)\"" pod="openshift-machine-config-operator/machine-config-daemon-qttbr" podUID="771189c2-452d-4204-a0b7-abfe9ba62bd0" Mar 10 19:33:41 crc kubenswrapper[4861]: I0310 19:33:41.958840 4861 scope.go:117] "RemoveContainer" containerID="fd0a159c2c146400d4580f41b7ac863e4f7fe4e1d0fac7f0b219dc9bba5fbf48" Mar 10 19:33:41 crc kubenswrapper[4861]: E0310 19:33:41.959917 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qttbr_openshift-machine-config-operator(771189c2-452d-4204-a0b7-abfe9ba62bd0)\"" pod="openshift-machine-config-operator/machine-config-daemon-qttbr" podUID="771189c2-452d-4204-a0b7-abfe9ba62bd0" Mar 10 19:33:52 crc kubenswrapper[4861]: I0310 19:33:52.960166 4861 scope.go:117] "RemoveContainer" containerID="fd0a159c2c146400d4580f41b7ac863e4f7fe4e1d0fac7f0b219dc9bba5fbf48" Mar 10 19:33:52 crc kubenswrapper[4861]: E0310 19:33:52.961207 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qttbr_openshift-machine-config-operator(771189c2-452d-4204-a0b7-abfe9ba62bd0)\"" pod="openshift-machine-config-operator/machine-config-daemon-qttbr" podUID="771189c2-452d-4204-a0b7-abfe9ba62bd0" Mar 10 19:34:00 crc kubenswrapper[4861]: I0310 19:34:00.169253 
4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29552854-v8cc5"] Mar 10 19:34:00 crc kubenswrapper[4861]: E0310 19:34:00.170270 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="65e1294d-53ac-4c81-bdeb-2ffe10009b37" containerName="oc" Mar 10 19:34:00 crc kubenswrapper[4861]: I0310 19:34:00.170294 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="65e1294d-53ac-4c81-bdeb-2ffe10009b37" containerName="oc" Mar 10 19:34:00 crc kubenswrapper[4861]: I0310 19:34:00.170524 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="65e1294d-53ac-4c81-bdeb-2ffe10009b37" containerName="oc" Mar 10 19:34:00 crc kubenswrapper[4861]: I0310 19:34:00.171326 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29552854-v8cc5" Mar 10 19:34:00 crc kubenswrapper[4861]: I0310 19:34:00.174149 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 10 19:34:00 crc kubenswrapper[4861]: I0310 19:34:00.175184 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-gfbj2" Mar 10 19:34:00 crc kubenswrapper[4861]: I0310 19:34:00.175532 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 10 19:34:00 crc kubenswrapper[4861]: I0310 19:34:00.180650 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29552854-v8cc5"] Mar 10 19:34:00 crc kubenswrapper[4861]: I0310 19:34:00.182083 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6wqzb\" (UniqueName: \"kubernetes.io/projected/0453adaa-794c-4700-b8eb-144ec393fe00-kube-api-access-6wqzb\") pod \"auto-csr-approver-29552854-v8cc5\" (UID: \"0453adaa-794c-4700-b8eb-144ec393fe00\") " 
pod="openshift-infra/auto-csr-approver-29552854-v8cc5" Mar 10 19:34:00 crc kubenswrapper[4861]: I0310 19:34:00.282888 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6wqzb\" (UniqueName: \"kubernetes.io/projected/0453adaa-794c-4700-b8eb-144ec393fe00-kube-api-access-6wqzb\") pod \"auto-csr-approver-29552854-v8cc5\" (UID: \"0453adaa-794c-4700-b8eb-144ec393fe00\") " pod="openshift-infra/auto-csr-approver-29552854-v8cc5" Mar 10 19:34:00 crc kubenswrapper[4861]: I0310 19:34:00.315346 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6wqzb\" (UniqueName: \"kubernetes.io/projected/0453adaa-794c-4700-b8eb-144ec393fe00-kube-api-access-6wqzb\") pod \"auto-csr-approver-29552854-v8cc5\" (UID: \"0453adaa-794c-4700-b8eb-144ec393fe00\") " pod="openshift-infra/auto-csr-approver-29552854-v8cc5" Mar 10 19:34:00 crc kubenswrapper[4861]: I0310 19:34:00.534017 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29552854-v8cc5" Mar 10 19:34:00 crc kubenswrapper[4861]: I0310 19:34:00.844008 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29552854-v8cc5"] Mar 10 19:34:01 crc kubenswrapper[4861]: I0310 19:34:01.510999 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29552854-v8cc5" event={"ID":"0453adaa-794c-4700-b8eb-144ec393fe00","Type":"ContainerStarted","Data":"95ff82c02c4fed1e4874d018a3f4adee4b9ab8cc84431333f3f1865fe5bc5bb8"} Mar 10 19:34:02 crc kubenswrapper[4861]: I0310 19:34:02.522438 4861 generic.go:334] "Generic (PLEG): container finished" podID="0453adaa-794c-4700-b8eb-144ec393fe00" containerID="d0c75f1c113bc69335e4db8a6331d2f2cadfbfb64eb2648e94106de23fa61915" exitCode=0 Mar 10 19:34:02 crc kubenswrapper[4861]: I0310 19:34:02.522561 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-infra/auto-csr-approver-29552854-v8cc5" event={"ID":"0453adaa-794c-4700-b8eb-144ec393fe00","Type":"ContainerDied","Data":"d0c75f1c113bc69335e4db8a6331d2f2cadfbfb64eb2648e94106de23fa61915"} Mar 10 19:34:03 crc kubenswrapper[4861]: I0310 19:34:03.909551 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29552854-v8cc5" Mar 10 19:34:04 crc kubenswrapper[4861]: I0310 19:34:04.049424 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6wqzb\" (UniqueName: \"kubernetes.io/projected/0453adaa-794c-4700-b8eb-144ec393fe00-kube-api-access-6wqzb\") pod \"0453adaa-794c-4700-b8eb-144ec393fe00\" (UID: \"0453adaa-794c-4700-b8eb-144ec393fe00\") " Mar 10 19:34:04 crc kubenswrapper[4861]: I0310 19:34:04.062123 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0453adaa-794c-4700-b8eb-144ec393fe00-kube-api-access-6wqzb" (OuterVolumeSpecName: "kube-api-access-6wqzb") pod "0453adaa-794c-4700-b8eb-144ec393fe00" (UID: "0453adaa-794c-4700-b8eb-144ec393fe00"). InnerVolumeSpecName "kube-api-access-6wqzb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 10 19:34:04 crc kubenswrapper[4861]: I0310 19:34:04.151887 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6wqzb\" (UniqueName: \"kubernetes.io/projected/0453adaa-794c-4700-b8eb-144ec393fe00-kube-api-access-6wqzb\") on node \"crc\" DevicePath \"\"" Mar 10 19:34:04 crc kubenswrapper[4861]: I0310 19:34:04.545784 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29552854-v8cc5" event={"ID":"0453adaa-794c-4700-b8eb-144ec393fe00","Type":"ContainerDied","Data":"95ff82c02c4fed1e4874d018a3f4adee4b9ab8cc84431333f3f1865fe5bc5bb8"} Mar 10 19:34:04 crc kubenswrapper[4861]: I0310 19:34:04.545843 4861 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="95ff82c02c4fed1e4874d018a3f4adee4b9ab8cc84431333f3f1865fe5bc5bb8" Mar 10 19:34:04 crc kubenswrapper[4861]: I0310 19:34:04.545926 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29552854-v8cc5" Mar 10 19:34:04 crc kubenswrapper[4861]: I0310 19:34:04.999345 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29552848-2hj89"] Mar 10 19:34:05 crc kubenswrapper[4861]: I0310 19:34:05.009860 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29552848-2hj89"] Mar 10 19:34:06 crc kubenswrapper[4861]: I0310 19:34:06.969812 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6f01252a-e9d2-4f92-a489-60509ea2cbfb" path="/var/lib/kubelet/pods/6f01252a-e9d2-4f92-a489-60509ea2cbfb/volumes" Mar 10 19:34:07 crc kubenswrapper[4861]: I0310 19:34:07.959117 4861 scope.go:117] "RemoveContainer" containerID="fd0a159c2c146400d4580f41b7ac863e4f7fe4e1d0fac7f0b219dc9bba5fbf48" Mar 10 19:34:07 crc kubenswrapper[4861]: E0310 19:34:07.959530 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qttbr_openshift-machine-config-operator(771189c2-452d-4204-a0b7-abfe9ba62bd0)\"" pod="openshift-machine-config-operator/machine-config-daemon-qttbr" podUID="771189c2-452d-4204-a0b7-abfe9ba62bd0" Mar 10 19:34:20 crc kubenswrapper[4861]: I0310 19:34:20.958233 4861 scope.go:117] "RemoveContainer" containerID="fd0a159c2c146400d4580f41b7ac863e4f7fe4e1d0fac7f0b219dc9bba5fbf48" Mar 10 19:34:20 crc kubenswrapper[4861]: E0310 19:34:20.959158 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qttbr_openshift-machine-config-operator(771189c2-452d-4204-a0b7-abfe9ba62bd0)\"" pod="openshift-machine-config-operator/machine-config-daemon-qttbr" podUID="771189c2-452d-4204-a0b7-abfe9ba62bd0" Mar 10 19:34:22 crc kubenswrapper[4861]: I0310 19:34:22.650841 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-9fhh8"] Mar 10 19:34:22 crc kubenswrapper[4861]: E0310 19:34:22.651160 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0453adaa-794c-4700-b8eb-144ec393fe00" containerName="oc" Mar 10 19:34:22 crc kubenswrapper[4861]: I0310 19:34:22.651175 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="0453adaa-794c-4700-b8eb-144ec393fe00" containerName="oc" Mar 10 19:34:22 crc kubenswrapper[4861]: I0310 19:34:22.651347 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="0453adaa-794c-4700-b8eb-144ec393fe00" containerName="oc" Mar 10 19:34:22 crc kubenswrapper[4861]: I0310 19:34:22.652521 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-9fhh8" Mar 10 19:34:22 crc kubenswrapper[4861]: I0310 19:34:22.674545 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-9fhh8"] Mar 10 19:34:22 crc kubenswrapper[4861]: I0310 19:34:22.739467 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/022d8016-dfa5-401c-8b42-4dbf2fd7a6ac-utilities\") pod \"redhat-operators-9fhh8\" (UID: \"022d8016-dfa5-401c-8b42-4dbf2fd7a6ac\") " pod="openshift-marketplace/redhat-operators-9fhh8" Mar 10 19:34:22 crc kubenswrapper[4861]: I0310 19:34:22.739559 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/022d8016-dfa5-401c-8b42-4dbf2fd7a6ac-catalog-content\") pod \"redhat-operators-9fhh8\" (UID: \"022d8016-dfa5-401c-8b42-4dbf2fd7a6ac\") " pod="openshift-marketplace/redhat-operators-9fhh8" Mar 10 19:34:22 crc kubenswrapper[4861]: I0310 19:34:22.739641 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wm7r8\" (UniqueName: \"kubernetes.io/projected/022d8016-dfa5-401c-8b42-4dbf2fd7a6ac-kube-api-access-wm7r8\") pod \"redhat-operators-9fhh8\" (UID: \"022d8016-dfa5-401c-8b42-4dbf2fd7a6ac\") " pod="openshift-marketplace/redhat-operators-9fhh8" Mar 10 19:34:22 crc kubenswrapper[4861]: I0310 19:34:22.841321 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/022d8016-dfa5-401c-8b42-4dbf2fd7a6ac-catalog-content\") pod \"redhat-operators-9fhh8\" (UID: \"022d8016-dfa5-401c-8b42-4dbf2fd7a6ac\") " pod="openshift-marketplace/redhat-operators-9fhh8" Mar 10 19:34:22 crc kubenswrapper[4861]: I0310 19:34:22.841421 4861 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"kube-api-access-wm7r8\" (UniqueName: \"kubernetes.io/projected/022d8016-dfa5-401c-8b42-4dbf2fd7a6ac-kube-api-access-wm7r8\") pod \"redhat-operators-9fhh8\" (UID: \"022d8016-dfa5-401c-8b42-4dbf2fd7a6ac\") " pod="openshift-marketplace/redhat-operators-9fhh8" Mar 10 19:34:22 crc kubenswrapper[4861]: I0310 19:34:22.841462 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/022d8016-dfa5-401c-8b42-4dbf2fd7a6ac-utilities\") pod \"redhat-operators-9fhh8\" (UID: \"022d8016-dfa5-401c-8b42-4dbf2fd7a6ac\") " pod="openshift-marketplace/redhat-operators-9fhh8" Mar 10 19:34:22 crc kubenswrapper[4861]: I0310 19:34:22.841990 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/022d8016-dfa5-401c-8b42-4dbf2fd7a6ac-utilities\") pod \"redhat-operators-9fhh8\" (UID: \"022d8016-dfa5-401c-8b42-4dbf2fd7a6ac\") " pod="openshift-marketplace/redhat-operators-9fhh8" Mar 10 19:34:22 crc kubenswrapper[4861]: I0310 19:34:22.842267 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/022d8016-dfa5-401c-8b42-4dbf2fd7a6ac-catalog-content\") pod \"redhat-operators-9fhh8\" (UID: \"022d8016-dfa5-401c-8b42-4dbf2fd7a6ac\") " pod="openshift-marketplace/redhat-operators-9fhh8" Mar 10 19:34:22 crc kubenswrapper[4861]: I0310 19:34:22.864245 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wm7r8\" (UniqueName: \"kubernetes.io/projected/022d8016-dfa5-401c-8b42-4dbf2fd7a6ac-kube-api-access-wm7r8\") pod \"redhat-operators-9fhh8\" (UID: \"022d8016-dfa5-401c-8b42-4dbf2fd7a6ac\") " pod="openshift-marketplace/redhat-operators-9fhh8" Mar 10 19:34:22 crc kubenswrapper[4861]: I0310 19:34:22.979038 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-9fhh8" Mar 10 19:34:23 crc kubenswrapper[4861]: I0310 19:34:23.454472 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-9fhh8"] Mar 10 19:34:23 crc kubenswrapper[4861]: I0310 19:34:23.725124 4861 generic.go:334] "Generic (PLEG): container finished" podID="022d8016-dfa5-401c-8b42-4dbf2fd7a6ac" containerID="be23695f9c2c0fdf8937e1c26a55dc6a6f1cd18510e84be0293688fe726ea67d" exitCode=0 Mar 10 19:34:23 crc kubenswrapper[4861]: I0310 19:34:23.725183 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9fhh8" event={"ID":"022d8016-dfa5-401c-8b42-4dbf2fd7a6ac","Type":"ContainerDied","Data":"be23695f9c2c0fdf8937e1c26a55dc6a6f1cd18510e84be0293688fe726ea67d"} Mar 10 19:34:23 crc kubenswrapper[4861]: I0310 19:34:23.725250 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9fhh8" event={"ID":"022d8016-dfa5-401c-8b42-4dbf2fd7a6ac","Type":"ContainerStarted","Data":"c0fa3fc3c82323f6fc71e47d7583aa9fe298d6b9413530a13391b455164e97c7"} Mar 10 19:34:31 crc kubenswrapper[4861]: I0310 19:34:31.788163 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9fhh8" event={"ID":"022d8016-dfa5-401c-8b42-4dbf2fd7a6ac","Type":"ContainerStarted","Data":"62eeef287ea1d5338f9a36786acaa07e630440104b486ae5e45ee40de3434cc5"} Mar 10 19:34:32 crc kubenswrapper[4861]: I0310 19:34:32.802141 4861 generic.go:334] "Generic (PLEG): container finished" podID="022d8016-dfa5-401c-8b42-4dbf2fd7a6ac" containerID="62eeef287ea1d5338f9a36786acaa07e630440104b486ae5e45ee40de3434cc5" exitCode=0 Mar 10 19:34:32 crc kubenswrapper[4861]: I0310 19:34:32.802207 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9fhh8" 
event={"ID":"022d8016-dfa5-401c-8b42-4dbf2fd7a6ac","Type":"ContainerDied","Data":"62eeef287ea1d5338f9a36786acaa07e630440104b486ae5e45ee40de3434cc5"} Mar 10 19:34:33 crc kubenswrapper[4861]: I0310 19:34:33.815160 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9fhh8" event={"ID":"022d8016-dfa5-401c-8b42-4dbf2fd7a6ac","Type":"ContainerStarted","Data":"59f908c36b4314496d8c9fae7f59e7ef2bc005d479b2c6ed9bfc27e8aa325dec"} Mar 10 19:34:33 crc kubenswrapper[4861]: I0310 19:34:33.847961 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-9fhh8" podStartSLOduration=2.268317735 podStartE2EDuration="11.84793741s" podCreationTimestamp="2026-03-10 19:34:22 +0000 UTC" firstStartedPulling="2026-03-10 19:34:23.726325183 +0000 UTC m=+2807.489761143" lastFinishedPulling="2026-03-10 19:34:33.305944828 +0000 UTC m=+2817.069380818" observedRunningTime="2026-03-10 19:34:33.844744055 +0000 UTC m=+2817.608180075" watchObservedRunningTime="2026-03-10 19:34:33.84793741 +0000 UTC m=+2817.611373400" Mar 10 19:34:35 crc kubenswrapper[4861]: I0310 19:34:35.958544 4861 scope.go:117] "RemoveContainer" containerID="fd0a159c2c146400d4580f41b7ac863e4f7fe4e1d0fac7f0b219dc9bba5fbf48" Mar 10 19:34:36 crc kubenswrapper[4861]: I0310 19:34:36.864997 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qttbr" event={"ID":"771189c2-452d-4204-a0b7-abfe9ba62bd0","Type":"ContainerStarted","Data":"a236211f320443ddcf90cff3d62a90701e5dd6714a7c6df490aa7277b1ed7089"} Mar 10 19:34:42 crc kubenswrapper[4861]: I0310 19:34:42.979856 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-9fhh8" Mar 10 19:34:42 crc kubenswrapper[4861]: I0310 19:34:42.980462 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-9fhh8" Mar 10 
19:34:43 crc kubenswrapper[4861]: I0310 19:34:43.053144 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-9fhh8" Mar 10 19:34:44 crc kubenswrapper[4861]: I0310 19:34:44.001622 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-9fhh8" Mar 10 19:34:44 crc kubenswrapper[4861]: I0310 19:34:44.116125 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-9fhh8"] Mar 10 19:34:44 crc kubenswrapper[4861]: I0310 19:34:44.172614 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-d6bp2"] Mar 10 19:34:44 crc kubenswrapper[4861]: I0310 19:34:44.173014 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-d6bp2" podUID="d044ee30-51eb-432a-8d77-dd87673bd190" containerName="registry-server" containerID="cri-o://23df8f3842c700ca27163ecfb7a047d43867f714ca4ecfd13a1a5f84d3fd567b" gracePeriod=2 Mar 10 19:34:44 crc kubenswrapper[4861]: I0310 19:34:44.936264 4861 generic.go:334] "Generic (PLEG): container finished" podID="d044ee30-51eb-432a-8d77-dd87673bd190" containerID="23df8f3842c700ca27163ecfb7a047d43867f714ca4ecfd13a1a5f84d3fd567b" exitCode=0 Mar 10 19:34:44 crc kubenswrapper[4861]: I0310 19:34:44.936455 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-d6bp2" event={"ID":"d044ee30-51eb-432a-8d77-dd87673bd190","Type":"ContainerDied","Data":"23df8f3842c700ca27163ecfb7a047d43867f714ca4ecfd13a1a5f84d3fd567b"} Mar 10 19:34:45 crc kubenswrapper[4861]: E0310 19:34:45.783884 4861 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 23df8f3842c700ca27163ecfb7a047d43867f714ca4ecfd13a1a5f84d3fd567b is running failed: container process not found" 
containerID="23df8f3842c700ca27163ecfb7a047d43867f714ca4ecfd13a1a5f84d3fd567b" cmd=["grpc_health_probe","-addr=:50051"] Mar 10 19:34:45 crc kubenswrapper[4861]: E0310 19:34:45.784535 4861 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 23df8f3842c700ca27163ecfb7a047d43867f714ca4ecfd13a1a5f84d3fd567b is running failed: container process not found" containerID="23df8f3842c700ca27163ecfb7a047d43867f714ca4ecfd13a1a5f84d3fd567b" cmd=["grpc_health_probe","-addr=:50051"] Mar 10 19:34:45 crc kubenswrapper[4861]: E0310 19:34:45.784869 4861 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 23df8f3842c700ca27163ecfb7a047d43867f714ca4ecfd13a1a5f84d3fd567b is running failed: container process not found" containerID="23df8f3842c700ca27163ecfb7a047d43867f714ca4ecfd13a1a5f84d3fd567b" cmd=["grpc_health_probe","-addr=:50051"] Mar 10 19:34:45 crc kubenswrapper[4861]: E0310 19:34:45.784907 4861 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 23df8f3842c700ca27163ecfb7a047d43867f714ca4ecfd13a1a5f84d3fd567b is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/redhat-operators-d6bp2" podUID="d044ee30-51eb-432a-8d77-dd87673bd190" containerName="registry-server" Mar 10 19:34:46 crc kubenswrapper[4861]: I0310 19:34:46.626038 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-d6bp2" Mar 10 19:34:46 crc kubenswrapper[4861]: I0310 19:34:46.735594 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d044ee30-51eb-432a-8d77-dd87673bd190-catalog-content\") pod \"d044ee30-51eb-432a-8d77-dd87673bd190\" (UID: \"d044ee30-51eb-432a-8d77-dd87673bd190\") " Mar 10 19:34:46 crc kubenswrapper[4861]: I0310 19:34:46.735636 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7kz4g\" (UniqueName: \"kubernetes.io/projected/d044ee30-51eb-432a-8d77-dd87673bd190-kube-api-access-7kz4g\") pod \"d044ee30-51eb-432a-8d77-dd87673bd190\" (UID: \"d044ee30-51eb-432a-8d77-dd87673bd190\") " Mar 10 19:34:46 crc kubenswrapper[4861]: I0310 19:34:46.735740 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d044ee30-51eb-432a-8d77-dd87673bd190-utilities\") pod \"d044ee30-51eb-432a-8d77-dd87673bd190\" (UID: \"d044ee30-51eb-432a-8d77-dd87673bd190\") " Mar 10 19:34:46 crc kubenswrapper[4861]: I0310 19:34:46.736387 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d044ee30-51eb-432a-8d77-dd87673bd190-utilities" (OuterVolumeSpecName: "utilities") pod "d044ee30-51eb-432a-8d77-dd87673bd190" (UID: "d044ee30-51eb-432a-8d77-dd87673bd190"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 10 19:34:46 crc kubenswrapper[4861]: I0310 19:34:46.751972 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d044ee30-51eb-432a-8d77-dd87673bd190-kube-api-access-7kz4g" (OuterVolumeSpecName: "kube-api-access-7kz4g") pod "d044ee30-51eb-432a-8d77-dd87673bd190" (UID: "d044ee30-51eb-432a-8d77-dd87673bd190"). InnerVolumeSpecName "kube-api-access-7kz4g". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 10 19:34:46 crc kubenswrapper[4861]: I0310 19:34:46.837732 4861 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d044ee30-51eb-432a-8d77-dd87673bd190-utilities\") on node \"crc\" DevicePath \"\"" Mar 10 19:34:46 crc kubenswrapper[4861]: I0310 19:34:46.837777 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7kz4g\" (UniqueName: \"kubernetes.io/projected/d044ee30-51eb-432a-8d77-dd87673bd190-kube-api-access-7kz4g\") on node \"crc\" DevicePath \"\"" Mar 10 19:34:46 crc kubenswrapper[4861]: I0310 19:34:46.849596 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d044ee30-51eb-432a-8d77-dd87673bd190-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d044ee30-51eb-432a-8d77-dd87673bd190" (UID: "d044ee30-51eb-432a-8d77-dd87673bd190"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 10 19:34:46 crc kubenswrapper[4861]: I0310 19:34:46.939355 4861 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d044ee30-51eb-432a-8d77-dd87673bd190-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 10 19:34:46 crc kubenswrapper[4861]: I0310 19:34:46.952701 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-d6bp2" event={"ID":"d044ee30-51eb-432a-8d77-dd87673bd190","Type":"ContainerDied","Data":"95df7d91b8f2c169bbfa51fc9d8f7681970030d45e04583ea6dcc029141e6ce8"} Mar 10 19:34:46 crc kubenswrapper[4861]: I0310 19:34:46.952786 4861 scope.go:117] "RemoveContainer" containerID="23df8f3842c700ca27163ecfb7a047d43867f714ca4ecfd13a1a5f84d3fd567b" Mar 10 19:34:46 crc kubenswrapper[4861]: I0310 19:34:46.952829 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-d6bp2" Mar 10 19:34:46 crc kubenswrapper[4861]: I0310 19:34:46.986833 4861 scope.go:117] "RemoveContainer" containerID="1389b8cbba33ad8746f9e1f780b0efff75d017e52bd62d630b34d8611edf6900" Mar 10 19:34:46 crc kubenswrapper[4861]: I0310 19:34:46.995211 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-d6bp2"] Mar 10 19:34:47 crc kubenswrapper[4861]: I0310 19:34:47.002801 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-d6bp2"] Mar 10 19:34:47 crc kubenswrapper[4861]: I0310 19:34:47.024850 4861 scope.go:117] "RemoveContainer" containerID="d97b2be5a378a3e0c1ca1eb5f1cfe4e5b25c48432f194c3fea426cfdbcae2edb" Mar 10 19:34:48 crc kubenswrapper[4861]: I0310 19:34:48.975225 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d044ee30-51eb-432a-8d77-dd87673bd190" path="/var/lib/kubelet/pods/d044ee30-51eb-432a-8d77-dd87673bd190/volumes" Mar 10 19:34:54 crc kubenswrapper[4861]: I0310 19:34:54.369784 4861 scope.go:117] "RemoveContainer" containerID="dd72b4a742e08b86d86d2e0126a7f7e2f1d25cd5be1132983444e2033feb53df" Mar 10 19:36:00 crc kubenswrapper[4861]: I0310 19:36:00.164547 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29552856-vxtbd"] Mar 10 19:36:00 crc kubenswrapper[4861]: E0310 19:36:00.167472 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d044ee30-51eb-432a-8d77-dd87673bd190" containerName="extract-utilities" Mar 10 19:36:00 crc kubenswrapper[4861]: I0310 19:36:00.167541 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="d044ee30-51eb-432a-8d77-dd87673bd190" containerName="extract-utilities" Mar 10 19:36:00 crc kubenswrapper[4861]: E0310 19:36:00.167561 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d044ee30-51eb-432a-8d77-dd87673bd190" containerName="extract-content" Mar 10 19:36:00 crc 
kubenswrapper[4861]: I0310 19:36:00.167579 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="d044ee30-51eb-432a-8d77-dd87673bd190" containerName="extract-content" Mar 10 19:36:00 crc kubenswrapper[4861]: E0310 19:36:00.167625 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d044ee30-51eb-432a-8d77-dd87673bd190" containerName="registry-server" Mar 10 19:36:00 crc kubenswrapper[4861]: I0310 19:36:00.167646 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="d044ee30-51eb-432a-8d77-dd87673bd190" containerName="registry-server" Mar 10 19:36:00 crc kubenswrapper[4861]: I0310 19:36:00.168001 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="d044ee30-51eb-432a-8d77-dd87673bd190" containerName="registry-server" Mar 10 19:36:00 crc kubenswrapper[4861]: I0310 19:36:00.168861 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29552856-vxtbd" Mar 10 19:36:00 crc kubenswrapper[4861]: I0310 19:36:00.174399 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 10 19:36:00 crc kubenswrapper[4861]: I0310 19:36:00.175097 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 10 19:36:00 crc kubenswrapper[4861]: I0310 19:36:00.175190 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-gfbj2" Mar 10 19:36:00 crc kubenswrapper[4861]: I0310 19:36:00.177556 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29552856-vxtbd"] Mar 10 19:36:00 crc kubenswrapper[4861]: I0310 19:36:00.253484 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xwdkg\" (UniqueName: \"kubernetes.io/projected/ccbb80f0-69cb-47d9-ae5d-eb170d52fdb7-kube-api-access-xwdkg\") pod 
\"auto-csr-approver-29552856-vxtbd\" (UID: \"ccbb80f0-69cb-47d9-ae5d-eb170d52fdb7\") " pod="openshift-infra/auto-csr-approver-29552856-vxtbd" Mar 10 19:36:00 crc kubenswrapper[4861]: I0310 19:36:00.355122 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xwdkg\" (UniqueName: \"kubernetes.io/projected/ccbb80f0-69cb-47d9-ae5d-eb170d52fdb7-kube-api-access-xwdkg\") pod \"auto-csr-approver-29552856-vxtbd\" (UID: \"ccbb80f0-69cb-47d9-ae5d-eb170d52fdb7\") " pod="openshift-infra/auto-csr-approver-29552856-vxtbd" Mar 10 19:36:00 crc kubenswrapper[4861]: I0310 19:36:00.389170 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xwdkg\" (UniqueName: \"kubernetes.io/projected/ccbb80f0-69cb-47d9-ae5d-eb170d52fdb7-kube-api-access-xwdkg\") pod \"auto-csr-approver-29552856-vxtbd\" (UID: \"ccbb80f0-69cb-47d9-ae5d-eb170d52fdb7\") " pod="openshift-infra/auto-csr-approver-29552856-vxtbd" Mar 10 19:36:00 crc kubenswrapper[4861]: I0310 19:36:00.530845 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29552856-vxtbd" Mar 10 19:36:01 crc kubenswrapper[4861]: I0310 19:36:01.038818 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29552856-vxtbd"] Mar 10 19:36:01 crc kubenswrapper[4861]: I0310 19:36:01.625942 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29552856-vxtbd" event={"ID":"ccbb80f0-69cb-47d9-ae5d-eb170d52fdb7","Type":"ContainerStarted","Data":"2fd6f1defcdbcbdd2f8eaa45f1a589bc47b2c9181b7d9be4c4b7741f0e49cd21"} Mar 10 19:36:02 crc kubenswrapper[4861]: I0310 19:36:02.631623 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29552856-vxtbd" event={"ID":"ccbb80f0-69cb-47d9-ae5d-eb170d52fdb7","Type":"ContainerStarted","Data":"084910547fce05ca98abcceae04edd76b3367099053ae59fa2f607449eea485f"} Mar 10 19:36:02 crc kubenswrapper[4861]: I0310 19:36:02.657889 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29552856-vxtbd" podStartSLOduration=1.59209881 podStartE2EDuration="2.657872167s" podCreationTimestamp="2026-03-10 19:36:00 +0000 UTC" firstStartedPulling="2026-03-10 19:36:01.054481346 +0000 UTC m=+2904.817917316" lastFinishedPulling="2026-03-10 19:36:02.120254683 +0000 UTC m=+2905.883690673" observedRunningTime="2026-03-10 19:36:02.652493494 +0000 UTC m=+2906.415929454" watchObservedRunningTime="2026-03-10 19:36:02.657872167 +0000 UTC m=+2906.421308127" Mar 10 19:36:03 crc kubenswrapper[4861]: I0310 19:36:03.643532 4861 generic.go:334] "Generic (PLEG): container finished" podID="ccbb80f0-69cb-47d9-ae5d-eb170d52fdb7" containerID="084910547fce05ca98abcceae04edd76b3367099053ae59fa2f607449eea485f" exitCode=0 Mar 10 19:36:03 crc kubenswrapper[4861]: I0310 19:36:03.643600 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29552856-vxtbd" 
event={"ID":"ccbb80f0-69cb-47d9-ae5d-eb170d52fdb7","Type":"ContainerDied","Data":"084910547fce05ca98abcceae04edd76b3367099053ae59fa2f607449eea485f"} Mar 10 19:36:05 crc kubenswrapper[4861]: I0310 19:36:05.058114 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29552856-vxtbd" Mar 10 19:36:05 crc kubenswrapper[4861]: I0310 19:36:05.141780 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xwdkg\" (UniqueName: \"kubernetes.io/projected/ccbb80f0-69cb-47d9-ae5d-eb170d52fdb7-kube-api-access-xwdkg\") pod \"ccbb80f0-69cb-47d9-ae5d-eb170d52fdb7\" (UID: \"ccbb80f0-69cb-47d9-ae5d-eb170d52fdb7\") " Mar 10 19:36:05 crc kubenswrapper[4861]: I0310 19:36:05.149956 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ccbb80f0-69cb-47d9-ae5d-eb170d52fdb7-kube-api-access-xwdkg" (OuterVolumeSpecName: "kube-api-access-xwdkg") pod "ccbb80f0-69cb-47d9-ae5d-eb170d52fdb7" (UID: "ccbb80f0-69cb-47d9-ae5d-eb170d52fdb7"). InnerVolumeSpecName "kube-api-access-xwdkg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 10 19:36:05 crc kubenswrapper[4861]: I0310 19:36:05.243183 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xwdkg\" (UniqueName: \"kubernetes.io/projected/ccbb80f0-69cb-47d9-ae5d-eb170d52fdb7-kube-api-access-xwdkg\") on node \"crc\" DevicePath \"\"" Mar 10 19:36:05 crc kubenswrapper[4861]: I0310 19:36:05.666493 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29552856-vxtbd" event={"ID":"ccbb80f0-69cb-47d9-ae5d-eb170d52fdb7","Type":"ContainerDied","Data":"2fd6f1defcdbcbdd2f8eaa45f1a589bc47b2c9181b7d9be4c4b7741f0e49cd21"} Mar 10 19:36:05 crc kubenswrapper[4861]: I0310 19:36:05.666557 4861 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2fd6f1defcdbcbdd2f8eaa45f1a589bc47b2c9181b7d9be4c4b7741f0e49cd21" Mar 10 19:36:05 crc kubenswrapper[4861]: I0310 19:36:05.666558 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29552856-vxtbd" Mar 10 19:36:05 crc kubenswrapper[4861]: I0310 19:36:05.751040 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29552850-k4hm4"] Mar 10 19:36:05 crc kubenswrapper[4861]: I0310 19:36:05.762760 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29552850-k4hm4"] Mar 10 19:36:05 crc kubenswrapper[4861]: I0310 19:36:05.958822 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-flkk4"] Mar 10 19:36:05 crc kubenswrapper[4861]: E0310 19:36:05.959107 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ccbb80f0-69cb-47d9-ae5d-eb170d52fdb7" containerName="oc" Mar 10 19:36:05 crc kubenswrapper[4861]: I0310 19:36:05.959124 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="ccbb80f0-69cb-47d9-ae5d-eb170d52fdb7" containerName="oc" Mar 10 19:36:05 crc 
kubenswrapper[4861]: I0310 19:36:05.959257 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="ccbb80f0-69cb-47d9-ae5d-eb170d52fdb7" containerName="oc" Mar 10 19:36:05 crc kubenswrapper[4861]: I0310 19:36:05.965315 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-flkk4" Mar 10 19:36:05 crc kubenswrapper[4861]: I0310 19:36:05.977562 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-flkk4"] Mar 10 19:36:06 crc kubenswrapper[4861]: I0310 19:36:06.155855 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tf6lv\" (UniqueName: \"kubernetes.io/projected/86e549e4-c14b-44f1-af46-731c86023b56-kube-api-access-tf6lv\") pod \"certified-operators-flkk4\" (UID: \"86e549e4-c14b-44f1-af46-731c86023b56\") " pod="openshift-marketplace/certified-operators-flkk4" Mar 10 19:36:06 crc kubenswrapper[4861]: I0310 19:36:06.155926 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/86e549e4-c14b-44f1-af46-731c86023b56-utilities\") pod \"certified-operators-flkk4\" (UID: \"86e549e4-c14b-44f1-af46-731c86023b56\") " pod="openshift-marketplace/certified-operators-flkk4" Mar 10 19:36:06 crc kubenswrapper[4861]: I0310 19:36:06.156209 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/86e549e4-c14b-44f1-af46-731c86023b56-catalog-content\") pod \"certified-operators-flkk4\" (UID: \"86e549e4-c14b-44f1-af46-731c86023b56\") " pod="openshift-marketplace/certified-operators-flkk4" Mar 10 19:36:06 crc kubenswrapper[4861]: I0310 19:36:06.257363 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/86e549e4-c14b-44f1-af46-731c86023b56-catalog-content\") pod \"certified-operators-flkk4\" (UID: \"86e549e4-c14b-44f1-af46-731c86023b56\") " pod="openshift-marketplace/certified-operators-flkk4" Mar 10 19:36:06 crc kubenswrapper[4861]: I0310 19:36:06.257697 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tf6lv\" (UniqueName: \"kubernetes.io/projected/86e549e4-c14b-44f1-af46-731c86023b56-kube-api-access-tf6lv\") pod \"certified-operators-flkk4\" (UID: \"86e549e4-c14b-44f1-af46-731c86023b56\") " pod="openshift-marketplace/certified-operators-flkk4" Mar 10 19:36:06 crc kubenswrapper[4861]: I0310 19:36:06.257759 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/86e549e4-c14b-44f1-af46-731c86023b56-utilities\") pod \"certified-operators-flkk4\" (UID: \"86e549e4-c14b-44f1-af46-731c86023b56\") " pod="openshift-marketplace/certified-operators-flkk4" Mar 10 19:36:06 crc kubenswrapper[4861]: I0310 19:36:06.257984 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/86e549e4-c14b-44f1-af46-731c86023b56-catalog-content\") pod \"certified-operators-flkk4\" (UID: \"86e549e4-c14b-44f1-af46-731c86023b56\") " pod="openshift-marketplace/certified-operators-flkk4" Mar 10 19:36:06 crc kubenswrapper[4861]: I0310 19:36:06.258266 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/86e549e4-c14b-44f1-af46-731c86023b56-utilities\") pod \"certified-operators-flkk4\" (UID: \"86e549e4-c14b-44f1-af46-731c86023b56\") " pod="openshift-marketplace/certified-operators-flkk4" Mar 10 19:36:06 crc kubenswrapper[4861]: I0310 19:36:06.288190 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tf6lv\" (UniqueName: 
\"kubernetes.io/projected/86e549e4-c14b-44f1-af46-731c86023b56-kube-api-access-tf6lv\") pod \"certified-operators-flkk4\" (UID: \"86e549e4-c14b-44f1-af46-731c86023b56\") " pod="openshift-marketplace/certified-operators-flkk4" Mar 10 19:36:06 crc kubenswrapper[4861]: I0310 19:36:06.320354 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-flkk4" Mar 10 19:36:06 crc kubenswrapper[4861]: I0310 19:36:06.636256 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-flkk4"] Mar 10 19:36:06 crc kubenswrapper[4861]: W0310 19:36:06.640875 4861 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod86e549e4_c14b_44f1_af46_731c86023b56.slice/crio-d1e1540431ea6c18932863149c6aa73304af529de9414c3c30e41a028d1b3c73 WatchSource:0}: Error finding container d1e1540431ea6c18932863149c6aa73304af529de9414c3c30e41a028d1b3c73: Status 404 returned error can't find the container with id d1e1540431ea6c18932863149c6aa73304af529de9414c3c30e41a028d1b3c73 Mar 10 19:36:06 crc kubenswrapper[4861]: I0310 19:36:06.678489 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-flkk4" event={"ID":"86e549e4-c14b-44f1-af46-731c86023b56","Type":"ContainerStarted","Data":"d1e1540431ea6c18932863149c6aa73304af529de9414c3c30e41a028d1b3c73"} Mar 10 19:36:06 crc kubenswrapper[4861]: I0310 19:36:06.967541 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1214934e-2f2b-4a1e-be24-448ed78bbb1a" path="/var/lib/kubelet/pods/1214934e-2f2b-4a1e-be24-448ed78bbb1a/volumes" Mar 10 19:36:07 crc kubenswrapper[4861]: I0310 19:36:07.690109 4861 generic.go:334] "Generic (PLEG): container finished" podID="86e549e4-c14b-44f1-af46-731c86023b56" containerID="8db0605dda430b63b31e42f7de4a85dce8cea513fe02b0b3725cd5ab17f18c0d" exitCode=0 Mar 10 19:36:07 crc 
kubenswrapper[4861]: I0310 19:36:07.690157 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-flkk4" event={"ID":"86e549e4-c14b-44f1-af46-731c86023b56","Type":"ContainerDied","Data":"8db0605dda430b63b31e42f7de4a85dce8cea513fe02b0b3725cd5ab17f18c0d"} Mar 10 19:36:08 crc kubenswrapper[4861]: I0310 19:36:08.704460 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-flkk4" event={"ID":"86e549e4-c14b-44f1-af46-731c86023b56","Type":"ContainerStarted","Data":"b62502efd8d98fc1dfcf594a70495d23dccd0d472797d932fd74ad78052a460e"} Mar 10 19:36:09 crc kubenswrapper[4861]: I0310 19:36:09.722787 4861 generic.go:334] "Generic (PLEG): container finished" podID="86e549e4-c14b-44f1-af46-731c86023b56" containerID="b62502efd8d98fc1dfcf594a70495d23dccd0d472797d932fd74ad78052a460e" exitCode=0 Mar 10 19:36:09 crc kubenswrapper[4861]: I0310 19:36:09.722876 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-flkk4" event={"ID":"86e549e4-c14b-44f1-af46-731c86023b56","Type":"ContainerDied","Data":"b62502efd8d98fc1dfcf594a70495d23dccd0d472797d932fd74ad78052a460e"} Mar 10 19:36:10 crc kubenswrapper[4861]: I0310 19:36:10.734664 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-flkk4" event={"ID":"86e549e4-c14b-44f1-af46-731c86023b56","Type":"ContainerStarted","Data":"ff01a289362f44cdd328975850078943ba212c4efc7f461e30fe77413e9e840e"} Mar 10 19:36:10 crc kubenswrapper[4861]: I0310 19:36:10.768108 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-flkk4" podStartSLOduration=3.290912798 podStartE2EDuration="5.768085541s" podCreationTimestamp="2026-03-10 19:36:05 +0000 UTC" firstStartedPulling="2026-03-10 19:36:07.692100713 +0000 UTC m=+2911.455536713" lastFinishedPulling="2026-03-10 19:36:10.169273466 +0000 UTC m=+2913.932709456" 
observedRunningTime="2026-03-10 19:36:10.76165232 +0000 UTC m=+2914.525088290" watchObservedRunningTime="2026-03-10 19:36:10.768085541 +0000 UTC m=+2914.531521531" Mar 10 19:36:12 crc kubenswrapper[4861]: I0310 19:36:12.957508 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-7zp7b"] Mar 10 19:36:12 crc kubenswrapper[4861]: I0310 19:36:12.959535 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7zp7b" Mar 10 19:36:12 crc kubenswrapper[4861]: I0310 19:36:12.984447 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-7zp7b"] Mar 10 19:36:12 crc kubenswrapper[4861]: I0310 19:36:12.992368 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3e0b3be4-9cfd-4b8c-b6c4-687b88bb2c49-utilities\") pod \"redhat-marketplace-7zp7b\" (UID: \"3e0b3be4-9cfd-4b8c-b6c4-687b88bb2c49\") " pod="openshift-marketplace/redhat-marketplace-7zp7b" Mar 10 19:36:12 crc kubenswrapper[4861]: I0310 19:36:12.992428 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3e0b3be4-9cfd-4b8c-b6c4-687b88bb2c49-catalog-content\") pod \"redhat-marketplace-7zp7b\" (UID: \"3e0b3be4-9cfd-4b8c-b6c4-687b88bb2c49\") " pod="openshift-marketplace/redhat-marketplace-7zp7b" Mar 10 19:36:12 crc kubenswrapper[4861]: I0310 19:36:12.992461 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nrhrb\" (UniqueName: \"kubernetes.io/projected/3e0b3be4-9cfd-4b8c-b6c4-687b88bb2c49-kube-api-access-nrhrb\") pod \"redhat-marketplace-7zp7b\" (UID: \"3e0b3be4-9cfd-4b8c-b6c4-687b88bb2c49\") " pod="openshift-marketplace/redhat-marketplace-7zp7b" Mar 10 19:36:13 crc kubenswrapper[4861]: I0310 
19:36:13.093207 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3e0b3be4-9cfd-4b8c-b6c4-687b88bb2c49-utilities\") pod \"redhat-marketplace-7zp7b\" (UID: \"3e0b3be4-9cfd-4b8c-b6c4-687b88bb2c49\") " pod="openshift-marketplace/redhat-marketplace-7zp7b" Mar 10 19:36:13 crc kubenswrapper[4861]: I0310 19:36:13.093286 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3e0b3be4-9cfd-4b8c-b6c4-687b88bb2c49-catalog-content\") pod \"redhat-marketplace-7zp7b\" (UID: \"3e0b3be4-9cfd-4b8c-b6c4-687b88bb2c49\") " pod="openshift-marketplace/redhat-marketplace-7zp7b" Mar 10 19:36:13 crc kubenswrapper[4861]: I0310 19:36:13.093322 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nrhrb\" (UniqueName: \"kubernetes.io/projected/3e0b3be4-9cfd-4b8c-b6c4-687b88bb2c49-kube-api-access-nrhrb\") pod \"redhat-marketplace-7zp7b\" (UID: \"3e0b3be4-9cfd-4b8c-b6c4-687b88bb2c49\") " pod="openshift-marketplace/redhat-marketplace-7zp7b" Mar 10 19:36:13 crc kubenswrapper[4861]: I0310 19:36:13.093838 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3e0b3be4-9cfd-4b8c-b6c4-687b88bb2c49-utilities\") pod \"redhat-marketplace-7zp7b\" (UID: \"3e0b3be4-9cfd-4b8c-b6c4-687b88bb2c49\") " pod="openshift-marketplace/redhat-marketplace-7zp7b" Mar 10 19:36:13 crc kubenswrapper[4861]: I0310 19:36:13.093860 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3e0b3be4-9cfd-4b8c-b6c4-687b88bb2c49-catalog-content\") pod \"redhat-marketplace-7zp7b\" (UID: \"3e0b3be4-9cfd-4b8c-b6c4-687b88bb2c49\") " pod="openshift-marketplace/redhat-marketplace-7zp7b" Mar 10 19:36:13 crc kubenswrapper[4861]: I0310 19:36:13.117030 4861 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-nrhrb\" (UniqueName: \"kubernetes.io/projected/3e0b3be4-9cfd-4b8c-b6c4-687b88bb2c49-kube-api-access-nrhrb\") pod \"redhat-marketplace-7zp7b\" (UID: \"3e0b3be4-9cfd-4b8c-b6c4-687b88bb2c49\") " pod="openshift-marketplace/redhat-marketplace-7zp7b" Mar 10 19:36:13 crc kubenswrapper[4861]: I0310 19:36:13.284696 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7zp7b" Mar 10 19:36:13 crc kubenswrapper[4861]: I0310 19:36:13.849994 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-7zp7b"] Mar 10 19:36:13 crc kubenswrapper[4861]: W0310 19:36:13.854514 4861 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3e0b3be4_9cfd_4b8c_b6c4_687b88bb2c49.slice/crio-4041bccd03562070e445ec65cbe87512f4e4c8ce2960675a33ccf4d58917d1eb WatchSource:0}: Error finding container 4041bccd03562070e445ec65cbe87512f4e4c8ce2960675a33ccf4d58917d1eb: Status 404 returned error can't find the container with id 4041bccd03562070e445ec65cbe87512f4e4c8ce2960675a33ccf4d58917d1eb Mar 10 19:36:14 crc kubenswrapper[4861]: I0310 19:36:14.794603 4861 generic.go:334] "Generic (PLEG): container finished" podID="3e0b3be4-9cfd-4b8c-b6c4-687b88bb2c49" containerID="c96922fc08fc2b71c8a9b87d8a69fe262a0ba5dd5b0004b7018fb78c376a9509" exitCode=0 Mar 10 19:36:14 crc kubenswrapper[4861]: I0310 19:36:14.795045 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7zp7b" event={"ID":"3e0b3be4-9cfd-4b8c-b6c4-687b88bb2c49","Type":"ContainerDied","Data":"c96922fc08fc2b71c8a9b87d8a69fe262a0ba5dd5b0004b7018fb78c376a9509"} Mar 10 19:36:14 crc kubenswrapper[4861]: I0310 19:36:14.795088 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7zp7b" 
event={"ID":"3e0b3be4-9cfd-4b8c-b6c4-687b88bb2c49","Type":"ContainerStarted","Data":"4041bccd03562070e445ec65cbe87512f4e4c8ce2960675a33ccf4d58917d1eb"} Mar 10 19:36:15 crc kubenswrapper[4861]: I0310 19:36:15.809354 4861 generic.go:334] "Generic (PLEG): container finished" podID="3e0b3be4-9cfd-4b8c-b6c4-687b88bb2c49" containerID="40da735267075638483b7a26e0b4784e5689cede454e63925e85d0d9ad5acf9c" exitCode=0 Mar 10 19:36:15 crc kubenswrapper[4861]: I0310 19:36:15.809415 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7zp7b" event={"ID":"3e0b3be4-9cfd-4b8c-b6c4-687b88bb2c49","Type":"ContainerDied","Data":"40da735267075638483b7a26e0b4784e5689cede454e63925e85d0d9ad5acf9c"} Mar 10 19:36:16 crc kubenswrapper[4861]: I0310 19:36:16.344226 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-flkk4" Mar 10 19:36:16 crc kubenswrapper[4861]: I0310 19:36:16.345002 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-flkk4" Mar 10 19:36:16 crc kubenswrapper[4861]: I0310 19:36:16.419926 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-flkk4" Mar 10 19:36:16 crc kubenswrapper[4861]: I0310 19:36:16.835673 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7zp7b" event={"ID":"3e0b3be4-9cfd-4b8c-b6c4-687b88bb2c49","Type":"ContainerStarted","Data":"c2c2d90724dfc59a4b2e3e141ac1cfa5f3c615febbce1210e9f1f48dd9e3c847"} Mar 10 19:36:16 crc kubenswrapper[4861]: I0310 19:36:16.874081 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-7zp7b" podStartSLOduration=3.243575309 podStartE2EDuration="4.874048803s" podCreationTimestamp="2026-03-10 19:36:12 +0000 UTC" firstStartedPulling="2026-03-10 19:36:14.798393157 +0000 UTC 
m=+2918.561829147" lastFinishedPulling="2026-03-10 19:36:16.428866671 +0000 UTC m=+2920.192302641" observedRunningTime="2026-03-10 19:36:16.867079356 +0000 UTC m=+2920.630515336" watchObservedRunningTime="2026-03-10 19:36:16.874048803 +0000 UTC m=+2920.637484793" Mar 10 19:36:16 crc kubenswrapper[4861]: I0310 19:36:16.920246 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-flkk4" Mar 10 19:36:18 crc kubenswrapper[4861]: I0310 19:36:18.716034 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-flkk4"] Mar 10 19:36:19 crc kubenswrapper[4861]: I0310 19:36:19.857623 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-flkk4" podUID="86e549e4-c14b-44f1-af46-731c86023b56" containerName="registry-server" containerID="cri-o://ff01a289362f44cdd328975850078943ba212c4efc7f461e30fe77413e9e840e" gracePeriod=2 Mar 10 19:36:20 crc kubenswrapper[4861]: I0310 19:36:20.350997 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-flkk4" Mar 10 19:36:20 crc kubenswrapper[4861]: I0310 19:36:20.505574 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/86e549e4-c14b-44f1-af46-731c86023b56-catalog-content\") pod \"86e549e4-c14b-44f1-af46-731c86023b56\" (UID: \"86e549e4-c14b-44f1-af46-731c86023b56\") " Mar 10 19:36:20 crc kubenswrapper[4861]: I0310 19:36:20.505696 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tf6lv\" (UniqueName: \"kubernetes.io/projected/86e549e4-c14b-44f1-af46-731c86023b56-kube-api-access-tf6lv\") pod \"86e549e4-c14b-44f1-af46-731c86023b56\" (UID: \"86e549e4-c14b-44f1-af46-731c86023b56\") " Mar 10 19:36:20 crc kubenswrapper[4861]: I0310 19:36:20.505869 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/86e549e4-c14b-44f1-af46-731c86023b56-utilities\") pod \"86e549e4-c14b-44f1-af46-731c86023b56\" (UID: \"86e549e4-c14b-44f1-af46-731c86023b56\") " Mar 10 19:36:20 crc kubenswrapper[4861]: I0310 19:36:20.507464 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/86e549e4-c14b-44f1-af46-731c86023b56-utilities" (OuterVolumeSpecName: "utilities") pod "86e549e4-c14b-44f1-af46-731c86023b56" (UID: "86e549e4-c14b-44f1-af46-731c86023b56"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 10 19:36:20 crc kubenswrapper[4861]: I0310 19:36:20.519839 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/86e549e4-c14b-44f1-af46-731c86023b56-kube-api-access-tf6lv" (OuterVolumeSpecName: "kube-api-access-tf6lv") pod "86e549e4-c14b-44f1-af46-731c86023b56" (UID: "86e549e4-c14b-44f1-af46-731c86023b56"). InnerVolumeSpecName "kube-api-access-tf6lv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 10 19:36:20 crc kubenswrapper[4861]: I0310 19:36:20.606506 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/86e549e4-c14b-44f1-af46-731c86023b56-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "86e549e4-c14b-44f1-af46-731c86023b56" (UID: "86e549e4-c14b-44f1-af46-731c86023b56"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 10 19:36:20 crc kubenswrapper[4861]: I0310 19:36:20.607652 4861 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/86e549e4-c14b-44f1-af46-731c86023b56-utilities\") on node \"crc\" DevicePath \"\"" Mar 10 19:36:20 crc kubenswrapper[4861]: I0310 19:36:20.607702 4861 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/86e549e4-c14b-44f1-af46-731c86023b56-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 10 19:36:20 crc kubenswrapper[4861]: I0310 19:36:20.607877 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tf6lv\" (UniqueName: \"kubernetes.io/projected/86e549e4-c14b-44f1-af46-731c86023b56-kube-api-access-tf6lv\") on node \"crc\" DevicePath \"\"" Mar 10 19:36:20 crc kubenswrapper[4861]: I0310 19:36:20.869071 4861 generic.go:334] "Generic (PLEG): container finished" podID="86e549e4-c14b-44f1-af46-731c86023b56" containerID="ff01a289362f44cdd328975850078943ba212c4efc7f461e30fe77413e9e840e" exitCode=0 Mar 10 19:36:20 crc kubenswrapper[4861]: I0310 19:36:20.869133 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-flkk4" event={"ID":"86e549e4-c14b-44f1-af46-731c86023b56","Type":"ContainerDied","Data":"ff01a289362f44cdd328975850078943ba212c4efc7f461e30fe77413e9e840e"} Mar 10 19:36:20 crc kubenswrapper[4861]: I0310 19:36:20.869172 4861 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/certified-operators-flkk4" event={"ID":"86e549e4-c14b-44f1-af46-731c86023b56","Type":"ContainerDied","Data":"d1e1540431ea6c18932863149c6aa73304af529de9414c3c30e41a028d1b3c73"} Mar 10 19:36:20 crc kubenswrapper[4861]: I0310 19:36:20.869199 4861 scope.go:117] "RemoveContainer" containerID="ff01a289362f44cdd328975850078943ba212c4efc7f461e30fe77413e9e840e" Mar 10 19:36:20 crc kubenswrapper[4861]: I0310 19:36:20.869372 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-flkk4" Mar 10 19:36:20 crc kubenswrapper[4861]: I0310 19:36:20.919194 4861 scope.go:117] "RemoveContainer" containerID="b62502efd8d98fc1dfcf594a70495d23dccd0d472797d932fd74ad78052a460e" Mar 10 19:36:20 crc kubenswrapper[4861]: I0310 19:36:20.919389 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-flkk4"] Mar 10 19:36:20 crc kubenswrapper[4861]: I0310 19:36:20.926768 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-flkk4"] Mar 10 19:36:20 crc kubenswrapper[4861]: I0310 19:36:20.953238 4861 scope.go:117] "RemoveContainer" containerID="8db0605dda430b63b31e42f7de4a85dce8cea513fe02b0b3725cd5ab17f18c0d" Mar 10 19:36:20 crc kubenswrapper[4861]: I0310 19:36:20.976896 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="86e549e4-c14b-44f1-af46-731c86023b56" path="/var/lib/kubelet/pods/86e549e4-c14b-44f1-af46-731c86023b56/volumes" Mar 10 19:36:20 crc kubenswrapper[4861]: I0310 19:36:20.987464 4861 scope.go:117] "RemoveContainer" containerID="ff01a289362f44cdd328975850078943ba212c4efc7f461e30fe77413e9e840e" Mar 10 19:36:20 crc kubenswrapper[4861]: E0310 19:36:20.988267 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ff01a289362f44cdd328975850078943ba212c4efc7f461e30fe77413e9e840e\": container with ID 
starting with ff01a289362f44cdd328975850078943ba212c4efc7f461e30fe77413e9e840e not found: ID does not exist" containerID="ff01a289362f44cdd328975850078943ba212c4efc7f461e30fe77413e9e840e" Mar 10 19:36:20 crc kubenswrapper[4861]: I0310 19:36:20.988330 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ff01a289362f44cdd328975850078943ba212c4efc7f461e30fe77413e9e840e"} err="failed to get container status \"ff01a289362f44cdd328975850078943ba212c4efc7f461e30fe77413e9e840e\": rpc error: code = NotFound desc = could not find container \"ff01a289362f44cdd328975850078943ba212c4efc7f461e30fe77413e9e840e\": container with ID starting with ff01a289362f44cdd328975850078943ba212c4efc7f461e30fe77413e9e840e not found: ID does not exist" Mar 10 19:36:20 crc kubenswrapper[4861]: I0310 19:36:20.988372 4861 scope.go:117] "RemoveContainer" containerID="b62502efd8d98fc1dfcf594a70495d23dccd0d472797d932fd74ad78052a460e" Mar 10 19:36:20 crc kubenswrapper[4861]: E0310 19:36:20.988854 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b62502efd8d98fc1dfcf594a70495d23dccd0d472797d932fd74ad78052a460e\": container with ID starting with b62502efd8d98fc1dfcf594a70495d23dccd0d472797d932fd74ad78052a460e not found: ID does not exist" containerID="b62502efd8d98fc1dfcf594a70495d23dccd0d472797d932fd74ad78052a460e" Mar 10 19:36:20 crc kubenswrapper[4861]: I0310 19:36:20.988890 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b62502efd8d98fc1dfcf594a70495d23dccd0d472797d932fd74ad78052a460e"} err="failed to get container status \"b62502efd8d98fc1dfcf594a70495d23dccd0d472797d932fd74ad78052a460e\": rpc error: code = NotFound desc = could not find container \"b62502efd8d98fc1dfcf594a70495d23dccd0d472797d932fd74ad78052a460e\": container with ID starting with b62502efd8d98fc1dfcf594a70495d23dccd0d472797d932fd74ad78052a460e not found: 
ID does not exist" Mar 10 19:36:20 crc kubenswrapper[4861]: I0310 19:36:20.988913 4861 scope.go:117] "RemoveContainer" containerID="8db0605dda430b63b31e42f7de4a85dce8cea513fe02b0b3725cd5ab17f18c0d" Mar 10 19:36:20 crc kubenswrapper[4861]: E0310 19:36:20.989194 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8db0605dda430b63b31e42f7de4a85dce8cea513fe02b0b3725cd5ab17f18c0d\": container with ID starting with 8db0605dda430b63b31e42f7de4a85dce8cea513fe02b0b3725cd5ab17f18c0d not found: ID does not exist" containerID="8db0605dda430b63b31e42f7de4a85dce8cea513fe02b0b3725cd5ab17f18c0d" Mar 10 19:36:20 crc kubenswrapper[4861]: I0310 19:36:20.989224 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8db0605dda430b63b31e42f7de4a85dce8cea513fe02b0b3725cd5ab17f18c0d"} err="failed to get container status \"8db0605dda430b63b31e42f7de4a85dce8cea513fe02b0b3725cd5ab17f18c0d\": rpc error: code = NotFound desc = could not find container \"8db0605dda430b63b31e42f7de4a85dce8cea513fe02b0b3725cd5ab17f18c0d\": container with ID starting with 8db0605dda430b63b31e42f7de4a85dce8cea513fe02b0b3725cd5ab17f18c0d not found: ID does not exist" Mar 10 19:36:23 crc kubenswrapper[4861]: I0310 19:36:23.285256 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-7zp7b" Mar 10 19:36:23 crc kubenswrapper[4861]: I0310 19:36:23.285892 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-7zp7b" Mar 10 19:36:23 crc kubenswrapper[4861]: I0310 19:36:23.361836 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-7zp7b" Mar 10 19:36:23 crc kubenswrapper[4861]: I0310 19:36:23.972699 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-marketplace/redhat-marketplace-7zp7b" Mar 10 19:36:24 crc kubenswrapper[4861]: I0310 19:36:24.718494 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-7zp7b"] Mar 10 19:36:25 crc kubenswrapper[4861]: I0310 19:36:25.919068 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-7zp7b" podUID="3e0b3be4-9cfd-4b8c-b6c4-687b88bb2c49" containerName="registry-server" containerID="cri-o://c2c2d90724dfc59a4b2e3e141ac1cfa5f3c615febbce1210e9f1f48dd9e3c847" gracePeriod=2 Mar 10 19:36:26 crc kubenswrapper[4861]: I0310 19:36:26.414055 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7zp7b" Mar 10 19:36:26 crc kubenswrapper[4861]: I0310 19:36:26.512505 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3e0b3be4-9cfd-4b8c-b6c4-687b88bb2c49-catalog-content\") pod \"3e0b3be4-9cfd-4b8c-b6c4-687b88bb2c49\" (UID: \"3e0b3be4-9cfd-4b8c-b6c4-687b88bb2c49\") " Mar 10 19:36:26 crc kubenswrapper[4861]: I0310 19:36:26.512612 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nrhrb\" (UniqueName: \"kubernetes.io/projected/3e0b3be4-9cfd-4b8c-b6c4-687b88bb2c49-kube-api-access-nrhrb\") pod \"3e0b3be4-9cfd-4b8c-b6c4-687b88bb2c49\" (UID: \"3e0b3be4-9cfd-4b8c-b6c4-687b88bb2c49\") " Mar 10 19:36:26 crc kubenswrapper[4861]: I0310 19:36:26.512838 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3e0b3be4-9cfd-4b8c-b6c4-687b88bb2c49-utilities\") pod \"3e0b3be4-9cfd-4b8c-b6c4-687b88bb2c49\" (UID: \"3e0b3be4-9cfd-4b8c-b6c4-687b88bb2c49\") " Mar 10 19:36:26 crc kubenswrapper[4861]: I0310 19:36:26.514184 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for 
volume "kubernetes.io/empty-dir/3e0b3be4-9cfd-4b8c-b6c4-687b88bb2c49-utilities" (OuterVolumeSpecName: "utilities") pod "3e0b3be4-9cfd-4b8c-b6c4-687b88bb2c49" (UID: "3e0b3be4-9cfd-4b8c-b6c4-687b88bb2c49"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 10 19:36:26 crc kubenswrapper[4861]: I0310 19:36:26.519557 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3e0b3be4-9cfd-4b8c-b6c4-687b88bb2c49-kube-api-access-nrhrb" (OuterVolumeSpecName: "kube-api-access-nrhrb") pod "3e0b3be4-9cfd-4b8c-b6c4-687b88bb2c49" (UID: "3e0b3be4-9cfd-4b8c-b6c4-687b88bb2c49"). InnerVolumeSpecName "kube-api-access-nrhrb". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 10 19:36:26 crc kubenswrapper[4861]: I0310 19:36:26.553625 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3e0b3be4-9cfd-4b8c-b6c4-687b88bb2c49-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3e0b3be4-9cfd-4b8c-b6c4-687b88bb2c49" (UID: "3e0b3be4-9cfd-4b8c-b6c4-687b88bb2c49"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 10 19:36:26 crc kubenswrapper[4861]: I0310 19:36:26.614291 4861 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3e0b3be4-9cfd-4b8c-b6c4-687b88bb2c49-utilities\") on node \"crc\" DevicePath \"\"" Mar 10 19:36:26 crc kubenswrapper[4861]: I0310 19:36:26.614374 4861 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3e0b3be4-9cfd-4b8c-b6c4-687b88bb2c49-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 10 19:36:26 crc kubenswrapper[4861]: I0310 19:36:26.614397 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nrhrb\" (UniqueName: \"kubernetes.io/projected/3e0b3be4-9cfd-4b8c-b6c4-687b88bb2c49-kube-api-access-nrhrb\") on node \"crc\" DevicePath \"\"" Mar 10 19:36:26 crc kubenswrapper[4861]: I0310 19:36:26.937086 4861 generic.go:334] "Generic (PLEG): container finished" podID="3e0b3be4-9cfd-4b8c-b6c4-687b88bb2c49" containerID="c2c2d90724dfc59a4b2e3e141ac1cfa5f3c615febbce1210e9f1f48dd9e3c847" exitCode=0 Mar 10 19:36:26 crc kubenswrapper[4861]: I0310 19:36:26.937162 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7zp7b" event={"ID":"3e0b3be4-9cfd-4b8c-b6c4-687b88bb2c49","Type":"ContainerDied","Data":"c2c2d90724dfc59a4b2e3e141ac1cfa5f3c615febbce1210e9f1f48dd9e3c847"} Mar 10 19:36:26 crc kubenswrapper[4861]: I0310 19:36:26.937177 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7zp7b" Mar 10 19:36:26 crc kubenswrapper[4861]: I0310 19:36:26.937212 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7zp7b" event={"ID":"3e0b3be4-9cfd-4b8c-b6c4-687b88bb2c49","Type":"ContainerDied","Data":"4041bccd03562070e445ec65cbe87512f4e4c8ce2960675a33ccf4d58917d1eb"} Mar 10 19:36:26 crc kubenswrapper[4861]: I0310 19:36:26.937244 4861 scope.go:117] "RemoveContainer" containerID="c2c2d90724dfc59a4b2e3e141ac1cfa5f3c615febbce1210e9f1f48dd9e3c847" Mar 10 19:36:26 crc kubenswrapper[4861]: I0310 19:36:26.981581 4861 scope.go:117] "RemoveContainer" containerID="40da735267075638483b7a26e0b4784e5689cede454e63925e85d0d9ad5acf9c" Mar 10 19:36:27 crc kubenswrapper[4861]: I0310 19:36:27.007772 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-7zp7b"] Mar 10 19:36:27 crc kubenswrapper[4861]: I0310 19:36:27.019264 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-7zp7b"] Mar 10 19:36:27 crc kubenswrapper[4861]: I0310 19:36:27.031699 4861 scope.go:117] "RemoveContainer" containerID="c96922fc08fc2b71c8a9b87d8a69fe262a0ba5dd5b0004b7018fb78c376a9509" Mar 10 19:36:27 crc kubenswrapper[4861]: I0310 19:36:27.069583 4861 scope.go:117] "RemoveContainer" containerID="c2c2d90724dfc59a4b2e3e141ac1cfa5f3c615febbce1210e9f1f48dd9e3c847" Mar 10 19:36:27 crc kubenswrapper[4861]: E0310 19:36:27.070155 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c2c2d90724dfc59a4b2e3e141ac1cfa5f3c615febbce1210e9f1f48dd9e3c847\": container with ID starting with c2c2d90724dfc59a4b2e3e141ac1cfa5f3c615febbce1210e9f1f48dd9e3c847 not found: ID does not exist" containerID="c2c2d90724dfc59a4b2e3e141ac1cfa5f3c615febbce1210e9f1f48dd9e3c847" Mar 10 19:36:27 crc kubenswrapper[4861]: I0310 19:36:27.070214 4861 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c2c2d90724dfc59a4b2e3e141ac1cfa5f3c615febbce1210e9f1f48dd9e3c847"} err="failed to get container status \"c2c2d90724dfc59a4b2e3e141ac1cfa5f3c615febbce1210e9f1f48dd9e3c847\": rpc error: code = NotFound desc = could not find container \"c2c2d90724dfc59a4b2e3e141ac1cfa5f3c615febbce1210e9f1f48dd9e3c847\": container with ID starting with c2c2d90724dfc59a4b2e3e141ac1cfa5f3c615febbce1210e9f1f48dd9e3c847 not found: ID does not exist" Mar 10 19:36:27 crc kubenswrapper[4861]: I0310 19:36:27.070260 4861 scope.go:117] "RemoveContainer" containerID="40da735267075638483b7a26e0b4784e5689cede454e63925e85d0d9ad5acf9c" Mar 10 19:36:27 crc kubenswrapper[4861]: E0310 19:36:27.070691 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"40da735267075638483b7a26e0b4784e5689cede454e63925e85d0d9ad5acf9c\": container with ID starting with 40da735267075638483b7a26e0b4784e5689cede454e63925e85d0d9ad5acf9c not found: ID does not exist" containerID="40da735267075638483b7a26e0b4784e5689cede454e63925e85d0d9ad5acf9c" Mar 10 19:36:27 crc kubenswrapper[4861]: I0310 19:36:27.070817 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"40da735267075638483b7a26e0b4784e5689cede454e63925e85d0d9ad5acf9c"} err="failed to get container status \"40da735267075638483b7a26e0b4784e5689cede454e63925e85d0d9ad5acf9c\": rpc error: code = NotFound desc = could not find container \"40da735267075638483b7a26e0b4784e5689cede454e63925e85d0d9ad5acf9c\": container with ID starting with 40da735267075638483b7a26e0b4784e5689cede454e63925e85d0d9ad5acf9c not found: ID does not exist" Mar 10 19:36:27 crc kubenswrapper[4861]: I0310 19:36:27.070856 4861 scope.go:117] "RemoveContainer" containerID="c96922fc08fc2b71c8a9b87d8a69fe262a0ba5dd5b0004b7018fb78c376a9509" Mar 10 19:36:27 crc kubenswrapper[4861]: E0310 
19:36:27.072111 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c96922fc08fc2b71c8a9b87d8a69fe262a0ba5dd5b0004b7018fb78c376a9509\": container with ID starting with c96922fc08fc2b71c8a9b87d8a69fe262a0ba5dd5b0004b7018fb78c376a9509 not found: ID does not exist" containerID="c96922fc08fc2b71c8a9b87d8a69fe262a0ba5dd5b0004b7018fb78c376a9509" Mar 10 19:36:27 crc kubenswrapper[4861]: I0310 19:36:27.072176 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c96922fc08fc2b71c8a9b87d8a69fe262a0ba5dd5b0004b7018fb78c376a9509"} err="failed to get container status \"c96922fc08fc2b71c8a9b87d8a69fe262a0ba5dd5b0004b7018fb78c376a9509\": rpc error: code = NotFound desc = could not find container \"c96922fc08fc2b71c8a9b87d8a69fe262a0ba5dd5b0004b7018fb78c376a9509\": container with ID starting with c96922fc08fc2b71c8a9b87d8a69fe262a0ba5dd5b0004b7018fb78c376a9509 not found: ID does not exist" Mar 10 19:36:28 crc kubenswrapper[4861]: I0310 19:36:28.975902 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3e0b3be4-9cfd-4b8c-b6c4-687b88bb2c49" path="/var/lib/kubelet/pods/3e0b3be4-9cfd-4b8c-b6c4-687b88bb2c49/volumes" Mar 10 19:36:51 crc kubenswrapper[4861]: I0310 19:36:51.991815 4861 patch_prober.go:28] interesting pod/machine-config-daemon-qttbr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 10 19:36:51 crc kubenswrapper[4861]: I0310 19:36:51.993867 4861 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qttbr" podUID="771189c2-452d-4204-a0b7-abfe9ba62bd0" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection 
refused" Mar 10 19:36:54 crc kubenswrapper[4861]: I0310 19:36:54.497211 4861 scope.go:117] "RemoveContainer" containerID="43393281e71df51937a953c595800fbb10742aee2b0dca908db48bb954d216de" Mar 10 19:37:21 crc kubenswrapper[4861]: I0310 19:37:21.992811 4861 patch_prober.go:28] interesting pod/machine-config-daemon-qttbr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 10 19:37:21 crc kubenswrapper[4861]: I0310 19:37:21.993570 4861 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qttbr" podUID="771189c2-452d-4204-a0b7-abfe9ba62bd0" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 10 19:37:51 crc kubenswrapper[4861]: I0310 19:37:51.992665 4861 patch_prober.go:28] interesting pod/machine-config-daemon-qttbr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 10 19:37:51 crc kubenswrapper[4861]: I0310 19:37:51.993389 4861 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qttbr" podUID="771189c2-452d-4204-a0b7-abfe9ba62bd0" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 10 19:37:51 crc kubenswrapper[4861]: I0310 19:37:51.993441 4861 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-qttbr" Mar 10 19:37:51 crc kubenswrapper[4861]: I0310 19:37:51.994263 4861 kuberuntime_manager.go:1027] "Message for 
Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"a236211f320443ddcf90cff3d62a90701e5dd6714a7c6df490aa7277b1ed7089"} pod="openshift-machine-config-operator/machine-config-daemon-qttbr" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 10 19:37:51 crc kubenswrapper[4861]: I0310 19:37:51.994344 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-qttbr" podUID="771189c2-452d-4204-a0b7-abfe9ba62bd0" containerName="machine-config-daemon" containerID="cri-o://a236211f320443ddcf90cff3d62a90701e5dd6714a7c6df490aa7277b1ed7089" gracePeriod=600 Mar 10 19:37:52 crc kubenswrapper[4861]: I0310 19:37:52.759275 4861 generic.go:334] "Generic (PLEG): container finished" podID="771189c2-452d-4204-a0b7-abfe9ba62bd0" containerID="a236211f320443ddcf90cff3d62a90701e5dd6714a7c6df490aa7277b1ed7089" exitCode=0 Mar 10 19:37:52 crc kubenswrapper[4861]: I0310 19:37:52.759359 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qttbr" event={"ID":"771189c2-452d-4204-a0b7-abfe9ba62bd0","Type":"ContainerDied","Data":"a236211f320443ddcf90cff3d62a90701e5dd6714a7c6df490aa7277b1ed7089"} Mar 10 19:37:52 crc kubenswrapper[4861]: I0310 19:37:52.760430 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qttbr" event={"ID":"771189c2-452d-4204-a0b7-abfe9ba62bd0","Type":"ContainerStarted","Data":"badc305024bb48804f9edcbbbfb68dfba79ce39195a0e17ddfa2a9c4c010b995"} Mar 10 19:37:52 crc kubenswrapper[4861]: I0310 19:37:52.760538 4861 scope.go:117] "RemoveContainer" containerID="fd0a159c2c146400d4580f41b7ac863e4f7fe4e1d0fac7f0b219dc9bba5fbf48" Mar 10 19:38:00 crc kubenswrapper[4861]: I0310 19:38:00.159387 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29552858-mtrc6"] Mar 
10 19:38:00 crc kubenswrapper[4861]: E0310 19:38:00.160573 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3e0b3be4-9cfd-4b8c-b6c4-687b88bb2c49" containerName="registry-server" Mar 10 19:38:00 crc kubenswrapper[4861]: I0310 19:38:00.160596 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="3e0b3be4-9cfd-4b8c-b6c4-687b88bb2c49" containerName="registry-server" Mar 10 19:38:00 crc kubenswrapper[4861]: E0310 19:38:00.160633 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3e0b3be4-9cfd-4b8c-b6c4-687b88bb2c49" containerName="extract-content" Mar 10 19:38:00 crc kubenswrapper[4861]: I0310 19:38:00.160646 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="3e0b3be4-9cfd-4b8c-b6c4-687b88bb2c49" containerName="extract-content" Mar 10 19:38:00 crc kubenswrapper[4861]: E0310 19:38:00.160661 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3e0b3be4-9cfd-4b8c-b6c4-687b88bb2c49" containerName="extract-utilities" Mar 10 19:38:00 crc kubenswrapper[4861]: I0310 19:38:00.160673 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="3e0b3be4-9cfd-4b8c-b6c4-687b88bb2c49" containerName="extract-utilities" Mar 10 19:38:00 crc kubenswrapper[4861]: E0310 19:38:00.160694 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="86e549e4-c14b-44f1-af46-731c86023b56" containerName="extract-utilities" Mar 10 19:38:00 crc kubenswrapper[4861]: I0310 19:38:00.160733 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="86e549e4-c14b-44f1-af46-731c86023b56" containerName="extract-utilities" Mar 10 19:38:00 crc kubenswrapper[4861]: E0310 19:38:00.160750 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="86e549e4-c14b-44f1-af46-731c86023b56" containerName="extract-content" Mar 10 19:38:00 crc kubenswrapper[4861]: I0310 19:38:00.160762 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="86e549e4-c14b-44f1-af46-731c86023b56" containerName="extract-content" Mar 10 
19:38:00 crc kubenswrapper[4861]: E0310 19:38:00.160782 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="86e549e4-c14b-44f1-af46-731c86023b56" containerName="registry-server" Mar 10 19:38:00 crc kubenswrapper[4861]: I0310 19:38:00.160795 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="86e549e4-c14b-44f1-af46-731c86023b56" containerName="registry-server" Mar 10 19:38:00 crc kubenswrapper[4861]: I0310 19:38:00.161027 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="3e0b3be4-9cfd-4b8c-b6c4-687b88bb2c49" containerName="registry-server" Mar 10 19:38:00 crc kubenswrapper[4861]: I0310 19:38:00.161049 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="86e549e4-c14b-44f1-af46-731c86023b56" containerName="registry-server" Mar 10 19:38:00 crc kubenswrapper[4861]: I0310 19:38:00.162263 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29552858-mtrc6" Mar 10 19:38:00 crc kubenswrapper[4861]: I0310 19:38:00.165689 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 10 19:38:00 crc kubenswrapper[4861]: I0310 19:38:00.167256 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 10 19:38:00 crc kubenswrapper[4861]: I0310 19:38:00.167686 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-gfbj2" Mar 10 19:38:00 crc kubenswrapper[4861]: I0310 19:38:00.170376 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c87lv\" (UniqueName: \"kubernetes.io/projected/b0620f16-6cf4-4d06-88a4-48c8821d5113-kube-api-access-c87lv\") pod \"auto-csr-approver-29552858-mtrc6\" (UID: \"b0620f16-6cf4-4d06-88a4-48c8821d5113\") " pod="openshift-infra/auto-csr-approver-29552858-mtrc6" Mar 10 19:38:00 crc kubenswrapper[4861]: 
I0310 19:38:00.176704 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29552858-mtrc6"] Mar 10 19:38:00 crc kubenswrapper[4861]: I0310 19:38:00.272299 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c87lv\" (UniqueName: \"kubernetes.io/projected/b0620f16-6cf4-4d06-88a4-48c8821d5113-kube-api-access-c87lv\") pod \"auto-csr-approver-29552858-mtrc6\" (UID: \"b0620f16-6cf4-4d06-88a4-48c8821d5113\") " pod="openshift-infra/auto-csr-approver-29552858-mtrc6" Mar 10 19:38:00 crc kubenswrapper[4861]: I0310 19:38:00.307618 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c87lv\" (UniqueName: \"kubernetes.io/projected/b0620f16-6cf4-4d06-88a4-48c8821d5113-kube-api-access-c87lv\") pod \"auto-csr-approver-29552858-mtrc6\" (UID: \"b0620f16-6cf4-4d06-88a4-48c8821d5113\") " pod="openshift-infra/auto-csr-approver-29552858-mtrc6" Mar 10 19:38:00 crc kubenswrapper[4861]: I0310 19:38:00.494734 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29552858-mtrc6" Mar 10 19:38:01 crc kubenswrapper[4861]: I0310 19:38:01.017467 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29552858-mtrc6"] Mar 10 19:38:01 crc kubenswrapper[4861]: I0310 19:38:01.024412 4861 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 10 19:38:01 crc kubenswrapper[4861]: I0310 19:38:01.849482 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29552858-mtrc6" event={"ID":"b0620f16-6cf4-4d06-88a4-48c8821d5113","Type":"ContainerStarted","Data":"2ad90813b176f6eb631d51dcafeccc5dbe29ec9564a9b6dda5b668902a5f3f87"} Mar 10 19:38:02 crc kubenswrapper[4861]: I0310 19:38:02.862117 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29552858-mtrc6" event={"ID":"b0620f16-6cf4-4d06-88a4-48c8821d5113","Type":"ContainerStarted","Data":"dce78c005ab09647f63b2b4aac6737f128b5bc6e21e77121fbbe871822090713"} Mar 10 19:38:02 crc kubenswrapper[4861]: I0310 19:38:02.887823 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29552858-mtrc6" podStartSLOduration=1.614773439 podStartE2EDuration="2.887800018s" podCreationTimestamp="2026-03-10 19:38:00 +0000 UTC" firstStartedPulling="2026-03-10 19:38:01.024161872 +0000 UTC m=+3024.787597832" lastFinishedPulling="2026-03-10 19:38:02.297188411 +0000 UTC m=+3026.060624411" observedRunningTime="2026-03-10 19:38:02.878603953 +0000 UTC m=+3026.642039953" watchObservedRunningTime="2026-03-10 19:38:02.887800018 +0000 UTC m=+3026.651236008" Mar 10 19:38:03 crc kubenswrapper[4861]: I0310 19:38:03.873136 4861 generic.go:334] "Generic (PLEG): container finished" podID="b0620f16-6cf4-4d06-88a4-48c8821d5113" containerID="dce78c005ab09647f63b2b4aac6737f128b5bc6e21e77121fbbe871822090713" exitCode=0 Mar 10 19:38:03 crc 
kubenswrapper[4861]: I0310 19:38:03.873213 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29552858-mtrc6" event={"ID":"b0620f16-6cf4-4d06-88a4-48c8821d5113","Type":"ContainerDied","Data":"dce78c005ab09647f63b2b4aac6737f128b5bc6e21e77121fbbe871822090713"} Mar 10 19:38:05 crc kubenswrapper[4861]: I0310 19:38:05.258587 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29552858-mtrc6" Mar 10 19:38:05 crc kubenswrapper[4861]: I0310 19:38:05.389826 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c87lv\" (UniqueName: \"kubernetes.io/projected/b0620f16-6cf4-4d06-88a4-48c8821d5113-kube-api-access-c87lv\") pod \"b0620f16-6cf4-4d06-88a4-48c8821d5113\" (UID: \"b0620f16-6cf4-4d06-88a4-48c8821d5113\") " Mar 10 19:38:05 crc kubenswrapper[4861]: I0310 19:38:05.401579 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b0620f16-6cf4-4d06-88a4-48c8821d5113-kube-api-access-c87lv" (OuterVolumeSpecName: "kube-api-access-c87lv") pod "b0620f16-6cf4-4d06-88a4-48c8821d5113" (UID: "b0620f16-6cf4-4d06-88a4-48c8821d5113"). InnerVolumeSpecName "kube-api-access-c87lv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 10 19:38:05 crc kubenswrapper[4861]: I0310 19:38:05.494570 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c87lv\" (UniqueName: \"kubernetes.io/projected/b0620f16-6cf4-4d06-88a4-48c8821d5113-kube-api-access-c87lv\") on node \"crc\" DevicePath \"\"" Mar 10 19:38:05 crc kubenswrapper[4861]: I0310 19:38:05.893687 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29552858-mtrc6" event={"ID":"b0620f16-6cf4-4d06-88a4-48c8821d5113","Type":"ContainerDied","Data":"2ad90813b176f6eb631d51dcafeccc5dbe29ec9564a9b6dda5b668902a5f3f87"} Mar 10 19:38:05 crc kubenswrapper[4861]: I0310 19:38:05.894094 4861 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2ad90813b176f6eb631d51dcafeccc5dbe29ec9564a9b6dda5b668902a5f3f87" Mar 10 19:38:05 crc kubenswrapper[4861]: I0310 19:38:05.893788 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29552858-mtrc6" Mar 10 19:38:05 crc kubenswrapper[4861]: I0310 19:38:05.966283 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29552852-k74lk"] Mar 10 19:38:05 crc kubenswrapper[4861]: I0310 19:38:05.972836 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29552852-k74lk"] Mar 10 19:38:06 crc kubenswrapper[4861]: I0310 19:38:06.974816 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="65e1294d-53ac-4c81-bdeb-2ffe10009b37" path="/var/lib/kubelet/pods/65e1294d-53ac-4c81-bdeb-2ffe10009b37/volumes" Mar 10 19:38:54 crc kubenswrapper[4861]: I0310 19:38:54.651457 4861 scope.go:117] "RemoveContainer" containerID="c7b205995f492dd09c36886e508838cbdc83de3afbef246dbaf76e8eac282bdf" Mar 10 19:40:00 crc kubenswrapper[4861]: I0310 19:40:00.165802 4861 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-infra/auto-csr-approver-29552860-qcxxd"] Mar 10 19:40:00 crc kubenswrapper[4861]: E0310 19:40:00.166836 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b0620f16-6cf4-4d06-88a4-48c8821d5113" containerName="oc" Mar 10 19:40:00 crc kubenswrapper[4861]: I0310 19:40:00.166859 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="b0620f16-6cf4-4d06-88a4-48c8821d5113" containerName="oc" Mar 10 19:40:00 crc kubenswrapper[4861]: I0310 19:40:00.167159 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="b0620f16-6cf4-4d06-88a4-48c8821d5113" containerName="oc" Mar 10 19:40:00 crc kubenswrapper[4861]: I0310 19:40:00.167874 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29552860-qcxxd" Mar 10 19:40:00 crc kubenswrapper[4861]: I0310 19:40:00.172790 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 10 19:40:00 crc kubenswrapper[4861]: I0310 19:40:00.173385 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 10 19:40:00 crc kubenswrapper[4861]: I0310 19:40:00.173688 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-gfbj2" Mar 10 19:40:00 crc kubenswrapper[4861]: I0310 19:40:00.182473 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29552860-qcxxd"] Mar 10 19:40:00 crc kubenswrapper[4861]: I0310 19:40:00.312671 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-65wxs\" (UniqueName: \"kubernetes.io/projected/97e1e534-3448-4c6d-aa2b-88fde11e4996-kube-api-access-65wxs\") pod \"auto-csr-approver-29552860-qcxxd\" (UID: \"97e1e534-3448-4c6d-aa2b-88fde11e4996\") " pod="openshift-infra/auto-csr-approver-29552860-qcxxd" Mar 10 19:40:00 crc kubenswrapper[4861]: I0310 
19:40:00.415268 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-65wxs\" (UniqueName: \"kubernetes.io/projected/97e1e534-3448-4c6d-aa2b-88fde11e4996-kube-api-access-65wxs\") pod \"auto-csr-approver-29552860-qcxxd\" (UID: \"97e1e534-3448-4c6d-aa2b-88fde11e4996\") " pod="openshift-infra/auto-csr-approver-29552860-qcxxd" Mar 10 19:40:00 crc kubenswrapper[4861]: I0310 19:40:00.450975 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-65wxs\" (UniqueName: \"kubernetes.io/projected/97e1e534-3448-4c6d-aa2b-88fde11e4996-kube-api-access-65wxs\") pod \"auto-csr-approver-29552860-qcxxd\" (UID: \"97e1e534-3448-4c6d-aa2b-88fde11e4996\") " pod="openshift-infra/auto-csr-approver-29552860-qcxxd" Mar 10 19:40:00 crc kubenswrapper[4861]: I0310 19:40:00.516749 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29552860-qcxxd" Mar 10 19:40:01 crc kubenswrapper[4861]: I0310 19:40:01.068843 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29552860-qcxxd"] Mar 10 19:40:02 crc kubenswrapper[4861]: I0310 19:40:02.012181 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29552860-qcxxd" event={"ID":"97e1e534-3448-4c6d-aa2b-88fde11e4996","Type":"ContainerStarted","Data":"de209b6b5a9e85f35cb469b1649058fc02c3ad94bc96301271a9ab5149dc912f"} Mar 10 19:40:03 crc kubenswrapper[4861]: I0310 19:40:03.036107 4861 generic.go:334] "Generic (PLEG): container finished" podID="97e1e534-3448-4c6d-aa2b-88fde11e4996" containerID="006e9cae8eb2539480672ac5ccec13f59bb3835e58a2c030d759278a5d7dfeb4" exitCode=0 Mar 10 19:40:03 crc kubenswrapper[4861]: I0310 19:40:03.036232 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29552860-qcxxd" 
event={"ID":"97e1e534-3448-4c6d-aa2b-88fde11e4996","Type":"ContainerDied","Data":"006e9cae8eb2539480672ac5ccec13f59bb3835e58a2c030d759278a5d7dfeb4"} Mar 10 19:40:04 crc kubenswrapper[4861]: I0310 19:40:04.457075 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29552860-qcxxd" Mar 10 19:40:04 crc kubenswrapper[4861]: I0310 19:40:04.596797 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-65wxs\" (UniqueName: \"kubernetes.io/projected/97e1e534-3448-4c6d-aa2b-88fde11e4996-kube-api-access-65wxs\") pod \"97e1e534-3448-4c6d-aa2b-88fde11e4996\" (UID: \"97e1e534-3448-4c6d-aa2b-88fde11e4996\") " Mar 10 19:40:04 crc kubenswrapper[4861]: I0310 19:40:04.606008 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/97e1e534-3448-4c6d-aa2b-88fde11e4996-kube-api-access-65wxs" (OuterVolumeSpecName: "kube-api-access-65wxs") pod "97e1e534-3448-4c6d-aa2b-88fde11e4996" (UID: "97e1e534-3448-4c6d-aa2b-88fde11e4996"). InnerVolumeSpecName "kube-api-access-65wxs". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 10 19:40:04 crc kubenswrapper[4861]: I0310 19:40:04.698945 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-65wxs\" (UniqueName: \"kubernetes.io/projected/97e1e534-3448-4c6d-aa2b-88fde11e4996-kube-api-access-65wxs\") on node \"crc\" DevicePath \"\"" Mar 10 19:40:05 crc kubenswrapper[4861]: I0310 19:40:05.057603 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29552860-qcxxd" event={"ID":"97e1e534-3448-4c6d-aa2b-88fde11e4996","Type":"ContainerDied","Data":"de209b6b5a9e85f35cb469b1649058fc02c3ad94bc96301271a9ab5149dc912f"} Mar 10 19:40:05 crc kubenswrapper[4861]: I0310 19:40:05.057661 4861 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="de209b6b5a9e85f35cb469b1649058fc02c3ad94bc96301271a9ab5149dc912f" Mar 10 19:40:05 crc kubenswrapper[4861]: I0310 19:40:05.057685 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29552860-qcxxd" Mar 10 19:40:05 crc kubenswrapper[4861]: I0310 19:40:05.546224 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29552854-v8cc5"] Mar 10 19:40:05 crc kubenswrapper[4861]: I0310 19:40:05.555361 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29552854-v8cc5"] Mar 10 19:40:06 crc kubenswrapper[4861]: I0310 19:40:06.977842 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0453adaa-794c-4700-b8eb-144ec393fe00" path="/var/lib/kubelet/pods/0453adaa-794c-4700-b8eb-144ec393fe00/volumes" Mar 10 19:40:21 crc kubenswrapper[4861]: I0310 19:40:21.992042 4861 patch_prober.go:28] interesting pod/machine-config-daemon-qttbr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: 
connection refused" start-of-body= Mar 10 19:40:21 crc kubenswrapper[4861]: I0310 19:40:21.992805 4861 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qttbr" podUID="771189c2-452d-4204-a0b7-abfe9ba62bd0" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 10 19:40:51 crc kubenswrapper[4861]: I0310 19:40:51.992674 4861 patch_prober.go:28] interesting pod/machine-config-daemon-qttbr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 10 19:40:51 crc kubenswrapper[4861]: I0310 19:40:51.993276 4861 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qttbr" podUID="771189c2-452d-4204-a0b7-abfe9ba62bd0" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 10 19:40:54 crc kubenswrapper[4861]: I0310 19:40:54.765432 4861 scope.go:117] "RemoveContainer" containerID="d0c75f1c113bc69335e4db8a6331d2f2cadfbfb64eb2648e94106de23fa61915" Mar 10 19:41:09 crc kubenswrapper[4861]: I0310 19:41:09.983521 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-nr79c"] Mar 10 19:41:09 crc kubenswrapper[4861]: E0310 19:41:09.984652 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="97e1e534-3448-4c6d-aa2b-88fde11e4996" containerName="oc" Mar 10 19:41:09 crc kubenswrapper[4861]: I0310 19:41:09.984679 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="97e1e534-3448-4c6d-aa2b-88fde11e4996" containerName="oc" Mar 10 19:41:09 crc kubenswrapper[4861]: I0310 19:41:09.984988 4861 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="97e1e534-3448-4c6d-aa2b-88fde11e4996" containerName="oc" Mar 10 19:41:09 crc kubenswrapper[4861]: I0310 19:41:09.986622 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-nr79c" Mar 10 19:41:09 crc kubenswrapper[4861]: I0310 19:41:09.999262 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-nr79c"] Mar 10 19:41:10 crc kubenswrapper[4861]: I0310 19:41:10.166677 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/52d07f38-a65d-4a0e-8f44-2b80f6dff700-catalog-content\") pod \"community-operators-nr79c\" (UID: \"52d07f38-a65d-4a0e-8f44-2b80f6dff700\") " pod="openshift-marketplace/community-operators-nr79c" Mar 10 19:41:10 crc kubenswrapper[4861]: I0310 19:41:10.166874 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7xx69\" (UniqueName: \"kubernetes.io/projected/52d07f38-a65d-4a0e-8f44-2b80f6dff700-kube-api-access-7xx69\") pod \"community-operators-nr79c\" (UID: \"52d07f38-a65d-4a0e-8f44-2b80f6dff700\") " pod="openshift-marketplace/community-operators-nr79c" Mar 10 19:41:10 crc kubenswrapper[4861]: I0310 19:41:10.167148 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/52d07f38-a65d-4a0e-8f44-2b80f6dff700-utilities\") pod \"community-operators-nr79c\" (UID: \"52d07f38-a65d-4a0e-8f44-2b80f6dff700\") " pod="openshift-marketplace/community-operators-nr79c" Mar 10 19:41:10 crc kubenswrapper[4861]: I0310 19:41:10.268623 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/52d07f38-a65d-4a0e-8f44-2b80f6dff700-utilities\") pod \"community-operators-nr79c\" (UID: 
\"52d07f38-a65d-4a0e-8f44-2b80f6dff700\") " pod="openshift-marketplace/community-operators-nr79c" Mar 10 19:41:10 crc kubenswrapper[4861]: I0310 19:41:10.268689 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/52d07f38-a65d-4a0e-8f44-2b80f6dff700-catalog-content\") pod \"community-operators-nr79c\" (UID: \"52d07f38-a65d-4a0e-8f44-2b80f6dff700\") " pod="openshift-marketplace/community-operators-nr79c" Mar 10 19:41:10 crc kubenswrapper[4861]: I0310 19:41:10.268749 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7xx69\" (UniqueName: \"kubernetes.io/projected/52d07f38-a65d-4a0e-8f44-2b80f6dff700-kube-api-access-7xx69\") pod \"community-operators-nr79c\" (UID: \"52d07f38-a65d-4a0e-8f44-2b80f6dff700\") " pod="openshift-marketplace/community-operators-nr79c" Mar 10 19:41:10 crc kubenswrapper[4861]: I0310 19:41:10.269474 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/52d07f38-a65d-4a0e-8f44-2b80f6dff700-utilities\") pod \"community-operators-nr79c\" (UID: \"52d07f38-a65d-4a0e-8f44-2b80f6dff700\") " pod="openshift-marketplace/community-operators-nr79c" Mar 10 19:41:10 crc kubenswrapper[4861]: I0310 19:41:10.269583 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/52d07f38-a65d-4a0e-8f44-2b80f6dff700-catalog-content\") pod \"community-operators-nr79c\" (UID: \"52d07f38-a65d-4a0e-8f44-2b80f6dff700\") " pod="openshift-marketplace/community-operators-nr79c" Mar 10 19:41:10 crc kubenswrapper[4861]: I0310 19:41:10.297008 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7xx69\" (UniqueName: \"kubernetes.io/projected/52d07f38-a65d-4a0e-8f44-2b80f6dff700-kube-api-access-7xx69\") pod \"community-operators-nr79c\" (UID: 
\"52d07f38-a65d-4a0e-8f44-2b80f6dff700\") " pod="openshift-marketplace/community-operators-nr79c" Mar 10 19:41:10 crc kubenswrapper[4861]: I0310 19:41:10.324407 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-nr79c" Mar 10 19:41:10 crc kubenswrapper[4861]: I0310 19:41:10.893269 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-nr79c"] Mar 10 19:41:11 crc kubenswrapper[4861]: I0310 19:41:11.681413 4861 generic.go:334] "Generic (PLEG): container finished" podID="52d07f38-a65d-4a0e-8f44-2b80f6dff700" containerID="d5d9dc93564cba6e2290586fc667d71bd7b70675aabfe82f85936b6a31fcd580" exitCode=0 Mar 10 19:41:11 crc kubenswrapper[4861]: I0310 19:41:11.681591 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nr79c" event={"ID":"52d07f38-a65d-4a0e-8f44-2b80f6dff700","Type":"ContainerDied","Data":"d5d9dc93564cba6e2290586fc667d71bd7b70675aabfe82f85936b6a31fcd580"} Mar 10 19:41:11 crc kubenswrapper[4861]: I0310 19:41:11.681911 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nr79c" event={"ID":"52d07f38-a65d-4a0e-8f44-2b80f6dff700","Type":"ContainerStarted","Data":"6fa9916543573f31efd4edf88d08476076db9adf70e6fb6faad1e4288dd25815"} Mar 10 19:41:12 crc kubenswrapper[4861]: I0310 19:41:12.690427 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nr79c" event={"ID":"52d07f38-a65d-4a0e-8f44-2b80f6dff700","Type":"ContainerStarted","Data":"def74ea2bcd075807255ed79e545fea897d9e5f51bde445fede59fc364c569e7"} Mar 10 19:41:13 crc kubenswrapper[4861]: I0310 19:41:13.701387 4861 generic.go:334] "Generic (PLEG): container finished" podID="52d07f38-a65d-4a0e-8f44-2b80f6dff700" containerID="def74ea2bcd075807255ed79e545fea897d9e5f51bde445fede59fc364c569e7" exitCode=0 Mar 10 19:41:13 crc kubenswrapper[4861]: I0310 
19:41:13.701475 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nr79c" event={"ID":"52d07f38-a65d-4a0e-8f44-2b80f6dff700","Type":"ContainerDied","Data":"def74ea2bcd075807255ed79e545fea897d9e5f51bde445fede59fc364c569e7"} Mar 10 19:41:14 crc kubenswrapper[4861]: I0310 19:41:14.712336 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nr79c" event={"ID":"52d07f38-a65d-4a0e-8f44-2b80f6dff700","Type":"ContainerStarted","Data":"546bcea379229dfd191d381cdab16a6fc4919cdd7d173d95326497dded748066"} Mar 10 19:41:14 crc kubenswrapper[4861]: I0310 19:41:14.742598 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-nr79c" podStartSLOduration=3.320428409 podStartE2EDuration="5.742580496s" podCreationTimestamp="2026-03-10 19:41:09 +0000 UTC" firstStartedPulling="2026-03-10 19:41:11.683973231 +0000 UTC m=+3215.447409221" lastFinishedPulling="2026-03-10 19:41:14.106125308 +0000 UTC m=+3217.869561308" observedRunningTime="2026-03-10 19:41:14.73445612 +0000 UTC m=+3218.497892090" watchObservedRunningTime="2026-03-10 19:41:14.742580496 +0000 UTC m=+3218.506016466" Mar 10 19:41:20 crc kubenswrapper[4861]: I0310 19:41:20.325291 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-nr79c" Mar 10 19:41:20 crc kubenswrapper[4861]: I0310 19:41:20.325940 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-nr79c" Mar 10 19:41:20 crc kubenswrapper[4861]: I0310 19:41:20.405803 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-nr79c" Mar 10 19:41:20 crc kubenswrapper[4861]: I0310 19:41:20.839027 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-nr79c" Mar 10 
19:41:20 crc kubenswrapper[4861]: I0310 19:41:20.915675 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-nr79c"] Mar 10 19:41:21 crc kubenswrapper[4861]: I0310 19:41:21.992416 4861 patch_prober.go:28] interesting pod/machine-config-daemon-qttbr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 10 19:41:21 crc kubenswrapper[4861]: I0310 19:41:21.992529 4861 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qttbr" podUID="771189c2-452d-4204-a0b7-abfe9ba62bd0" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 10 19:41:21 crc kubenswrapper[4861]: I0310 19:41:21.992583 4861 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-qttbr" Mar 10 19:41:21 crc kubenswrapper[4861]: I0310 19:41:21.993451 4861 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"badc305024bb48804f9edcbbbfb68dfba79ce39195a0e17ddfa2a9c4c010b995"} pod="openshift-machine-config-operator/machine-config-daemon-qttbr" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 10 19:41:21 crc kubenswrapper[4861]: I0310 19:41:21.993550 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-qttbr" podUID="771189c2-452d-4204-a0b7-abfe9ba62bd0" containerName="machine-config-daemon" containerID="cri-o://badc305024bb48804f9edcbbbfb68dfba79ce39195a0e17ddfa2a9c4c010b995" gracePeriod=600 Mar 10 19:41:22 crc kubenswrapper[4861]: E0310 
19:41:22.126862 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qttbr_openshift-machine-config-operator(771189c2-452d-4204-a0b7-abfe9ba62bd0)\"" pod="openshift-machine-config-operator/machine-config-daemon-qttbr" podUID="771189c2-452d-4204-a0b7-abfe9ba62bd0" Mar 10 19:41:22 crc kubenswrapper[4861]: I0310 19:41:22.787467 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qttbr" event={"ID":"771189c2-452d-4204-a0b7-abfe9ba62bd0","Type":"ContainerDied","Data":"badc305024bb48804f9edcbbbfb68dfba79ce39195a0e17ddfa2a9c4c010b995"} Mar 10 19:41:22 crc kubenswrapper[4861]: I0310 19:41:22.787423 4861 generic.go:334] "Generic (PLEG): container finished" podID="771189c2-452d-4204-a0b7-abfe9ba62bd0" containerID="badc305024bb48804f9edcbbbfb68dfba79ce39195a0e17ddfa2a9c4c010b995" exitCode=0 Mar 10 19:41:22 crc kubenswrapper[4861]: I0310 19:41:22.787557 4861 scope.go:117] "RemoveContainer" containerID="a236211f320443ddcf90cff3d62a90701e5dd6714a7c6df490aa7277b1ed7089" Mar 10 19:41:22 crc kubenswrapper[4861]: I0310 19:41:22.788192 4861 scope.go:117] "RemoveContainer" containerID="badc305024bb48804f9edcbbbfb68dfba79ce39195a0e17ddfa2a9c4c010b995" Mar 10 19:41:22 crc kubenswrapper[4861]: I0310 19:41:22.788304 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-nr79c" podUID="52d07f38-a65d-4a0e-8f44-2b80f6dff700" containerName="registry-server" containerID="cri-o://546bcea379229dfd191d381cdab16a6fc4919cdd7d173d95326497dded748066" gracePeriod=2 Mar 10 19:41:22 crc kubenswrapper[4861]: E0310 19:41:22.788863 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting 
failed container=machine-config-daemon pod=machine-config-daemon-qttbr_openshift-machine-config-operator(771189c2-452d-4204-a0b7-abfe9ba62bd0)\"" pod="openshift-machine-config-operator/machine-config-daemon-qttbr" podUID="771189c2-452d-4204-a0b7-abfe9ba62bd0" Mar 10 19:41:23 crc kubenswrapper[4861]: I0310 19:41:23.217883 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-nr79c" Mar 10 19:41:23 crc kubenswrapper[4861]: I0310 19:41:23.380897 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7xx69\" (UniqueName: \"kubernetes.io/projected/52d07f38-a65d-4a0e-8f44-2b80f6dff700-kube-api-access-7xx69\") pod \"52d07f38-a65d-4a0e-8f44-2b80f6dff700\" (UID: \"52d07f38-a65d-4a0e-8f44-2b80f6dff700\") " Mar 10 19:41:23 crc kubenswrapper[4861]: I0310 19:41:23.380985 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/52d07f38-a65d-4a0e-8f44-2b80f6dff700-utilities\") pod \"52d07f38-a65d-4a0e-8f44-2b80f6dff700\" (UID: \"52d07f38-a65d-4a0e-8f44-2b80f6dff700\") " Mar 10 19:41:23 crc kubenswrapper[4861]: I0310 19:41:23.381071 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/52d07f38-a65d-4a0e-8f44-2b80f6dff700-catalog-content\") pod \"52d07f38-a65d-4a0e-8f44-2b80f6dff700\" (UID: \"52d07f38-a65d-4a0e-8f44-2b80f6dff700\") " Mar 10 19:41:23 crc kubenswrapper[4861]: I0310 19:41:23.381785 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/52d07f38-a65d-4a0e-8f44-2b80f6dff700-utilities" (OuterVolumeSpecName: "utilities") pod "52d07f38-a65d-4a0e-8f44-2b80f6dff700" (UID: "52d07f38-a65d-4a0e-8f44-2b80f6dff700"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 10 19:41:23 crc kubenswrapper[4861]: I0310 19:41:23.394874 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/52d07f38-a65d-4a0e-8f44-2b80f6dff700-kube-api-access-7xx69" (OuterVolumeSpecName: "kube-api-access-7xx69") pod "52d07f38-a65d-4a0e-8f44-2b80f6dff700" (UID: "52d07f38-a65d-4a0e-8f44-2b80f6dff700"). InnerVolumeSpecName "kube-api-access-7xx69". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 10 19:41:23 crc kubenswrapper[4861]: I0310 19:41:23.483138 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7xx69\" (UniqueName: \"kubernetes.io/projected/52d07f38-a65d-4a0e-8f44-2b80f6dff700-kube-api-access-7xx69\") on node \"crc\" DevicePath \"\"" Mar 10 19:41:23 crc kubenswrapper[4861]: I0310 19:41:23.483400 4861 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/52d07f38-a65d-4a0e-8f44-2b80f6dff700-utilities\") on node \"crc\" DevicePath \"\"" Mar 10 19:41:23 crc kubenswrapper[4861]: I0310 19:41:23.754598 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/52d07f38-a65d-4a0e-8f44-2b80f6dff700-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "52d07f38-a65d-4a0e-8f44-2b80f6dff700" (UID: "52d07f38-a65d-4a0e-8f44-2b80f6dff700"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 10 19:41:23 crc kubenswrapper[4861]: I0310 19:41:23.788552 4861 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/52d07f38-a65d-4a0e-8f44-2b80f6dff700-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 10 19:41:23 crc kubenswrapper[4861]: I0310 19:41:23.802893 4861 generic.go:334] "Generic (PLEG): container finished" podID="52d07f38-a65d-4a0e-8f44-2b80f6dff700" containerID="546bcea379229dfd191d381cdab16a6fc4919cdd7d173d95326497dded748066" exitCode=0 Mar 10 19:41:23 crc kubenswrapper[4861]: I0310 19:41:23.802957 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nr79c" event={"ID":"52d07f38-a65d-4a0e-8f44-2b80f6dff700","Type":"ContainerDied","Data":"546bcea379229dfd191d381cdab16a6fc4919cdd7d173d95326497dded748066"} Mar 10 19:41:23 crc kubenswrapper[4861]: I0310 19:41:23.803040 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nr79c" event={"ID":"52d07f38-a65d-4a0e-8f44-2b80f6dff700","Type":"ContainerDied","Data":"6fa9916543573f31efd4edf88d08476076db9adf70e6fb6faad1e4288dd25815"} Mar 10 19:41:23 crc kubenswrapper[4861]: I0310 19:41:23.803073 4861 scope.go:117] "RemoveContainer" containerID="546bcea379229dfd191d381cdab16a6fc4919cdd7d173d95326497dded748066" Mar 10 19:41:23 crc kubenswrapper[4861]: I0310 19:41:23.802979 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-nr79c" Mar 10 19:41:23 crc kubenswrapper[4861]: I0310 19:41:23.831676 4861 scope.go:117] "RemoveContainer" containerID="def74ea2bcd075807255ed79e545fea897d9e5f51bde445fede59fc364c569e7" Mar 10 19:41:23 crc kubenswrapper[4861]: I0310 19:41:23.853742 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-nr79c"] Mar 10 19:41:23 crc kubenswrapper[4861]: I0310 19:41:23.862702 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-nr79c"] Mar 10 19:41:23 crc kubenswrapper[4861]: I0310 19:41:23.875434 4861 scope.go:117] "RemoveContainer" containerID="d5d9dc93564cba6e2290586fc667d71bd7b70675aabfe82f85936b6a31fcd580" Mar 10 19:41:23 crc kubenswrapper[4861]: I0310 19:41:23.898511 4861 scope.go:117] "RemoveContainer" containerID="546bcea379229dfd191d381cdab16a6fc4919cdd7d173d95326497dded748066" Mar 10 19:41:23 crc kubenswrapper[4861]: E0310 19:41:23.899102 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"546bcea379229dfd191d381cdab16a6fc4919cdd7d173d95326497dded748066\": container with ID starting with 546bcea379229dfd191d381cdab16a6fc4919cdd7d173d95326497dded748066 not found: ID does not exist" containerID="546bcea379229dfd191d381cdab16a6fc4919cdd7d173d95326497dded748066" Mar 10 19:41:23 crc kubenswrapper[4861]: I0310 19:41:23.899141 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"546bcea379229dfd191d381cdab16a6fc4919cdd7d173d95326497dded748066"} err="failed to get container status \"546bcea379229dfd191d381cdab16a6fc4919cdd7d173d95326497dded748066\": rpc error: code = NotFound desc = could not find container \"546bcea379229dfd191d381cdab16a6fc4919cdd7d173d95326497dded748066\": container with ID starting with 546bcea379229dfd191d381cdab16a6fc4919cdd7d173d95326497dded748066 not 
found: ID does not exist" Mar 10 19:41:23 crc kubenswrapper[4861]: I0310 19:41:23.899169 4861 scope.go:117] "RemoveContainer" containerID="def74ea2bcd075807255ed79e545fea897d9e5f51bde445fede59fc364c569e7" Mar 10 19:41:23 crc kubenswrapper[4861]: E0310 19:41:23.899661 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"def74ea2bcd075807255ed79e545fea897d9e5f51bde445fede59fc364c569e7\": container with ID starting with def74ea2bcd075807255ed79e545fea897d9e5f51bde445fede59fc364c569e7 not found: ID does not exist" containerID="def74ea2bcd075807255ed79e545fea897d9e5f51bde445fede59fc364c569e7" Mar 10 19:41:23 crc kubenswrapper[4861]: I0310 19:41:23.899738 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"def74ea2bcd075807255ed79e545fea897d9e5f51bde445fede59fc364c569e7"} err="failed to get container status \"def74ea2bcd075807255ed79e545fea897d9e5f51bde445fede59fc364c569e7\": rpc error: code = NotFound desc = could not find container \"def74ea2bcd075807255ed79e545fea897d9e5f51bde445fede59fc364c569e7\": container with ID starting with def74ea2bcd075807255ed79e545fea897d9e5f51bde445fede59fc364c569e7 not found: ID does not exist" Mar 10 19:41:23 crc kubenswrapper[4861]: I0310 19:41:23.899776 4861 scope.go:117] "RemoveContainer" containerID="d5d9dc93564cba6e2290586fc667d71bd7b70675aabfe82f85936b6a31fcd580" Mar 10 19:41:23 crc kubenswrapper[4861]: E0310 19:41:23.901621 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d5d9dc93564cba6e2290586fc667d71bd7b70675aabfe82f85936b6a31fcd580\": container with ID starting with d5d9dc93564cba6e2290586fc667d71bd7b70675aabfe82f85936b6a31fcd580 not found: ID does not exist" containerID="d5d9dc93564cba6e2290586fc667d71bd7b70675aabfe82f85936b6a31fcd580" Mar 10 19:41:23 crc kubenswrapper[4861]: I0310 19:41:23.901684 4861 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d5d9dc93564cba6e2290586fc667d71bd7b70675aabfe82f85936b6a31fcd580"} err="failed to get container status \"d5d9dc93564cba6e2290586fc667d71bd7b70675aabfe82f85936b6a31fcd580\": rpc error: code = NotFound desc = could not find container \"d5d9dc93564cba6e2290586fc667d71bd7b70675aabfe82f85936b6a31fcd580\": container with ID starting with d5d9dc93564cba6e2290586fc667d71bd7b70675aabfe82f85936b6a31fcd580 not found: ID does not exist" Mar 10 19:41:24 crc kubenswrapper[4861]: I0310 19:41:24.978080 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="52d07f38-a65d-4a0e-8f44-2b80f6dff700" path="/var/lib/kubelet/pods/52d07f38-a65d-4a0e-8f44-2b80f6dff700/volumes" Mar 10 19:41:36 crc kubenswrapper[4861]: I0310 19:41:36.966967 4861 scope.go:117] "RemoveContainer" containerID="badc305024bb48804f9edcbbbfb68dfba79ce39195a0e17ddfa2a9c4c010b995" Mar 10 19:41:36 crc kubenswrapper[4861]: E0310 19:41:36.968111 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qttbr_openshift-machine-config-operator(771189c2-452d-4204-a0b7-abfe9ba62bd0)\"" pod="openshift-machine-config-operator/machine-config-daemon-qttbr" podUID="771189c2-452d-4204-a0b7-abfe9ba62bd0" Mar 10 19:41:51 crc kubenswrapper[4861]: I0310 19:41:51.957878 4861 scope.go:117] "RemoveContainer" containerID="badc305024bb48804f9edcbbbfb68dfba79ce39195a0e17ddfa2a9c4c010b995" Mar 10 19:41:51 crc kubenswrapper[4861]: E0310 19:41:51.958882 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-qttbr_openshift-machine-config-operator(771189c2-452d-4204-a0b7-abfe9ba62bd0)\"" pod="openshift-machine-config-operator/machine-config-daemon-qttbr" podUID="771189c2-452d-4204-a0b7-abfe9ba62bd0" Mar 10 19:42:00 crc kubenswrapper[4861]: I0310 19:42:00.231942 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29552862-28np2"] Mar 10 19:42:00 crc kubenswrapper[4861]: E0310 19:42:00.232925 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="52d07f38-a65d-4a0e-8f44-2b80f6dff700" containerName="extract-content" Mar 10 19:42:00 crc kubenswrapper[4861]: I0310 19:42:00.232941 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="52d07f38-a65d-4a0e-8f44-2b80f6dff700" containerName="extract-content" Mar 10 19:42:00 crc kubenswrapper[4861]: E0310 19:42:00.232963 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="52d07f38-a65d-4a0e-8f44-2b80f6dff700" containerName="extract-utilities" Mar 10 19:42:00 crc kubenswrapper[4861]: I0310 19:42:00.232973 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="52d07f38-a65d-4a0e-8f44-2b80f6dff700" containerName="extract-utilities" Mar 10 19:42:00 crc kubenswrapper[4861]: E0310 19:42:00.232986 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="52d07f38-a65d-4a0e-8f44-2b80f6dff700" containerName="registry-server" Mar 10 19:42:00 crc kubenswrapper[4861]: I0310 19:42:00.232995 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="52d07f38-a65d-4a0e-8f44-2b80f6dff700" containerName="registry-server" Mar 10 19:42:00 crc kubenswrapper[4861]: I0310 19:42:00.233179 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="52d07f38-a65d-4a0e-8f44-2b80f6dff700" containerName="registry-server" Mar 10 19:42:00 crc kubenswrapper[4861]: I0310 19:42:00.233795 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29552862-28np2" Mar 10 19:42:00 crc kubenswrapper[4861]: I0310 19:42:00.235558 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-gfbj2" Mar 10 19:42:00 crc kubenswrapper[4861]: I0310 19:42:00.237522 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 10 19:42:00 crc kubenswrapper[4861]: I0310 19:42:00.237548 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 10 19:42:00 crc kubenswrapper[4861]: I0310 19:42:00.251275 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29552862-28np2"] Mar 10 19:42:00 crc kubenswrapper[4861]: I0310 19:42:00.335230 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-svbzb\" (UniqueName: \"kubernetes.io/projected/0a8f6013-54b6-4424-bcc5-6c72a9a1a60e-kube-api-access-svbzb\") pod \"auto-csr-approver-29552862-28np2\" (UID: \"0a8f6013-54b6-4424-bcc5-6c72a9a1a60e\") " pod="openshift-infra/auto-csr-approver-29552862-28np2" Mar 10 19:42:00 crc kubenswrapper[4861]: I0310 19:42:00.437194 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-svbzb\" (UniqueName: \"kubernetes.io/projected/0a8f6013-54b6-4424-bcc5-6c72a9a1a60e-kube-api-access-svbzb\") pod \"auto-csr-approver-29552862-28np2\" (UID: \"0a8f6013-54b6-4424-bcc5-6c72a9a1a60e\") " pod="openshift-infra/auto-csr-approver-29552862-28np2" Mar 10 19:42:00 crc kubenswrapper[4861]: I0310 19:42:00.479276 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-svbzb\" (UniqueName: \"kubernetes.io/projected/0a8f6013-54b6-4424-bcc5-6c72a9a1a60e-kube-api-access-svbzb\") pod \"auto-csr-approver-29552862-28np2\" (UID: \"0a8f6013-54b6-4424-bcc5-6c72a9a1a60e\") " 
pod="openshift-infra/auto-csr-approver-29552862-28np2" Mar 10 19:42:00 crc kubenswrapper[4861]: I0310 19:42:00.553859 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29552862-28np2" Mar 10 19:42:01 crc kubenswrapper[4861]: I0310 19:42:01.066472 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29552862-28np2"] Mar 10 19:42:01 crc kubenswrapper[4861]: I0310 19:42:01.231061 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29552862-28np2" event={"ID":"0a8f6013-54b6-4424-bcc5-6c72a9a1a60e","Type":"ContainerStarted","Data":"e22afe43b32bde366f4b272805734f4d717286e2bd1c8a5e994a88227c1d82f6"} Mar 10 19:42:03 crc kubenswrapper[4861]: I0310 19:42:03.255268 4861 generic.go:334] "Generic (PLEG): container finished" podID="0a8f6013-54b6-4424-bcc5-6c72a9a1a60e" containerID="f8b5b75a90c3282cba2d63ad4a4db9a66108a70308efd9f37d5cc60c88e00d5a" exitCode=0 Mar 10 19:42:03 crc kubenswrapper[4861]: I0310 19:42:03.255347 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29552862-28np2" event={"ID":"0a8f6013-54b6-4424-bcc5-6c72a9a1a60e","Type":"ContainerDied","Data":"f8b5b75a90c3282cba2d63ad4a4db9a66108a70308efd9f37d5cc60c88e00d5a"} Mar 10 19:42:04 crc kubenswrapper[4861]: I0310 19:42:04.704142 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29552862-28np2" Mar 10 19:42:04 crc kubenswrapper[4861]: I0310 19:42:04.813160 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-svbzb\" (UniqueName: \"kubernetes.io/projected/0a8f6013-54b6-4424-bcc5-6c72a9a1a60e-kube-api-access-svbzb\") pod \"0a8f6013-54b6-4424-bcc5-6c72a9a1a60e\" (UID: \"0a8f6013-54b6-4424-bcc5-6c72a9a1a60e\") " Mar 10 19:42:04 crc kubenswrapper[4861]: I0310 19:42:04.823477 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0a8f6013-54b6-4424-bcc5-6c72a9a1a60e-kube-api-access-svbzb" (OuterVolumeSpecName: "kube-api-access-svbzb") pod "0a8f6013-54b6-4424-bcc5-6c72a9a1a60e" (UID: "0a8f6013-54b6-4424-bcc5-6c72a9a1a60e"). InnerVolumeSpecName "kube-api-access-svbzb". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 10 19:42:04 crc kubenswrapper[4861]: I0310 19:42:04.917775 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-svbzb\" (UniqueName: \"kubernetes.io/projected/0a8f6013-54b6-4424-bcc5-6c72a9a1a60e-kube-api-access-svbzb\") on node \"crc\" DevicePath \"\"" Mar 10 19:42:04 crc kubenswrapper[4861]: I0310 19:42:04.958698 4861 scope.go:117] "RemoveContainer" containerID="badc305024bb48804f9edcbbbfb68dfba79ce39195a0e17ddfa2a9c4c010b995" Mar 10 19:42:04 crc kubenswrapper[4861]: E0310 19:42:04.959232 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qttbr_openshift-machine-config-operator(771189c2-452d-4204-a0b7-abfe9ba62bd0)\"" pod="openshift-machine-config-operator/machine-config-daemon-qttbr" podUID="771189c2-452d-4204-a0b7-abfe9ba62bd0" Mar 10 19:42:05 crc kubenswrapper[4861]: I0310 19:42:05.279971 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-infra/auto-csr-approver-29552862-28np2" event={"ID":"0a8f6013-54b6-4424-bcc5-6c72a9a1a60e","Type":"ContainerDied","Data":"e22afe43b32bde366f4b272805734f4d717286e2bd1c8a5e994a88227c1d82f6"} Mar 10 19:42:05 crc kubenswrapper[4861]: I0310 19:42:05.280046 4861 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e22afe43b32bde366f4b272805734f4d717286e2bd1c8a5e994a88227c1d82f6" Mar 10 19:42:05 crc kubenswrapper[4861]: I0310 19:42:05.280057 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29552862-28np2" Mar 10 19:42:05 crc kubenswrapper[4861]: I0310 19:42:05.807775 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29552856-vxtbd"] Mar 10 19:42:05 crc kubenswrapper[4861]: I0310 19:42:05.818367 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29552856-vxtbd"] Mar 10 19:42:06 crc kubenswrapper[4861]: I0310 19:42:06.973674 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ccbb80f0-69cb-47d9-ae5d-eb170d52fdb7" path="/var/lib/kubelet/pods/ccbb80f0-69cb-47d9-ae5d-eb170d52fdb7/volumes" Mar 10 19:42:19 crc kubenswrapper[4861]: I0310 19:42:19.958644 4861 scope.go:117] "RemoveContainer" containerID="badc305024bb48804f9edcbbbfb68dfba79ce39195a0e17ddfa2a9c4c010b995" Mar 10 19:42:19 crc kubenswrapper[4861]: E0310 19:42:19.959679 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qttbr_openshift-machine-config-operator(771189c2-452d-4204-a0b7-abfe9ba62bd0)\"" pod="openshift-machine-config-operator/machine-config-daemon-qttbr" podUID="771189c2-452d-4204-a0b7-abfe9ba62bd0" Mar 10 19:42:34 crc kubenswrapper[4861]: I0310 19:42:34.958670 4861 scope.go:117] "RemoveContainer" 
containerID="badc305024bb48804f9edcbbbfb68dfba79ce39195a0e17ddfa2a9c4c010b995" Mar 10 19:42:34 crc kubenswrapper[4861]: E0310 19:42:34.959849 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qttbr_openshift-machine-config-operator(771189c2-452d-4204-a0b7-abfe9ba62bd0)\"" pod="openshift-machine-config-operator/machine-config-daemon-qttbr" podUID="771189c2-452d-4204-a0b7-abfe9ba62bd0" Mar 10 19:42:45 crc kubenswrapper[4861]: I0310 19:42:45.958861 4861 scope.go:117] "RemoveContainer" containerID="badc305024bb48804f9edcbbbfb68dfba79ce39195a0e17ddfa2a9c4c010b995" Mar 10 19:42:45 crc kubenswrapper[4861]: E0310 19:42:45.959864 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qttbr_openshift-machine-config-operator(771189c2-452d-4204-a0b7-abfe9ba62bd0)\"" pod="openshift-machine-config-operator/machine-config-daemon-qttbr" podUID="771189c2-452d-4204-a0b7-abfe9ba62bd0" Mar 10 19:42:54 crc kubenswrapper[4861]: I0310 19:42:54.896016 4861 scope.go:117] "RemoveContainer" containerID="084910547fce05ca98abcceae04edd76b3367099053ae59fa2f607449eea485f" Mar 10 19:42:56 crc kubenswrapper[4861]: I0310 19:42:56.966537 4861 scope.go:117] "RemoveContainer" containerID="badc305024bb48804f9edcbbbfb68dfba79ce39195a0e17ddfa2a9c4c010b995" Mar 10 19:42:56 crc kubenswrapper[4861]: E0310 19:42:56.967327 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qttbr_openshift-machine-config-operator(771189c2-452d-4204-a0b7-abfe9ba62bd0)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-qttbr" podUID="771189c2-452d-4204-a0b7-abfe9ba62bd0" Mar 10 19:43:09 crc kubenswrapper[4861]: I0310 19:43:09.958487 4861 scope.go:117] "RemoveContainer" containerID="badc305024bb48804f9edcbbbfb68dfba79ce39195a0e17ddfa2a9c4c010b995" Mar 10 19:43:09 crc kubenswrapper[4861]: E0310 19:43:09.960334 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qttbr_openshift-machine-config-operator(771189c2-452d-4204-a0b7-abfe9ba62bd0)\"" pod="openshift-machine-config-operator/machine-config-daemon-qttbr" podUID="771189c2-452d-4204-a0b7-abfe9ba62bd0" Mar 10 19:43:20 crc kubenswrapper[4861]: I0310 19:43:20.959502 4861 scope.go:117] "RemoveContainer" containerID="badc305024bb48804f9edcbbbfb68dfba79ce39195a0e17ddfa2a9c4c010b995" Mar 10 19:43:20 crc kubenswrapper[4861]: E0310 19:43:20.962426 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qttbr_openshift-machine-config-operator(771189c2-452d-4204-a0b7-abfe9ba62bd0)\"" pod="openshift-machine-config-operator/machine-config-daemon-qttbr" podUID="771189c2-452d-4204-a0b7-abfe9ba62bd0" Mar 10 19:43:32 crc kubenswrapper[4861]: I0310 19:43:32.990220 4861 scope.go:117] "RemoveContainer" containerID="badc305024bb48804f9edcbbbfb68dfba79ce39195a0e17ddfa2a9c4c010b995" Mar 10 19:43:32 crc kubenswrapper[4861]: E0310 19:43:32.994314 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-qttbr_openshift-machine-config-operator(771189c2-452d-4204-a0b7-abfe9ba62bd0)\"" pod="openshift-machine-config-operator/machine-config-daemon-qttbr" podUID="771189c2-452d-4204-a0b7-abfe9ba62bd0" Mar 10 19:43:46 crc kubenswrapper[4861]: I0310 19:43:46.961238 4861 scope.go:117] "RemoveContainer" containerID="badc305024bb48804f9edcbbbfb68dfba79ce39195a0e17ddfa2a9c4c010b995" Mar 10 19:43:46 crc kubenswrapper[4861]: E0310 19:43:46.961925 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qttbr_openshift-machine-config-operator(771189c2-452d-4204-a0b7-abfe9ba62bd0)\"" pod="openshift-machine-config-operator/machine-config-daemon-qttbr" podUID="771189c2-452d-4204-a0b7-abfe9ba62bd0" Mar 10 19:44:00 crc kubenswrapper[4861]: I0310 19:44:00.162848 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29552864-4l8w4"] Mar 10 19:44:00 crc kubenswrapper[4861]: E0310 19:44:00.163957 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0a8f6013-54b6-4424-bcc5-6c72a9a1a60e" containerName="oc" Mar 10 19:44:00 crc kubenswrapper[4861]: I0310 19:44:00.163978 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="0a8f6013-54b6-4424-bcc5-6c72a9a1a60e" containerName="oc" Mar 10 19:44:00 crc kubenswrapper[4861]: I0310 19:44:00.164250 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="0a8f6013-54b6-4424-bcc5-6c72a9a1a60e" containerName="oc" Mar 10 19:44:00 crc kubenswrapper[4861]: I0310 19:44:00.165130 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29552864-4l8w4" Mar 10 19:44:00 crc kubenswrapper[4861]: I0310 19:44:00.168351 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 10 19:44:00 crc kubenswrapper[4861]: I0310 19:44:00.169966 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-gfbj2" Mar 10 19:44:00 crc kubenswrapper[4861]: I0310 19:44:00.172472 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b25lp\" (UniqueName: \"kubernetes.io/projected/25fa771b-ad83-484d-b82c-ff606764032f-kube-api-access-b25lp\") pod \"auto-csr-approver-29552864-4l8w4\" (UID: \"25fa771b-ad83-484d-b82c-ff606764032f\") " pod="openshift-infra/auto-csr-approver-29552864-4l8w4" Mar 10 19:44:00 crc kubenswrapper[4861]: I0310 19:44:00.176304 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 10 19:44:00 crc kubenswrapper[4861]: I0310 19:44:00.182335 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29552864-4l8w4"] Mar 10 19:44:00 crc kubenswrapper[4861]: I0310 19:44:00.274055 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b25lp\" (UniqueName: \"kubernetes.io/projected/25fa771b-ad83-484d-b82c-ff606764032f-kube-api-access-b25lp\") pod \"auto-csr-approver-29552864-4l8w4\" (UID: \"25fa771b-ad83-484d-b82c-ff606764032f\") " pod="openshift-infra/auto-csr-approver-29552864-4l8w4" Mar 10 19:44:00 crc kubenswrapper[4861]: I0310 19:44:00.306259 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b25lp\" (UniqueName: \"kubernetes.io/projected/25fa771b-ad83-484d-b82c-ff606764032f-kube-api-access-b25lp\") pod \"auto-csr-approver-29552864-4l8w4\" (UID: \"25fa771b-ad83-484d-b82c-ff606764032f\") " 
pod="openshift-infra/auto-csr-approver-29552864-4l8w4" Mar 10 19:44:00 crc kubenswrapper[4861]: I0310 19:44:00.500299 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29552864-4l8w4" Mar 10 19:44:00 crc kubenswrapper[4861]: I0310 19:44:00.808588 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29552864-4l8w4"] Mar 10 19:44:00 crc kubenswrapper[4861]: I0310 19:44:00.809172 4861 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 10 19:44:01 crc kubenswrapper[4861]: I0310 19:44:01.510799 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29552864-4l8w4" event={"ID":"25fa771b-ad83-484d-b82c-ff606764032f","Type":"ContainerStarted","Data":"142ffab4a4dbf0978e22e7d9c8c0a79ec2b142737076930cdc0eb54fab451995"} Mar 10 19:44:01 crc kubenswrapper[4861]: I0310 19:44:01.958688 4861 scope.go:117] "RemoveContainer" containerID="badc305024bb48804f9edcbbbfb68dfba79ce39195a0e17ddfa2a9c4c010b995" Mar 10 19:44:01 crc kubenswrapper[4861]: E0310 19:44:01.959727 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qttbr_openshift-machine-config-operator(771189c2-452d-4204-a0b7-abfe9ba62bd0)\"" pod="openshift-machine-config-operator/machine-config-daemon-qttbr" podUID="771189c2-452d-4204-a0b7-abfe9ba62bd0" Mar 10 19:44:03 crc kubenswrapper[4861]: I0310 19:44:03.534101 4861 generic.go:334] "Generic (PLEG): container finished" podID="25fa771b-ad83-484d-b82c-ff606764032f" containerID="2061ff19a10235bc6da631d2b0822fd074f799ba0a102aee2c2b674ec83e1868" exitCode=0 Mar 10 19:44:03 crc kubenswrapper[4861]: I0310 19:44:03.534220 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-infra/auto-csr-approver-29552864-4l8w4" event={"ID":"25fa771b-ad83-484d-b82c-ff606764032f","Type":"ContainerDied","Data":"2061ff19a10235bc6da631d2b0822fd074f799ba0a102aee2c2b674ec83e1868"} Mar 10 19:44:04 crc kubenswrapper[4861]: I0310 19:44:04.987141 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29552864-4l8w4" Mar 10 19:44:05 crc kubenswrapper[4861]: I0310 19:44:05.070447 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b25lp\" (UniqueName: \"kubernetes.io/projected/25fa771b-ad83-484d-b82c-ff606764032f-kube-api-access-b25lp\") pod \"25fa771b-ad83-484d-b82c-ff606764032f\" (UID: \"25fa771b-ad83-484d-b82c-ff606764032f\") " Mar 10 19:44:05 crc kubenswrapper[4861]: I0310 19:44:05.079920 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25fa771b-ad83-484d-b82c-ff606764032f-kube-api-access-b25lp" (OuterVolumeSpecName: "kube-api-access-b25lp") pod "25fa771b-ad83-484d-b82c-ff606764032f" (UID: "25fa771b-ad83-484d-b82c-ff606764032f"). InnerVolumeSpecName "kube-api-access-b25lp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 10 19:44:05 crc kubenswrapper[4861]: I0310 19:44:05.173513 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b25lp\" (UniqueName: \"kubernetes.io/projected/25fa771b-ad83-484d-b82c-ff606764032f-kube-api-access-b25lp\") on node \"crc\" DevicePath \"\"" Mar 10 19:44:05 crc kubenswrapper[4861]: I0310 19:44:05.560606 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29552864-4l8w4" event={"ID":"25fa771b-ad83-484d-b82c-ff606764032f","Type":"ContainerDied","Data":"142ffab4a4dbf0978e22e7d9c8c0a79ec2b142737076930cdc0eb54fab451995"} Mar 10 19:44:05 crc kubenswrapper[4861]: I0310 19:44:05.560667 4861 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="142ffab4a4dbf0978e22e7d9c8c0a79ec2b142737076930cdc0eb54fab451995" Mar 10 19:44:05 crc kubenswrapper[4861]: I0310 19:44:05.560684 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29552864-4l8w4" Mar 10 19:44:06 crc kubenswrapper[4861]: I0310 19:44:06.072435 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29552858-mtrc6"] Mar 10 19:44:06 crc kubenswrapper[4861]: I0310 19:44:06.076925 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29552858-mtrc6"] Mar 10 19:44:06 crc kubenswrapper[4861]: I0310 19:44:06.972297 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b0620f16-6cf4-4d06-88a4-48c8821d5113" path="/var/lib/kubelet/pods/b0620f16-6cf4-4d06-88a4-48c8821d5113/volumes" Mar 10 19:44:16 crc kubenswrapper[4861]: I0310 19:44:16.966152 4861 scope.go:117] "RemoveContainer" containerID="badc305024bb48804f9edcbbbfb68dfba79ce39195a0e17ddfa2a9c4c010b995" Mar 10 19:44:16 crc kubenswrapper[4861]: E0310 19:44:16.967260 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qttbr_openshift-machine-config-operator(771189c2-452d-4204-a0b7-abfe9ba62bd0)\"" pod="openshift-machine-config-operator/machine-config-daemon-qttbr" podUID="771189c2-452d-4204-a0b7-abfe9ba62bd0" Mar 10 19:44:27 crc kubenswrapper[4861]: I0310 19:44:27.958140 4861 scope.go:117] "RemoveContainer" containerID="badc305024bb48804f9edcbbbfb68dfba79ce39195a0e17ddfa2a9c4c010b995" Mar 10 19:44:27 crc kubenswrapper[4861]: E0310 19:44:27.959085 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qttbr_openshift-machine-config-operator(771189c2-452d-4204-a0b7-abfe9ba62bd0)\"" pod="openshift-machine-config-operator/machine-config-daemon-qttbr" podUID="771189c2-452d-4204-a0b7-abfe9ba62bd0" Mar 10 19:44:36 crc kubenswrapper[4861]: I0310 19:44:36.049497 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-wsgqx"] Mar 10 19:44:36 crc kubenswrapper[4861]: E0310 19:44:36.051274 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="25fa771b-ad83-484d-b82c-ff606764032f" containerName="oc" Mar 10 19:44:36 crc kubenswrapper[4861]: I0310 19:44:36.051334 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="25fa771b-ad83-484d-b82c-ff606764032f" containerName="oc" Mar 10 19:44:36 crc kubenswrapper[4861]: I0310 19:44:36.051766 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="25fa771b-ad83-484d-b82c-ff606764032f" containerName="oc" Mar 10 19:44:36 crc kubenswrapper[4861]: I0310 19:44:36.060308 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-wsgqx" Mar 10 19:44:36 crc kubenswrapper[4861]: I0310 19:44:36.068352 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-wsgqx"] Mar 10 19:44:36 crc kubenswrapper[4861]: I0310 19:44:36.183772 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c7c34557-c1c6-45e1-a4db-153d55c66277-utilities\") pod \"redhat-operators-wsgqx\" (UID: \"c7c34557-c1c6-45e1-a4db-153d55c66277\") " pod="openshift-marketplace/redhat-operators-wsgqx" Mar 10 19:44:36 crc kubenswrapper[4861]: I0310 19:44:36.183903 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c7c34557-c1c6-45e1-a4db-153d55c66277-catalog-content\") pod \"redhat-operators-wsgqx\" (UID: \"c7c34557-c1c6-45e1-a4db-153d55c66277\") " pod="openshift-marketplace/redhat-operators-wsgqx" Mar 10 19:44:36 crc kubenswrapper[4861]: I0310 19:44:36.183970 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6k2gk\" (UniqueName: \"kubernetes.io/projected/c7c34557-c1c6-45e1-a4db-153d55c66277-kube-api-access-6k2gk\") pod \"redhat-operators-wsgqx\" (UID: \"c7c34557-c1c6-45e1-a4db-153d55c66277\") " pod="openshift-marketplace/redhat-operators-wsgqx" Mar 10 19:44:36 crc kubenswrapper[4861]: I0310 19:44:36.285309 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c7c34557-c1c6-45e1-a4db-153d55c66277-utilities\") pod \"redhat-operators-wsgqx\" (UID: \"c7c34557-c1c6-45e1-a4db-153d55c66277\") " pod="openshift-marketplace/redhat-operators-wsgqx" Mar 10 19:44:36 crc kubenswrapper[4861]: I0310 19:44:36.285411 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c7c34557-c1c6-45e1-a4db-153d55c66277-catalog-content\") pod \"redhat-operators-wsgqx\" (UID: \"c7c34557-c1c6-45e1-a4db-153d55c66277\") " pod="openshift-marketplace/redhat-operators-wsgqx" Mar 10 19:44:36 crc kubenswrapper[4861]: I0310 19:44:36.285459 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6k2gk\" (UniqueName: \"kubernetes.io/projected/c7c34557-c1c6-45e1-a4db-153d55c66277-kube-api-access-6k2gk\") pod \"redhat-operators-wsgqx\" (UID: \"c7c34557-c1c6-45e1-a4db-153d55c66277\") " pod="openshift-marketplace/redhat-operators-wsgqx" Mar 10 19:44:36 crc kubenswrapper[4861]: I0310 19:44:36.286006 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c7c34557-c1c6-45e1-a4db-153d55c66277-utilities\") pod \"redhat-operators-wsgqx\" (UID: \"c7c34557-c1c6-45e1-a4db-153d55c66277\") " pod="openshift-marketplace/redhat-operators-wsgqx" Mar 10 19:44:36 crc kubenswrapper[4861]: I0310 19:44:36.286158 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c7c34557-c1c6-45e1-a4db-153d55c66277-catalog-content\") pod \"redhat-operators-wsgqx\" (UID: \"c7c34557-c1c6-45e1-a4db-153d55c66277\") " pod="openshift-marketplace/redhat-operators-wsgqx" Mar 10 19:44:36 crc kubenswrapper[4861]: I0310 19:44:36.307805 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6k2gk\" (UniqueName: \"kubernetes.io/projected/c7c34557-c1c6-45e1-a4db-153d55c66277-kube-api-access-6k2gk\") pod \"redhat-operators-wsgqx\" (UID: \"c7c34557-c1c6-45e1-a4db-153d55c66277\") " pod="openshift-marketplace/redhat-operators-wsgqx" Mar 10 19:44:36 crc kubenswrapper[4861]: I0310 19:44:36.393402 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-wsgqx" Mar 10 19:44:36 crc kubenswrapper[4861]: I0310 19:44:36.710823 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-wsgqx"] Mar 10 19:44:36 crc kubenswrapper[4861]: I0310 19:44:36.837615 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wsgqx" event={"ID":"c7c34557-c1c6-45e1-a4db-153d55c66277","Type":"ContainerStarted","Data":"275c623197e3ce41ebda9cd50493085018102e81717c53f90b889889d471d271"} Mar 10 19:44:37 crc kubenswrapper[4861]: I0310 19:44:37.849950 4861 generic.go:334] "Generic (PLEG): container finished" podID="c7c34557-c1c6-45e1-a4db-153d55c66277" containerID="b02c636a2bc1f7e9eec50bdeb615bffb32f2849f68c337ce520df353cc135421" exitCode=0 Mar 10 19:44:37 crc kubenswrapper[4861]: I0310 19:44:37.849996 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wsgqx" event={"ID":"c7c34557-c1c6-45e1-a4db-153d55c66277","Type":"ContainerDied","Data":"b02c636a2bc1f7e9eec50bdeb615bffb32f2849f68c337ce520df353cc135421"} Mar 10 19:44:38 crc kubenswrapper[4861]: I0310 19:44:38.861391 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wsgqx" event={"ID":"c7c34557-c1c6-45e1-a4db-153d55c66277","Type":"ContainerStarted","Data":"11b59f65071a3d83333a523ed46f11d3edbc1c7af4f20c1e593d2b9bd822b0a7"} Mar 10 19:44:39 crc kubenswrapper[4861]: I0310 19:44:39.872422 4861 generic.go:334] "Generic (PLEG): container finished" podID="c7c34557-c1c6-45e1-a4db-153d55c66277" containerID="11b59f65071a3d83333a523ed46f11d3edbc1c7af4f20c1e593d2b9bd822b0a7" exitCode=0 Mar 10 19:44:39 crc kubenswrapper[4861]: I0310 19:44:39.872532 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wsgqx" 
event={"ID":"c7c34557-c1c6-45e1-a4db-153d55c66277","Type":"ContainerDied","Data":"11b59f65071a3d83333a523ed46f11d3edbc1c7af4f20c1e593d2b9bd822b0a7"} Mar 10 19:44:41 crc kubenswrapper[4861]: I0310 19:44:41.889319 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wsgqx" event={"ID":"c7c34557-c1c6-45e1-a4db-153d55c66277","Type":"ContainerStarted","Data":"5412473480fed697a3d2d3fe6b741081c1480385959af6fee65ed77dc83c3af7"} Mar 10 19:44:41 crc kubenswrapper[4861]: I0310 19:44:41.926755 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-wsgqx" podStartSLOduration=2.849718774 podStartE2EDuration="5.926705509s" podCreationTimestamp="2026-03-10 19:44:36 +0000 UTC" firstStartedPulling="2026-03-10 19:44:37.853060888 +0000 UTC m=+3421.616496848" lastFinishedPulling="2026-03-10 19:44:40.930047593 +0000 UTC m=+3424.693483583" observedRunningTime="2026-03-10 19:44:41.918124498 +0000 UTC m=+3425.681560478" watchObservedRunningTime="2026-03-10 19:44:41.926705509 +0000 UTC m=+3425.690141499" Mar 10 19:44:41 crc kubenswrapper[4861]: I0310 19:44:41.959373 4861 scope.go:117] "RemoveContainer" containerID="badc305024bb48804f9edcbbbfb68dfba79ce39195a0e17ddfa2a9c4c010b995" Mar 10 19:44:41 crc kubenswrapper[4861]: E0310 19:44:41.959756 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qttbr_openshift-machine-config-operator(771189c2-452d-4204-a0b7-abfe9ba62bd0)\"" pod="openshift-machine-config-operator/machine-config-daemon-qttbr" podUID="771189c2-452d-4204-a0b7-abfe9ba62bd0" Mar 10 19:44:46 crc kubenswrapper[4861]: I0310 19:44:46.394292 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-wsgqx" Mar 10 19:44:46 crc 
kubenswrapper[4861]: I0310 19:44:46.396170 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-wsgqx" Mar 10 19:44:47 crc kubenswrapper[4861]: I0310 19:44:47.450518 4861 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-wsgqx" podUID="c7c34557-c1c6-45e1-a4db-153d55c66277" containerName="registry-server" probeResult="failure" output=< Mar 10 19:44:47 crc kubenswrapper[4861]: timeout: failed to connect service ":50051" within 1s Mar 10 19:44:47 crc kubenswrapper[4861]: > Mar 10 19:44:52 crc kubenswrapper[4861]: I0310 19:44:52.959072 4861 scope.go:117] "RemoveContainer" containerID="badc305024bb48804f9edcbbbfb68dfba79ce39195a0e17ddfa2a9c4c010b995" Mar 10 19:44:52 crc kubenswrapper[4861]: E0310 19:44:52.959969 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qttbr_openshift-machine-config-operator(771189c2-452d-4204-a0b7-abfe9ba62bd0)\"" pod="openshift-machine-config-operator/machine-config-daemon-qttbr" podUID="771189c2-452d-4204-a0b7-abfe9ba62bd0" Mar 10 19:44:55 crc kubenswrapper[4861]: I0310 19:44:55.022982 4861 scope.go:117] "RemoveContainer" containerID="dce78c005ab09647f63b2b4aac6737f128b5bc6e21e77121fbbe871822090713" Mar 10 19:44:56 crc kubenswrapper[4861]: I0310 19:44:56.464263 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-wsgqx" Mar 10 19:44:56 crc kubenswrapper[4861]: I0310 19:44:56.533474 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-wsgqx" Mar 10 19:45:00 crc kubenswrapper[4861]: I0310 19:45:00.053514 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-wsgqx"] Mar 10 19:45:00 crc 
kubenswrapper[4861]: I0310 19:45:00.054234 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-wsgqx" podUID="c7c34557-c1c6-45e1-a4db-153d55c66277" containerName="registry-server" containerID="cri-o://5412473480fed697a3d2d3fe6b741081c1480385959af6fee65ed77dc83c3af7" gracePeriod=2 Mar 10 19:45:00 crc kubenswrapper[4861]: I0310 19:45:00.162497 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29552865-k2rtz"] Mar 10 19:45:00 crc kubenswrapper[4861]: I0310 19:45:00.174352 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29552865-k2rtz"] Mar 10 19:45:00 crc kubenswrapper[4861]: I0310 19:45:00.174494 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29552865-k2rtz" Mar 10 19:45:00 crc kubenswrapper[4861]: I0310 19:45:00.182095 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Mar 10 19:45:00 crc kubenswrapper[4861]: I0310 19:45:00.182447 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Mar 10 19:45:00 crc kubenswrapper[4861]: I0310 19:45:00.344195 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qnb8c\" (UniqueName: \"kubernetes.io/projected/04d3018b-fb64-43b4-9b5b-b25be20cf05f-kube-api-access-qnb8c\") pod \"collect-profiles-29552865-k2rtz\" (UID: \"04d3018b-fb64-43b4-9b5b-b25be20cf05f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29552865-k2rtz" Mar 10 19:45:00 crc kubenswrapper[4861]: I0310 19:45:00.344342 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/04d3018b-fb64-43b4-9b5b-b25be20cf05f-config-volume\") pod \"collect-profiles-29552865-k2rtz\" (UID: \"04d3018b-fb64-43b4-9b5b-b25be20cf05f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29552865-k2rtz" Mar 10 19:45:00 crc kubenswrapper[4861]: I0310 19:45:00.344520 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/04d3018b-fb64-43b4-9b5b-b25be20cf05f-secret-volume\") pod \"collect-profiles-29552865-k2rtz\" (UID: \"04d3018b-fb64-43b4-9b5b-b25be20cf05f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29552865-k2rtz" Mar 10 19:45:00 crc kubenswrapper[4861]: I0310 19:45:00.448530 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qnb8c\" (UniqueName: \"kubernetes.io/projected/04d3018b-fb64-43b4-9b5b-b25be20cf05f-kube-api-access-qnb8c\") pod \"collect-profiles-29552865-k2rtz\" (UID: \"04d3018b-fb64-43b4-9b5b-b25be20cf05f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29552865-k2rtz" Mar 10 19:45:00 crc kubenswrapper[4861]: I0310 19:45:00.448598 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/04d3018b-fb64-43b4-9b5b-b25be20cf05f-config-volume\") pod \"collect-profiles-29552865-k2rtz\" (UID: \"04d3018b-fb64-43b4-9b5b-b25be20cf05f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29552865-k2rtz" Mar 10 19:45:00 crc kubenswrapper[4861]: I0310 19:45:00.448640 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/04d3018b-fb64-43b4-9b5b-b25be20cf05f-secret-volume\") pod \"collect-profiles-29552865-k2rtz\" (UID: \"04d3018b-fb64-43b4-9b5b-b25be20cf05f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29552865-k2rtz" Mar 10 19:45:00 crc kubenswrapper[4861]: 
I0310 19:45:00.450555 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/04d3018b-fb64-43b4-9b5b-b25be20cf05f-config-volume\") pod \"collect-profiles-29552865-k2rtz\" (UID: \"04d3018b-fb64-43b4-9b5b-b25be20cf05f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29552865-k2rtz" Mar 10 19:45:00 crc kubenswrapper[4861]: I0310 19:45:00.459168 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/04d3018b-fb64-43b4-9b5b-b25be20cf05f-secret-volume\") pod \"collect-profiles-29552865-k2rtz\" (UID: \"04d3018b-fb64-43b4-9b5b-b25be20cf05f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29552865-k2rtz" Mar 10 19:45:00 crc kubenswrapper[4861]: I0310 19:45:00.469597 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qnb8c\" (UniqueName: \"kubernetes.io/projected/04d3018b-fb64-43b4-9b5b-b25be20cf05f-kube-api-access-qnb8c\") pod \"collect-profiles-29552865-k2rtz\" (UID: \"04d3018b-fb64-43b4-9b5b-b25be20cf05f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29552865-k2rtz" Mar 10 19:45:00 crc kubenswrapper[4861]: I0310 19:45:00.521302 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-wsgqx" Mar 10 19:45:00 crc kubenswrapper[4861]: I0310 19:45:00.568970 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29552865-k2rtz" Mar 10 19:45:00 crc kubenswrapper[4861]: I0310 19:45:00.652662 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6k2gk\" (UniqueName: \"kubernetes.io/projected/c7c34557-c1c6-45e1-a4db-153d55c66277-kube-api-access-6k2gk\") pod \"c7c34557-c1c6-45e1-a4db-153d55c66277\" (UID: \"c7c34557-c1c6-45e1-a4db-153d55c66277\") " Mar 10 19:45:00 crc kubenswrapper[4861]: I0310 19:45:00.652781 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c7c34557-c1c6-45e1-a4db-153d55c66277-utilities\") pod \"c7c34557-c1c6-45e1-a4db-153d55c66277\" (UID: \"c7c34557-c1c6-45e1-a4db-153d55c66277\") " Mar 10 19:45:00 crc kubenswrapper[4861]: I0310 19:45:00.652826 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c7c34557-c1c6-45e1-a4db-153d55c66277-catalog-content\") pod \"c7c34557-c1c6-45e1-a4db-153d55c66277\" (UID: \"c7c34557-c1c6-45e1-a4db-153d55c66277\") " Mar 10 19:45:00 crc kubenswrapper[4861]: I0310 19:45:00.654375 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c7c34557-c1c6-45e1-a4db-153d55c66277-utilities" (OuterVolumeSpecName: "utilities") pod "c7c34557-c1c6-45e1-a4db-153d55c66277" (UID: "c7c34557-c1c6-45e1-a4db-153d55c66277"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 10 19:45:00 crc kubenswrapper[4861]: I0310 19:45:00.658604 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c7c34557-c1c6-45e1-a4db-153d55c66277-kube-api-access-6k2gk" (OuterVolumeSpecName: "kube-api-access-6k2gk") pod "c7c34557-c1c6-45e1-a4db-153d55c66277" (UID: "c7c34557-c1c6-45e1-a4db-153d55c66277"). 
InnerVolumeSpecName "kube-api-access-6k2gk". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 10 19:45:00 crc kubenswrapper[4861]: I0310 19:45:00.756088 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6k2gk\" (UniqueName: \"kubernetes.io/projected/c7c34557-c1c6-45e1-a4db-153d55c66277-kube-api-access-6k2gk\") on node \"crc\" DevicePath \"\"" Mar 10 19:45:00 crc kubenswrapper[4861]: I0310 19:45:00.756168 4861 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c7c34557-c1c6-45e1-a4db-153d55c66277-utilities\") on node \"crc\" DevicePath \"\"" Mar 10 19:45:00 crc kubenswrapper[4861]: I0310 19:45:00.834098 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c7c34557-c1c6-45e1-a4db-153d55c66277-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c7c34557-c1c6-45e1-a4db-153d55c66277" (UID: "c7c34557-c1c6-45e1-a4db-153d55c66277"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 10 19:45:00 crc kubenswrapper[4861]: I0310 19:45:00.857561 4861 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c7c34557-c1c6-45e1-a4db-153d55c66277-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 10 19:45:01 crc kubenswrapper[4861]: I0310 19:45:01.044056 4861 generic.go:334] "Generic (PLEG): container finished" podID="c7c34557-c1c6-45e1-a4db-153d55c66277" containerID="5412473480fed697a3d2d3fe6b741081c1480385959af6fee65ed77dc83c3af7" exitCode=0 Mar 10 19:45:01 crc kubenswrapper[4861]: I0310 19:45:01.044106 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wsgqx" event={"ID":"c7c34557-c1c6-45e1-a4db-153d55c66277","Type":"ContainerDied","Data":"5412473480fed697a3d2d3fe6b741081c1480385959af6fee65ed77dc83c3af7"} Mar 10 19:45:01 crc kubenswrapper[4861]: I0310 19:45:01.044138 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wsgqx" event={"ID":"c7c34557-c1c6-45e1-a4db-153d55c66277","Type":"ContainerDied","Data":"275c623197e3ce41ebda9cd50493085018102e81717c53f90b889889d471d271"} Mar 10 19:45:01 crc kubenswrapper[4861]: I0310 19:45:01.044144 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-wsgqx" Mar 10 19:45:01 crc kubenswrapper[4861]: I0310 19:45:01.044159 4861 scope.go:117] "RemoveContainer" containerID="5412473480fed697a3d2d3fe6b741081c1480385959af6fee65ed77dc83c3af7" Mar 10 19:45:01 crc kubenswrapper[4861]: I0310 19:45:01.070759 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-wsgqx"] Mar 10 19:45:01 crc kubenswrapper[4861]: I0310 19:45:01.072692 4861 scope.go:117] "RemoveContainer" containerID="11b59f65071a3d83333a523ed46f11d3edbc1c7af4f20c1e593d2b9bd822b0a7" Mar 10 19:45:01 crc kubenswrapper[4861]: I0310 19:45:01.077403 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-wsgqx"] Mar 10 19:45:01 crc kubenswrapper[4861]: I0310 19:45:01.092555 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29552865-k2rtz"] Mar 10 19:45:01 crc kubenswrapper[4861]: I0310 19:45:01.103321 4861 scope.go:117] "RemoveContainer" containerID="b02c636a2bc1f7e9eec50bdeb615bffb32f2849f68c337ce520df353cc135421" Mar 10 19:45:01 crc kubenswrapper[4861]: W0310 19:45:01.106524 4861 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod04d3018b_fb64_43b4_9b5b_b25be20cf05f.slice/crio-2427d02500b72a7c8da78796392ebefa0a72980d094ded6243af0e6f29f0bb8e WatchSource:0}: Error finding container 2427d02500b72a7c8da78796392ebefa0a72980d094ded6243af0e6f29f0bb8e: Status 404 returned error can't find the container with id 2427d02500b72a7c8da78796392ebefa0a72980d094ded6243af0e6f29f0bb8e Mar 10 19:45:01 crc kubenswrapper[4861]: I0310 19:45:01.120607 4861 scope.go:117] "RemoveContainer" containerID="5412473480fed697a3d2d3fe6b741081c1480385959af6fee65ed77dc83c3af7" Mar 10 19:45:01 crc kubenswrapper[4861]: E0310 19:45:01.120975 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc 
error: code = NotFound desc = could not find container \"5412473480fed697a3d2d3fe6b741081c1480385959af6fee65ed77dc83c3af7\": container with ID starting with 5412473480fed697a3d2d3fe6b741081c1480385959af6fee65ed77dc83c3af7 not found: ID does not exist" containerID="5412473480fed697a3d2d3fe6b741081c1480385959af6fee65ed77dc83c3af7" Mar 10 19:45:01 crc kubenswrapper[4861]: I0310 19:45:01.121014 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5412473480fed697a3d2d3fe6b741081c1480385959af6fee65ed77dc83c3af7"} err="failed to get container status \"5412473480fed697a3d2d3fe6b741081c1480385959af6fee65ed77dc83c3af7\": rpc error: code = NotFound desc = could not find container \"5412473480fed697a3d2d3fe6b741081c1480385959af6fee65ed77dc83c3af7\": container with ID starting with 5412473480fed697a3d2d3fe6b741081c1480385959af6fee65ed77dc83c3af7 not found: ID does not exist" Mar 10 19:45:01 crc kubenswrapper[4861]: I0310 19:45:01.121039 4861 scope.go:117] "RemoveContainer" containerID="11b59f65071a3d83333a523ed46f11d3edbc1c7af4f20c1e593d2b9bd822b0a7" Mar 10 19:45:01 crc kubenswrapper[4861]: E0310 19:45:01.121445 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"11b59f65071a3d83333a523ed46f11d3edbc1c7af4f20c1e593d2b9bd822b0a7\": container with ID starting with 11b59f65071a3d83333a523ed46f11d3edbc1c7af4f20c1e593d2b9bd822b0a7 not found: ID does not exist" containerID="11b59f65071a3d83333a523ed46f11d3edbc1c7af4f20c1e593d2b9bd822b0a7" Mar 10 19:45:01 crc kubenswrapper[4861]: I0310 19:45:01.121474 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"11b59f65071a3d83333a523ed46f11d3edbc1c7af4f20c1e593d2b9bd822b0a7"} err="failed to get container status \"11b59f65071a3d83333a523ed46f11d3edbc1c7af4f20c1e593d2b9bd822b0a7\": rpc error: code = NotFound desc = could not find container 
\"11b59f65071a3d83333a523ed46f11d3edbc1c7af4f20c1e593d2b9bd822b0a7\": container with ID starting with 11b59f65071a3d83333a523ed46f11d3edbc1c7af4f20c1e593d2b9bd822b0a7 not found: ID does not exist" Mar 10 19:45:01 crc kubenswrapper[4861]: I0310 19:45:01.121491 4861 scope.go:117] "RemoveContainer" containerID="b02c636a2bc1f7e9eec50bdeb615bffb32f2849f68c337ce520df353cc135421" Mar 10 19:45:01 crc kubenswrapper[4861]: E0310 19:45:01.121782 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b02c636a2bc1f7e9eec50bdeb615bffb32f2849f68c337ce520df353cc135421\": container with ID starting with b02c636a2bc1f7e9eec50bdeb615bffb32f2849f68c337ce520df353cc135421 not found: ID does not exist" containerID="b02c636a2bc1f7e9eec50bdeb615bffb32f2849f68c337ce520df353cc135421" Mar 10 19:45:01 crc kubenswrapper[4861]: I0310 19:45:01.121812 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b02c636a2bc1f7e9eec50bdeb615bffb32f2849f68c337ce520df353cc135421"} err="failed to get container status \"b02c636a2bc1f7e9eec50bdeb615bffb32f2849f68c337ce520df353cc135421\": rpc error: code = NotFound desc = could not find container \"b02c636a2bc1f7e9eec50bdeb615bffb32f2849f68c337ce520df353cc135421\": container with ID starting with b02c636a2bc1f7e9eec50bdeb615bffb32f2849f68c337ce520df353cc135421 not found: ID does not exist" Mar 10 19:45:02 crc kubenswrapper[4861]: I0310 19:45:02.056517 4861 generic.go:334] "Generic (PLEG): container finished" podID="04d3018b-fb64-43b4-9b5b-b25be20cf05f" containerID="83fb41570d20f9736efbf7e862f77c7b7784626efeac27a74504134db5d2a9f2" exitCode=0 Mar 10 19:45:02 crc kubenswrapper[4861]: I0310 19:45:02.056601 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29552865-k2rtz" 
event={"ID":"04d3018b-fb64-43b4-9b5b-b25be20cf05f","Type":"ContainerDied","Data":"83fb41570d20f9736efbf7e862f77c7b7784626efeac27a74504134db5d2a9f2"} Mar 10 19:45:02 crc kubenswrapper[4861]: I0310 19:45:02.056975 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29552865-k2rtz" event={"ID":"04d3018b-fb64-43b4-9b5b-b25be20cf05f","Type":"ContainerStarted","Data":"2427d02500b72a7c8da78796392ebefa0a72980d094ded6243af0e6f29f0bb8e"} Mar 10 19:45:02 crc kubenswrapper[4861]: I0310 19:45:02.974087 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c7c34557-c1c6-45e1-a4db-153d55c66277" path="/var/lib/kubelet/pods/c7c34557-c1c6-45e1-a4db-153d55c66277/volumes" Mar 10 19:45:03 crc kubenswrapper[4861]: I0310 19:45:03.448317 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29552865-k2rtz" Mar 10 19:45:03 crc kubenswrapper[4861]: I0310 19:45:03.617445 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/04d3018b-fb64-43b4-9b5b-b25be20cf05f-secret-volume\") pod \"04d3018b-fb64-43b4-9b5b-b25be20cf05f\" (UID: \"04d3018b-fb64-43b4-9b5b-b25be20cf05f\") " Mar 10 19:45:03 crc kubenswrapper[4861]: I0310 19:45:03.617855 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qnb8c\" (UniqueName: \"kubernetes.io/projected/04d3018b-fb64-43b4-9b5b-b25be20cf05f-kube-api-access-qnb8c\") pod \"04d3018b-fb64-43b4-9b5b-b25be20cf05f\" (UID: \"04d3018b-fb64-43b4-9b5b-b25be20cf05f\") " Mar 10 19:45:03 crc kubenswrapper[4861]: I0310 19:45:03.617941 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/04d3018b-fb64-43b4-9b5b-b25be20cf05f-config-volume\") pod \"04d3018b-fb64-43b4-9b5b-b25be20cf05f\" (UID: 
\"04d3018b-fb64-43b4-9b5b-b25be20cf05f\") " Mar 10 19:45:03 crc kubenswrapper[4861]: I0310 19:45:03.619009 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/04d3018b-fb64-43b4-9b5b-b25be20cf05f-config-volume" (OuterVolumeSpecName: "config-volume") pod "04d3018b-fb64-43b4-9b5b-b25be20cf05f" (UID: "04d3018b-fb64-43b4-9b5b-b25be20cf05f"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 10 19:45:03 crc kubenswrapper[4861]: I0310 19:45:03.623830 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/04d3018b-fb64-43b4-9b5b-b25be20cf05f-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "04d3018b-fb64-43b4-9b5b-b25be20cf05f" (UID: "04d3018b-fb64-43b4-9b5b-b25be20cf05f"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 10 19:45:03 crc kubenswrapper[4861]: I0310 19:45:03.626511 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/04d3018b-fb64-43b4-9b5b-b25be20cf05f-kube-api-access-qnb8c" (OuterVolumeSpecName: "kube-api-access-qnb8c") pod "04d3018b-fb64-43b4-9b5b-b25be20cf05f" (UID: "04d3018b-fb64-43b4-9b5b-b25be20cf05f"). InnerVolumeSpecName "kube-api-access-qnb8c". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 10 19:45:03 crc kubenswrapper[4861]: I0310 19:45:03.720328 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qnb8c\" (UniqueName: \"kubernetes.io/projected/04d3018b-fb64-43b4-9b5b-b25be20cf05f-kube-api-access-qnb8c\") on node \"crc\" DevicePath \"\"" Mar 10 19:45:03 crc kubenswrapper[4861]: I0310 19:45:03.720387 4861 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/04d3018b-fb64-43b4-9b5b-b25be20cf05f-config-volume\") on node \"crc\" DevicePath \"\"" Mar 10 19:45:03 crc kubenswrapper[4861]: I0310 19:45:03.720407 4861 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/04d3018b-fb64-43b4-9b5b-b25be20cf05f-secret-volume\") on node \"crc\" DevicePath \"\"" Mar 10 19:45:04 crc kubenswrapper[4861]: I0310 19:45:04.077526 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29552865-k2rtz" event={"ID":"04d3018b-fb64-43b4-9b5b-b25be20cf05f","Type":"ContainerDied","Data":"2427d02500b72a7c8da78796392ebefa0a72980d094ded6243af0e6f29f0bb8e"} Mar 10 19:45:04 crc kubenswrapper[4861]: I0310 19:45:04.077583 4861 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2427d02500b72a7c8da78796392ebefa0a72980d094ded6243af0e6f29f0bb8e" Mar 10 19:45:04 crc kubenswrapper[4861]: I0310 19:45:04.078568 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29552865-k2rtz" Mar 10 19:45:04 crc kubenswrapper[4861]: I0310 19:45:04.546093 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29552820-ng94c"] Mar 10 19:45:04 crc kubenswrapper[4861]: I0310 19:45:04.556888 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29552820-ng94c"] Mar 10 19:45:04 crc kubenswrapper[4861]: I0310 19:45:04.984777 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd7e5d0b-6c31-4cb3-9162-337466c63f12" path="/var/lib/kubelet/pods/bd7e5d0b-6c31-4cb3-9162-337466c63f12/volumes" Mar 10 19:45:06 crc kubenswrapper[4861]: I0310 19:45:06.967154 4861 scope.go:117] "RemoveContainer" containerID="badc305024bb48804f9edcbbbfb68dfba79ce39195a0e17ddfa2a9c4c010b995" Mar 10 19:45:06 crc kubenswrapper[4861]: E0310 19:45:06.967740 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qttbr_openshift-machine-config-operator(771189c2-452d-4204-a0b7-abfe9ba62bd0)\"" pod="openshift-machine-config-operator/machine-config-daemon-qttbr" podUID="771189c2-452d-4204-a0b7-abfe9ba62bd0" Mar 10 19:45:21 crc kubenswrapper[4861]: I0310 19:45:21.958935 4861 scope.go:117] "RemoveContainer" containerID="badc305024bb48804f9edcbbbfb68dfba79ce39195a0e17ddfa2a9c4c010b995" Mar 10 19:45:21 crc kubenswrapper[4861]: E0310 19:45:21.959860 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qttbr_openshift-machine-config-operator(771189c2-452d-4204-a0b7-abfe9ba62bd0)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-qttbr" podUID="771189c2-452d-4204-a0b7-abfe9ba62bd0" Mar 10 19:45:33 crc kubenswrapper[4861]: I0310 19:45:33.958123 4861 scope.go:117] "RemoveContainer" containerID="badc305024bb48804f9edcbbbfb68dfba79ce39195a0e17ddfa2a9c4c010b995" Mar 10 19:45:33 crc kubenswrapper[4861]: E0310 19:45:33.959931 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qttbr_openshift-machine-config-operator(771189c2-452d-4204-a0b7-abfe9ba62bd0)\"" pod="openshift-machine-config-operator/machine-config-daemon-qttbr" podUID="771189c2-452d-4204-a0b7-abfe9ba62bd0" Mar 10 19:45:45 crc kubenswrapper[4861]: I0310 19:45:45.957985 4861 scope.go:117] "RemoveContainer" containerID="badc305024bb48804f9edcbbbfb68dfba79ce39195a0e17ddfa2a9c4c010b995" Mar 10 19:45:45 crc kubenswrapper[4861]: E0310 19:45:45.959013 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qttbr_openshift-machine-config-operator(771189c2-452d-4204-a0b7-abfe9ba62bd0)\"" pod="openshift-machine-config-operator/machine-config-daemon-qttbr" podUID="771189c2-452d-4204-a0b7-abfe9ba62bd0" Mar 10 19:45:55 crc kubenswrapper[4861]: I0310 19:45:55.126168 4861 scope.go:117] "RemoveContainer" containerID="c0d336064e9831a1391c24b511f44491c5ac01c912730ce7ba147a185fbd3cd0" Mar 10 19:45:57 crc kubenswrapper[4861]: I0310 19:45:57.958981 4861 scope.go:117] "RemoveContainer" containerID="badc305024bb48804f9edcbbbfb68dfba79ce39195a0e17ddfa2a9c4c010b995" Mar 10 19:45:57 crc kubenswrapper[4861]: E0310 19:45:57.959780 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" 
with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qttbr_openshift-machine-config-operator(771189c2-452d-4204-a0b7-abfe9ba62bd0)\"" pod="openshift-machine-config-operator/machine-config-daemon-qttbr" podUID="771189c2-452d-4204-a0b7-abfe9ba62bd0" Mar 10 19:46:00 crc kubenswrapper[4861]: I0310 19:46:00.167168 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29552866-lprmd"] Mar 10 19:46:00 crc kubenswrapper[4861]: E0310 19:46:00.203498 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c7c34557-c1c6-45e1-a4db-153d55c66277" containerName="extract-utilities" Mar 10 19:46:00 crc kubenswrapper[4861]: I0310 19:46:00.203548 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="c7c34557-c1c6-45e1-a4db-153d55c66277" containerName="extract-utilities" Mar 10 19:46:00 crc kubenswrapper[4861]: E0310 19:46:00.203579 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="04d3018b-fb64-43b4-9b5b-b25be20cf05f" containerName="collect-profiles" Mar 10 19:46:00 crc kubenswrapper[4861]: I0310 19:46:00.203594 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="04d3018b-fb64-43b4-9b5b-b25be20cf05f" containerName="collect-profiles" Mar 10 19:46:00 crc kubenswrapper[4861]: E0310 19:46:00.203619 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c7c34557-c1c6-45e1-a4db-153d55c66277" containerName="registry-server" Mar 10 19:46:00 crc kubenswrapper[4861]: I0310 19:46:00.203633 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="c7c34557-c1c6-45e1-a4db-153d55c66277" containerName="registry-server" Mar 10 19:46:00 crc kubenswrapper[4861]: E0310 19:46:00.203697 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c7c34557-c1c6-45e1-a4db-153d55c66277" containerName="extract-content" Mar 10 19:46:00 crc kubenswrapper[4861]: I0310 19:46:00.203809 4861 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="c7c34557-c1c6-45e1-a4db-153d55c66277" containerName="extract-content" Mar 10 19:46:00 crc kubenswrapper[4861]: I0310 19:46:00.204053 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="c7c34557-c1c6-45e1-a4db-153d55c66277" containerName="registry-server" Mar 10 19:46:00 crc kubenswrapper[4861]: I0310 19:46:00.204092 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="04d3018b-fb64-43b4-9b5b-b25be20cf05f" containerName="collect-profiles" Mar 10 19:46:00 crc kubenswrapper[4861]: I0310 19:46:00.204576 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29552866-lprmd"] Mar 10 19:46:00 crc kubenswrapper[4861]: I0310 19:46:00.204670 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29552866-lprmd" Mar 10 19:46:00 crc kubenswrapper[4861]: I0310 19:46:00.206981 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 10 19:46:00 crc kubenswrapper[4861]: I0310 19:46:00.209239 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 10 19:46:00 crc kubenswrapper[4861]: I0310 19:46:00.209294 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-gfbj2" Mar 10 19:46:00 crc kubenswrapper[4861]: I0310 19:46:00.310557 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rd545\" (UniqueName: \"kubernetes.io/projected/2433ead5-1028-47df-9265-419d3820c352-kube-api-access-rd545\") pod \"auto-csr-approver-29552866-lprmd\" (UID: \"2433ead5-1028-47df-9265-419d3820c352\") " pod="openshift-infra/auto-csr-approver-29552866-lprmd" Mar 10 19:46:00 crc kubenswrapper[4861]: I0310 19:46:00.411851 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rd545\" 
(UniqueName: \"kubernetes.io/projected/2433ead5-1028-47df-9265-419d3820c352-kube-api-access-rd545\") pod \"auto-csr-approver-29552866-lprmd\" (UID: \"2433ead5-1028-47df-9265-419d3820c352\") " pod="openshift-infra/auto-csr-approver-29552866-lprmd" Mar 10 19:46:00 crc kubenswrapper[4861]: I0310 19:46:00.437663 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rd545\" (UniqueName: \"kubernetes.io/projected/2433ead5-1028-47df-9265-419d3820c352-kube-api-access-rd545\") pod \"auto-csr-approver-29552866-lprmd\" (UID: \"2433ead5-1028-47df-9265-419d3820c352\") " pod="openshift-infra/auto-csr-approver-29552866-lprmd" Mar 10 19:46:00 crc kubenswrapper[4861]: I0310 19:46:00.536375 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29552866-lprmd" Mar 10 19:46:00 crc kubenswrapper[4861]: I0310 19:46:00.835260 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29552866-lprmd"] Mar 10 19:46:01 crc kubenswrapper[4861]: I0310 19:46:01.593769 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29552866-lprmd" event={"ID":"2433ead5-1028-47df-9265-419d3820c352","Type":"ContainerStarted","Data":"acf6a3fc1e71ae69af1a7f6bd8f9547baf857d4387077992d78ef34f5e0b7367"} Mar 10 19:46:03 crc kubenswrapper[4861]: I0310 19:46:03.616674 4861 generic.go:334] "Generic (PLEG): container finished" podID="2433ead5-1028-47df-9265-419d3820c352" containerID="a62d8d859e639a2cf5321b4cba7335c43c50ecaa433a0194dd1a7b8d03429412" exitCode=0 Mar 10 19:46:03 crc kubenswrapper[4861]: I0310 19:46:03.616770 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29552866-lprmd" event={"ID":"2433ead5-1028-47df-9265-419d3820c352","Type":"ContainerDied","Data":"a62d8d859e639a2cf5321b4cba7335c43c50ecaa433a0194dd1a7b8d03429412"} Mar 10 19:46:05 crc kubenswrapper[4861]: I0310 19:46:05.040634 4861 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29552866-lprmd" Mar 10 19:46:05 crc kubenswrapper[4861]: I0310 19:46:05.210347 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rd545\" (UniqueName: \"kubernetes.io/projected/2433ead5-1028-47df-9265-419d3820c352-kube-api-access-rd545\") pod \"2433ead5-1028-47df-9265-419d3820c352\" (UID: \"2433ead5-1028-47df-9265-419d3820c352\") " Mar 10 19:46:05 crc kubenswrapper[4861]: I0310 19:46:05.219148 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2433ead5-1028-47df-9265-419d3820c352-kube-api-access-rd545" (OuterVolumeSpecName: "kube-api-access-rd545") pod "2433ead5-1028-47df-9265-419d3820c352" (UID: "2433ead5-1028-47df-9265-419d3820c352"). InnerVolumeSpecName "kube-api-access-rd545". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 10 19:46:05 crc kubenswrapper[4861]: I0310 19:46:05.312943 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rd545\" (UniqueName: \"kubernetes.io/projected/2433ead5-1028-47df-9265-419d3820c352-kube-api-access-rd545\") on node \"crc\" DevicePath \"\"" Mar 10 19:46:05 crc kubenswrapper[4861]: I0310 19:46:05.639374 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29552866-lprmd" event={"ID":"2433ead5-1028-47df-9265-419d3820c352","Type":"ContainerDied","Data":"acf6a3fc1e71ae69af1a7f6bd8f9547baf857d4387077992d78ef34f5e0b7367"} Mar 10 19:46:05 crc kubenswrapper[4861]: I0310 19:46:05.639448 4861 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="acf6a3fc1e71ae69af1a7f6bd8f9547baf857d4387077992d78ef34f5e0b7367" Mar 10 19:46:05 crc kubenswrapper[4861]: I0310 19:46:05.639547 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29552866-lprmd" Mar 10 19:46:06 crc kubenswrapper[4861]: I0310 19:46:06.128857 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29552860-qcxxd"] Mar 10 19:46:06 crc kubenswrapper[4861]: I0310 19:46:06.138158 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29552860-qcxxd"] Mar 10 19:46:06 crc kubenswrapper[4861]: I0310 19:46:06.979409 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="97e1e534-3448-4c6d-aa2b-88fde11e4996" path="/var/lib/kubelet/pods/97e1e534-3448-4c6d-aa2b-88fde11e4996/volumes" Mar 10 19:46:09 crc kubenswrapper[4861]: I0310 19:46:09.959963 4861 scope.go:117] "RemoveContainer" containerID="badc305024bb48804f9edcbbbfb68dfba79ce39195a0e17ddfa2a9c4c010b995" Mar 10 19:46:09 crc kubenswrapper[4861]: E0310 19:46:09.960677 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qttbr_openshift-machine-config-operator(771189c2-452d-4204-a0b7-abfe9ba62bd0)\"" pod="openshift-machine-config-operator/machine-config-daemon-qttbr" podUID="771189c2-452d-4204-a0b7-abfe9ba62bd0" Mar 10 19:46:22 crc kubenswrapper[4861]: I0310 19:46:22.958944 4861 scope.go:117] "RemoveContainer" containerID="badc305024bb48804f9edcbbbfb68dfba79ce39195a0e17ddfa2a9c4c010b995" Mar 10 19:46:23 crc kubenswrapper[4861]: I0310 19:46:23.839254 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qttbr" event={"ID":"771189c2-452d-4204-a0b7-abfe9ba62bd0","Type":"ContainerStarted","Data":"92e222993ea7edea1c699227d967e623362b40785278c4bc6a03eaf762ab3615"} Mar 10 19:46:55 crc kubenswrapper[4861]: I0310 19:46:55.217231 4861 scope.go:117] "RemoveContainer" 
containerID="006e9cae8eb2539480672ac5ccec13f59bb3835e58a2c030d759278a5d7dfeb4" Mar 10 19:47:21 crc kubenswrapper[4861]: I0310 19:47:21.079483 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-kblnq"] Mar 10 19:47:21 crc kubenswrapper[4861]: E0310 19:47:21.081611 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2433ead5-1028-47df-9265-419d3820c352" containerName="oc" Mar 10 19:47:21 crc kubenswrapper[4861]: I0310 19:47:21.081642 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="2433ead5-1028-47df-9265-419d3820c352" containerName="oc" Mar 10 19:47:21 crc kubenswrapper[4861]: I0310 19:47:21.081925 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="2433ead5-1028-47df-9265-419d3820c352" containerName="oc" Mar 10 19:47:21 crc kubenswrapper[4861]: I0310 19:47:21.083507 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-kblnq" Mar 10 19:47:21 crc kubenswrapper[4861]: I0310 19:47:21.106308 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-kblnq"] Mar 10 19:47:21 crc kubenswrapper[4861]: I0310 19:47:21.147669 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/180be34d-fa26-46cc-bd23-24adfdc6c869-utilities\") pod \"redhat-marketplace-kblnq\" (UID: \"180be34d-fa26-46cc-bd23-24adfdc6c869\") " pod="openshift-marketplace/redhat-marketplace-kblnq" Mar 10 19:47:21 crc kubenswrapper[4861]: I0310 19:47:21.147801 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/180be34d-fa26-46cc-bd23-24adfdc6c869-catalog-content\") pod \"redhat-marketplace-kblnq\" (UID: \"180be34d-fa26-46cc-bd23-24adfdc6c869\") " pod="openshift-marketplace/redhat-marketplace-kblnq" Mar 10 
19:47:21 crc kubenswrapper[4861]: I0310 19:47:21.147979 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x62gh\" (UniqueName: \"kubernetes.io/projected/180be34d-fa26-46cc-bd23-24adfdc6c869-kube-api-access-x62gh\") pod \"redhat-marketplace-kblnq\" (UID: \"180be34d-fa26-46cc-bd23-24adfdc6c869\") " pod="openshift-marketplace/redhat-marketplace-kblnq" Mar 10 19:47:21 crc kubenswrapper[4861]: I0310 19:47:21.249406 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x62gh\" (UniqueName: \"kubernetes.io/projected/180be34d-fa26-46cc-bd23-24adfdc6c869-kube-api-access-x62gh\") pod \"redhat-marketplace-kblnq\" (UID: \"180be34d-fa26-46cc-bd23-24adfdc6c869\") " pod="openshift-marketplace/redhat-marketplace-kblnq" Mar 10 19:47:21 crc kubenswrapper[4861]: I0310 19:47:21.249612 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/180be34d-fa26-46cc-bd23-24adfdc6c869-utilities\") pod \"redhat-marketplace-kblnq\" (UID: \"180be34d-fa26-46cc-bd23-24adfdc6c869\") " pod="openshift-marketplace/redhat-marketplace-kblnq" Mar 10 19:47:21 crc kubenswrapper[4861]: I0310 19:47:21.249673 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/180be34d-fa26-46cc-bd23-24adfdc6c869-catalog-content\") pod \"redhat-marketplace-kblnq\" (UID: \"180be34d-fa26-46cc-bd23-24adfdc6c869\") " pod="openshift-marketplace/redhat-marketplace-kblnq" Mar 10 19:47:21 crc kubenswrapper[4861]: I0310 19:47:21.250406 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/180be34d-fa26-46cc-bd23-24adfdc6c869-utilities\") pod \"redhat-marketplace-kblnq\" (UID: \"180be34d-fa26-46cc-bd23-24adfdc6c869\") " pod="openshift-marketplace/redhat-marketplace-kblnq" Mar 10 19:47:21 crc 
kubenswrapper[4861]: I0310 19:47:21.250456 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/180be34d-fa26-46cc-bd23-24adfdc6c869-catalog-content\") pod \"redhat-marketplace-kblnq\" (UID: \"180be34d-fa26-46cc-bd23-24adfdc6c869\") " pod="openshift-marketplace/redhat-marketplace-kblnq" Mar 10 19:47:21 crc kubenswrapper[4861]: I0310 19:47:21.273834 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x62gh\" (UniqueName: \"kubernetes.io/projected/180be34d-fa26-46cc-bd23-24adfdc6c869-kube-api-access-x62gh\") pod \"redhat-marketplace-kblnq\" (UID: \"180be34d-fa26-46cc-bd23-24adfdc6c869\") " pod="openshift-marketplace/redhat-marketplace-kblnq" Mar 10 19:47:21 crc kubenswrapper[4861]: I0310 19:47:21.415653 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-kblnq" Mar 10 19:47:21 crc kubenswrapper[4861]: I0310 19:47:21.862827 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-kblnq"] Mar 10 19:47:21 crc kubenswrapper[4861]: W0310 19:47:21.867960 4861 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod180be34d_fa26_46cc_bd23_24adfdc6c869.slice/crio-48a7e51ce8004ab3546e90229135d377bdc71d4e750a833a563e3b5361304016 WatchSource:0}: Error finding container 48a7e51ce8004ab3546e90229135d377bdc71d4e750a833a563e3b5361304016: Status 404 returned error can't find the container with id 48a7e51ce8004ab3546e90229135d377bdc71d4e750a833a563e3b5361304016 Mar 10 19:47:22 crc kubenswrapper[4861]: I0310 19:47:22.395881 4861 generic.go:334] "Generic (PLEG): container finished" podID="180be34d-fa26-46cc-bd23-24adfdc6c869" containerID="96f771e865daa41b84465a21b0d441f8401c56faee8e89ce068ae601a5abe28d" exitCode=0 Mar 10 19:47:22 crc kubenswrapper[4861]: I0310 19:47:22.395987 4861 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kblnq" event={"ID":"180be34d-fa26-46cc-bd23-24adfdc6c869","Type":"ContainerDied","Data":"96f771e865daa41b84465a21b0d441f8401c56faee8e89ce068ae601a5abe28d"} Mar 10 19:47:22 crc kubenswrapper[4861]: I0310 19:47:22.396292 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kblnq" event={"ID":"180be34d-fa26-46cc-bd23-24adfdc6c869","Type":"ContainerStarted","Data":"48a7e51ce8004ab3546e90229135d377bdc71d4e750a833a563e3b5361304016"} Mar 10 19:47:23 crc kubenswrapper[4861]: I0310 19:47:23.408561 4861 generic.go:334] "Generic (PLEG): container finished" podID="180be34d-fa26-46cc-bd23-24adfdc6c869" containerID="e00d21d64a6c710cd419bd6d0430b84242d0ac03d389b466dc566a777771c48c" exitCode=0 Mar 10 19:47:23 crc kubenswrapper[4861]: I0310 19:47:23.408626 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kblnq" event={"ID":"180be34d-fa26-46cc-bd23-24adfdc6c869","Type":"ContainerDied","Data":"e00d21d64a6c710cd419bd6d0430b84242d0ac03d389b466dc566a777771c48c"} Mar 10 19:47:25 crc kubenswrapper[4861]: I0310 19:47:25.430801 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kblnq" event={"ID":"180be34d-fa26-46cc-bd23-24adfdc6c869","Type":"ContainerStarted","Data":"5456f0bee04731f43abef056f8baa19149fb27f4adefd2ce3ef9c56632cae22c"} Mar 10 19:47:25 crc kubenswrapper[4861]: I0310 19:47:25.464899 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-kblnq" podStartSLOduration=2.609666379 podStartE2EDuration="4.46486278s" podCreationTimestamp="2026-03-10 19:47:21 +0000 UTC" firstStartedPulling="2026-03-10 19:47:22.400251388 +0000 UTC m=+3586.163687378" lastFinishedPulling="2026-03-10 19:47:24.255447779 +0000 UTC m=+3588.018883779" observedRunningTime="2026-03-10 19:47:25.460160022 +0000 
UTC m=+3589.223596032" watchObservedRunningTime="2026-03-10 19:47:25.46486278 +0000 UTC m=+3589.228298790" Mar 10 19:47:31 crc kubenswrapper[4861]: I0310 19:47:31.416765 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-kblnq" Mar 10 19:47:31 crc kubenswrapper[4861]: I0310 19:47:31.417389 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-kblnq" Mar 10 19:47:31 crc kubenswrapper[4861]: I0310 19:47:31.499177 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-kblnq" Mar 10 19:47:31 crc kubenswrapper[4861]: I0310 19:47:31.581295 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-kblnq" Mar 10 19:47:31 crc kubenswrapper[4861]: I0310 19:47:31.748423 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-kblnq"] Mar 10 19:47:33 crc kubenswrapper[4861]: I0310 19:47:33.508851 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-kblnq" podUID="180be34d-fa26-46cc-bd23-24adfdc6c869" containerName="registry-server" containerID="cri-o://5456f0bee04731f43abef056f8baa19149fb27f4adefd2ce3ef9c56632cae22c" gracePeriod=2 Mar 10 19:47:34 crc kubenswrapper[4861]: I0310 19:47:34.028272 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-kblnq" Mar 10 19:47:34 crc kubenswrapper[4861]: I0310 19:47:34.174711 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/180be34d-fa26-46cc-bd23-24adfdc6c869-utilities\") pod \"180be34d-fa26-46cc-bd23-24adfdc6c869\" (UID: \"180be34d-fa26-46cc-bd23-24adfdc6c869\") " Mar 10 19:47:34 crc kubenswrapper[4861]: I0310 19:47:34.174845 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x62gh\" (UniqueName: \"kubernetes.io/projected/180be34d-fa26-46cc-bd23-24adfdc6c869-kube-api-access-x62gh\") pod \"180be34d-fa26-46cc-bd23-24adfdc6c869\" (UID: \"180be34d-fa26-46cc-bd23-24adfdc6c869\") " Mar 10 19:47:34 crc kubenswrapper[4861]: I0310 19:47:34.175112 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/180be34d-fa26-46cc-bd23-24adfdc6c869-catalog-content\") pod \"180be34d-fa26-46cc-bd23-24adfdc6c869\" (UID: \"180be34d-fa26-46cc-bd23-24adfdc6c869\") " Mar 10 19:47:34 crc kubenswrapper[4861]: I0310 19:47:34.175760 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/180be34d-fa26-46cc-bd23-24adfdc6c869-utilities" (OuterVolumeSpecName: "utilities") pod "180be34d-fa26-46cc-bd23-24adfdc6c869" (UID: "180be34d-fa26-46cc-bd23-24adfdc6c869"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 10 19:47:34 crc kubenswrapper[4861]: I0310 19:47:34.188297 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/180be34d-fa26-46cc-bd23-24adfdc6c869-kube-api-access-x62gh" (OuterVolumeSpecName: "kube-api-access-x62gh") pod "180be34d-fa26-46cc-bd23-24adfdc6c869" (UID: "180be34d-fa26-46cc-bd23-24adfdc6c869"). InnerVolumeSpecName "kube-api-access-x62gh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 10 19:47:34 crc kubenswrapper[4861]: I0310 19:47:34.219456 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/180be34d-fa26-46cc-bd23-24adfdc6c869-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "180be34d-fa26-46cc-bd23-24adfdc6c869" (UID: "180be34d-fa26-46cc-bd23-24adfdc6c869"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 10 19:47:34 crc kubenswrapper[4861]: I0310 19:47:34.277995 4861 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/180be34d-fa26-46cc-bd23-24adfdc6c869-utilities\") on node \"crc\" DevicePath \"\"" Mar 10 19:47:34 crc kubenswrapper[4861]: I0310 19:47:34.278050 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x62gh\" (UniqueName: \"kubernetes.io/projected/180be34d-fa26-46cc-bd23-24adfdc6c869-kube-api-access-x62gh\") on node \"crc\" DevicePath \"\"" Mar 10 19:47:34 crc kubenswrapper[4861]: I0310 19:47:34.278067 4861 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/180be34d-fa26-46cc-bd23-24adfdc6c869-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 10 19:47:34 crc kubenswrapper[4861]: I0310 19:47:34.521278 4861 generic.go:334] "Generic (PLEG): container finished" podID="180be34d-fa26-46cc-bd23-24adfdc6c869" containerID="5456f0bee04731f43abef056f8baa19149fb27f4adefd2ce3ef9c56632cae22c" exitCode=0 Mar 10 19:47:34 crc kubenswrapper[4861]: I0310 19:47:34.521341 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kblnq" event={"ID":"180be34d-fa26-46cc-bd23-24adfdc6c869","Type":"ContainerDied","Data":"5456f0bee04731f43abef056f8baa19149fb27f4adefd2ce3ef9c56632cae22c"} Mar 10 19:47:34 crc kubenswrapper[4861]: I0310 19:47:34.521383 4861 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/redhat-marketplace-kblnq" event={"ID":"180be34d-fa26-46cc-bd23-24adfdc6c869","Type":"ContainerDied","Data":"48a7e51ce8004ab3546e90229135d377bdc71d4e750a833a563e3b5361304016"} Mar 10 19:47:34 crc kubenswrapper[4861]: I0310 19:47:34.521415 4861 scope.go:117] "RemoveContainer" containerID="5456f0bee04731f43abef056f8baa19149fb27f4adefd2ce3ef9c56632cae22c" Mar 10 19:47:34 crc kubenswrapper[4861]: I0310 19:47:34.521411 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-kblnq" Mar 10 19:47:34 crc kubenswrapper[4861]: I0310 19:47:34.561864 4861 scope.go:117] "RemoveContainer" containerID="e00d21d64a6c710cd419bd6d0430b84242d0ac03d389b466dc566a777771c48c" Mar 10 19:47:34 crc kubenswrapper[4861]: I0310 19:47:34.588417 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-kblnq"] Mar 10 19:47:34 crc kubenswrapper[4861]: I0310 19:47:34.594945 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-kblnq"] Mar 10 19:47:34 crc kubenswrapper[4861]: I0310 19:47:34.608009 4861 scope.go:117] "RemoveContainer" containerID="96f771e865daa41b84465a21b0d441f8401c56faee8e89ce068ae601a5abe28d" Mar 10 19:47:34 crc kubenswrapper[4861]: I0310 19:47:34.637500 4861 scope.go:117] "RemoveContainer" containerID="5456f0bee04731f43abef056f8baa19149fb27f4adefd2ce3ef9c56632cae22c" Mar 10 19:47:34 crc kubenswrapper[4861]: E0310 19:47:34.638101 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5456f0bee04731f43abef056f8baa19149fb27f4adefd2ce3ef9c56632cae22c\": container with ID starting with 5456f0bee04731f43abef056f8baa19149fb27f4adefd2ce3ef9c56632cae22c not found: ID does not exist" containerID="5456f0bee04731f43abef056f8baa19149fb27f4adefd2ce3ef9c56632cae22c" Mar 10 19:47:34 crc kubenswrapper[4861]: I0310 19:47:34.638158 4861 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5456f0bee04731f43abef056f8baa19149fb27f4adefd2ce3ef9c56632cae22c"} err="failed to get container status \"5456f0bee04731f43abef056f8baa19149fb27f4adefd2ce3ef9c56632cae22c\": rpc error: code = NotFound desc = could not find container \"5456f0bee04731f43abef056f8baa19149fb27f4adefd2ce3ef9c56632cae22c\": container with ID starting with 5456f0bee04731f43abef056f8baa19149fb27f4adefd2ce3ef9c56632cae22c not found: ID does not exist" Mar 10 19:47:34 crc kubenswrapper[4861]: I0310 19:47:34.638196 4861 scope.go:117] "RemoveContainer" containerID="e00d21d64a6c710cd419bd6d0430b84242d0ac03d389b466dc566a777771c48c" Mar 10 19:47:34 crc kubenswrapper[4861]: E0310 19:47:34.638606 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e00d21d64a6c710cd419bd6d0430b84242d0ac03d389b466dc566a777771c48c\": container with ID starting with e00d21d64a6c710cd419bd6d0430b84242d0ac03d389b466dc566a777771c48c not found: ID does not exist" containerID="e00d21d64a6c710cd419bd6d0430b84242d0ac03d389b466dc566a777771c48c" Mar 10 19:47:34 crc kubenswrapper[4861]: I0310 19:47:34.638652 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e00d21d64a6c710cd419bd6d0430b84242d0ac03d389b466dc566a777771c48c"} err="failed to get container status \"e00d21d64a6c710cd419bd6d0430b84242d0ac03d389b466dc566a777771c48c\": rpc error: code = NotFound desc = could not find container \"e00d21d64a6c710cd419bd6d0430b84242d0ac03d389b466dc566a777771c48c\": container with ID starting with e00d21d64a6c710cd419bd6d0430b84242d0ac03d389b466dc566a777771c48c not found: ID does not exist" Mar 10 19:47:34 crc kubenswrapper[4861]: I0310 19:47:34.638680 4861 scope.go:117] "RemoveContainer" containerID="96f771e865daa41b84465a21b0d441f8401c56faee8e89ce068ae601a5abe28d" Mar 10 19:47:34 crc kubenswrapper[4861]: E0310 
19:47:34.639485 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"96f771e865daa41b84465a21b0d441f8401c56faee8e89ce068ae601a5abe28d\": container with ID starting with 96f771e865daa41b84465a21b0d441f8401c56faee8e89ce068ae601a5abe28d not found: ID does not exist" containerID="96f771e865daa41b84465a21b0d441f8401c56faee8e89ce068ae601a5abe28d" Mar 10 19:47:34 crc kubenswrapper[4861]: I0310 19:47:34.639628 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"96f771e865daa41b84465a21b0d441f8401c56faee8e89ce068ae601a5abe28d"} err="failed to get container status \"96f771e865daa41b84465a21b0d441f8401c56faee8e89ce068ae601a5abe28d\": rpc error: code = NotFound desc = could not find container \"96f771e865daa41b84465a21b0d441f8401c56faee8e89ce068ae601a5abe28d\": container with ID starting with 96f771e865daa41b84465a21b0d441f8401c56faee8e89ce068ae601a5abe28d not found: ID does not exist" Mar 10 19:47:34 crc kubenswrapper[4861]: I0310 19:47:34.974049 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="180be34d-fa26-46cc-bd23-24adfdc6c869" path="/var/lib/kubelet/pods/180be34d-fa26-46cc-bd23-24adfdc6c869/volumes" Mar 10 19:48:00 crc kubenswrapper[4861]: I0310 19:48:00.155234 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29552868-lkm9p"] Mar 10 19:48:00 crc kubenswrapper[4861]: E0310 19:48:00.156655 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="180be34d-fa26-46cc-bd23-24adfdc6c869" containerName="extract-utilities" Mar 10 19:48:00 crc kubenswrapper[4861]: I0310 19:48:00.156679 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="180be34d-fa26-46cc-bd23-24adfdc6c869" containerName="extract-utilities" Mar 10 19:48:00 crc kubenswrapper[4861]: E0310 19:48:00.156705 4861 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="180be34d-fa26-46cc-bd23-24adfdc6c869" containerName="extract-content" Mar 10 19:48:00 crc kubenswrapper[4861]: I0310 19:48:00.156746 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="180be34d-fa26-46cc-bd23-24adfdc6c869" containerName="extract-content" Mar 10 19:48:00 crc kubenswrapper[4861]: E0310 19:48:00.156765 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="180be34d-fa26-46cc-bd23-24adfdc6c869" containerName="registry-server" Mar 10 19:48:00 crc kubenswrapper[4861]: I0310 19:48:00.156778 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="180be34d-fa26-46cc-bd23-24adfdc6c869" containerName="registry-server" Mar 10 19:48:00 crc kubenswrapper[4861]: I0310 19:48:00.157071 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="180be34d-fa26-46cc-bd23-24adfdc6c869" containerName="registry-server" Mar 10 19:48:00 crc kubenswrapper[4861]: I0310 19:48:00.157842 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29552868-lkm9p" Mar 10 19:48:00 crc kubenswrapper[4861]: I0310 19:48:00.161973 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 10 19:48:00 crc kubenswrapper[4861]: I0310 19:48:00.162290 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 10 19:48:00 crc kubenswrapper[4861]: I0310 19:48:00.162572 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-gfbj2" Mar 10 19:48:00 crc kubenswrapper[4861]: I0310 19:48:00.171651 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29552868-lkm9p"] Mar 10 19:48:00 crc kubenswrapper[4861]: I0310 19:48:00.333516 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cmcjt\" (UniqueName: 
\"kubernetes.io/projected/e8fb2abb-01ba-4944-b135-7e794cafb741-kube-api-access-cmcjt\") pod \"auto-csr-approver-29552868-lkm9p\" (UID: \"e8fb2abb-01ba-4944-b135-7e794cafb741\") " pod="openshift-infra/auto-csr-approver-29552868-lkm9p" Mar 10 19:48:00 crc kubenswrapper[4861]: I0310 19:48:00.435422 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cmcjt\" (UniqueName: \"kubernetes.io/projected/e8fb2abb-01ba-4944-b135-7e794cafb741-kube-api-access-cmcjt\") pod \"auto-csr-approver-29552868-lkm9p\" (UID: \"e8fb2abb-01ba-4944-b135-7e794cafb741\") " pod="openshift-infra/auto-csr-approver-29552868-lkm9p" Mar 10 19:48:00 crc kubenswrapper[4861]: I0310 19:48:00.463299 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cmcjt\" (UniqueName: \"kubernetes.io/projected/e8fb2abb-01ba-4944-b135-7e794cafb741-kube-api-access-cmcjt\") pod \"auto-csr-approver-29552868-lkm9p\" (UID: \"e8fb2abb-01ba-4944-b135-7e794cafb741\") " pod="openshift-infra/auto-csr-approver-29552868-lkm9p" Mar 10 19:48:00 crc kubenswrapper[4861]: I0310 19:48:00.529417 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29552868-lkm9p" Mar 10 19:48:00 crc kubenswrapper[4861]: I0310 19:48:00.801283 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29552868-lkm9p"] Mar 10 19:48:01 crc kubenswrapper[4861]: I0310 19:48:01.814374 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29552868-lkm9p" event={"ID":"e8fb2abb-01ba-4944-b135-7e794cafb741","Type":"ContainerStarted","Data":"b3a2bd07911acac72791711cd5f88157520bd2c5905c9bfd78009502454e9f4e"} Mar 10 19:48:02 crc kubenswrapper[4861]: I0310 19:48:02.826354 4861 generic.go:334] "Generic (PLEG): container finished" podID="e8fb2abb-01ba-4944-b135-7e794cafb741" containerID="140a19a168d003baec9049ba97a1784c07b74e5c69cab59c769d08afb4574d41" exitCode=0 Mar 10 19:48:02 crc kubenswrapper[4861]: I0310 19:48:02.826405 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29552868-lkm9p" event={"ID":"e8fb2abb-01ba-4944-b135-7e794cafb741","Type":"ContainerDied","Data":"140a19a168d003baec9049ba97a1784c07b74e5c69cab59c769d08afb4574d41"} Mar 10 19:48:04 crc kubenswrapper[4861]: I0310 19:48:04.235202 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29552868-lkm9p" Mar 10 19:48:04 crc kubenswrapper[4861]: I0310 19:48:04.405428 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cmcjt\" (UniqueName: \"kubernetes.io/projected/e8fb2abb-01ba-4944-b135-7e794cafb741-kube-api-access-cmcjt\") pod \"e8fb2abb-01ba-4944-b135-7e794cafb741\" (UID: \"e8fb2abb-01ba-4944-b135-7e794cafb741\") " Mar 10 19:48:04 crc kubenswrapper[4861]: I0310 19:48:04.415766 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e8fb2abb-01ba-4944-b135-7e794cafb741-kube-api-access-cmcjt" (OuterVolumeSpecName: "kube-api-access-cmcjt") pod "e8fb2abb-01ba-4944-b135-7e794cafb741" (UID: "e8fb2abb-01ba-4944-b135-7e794cafb741"). InnerVolumeSpecName "kube-api-access-cmcjt". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 10 19:48:04 crc kubenswrapper[4861]: I0310 19:48:04.507921 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cmcjt\" (UniqueName: \"kubernetes.io/projected/e8fb2abb-01ba-4944-b135-7e794cafb741-kube-api-access-cmcjt\") on node \"crc\" DevicePath \"\"" Mar 10 19:48:04 crc kubenswrapper[4861]: I0310 19:48:04.848477 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29552868-lkm9p" event={"ID":"e8fb2abb-01ba-4944-b135-7e794cafb741","Type":"ContainerDied","Data":"b3a2bd07911acac72791711cd5f88157520bd2c5905c9bfd78009502454e9f4e"} Mar 10 19:48:04 crc kubenswrapper[4861]: I0310 19:48:04.848546 4861 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b3a2bd07911acac72791711cd5f88157520bd2c5905c9bfd78009502454e9f4e" Mar 10 19:48:04 crc kubenswrapper[4861]: I0310 19:48:04.848591 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29552868-lkm9p" Mar 10 19:48:05 crc kubenswrapper[4861]: I0310 19:48:05.330504 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29552862-28np2"] Mar 10 19:48:05 crc kubenswrapper[4861]: I0310 19:48:05.339501 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29552862-28np2"] Mar 10 19:48:06 crc kubenswrapper[4861]: I0310 19:48:06.972992 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0a8f6013-54b6-4424-bcc5-6c72a9a1a60e" path="/var/lib/kubelet/pods/0a8f6013-54b6-4424-bcc5-6c72a9a1a60e/volumes" Mar 10 19:48:51 crc kubenswrapper[4861]: I0310 19:48:51.992730 4861 patch_prober.go:28] interesting pod/machine-config-daemon-qttbr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 10 19:48:51 crc kubenswrapper[4861]: I0310 19:48:51.993526 4861 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qttbr" podUID="771189c2-452d-4204-a0b7-abfe9ba62bd0" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 10 19:48:55 crc kubenswrapper[4861]: I0310 19:48:55.363476 4861 scope.go:117] "RemoveContainer" containerID="f8b5b75a90c3282cba2d63ad4a4db9a66108a70308efd9f37d5cc60c88e00d5a" Mar 10 19:49:21 crc kubenswrapper[4861]: I0310 19:49:21.992021 4861 patch_prober.go:28] interesting pod/machine-config-daemon-qttbr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 10 19:49:21 crc kubenswrapper[4861]: 
I0310 19:49:21.992830 4861 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qttbr" podUID="771189c2-452d-4204-a0b7-abfe9ba62bd0" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 10 19:49:51 crc kubenswrapper[4861]: I0310 19:49:51.992345 4861 patch_prober.go:28] interesting pod/machine-config-daemon-qttbr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 10 19:49:51 crc kubenswrapper[4861]: I0310 19:49:51.993181 4861 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qttbr" podUID="771189c2-452d-4204-a0b7-abfe9ba62bd0" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 10 19:49:51 crc kubenswrapper[4861]: I0310 19:49:51.993246 4861 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-qttbr" Mar 10 19:49:51 crc kubenswrapper[4861]: I0310 19:49:51.994399 4861 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"92e222993ea7edea1c699227d967e623362b40785278c4bc6a03eaf762ab3615"} pod="openshift-machine-config-operator/machine-config-daemon-qttbr" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 10 19:49:51 crc kubenswrapper[4861]: I0310 19:49:51.994503 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-qttbr" podUID="771189c2-452d-4204-a0b7-abfe9ba62bd0" 
containerName="machine-config-daemon" containerID="cri-o://92e222993ea7edea1c699227d967e623362b40785278c4bc6a03eaf762ab3615" gracePeriod=600 Mar 10 19:49:52 crc kubenswrapper[4861]: I0310 19:49:52.956506 4861 generic.go:334] "Generic (PLEG): container finished" podID="771189c2-452d-4204-a0b7-abfe9ba62bd0" containerID="92e222993ea7edea1c699227d967e623362b40785278c4bc6a03eaf762ab3615" exitCode=0 Mar 10 19:49:52 crc kubenswrapper[4861]: I0310 19:49:52.956583 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qttbr" event={"ID":"771189c2-452d-4204-a0b7-abfe9ba62bd0","Type":"ContainerDied","Data":"92e222993ea7edea1c699227d967e623362b40785278c4bc6a03eaf762ab3615"} Mar 10 19:49:52 crc kubenswrapper[4861]: I0310 19:49:52.956995 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qttbr" event={"ID":"771189c2-452d-4204-a0b7-abfe9ba62bd0","Type":"ContainerStarted","Data":"c3ec1a2ec165344b8b1084cb5463b0d256bafd1af0a85ab3b540a201851e68d6"} Mar 10 19:49:52 crc kubenswrapper[4861]: I0310 19:49:52.957028 4861 scope.go:117] "RemoveContainer" containerID="badc305024bb48804f9edcbbbfb68dfba79ce39195a0e17ddfa2a9c4c010b995" Mar 10 19:50:00 crc kubenswrapper[4861]: I0310 19:50:00.167042 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29552870-dk2km"] Mar 10 19:50:00 crc kubenswrapper[4861]: E0310 19:50:00.168145 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e8fb2abb-01ba-4944-b135-7e794cafb741" containerName="oc" Mar 10 19:50:00 crc kubenswrapper[4861]: I0310 19:50:00.168167 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="e8fb2abb-01ba-4944-b135-7e794cafb741" containerName="oc" Mar 10 19:50:00 crc kubenswrapper[4861]: I0310 19:50:00.168424 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="e8fb2abb-01ba-4944-b135-7e794cafb741" containerName="oc" Mar 10 19:50:00 
crc kubenswrapper[4861]: I0310 19:50:00.169164 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29552870-dk2km" Mar 10 19:50:00 crc kubenswrapper[4861]: I0310 19:50:00.178644 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29552870-dk2km"] Mar 10 19:50:00 crc kubenswrapper[4861]: I0310 19:50:00.200179 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 10 19:50:00 crc kubenswrapper[4861]: I0310 19:50:00.200298 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 10 19:50:00 crc kubenswrapper[4861]: I0310 19:50:00.201667 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-gfbj2" Mar 10 19:50:00 crc kubenswrapper[4861]: I0310 19:50:00.365069 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4xdgz\" (UniqueName: \"kubernetes.io/projected/cc94b081-9b7e-497f-979b-d38019f9f99d-kube-api-access-4xdgz\") pod \"auto-csr-approver-29552870-dk2km\" (UID: \"cc94b081-9b7e-497f-979b-d38019f9f99d\") " pod="openshift-infra/auto-csr-approver-29552870-dk2km" Mar 10 19:50:00 crc kubenswrapper[4861]: I0310 19:50:00.467789 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4xdgz\" (UniqueName: \"kubernetes.io/projected/cc94b081-9b7e-497f-979b-d38019f9f99d-kube-api-access-4xdgz\") pod \"auto-csr-approver-29552870-dk2km\" (UID: \"cc94b081-9b7e-497f-979b-d38019f9f99d\") " pod="openshift-infra/auto-csr-approver-29552870-dk2km" Mar 10 19:50:00 crc kubenswrapper[4861]: I0310 19:50:00.502896 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4xdgz\" (UniqueName: \"kubernetes.io/projected/cc94b081-9b7e-497f-979b-d38019f9f99d-kube-api-access-4xdgz\") 
pod \"auto-csr-approver-29552870-dk2km\" (UID: \"cc94b081-9b7e-497f-979b-d38019f9f99d\") " pod="openshift-infra/auto-csr-approver-29552870-dk2km" Mar 10 19:50:00 crc kubenswrapper[4861]: I0310 19:50:00.524583 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29552870-dk2km" Mar 10 19:50:00 crc kubenswrapper[4861]: I0310 19:50:00.873939 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29552870-dk2km"] Mar 10 19:50:00 crc kubenswrapper[4861]: I0310 19:50:00.875477 4861 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 10 19:50:01 crc kubenswrapper[4861]: I0310 19:50:01.024848 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29552870-dk2km" event={"ID":"cc94b081-9b7e-497f-979b-d38019f9f99d","Type":"ContainerStarted","Data":"760622dcd074a23904b9a5113229d3c0236592b8434ec4384b0b5876be4a5ad6"} Mar 10 19:50:03 crc kubenswrapper[4861]: I0310 19:50:03.048388 4861 generic.go:334] "Generic (PLEG): container finished" podID="cc94b081-9b7e-497f-979b-d38019f9f99d" containerID="0c538a4c7f039341c2b1dfa08002c54f10c334d50aab4fb180bd3e37a48e4dab" exitCode=0 Mar 10 19:50:03 crc kubenswrapper[4861]: I0310 19:50:03.049107 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29552870-dk2km" event={"ID":"cc94b081-9b7e-497f-979b-d38019f9f99d","Type":"ContainerDied","Data":"0c538a4c7f039341c2b1dfa08002c54f10c334d50aab4fb180bd3e37a48e4dab"} Mar 10 19:50:04 crc kubenswrapper[4861]: I0310 19:50:04.512430 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29552870-dk2km" Mar 10 19:50:04 crc kubenswrapper[4861]: I0310 19:50:04.638111 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4xdgz\" (UniqueName: \"kubernetes.io/projected/cc94b081-9b7e-497f-979b-d38019f9f99d-kube-api-access-4xdgz\") pod \"cc94b081-9b7e-497f-979b-d38019f9f99d\" (UID: \"cc94b081-9b7e-497f-979b-d38019f9f99d\") " Mar 10 19:50:04 crc kubenswrapper[4861]: I0310 19:50:04.646747 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cc94b081-9b7e-497f-979b-d38019f9f99d-kube-api-access-4xdgz" (OuterVolumeSpecName: "kube-api-access-4xdgz") pod "cc94b081-9b7e-497f-979b-d38019f9f99d" (UID: "cc94b081-9b7e-497f-979b-d38019f9f99d"). InnerVolumeSpecName "kube-api-access-4xdgz". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 10 19:50:04 crc kubenswrapper[4861]: I0310 19:50:04.740503 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4xdgz\" (UniqueName: \"kubernetes.io/projected/cc94b081-9b7e-497f-979b-d38019f9f99d-kube-api-access-4xdgz\") on node \"crc\" DevicePath \"\"" Mar 10 19:50:05 crc kubenswrapper[4861]: I0310 19:50:05.071828 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29552870-dk2km" event={"ID":"cc94b081-9b7e-497f-979b-d38019f9f99d","Type":"ContainerDied","Data":"760622dcd074a23904b9a5113229d3c0236592b8434ec4384b0b5876be4a5ad6"} Mar 10 19:50:05 crc kubenswrapper[4861]: I0310 19:50:05.071882 4861 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="760622dcd074a23904b9a5113229d3c0236592b8434ec4384b0b5876be4a5ad6" Mar 10 19:50:05 crc kubenswrapper[4861]: I0310 19:50:05.071923 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29552870-dk2km" Mar 10 19:50:05 crc kubenswrapper[4861]: I0310 19:50:05.616912 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29552864-4l8w4"] Mar 10 19:50:05 crc kubenswrapper[4861]: I0310 19:50:05.627496 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29552864-4l8w4"] Mar 10 19:50:06 crc kubenswrapper[4861]: I0310 19:50:06.973806 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25fa771b-ad83-484d-b82c-ff606764032f" path="/var/lib/kubelet/pods/25fa771b-ad83-484d-b82c-ff606764032f/volumes" Mar 10 19:50:55 crc kubenswrapper[4861]: I0310 19:50:55.489139 4861 scope.go:117] "RemoveContainer" containerID="2061ff19a10235bc6da631d2b0822fd074f799ba0a102aee2c2b674ec83e1868" Mar 10 19:52:00 crc kubenswrapper[4861]: I0310 19:52:00.174346 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29552872-qrbnr"] Mar 10 19:52:00 crc kubenswrapper[4861]: E0310 19:52:00.175675 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cc94b081-9b7e-497f-979b-d38019f9f99d" containerName="oc" Mar 10 19:52:00 crc kubenswrapper[4861]: I0310 19:52:00.175701 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="cc94b081-9b7e-497f-979b-d38019f9f99d" containerName="oc" Mar 10 19:52:00 crc kubenswrapper[4861]: I0310 19:52:00.175989 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="cc94b081-9b7e-497f-979b-d38019f9f99d" containerName="oc" Mar 10 19:52:00 crc kubenswrapper[4861]: I0310 19:52:00.176941 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29552872-qrbnr" Mar 10 19:52:00 crc kubenswrapper[4861]: I0310 19:52:00.179545 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 10 19:52:00 crc kubenswrapper[4861]: I0310 19:52:00.180598 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-gfbj2" Mar 10 19:52:00 crc kubenswrapper[4861]: I0310 19:52:00.182360 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 10 19:52:00 crc kubenswrapper[4861]: I0310 19:52:00.190548 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29552872-qrbnr"] Mar 10 19:52:00 crc kubenswrapper[4861]: I0310 19:52:00.237603 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xg5gc\" (UniqueName: \"kubernetes.io/projected/8734ed42-d043-4ac1-9282-fb905cd3cb36-kube-api-access-xg5gc\") pod \"auto-csr-approver-29552872-qrbnr\" (UID: \"8734ed42-d043-4ac1-9282-fb905cd3cb36\") " pod="openshift-infra/auto-csr-approver-29552872-qrbnr" Mar 10 19:52:00 crc kubenswrapper[4861]: I0310 19:52:00.339371 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xg5gc\" (UniqueName: \"kubernetes.io/projected/8734ed42-d043-4ac1-9282-fb905cd3cb36-kube-api-access-xg5gc\") pod \"auto-csr-approver-29552872-qrbnr\" (UID: \"8734ed42-d043-4ac1-9282-fb905cd3cb36\") " pod="openshift-infra/auto-csr-approver-29552872-qrbnr" Mar 10 19:52:00 crc kubenswrapper[4861]: I0310 19:52:00.370919 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xg5gc\" (UniqueName: \"kubernetes.io/projected/8734ed42-d043-4ac1-9282-fb905cd3cb36-kube-api-access-xg5gc\") pod \"auto-csr-approver-29552872-qrbnr\" (UID: \"8734ed42-d043-4ac1-9282-fb905cd3cb36\") " 
pod="openshift-infra/auto-csr-approver-29552872-qrbnr" Mar 10 19:52:00 crc kubenswrapper[4861]: I0310 19:52:00.544095 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29552872-qrbnr" Mar 10 19:52:00 crc kubenswrapper[4861]: I0310 19:52:00.846617 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29552872-qrbnr"] Mar 10 19:52:01 crc kubenswrapper[4861]: I0310 19:52:01.595674 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29552872-qrbnr" event={"ID":"8734ed42-d043-4ac1-9282-fb905cd3cb36","Type":"ContainerStarted","Data":"455be364d8cc16676f49e15bcf023fc01a9fd8cd92882a50c17db79784066990"} Mar 10 19:52:03 crc kubenswrapper[4861]: I0310 19:52:03.609818 4861 generic.go:334] "Generic (PLEG): container finished" podID="8734ed42-d043-4ac1-9282-fb905cd3cb36" containerID="afa016784def1cc5cbbff78d2022f0924e1cb2809276ead1074ee27413cd03db" exitCode=0 Mar 10 19:52:03 crc kubenswrapper[4861]: I0310 19:52:03.609869 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29552872-qrbnr" event={"ID":"8734ed42-d043-4ac1-9282-fb905cd3cb36","Type":"ContainerDied","Data":"afa016784def1cc5cbbff78d2022f0924e1cb2809276ead1074ee27413cd03db"} Mar 10 19:52:05 crc kubenswrapper[4861]: I0310 19:52:05.041533 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29552872-qrbnr" Mar 10 19:52:05 crc kubenswrapper[4861]: I0310 19:52:05.114652 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xg5gc\" (UniqueName: \"kubernetes.io/projected/8734ed42-d043-4ac1-9282-fb905cd3cb36-kube-api-access-xg5gc\") pod \"8734ed42-d043-4ac1-9282-fb905cd3cb36\" (UID: \"8734ed42-d043-4ac1-9282-fb905cd3cb36\") " Mar 10 19:52:05 crc kubenswrapper[4861]: I0310 19:52:05.123335 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8734ed42-d043-4ac1-9282-fb905cd3cb36-kube-api-access-xg5gc" (OuterVolumeSpecName: "kube-api-access-xg5gc") pod "8734ed42-d043-4ac1-9282-fb905cd3cb36" (UID: "8734ed42-d043-4ac1-9282-fb905cd3cb36"). InnerVolumeSpecName "kube-api-access-xg5gc". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 10 19:52:05 crc kubenswrapper[4861]: I0310 19:52:05.216477 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xg5gc\" (UniqueName: \"kubernetes.io/projected/8734ed42-d043-4ac1-9282-fb905cd3cb36-kube-api-access-xg5gc\") on node \"crc\" DevicePath \"\"" Mar 10 19:52:05 crc kubenswrapper[4861]: I0310 19:52:05.634129 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29552872-qrbnr" event={"ID":"8734ed42-d043-4ac1-9282-fb905cd3cb36","Type":"ContainerDied","Data":"455be364d8cc16676f49e15bcf023fc01a9fd8cd92882a50c17db79784066990"} Mar 10 19:52:05 crc kubenswrapper[4861]: I0310 19:52:05.634189 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29552872-qrbnr" Mar 10 19:52:05 crc kubenswrapper[4861]: I0310 19:52:05.634221 4861 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="455be364d8cc16676f49e15bcf023fc01a9fd8cd92882a50c17db79784066990" Mar 10 19:52:06 crc kubenswrapper[4861]: I0310 19:52:06.134176 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29552866-lprmd"] Mar 10 19:52:06 crc kubenswrapper[4861]: I0310 19:52:06.146233 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29552866-lprmd"] Mar 10 19:52:06 crc kubenswrapper[4861]: I0310 19:52:06.973593 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2433ead5-1028-47df-9265-419d3820c352" path="/var/lib/kubelet/pods/2433ead5-1028-47df-9265-419d3820c352/volumes" Mar 10 19:52:21 crc kubenswrapper[4861]: I0310 19:52:21.991753 4861 patch_prober.go:28] interesting pod/machine-config-daemon-qttbr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 10 19:52:21 crc kubenswrapper[4861]: I0310 19:52:21.992379 4861 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qttbr" podUID="771189c2-452d-4204-a0b7-abfe9ba62bd0" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 10 19:52:51 crc kubenswrapper[4861]: I0310 19:52:51.991901 4861 patch_prober.go:28] interesting pod/machine-config-daemon-qttbr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= 
Mar 10 19:52:51 crc kubenswrapper[4861]: I0310 19:52:51.992541 4861 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qttbr" podUID="771189c2-452d-4204-a0b7-abfe9ba62bd0" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 10 19:52:55 crc kubenswrapper[4861]: I0310 19:52:55.626193 4861 scope.go:117] "RemoveContainer" containerID="a62d8d859e639a2cf5321b4cba7335c43c50ecaa433a0194dd1a7b8d03429412" Mar 10 19:53:21 crc kubenswrapper[4861]: I0310 19:53:21.991833 4861 patch_prober.go:28] interesting pod/machine-config-daemon-qttbr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 10 19:53:21 crc kubenswrapper[4861]: I0310 19:53:21.992618 4861 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qttbr" podUID="771189c2-452d-4204-a0b7-abfe9ba62bd0" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 10 19:53:21 crc kubenswrapper[4861]: I0310 19:53:21.992685 4861 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-qttbr" Mar 10 19:53:21 crc kubenswrapper[4861]: I0310 19:53:21.993998 4861 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"c3ec1a2ec165344b8b1084cb5463b0d256bafd1af0a85ab3b540a201851e68d6"} pod="openshift-machine-config-operator/machine-config-daemon-qttbr" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 10 19:53:21 crc 
kubenswrapper[4861]: I0310 19:53:21.994139 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-qttbr" podUID="771189c2-452d-4204-a0b7-abfe9ba62bd0" containerName="machine-config-daemon" containerID="cri-o://c3ec1a2ec165344b8b1084cb5463b0d256bafd1af0a85ab3b540a201851e68d6" gracePeriod=600 Mar 10 19:53:22 crc kubenswrapper[4861]: E0310 19:53:22.235928 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qttbr_openshift-machine-config-operator(771189c2-452d-4204-a0b7-abfe9ba62bd0)\"" pod="openshift-machine-config-operator/machine-config-daemon-qttbr" podUID="771189c2-452d-4204-a0b7-abfe9ba62bd0" Mar 10 19:53:22 crc kubenswrapper[4861]: I0310 19:53:22.361198 4861 generic.go:334] "Generic (PLEG): container finished" podID="771189c2-452d-4204-a0b7-abfe9ba62bd0" containerID="c3ec1a2ec165344b8b1084cb5463b0d256bafd1af0a85ab3b540a201851e68d6" exitCode=0 Mar 10 19:53:22 crc kubenswrapper[4861]: I0310 19:53:22.361313 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qttbr" event={"ID":"771189c2-452d-4204-a0b7-abfe9ba62bd0","Type":"ContainerDied","Data":"c3ec1a2ec165344b8b1084cb5463b0d256bafd1af0a85ab3b540a201851e68d6"} Mar 10 19:53:22 crc kubenswrapper[4861]: I0310 19:53:22.361787 4861 scope.go:117] "RemoveContainer" containerID="92e222993ea7edea1c699227d967e623362b40785278c4bc6a03eaf762ab3615" Mar 10 19:53:22 crc kubenswrapper[4861]: I0310 19:53:22.362524 4861 scope.go:117] "RemoveContainer" containerID="c3ec1a2ec165344b8b1084cb5463b0d256bafd1af0a85ab3b540a201851e68d6" Mar 10 19:53:22 crc kubenswrapper[4861]: E0310 19:53:22.362972 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" 
with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qttbr_openshift-machine-config-operator(771189c2-452d-4204-a0b7-abfe9ba62bd0)\"" pod="openshift-machine-config-operator/machine-config-daemon-qttbr" podUID="771189c2-452d-4204-a0b7-abfe9ba62bd0" Mar 10 19:53:36 crc kubenswrapper[4861]: I0310 19:53:36.967476 4861 scope.go:117] "RemoveContainer" containerID="c3ec1a2ec165344b8b1084cb5463b0d256bafd1af0a85ab3b540a201851e68d6" Mar 10 19:53:36 crc kubenswrapper[4861]: E0310 19:53:36.969011 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qttbr_openshift-machine-config-operator(771189c2-452d-4204-a0b7-abfe9ba62bd0)\"" pod="openshift-machine-config-operator/machine-config-daemon-qttbr" podUID="771189c2-452d-4204-a0b7-abfe9ba62bd0" Mar 10 19:53:47 crc kubenswrapper[4861]: I0310 19:53:47.957877 4861 scope.go:117] "RemoveContainer" containerID="c3ec1a2ec165344b8b1084cb5463b0d256bafd1af0a85ab3b540a201851e68d6" Mar 10 19:53:47 crc kubenswrapper[4861]: E0310 19:53:47.958868 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qttbr_openshift-machine-config-operator(771189c2-452d-4204-a0b7-abfe9ba62bd0)\"" pod="openshift-machine-config-operator/machine-config-daemon-qttbr" podUID="771189c2-452d-4204-a0b7-abfe9ba62bd0" Mar 10 19:54:00 crc kubenswrapper[4861]: I0310 19:54:00.156001 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29552874-flgbz"] Mar 10 19:54:00 crc kubenswrapper[4861]: E0310 19:54:00.157016 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8734ed42-d043-4ac1-9282-fb905cd3cb36" 
containerName="oc" Mar 10 19:54:00 crc kubenswrapper[4861]: I0310 19:54:00.157038 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="8734ed42-d043-4ac1-9282-fb905cd3cb36" containerName="oc" Mar 10 19:54:00 crc kubenswrapper[4861]: I0310 19:54:00.157296 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="8734ed42-d043-4ac1-9282-fb905cd3cb36" containerName="oc" Mar 10 19:54:00 crc kubenswrapper[4861]: I0310 19:54:00.158012 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29552874-flgbz" Mar 10 19:54:00 crc kubenswrapper[4861]: I0310 19:54:00.160601 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-gfbj2" Mar 10 19:54:00 crc kubenswrapper[4861]: I0310 19:54:00.163822 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 10 19:54:00 crc kubenswrapper[4861]: I0310 19:54:00.163848 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 10 19:54:00 crc kubenswrapper[4861]: I0310 19:54:00.171536 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29552874-flgbz"] Mar 10 19:54:00 crc kubenswrapper[4861]: I0310 19:54:00.220924 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5ctcl\" (UniqueName: \"kubernetes.io/projected/dfd3e944-6f7e-47aa-8767-86ccaf04e193-kube-api-access-5ctcl\") pod \"auto-csr-approver-29552874-flgbz\" (UID: \"dfd3e944-6f7e-47aa-8767-86ccaf04e193\") " pod="openshift-infra/auto-csr-approver-29552874-flgbz" Mar 10 19:54:00 crc kubenswrapper[4861]: I0310 19:54:00.322647 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5ctcl\" (UniqueName: \"kubernetes.io/projected/dfd3e944-6f7e-47aa-8767-86ccaf04e193-kube-api-access-5ctcl\") pod 
\"auto-csr-approver-29552874-flgbz\" (UID: \"dfd3e944-6f7e-47aa-8767-86ccaf04e193\") " pod="openshift-infra/auto-csr-approver-29552874-flgbz" Mar 10 19:54:00 crc kubenswrapper[4861]: I0310 19:54:00.346507 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5ctcl\" (UniqueName: \"kubernetes.io/projected/dfd3e944-6f7e-47aa-8767-86ccaf04e193-kube-api-access-5ctcl\") pod \"auto-csr-approver-29552874-flgbz\" (UID: \"dfd3e944-6f7e-47aa-8767-86ccaf04e193\") " pod="openshift-infra/auto-csr-approver-29552874-flgbz" Mar 10 19:54:00 crc kubenswrapper[4861]: I0310 19:54:00.487817 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29552874-flgbz" Mar 10 19:54:01 crc kubenswrapper[4861]: I0310 19:54:01.258022 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29552874-flgbz"] Mar 10 19:54:01 crc kubenswrapper[4861]: W0310 19:54:01.269087 4861 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddfd3e944_6f7e_47aa_8767_86ccaf04e193.slice/crio-0d15cd73488eae6c27938e9447c192c9814a0cacbb767d2935745d0c2b62aa6b WatchSource:0}: Error finding container 0d15cd73488eae6c27938e9447c192c9814a0cacbb767d2935745d0c2b62aa6b: Status 404 returned error can't find the container with id 0d15cd73488eae6c27938e9447c192c9814a0cacbb767d2935745d0c2b62aa6b Mar 10 19:54:01 crc kubenswrapper[4861]: I0310 19:54:01.751300 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29552874-flgbz" event={"ID":"dfd3e944-6f7e-47aa-8767-86ccaf04e193","Type":"ContainerStarted","Data":"0d15cd73488eae6c27938e9447c192c9814a0cacbb767d2935745d0c2b62aa6b"} Mar 10 19:54:01 crc kubenswrapper[4861]: I0310 19:54:01.958688 4861 scope.go:117] "RemoveContainer" containerID="c3ec1a2ec165344b8b1084cb5463b0d256bafd1af0a85ab3b540a201851e68d6" Mar 10 19:54:01 crc 
kubenswrapper[4861]: E0310 19:54:01.959685 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qttbr_openshift-machine-config-operator(771189c2-452d-4204-a0b7-abfe9ba62bd0)\"" pod="openshift-machine-config-operator/machine-config-daemon-qttbr" podUID="771189c2-452d-4204-a0b7-abfe9ba62bd0" Mar 10 19:54:03 crc kubenswrapper[4861]: I0310 19:54:03.770273 4861 generic.go:334] "Generic (PLEG): container finished" podID="dfd3e944-6f7e-47aa-8767-86ccaf04e193" containerID="3c5d9d624d54fd85ac885c670d6173a33fde09821c0d09c48d8f931fb132e6c1" exitCode=0 Mar 10 19:54:03 crc kubenswrapper[4861]: I0310 19:54:03.770390 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29552874-flgbz" event={"ID":"dfd3e944-6f7e-47aa-8767-86ccaf04e193","Type":"ContainerDied","Data":"3c5d9d624d54fd85ac885c670d6173a33fde09821c0d09c48d8f931fb132e6c1"} Mar 10 19:54:05 crc kubenswrapper[4861]: I0310 19:54:05.224770 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29552874-flgbz" Mar 10 19:54:05 crc kubenswrapper[4861]: I0310 19:54:05.304989 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5ctcl\" (UniqueName: \"kubernetes.io/projected/dfd3e944-6f7e-47aa-8767-86ccaf04e193-kube-api-access-5ctcl\") pod \"dfd3e944-6f7e-47aa-8767-86ccaf04e193\" (UID: \"dfd3e944-6f7e-47aa-8767-86ccaf04e193\") " Mar 10 19:54:05 crc kubenswrapper[4861]: I0310 19:54:05.318855 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dfd3e944-6f7e-47aa-8767-86ccaf04e193-kube-api-access-5ctcl" (OuterVolumeSpecName: "kube-api-access-5ctcl") pod "dfd3e944-6f7e-47aa-8767-86ccaf04e193" (UID: "dfd3e944-6f7e-47aa-8767-86ccaf04e193"). 
InnerVolumeSpecName "kube-api-access-5ctcl". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 10 19:54:05 crc kubenswrapper[4861]: I0310 19:54:05.406456 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5ctcl\" (UniqueName: \"kubernetes.io/projected/dfd3e944-6f7e-47aa-8767-86ccaf04e193-kube-api-access-5ctcl\") on node \"crc\" DevicePath \"\"" Mar 10 19:54:05 crc kubenswrapper[4861]: I0310 19:54:05.789264 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29552874-flgbz" event={"ID":"dfd3e944-6f7e-47aa-8767-86ccaf04e193","Type":"ContainerDied","Data":"0d15cd73488eae6c27938e9447c192c9814a0cacbb767d2935745d0c2b62aa6b"} Mar 10 19:54:05 crc kubenswrapper[4861]: I0310 19:54:05.789324 4861 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0d15cd73488eae6c27938e9447c192c9814a0cacbb767d2935745d0c2b62aa6b" Mar 10 19:54:05 crc kubenswrapper[4861]: I0310 19:54:05.789397 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29552874-flgbz" Mar 10 19:54:06 crc kubenswrapper[4861]: I0310 19:54:06.319599 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29552868-lkm9p"] Mar 10 19:54:06 crc kubenswrapper[4861]: I0310 19:54:06.325969 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29552868-lkm9p"] Mar 10 19:54:06 crc kubenswrapper[4861]: I0310 19:54:06.965194 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e8fb2abb-01ba-4944-b135-7e794cafb741" path="/var/lib/kubelet/pods/e8fb2abb-01ba-4944-b135-7e794cafb741/volumes" Mar 10 19:54:13 crc kubenswrapper[4861]: I0310 19:54:13.958531 4861 scope.go:117] "RemoveContainer" containerID="c3ec1a2ec165344b8b1084cb5463b0d256bafd1af0a85ab3b540a201851e68d6" Mar 10 19:54:13 crc kubenswrapper[4861]: E0310 19:54:13.959223 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qttbr_openshift-machine-config-operator(771189c2-452d-4204-a0b7-abfe9ba62bd0)\"" pod="openshift-machine-config-operator/machine-config-daemon-qttbr" podUID="771189c2-452d-4204-a0b7-abfe9ba62bd0" Mar 10 19:54:28 crc kubenswrapper[4861]: I0310 19:54:28.958014 4861 scope.go:117] "RemoveContainer" containerID="c3ec1a2ec165344b8b1084cb5463b0d256bafd1af0a85ab3b540a201851e68d6" Mar 10 19:54:28 crc kubenswrapper[4861]: E0310 19:54:28.959010 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qttbr_openshift-machine-config-operator(771189c2-452d-4204-a0b7-abfe9ba62bd0)\"" pod="openshift-machine-config-operator/machine-config-daemon-qttbr" 
podUID="771189c2-452d-4204-a0b7-abfe9ba62bd0" Mar 10 19:54:41 crc kubenswrapper[4861]: I0310 19:54:41.958955 4861 scope.go:117] "RemoveContainer" containerID="c3ec1a2ec165344b8b1084cb5463b0d256bafd1af0a85ab3b540a201851e68d6" Mar 10 19:54:41 crc kubenswrapper[4861]: E0310 19:54:41.959921 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qttbr_openshift-machine-config-operator(771189c2-452d-4204-a0b7-abfe9ba62bd0)\"" pod="openshift-machine-config-operator/machine-config-daemon-qttbr" podUID="771189c2-452d-4204-a0b7-abfe9ba62bd0" Mar 10 19:54:52 crc kubenswrapper[4861]: I0310 19:54:52.958301 4861 scope.go:117] "RemoveContainer" containerID="c3ec1a2ec165344b8b1084cb5463b0d256bafd1af0a85ab3b540a201851e68d6" Mar 10 19:54:52 crc kubenswrapper[4861]: E0310 19:54:52.959366 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qttbr_openshift-machine-config-operator(771189c2-452d-4204-a0b7-abfe9ba62bd0)\"" pod="openshift-machine-config-operator/machine-config-daemon-qttbr" podUID="771189c2-452d-4204-a0b7-abfe9ba62bd0" Mar 10 19:54:55 crc kubenswrapper[4861]: I0310 19:54:55.756502 4861 scope.go:117] "RemoveContainer" containerID="140a19a168d003baec9049ba97a1784c07b74e5c69cab59c769d08afb4574d41" Mar 10 19:55:07 crc kubenswrapper[4861]: I0310 19:55:07.959160 4861 scope.go:117] "RemoveContainer" containerID="c3ec1a2ec165344b8b1084cb5463b0d256bafd1af0a85ab3b540a201851e68d6" Mar 10 19:55:07 crc kubenswrapper[4861]: E0310 19:55:07.960611 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-qttbr_openshift-machine-config-operator(771189c2-452d-4204-a0b7-abfe9ba62bd0)\"" pod="openshift-machine-config-operator/machine-config-daemon-qttbr" podUID="771189c2-452d-4204-a0b7-abfe9ba62bd0" Mar 10 19:55:19 crc kubenswrapper[4861]: I0310 19:55:19.958392 4861 scope.go:117] "RemoveContainer" containerID="c3ec1a2ec165344b8b1084cb5463b0d256bafd1af0a85ab3b540a201851e68d6" Mar 10 19:55:19 crc kubenswrapper[4861]: E0310 19:55:19.959377 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qttbr_openshift-machine-config-operator(771189c2-452d-4204-a0b7-abfe9ba62bd0)\"" pod="openshift-machine-config-operator/machine-config-daemon-qttbr" podUID="771189c2-452d-4204-a0b7-abfe9ba62bd0" Mar 10 19:55:27 crc kubenswrapper[4861]: I0310 19:55:27.883907 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-sll2r"] Mar 10 19:55:27 crc kubenswrapper[4861]: E0310 19:55:27.885222 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dfd3e944-6f7e-47aa-8767-86ccaf04e193" containerName="oc" Mar 10 19:55:27 crc kubenswrapper[4861]: I0310 19:55:27.885253 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="dfd3e944-6f7e-47aa-8767-86ccaf04e193" containerName="oc" Mar 10 19:55:27 crc kubenswrapper[4861]: I0310 19:55:27.885611 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="dfd3e944-6f7e-47aa-8767-86ccaf04e193" containerName="oc" Mar 10 19:55:27 crc kubenswrapper[4861]: I0310 19:55:27.887666 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-sll2r" Mar 10 19:55:27 crc kubenswrapper[4861]: I0310 19:55:27.916410 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-sll2r"] Mar 10 19:55:27 crc kubenswrapper[4861]: I0310 19:55:27.917764 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f147d295-39e7-4f72-b3e1-117570110550-utilities\") pod \"certified-operators-sll2r\" (UID: \"f147d295-39e7-4f72-b3e1-117570110550\") " pod="openshift-marketplace/certified-operators-sll2r" Mar 10 19:55:27 crc kubenswrapper[4861]: I0310 19:55:27.918058 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f147d295-39e7-4f72-b3e1-117570110550-catalog-content\") pod \"certified-operators-sll2r\" (UID: \"f147d295-39e7-4f72-b3e1-117570110550\") " pod="openshift-marketplace/certified-operators-sll2r" Mar 10 19:55:27 crc kubenswrapper[4861]: I0310 19:55:27.918109 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9mf8t\" (UniqueName: \"kubernetes.io/projected/f147d295-39e7-4f72-b3e1-117570110550-kube-api-access-9mf8t\") pod \"certified-operators-sll2r\" (UID: \"f147d295-39e7-4f72-b3e1-117570110550\") " pod="openshift-marketplace/certified-operators-sll2r" Mar 10 19:55:28 crc kubenswrapper[4861]: I0310 19:55:28.019130 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f147d295-39e7-4f72-b3e1-117570110550-utilities\") pod \"certified-operators-sll2r\" (UID: \"f147d295-39e7-4f72-b3e1-117570110550\") " pod="openshift-marketplace/certified-operators-sll2r" Mar 10 19:55:28 crc kubenswrapper[4861]: I0310 19:55:28.019233 4861 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f147d295-39e7-4f72-b3e1-117570110550-catalog-content\") pod \"certified-operators-sll2r\" (UID: \"f147d295-39e7-4f72-b3e1-117570110550\") " pod="openshift-marketplace/certified-operators-sll2r" Mar 10 19:55:28 crc kubenswrapper[4861]: I0310 19:55:28.019260 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9mf8t\" (UniqueName: \"kubernetes.io/projected/f147d295-39e7-4f72-b3e1-117570110550-kube-api-access-9mf8t\") pod \"certified-operators-sll2r\" (UID: \"f147d295-39e7-4f72-b3e1-117570110550\") " pod="openshift-marketplace/certified-operators-sll2r" Mar 10 19:55:28 crc kubenswrapper[4861]: I0310 19:55:28.019906 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f147d295-39e7-4f72-b3e1-117570110550-utilities\") pod \"certified-operators-sll2r\" (UID: \"f147d295-39e7-4f72-b3e1-117570110550\") " pod="openshift-marketplace/certified-operators-sll2r" Mar 10 19:55:28 crc kubenswrapper[4861]: I0310 19:55:28.019915 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f147d295-39e7-4f72-b3e1-117570110550-catalog-content\") pod \"certified-operators-sll2r\" (UID: \"f147d295-39e7-4f72-b3e1-117570110550\") " pod="openshift-marketplace/certified-operators-sll2r" Mar 10 19:55:28 crc kubenswrapper[4861]: I0310 19:55:28.046400 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9mf8t\" (UniqueName: \"kubernetes.io/projected/f147d295-39e7-4f72-b3e1-117570110550-kube-api-access-9mf8t\") pod \"certified-operators-sll2r\" (UID: \"f147d295-39e7-4f72-b3e1-117570110550\") " pod="openshift-marketplace/certified-operators-sll2r" Mar 10 19:55:28 crc kubenswrapper[4861]: I0310 19:55:28.218087 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-sll2r" Mar 10 19:55:28 crc kubenswrapper[4861]: I0310 19:55:28.677531 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-sll2r"] Mar 10 19:55:29 crc kubenswrapper[4861]: I0310 19:55:29.558745 4861 generic.go:334] "Generic (PLEG): container finished" podID="f147d295-39e7-4f72-b3e1-117570110550" containerID="5ed5d1d5a81ceec52c0510d5e4b6d8d8f67fe629acc0eb7d8887fc0cc7c5e121" exitCode=0 Mar 10 19:55:29 crc kubenswrapper[4861]: I0310 19:55:29.558838 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sll2r" event={"ID":"f147d295-39e7-4f72-b3e1-117570110550","Type":"ContainerDied","Data":"5ed5d1d5a81ceec52c0510d5e4b6d8d8f67fe629acc0eb7d8887fc0cc7c5e121"} Mar 10 19:55:29 crc kubenswrapper[4861]: I0310 19:55:29.559027 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sll2r" event={"ID":"f147d295-39e7-4f72-b3e1-117570110550","Type":"ContainerStarted","Data":"9ab03040a623f22f96a3fbdc1f2bc95af0327f1ff33b8f13f3e3cf563ce1c974"} Mar 10 19:55:29 crc kubenswrapper[4861]: I0310 19:55:29.574517 4861 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 10 19:55:31 crc kubenswrapper[4861]: I0310 19:55:31.596528 4861 generic.go:334] "Generic (PLEG): container finished" podID="f147d295-39e7-4f72-b3e1-117570110550" containerID="5852fa71fa4148372ad829aa9dcb3b410fd4f2370c7d8762cd72835537a424fc" exitCode=0 Mar 10 19:55:31 crc kubenswrapper[4861]: I0310 19:55:31.596585 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sll2r" event={"ID":"f147d295-39e7-4f72-b3e1-117570110550","Type":"ContainerDied","Data":"5852fa71fa4148372ad829aa9dcb3b410fd4f2370c7d8762cd72835537a424fc"} Mar 10 19:55:31 crc kubenswrapper[4861]: I0310 19:55:31.958062 4861 scope.go:117] "RemoveContainer" 
containerID="c3ec1a2ec165344b8b1084cb5463b0d256bafd1af0a85ab3b540a201851e68d6" Mar 10 19:55:31 crc kubenswrapper[4861]: E0310 19:55:31.958297 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qttbr_openshift-machine-config-operator(771189c2-452d-4204-a0b7-abfe9ba62bd0)\"" pod="openshift-machine-config-operator/machine-config-daemon-qttbr" podUID="771189c2-452d-4204-a0b7-abfe9ba62bd0" Mar 10 19:55:32 crc kubenswrapper[4861]: I0310 19:55:32.607983 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sll2r" event={"ID":"f147d295-39e7-4f72-b3e1-117570110550","Type":"ContainerStarted","Data":"2feac79bdebf100c7a61f04fd411dbe1626bbb5a720972b6a92cc5c2b0c9e339"} Mar 10 19:55:32 crc kubenswrapper[4861]: I0310 19:55:32.632836 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-sll2r" podStartSLOduration=3.149089032 podStartE2EDuration="5.632811298s" podCreationTimestamp="2026-03-10 19:55:27 +0000 UTC" firstStartedPulling="2026-03-10 19:55:29.574062081 +0000 UTC m=+4073.337498091" lastFinishedPulling="2026-03-10 19:55:32.057784357 +0000 UTC m=+4075.821220357" observedRunningTime="2026-03-10 19:55:32.630898396 +0000 UTC m=+4076.394334396" watchObservedRunningTime="2026-03-10 19:55:32.632811298 +0000 UTC m=+4076.396247288" Mar 10 19:55:38 crc kubenswrapper[4861]: I0310 19:55:38.219385 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-sll2r" Mar 10 19:55:38 crc kubenswrapper[4861]: I0310 19:55:38.220329 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-sll2r" Mar 10 19:55:38 crc kubenswrapper[4861]: I0310 19:55:38.298255 4861 kubelet.go:2542] 
"SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-sll2r" Mar 10 19:55:38 crc kubenswrapper[4861]: I0310 19:55:38.735164 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-sll2r" Mar 10 19:55:38 crc kubenswrapper[4861]: I0310 19:55:38.805809 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-sll2r"] Mar 10 19:55:40 crc kubenswrapper[4861]: I0310 19:55:40.680901 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-sll2r" podUID="f147d295-39e7-4f72-b3e1-117570110550" containerName="registry-server" containerID="cri-o://2feac79bdebf100c7a61f04fd411dbe1626bbb5a720972b6a92cc5c2b0c9e339" gracePeriod=2 Mar 10 19:55:41 crc kubenswrapper[4861]: I0310 19:55:41.367655 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-sll2r" Mar 10 19:55:41 crc kubenswrapper[4861]: I0310 19:55:41.457092 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f147d295-39e7-4f72-b3e1-117570110550-utilities\") pod \"f147d295-39e7-4f72-b3e1-117570110550\" (UID: \"f147d295-39e7-4f72-b3e1-117570110550\") " Mar 10 19:55:41 crc kubenswrapper[4861]: I0310 19:55:41.457159 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f147d295-39e7-4f72-b3e1-117570110550-catalog-content\") pod \"f147d295-39e7-4f72-b3e1-117570110550\" (UID: \"f147d295-39e7-4f72-b3e1-117570110550\") " Mar 10 19:55:41 crc kubenswrapper[4861]: I0310 19:55:41.457245 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9mf8t\" (UniqueName: 
\"kubernetes.io/projected/f147d295-39e7-4f72-b3e1-117570110550-kube-api-access-9mf8t\") pod \"f147d295-39e7-4f72-b3e1-117570110550\" (UID: \"f147d295-39e7-4f72-b3e1-117570110550\") " Mar 10 19:55:41 crc kubenswrapper[4861]: I0310 19:55:41.458569 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f147d295-39e7-4f72-b3e1-117570110550-utilities" (OuterVolumeSpecName: "utilities") pod "f147d295-39e7-4f72-b3e1-117570110550" (UID: "f147d295-39e7-4f72-b3e1-117570110550"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 10 19:55:41 crc kubenswrapper[4861]: I0310 19:55:41.466128 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f147d295-39e7-4f72-b3e1-117570110550-kube-api-access-9mf8t" (OuterVolumeSpecName: "kube-api-access-9mf8t") pod "f147d295-39e7-4f72-b3e1-117570110550" (UID: "f147d295-39e7-4f72-b3e1-117570110550"). InnerVolumeSpecName "kube-api-access-9mf8t". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 10 19:55:41 crc kubenswrapper[4861]: I0310 19:55:41.559452 4861 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f147d295-39e7-4f72-b3e1-117570110550-utilities\") on node \"crc\" DevicePath \"\"" Mar 10 19:55:41 crc kubenswrapper[4861]: I0310 19:55:41.559805 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9mf8t\" (UniqueName: \"kubernetes.io/projected/f147d295-39e7-4f72-b3e1-117570110550-kube-api-access-9mf8t\") on node \"crc\" DevicePath \"\"" Mar 10 19:55:41 crc kubenswrapper[4861]: I0310 19:55:41.693267 4861 generic.go:334] "Generic (PLEG): container finished" podID="f147d295-39e7-4f72-b3e1-117570110550" containerID="2feac79bdebf100c7a61f04fd411dbe1626bbb5a720972b6a92cc5c2b0c9e339" exitCode=0 Mar 10 19:55:41 crc kubenswrapper[4861]: I0310 19:55:41.693308 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sll2r" event={"ID":"f147d295-39e7-4f72-b3e1-117570110550","Type":"ContainerDied","Data":"2feac79bdebf100c7a61f04fd411dbe1626bbb5a720972b6a92cc5c2b0c9e339"} Mar 10 19:55:41 crc kubenswrapper[4861]: I0310 19:55:41.693370 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sll2r" event={"ID":"f147d295-39e7-4f72-b3e1-117570110550","Type":"ContainerDied","Data":"9ab03040a623f22f96a3fbdc1f2bc95af0327f1ff33b8f13f3e3cf563ce1c974"} Mar 10 19:55:41 crc kubenswrapper[4861]: I0310 19:55:41.693394 4861 scope.go:117] "RemoveContainer" containerID="2feac79bdebf100c7a61f04fd411dbe1626bbb5a720972b6a92cc5c2b0c9e339" Mar 10 19:55:41 crc kubenswrapper[4861]: I0310 19:55:41.693409 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-sll2r" Mar 10 19:55:41 crc kubenswrapper[4861]: I0310 19:55:41.722462 4861 scope.go:117] "RemoveContainer" containerID="5852fa71fa4148372ad829aa9dcb3b410fd4f2370c7d8762cd72835537a424fc" Mar 10 19:55:41 crc kubenswrapper[4861]: I0310 19:55:41.750207 4861 scope.go:117] "RemoveContainer" containerID="5ed5d1d5a81ceec52c0510d5e4b6d8d8f67fe629acc0eb7d8887fc0cc7c5e121" Mar 10 19:55:41 crc kubenswrapper[4861]: I0310 19:55:41.776952 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f147d295-39e7-4f72-b3e1-117570110550-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f147d295-39e7-4f72-b3e1-117570110550" (UID: "f147d295-39e7-4f72-b3e1-117570110550"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 10 19:55:41 crc kubenswrapper[4861]: I0310 19:55:41.788011 4861 scope.go:117] "RemoveContainer" containerID="2feac79bdebf100c7a61f04fd411dbe1626bbb5a720972b6a92cc5c2b0c9e339" Mar 10 19:55:41 crc kubenswrapper[4861]: E0310 19:55:41.788604 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2feac79bdebf100c7a61f04fd411dbe1626bbb5a720972b6a92cc5c2b0c9e339\": container with ID starting with 2feac79bdebf100c7a61f04fd411dbe1626bbb5a720972b6a92cc5c2b0c9e339 not found: ID does not exist" containerID="2feac79bdebf100c7a61f04fd411dbe1626bbb5a720972b6a92cc5c2b0c9e339" Mar 10 19:55:41 crc kubenswrapper[4861]: I0310 19:55:41.788669 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2feac79bdebf100c7a61f04fd411dbe1626bbb5a720972b6a92cc5c2b0c9e339"} err="failed to get container status \"2feac79bdebf100c7a61f04fd411dbe1626bbb5a720972b6a92cc5c2b0c9e339\": rpc error: code = NotFound desc = could not find container \"2feac79bdebf100c7a61f04fd411dbe1626bbb5a720972b6a92cc5c2b0c9e339\": 
container with ID starting with 2feac79bdebf100c7a61f04fd411dbe1626bbb5a720972b6a92cc5c2b0c9e339 not found: ID does not exist" Mar 10 19:55:41 crc kubenswrapper[4861]: I0310 19:55:41.788748 4861 scope.go:117] "RemoveContainer" containerID="5852fa71fa4148372ad829aa9dcb3b410fd4f2370c7d8762cd72835537a424fc" Mar 10 19:55:41 crc kubenswrapper[4861]: E0310 19:55:41.789334 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5852fa71fa4148372ad829aa9dcb3b410fd4f2370c7d8762cd72835537a424fc\": container with ID starting with 5852fa71fa4148372ad829aa9dcb3b410fd4f2370c7d8762cd72835537a424fc not found: ID does not exist" containerID="5852fa71fa4148372ad829aa9dcb3b410fd4f2370c7d8762cd72835537a424fc" Mar 10 19:55:41 crc kubenswrapper[4861]: I0310 19:55:41.789408 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5852fa71fa4148372ad829aa9dcb3b410fd4f2370c7d8762cd72835537a424fc"} err="failed to get container status \"5852fa71fa4148372ad829aa9dcb3b410fd4f2370c7d8762cd72835537a424fc\": rpc error: code = NotFound desc = could not find container \"5852fa71fa4148372ad829aa9dcb3b410fd4f2370c7d8762cd72835537a424fc\": container with ID starting with 5852fa71fa4148372ad829aa9dcb3b410fd4f2370c7d8762cd72835537a424fc not found: ID does not exist" Mar 10 19:55:41 crc kubenswrapper[4861]: I0310 19:55:41.789463 4861 scope.go:117] "RemoveContainer" containerID="5ed5d1d5a81ceec52c0510d5e4b6d8d8f67fe629acc0eb7d8887fc0cc7c5e121" Mar 10 19:55:41 crc kubenswrapper[4861]: E0310 19:55:41.789958 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5ed5d1d5a81ceec52c0510d5e4b6d8d8f67fe629acc0eb7d8887fc0cc7c5e121\": container with ID starting with 5ed5d1d5a81ceec52c0510d5e4b6d8d8f67fe629acc0eb7d8887fc0cc7c5e121 not found: ID does not exist" 
containerID="5ed5d1d5a81ceec52c0510d5e4b6d8d8f67fe629acc0eb7d8887fc0cc7c5e121" Mar 10 19:55:41 crc kubenswrapper[4861]: I0310 19:55:41.790008 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5ed5d1d5a81ceec52c0510d5e4b6d8d8f67fe629acc0eb7d8887fc0cc7c5e121"} err="failed to get container status \"5ed5d1d5a81ceec52c0510d5e4b6d8d8f67fe629acc0eb7d8887fc0cc7c5e121\": rpc error: code = NotFound desc = could not find container \"5ed5d1d5a81ceec52c0510d5e4b6d8d8f67fe629acc0eb7d8887fc0cc7c5e121\": container with ID starting with 5ed5d1d5a81ceec52c0510d5e4b6d8d8f67fe629acc0eb7d8887fc0cc7c5e121 not found: ID does not exist" Mar 10 19:55:41 crc kubenswrapper[4861]: I0310 19:55:41.867303 4861 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f147d295-39e7-4f72-b3e1-117570110550-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 10 19:55:42 crc kubenswrapper[4861]: I0310 19:55:42.050989 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-sll2r"] Mar 10 19:55:42 crc kubenswrapper[4861]: I0310 19:55:42.064970 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-sll2r"] Mar 10 19:55:42 crc kubenswrapper[4861]: I0310 19:55:42.973137 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f147d295-39e7-4f72-b3e1-117570110550" path="/var/lib/kubelet/pods/f147d295-39e7-4f72-b3e1-117570110550/volumes" Mar 10 19:55:45 crc kubenswrapper[4861]: I0310 19:55:45.958023 4861 scope.go:117] "RemoveContainer" containerID="c3ec1a2ec165344b8b1084cb5463b0d256bafd1af0a85ab3b540a201851e68d6" Mar 10 19:55:45 crc kubenswrapper[4861]: E0310 19:55:45.959072 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-qttbr_openshift-machine-config-operator(771189c2-452d-4204-a0b7-abfe9ba62bd0)\"" pod="openshift-machine-config-operator/machine-config-daemon-qttbr" podUID="771189c2-452d-4204-a0b7-abfe9ba62bd0" Mar 10 19:55:59 crc kubenswrapper[4861]: I0310 19:55:59.958341 4861 scope.go:117] "RemoveContainer" containerID="c3ec1a2ec165344b8b1084cb5463b0d256bafd1af0a85ab3b540a201851e68d6" Mar 10 19:55:59 crc kubenswrapper[4861]: E0310 19:55:59.960182 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qttbr_openshift-machine-config-operator(771189c2-452d-4204-a0b7-abfe9ba62bd0)\"" pod="openshift-machine-config-operator/machine-config-daemon-qttbr" podUID="771189c2-452d-4204-a0b7-abfe9ba62bd0" Mar 10 19:56:00 crc kubenswrapper[4861]: I0310 19:56:00.166265 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29552876-zxgkd"] Mar 10 19:56:00 crc kubenswrapper[4861]: E0310 19:56:00.166742 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f147d295-39e7-4f72-b3e1-117570110550" containerName="extract-utilities" Mar 10 19:56:00 crc kubenswrapper[4861]: I0310 19:56:00.166772 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="f147d295-39e7-4f72-b3e1-117570110550" containerName="extract-utilities" Mar 10 19:56:00 crc kubenswrapper[4861]: E0310 19:56:00.166804 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f147d295-39e7-4f72-b3e1-117570110550" containerName="registry-server" Mar 10 19:56:00 crc kubenswrapper[4861]: I0310 19:56:00.166817 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="f147d295-39e7-4f72-b3e1-117570110550" containerName="registry-server" Mar 10 19:56:00 crc kubenswrapper[4861]: E0310 19:56:00.166842 4861 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="f147d295-39e7-4f72-b3e1-117570110550" containerName="extract-content" Mar 10 19:56:00 crc kubenswrapper[4861]: I0310 19:56:00.166855 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="f147d295-39e7-4f72-b3e1-117570110550" containerName="extract-content" Mar 10 19:56:00 crc kubenswrapper[4861]: I0310 19:56:00.167098 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="f147d295-39e7-4f72-b3e1-117570110550" containerName="registry-server" Mar 10 19:56:00 crc kubenswrapper[4861]: I0310 19:56:00.167863 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29552876-zxgkd" Mar 10 19:56:00 crc kubenswrapper[4861]: I0310 19:56:00.172191 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-gfbj2" Mar 10 19:56:00 crc kubenswrapper[4861]: I0310 19:56:00.172796 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 10 19:56:00 crc kubenswrapper[4861]: I0310 19:56:00.173004 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 10 19:56:00 crc kubenswrapper[4861]: I0310 19:56:00.187302 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29552876-zxgkd"] Mar 10 19:56:00 crc kubenswrapper[4861]: I0310 19:56:00.312677 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zv72l\" (UniqueName: \"kubernetes.io/projected/321502cb-96d6-431e-8cf9-05fcea2fb723-kube-api-access-zv72l\") pod \"auto-csr-approver-29552876-zxgkd\" (UID: \"321502cb-96d6-431e-8cf9-05fcea2fb723\") " pod="openshift-infra/auto-csr-approver-29552876-zxgkd" Mar 10 19:56:00 crc kubenswrapper[4861]: I0310 19:56:00.414261 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zv72l\" (UniqueName: 
\"kubernetes.io/projected/321502cb-96d6-431e-8cf9-05fcea2fb723-kube-api-access-zv72l\") pod \"auto-csr-approver-29552876-zxgkd\" (UID: \"321502cb-96d6-431e-8cf9-05fcea2fb723\") " pod="openshift-infra/auto-csr-approver-29552876-zxgkd" Mar 10 19:56:00 crc kubenswrapper[4861]: I0310 19:56:00.439043 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zv72l\" (UniqueName: \"kubernetes.io/projected/321502cb-96d6-431e-8cf9-05fcea2fb723-kube-api-access-zv72l\") pod \"auto-csr-approver-29552876-zxgkd\" (UID: \"321502cb-96d6-431e-8cf9-05fcea2fb723\") " pod="openshift-infra/auto-csr-approver-29552876-zxgkd" Mar 10 19:56:00 crc kubenswrapper[4861]: I0310 19:56:00.496469 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29552876-zxgkd" Mar 10 19:56:00 crc kubenswrapper[4861]: W0310 19:56:00.987688 4861 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod321502cb_96d6_431e_8cf9_05fcea2fb723.slice/crio-d626a21d118c8b524ead6b8b09a84a37c12beeb58d9b5fd47d609e1e6555a25d WatchSource:0}: Error finding container d626a21d118c8b524ead6b8b09a84a37c12beeb58d9b5fd47d609e1e6555a25d: Status 404 returned error can't find the container with id d626a21d118c8b524ead6b8b09a84a37c12beeb58d9b5fd47d609e1e6555a25d Mar 10 19:56:00 crc kubenswrapper[4861]: I0310 19:56:00.989230 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29552876-zxgkd"] Mar 10 19:56:01 crc kubenswrapper[4861]: I0310 19:56:01.903603 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29552876-zxgkd" event={"ID":"321502cb-96d6-431e-8cf9-05fcea2fb723","Type":"ContainerStarted","Data":"d626a21d118c8b524ead6b8b09a84a37c12beeb58d9b5fd47d609e1e6555a25d"} Mar 10 19:56:03 crc kubenswrapper[4861]: I0310 19:56:03.930140 4861 generic.go:334] "Generic (PLEG): container finished" 
podID="321502cb-96d6-431e-8cf9-05fcea2fb723" containerID="e3f6a2ffb900648175a933cc513817f58a16c851f19ce55592c0af4123197b32" exitCode=0 Mar 10 19:56:03 crc kubenswrapper[4861]: I0310 19:56:03.930236 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29552876-zxgkd" event={"ID":"321502cb-96d6-431e-8cf9-05fcea2fb723","Type":"ContainerDied","Data":"e3f6a2ffb900648175a933cc513817f58a16c851f19ce55592c0af4123197b32"} Mar 10 19:56:05 crc kubenswrapper[4861]: I0310 19:56:05.397358 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29552876-zxgkd" Mar 10 19:56:05 crc kubenswrapper[4861]: I0310 19:56:05.511930 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zv72l\" (UniqueName: \"kubernetes.io/projected/321502cb-96d6-431e-8cf9-05fcea2fb723-kube-api-access-zv72l\") pod \"321502cb-96d6-431e-8cf9-05fcea2fb723\" (UID: \"321502cb-96d6-431e-8cf9-05fcea2fb723\") " Mar 10 19:56:05 crc kubenswrapper[4861]: I0310 19:56:05.520092 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/321502cb-96d6-431e-8cf9-05fcea2fb723-kube-api-access-zv72l" (OuterVolumeSpecName: "kube-api-access-zv72l") pod "321502cb-96d6-431e-8cf9-05fcea2fb723" (UID: "321502cb-96d6-431e-8cf9-05fcea2fb723"). InnerVolumeSpecName "kube-api-access-zv72l". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 10 19:56:05 crc kubenswrapper[4861]: I0310 19:56:05.613862 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zv72l\" (UniqueName: \"kubernetes.io/projected/321502cb-96d6-431e-8cf9-05fcea2fb723-kube-api-access-zv72l\") on node \"crc\" DevicePath \"\"" Mar 10 19:56:05 crc kubenswrapper[4861]: I0310 19:56:05.955467 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29552876-zxgkd" event={"ID":"321502cb-96d6-431e-8cf9-05fcea2fb723","Type":"ContainerDied","Data":"d626a21d118c8b524ead6b8b09a84a37c12beeb58d9b5fd47d609e1e6555a25d"} Mar 10 19:56:05 crc kubenswrapper[4861]: I0310 19:56:05.955907 4861 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d626a21d118c8b524ead6b8b09a84a37c12beeb58d9b5fd47d609e1e6555a25d" Mar 10 19:56:05 crc kubenswrapper[4861]: I0310 19:56:05.955625 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29552876-zxgkd" Mar 10 19:56:06 crc kubenswrapper[4861]: I0310 19:56:06.499027 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29552870-dk2km"] Mar 10 19:56:06 crc kubenswrapper[4861]: I0310 19:56:06.508960 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29552870-dk2km"] Mar 10 19:56:06 crc kubenswrapper[4861]: I0310 19:56:06.978676 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cc94b081-9b7e-497f-979b-d38019f9f99d" path="/var/lib/kubelet/pods/cc94b081-9b7e-497f-979b-d38019f9f99d/volumes" Mar 10 19:56:11 crc kubenswrapper[4861]: I0310 19:56:11.960011 4861 scope.go:117] "RemoveContainer" containerID="c3ec1a2ec165344b8b1084cb5463b0d256bafd1af0a85ab3b540a201851e68d6" Mar 10 19:56:11 crc kubenswrapper[4861]: E0310 19:56:11.960814 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qttbr_openshift-machine-config-operator(771189c2-452d-4204-a0b7-abfe9ba62bd0)\"" pod="openshift-machine-config-operator/machine-config-daemon-qttbr" podUID="771189c2-452d-4204-a0b7-abfe9ba62bd0" Mar 10 19:56:25 crc kubenswrapper[4861]: I0310 19:56:25.958432 4861 scope.go:117] "RemoveContainer" containerID="c3ec1a2ec165344b8b1084cb5463b0d256bafd1af0a85ab3b540a201851e68d6" Mar 10 19:56:25 crc kubenswrapper[4861]: E0310 19:56:25.959384 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qttbr_openshift-machine-config-operator(771189c2-452d-4204-a0b7-abfe9ba62bd0)\"" pod="openshift-machine-config-operator/machine-config-daemon-qttbr" podUID="771189c2-452d-4204-a0b7-abfe9ba62bd0" Mar 10 19:56:28 crc kubenswrapper[4861]: I0310 19:56:28.328884 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-bxfr2"] Mar 10 19:56:28 crc kubenswrapper[4861]: E0310 19:56:28.329468 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="321502cb-96d6-431e-8cf9-05fcea2fb723" containerName="oc" Mar 10 19:56:28 crc kubenswrapper[4861]: I0310 19:56:28.329480 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="321502cb-96d6-431e-8cf9-05fcea2fb723" containerName="oc" Mar 10 19:56:28 crc kubenswrapper[4861]: I0310 19:56:28.329619 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="321502cb-96d6-431e-8cf9-05fcea2fb723" containerName="oc" Mar 10 19:56:28 crc kubenswrapper[4861]: I0310 19:56:28.330770 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-bxfr2" Mar 10 19:56:28 crc kubenswrapper[4861]: I0310 19:56:28.347574 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-bxfr2"] Mar 10 19:56:28 crc kubenswrapper[4861]: I0310 19:56:28.414969 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9jl9g\" (UniqueName: \"kubernetes.io/projected/66de1dc9-fa95-4aef-9090-63840d2e0a07-kube-api-access-9jl9g\") pod \"community-operators-bxfr2\" (UID: \"66de1dc9-fa95-4aef-9090-63840d2e0a07\") " pod="openshift-marketplace/community-operators-bxfr2" Mar 10 19:56:28 crc kubenswrapper[4861]: I0310 19:56:28.415047 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/66de1dc9-fa95-4aef-9090-63840d2e0a07-utilities\") pod \"community-operators-bxfr2\" (UID: \"66de1dc9-fa95-4aef-9090-63840d2e0a07\") " pod="openshift-marketplace/community-operators-bxfr2" Mar 10 19:56:28 crc kubenswrapper[4861]: I0310 19:56:28.415107 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/66de1dc9-fa95-4aef-9090-63840d2e0a07-catalog-content\") pod \"community-operators-bxfr2\" (UID: \"66de1dc9-fa95-4aef-9090-63840d2e0a07\") " pod="openshift-marketplace/community-operators-bxfr2" Mar 10 19:56:28 crc kubenswrapper[4861]: I0310 19:56:28.517926 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9jl9g\" (UniqueName: \"kubernetes.io/projected/66de1dc9-fa95-4aef-9090-63840d2e0a07-kube-api-access-9jl9g\") pod \"community-operators-bxfr2\" (UID: \"66de1dc9-fa95-4aef-9090-63840d2e0a07\") " pod="openshift-marketplace/community-operators-bxfr2" Mar 10 19:56:28 crc kubenswrapper[4861]: I0310 19:56:28.518104 4861 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/66de1dc9-fa95-4aef-9090-63840d2e0a07-utilities\") pod \"community-operators-bxfr2\" (UID: \"66de1dc9-fa95-4aef-9090-63840d2e0a07\") " pod="openshift-marketplace/community-operators-bxfr2" Mar 10 19:56:28 crc kubenswrapper[4861]: I0310 19:56:28.519065 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/66de1dc9-fa95-4aef-9090-63840d2e0a07-utilities\") pod \"community-operators-bxfr2\" (UID: \"66de1dc9-fa95-4aef-9090-63840d2e0a07\") " pod="openshift-marketplace/community-operators-bxfr2" Mar 10 19:56:28 crc kubenswrapper[4861]: I0310 19:56:28.519403 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/66de1dc9-fa95-4aef-9090-63840d2e0a07-catalog-content\") pod \"community-operators-bxfr2\" (UID: \"66de1dc9-fa95-4aef-9090-63840d2e0a07\") " pod="openshift-marketplace/community-operators-bxfr2" Mar 10 19:56:28 crc kubenswrapper[4861]: I0310 19:56:28.520113 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/66de1dc9-fa95-4aef-9090-63840d2e0a07-catalog-content\") pod \"community-operators-bxfr2\" (UID: \"66de1dc9-fa95-4aef-9090-63840d2e0a07\") " pod="openshift-marketplace/community-operators-bxfr2" Mar 10 19:56:28 crc kubenswrapper[4861]: I0310 19:56:28.552934 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9jl9g\" (UniqueName: \"kubernetes.io/projected/66de1dc9-fa95-4aef-9090-63840d2e0a07-kube-api-access-9jl9g\") pod \"community-operators-bxfr2\" (UID: \"66de1dc9-fa95-4aef-9090-63840d2e0a07\") " pod="openshift-marketplace/community-operators-bxfr2" Mar 10 19:56:28 crc kubenswrapper[4861]: I0310 19:56:28.666641 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-bxfr2" Mar 10 19:56:29 crc kubenswrapper[4861]: I0310 19:56:29.104486 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-bxfr2"] Mar 10 19:56:29 crc kubenswrapper[4861]: I0310 19:56:29.191621 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bxfr2" event={"ID":"66de1dc9-fa95-4aef-9090-63840d2e0a07","Type":"ContainerStarted","Data":"bba496af1276a35232d08c4df0b5d04834f026818dd8746120ba78f585d4c4b5"} Mar 10 19:56:30 crc kubenswrapper[4861]: I0310 19:56:30.201929 4861 generic.go:334] "Generic (PLEG): container finished" podID="66de1dc9-fa95-4aef-9090-63840d2e0a07" containerID="979dd88e5a9644c28a3a25e9cdae2b0ffd582b0076c6e8778a32d7f793ead529" exitCode=0 Mar 10 19:56:30 crc kubenswrapper[4861]: I0310 19:56:30.201992 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bxfr2" event={"ID":"66de1dc9-fa95-4aef-9090-63840d2e0a07","Type":"ContainerDied","Data":"979dd88e5a9644c28a3a25e9cdae2b0ffd582b0076c6e8778a32d7f793ead529"} Mar 10 19:56:30 crc kubenswrapper[4861]: I0310 19:56:30.529024 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-b9lt2"] Mar 10 19:56:30 crc kubenswrapper[4861]: I0310 19:56:30.532239 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-b9lt2" Mar 10 19:56:30 crc kubenswrapper[4861]: I0310 19:56:30.553908 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-b9lt2"] Mar 10 19:56:30 crc kubenswrapper[4861]: I0310 19:56:30.657080 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5d8454ef-69b5-4a76-92ec-2334f04500a9-catalog-content\") pod \"redhat-operators-b9lt2\" (UID: \"5d8454ef-69b5-4a76-92ec-2334f04500a9\") " pod="openshift-marketplace/redhat-operators-b9lt2" Mar 10 19:56:30 crc kubenswrapper[4861]: I0310 19:56:30.657264 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t5ckg\" (UniqueName: \"kubernetes.io/projected/5d8454ef-69b5-4a76-92ec-2334f04500a9-kube-api-access-t5ckg\") pod \"redhat-operators-b9lt2\" (UID: \"5d8454ef-69b5-4a76-92ec-2334f04500a9\") " pod="openshift-marketplace/redhat-operators-b9lt2" Mar 10 19:56:30 crc kubenswrapper[4861]: I0310 19:56:30.657568 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5d8454ef-69b5-4a76-92ec-2334f04500a9-utilities\") pod \"redhat-operators-b9lt2\" (UID: \"5d8454ef-69b5-4a76-92ec-2334f04500a9\") " pod="openshift-marketplace/redhat-operators-b9lt2" Mar 10 19:56:30 crc kubenswrapper[4861]: I0310 19:56:30.759188 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5d8454ef-69b5-4a76-92ec-2334f04500a9-utilities\") pod \"redhat-operators-b9lt2\" (UID: \"5d8454ef-69b5-4a76-92ec-2334f04500a9\") " pod="openshift-marketplace/redhat-operators-b9lt2" Mar 10 19:56:30 crc kubenswrapper[4861]: I0310 19:56:30.759316 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5d8454ef-69b5-4a76-92ec-2334f04500a9-catalog-content\") pod \"redhat-operators-b9lt2\" (UID: \"5d8454ef-69b5-4a76-92ec-2334f04500a9\") " pod="openshift-marketplace/redhat-operators-b9lt2" Mar 10 19:56:30 crc kubenswrapper[4861]: I0310 19:56:30.759396 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t5ckg\" (UniqueName: \"kubernetes.io/projected/5d8454ef-69b5-4a76-92ec-2334f04500a9-kube-api-access-t5ckg\") pod \"redhat-operators-b9lt2\" (UID: \"5d8454ef-69b5-4a76-92ec-2334f04500a9\") " pod="openshift-marketplace/redhat-operators-b9lt2" Mar 10 19:56:30 crc kubenswrapper[4861]: I0310 19:56:30.760143 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5d8454ef-69b5-4a76-92ec-2334f04500a9-utilities\") pod \"redhat-operators-b9lt2\" (UID: \"5d8454ef-69b5-4a76-92ec-2334f04500a9\") " pod="openshift-marketplace/redhat-operators-b9lt2" Mar 10 19:56:30 crc kubenswrapper[4861]: I0310 19:56:30.760406 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5d8454ef-69b5-4a76-92ec-2334f04500a9-catalog-content\") pod \"redhat-operators-b9lt2\" (UID: \"5d8454ef-69b5-4a76-92ec-2334f04500a9\") " pod="openshift-marketplace/redhat-operators-b9lt2" Mar 10 19:56:30 crc kubenswrapper[4861]: I0310 19:56:30.797667 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t5ckg\" (UniqueName: \"kubernetes.io/projected/5d8454ef-69b5-4a76-92ec-2334f04500a9-kube-api-access-t5ckg\") pod \"redhat-operators-b9lt2\" (UID: \"5d8454ef-69b5-4a76-92ec-2334f04500a9\") " pod="openshift-marketplace/redhat-operators-b9lt2" Mar 10 19:56:30 crc kubenswrapper[4861]: I0310 19:56:30.876046 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-b9lt2" Mar 10 19:56:31 crc kubenswrapper[4861]: I0310 19:56:31.208933 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bxfr2" event={"ID":"66de1dc9-fa95-4aef-9090-63840d2e0a07","Type":"ContainerStarted","Data":"1be093ad186bbee1bc493bd8b88a1b842461ed49a072eddfcaf1b3cfdfae04e4"} Mar 10 19:56:31 crc kubenswrapper[4861]: I0310 19:56:31.418431 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-b9lt2"] Mar 10 19:56:32 crc kubenswrapper[4861]: I0310 19:56:32.220199 4861 generic.go:334] "Generic (PLEG): container finished" podID="5d8454ef-69b5-4a76-92ec-2334f04500a9" containerID="c1d44c9f5865d6f11f2b54bae7013df45f84b2cfe8e94660ced8b8e046702728" exitCode=0 Mar 10 19:56:32 crc kubenswrapper[4861]: I0310 19:56:32.220274 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-b9lt2" event={"ID":"5d8454ef-69b5-4a76-92ec-2334f04500a9","Type":"ContainerDied","Data":"c1d44c9f5865d6f11f2b54bae7013df45f84b2cfe8e94660ced8b8e046702728"} Mar 10 19:56:32 crc kubenswrapper[4861]: I0310 19:56:32.220304 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-b9lt2" event={"ID":"5d8454ef-69b5-4a76-92ec-2334f04500a9","Type":"ContainerStarted","Data":"37e56b1b58cbe028549f3be07aca639ef626139249457c52c7d5542a276f22be"} Mar 10 19:56:32 crc kubenswrapper[4861]: I0310 19:56:32.222555 4861 generic.go:334] "Generic (PLEG): container finished" podID="66de1dc9-fa95-4aef-9090-63840d2e0a07" containerID="1be093ad186bbee1bc493bd8b88a1b842461ed49a072eddfcaf1b3cfdfae04e4" exitCode=0 Mar 10 19:56:32 crc kubenswrapper[4861]: I0310 19:56:32.222600 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bxfr2" 
event={"ID":"66de1dc9-fa95-4aef-9090-63840d2e0a07","Type":"ContainerDied","Data":"1be093ad186bbee1bc493bd8b88a1b842461ed49a072eddfcaf1b3cfdfae04e4"} Mar 10 19:56:33 crc kubenswrapper[4861]: I0310 19:56:33.238220 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bxfr2" event={"ID":"66de1dc9-fa95-4aef-9090-63840d2e0a07","Type":"ContainerStarted","Data":"ca5704e69869017d3ea262ecb04b0a94ba4c7a794738d5a996da7cc6cdc610f6"} Mar 10 19:56:33 crc kubenswrapper[4861]: I0310 19:56:33.285205 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-bxfr2" podStartSLOduration=2.751558558 podStartE2EDuration="5.285171166s" podCreationTimestamp="2026-03-10 19:56:28 +0000 UTC" firstStartedPulling="2026-03-10 19:56:30.20443475 +0000 UTC m=+4133.967870750" lastFinishedPulling="2026-03-10 19:56:32.738047358 +0000 UTC m=+4136.501483358" observedRunningTime="2026-03-10 19:56:33.267534975 +0000 UTC m=+4137.030970985" watchObservedRunningTime="2026-03-10 19:56:33.285171166 +0000 UTC m=+4137.048607176" Mar 10 19:56:34 crc kubenswrapper[4861]: I0310 19:56:34.262851 4861 generic.go:334] "Generic (PLEG): container finished" podID="5d8454ef-69b5-4a76-92ec-2334f04500a9" containerID="f8618883abc0a526020efb460476b83a6c9bab327fb2dba945a05c762b1f5f4c" exitCode=0 Mar 10 19:56:34 crc kubenswrapper[4861]: I0310 19:56:34.264214 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-b9lt2" event={"ID":"5d8454ef-69b5-4a76-92ec-2334f04500a9","Type":"ContainerDied","Data":"f8618883abc0a526020efb460476b83a6c9bab327fb2dba945a05c762b1f5f4c"} Mar 10 19:56:35 crc kubenswrapper[4861]: I0310 19:56:35.273189 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-b9lt2" 
event={"ID":"5d8454ef-69b5-4a76-92ec-2334f04500a9","Type":"ContainerStarted","Data":"66cb6a3fc392a3070d183949bd88f2d5ec9c1fa37f27314f5cc77e8375971722"} Mar 10 19:56:35 crc kubenswrapper[4861]: I0310 19:56:35.300732 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-b9lt2" podStartSLOduration=2.8547819150000002 podStartE2EDuration="5.300700169s" podCreationTimestamp="2026-03-10 19:56:30 +0000 UTC" firstStartedPulling="2026-03-10 19:56:32.222622564 +0000 UTC m=+4135.986058554" lastFinishedPulling="2026-03-10 19:56:34.668540808 +0000 UTC m=+4138.431976808" observedRunningTime="2026-03-10 19:56:35.295125947 +0000 UTC m=+4139.058561927" watchObservedRunningTime="2026-03-10 19:56:35.300700169 +0000 UTC m=+4139.064136129" Mar 10 19:56:38 crc kubenswrapper[4861]: I0310 19:56:38.667178 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-bxfr2" Mar 10 19:56:38 crc kubenswrapper[4861]: I0310 19:56:38.667766 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-bxfr2" Mar 10 19:56:38 crc kubenswrapper[4861]: I0310 19:56:38.736154 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-bxfr2" Mar 10 19:56:39 crc kubenswrapper[4861]: I0310 19:56:39.380004 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-bxfr2" Mar 10 19:56:39 crc kubenswrapper[4861]: I0310 19:56:39.508337 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-bxfr2"] Mar 10 19:56:39 crc kubenswrapper[4861]: I0310 19:56:39.957835 4861 scope.go:117] "RemoveContainer" containerID="c3ec1a2ec165344b8b1084cb5463b0d256bafd1af0a85ab3b540a201851e68d6" Mar 10 19:56:39 crc kubenswrapper[4861]: E0310 19:56:39.958151 4861 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qttbr_openshift-machine-config-operator(771189c2-452d-4204-a0b7-abfe9ba62bd0)\"" pod="openshift-machine-config-operator/machine-config-daemon-qttbr" podUID="771189c2-452d-4204-a0b7-abfe9ba62bd0" Mar 10 19:56:40 crc kubenswrapper[4861]: I0310 19:56:40.876180 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-b9lt2" Mar 10 19:56:40 crc kubenswrapper[4861]: I0310 19:56:40.876598 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-b9lt2" Mar 10 19:56:41 crc kubenswrapper[4861]: I0310 19:56:41.325877 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-bxfr2" podUID="66de1dc9-fa95-4aef-9090-63840d2e0a07" containerName="registry-server" containerID="cri-o://ca5704e69869017d3ea262ecb04b0a94ba4c7a794738d5a996da7cc6cdc610f6" gracePeriod=2 Mar 10 19:56:41 crc kubenswrapper[4861]: I0310 19:56:41.786789 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-bxfr2" Mar 10 19:56:41 crc kubenswrapper[4861]: I0310 19:56:41.823414 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/66de1dc9-fa95-4aef-9090-63840d2e0a07-catalog-content\") pod \"66de1dc9-fa95-4aef-9090-63840d2e0a07\" (UID: \"66de1dc9-fa95-4aef-9090-63840d2e0a07\") " Mar 10 19:56:41 crc kubenswrapper[4861]: I0310 19:56:41.823520 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9jl9g\" (UniqueName: \"kubernetes.io/projected/66de1dc9-fa95-4aef-9090-63840d2e0a07-kube-api-access-9jl9g\") pod \"66de1dc9-fa95-4aef-9090-63840d2e0a07\" (UID: \"66de1dc9-fa95-4aef-9090-63840d2e0a07\") " Mar 10 19:56:41 crc kubenswrapper[4861]: I0310 19:56:41.823612 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/66de1dc9-fa95-4aef-9090-63840d2e0a07-utilities\") pod \"66de1dc9-fa95-4aef-9090-63840d2e0a07\" (UID: \"66de1dc9-fa95-4aef-9090-63840d2e0a07\") " Mar 10 19:56:41 crc kubenswrapper[4861]: I0310 19:56:41.825421 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/66de1dc9-fa95-4aef-9090-63840d2e0a07-utilities" (OuterVolumeSpecName: "utilities") pod "66de1dc9-fa95-4aef-9090-63840d2e0a07" (UID: "66de1dc9-fa95-4aef-9090-63840d2e0a07"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 10 19:56:41 crc kubenswrapper[4861]: I0310 19:56:41.830813 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/66de1dc9-fa95-4aef-9090-63840d2e0a07-kube-api-access-9jl9g" (OuterVolumeSpecName: "kube-api-access-9jl9g") pod "66de1dc9-fa95-4aef-9090-63840d2e0a07" (UID: "66de1dc9-fa95-4aef-9090-63840d2e0a07"). InnerVolumeSpecName "kube-api-access-9jl9g". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 10 19:56:41 crc kubenswrapper[4861]: I0310 19:56:41.925655 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9jl9g\" (UniqueName: \"kubernetes.io/projected/66de1dc9-fa95-4aef-9090-63840d2e0a07-kube-api-access-9jl9g\") on node \"crc\" DevicePath \"\"" Mar 10 19:56:41 crc kubenswrapper[4861]: I0310 19:56:41.925691 4861 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/66de1dc9-fa95-4aef-9090-63840d2e0a07-utilities\") on node \"crc\" DevicePath \"\"" Mar 10 19:56:41 crc kubenswrapper[4861]: I0310 19:56:41.925966 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/66de1dc9-fa95-4aef-9090-63840d2e0a07-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "66de1dc9-fa95-4aef-9090-63840d2e0a07" (UID: "66de1dc9-fa95-4aef-9090-63840d2e0a07"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 10 19:56:41 crc kubenswrapper[4861]: I0310 19:56:41.949155 4861 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-b9lt2" podUID="5d8454ef-69b5-4a76-92ec-2334f04500a9" containerName="registry-server" probeResult="failure" output=< Mar 10 19:56:41 crc kubenswrapper[4861]: timeout: failed to connect service ":50051" within 1s Mar 10 19:56:41 crc kubenswrapper[4861]: > Mar 10 19:56:42 crc kubenswrapper[4861]: I0310 19:56:42.028951 4861 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/66de1dc9-fa95-4aef-9090-63840d2e0a07-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 10 19:56:42 crc kubenswrapper[4861]: I0310 19:56:42.339352 4861 generic.go:334] "Generic (PLEG): container finished" podID="66de1dc9-fa95-4aef-9090-63840d2e0a07" containerID="ca5704e69869017d3ea262ecb04b0a94ba4c7a794738d5a996da7cc6cdc610f6" exitCode=0 Mar 10 19:56:42 
crc kubenswrapper[4861]: I0310 19:56:42.339429 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bxfr2" event={"ID":"66de1dc9-fa95-4aef-9090-63840d2e0a07","Type":"ContainerDied","Data":"ca5704e69869017d3ea262ecb04b0a94ba4c7a794738d5a996da7cc6cdc610f6"} Mar 10 19:56:42 crc kubenswrapper[4861]: I0310 19:56:42.339481 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bxfr2" event={"ID":"66de1dc9-fa95-4aef-9090-63840d2e0a07","Type":"ContainerDied","Data":"bba496af1276a35232d08c4df0b5d04834f026818dd8746120ba78f585d4c4b5"} Mar 10 19:56:42 crc kubenswrapper[4861]: I0310 19:56:42.339517 4861 scope.go:117] "RemoveContainer" containerID="ca5704e69869017d3ea262ecb04b0a94ba4c7a794738d5a996da7cc6cdc610f6" Mar 10 19:56:42 crc kubenswrapper[4861]: I0310 19:56:42.339788 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-bxfr2" Mar 10 19:56:42 crc kubenswrapper[4861]: I0310 19:56:42.372806 4861 scope.go:117] "RemoveContainer" containerID="1be093ad186bbee1bc493bd8b88a1b842461ed49a072eddfcaf1b3cfdfae04e4" Mar 10 19:56:42 crc kubenswrapper[4861]: I0310 19:56:42.405464 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-bxfr2"] Mar 10 19:56:42 crc kubenswrapper[4861]: I0310 19:56:42.415569 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-bxfr2"] Mar 10 19:56:42 crc kubenswrapper[4861]: I0310 19:56:42.423870 4861 scope.go:117] "RemoveContainer" containerID="979dd88e5a9644c28a3a25e9cdae2b0ffd582b0076c6e8778a32d7f793ead529" Mar 10 19:56:42 crc kubenswrapper[4861]: I0310 19:56:42.451870 4861 scope.go:117] "RemoveContainer" containerID="ca5704e69869017d3ea262ecb04b0a94ba4c7a794738d5a996da7cc6cdc610f6" Mar 10 19:56:42 crc kubenswrapper[4861]: E0310 19:56:42.453252 4861 log.go:32] "ContainerStatus from runtime 
service failed" err="rpc error: code = NotFound desc = could not find container \"ca5704e69869017d3ea262ecb04b0a94ba4c7a794738d5a996da7cc6cdc610f6\": container with ID starting with ca5704e69869017d3ea262ecb04b0a94ba4c7a794738d5a996da7cc6cdc610f6 not found: ID does not exist" containerID="ca5704e69869017d3ea262ecb04b0a94ba4c7a794738d5a996da7cc6cdc610f6" Mar 10 19:56:42 crc kubenswrapper[4861]: I0310 19:56:42.453410 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ca5704e69869017d3ea262ecb04b0a94ba4c7a794738d5a996da7cc6cdc610f6"} err="failed to get container status \"ca5704e69869017d3ea262ecb04b0a94ba4c7a794738d5a996da7cc6cdc610f6\": rpc error: code = NotFound desc = could not find container \"ca5704e69869017d3ea262ecb04b0a94ba4c7a794738d5a996da7cc6cdc610f6\": container with ID starting with ca5704e69869017d3ea262ecb04b0a94ba4c7a794738d5a996da7cc6cdc610f6 not found: ID does not exist" Mar 10 19:56:42 crc kubenswrapper[4861]: I0310 19:56:42.453549 4861 scope.go:117] "RemoveContainer" containerID="1be093ad186bbee1bc493bd8b88a1b842461ed49a072eddfcaf1b3cfdfae04e4" Mar 10 19:56:42 crc kubenswrapper[4861]: E0310 19:56:42.454142 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1be093ad186bbee1bc493bd8b88a1b842461ed49a072eddfcaf1b3cfdfae04e4\": container with ID starting with 1be093ad186bbee1bc493bd8b88a1b842461ed49a072eddfcaf1b3cfdfae04e4 not found: ID does not exist" containerID="1be093ad186bbee1bc493bd8b88a1b842461ed49a072eddfcaf1b3cfdfae04e4" Mar 10 19:56:42 crc kubenswrapper[4861]: I0310 19:56:42.454213 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1be093ad186bbee1bc493bd8b88a1b842461ed49a072eddfcaf1b3cfdfae04e4"} err="failed to get container status \"1be093ad186bbee1bc493bd8b88a1b842461ed49a072eddfcaf1b3cfdfae04e4\": rpc error: code = NotFound desc = could not find container 
\"1be093ad186bbee1bc493bd8b88a1b842461ed49a072eddfcaf1b3cfdfae04e4\": container with ID starting with 1be093ad186bbee1bc493bd8b88a1b842461ed49a072eddfcaf1b3cfdfae04e4 not found: ID does not exist" Mar 10 19:56:42 crc kubenswrapper[4861]: I0310 19:56:42.454255 4861 scope.go:117] "RemoveContainer" containerID="979dd88e5a9644c28a3a25e9cdae2b0ffd582b0076c6e8778a32d7f793ead529" Mar 10 19:56:42 crc kubenswrapper[4861]: E0310 19:56:42.454737 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"979dd88e5a9644c28a3a25e9cdae2b0ffd582b0076c6e8778a32d7f793ead529\": container with ID starting with 979dd88e5a9644c28a3a25e9cdae2b0ffd582b0076c6e8778a32d7f793ead529 not found: ID does not exist" containerID="979dd88e5a9644c28a3a25e9cdae2b0ffd582b0076c6e8778a32d7f793ead529" Mar 10 19:56:42 crc kubenswrapper[4861]: I0310 19:56:42.454962 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"979dd88e5a9644c28a3a25e9cdae2b0ffd582b0076c6e8778a32d7f793ead529"} err="failed to get container status \"979dd88e5a9644c28a3a25e9cdae2b0ffd582b0076c6e8778a32d7f793ead529\": rpc error: code = NotFound desc = could not find container \"979dd88e5a9644c28a3a25e9cdae2b0ffd582b0076c6e8778a32d7f793ead529\": container with ID starting with 979dd88e5a9644c28a3a25e9cdae2b0ffd582b0076c6e8778a32d7f793ead529 not found: ID does not exist" Mar 10 19:56:42 crc kubenswrapper[4861]: I0310 19:56:42.973670 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="66de1dc9-fa95-4aef-9090-63840d2e0a07" path="/var/lib/kubelet/pods/66de1dc9-fa95-4aef-9090-63840d2e0a07/volumes" Mar 10 19:56:50 crc kubenswrapper[4861]: I0310 19:56:50.956348 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-b9lt2" Mar 10 19:56:51 crc kubenswrapper[4861]: I0310 19:56:51.050138 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="ready" pod="openshift-marketplace/redhat-operators-b9lt2" Mar 10 19:56:51 crc kubenswrapper[4861]: I0310 19:56:51.209772 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-b9lt2"] Mar 10 19:56:52 crc kubenswrapper[4861]: I0310 19:56:52.435451 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-b9lt2" podUID="5d8454ef-69b5-4a76-92ec-2334f04500a9" containerName="registry-server" containerID="cri-o://66cb6a3fc392a3070d183949bd88f2d5ec9c1fa37f27314f5cc77e8375971722" gracePeriod=2 Mar 10 19:56:52 crc kubenswrapper[4861]: I0310 19:56:52.933688 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-b9lt2" Mar 10 19:56:52 crc kubenswrapper[4861]: I0310 19:56:52.958658 4861 scope.go:117] "RemoveContainer" containerID="c3ec1a2ec165344b8b1084cb5463b0d256bafd1af0a85ab3b540a201851e68d6" Mar 10 19:56:52 crc kubenswrapper[4861]: E0310 19:56:52.959060 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qttbr_openshift-machine-config-operator(771189c2-452d-4204-a0b7-abfe9ba62bd0)\"" pod="openshift-machine-config-operator/machine-config-daemon-qttbr" podUID="771189c2-452d-4204-a0b7-abfe9ba62bd0" Mar 10 19:56:53 crc kubenswrapper[4861]: I0310 19:56:53.032183 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5d8454ef-69b5-4a76-92ec-2334f04500a9-utilities\") pod \"5d8454ef-69b5-4a76-92ec-2334f04500a9\" (UID: \"5d8454ef-69b5-4a76-92ec-2334f04500a9\") " Mar 10 19:56:53 crc kubenswrapper[4861]: I0310 19:56:53.032334 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t5ckg\" 
(UniqueName: \"kubernetes.io/projected/5d8454ef-69b5-4a76-92ec-2334f04500a9-kube-api-access-t5ckg\") pod \"5d8454ef-69b5-4a76-92ec-2334f04500a9\" (UID: \"5d8454ef-69b5-4a76-92ec-2334f04500a9\") " Mar 10 19:56:53 crc kubenswrapper[4861]: I0310 19:56:53.032503 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5d8454ef-69b5-4a76-92ec-2334f04500a9-catalog-content\") pod \"5d8454ef-69b5-4a76-92ec-2334f04500a9\" (UID: \"5d8454ef-69b5-4a76-92ec-2334f04500a9\") " Mar 10 19:56:53 crc kubenswrapper[4861]: I0310 19:56:53.033907 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5d8454ef-69b5-4a76-92ec-2334f04500a9-utilities" (OuterVolumeSpecName: "utilities") pod "5d8454ef-69b5-4a76-92ec-2334f04500a9" (UID: "5d8454ef-69b5-4a76-92ec-2334f04500a9"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 10 19:56:53 crc kubenswrapper[4861]: I0310 19:56:53.044899 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5d8454ef-69b5-4a76-92ec-2334f04500a9-kube-api-access-t5ckg" (OuterVolumeSpecName: "kube-api-access-t5ckg") pod "5d8454ef-69b5-4a76-92ec-2334f04500a9" (UID: "5d8454ef-69b5-4a76-92ec-2334f04500a9"). InnerVolumeSpecName "kube-api-access-t5ckg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 10 19:56:53 crc kubenswrapper[4861]: I0310 19:56:53.134378 4861 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5d8454ef-69b5-4a76-92ec-2334f04500a9-utilities\") on node \"crc\" DevicePath \"\"" Mar 10 19:56:53 crc kubenswrapper[4861]: I0310 19:56:53.134432 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t5ckg\" (UniqueName: \"kubernetes.io/projected/5d8454ef-69b5-4a76-92ec-2334f04500a9-kube-api-access-t5ckg\") on node \"crc\" DevicePath \"\"" Mar 10 19:56:53 crc kubenswrapper[4861]: I0310 19:56:53.170058 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5d8454ef-69b5-4a76-92ec-2334f04500a9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5d8454ef-69b5-4a76-92ec-2334f04500a9" (UID: "5d8454ef-69b5-4a76-92ec-2334f04500a9"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 10 19:56:53 crc kubenswrapper[4861]: I0310 19:56:53.236739 4861 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5d8454ef-69b5-4a76-92ec-2334f04500a9-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 10 19:56:53 crc kubenswrapper[4861]: I0310 19:56:53.447414 4861 generic.go:334] "Generic (PLEG): container finished" podID="5d8454ef-69b5-4a76-92ec-2334f04500a9" containerID="66cb6a3fc392a3070d183949bd88f2d5ec9c1fa37f27314f5cc77e8375971722" exitCode=0 Mar 10 19:56:53 crc kubenswrapper[4861]: I0310 19:56:53.447484 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-b9lt2" event={"ID":"5d8454ef-69b5-4a76-92ec-2334f04500a9","Type":"ContainerDied","Data":"66cb6a3fc392a3070d183949bd88f2d5ec9c1fa37f27314f5cc77e8375971722"} Mar 10 19:56:53 crc kubenswrapper[4861]: I0310 19:56:53.447527 4861 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/redhat-operators-b9lt2" event={"ID":"5d8454ef-69b5-4a76-92ec-2334f04500a9","Type":"ContainerDied","Data":"37e56b1b58cbe028549f3be07aca639ef626139249457c52c7d5542a276f22be"} Mar 10 19:56:53 crc kubenswrapper[4861]: I0310 19:56:53.447563 4861 scope.go:117] "RemoveContainer" containerID="66cb6a3fc392a3070d183949bd88f2d5ec9c1fa37f27314f5cc77e8375971722" Mar 10 19:56:53 crc kubenswrapper[4861]: I0310 19:56:53.447804 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-b9lt2" Mar 10 19:56:53 crc kubenswrapper[4861]: I0310 19:56:53.474601 4861 scope.go:117] "RemoveContainer" containerID="f8618883abc0a526020efb460476b83a6c9bab327fb2dba945a05c762b1f5f4c" Mar 10 19:56:53 crc kubenswrapper[4861]: I0310 19:56:53.506958 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-b9lt2"] Mar 10 19:56:53 crc kubenswrapper[4861]: I0310 19:56:53.514476 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-b9lt2"] Mar 10 19:56:53 crc kubenswrapper[4861]: I0310 19:56:53.520052 4861 scope.go:117] "RemoveContainer" containerID="c1d44c9f5865d6f11f2b54bae7013df45f84b2cfe8e94660ced8b8e046702728" Mar 10 19:56:53 crc kubenswrapper[4861]: I0310 19:56:53.558811 4861 scope.go:117] "RemoveContainer" containerID="66cb6a3fc392a3070d183949bd88f2d5ec9c1fa37f27314f5cc77e8375971722" Mar 10 19:56:53 crc kubenswrapper[4861]: E0310 19:56:53.559329 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"66cb6a3fc392a3070d183949bd88f2d5ec9c1fa37f27314f5cc77e8375971722\": container with ID starting with 66cb6a3fc392a3070d183949bd88f2d5ec9c1fa37f27314f5cc77e8375971722 not found: ID does not exist" containerID="66cb6a3fc392a3070d183949bd88f2d5ec9c1fa37f27314f5cc77e8375971722" Mar 10 19:56:53 crc kubenswrapper[4861]: I0310 19:56:53.559393 4861 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"66cb6a3fc392a3070d183949bd88f2d5ec9c1fa37f27314f5cc77e8375971722"} err="failed to get container status \"66cb6a3fc392a3070d183949bd88f2d5ec9c1fa37f27314f5cc77e8375971722\": rpc error: code = NotFound desc = could not find container \"66cb6a3fc392a3070d183949bd88f2d5ec9c1fa37f27314f5cc77e8375971722\": container with ID starting with 66cb6a3fc392a3070d183949bd88f2d5ec9c1fa37f27314f5cc77e8375971722 not found: ID does not exist" Mar 10 19:56:53 crc kubenswrapper[4861]: I0310 19:56:53.559435 4861 scope.go:117] "RemoveContainer" containerID="f8618883abc0a526020efb460476b83a6c9bab327fb2dba945a05c762b1f5f4c" Mar 10 19:56:53 crc kubenswrapper[4861]: E0310 19:56:53.560042 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f8618883abc0a526020efb460476b83a6c9bab327fb2dba945a05c762b1f5f4c\": container with ID starting with f8618883abc0a526020efb460476b83a6c9bab327fb2dba945a05c762b1f5f4c not found: ID does not exist" containerID="f8618883abc0a526020efb460476b83a6c9bab327fb2dba945a05c762b1f5f4c" Mar 10 19:56:53 crc kubenswrapper[4861]: I0310 19:56:53.560079 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f8618883abc0a526020efb460476b83a6c9bab327fb2dba945a05c762b1f5f4c"} err="failed to get container status \"f8618883abc0a526020efb460476b83a6c9bab327fb2dba945a05c762b1f5f4c\": rpc error: code = NotFound desc = could not find container \"f8618883abc0a526020efb460476b83a6c9bab327fb2dba945a05c762b1f5f4c\": container with ID starting with f8618883abc0a526020efb460476b83a6c9bab327fb2dba945a05c762b1f5f4c not found: ID does not exist" Mar 10 19:56:53 crc kubenswrapper[4861]: I0310 19:56:53.560114 4861 scope.go:117] "RemoveContainer" containerID="c1d44c9f5865d6f11f2b54bae7013df45f84b2cfe8e94660ced8b8e046702728" Mar 10 19:56:53 crc kubenswrapper[4861]: E0310 
19:56:53.560693 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c1d44c9f5865d6f11f2b54bae7013df45f84b2cfe8e94660ced8b8e046702728\": container with ID starting with c1d44c9f5865d6f11f2b54bae7013df45f84b2cfe8e94660ced8b8e046702728 not found: ID does not exist" containerID="c1d44c9f5865d6f11f2b54bae7013df45f84b2cfe8e94660ced8b8e046702728" Mar 10 19:56:53 crc kubenswrapper[4861]: I0310 19:56:53.560784 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c1d44c9f5865d6f11f2b54bae7013df45f84b2cfe8e94660ced8b8e046702728"} err="failed to get container status \"c1d44c9f5865d6f11f2b54bae7013df45f84b2cfe8e94660ced8b8e046702728\": rpc error: code = NotFound desc = could not find container \"c1d44c9f5865d6f11f2b54bae7013df45f84b2cfe8e94660ced8b8e046702728\": container with ID starting with c1d44c9f5865d6f11f2b54bae7013df45f84b2cfe8e94660ced8b8e046702728 not found: ID does not exist" Mar 10 19:56:54 crc kubenswrapper[4861]: I0310 19:56:54.972001 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5d8454ef-69b5-4a76-92ec-2334f04500a9" path="/var/lib/kubelet/pods/5d8454ef-69b5-4a76-92ec-2334f04500a9/volumes" Mar 10 19:56:56 crc kubenswrapper[4861]: I0310 19:56:56.019994 4861 scope.go:117] "RemoveContainer" containerID="0c538a4c7f039341c2b1dfa08002c54f10c334d50aab4fb180bd3e37a48e4dab" Mar 10 19:57:07 crc kubenswrapper[4861]: I0310 19:57:07.958674 4861 scope.go:117] "RemoveContainer" containerID="c3ec1a2ec165344b8b1084cb5463b0d256bafd1af0a85ab3b540a201851e68d6" Mar 10 19:57:07 crc kubenswrapper[4861]: E0310 19:57:07.959882 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qttbr_openshift-machine-config-operator(771189c2-452d-4204-a0b7-abfe9ba62bd0)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-qttbr" podUID="771189c2-452d-4204-a0b7-abfe9ba62bd0" Mar 10 19:57:20 crc kubenswrapper[4861]: I0310 19:57:20.958901 4861 scope.go:117] "RemoveContainer" containerID="c3ec1a2ec165344b8b1084cb5463b0d256bafd1af0a85ab3b540a201851e68d6" Mar 10 19:57:20 crc kubenswrapper[4861]: E0310 19:57:20.961538 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qttbr_openshift-machine-config-operator(771189c2-452d-4204-a0b7-abfe9ba62bd0)\"" pod="openshift-machine-config-operator/machine-config-daemon-qttbr" podUID="771189c2-452d-4204-a0b7-abfe9ba62bd0" Mar 10 19:57:21 crc kubenswrapper[4861]: I0310 19:57:21.952472 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-jsjjx"] Mar 10 19:57:21 crc kubenswrapper[4861]: E0310 19:57:21.952898 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="66de1dc9-fa95-4aef-9090-63840d2e0a07" containerName="registry-server" Mar 10 19:57:21 crc kubenswrapper[4861]: I0310 19:57:21.952928 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="66de1dc9-fa95-4aef-9090-63840d2e0a07" containerName="registry-server" Mar 10 19:57:21 crc kubenswrapper[4861]: E0310 19:57:21.952954 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5d8454ef-69b5-4a76-92ec-2334f04500a9" containerName="registry-server" Mar 10 19:57:21 crc kubenswrapper[4861]: I0310 19:57:21.952967 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="5d8454ef-69b5-4a76-92ec-2334f04500a9" containerName="registry-server" Mar 10 19:57:21 crc kubenswrapper[4861]: E0310 19:57:21.952986 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5d8454ef-69b5-4a76-92ec-2334f04500a9" containerName="extract-utilities" Mar 10 19:57:21 crc kubenswrapper[4861]: 
I0310 19:57:21.953000 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="5d8454ef-69b5-4a76-92ec-2334f04500a9" containerName="extract-utilities" Mar 10 19:57:21 crc kubenswrapper[4861]: E0310 19:57:21.953022 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="66de1dc9-fa95-4aef-9090-63840d2e0a07" containerName="extract-content" Mar 10 19:57:21 crc kubenswrapper[4861]: I0310 19:57:21.953034 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="66de1dc9-fa95-4aef-9090-63840d2e0a07" containerName="extract-content" Mar 10 19:57:21 crc kubenswrapper[4861]: E0310 19:57:21.953059 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5d8454ef-69b5-4a76-92ec-2334f04500a9" containerName="extract-content" Mar 10 19:57:21 crc kubenswrapper[4861]: I0310 19:57:21.953071 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="5d8454ef-69b5-4a76-92ec-2334f04500a9" containerName="extract-content" Mar 10 19:57:21 crc kubenswrapper[4861]: E0310 19:57:21.953096 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="66de1dc9-fa95-4aef-9090-63840d2e0a07" containerName="extract-utilities" Mar 10 19:57:21 crc kubenswrapper[4861]: I0310 19:57:21.953108 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="66de1dc9-fa95-4aef-9090-63840d2e0a07" containerName="extract-utilities" Mar 10 19:57:21 crc kubenswrapper[4861]: I0310 19:57:21.953341 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="5d8454ef-69b5-4a76-92ec-2334f04500a9" containerName="registry-server" Mar 10 19:57:21 crc kubenswrapper[4861]: I0310 19:57:21.953373 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="66de1dc9-fa95-4aef-9090-63840d2e0a07" containerName="registry-server" Mar 10 19:57:21 crc kubenswrapper[4861]: I0310 19:57:21.955045 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jsjjx" Mar 10 19:57:22 crc kubenswrapper[4861]: I0310 19:57:22.008829 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-jsjjx"] Mar 10 19:57:22 crc kubenswrapper[4861]: I0310 19:57:22.113455 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t4m5r\" (UniqueName: \"kubernetes.io/projected/3e8aaddb-8e1f-4c8a-9169-f2a38bbff0ce-kube-api-access-t4m5r\") pod \"redhat-marketplace-jsjjx\" (UID: \"3e8aaddb-8e1f-4c8a-9169-f2a38bbff0ce\") " pod="openshift-marketplace/redhat-marketplace-jsjjx" Mar 10 19:57:22 crc kubenswrapper[4861]: I0310 19:57:22.113563 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3e8aaddb-8e1f-4c8a-9169-f2a38bbff0ce-catalog-content\") pod \"redhat-marketplace-jsjjx\" (UID: \"3e8aaddb-8e1f-4c8a-9169-f2a38bbff0ce\") " pod="openshift-marketplace/redhat-marketplace-jsjjx" Mar 10 19:57:22 crc kubenswrapper[4861]: I0310 19:57:22.113621 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3e8aaddb-8e1f-4c8a-9169-f2a38bbff0ce-utilities\") pod \"redhat-marketplace-jsjjx\" (UID: \"3e8aaddb-8e1f-4c8a-9169-f2a38bbff0ce\") " pod="openshift-marketplace/redhat-marketplace-jsjjx" Mar 10 19:57:22 crc kubenswrapper[4861]: I0310 19:57:22.217143 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t4m5r\" (UniqueName: \"kubernetes.io/projected/3e8aaddb-8e1f-4c8a-9169-f2a38bbff0ce-kube-api-access-t4m5r\") pod \"redhat-marketplace-jsjjx\" (UID: \"3e8aaddb-8e1f-4c8a-9169-f2a38bbff0ce\") " pod="openshift-marketplace/redhat-marketplace-jsjjx" Mar 10 19:57:22 crc kubenswrapper[4861]: I0310 19:57:22.217579 4861 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3e8aaddb-8e1f-4c8a-9169-f2a38bbff0ce-catalog-content\") pod \"redhat-marketplace-jsjjx\" (UID: \"3e8aaddb-8e1f-4c8a-9169-f2a38bbff0ce\") " pod="openshift-marketplace/redhat-marketplace-jsjjx" Mar 10 19:57:22 crc kubenswrapper[4861]: I0310 19:57:22.217648 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3e8aaddb-8e1f-4c8a-9169-f2a38bbff0ce-utilities\") pod \"redhat-marketplace-jsjjx\" (UID: \"3e8aaddb-8e1f-4c8a-9169-f2a38bbff0ce\") " pod="openshift-marketplace/redhat-marketplace-jsjjx" Mar 10 19:57:22 crc kubenswrapper[4861]: I0310 19:57:22.218235 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3e8aaddb-8e1f-4c8a-9169-f2a38bbff0ce-catalog-content\") pod \"redhat-marketplace-jsjjx\" (UID: \"3e8aaddb-8e1f-4c8a-9169-f2a38bbff0ce\") " pod="openshift-marketplace/redhat-marketplace-jsjjx" Mar 10 19:57:22 crc kubenswrapper[4861]: I0310 19:57:22.218461 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3e8aaddb-8e1f-4c8a-9169-f2a38bbff0ce-utilities\") pod \"redhat-marketplace-jsjjx\" (UID: \"3e8aaddb-8e1f-4c8a-9169-f2a38bbff0ce\") " pod="openshift-marketplace/redhat-marketplace-jsjjx" Mar 10 19:57:22 crc kubenswrapper[4861]: I0310 19:57:22.242508 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t4m5r\" (UniqueName: \"kubernetes.io/projected/3e8aaddb-8e1f-4c8a-9169-f2a38bbff0ce-kube-api-access-t4m5r\") pod \"redhat-marketplace-jsjjx\" (UID: \"3e8aaddb-8e1f-4c8a-9169-f2a38bbff0ce\") " pod="openshift-marketplace/redhat-marketplace-jsjjx" Mar 10 19:57:22 crc kubenswrapper[4861]: I0310 19:57:22.301153 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jsjjx" Mar 10 19:57:22 crc kubenswrapper[4861]: I0310 19:57:22.783137 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-jsjjx"] Mar 10 19:57:23 crc kubenswrapper[4861]: I0310 19:57:23.710433 4861 generic.go:334] "Generic (PLEG): container finished" podID="3e8aaddb-8e1f-4c8a-9169-f2a38bbff0ce" containerID="7d74a4522056d3fa5ff72c76e2b6a1b5f103531faf5454005edc7dbe526e8775" exitCode=0 Mar 10 19:57:23 crc kubenswrapper[4861]: I0310 19:57:23.710506 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jsjjx" event={"ID":"3e8aaddb-8e1f-4c8a-9169-f2a38bbff0ce","Type":"ContainerDied","Data":"7d74a4522056d3fa5ff72c76e2b6a1b5f103531faf5454005edc7dbe526e8775"} Mar 10 19:57:23 crc kubenswrapper[4861]: I0310 19:57:23.710618 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jsjjx" event={"ID":"3e8aaddb-8e1f-4c8a-9169-f2a38bbff0ce","Type":"ContainerStarted","Data":"a8a7c988cc51f6d111c43fa8410d36d240d47053e776bd6616f8b1e7b0aceb60"} Mar 10 19:57:25 crc kubenswrapper[4861]: I0310 19:57:25.735126 4861 generic.go:334] "Generic (PLEG): container finished" podID="3e8aaddb-8e1f-4c8a-9169-f2a38bbff0ce" containerID="1be4039e6dc36c16eed383bf503155ce483f46650fa73ee6d4a6cd8191a4dfe7" exitCode=0 Mar 10 19:57:25 crc kubenswrapper[4861]: I0310 19:57:25.735239 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jsjjx" event={"ID":"3e8aaddb-8e1f-4c8a-9169-f2a38bbff0ce","Type":"ContainerDied","Data":"1be4039e6dc36c16eed383bf503155ce483f46650fa73ee6d4a6cd8191a4dfe7"} Mar 10 19:57:26 crc kubenswrapper[4861]: I0310 19:57:26.749115 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jsjjx" 
event={"ID":"3e8aaddb-8e1f-4c8a-9169-f2a38bbff0ce","Type":"ContainerStarted","Data":"d9f360fe457ea3e61654ce1389b1951ee41bca1cd3f53fd7a36c323e7a80a4d5"} Mar 10 19:57:26 crc kubenswrapper[4861]: I0310 19:57:26.779925 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-jsjjx" podStartSLOduration=3.345133451 podStartE2EDuration="5.779895341s" podCreationTimestamp="2026-03-10 19:57:21 +0000 UTC" firstStartedPulling="2026-03-10 19:57:23.713629029 +0000 UTC m=+4187.477064999" lastFinishedPulling="2026-03-10 19:57:26.148390899 +0000 UTC m=+4189.911826889" observedRunningTime="2026-03-10 19:57:26.776404546 +0000 UTC m=+4190.539840576" watchObservedRunningTime="2026-03-10 19:57:26.779895341 +0000 UTC m=+4190.543331351" Mar 10 19:57:32 crc kubenswrapper[4861]: I0310 19:57:32.301957 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-jsjjx" Mar 10 19:57:32 crc kubenswrapper[4861]: I0310 19:57:32.306012 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-jsjjx" Mar 10 19:57:32 crc kubenswrapper[4861]: I0310 19:57:32.488405 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-jsjjx" Mar 10 19:57:32 crc kubenswrapper[4861]: I0310 19:57:32.881859 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-jsjjx" Mar 10 19:57:32 crc kubenswrapper[4861]: I0310 19:57:32.954312 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-jsjjx"] Mar 10 19:57:34 crc kubenswrapper[4861]: I0310 19:57:34.823368 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-jsjjx" podUID="3e8aaddb-8e1f-4c8a-9169-f2a38bbff0ce" containerName="registry-server" 
containerID="cri-o://d9f360fe457ea3e61654ce1389b1951ee41bca1cd3f53fd7a36c323e7a80a4d5" gracePeriod=2 Mar 10 19:57:35 crc kubenswrapper[4861]: I0310 19:57:35.339231 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jsjjx" Mar 10 19:57:35 crc kubenswrapper[4861]: I0310 19:57:35.443658 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3e8aaddb-8e1f-4c8a-9169-f2a38bbff0ce-utilities\") pod \"3e8aaddb-8e1f-4c8a-9169-f2a38bbff0ce\" (UID: \"3e8aaddb-8e1f-4c8a-9169-f2a38bbff0ce\") " Mar 10 19:57:35 crc kubenswrapper[4861]: I0310 19:57:35.443785 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3e8aaddb-8e1f-4c8a-9169-f2a38bbff0ce-catalog-content\") pod \"3e8aaddb-8e1f-4c8a-9169-f2a38bbff0ce\" (UID: \"3e8aaddb-8e1f-4c8a-9169-f2a38bbff0ce\") " Mar 10 19:57:35 crc kubenswrapper[4861]: I0310 19:57:35.443917 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t4m5r\" (UniqueName: \"kubernetes.io/projected/3e8aaddb-8e1f-4c8a-9169-f2a38bbff0ce-kube-api-access-t4m5r\") pod \"3e8aaddb-8e1f-4c8a-9169-f2a38bbff0ce\" (UID: \"3e8aaddb-8e1f-4c8a-9169-f2a38bbff0ce\") " Mar 10 19:57:35 crc kubenswrapper[4861]: I0310 19:57:35.444861 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3e8aaddb-8e1f-4c8a-9169-f2a38bbff0ce-utilities" (OuterVolumeSpecName: "utilities") pod "3e8aaddb-8e1f-4c8a-9169-f2a38bbff0ce" (UID: "3e8aaddb-8e1f-4c8a-9169-f2a38bbff0ce"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 10 19:57:35 crc kubenswrapper[4861]: I0310 19:57:35.449482 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3e8aaddb-8e1f-4c8a-9169-f2a38bbff0ce-kube-api-access-t4m5r" (OuterVolumeSpecName: "kube-api-access-t4m5r") pod "3e8aaddb-8e1f-4c8a-9169-f2a38bbff0ce" (UID: "3e8aaddb-8e1f-4c8a-9169-f2a38bbff0ce"). InnerVolumeSpecName "kube-api-access-t4m5r". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 10 19:57:35 crc kubenswrapper[4861]: I0310 19:57:35.545536 4861 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3e8aaddb-8e1f-4c8a-9169-f2a38bbff0ce-utilities\") on node \"crc\" DevicePath \"\"" Mar 10 19:57:35 crc kubenswrapper[4861]: I0310 19:57:35.545592 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t4m5r\" (UniqueName: \"kubernetes.io/projected/3e8aaddb-8e1f-4c8a-9169-f2a38bbff0ce-kube-api-access-t4m5r\") on node \"crc\" DevicePath \"\"" Mar 10 19:57:35 crc kubenswrapper[4861]: I0310 19:57:35.630759 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3e8aaddb-8e1f-4c8a-9169-f2a38bbff0ce-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3e8aaddb-8e1f-4c8a-9169-f2a38bbff0ce" (UID: "3e8aaddb-8e1f-4c8a-9169-f2a38bbff0ce"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 10 19:57:35 crc kubenswrapper[4861]: I0310 19:57:35.647404 4861 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3e8aaddb-8e1f-4c8a-9169-f2a38bbff0ce-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 10 19:57:35 crc kubenswrapper[4861]: I0310 19:57:35.833882 4861 generic.go:334] "Generic (PLEG): container finished" podID="3e8aaddb-8e1f-4c8a-9169-f2a38bbff0ce" containerID="d9f360fe457ea3e61654ce1389b1951ee41bca1cd3f53fd7a36c323e7a80a4d5" exitCode=0 Mar 10 19:57:35 crc kubenswrapper[4861]: I0310 19:57:35.833941 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jsjjx" event={"ID":"3e8aaddb-8e1f-4c8a-9169-f2a38bbff0ce","Type":"ContainerDied","Data":"d9f360fe457ea3e61654ce1389b1951ee41bca1cd3f53fd7a36c323e7a80a4d5"} Mar 10 19:57:35 crc kubenswrapper[4861]: I0310 19:57:35.834004 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jsjjx" Mar 10 19:57:35 crc kubenswrapper[4861]: I0310 19:57:35.834036 4861 scope.go:117] "RemoveContainer" containerID="d9f360fe457ea3e61654ce1389b1951ee41bca1cd3f53fd7a36c323e7a80a4d5" Mar 10 19:57:35 crc kubenswrapper[4861]: I0310 19:57:35.834017 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jsjjx" event={"ID":"3e8aaddb-8e1f-4c8a-9169-f2a38bbff0ce","Type":"ContainerDied","Data":"a8a7c988cc51f6d111c43fa8410d36d240d47053e776bd6616f8b1e7b0aceb60"} Mar 10 19:57:35 crc kubenswrapper[4861]: I0310 19:57:35.870539 4861 scope.go:117] "RemoveContainer" containerID="1be4039e6dc36c16eed383bf503155ce483f46650fa73ee6d4a6cd8191a4dfe7" Mar 10 19:57:35 crc kubenswrapper[4861]: I0310 19:57:35.900031 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-jsjjx"] Mar 10 19:57:35 crc kubenswrapper[4861]: I0310 19:57:35.907798 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-jsjjx"] Mar 10 19:57:35 crc kubenswrapper[4861]: I0310 19:57:35.941329 4861 scope.go:117] "RemoveContainer" containerID="7d74a4522056d3fa5ff72c76e2b6a1b5f103531faf5454005edc7dbe526e8775" Mar 10 19:57:35 crc kubenswrapper[4861]: I0310 19:57:35.958934 4861 scope.go:117] "RemoveContainer" containerID="c3ec1a2ec165344b8b1084cb5463b0d256bafd1af0a85ab3b540a201851e68d6" Mar 10 19:57:35 crc kubenswrapper[4861]: E0310 19:57:35.959365 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qttbr_openshift-machine-config-operator(771189c2-452d-4204-a0b7-abfe9ba62bd0)\"" pod="openshift-machine-config-operator/machine-config-daemon-qttbr" podUID="771189c2-452d-4204-a0b7-abfe9ba62bd0" Mar 10 19:57:35 crc kubenswrapper[4861]: I0310 
19:57:35.973484 4861 scope.go:117] "RemoveContainer" containerID="d9f360fe457ea3e61654ce1389b1951ee41bca1cd3f53fd7a36c323e7a80a4d5" Mar 10 19:57:35 crc kubenswrapper[4861]: E0310 19:57:35.974228 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d9f360fe457ea3e61654ce1389b1951ee41bca1cd3f53fd7a36c323e7a80a4d5\": container with ID starting with d9f360fe457ea3e61654ce1389b1951ee41bca1cd3f53fd7a36c323e7a80a4d5 not found: ID does not exist" containerID="d9f360fe457ea3e61654ce1389b1951ee41bca1cd3f53fd7a36c323e7a80a4d5" Mar 10 19:57:35 crc kubenswrapper[4861]: I0310 19:57:35.974276 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d9f360fe457ea3e61654ce1389b1951ee41bca1cd3f53fd7a36c323e7a80a4d5"} err="failed to get container status \"d9f360fe457ea3e61654ce1389b1951ee41bca1cd3f53fd7a36c323e7a80a4d5\": rpc error: code = NotFound desc = could not find container \"d9f360fe457ea3e61654ce1389b1951ee41bca1cd3f53fd7a36c323e7a80a4d5\": container with ID starting with d9f360fe457ea3e61654ce1389b1951ee41bca1cd3f53fd7a36c323e7a80a4d5 not found: ID does not exist" Mar 10 19:57:35 crc kubenswrapper[4861]: I0310 19:57:35.974308 4861 scope.go:117] "RemoveContainer" containerID="1be4039e6dc36c16eed383bf503155ce483f46650fa73ee6d4a6cd8191a4dfe7" Mar 10 19:57:35 crc kubenswrapper[4861]: E0310 19:57:35.975082 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1be4039e6dc36c16eed383bf503155ce483f46650fa73ee6d4a6cd8191a4dfe7\": container with ID starting with 1be4039e6dc36c16eed383bf503155ce483f46650fa73ee6d4a6cd8191a4dfe7 not found: ID does not exist" containerID="1be4039e6dc36c16eed383bf503155ce483f46650fa73ee6d4a6cd8191a4dfe7" Mar 10 19:57:35 crc kubenswrapper[4861]: I0310 19:57:35.975227 4861 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"1be4039e6dc36c16eed383bf503155ce483f46650fa73ee6d4a6cd8191a4dfe7"} err="failed to get container status \"1be4039e6dc36c16eed383bf503155ce483f46650fa73ee6d4a6cd8191a4dfe7\": rpc error: code = NotFound desc = could not find container \"1be4039e6dc36c16eed383bf503155ce483f46650fa73ee6d4a6cd8191a4dfe7\": container with ID starting with 1be4039e6dc36c16eed383bf503155ce483f46650fa73ee6d4a6cd8191a4dfe7 not found: ID does not exist" Mar 10 19:57:35 crc kubenswrapper[4861]: I0310 19:57:35.975341 4861 scope.go:117] "RemoveContainer" containerID="7d74a4522056d3fa5ff72c76e2b6a1b5f103531faf5454005edc7dbe526e8775" Mar 10 19:57:35 crc kubenswrapper[4861]: E0310 19:57:35.975888 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7d74a4522056d3fa5ff72c76e2b6a1b5f103531faf5454005edc7dbe526e8775\": container with ID starting with 7d74a4522056d3fa5ff72c76e2b6a1b5f103531faf5454005edc7dbe526e8775 not found: ID does not exist" containerID="7d74a4522056d3fa5ff72c76e2b6a1b5f103531faf5454005edc7dbe526e8775" Mar 10 19:57:35 crc kubenswrapper[4861]: I0310 19:57:35.976003 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7d74a4522056d3fa5ff72c76e2b6a1b5f103531faf5454005edc7dbe526e8775"} err="failed to get container status \"7d74a4522056d3fa5ff72c76e2b6a1b5f103531faf5454005edc7dbe526e8775\": rpc error: code = NotFound desc = could not find container \"7d74a4522056d3fa5ff72c76e2b6a1b5f103531faf5454005edc7dbe526e8775\": container with ID starting with 7d74a4522056d3fa5ff72c76e2b6a1b5f103531faf5454005edc7dbe526e8775 not found: ID does not exist" Mar 10 19:57:36 crc kubenswrapper[4861]: I0310 19:57:36.996212 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3e8aaddb-8e1f-4c8a-9169-f2a38bbff0ce" path="/var/lib/kubelet/pods/3e8aaddb-8e1f-4c8a-9169-f2a38bbff0ce/volumes" Mar 10 19:57:49 crc kubenswrapper[4861]: I0310 
19:57:49.958433 4861 scope.go:117] "RemoveContainer" containerID="c3ec1a2ec165344b8b1084cb5463b0d256bafd1af0a85ab3b540a201851e68d6" Mar 10 19:57:49 crc kubenswrapper[4861]: E0310 19:57:49.959297 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qttbr_openshift-machine-config-operator(771189c2-452d-4204-a0b7-abfe9ba62bd0)\"" pod="openshift-machine-config-operator/machine-config-daemon-qttbr" podUID="771189c2-452d-4204-a0b7-abfe9ba62bd0" Mar 10 19:58:00 crc kubenswrapper[4861]: I0310 19:58:00.162472 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29552878-jrhbh"] Mar 10 19:58:00 crc kubenswrapper[4861]: E0310 19:58:00.163513 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3e8aaddb-8e1f-4c8a-9169-f2a38bbff0ce" containerName="registry-server" Mar 10 19:58:00 crc kubenswrapper[4861]: I0310 19:58:00.163534 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="3e8aaddb-8e1f-4c8a-9169-f2a38bbff0ce" containerName="registry-server" Mar 10 19:58:00 crc kubenswrapper[4861]: E0310 19:58:00.163565 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3e8aaddb-8e1f-4c8a-9169-f2a38bbff0ce" containerName="extract-content" Mar 10 19:58:00 crc kubenswrapper[4861]: I0310 19:58:00.163578 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="3e8aaddb-8e1f-4c8a-9169-f2a38bbff0ce" containerName="extract-content" Mar 10 19:58:00 crc kubenswrapper[4861]: E0310 19:58:00.163609 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3e8aaddb-8e1f-4c8a-9169-f2a38bbff0ce" containerName="extract-utilities" Mar 10 19:58:00 crc kubenswrapper[4861]: I0310 19:58:00.163622 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="3e8aaddb-8e1f-4c8a-9169-f2a38bbff0ce" containerName="extract-utilities" Mar 10 
19:58:00 crc kubenswrapper[4861]: I0310 19:58:00.163889 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="3e8aaddb-8e1f-4c8a-9169-f2a38bbff0ce" containerName="registry-server" Mar 10 19:58:00 crc kubenswrapper[4861]: I0310 19:58:00.164693 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29552878-jrhbh" Mar 10 19:58:00 crc kubenswrapper[4861]: I0310 19:58:00.168628 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-gfbj2" Mar 10 19:58:00 crc kubenswrapper[4861]: I0310 19:58:00.168665 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 10 19:58:00 crc kubenswrapper[4861]: I0310 19:58:00.175995 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 10 19:58:00 crc kubenswrapper[4861]: I0310 19:58:00.176572 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29552878-jrhbh"] Mar 10 19:58:00 crc kubenswrapper[4861]: I0310 19:58:00.256301 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mtsv6\" (UniqueName: \"kubernetes.io/projected/ef6dfbb2-943a-4b05-8aad-b0f4ee2e4d70-kube-api-access-mtsv6\") pod \"auto-csr-approver-29552878-jrhbh\" (UID: \"ef6dfbb2-943a-4b05-8aad-b0f4ee2e4d70\") " pod="openshift-infra/auto-csr-approver-29552878-jrhbh" Mar 10 19:58:00 crc kubenswrapper[4861]: I0310 19:58:00.357791 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mtsv6\" (UniqueName: \"kubernetes.io/projected/ef6dfbb2-943a-4b05-8aad-b0f4ee2e4d70-kube-api-access-mtsv6\") pod \"auto-csr-approver-29552878-jrhbh\" (UID: \"ef6dfbb2-943a-4b05-8aad-b0f4ee2e4d70\") " pod="openshift-infra/auto-csr-approver-29552878-jrhbh" Mar 10 19:58:00 crc kubenswrapper[4861]: I0310 
19:58:00.396581 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mtsv6\" (UniqueName: \"kubernetes.io/projected/ef6dfbb2-943a-4b05-8aad-b0f4ee2e4d70-kube-api-access-mtsv6\") pod \"auto-csr-approver-29552878-jrhbh\" (UID: \"ef6dfbb2-943a-4b05-8aad-b0f4ee2e4d70\") " pod="openshift-infra/auto-csr-approver-29552878-jrhbh" Mar 10 19:58:00 crc kubenswrapper[4861]: I0310 19:58:00.499224 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29552878-jrhbh" Mar 10 19:58:00 crc kubenswrapper[4861]: I0310 19:58:00.806551 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29552878-jrhbh"] Mar 10 19:58:01 crc kubenswrapper[4861]: I0310 19:58:01.105852 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29552878-jrhbh" event={"ID":"ef6dfbb2-943a-4b05-8aad-b0f4ee2e4d70","Type":"ContainerStarted","Data":"88b5d3c666cc9aeecdd15e7814120697b4adb8c60d7b9ad604a508d70e8f4c8b"} Mar 10 19:58:04 crc kubenswrapper[4861]: I0310 19:58:04.136816 4861 generic.go:334] "Generic (PLEG): container finished" podID="ef6dfbb2-943a-4b05-8aad-b0f4ee2e4d70" containerID="a46ff1bd27bd4f2916fd340f4943cdac6f2751b721ae5bac5e04b069f96754ae" exitCode=0 Mar 10 19:58:04 crc kubenswrapper[4861]: I0310 19:58:04.136908 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29552878-jrhbh" event={"ID":"ef6dfbb2-943a-4b05-8aad-b0f4ee2e4d70","Type":"ContainerDied","Data":"a46ff1bd27bd4f2916fd340f4943cdac6f2751b721ae5bac5e04b069f96754ae"} Mar 10 19:58:04 crc kubenswrapper[4861]: I0310 19:58:04.959282 4861 scope.go:117] "RemoveContainer" containerID="c3ec1a2ec165344b8b1084cb5463b0d256bafd1af0a85ab3b540a201851e68d6" Mar 10 19:58:04 crc kubenswrapper[4861]: E0310 19:58:04.959663 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qttbr_openshift-machine-config-operator(771189c2-452d-4204-a0b7-abfe9ba62bd0)\"" pod="openshift-machine-config-operator/machine-config-daemon-qttbr" podUID="771189c2-452d-4204-a0b7-abfe9ba62bd0" Mar 10 19:58:05 crc kubenswrapper[4861]: I0310 19:58:05.520950 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29552878-jrhbh" Mar 10 19:58:05 crc kubenswrapper[4861]: I0310 19:58:05.655839 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mtsv6\" (UniqueName: \"kubernetes.io/projected/ef6dfbb2-943a-4b05-8aad-b0f4ee2e4d70-kube-api-access-mtsv6\") pod \"ef6dfbb2-943a-4b05-8aad-b0f4ee2e4d70\" (UID: \"ef6dfbb2-943a-4b05-8aad-b0f4ee2e4d70\") " Mar 10 19:58:05 crc kubenswrapper[4861]: I0310 19:58:05.664754 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ef6dfbb2-943a-4b05-8aad-b0f4ee2e4d70-kube-api-access-mtsv6" (OuterVolumeSpecName: "kube-api-access-mtsv6") pod "ef6dfbb2-943a-4b05-8aad-b0f4ee2e4d70" (UID: "ef6dfbb2-943a-4b05-8aad-b0f4ee2e4d70"). InnerVolumeSpecName "kube-api-access-mtsv6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 10 19:58:05 crc kubenswrapper[4861]: I0310 19:58:05.757285 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mtsv6\" (UniqueName: \"kubernetes.io/projected/ef6dfbb2-943a-4b05-8aad-b0f4ee2e4d70-kube-api-access-mtsv6\") on node \"crc\" DevicePath \"\"" Mar 10 19:58:06 crc kubenswrapper[4861]: I0310 19:58:06.157975 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29552878-jrhbh" event={"ID":"ef6dfbb2-943a-4b05-8aad-b0f4ee2e4d70","Type":"ContainerDied","Data":"88b5d3c666cc9aeecdd15e7814120697b4adb8c60d7b9ad604a508d70e8f4c8b"} Mar 10 19:58:06 crc kubenswrapper[4861]: I0310 19:58:06.158031 4861 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="88b5d3c666cc9aeecdd15e7814120697b4adb8c60d7b9ad604a508d70e8f4c8b" Mar 10 19:58:06 crc kubenswrapper[4861]: I0310 19:58:06.158071 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29552878-jrhbh" Mar 10 19:58:06 crc kubenswrapper[4861]: I0310 19:58:06.614291 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29552872-qrbnr"] Mar 10 19:58:06 crc kubenswrapper[4861]: I0310 19:58:06.623297 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29552872-qrbnr"] Mar 10 19:58:06 crc kubenswrapper[4861]: I0310 19:58:06.971596 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8734ed42-d043-4ac1-9282-fb905cd3cb36" path="/var/lib/kubelet/pods/8734ed42-d043-4ac1-9282-fb905cd3cb36/volumes" Mar 10 19:58:15 crc kubenswrapper[4861]: I0310 19:58:15.958239 4861 scope.go:117] "RemoveContainer" containerID="c3ec1a2ec165344b8b1084cb5463b0d256bafd1af0a85ab3b540a201851e68d6" Mar 10 19:58:15 crc kubenswrapper[4861]: E0310 19:58:15.959276 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qttbr_openshift-machine-config-operator(771189c2-452d-4204-a0b7-abfe9ba62bd0)\"" pod="openshift-machine-config-operator/machine-config-daemon-qttbr" podUID="771189c2-452d-4204-a0b7-abfe9ba62bd0" Mar 10 19:58:27 crc kubenswrapper[4861]: I0310 19:58:27.957933 4861 scope.go:117] "RemoveContainer" containerID="c3ec1a2ec165344b8b1084cb5463b0d256bafd1af0a85ab3b540a201851e68d6" Mar 10 19:58:29 crc kubenswrapper[4861]: I0310 19:58:29.398132 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qttbr" event={"ID":"771189c2-452d-4204-a0b7-abfe9ba62bd0","Type":"ContainerStarted","Data":"05fba8d9269db8e6b59dbfe873fa76fe9d4c7570941ecdd9929f45e6e64571c6"} Mar 10 19:58:56 crc kubenswrapper[4861]: I0310 19:58:56.154303 4861 scope.go:117] "RemoveContainer" containerID="afa016784def1cc5cbbff78d2022f0924e1cb2809276ead1074ee27413cd03db" Mar 10 20:00:00 crc kubenswrapper[4861]: I0310 20:00:00.165905 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29552880-kvbr4"] Mar 10 20:00:00 crc kubenswrapper[4861]: E0310 20:00:00.167001 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ef6dfbb2-943a-4b05-8aad-b0f4ee2e4d70" containerName="oc" Mar 10 20:00:00 crc kubenswrapper[4861]: I0310 20:00:00.167023 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="ef6dfbb2-943a-4b05-8aad-b0f4ee2e4d70" containerName="oc" Mar 10 20:00:00 crc kubenswrapper[4861]: I0310 20:00:00.167269 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="ef6dfbb2-943a-4b05-8aad-b0f4ee2e4d70" containerName="oc" Mar 10 20:00:00 crc kubenswrapper[4861]: I0310 20:00:00.168061 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29552880-kvbr4" Mar 10 20:00:00 crc kubenswrapper[4861]: I0310 20:00:00.171694 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 10 20:00:00 crc kubenswrapper[4861]: I0310 20:00:00.172180 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 10 20:00:00 crc kubenswrapper[4861]: I0310 20:00:00.172348 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-gfbj2" Mar 10 20:00:00 crc kubenswrapper[4861]: I0310 20:00:00.175801 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29552880-8xhff"] Mar 10 20:00:00 crc kubenswrapper[4861]: I0310 20:00:00.177558 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29552880-8xhff" Mar 10 20:00:00 crc kubenswrapper[4861]: I0310 20:00:00.181362 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Mar 10 20:00:00 crc kubenswrapper[4861]: I0310 20:00:00.187790 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29552880-8xhff"] Mar 10 20:00:00 crc kubenswrapper[4861]: I0310 20:00:00.188510 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zzn78\" (UniqueName: \"kubernetes.io/projected/6aac7e52-0d3c-4c33-a7f2-7dbce3d9714b-kube-api-access-zzn78\") pod \"auto-csr-approver-29552880-kvbr4\" (UID: \"6aac7e52-0d3c-4c33-a7f2-7dbce3d9714b\") " pod="openshift-infra/auto-csr-approver-29552880-kvbr4" Mar 10 20:00:00 crc kubenswrapper[4861]: I0310 20:00:00.194476 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-infra/auto-csr-approver-29552880-kvbr4"] Mar 10 20:00:00 crc kubenswrapper[4861]: I0310 20:00:00.198597 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Mar 10 20:00:00 crc kubenswrapper[4861]: I0310 20:00:00.290368 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/96c2d30c-48a2-452c-b8f7-ba3fa176c801-secret-volume\") pod \"collect-profiles-29552880-8xhff\" (UID: \"96c2d30c-48a2-452c-b8f7-ba3fa176c801\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29552880-8xhff" Mar 10 20:00:00 crc kubenswrapper[4861]: I0310 20:00:00.290468 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jrxz5\" (UniqueName: \"kubernetes.io/projected/96c2d30c-48a2-452c-b8f7-ba3fa176c801-kube-api-access-jrxz5\") pod \"collect-profiles-29552880-8xhff\" (UID: \"96c2d30c-48a2-452c-b8f7-ba3fa176c801\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29552880-8xhff" Mar 10 20:00:00 crc kubenswrapper[4861]: I0310 20:00:00.290541 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zzn78\" (UniqueName: \"kubernetes.io/projected/6aac7e52-0d3c-4c33-a7f2-7dbce3d9714b-kube-api-access-zzn78\") pod \"auto-csr-approver-29552880-kvbr4\" (UID: \"6aac7e52-0d3c-4c33-a7f2-7dbce3d9714b\") " pod="openshift-infra/auto-csr-approver-29552880-kvbr4" Mar 10 20:00:00 crc kubenswrapper[4861]: I0310 20:00:00.290995 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/96c2d30c-48a2-452c-b8f7-ba3fa176c801-config-volume\") pod \"collect-profiles-29552880-8xhff\" (UID: \"96c2d30c-48a2-452c-b8f7-ba3fa176c801\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29552880-8xhff" Mar 10 20:00:00 crc kubenswrapper[4861]: I0310 20:00:00.324213 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zzn78\" (UniqueName: \"kubernetes.io/projected/6aac7e52-0d3c-4c33-a7f2-7dbce3d9714b-kube-api-access-zzn78\") pod \"auto-csr-approver-29552880-kvbr4\" (UID: \"6aac7e52-0d3c-4c33-a7f2-7dbce3d9714b\") " pod="openshift-infra/auto-csr-approver-29552880-kvbr4" Mar 10 20:00:00 crc kubenswrapper[4861]: I0310 20:00:00.392841 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/96c2d30c-48a2-452c-b8f7-ba3fa176c801-config-volume\") pod \"collect-profiles-29552880-8xhff\" (UID: \"96c2d30c-48a2-452c-b8f7-ba3fa176c801\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29552880-8xhff" Mar 10 20:00:00 crc kubenswrapper[4861]: I0310 20:00:00.392924 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/96c2d30c-48a2-452c-b8f7-ba3fa176c801-secret-volume\") pod \"collect-profiles-29552880-8xhff\" (UID: \"96c2d30c-48a2-452c-b8f7-ba3fa176c801\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29552880-8xhff" Mar 10 20:00:00 crc kubenswrapper[4861]: I0310 20:00:00.392981 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jrxz5\" (UniqueName: \"kubernetes.io/projected/96c2d30c-48a2-452c-b8f7-ba3fa176c801-kube-api-access-jrxz5\") pod \"collect-profiles-29552880-8xhff\" (UID: \"96c2d30c-48a2-452c-b8f7-ba3fa176c801\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29552880-8xhff" Mar 10 20:00:00 crc kubenswrapper[4861]: I0310 20:00:00.395416 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/96c2d30c-48a2-452c-b8f7-ba3fa176c801-config-volume\") pod \"collect-profiles-29552880-8xhff\" (UID: \"96c2d30c-48a2-452c-b8f7-ba3fa176c801\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29552880-8xhff" Mar 10 20:00:00 crc kubenswrapper[4861]: I0310 20:00:00.398330 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/96c2d30c-48a2-452c-b8f7-ba3fa176c801-secret-volume\") pod \"collect-profiles-29552880-8xhff\" (UID: \"96c2d30c-48a2-452c-b8f7-ba3fa176c801\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29552880-8xhff" Mar 10 20:00:00 crc kubenswrapper[4861]: I0310 20:00:00.425815 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jrxz5\" (UniqueName: \"kubernetes.io/projected/96c2d30c-48a2-452c-b8f7-ba3fa176c801-kube-api-access-jrxz5\") pod \"collect-profiles-29552880-8xhff\" (UID: \"96c2d30c-48a2-452c-b8f7-ba3fa176c801\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29552880-8xhff" Mar 10 20:00:00 crc kubenswrapper[4861]: I0310 20:00:00.513976 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29552880-kvbr4" Mar 10 20:00:00 crc kubenswrapper[4861]: I0310 20:00:00.525626 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29552880-8xhff" Mar 10 20:00:00 crc kubenswrapper[4861]: I0310 20:00:00.804675 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29552880-kvbr4"] Mar 10 20:00:00 crc kubenswrapper[4861]: W0310 20:00:00.817335 4861 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6aac7e52_0d3c_4c33_a7f2_7dbce3d9714b.slice/crio-9edbeb83e1e79596381ba40361ade15825b527c48003a1e59f5b9b4402b03ef3 WatchSource:0}: Error finding container 9edbeb83e1e79596381ba40361ade15825b527c48003a1e59f5b9b4402b03ef3: Status 404 returned error can't find the container with id 9edbeb83e1e79596381ba40361ade15825b527c48003a1e59f5b9b4402b03ef3 Mar 10 20:00:01 crc kubenswrapper[4861]: W0310 20:00:01.088988 4861 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod96c2d30c_48a2_452c_b8f7_ba3fa176c801.slice/crio-aea986db726031e9e5187b3762df9cd68040e898c69a89d947bfe5d7f8cfa5d8 WatchSource:0}: Error finding container aea986db726031e9e5187b3762df9cd68040e898c69a89d947bfe5d7f8cfa5d8: Status 404 returned error can't find the container with id aea986db726031e9e5187b3762df9cd68040e898c69a89d947bfe5d7f8cfa5d8 Mar 10 20:00:01 crc kubenswrapper[4861]: I0310 20:00:01.091092 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29552880-8xhff"] Mar 10 20:00:01 crc kubenswrapper[4861]: I0310 20:00:01.224342 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29552880-kvbr4" event={"ID":"6aac7e52-0d3c-4c33-a7f2-7dbce3d9714b","Type":"ContainerStarted","Data":"9edbeb83e1e79596381ba40361ade15825b527c48003a1e59f5b9b4402b03ef3"} Mar 10 20:00:01 crc kubenswrapper[4861]: I0310 20:00:01.226833 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-operator-lifecycle-manager/collect-profiles-29552880-8xhff" event={"ID":"96c2d30c-48a2-452c-b8f7-ba3fa176c801","Type":"ContainerStarted","Data":"aea986db726031e9e5187b3762df9cd68040e898c69a89d947bfe5d7f8cfa5d8"} Mar 10 20:00:02 crc kubenswrapper[4861]: I0310 20:00:02.239012 4861 generic.go:334] "Generic (PLEG): container finished" podID="96c2d30c-48a2-452c-b8f7-ba3fa176c801" containerID="58c4294bd20f1967ce0c8720bb18ba90ed830954a7abd58ef87ab3c03557c172" exitCode=0 Mar 10 20:00:02 crc kubenswrapper[4861]: I0310 20:00:02.239074 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29552880-8xhff" event={"ID":"96c2d30c-48a2-452c-b8f7-ba3fa176c801","Type":"ContainerDied","Data":"58c4294bd20f1967ce0c8720bb18ba90ed830954a7abd58ef87ab3c03557c172"} Mar 10 20:00:03 crc kubenswrapper[4861]: I0310 20:00:03.636556 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29552880-8xhff" Mar 10 20:00:03 crc kubenswrapper[4861]: I0310 20:00:03.746545 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/96c2d30c-48a2-452c-b8f7-ba3fa176c801-config-volume\") pod \"96c2d30c-48a2-452c-b8f7-ba3fa176c801\" (UID: \"96c2d30c-48a2-452c-b8f7-ba3fa176c801\") " Mar 10 20:00:03 crc kubenswrapper[4861]: I0310 20:00:03.746619 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jrxz5\" (UniqueName: \"kubernetes.io/projected/96c2d30c-48a2-452c-b8f7-ba3fa176c801-kube-api-access-jrxz5\") pod \"96c2d30c-48a2-452c-b8f7-ba3fa176c801\" (UID: \"96c2d30c-48a2-452c-b8f7-ba3fa176c801\") " Mar 10 20:00:03 crc kubenswrapper[4861]: I0310 20:00:03.746657 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/96c2d30c-48a2-452c-b8f7-ba3fa176c801-secret-volume\") pod \"96c2d30c-48a2-452c-b8f7-ba3fa176c801\" (UID: \"96c2d30c-48a2-452c-b8f7-ba3fa176c801\") " Mar 10 20:00:03 crc kubenswrapper[4861]: I0310 20:00:03.747973 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/96c2d30c-48a2-452c-b8f7-ba3fa176c801-config-volume" (OuterVolumeSpecName: "config-volume") pod "96c2d30c-48a2-452c-b8f7-ba3fa176c801" (UID: "96c2d30c-48a2-452c-b8f7-ba3fa176c801"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 10 20:00:03 crc kubenswrapper[4861]: I0310 20:00:03.752780 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96c2d30c-48a2-452c-b8f7-ba3fa176c801-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "96c2d30c-48a2-452c-b8f7-ba3fa176c801" (UID: "96c2d30c-48a2-452c-b8f7-ba3fa176c801"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 10 20:00:03 crc kubenswrapper[4861]: I0310 20:00:03.753815 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96c2d30c-48a2-452c-b8f7-ba3fa176c801-kube-api-access-jrxz5" (OuterVolumeSpecName: "kube-api-access-jrxz5") pod "96c2d30c-48a2-452c-b8f7-ba3fa176c801" (UID: "96c2d30c-48a2-452c-b8f7-ba3fa176c801"). InnerVolumeSpecName "kube-api-access-jrxz5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 10 20:00:03 crc kubenswrapper[4861]: I0310 20:00:03.848688 4861 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/96c2d30c-48a2-452c-b8f7-ba3fa176c801-config-volume\") on node \"crc\" DevicePath \"\"" Mar 10 20:00:03 crc kubenswrapper[4861]: I0310 20:00:03.848758 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jrxz5\" (UniqueName: \"kubernetes.io/projected/96c2d30c-48a2-452c-b8f7-ba3fa176c801-kube-api-access-jrxz5\") on node \"crc\" DevicePath \"\"" Mar 10 20:00:03 crc kubenswrapper[4861]: I0310 20:00:03.848779 4861 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/96c2d30c-48a2-452c-b8f7-ba3fa176c801-secret-volume\") on node \"crc\" DevicePath \"\"" Mar 10 20:00:04 crc kubenswrapper[4861]: I0310 20:00:04.260965 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29552880-8xhff" event={"ID":"96c2d30c-48a2-452c-b8f7-ba3fa176c801","Type":"ContainerDied","Data":"aea986db726031e9e5187b3762df9cd68040e898c69a89d947bfe5d7f8cfa5d8"} Mar 10 20:00:04 crc kubenswrapper[4861]: I0310 20:00:04.261021 4861 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="aea986db726031e9e5187b3762df9cd68040e898c69a89d947bfe5d7f8cfa5d8" Mar 10 20:00:04 crc kubenswrapper[4861]: I0310 20:00:04.261046 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29552880-8xhff" Mar 10 20:00:04 crc kubenswrapper[4861]: I0310 20:00:04.734951 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29552835-9kqxw"] Mar 10 20:00:04 crc kubenswrapper[4861]: I0310 20:00:04.742035 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29552835-9kqxw"] Mar 10 20:00:04 crc kubenswrapper[4861]: I0310 20:00:04.973283 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="521110b4-5c02-417b-b8bd-19258d60721a" path="/var/lib/kubelet/pods/521110b4-5c02-417b-b8bd-19258d60721a/volumes" Mar 10 20:00:15 crc kubenswrapper[4861]: I0310 20:00:15.388764 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29552880-kvbr4" event={"ID":"6aac7e52-0d3c-4c33-a7f2-7dbce3d9714b","Type":"ContainerStarted","Data":"01ef61adccb484cfad0683e78815894964cf98b1de765ed00fdee4b82e75d6f3"} Mar 10 20:00:15 crc kubenswrapper[4861]: I0310 20:00:15.406200 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29552880-kvbr4" podStartSLOduration=1.195319414 podStartE2EDuration="15.406173793s" podCreationTimestamp="2026-03-10 20:00:00 +0000 UTC" firstStartedPulling="2026-03-10 20:00:00.819300999 +0000 UTC m=+4344.582736969" lastFinishedPulling="2026-03-10 20:00:15.030155348 +0000 UTC m=+4358.793591348" observedRunningTime="2026-03-10 20:00:15.404778575 +0000 UTC m=+4359.168214625" watchObservedRunningTime="2026-03-10 20:00:15.406173793 +0000 UTC m=+4359.169609793" Mar 10 20:00:16 crc kubenswrapper[4861]: I0310 20:00:16.400352 4861 generic.go:334] "Generic (PLEG): container finished" podID="6aac7e52-0d3c-4c33-a7f2-7dbce3d9714b" containerID="01ef61adccb484cfad0683e78815894964cf98b1de765ed00fdee4b82e75d6f3" exitCode=0 Mar 10 20:00:16 crc kubenswrapper[4861]: 
I0310 20:00:16.400415 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29552880-kvbr4" event={"ID":"6aac7e52-0d3c-4c33-a7f2-7dbce3d9714b","Type":"ContainerDied","Data":"01ef61adccb484cfad0683e78815894964cf98b1de765ed00fdee4b82e75d6f3"} Mar 10 20:00:17 crc kubenswrapper[4861]: I0310 20:00:17.861660 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29552880-kvbr4" Mar 10 20:00:18 crc kubenswrapper[4861]: I0310 20:00:18.001931 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zzn78\" (UniqueName: \"kubernetes.io/projected/6aac7e52-0d3c-4c33-a7f2-7dbce3d9714b-kube-api-access-zzn78\") pod \"6aac7e52-0d3c-4c33-a7f2-7dbce3d9714b\" (UID: \"6aac7e52-0d3c-4c33-a7f2-7dbce3d9714b\") " Mar 10 20:00:18 crc kubenswrapper[4861]: I0310 20:00:18.009921 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6aac7e52-0d3c-4c33-a7f2-7dbce3d9714b-kube-api-access-zzn78" (OuterVolumeSpecName: "kube-api-access-zzn78") pod "6aac7e52-0d3c-4c33-a7f2-7dbce3d9714b" (UID: "6aac7e52-0d3c-4c33-a7f2-7dbce3d9714b"). InnerVolumeSpecName "kube-api-access-zzn78". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 10 20:00:18 crc kubenswrapper[4861]: I0310 20:00:18.104059 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zzn78\" (UniqueName: \"kubernetes.io/projected/6aac7e52-0d3c-4c33-a7f2-7dbce3d9714b-kube-api-access-zzn78\") on node \"crc\" DevicePath \"\"" Mar 10 20:00:18 crc kubenswrapper[4861]: I0310 20:00:18.418657 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29552880-kvbr4" event={"ID":"6aac7e52-0d3c-4c33-a7f2-7dbce3d9714b","Type":"ContainerDied","Data":"9edbeb83e1e79596381ba40361ade15825b527c48003a1e59f5b9b4402b03ef3"} Mar 10 20:00:18 crc kubenswrapper[4861]: I0310 20:00:18.419117 4861 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9edbeb83e1e79596381ba40361ade15825b527c48003a1e59f5b9b4402b03ef3" Mar 10 20:00:18 crc kubenswrapper[4861]: I0310 20:00:18.418783 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29552880-kvbr4" Mar 10 20:00:18 crc kubenswrapper[4861]: I0310 20:00:18.478637 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29552874-flgbz"] Mar 10 20:00:18 crc kubenswrapper[4861]: I0310 20:00:18.483377 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29552874-flgbz"] Mar 10 20:00:18 crc kubenswrapper[4861]: I0310 20:00:18.971967 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dfd3e944-6f7e-47aa-8767-86ccaf04e193" path="/var/lib/kubelet/pods/dfd3e944-6f7e-47aa-8767-86ccaf04e193/volumes" Mar 10 20:00:51 crc kubenswrapper[4861]: I0310 20:00:51.992663 4861 patch_prober.go:28] interesting pod/machine-config-daemon-qttbr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: 
connection refused" start-of-body= Mar 10 20:00:51 crc kubenswrapper[4861]: I0310 20:00:51.993530 4861 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qttbr" podUID="771189c2-452d-4204-a0b7-abfe9ba62bd0" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 10 20:00:56 crc kubenswrapper[4861]: I0310 20:00:56.274786 4861 scope.go:117] "RemoveContainer" containerID="2ad6ead639225a06cd58adc72e687680a0246472ce173a4d7e75b0635551d57b" Mar 10 20:00:56 crc kubenswrapper[4861]: I0310 20:00:56.317215 4861 scope.go:117] "RemoveContainer" containerID="3c5d9d624d54fd85ac885c670d6173a33fde09821c0d09c48d8f931fb132e6c1" Mar 10 20:01:21 crc kubenswrapper[4861]: I0310 20:01:21.991844 4861 patch_prober.go:28] interesting pod/machine-config-daemon-qttbr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 10 20:01:21 crc kubenswrapper[4861]: I0310 20:01:21.992529 4861 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qttbr" podUID="771189c2-452d-4204-a0b7-abfe9ba62bd0" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 10 20:01:51 crc kubenswrapper[4861]: I0310 20:01:51.992454 4861 patch_prober.go:28] interesting pod/machine-config-daemon-qttbr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 10 20:01:51 crc kubenswrapper[4861]: I0310 20:01:51.993947 4861 prober.go:107] "Probe failed" 
probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qttbr" podUID="771189c2-452d-4204-a0b7-abfe9ba62bd0" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 10 20:01:51 crc kubenswrapper[4861]: I0310 20:01:51.994047 4861 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-qttbr" Mar 10 20:01:51 crc kubenswrapper[4861]: I0310 20:01:51.994987 4861 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"05fba8d9269db8e6b59dbfe873fa76fe9d4c7570941ecdd9929f45e6e64571c6"} pod="openshift-machine-config-operator/machine-config-daemon-qttbr" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 10 20:01:51 crc kubenswrapper[4861]: I0310 20:01:51.995098 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-qttbr" podUID="771189c2-452d-4204-a0b7-abfe9ba62bd0" containerName="machine-config-daemon" containerID="cri-o://05fba8d9269db8e6b59dbfe873fa76fe9d4c7570941ecdd9929f45e6e64571c6" gracePeriod=600 Mar 10 20:01:52 crc kubenswrapper[4861]: I0310 20:01:52.333333 4861 generic.go:334] "Generic (PLEG): container finished" podID="771189c2-452d-4204-a0b7-abfe9ba62bd0" containerID="05fba8d9269db8e6b59dbfe873fa76fe9d4c7570941ecdd9929f45e6e64571c6" exitCode=0 Mar 10 20:01:52 crc kubenswrapper[4861]: I0310 20:01:52.333425 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qttbr" event={"ID":"771189c2-452d-4204-a0b7-abfe9ba62bd0","Type":"ContainerDied","Data":"05fba8d9269db8e6b59dbfe873fa76fe9d4c7570941ecdd9929f45e6e64571c6"} Mar 10 20:01:52 crc kubenswrapper[4861]: I0310 20:01:52.333866 4861 
scope.go:117] "RemoveContainer" containerID="c3ec1a2ec165344b8b1084cb5463b0d256bafd1af0a85ab3b540a201851e68d6" Mar 10 20:01:53 crc kubenswrapper[4861]: I0310 20:01:53.351368 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qttbr" event={"ID":"771189c2-452d-4204-a0b7-abfe9ba62bd0","Type":"ContainerStarted","Data":"66fd05d7ca4c39567b3943c90d9f448b022d1eff623e3d5891bb70d66e564943"} Mar 10 20:02:00 crc kubenswrapper[4861]: I0310 20:02:00.144909 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29552882-dpjpg"] Mar 10 20:02:00 crc kubenswrapper[4861]: E0310 20:02:00.145727 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="96c2d30c-48a2-452c-b8f7-ba3fa176c801" containerName="collect-profiles" Mar 10 20:02:00 crc kubenswrapper[4861]: I0310 20:02:00.145738 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="96c2d30c-48a2-452c-b8f7-ba3fa176c801" containerName="collect-profiles" Mar 10 20:02:00 crc kubenswrapper[4861]: E0310 20:02:00.145760 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6aac7e52-0d3c-4c33-a7f2-7dbce3d9714b" containerName="oc" Mar 10 20:02:00 crc kubenswrapper[4861]: I0310 20:02:00.145766 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="6aac7e52-0d3c-4c33-a7f2-7dbce3d9714b" containerName="oc" Mar 10 20:02:00 crc kubenswrapper[4861]: I0310 20:02:00.145908 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="96c2d30c-48a2-452c-b8f7-ba3fa176c801" containerName="collect-profiles" Mar 10 20:02:00 crc kubenswrapper[4861]: I0310 20:02:00.145927 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="6aac7e52-0d3c-4c33-a7f2-7dbce3d9714b" containerName="oc" Mar 10 20:02:00 crc kubenswrapper[4861]: I0310 20:02:00.146341 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29552882-dpjpg" Mar 10 20:02:00 crc kubenswrapper[4861]: I0310 20:02:00.148298 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 10 20:02:00 crc kubenswrapper[4861]: I0310 20:02:00.148380 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 10 20:02:00 crc kubenswrapper[4861]: I0310 20:02:00.148446 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-gfbj2" Mar 10 20:02:00 crc kubenswrapper[4861]: I0310 20:02:00.154098 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29552882-dpjpg"] Mar 10 20:02:00 crc kubenswrapper[4861]: I0310 20:02:00.312185 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ddfx2\" (UniqueName: \"kubernetes.io/projected/f4bdef02-3a9a-495f-aab2-6b79dbef7114-kube-api-access-ddfx2\") pod \"auto-csr-approver-29552882-dpjpg\" (UID: \"f4bdef02-3a9a-495f-aab2-6b79dbef7114\") " pod="openshift-infra/auto-csr-approver-29552882-dpjpg" Mar 10 20:02:00 crc kubenswrapper[4861]: I0310 20:02:00.413118 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ddfx2\" (UniqueName: \"kubernetes.io/projected/f4bdef02-3a9a-495f-aab2-6b79dbef7114-kube-api-access-ddfx2\") pod \"auto-csr-approver-29552882-dpjpg\" (UID: \"f4bdef02-3a9a-495f-aab2-6b79dbef7114\") " pod="openshift-infra/auto-csr-approver-29552882-dpjpg" Mar 10 20:02:00 crc kubenswrapper[4861]: I0310 20:02:00.433759 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ddfx2\" (UniqueName: \"kubernetes.io/projected/f4bdef02-3a9a-495f-aab2-6b79dbef7114-kube-api-access-ddfx2\") pod \"auto-csr-approver-29552882-dpjpg\" (UID: \"f4bdef02-3a9a-495f-aab2-6b79dbef7114\") " 
pod="openshift-infra/auto-csr-approver-29552882-dpjpg" Mar 10 20:02:00 crc kubenswrapper[4861]: I0310 20:02:00.505953 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29552882-dpjpg" Mar 10 20:02:00 crc kubenswrapper[4861]: I0310 20:02:00.710527 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29552882-dpjpg"] Mar 10 20:02:00 crc kubenswrapper[4861]: I0310 20:02:00.721598 4861 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 10 20:02:01 crc kubenswrapper[4861]: I0310 20:02:01.419446 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29552882-dpjpg" event={"ID":"f4bdef02-3a9a-495f-aab2-6b79dbef7114","Type":"ContainerStarted","Data":"9a7433fc2c6c3a76853f3caa189bf8ca3218f80aa9d2ca71609bff434743d80a"} Mar 10 20:02:02 crc kubenswrapper[4861]: I0310 20:02:02.462858 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29552882-dpjpg" event={"ID":"f4bdef02-3a9a-495f-aab2-6b79dbef7114","Type":"ContainerStarted","Data":"cb0cae0ad1ffb5c10b478127b094343fc95d478829deebefcdf21840be54bf67"} Mar 10 20:02:02 crc kubenswrapper[4861]: I0310 20:02:02.477230 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29552882-dpjpg" podStartSLOduration=1.2543681389999999 podStartE2EDuration="2.477190255s" podCreationTimestamp="2026-03-10 20:02:00 +0000 UTC" firstStartedPulling="2026-03-10 20:02:00.721249435 +0000 UTC m=+4464.484685395" lastFinishedPulling="2026-03-10 20:02:01.944071521 +0000 UTC m=+4465.707507511" observedRunningTime="2026-03-10 20:02:02.476069594 +0000 UTC m=+4466.239505554" watchObservedRunningTime="2026-03-10 20:02:02.477190255 +0000 UTC m=+4466.240626235" Mar 10 20:02:03 crc kubenswrapper[4861]: I0310 20:02:03.474066 4861 generic.go:334] "Generic (PLEG): container 
finished" podID="f4bdef02-3a9a-495f-aab2-6b79dbef7114" containerID="cb0cae0ad1ffb5c10b478127b094343fc95d478829deebefcdf21840be54bf67" exitCode=0 Mar 10 20:02:03 crc kubenswrapper[4861]: I0310 20:02:03.474327 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29552882-dpjpg" event={"ID":"f4bdef02-3a9a-495f-aab2-6b79dbef7114","Type":"ContainerDied","Data":"cb0cae0ad1ffb5c10b478127b094343fc95d478829deebefcdf21840be54bf67"} Mar 10 20:02:04 crc kubenswrapper[4861]: I0310 20:02:04.906442 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29552882-dpjpg" Mar 10 20:02:05 crc kubenswrapper[4861]: I0310 20:02:04.997097 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ddfx2\" (UniqueName: \"kubernetes.io/projected/f4bdef02-3a9a-495f-aab2-6b79dbef7114-kube-api-access-ddfx2\") pod \"f4bdef02-3a9a-495f-aab2-6b79dbef7114\" (UID: \"f4bdef02-3a9a-495f-aab2-6b79dbef7114\") " Mar 10 20:02:05 crc kubenswrapper[4861]: I0310 20:02:05.011730 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f4bdef02-3a9a-495f-aab2-6b79dbef7114-kube-api-access-ddfx2" (OuterVolumeSpecName: "kube-api-access-ddfx2") pod "f4bdef02-3a9a-495f-aab2-6b79dbef7114" (UID: "f4bdef02-3a9a-495f-aab2-6b79dbef7114"). InnerVolumeSpecName "kube-api-access-ddfx2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 10 20:02:05 crc kubenswrapper[4861]: I0310 20:02:05.099154 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ddfx2\" (UniqueName: \"kubernetes.io/projected/f4bdef02-3a9a-495f-aab2-6b79dbef7114-kube-api-access-ddfx2\") on node \"crc\" DevicePath \"\"" Mar 10 20:02:05 crc kubenswrapper[4861]: I0310 20:02:05.494439 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29552882-dpjpg" event={"ID":"f4bdef02-3a9a-495f-aab2-6b79dbef7114","Type":"ContainerDied","Data":"9a7433fc2c6c3a76853f3caa189bf8ca3218f80aa9d2ca71609bff434743d80a"} Mar 10 20:02:05 crc kubenswrapper[4861]: I0310 20:02:05.494770 4861 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9a7433fc2c6c3a76853f3caa189bf8ca3218f80aa9d2ca71609bff434743d80a" Mar 10 20:02:05 crc kubenswrapper[4861]: I0310 20:02:05.494517 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29552882-dpjpg" Mar 10 20:02:05 crc kubenswrapper[4861]: I0310 20:02:05.571938 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29552876-zxgkd"] Mar 10 20:02:05 crc kubenswrapper[4861]: I0310 20:02:05.579166 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29552876-zxgkd"] Mar 10 20:02:06 crc kubenswrapper[4861]: I0310 20:02:06.967697 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="321502cb-96d6-431e-8cf9-05fcea2fb723" path="/var/lib/kubelet/pods/321502cb-96d6-431e-8cf9-05fcea2fb723/volumes" Mar 10 20:02:56 crc kubenswrapper[4861]: I0310 20:02:56.477827 4861 scope.go:117] "RemoveContainer" containerID="e3f6a2ffb900648175a933cc513817f58a16c851f19ce55592c0af4123197b32" Mar 10 20:04:00 crc kubenswrapper[4861]: I0310 20:04:00.154059 4861 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-infra/auto-csr-approver-29552884-qzfsx"] Mar 10 20:04:00 crc kubenswrapper[4861]: E0310 20:04:00.155210 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4bdef02-3a9a-495f-aab2-6b79dbef7114" containerName="oc" Mar 10 20:04:00 crc kubenswrapper[4861]: I0310 20:04:00.155235 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4bdef02-3a9a-495f-aab2-6b79dbef7114" containerName="oc" Mar 10 20:04:00 crc kubenswrapper[4861]: I0310 20:04:00.155494 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4bdef02-3a9a-495f-aab2-6b79dbef7114" containerName="oc" Mar 10 20:04:00 crc kubenswrapper[4861]: I0310 20:04:00.156237 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29552884-qzfsx" Mar 10 20:04:00 crc kubenswrapper[4861]: I0310 20:04:00.157981 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-gfbj2" Mar 10 20:04:00 crc kubenswrapper[4861]: I0310 20:04:00.158297 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 10 20:04:00 crc kubenswrapper[4861]: I0310 20:04:00.159034 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 10 20:04:00 crc kubenswrapper[4861]: I0310 20:04:00.175321 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29552884-qzfsx"] Mar 10 20:04:00 crc kubenswrapper[4861]: I0310 20:04:00.316846 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-55rkv\" (UniqueName: \"kubernetes.io/projected/99b5c29d-c540-4a03-b6d3-369340c81359-kube-api-access-55rkv\") pod \"auto-csr-approver-29552884-qzfsx\" (UID: \"99b5c29d-c540-4a03-b6d3-369340c81359\") " pod="openshift-infra/auto-csr-approver-29552884-qzfsx" Mar 10 20:04:00 crc kubenswrapper[4861]: I0310 
20:04:00.418113 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-55rkv\" (UniqueName: \"kubernetes.io/projected/99b5c29d-c540-4a03-b6d3-369340c81359-kube-api-access-55rkv\") pod \"auto-csr-approver-29552884-qzfsx\" (UID: \"99b5c29d-c540-4a03-b6d3-369340c81359\") " pod="openshift-infra/auto-csr-approver-29552884-qzfsx" Mar 10 20:04:00 crc kubenswrapper[4861]: I0310 20:04:00.450963 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-55rkv\" (UniqueName: \"kubernetes.io/projected/99b5c29d-c540-4a03-b6d3-369340c81359-kube-api-access-55rkv\") pod \"auto-csr-approver-29552884-qzfsx\" (UID: \"99b5c29d-c540-4a03-b6d3-369340c81359\") " pod="openshift-infra/auto-csr-approver-29552884-qzfsx" Mar 10 20:04:00 crc kubenswrapper[4861]: I0310 20:04:00.529813 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29552884-qzfsx" Mar 10 20:04:00 crc kubenswrapper[4861]: I0310 20:04:00.837045 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29552884-qzfsx"] Mar 10 20:04:01 crc kubenswrapper[4861]: I0310 20:04:01.231451 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29552884-qzfsx" event={"ID":"99b5c29d-c540-4a03-b6d3-369340c81359","Type":"ContainerStarted","Data":"62107d2f9da672bdd46a83af1027c86f6259cf8e8a32aef625d2dc5a406ac8f9"} Mar 10 20:04:02 crc kubenswrapper[4861]: I0310 20:04:02.241665 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29552884-qzfsx" event={"ID":"99b5c29d-c540-4a03-b6d3-369340c81359","Type":"ContainerStarted","Data":"5ad05c468f856e454f51904b827c281cab64cd11815768f5170a3f8fb8952fb9"} Mar 10 20:04:02 crc kubenswrapper[4861]: I0310 20:04:02.259037 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29552884-qzfsx" 
podStartSLOduration=1.408741871 podStartE2EDuration="2.259006145s" podCreationTimestamp="2026-03-10 20:04:00 +0000 UTC" firstStartedPulling="2026-03-10 20:04:00.85064828 +0000 UTC m=+4584.614084240" lastFinishedPulling="2026-03-10 20:04:01.700912554 +0000 UTC m=+4585.464348514" observedRunningTime="2026-03-10 20:04:02.256120547 +0000 UTC m=+4586.019556517" watchObservedRunningTime="2026-03-10 20:04:02.259006145 +0000 UTC m=+4586.022442145" Mar 10 20:04:03 crc kubenswrapper[4861]: I0310 20:04:03.253443 4861 generic.go:334] "Generic (PLEG): container finished" podID="99b5c29d-c540-4a03-b6d3-369340c81359" containerID="5ad05c468f856e454f51904b827c281cab64cd11815768f5170a3f8fb8952fb9" exitCode=0 Mar 10 20:04:03 crc kubenswrapper[4861]: I0310 20:04:03.253510 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29552884-qzfsx" event={"ID":"99b5c29d-c540-4a03-b6d3-369340c81359","Type":"ContainerDied","Data":"5ad05c468f856e454f51904b827c281cab64cd11815768f5170a3f8fb8952fb9"} Mar 10 20:04:04 crc kubenswrapper[4861]: I0310 20:04:04.660216 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29552884-qzfsx" Mar 10 20:04:04 crc kubenswrapper[4861]: I0310 20:04:04.792698 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-55rkv\" (UniqueName: \"kubernetes.io/projected/99b5c29d-c540-4a03-b6d3-369340c81359-kube-api-access-55rkv\") pod \"99b5c29d-c540-4a03-b6d3-369340c81359\" (UID: \"99b5c29d-c540-4a03-b6d3-369340c81359\") " Mar 10 20:04:04 crc kubenswrapper[4861]: I0310 20:04:04.799622 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/99b5c29d-c540-4a03-b6d3-369340c81359-kube-api-access-55rkv" (OuterVolumeSpecName: "kube-api-access-55rkv") pod "99b5c29d-c540-4a03-b6d3-369340c81359" (UID: "99b5c29d-c540-4a03-b6d3-369340c81359"). InnerVolumeSpecName "kube-api-access-55rkv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 10 20:04:04 crc kubenswrapper[4861]: I0310 20:04:04.894801 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-55rkv\" (UniqueName: \"kubernetes.io/projected/99b5c29d-c540-4a03-b6d3-369340c81359-kube-api-access-55rkv\") on node \"crc\" DevicePath \"\"" Mar 10 20:04:05 crc kubenswrapper[4861]: I0310 20:04:05.275382 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29552884-qzfsx" event={"ID":"99b5c29d-c540-4a03-b6d3-369340c81359","Type":"ContainerDied","Data":"62107d2f9da672bdd46a83af1027c86f6259cf8e8a32aef625d2dc5a406ac8f9"} Mar 10 20:04:05 crc kubenswrapper[4861]: I0310 20:04:05.275456 4861 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="62107d2f9da672bdd46a83af1027c86f6259cf8e8a32aef625d2dc5a406ac8f9" Mar 10 20:04:05 crc kubenswrapper[4861]: I0310 20:04:05.275470 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29552884-qzfsx" Mar 10 20:04:05 crc kubenswrapper[4861]: I0310 20:04:05.345203 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29552878-jrhbh"] Mar 10 20:04:05 crc kubenswrapper[4861]: I0310 20:04:05.352789 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29552878-jrhbh"] Mar 10 20:04:06 crc kubenswrapper[4861]: I0310 20:04:06.974607 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ef6dfbb2-943a-4b05-8aad-b0f4ee2e4d70" path="/var/lib/kubelet/pods/ef6dfbb2-943a-4b05-8aad-b0f4ee2e4d70/volumes" Mar 10 20:04:21 crc kubenswrapper[4861]: I0310 20:04:21.992229 4861 patch_prober.go:28] interesting pod/machine-config-daemon-qttbr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: 
connection refused" start-of-body= Mar 10 20:04:21 crc kubenswrapper[4861]: I0310 20:04:21.992950 4861 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qttbr" podUID="771189c2-452d-4204-a0b7-abfe9ba62bd0" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 10 20:04:51 crc kubenswrapper[4861]: I0310 20:04:51.992829 4861 patch_prober.go:28] interesting pod/machine-config-daemon-qttbr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 10 20:04:51 crc kubenswrapper[4861]: I0310 20:04:51.994105 4861 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qttbr" podUID="771189c2-452d-4204-a0b7-abfe9ba62bd0" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 10 20:04:56 crc kubenswrapper[4861]: I0310 20:04:56.600553 4861 scope.go:117] "RemoveContainer" containerID="a46ff1bd27bd4f2916fd340f4943cdac6f2751b721ae5bac5e04b069f96754ae" Mar 10 20:05:21 crc kubenswrapper[4861]: I0310 20:05:21.991787 4861 patch_prober.go:28] interesting pod/machine-config-daemon-qttbr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 10 20:05:21 crc kubenswrapper[4861]: I0310 20:05:21.992348 4861 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qttbr" podUID="771189c2-452d-4204-a0b7-abfe9ba62bd0" containerName="machine-config-daemon" 
probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 10 20:05:21 crc kubenswrapper[4861]: I0310 20:05:21.992402 4861 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-qttbr" Mar 10 20:05:21 crc kubenswrapper[4861]: I0310 20:05:21.993289 4861 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"66fd05d7ca4c39567b3943c90d9f448b022d1eff623e3d5891bb70d66e564943"} pod="openshift-machine-config-operator/machine-config-daemon-qttbr" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 10 20:05:21 crc kubenswrapper[4861]: I0310 20:05:21.993425 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-qttbr" podUID="771189c2-452d-4204-a0b7-abfe9ba62bd0" containerName="machine-config-daemon" containerID="cri-o://66fd05d7ca4c39567b3943c90d9f448b022d1eff623e3d5891bb70d66e564943" gracePeriod=600 Mar 10 20:05:22 crc kubenswrapper[4861]: E0310 20:05:22.130252 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qttbr_openshift-machine-config-operator(771189c2-452d-4204-a0b7-abfe9ba62bd0)\"" pod="openshift-machine-config-operator/machine-config-daemon-qttbr" podUID="771189c2-452d-4204-a0b7-abfe9ba62bd0" Mar 10 20:05:22 crc kubenswrapper[4861]: I0310 20:05:22.975226 4861 generic.go:334] "Generic (PLEG): container finished" podID="771189c2-452d-4204-a0b7-abfe9ba62bd0" containerID="66fd05d7ca4c39567b3943c90d9f448b022d1eff623e3d5891bb70d66e564943" exitCode=0 Mar 10 20:05:22 crc kubenswrapper[4861]: I0310 20:05:22.975260 4861 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qttbr" event={"ID":"771189c2-452d-4204-a0b7-abfe9ba62bd0","Type":"ContainerDied","Data":"66fd05d7ca4c39567b3943c90d9f448b022d1eff623e3d5891bb70d66e564943"} Mar 10 20:05:22 crc kubenswrapper[4861]: I0310 20:05:22.976263 4861 scope.go:117] "RemoveContainer" containerID="05fba8d9269db8e6b59dbfe873fa76fe9d4c7570941ecdd9929f45e6e64571c6" Mar 10 20:05:22 crc kubenswrapper[4861]: I0310 20:05:22.976962 4861 scope.go:117] "RemoveContainer" containerID="66fd05d7ca4c39567b3943c90d9f448b022d1eff623e3d5891bb70d66e564943" Mar 10 20:05:22 crc kubenswrapper[4861]: E0310 20:05:22.977363 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qttbr_openshift-machine-config-operator(771189c2-452d-4204-a0b7-abfe9ba62bd0)\"" pod="openshift-machine-config-operator/machine-config-daemon-qttbr" podUID="771189c2-452d-4204-a0b7-abfe9ba62bd0" Mar 10 20:05:36 crc kubenswrapper[4861]: I0310 20:05:36.978806 4861 scope.go:117] "RemoveContainer" containerID="66fd05d7ca4c39567b3943c90d9f448b022d1eff623e3d5891bb70d66e564943" Mar 10 20:05:36 crc kubenswrapper[4861]: E0310 20:05:36.979948 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qttbr_openshift-machine-config-operator(771189c2-452d-4204-a0b7-abfe9ba62bd0)\"" pod="openshift-machine-config-operator/machine-config-daemon-qttbr" podUID="771189c2-452d-4204-a0b7-abfe9ba62bd0" Mar 10 20:05:46 crc kubenswrapper[4861]: I0310 20:05:46.749561 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-7dt54"] Mar 10 20:05:46 crc kubenswrapper[4861]: E0310 20:05:46.750990 
4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="99b5c29d-c540-4a03-b6d3-369340c81359" containerName="oc" Mar 10 20:05:46 crc kubenswrapper[4861]: I0310 20:05:46.751018 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="99b5c29d-c540-4a03-b6d3-369340c81359" containerName="oc" Mar 10 20:05:46 crc kubenswrapper[4861]: I0310 20:05:46.751367 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="99b5c29d-c540-4a03-b6d3-369340c81359" containerName="oc" Mar 10 20:05:46 crc kubenswrapper[4861]: I0310 20:05:46.753543 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7dt54" Mar 10 20:05:46 crc kubenswrapper[4861]: I0310 20:05:46.767132 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-7dt54"] Mar 10 20:05:46 crc kubenswrapper[4861]: I0310 20:05:46.948037 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d55a097c-8b6b-41ce-9e7f-72866050cfed-utilities\") pod \"certified-operators-7dt54\" (UID: \"d55a097c-8b6b-41ce-9e7f-72866050cfed\") " pod="openshift-marketplace/certified-operators-7dt54" Mar 10 20:05:46 crc kubenswrapper[4861]: I0310 20:05:46.948316 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m5kmr\" (UniqueName: \"kubernetes.io/projected/d55a097c-8b6b-41ce-9e7f-72866050cfed-kube-api-access-m5kmr\") pod \"certified-operators-7dt54\" (UID: \"d55a097c-8b6b-41ce-9e7f-72866050cfed\") " pod="openshift-marketplace/certified-operators-7dt54" Mar 10 20:05:46 crc kubenswrapper[4861]: I0310 20:05:46.948411 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d55a097c-8b6b-41ce-9e7f-72866050cfed-catalog-content\") pod 
\"certified-operators-7dt54\" (UID: \"d55a097c-8b6b-41ce-9e7f-72866050cfed\") " pod="openshift-marketplace/certified-operators-7dt54" Mar 10 20:05:47 crc kubenswrapper[4861]: I0310 20:05:47.050699 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d55a097c-8b6b-41ce-9e7f-72866050cfed-catalog-content\") pod \"certified-operators-7dt54\" (UID: \"d55a097c-8b6b-41ce-9e7f-72866050cfed\") " pod="openshift-marketplace/certified-operators-7dt54" Mar 10 20:05:47 crc kubenswrapper[4861]: I0310 20:05:47.050829 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d55a097c-8b6b-41ce-9e7f-72866050cfed-utilities\") pod \"certified-operators-7dt54\" (UID: \"d55a097c-8b6b-41ce-9e7f-72866050cfed\") " pod="openshift-marketplace/certified-operators-7dt54" Mar 10 20:05:47 crc kubenswrapper[4861]: I0310 20:05:47.050910 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m5kmr\" (UniqueName: \"kubernetes.io/projected/d55a097c-8b6b-41ce-9e7f-72866050cfed-kube-api-access-m5kmr\") pod \"certified-operators-7dt54\" (UID: \"d55a097c-8b6b-41ce-9e7f-72866050cfed\") " pod="openshift-marketplace/certified-operators-7dt54" Mar 10 20:05:47 crc kubenswrapper[4861]: I0310 20:05:47.051342 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d55a097c-8b6b-41ce-9e7f-72866050cfed-catalog-content\") pod \"certified-operators-7dt54\" (UID: \"d55a097c-8b6b-41ce-9e7f-72866050cfed\") " pod="openshift-marketplace/certified-operators-7dt54" Mar 10 20:05:47 crc kubenswrapper[4861]: I0310 20:05:47.051491 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d55a097c-8b6b-41ce-9e7f-72866050cfed-utilities\") pod \"certified-operators-7dt54\" (UID: 
\"d55a097c-8b6b-41ce-9e7f-72866050cfed\") " pod="openshift-marketplace/certified-operators-7dt54" Mar 10 20:05:47 crc kubenswrapper[4861]: I0310 20:05:47.073918 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m5kmr\" (UniqueName: \"kubernetes.io/projected/d55a097c-8b6b-41ce-9e7f-72866050cfed-kube-api-access-m5kmr\") pod \"certified-operators-7dt54\" (UID: \"d55a097c-8b6b-41ce-9e7f-72866050cfed\") " pod="openshift-marketplace/certified-operators-7dt54" Mar 10 20:05:47 crc kubenswrapper[4861]: I0310 20:05:47.085910 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7dt54" Mar 10 20:05:47 crc kubenswrapper[4861]: I0310 20:05:47.582527 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-7dt54"] Mar 10 20:05:47 crc kubenswrapper[4861]: I0310 20:05:47.958405 4861 scope.go:117] "RemoveContainer" containerID="66fd05d7ca4c39567b3943c90d9f448b022d1eff623e3d5891bb70d66e564943" Mar 10 20:05:47 crc kubenswrapper[4861]: E0310 20:05:47.958829 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qttbr_openshift-machine-config-operator(771189c2-452d-4204-a0b7-abfe9ba62bd0)\"" pod="openshift-machine-config-operator/machine-config-daemon-qttbr" podUID="771189c2-452d-4204-a0b7-abfe9ba62bd0" Mar 10 20:05:48 crc kubenswrapper[4861]: I0310 20:05:48.203733 4861 generic.go:334] "Generic (PLEG): container finished" podID="d55a097c-8b6b-41ce-9e7f-72866050cfed" containerID="ed76680bfd70b07d1c21e3c31c1f33f5e610bab4f7e9d6d6bfea1cda2ac3c299" exitCode=0 Mar 10 20:05:48 crc kubenswrapper[4861]: I0310 20:05:48.203793 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7dt54" 
event={"ID":"d55a097c-8b6b-41ce-9e7f-72866050cfed","Type":"ContainerDied","Data":"ed76680bfd70b07d1c21e3c31c1f33f5e610bab4f7e9d6d6bfea1cda2ac3c299"} Mar 10 20:05:48 crc kubenswrapper[4861]: I0310 20:05:48.203839 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7dt54" event={"ID":"d55a097c-8b6b-41ce-9e7f-72866050cfed","Type":"ContainerStarted","Data":"4c7fd45af47bfd8259079623f51a37f06e15db831241e1a11b20a453431afe80"} Mar 10 20:05:49 crc kubenswrapper[4861]: I0310 20:05:49.212109 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7dt54" event={"ID":"d55a097c-8b6b-41ce-9e7f-72866050cfed","Type":"ContainerStarted","Data":"9add75921ecc3a075e07b1632763c965af1cace5d69fd0bae7a90c39677dfe55"} Mar 10 20:05:50 crc kubenswrapper[4861]: I0310 20:05:50.234789 4861 generic.go:334] "Generic (PLEG): container finished" podID="d55a097c-8b6b-41ce-9e7f-72866050cfed" containerID="9add75921ecc3a075e07b1632763c965af1cace5d69fd0bae7a90c39677dfe55" exitCode=0 Mar 10 20:05:50 crc kubenswrapper[4861]: I0310 20:05:50.234872 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7dt54" event={"ID":"d55a097c-8b6b-41ce-9e7f-72866050cfed","Type":"ContainerDied","Data":"9add75921ecc3a075e07b1632763c965af1cace5d69fd0bae7a90c39677dfe55"} Mar 10 20:05:51 crc kubenswrapper[4861]: I0310 20:05:51.247967 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7dt54" event={"ID":"d55a097c-8b6b-41ce-9e7f-72866050cfed","Type":"ContainerStarted","Data":"4244246ad3214bc13714b464fe8bb5b9114a85522aa1ec79c1c1dc2a616101d5"} Mar 10 20:05:51 crc kubenswrapper[4861]: I0310 20:05:51.287899 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-7dt54" podStartSLOduration=2.856148428 podStartE2EDuration="5.287871881s" podCreationTimestamp="2026-03-10 20:05:46 
+0000 UTC" firstStartedPulling="2026-03-10 20:05:48.205870856 +0000 UTC m=+4691.969306856" lastFinishedPulling="2026-03-10 20:05:50.637594339 +0000 UTC m=+4694.401030309" observedRunningTime="2026-03-10 20:05:51.274924199 +0000 UTC m=+4695.038360239" watchObservedRunningTime="2026-03-10 20:05:51.287871881 +0000 UTC m=+4695.051307881" Mar 10 20:05:57 crc kubenswrapper[4861]: I0310 20:05:57.086065 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-7dt54" Mar 10 20:05:57 crc kubenswrapper[4861]: I0310 20:05:57.086751 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-7dt54" Mar 10 20:05:57 crc kubenswrapper[4861]: I0310 20:05:57.160432 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-7dt54" Mar 10 20:05:57 crc kubenswrapper[4861]: I0310 20:05:57.373466 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-7dt54" Mar 10 20:05:57 crc kubenswrapper[4861]: I0310 20:05:57.440616 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-7dt54"] Mar 10 20:05:59 crc kubenswrapper[4861]: I0310 20:05:59.318020 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-7dt54" podUID="d55a097c-8b6b-41ce-9e7f-72866050cfed" containerName="registry-server" containerID="cri-o://4244246ad3214bc13714b464fe8bb5b9114a85522aa1ec79c1c1dc2a616101d5" gracePeriod=2 Mar 10 20:05:59 crc kubenswrapper[4861]: I0310 20:05:59.783615 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-7dt54" Mar 10 20:05:59 crc kubenswrapper[4861]: I0310 20:05:59.863377 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m5kmr\" (UniqueName: \"kubernetes.io/projected/d55a097c-8b6b-41ce-9e7f-72866050cfed-kube-api-access-m5kmr\") pod \"d55a097c-8b6b-41ce-9e7f-72866050cfed\" (UID: \"d55a097c-8b6b-41ce-9e7f-72866050cfed\") " Mar 10 20:05:59 crc kubenswrapper[4861]: I0310 20:05:59.863901 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d55a097c-8b6b-41ce-9e7f-72866050cfed-utilities\") pod \"d55a097c-8b6b-41ce-9e7f-72866050cfed\" (UID: \"d55a097c-8b6b-41ce-9e7f-72866050cfed\") " Mar 10 20:05:59 crc kubenswrapper[4861]: I0310 20:05:59.864266 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d55a097c-8b6b-41ce-9e7f-72866050cfed-catalog-content\") pod \"d55a097c-8b6b-41ce-9e7f-72866050cfed\" (UID: \"d55a097c-8b6b-41ce-9e7f-72866050cfed\") " Mar 10 20:05:59 crc kubenswrapper[4861]: I0310 20:05:59.864979 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d55a097c-8b6b-41ce-9e7f-72866050cfed-utilities" (OuterVolumeSpecName: "utilities") pod "d55a097c-8b6b-41ce-9e7f-72866050cfed" (UID: "d55a097c-8b6b-41ce-9e7f-72866050cfed"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 10 20:05:59 crc kubenswrapper[4861]: I0310 20:05:59.868748 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d55a097c-8b6b-41ce-9e7f-72866050cfed-kube-api-access-m5kmr" (OuterVolumeSpecName: "kube-api-access-m5kmr") pod "d55a097c-8b6b-41ce-9e7f-72866050cfed" (UID: "d55a097c-8b6b-41ce-9e7f-72866050cfed"). InnerVolumeSpecName "kube-api-access-m5kmr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 10 20:05:59 crc kubenswrapper[4861]: I0310 20:05:59.966477 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m5kmr\" (UniqueName: \"kubernetes.io/projected/d55a097c-8b6b-41ce-9e7f-72866050cfed-kube-api-access-m5kmr\") on node \"crc\" DevicePath \"\"" Mar 10 20:05:59 crc kubenswrapper[4861]: I0310 20:05:59.966506 4861 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d55a097c-8b6b-41ce-9e7f-72866050cfed-utilities\") on node \"crc\" DevicePath \"\"" Mar 10 20:06:00 crc kubenswrapper[4861]: I0310 20:06:00.157219 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29552886-sqfsv"] Mar 10 20:06:00 crc kubenswrapper[4861]: E0310 20:06:00.157520 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d55a097c-8b6b-41ce-9e7f-72866050cfed" containerName="extract-utilities" Mar 10 20:06:00 crc kubenswrapper[4861]: I0310 20:06:00.157533 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="d55a097c-8b6b-41ce-9e7f-72866050cfed" containerName="extract-utilities" Mar 10 20:06:00 crc kubenswrapper[4861]: E0310 20:06:00.157549 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d55a097c-8b6b-41ce-9e7f-72866050cfed" containerName="registry-server" Mar 10 20:06:00 crc kubenswrapper[4861]: I0310 20:06:00.157556 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="d55a097c-8b6b-41ce-9e7f-72866050cfed" containerName="registry-server" Mar 10 20:06:00 crc kubenswrapper[4861]: E0310 20:06:00.157567 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d55a097c-8b6b-41ce-9e7f-72866050cfed" containerName="extract-content" Mar 10 20:06:00 crc kubenswrapper[4861]: I0310 20:06:00.157573 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="d55a097c-8b6b-41ce-9e7f-72866050cfed" containerName="extract-content" Mar 10 20:06:00 crc 
kubenswrapper[4861]: I0310 20:06:00.157691 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="d55a097c-8b6b-41ce-9e7f-72866050cfed" containerName="registry-server" Mar 10 20:06:00 crc kubenswrapper[4861]: I0310 20:06:00.158142 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29552886-sqfsv" Mar 10 20:06:00 crc kubenswrapper[4861]: I0310 20:06:00.164341 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-gfbj2" Mar 10 20:06:00 crc kubenswrapper[4861]: I0310 20:06:00.164420 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 10 20:06:00 crc kubenswrapper[4861]: I0310 20:06:00.164420 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 10 20:06:00 crc kubenswrapper[4861]: I0310 20:06:00.167811 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-82nqx\" (UniqueName: \"kubernetes.io/projected/58a6e34c-985b-4caa-9a5e-0fdbcde5c0da-kube-api-access-82nqx\") pod \"auto-csr-approver-29552886-sqfsv\" (UID: \"58a6e34c-985b-4caa-9a5e-0fdbcde5c0da\") " pod="openshift-infra/auto-csr-approver-29552886-sqfsv" Mar 10 20:06:00 crc kubenswrapper[4861]: I0310 20:06:00.198013 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29552886-sqfsv"] Mar 10 20:06:00 crc kubenswrapper[4861]: I0310 20:06:00.269186 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-82nqx\" (UniqueName: \"kubernetes.io/projected/58a6e34c-985b-4caa-9a5e-0fdbcde5c0da-kube-api-access-82nqx\") pod \"auto-csr-approver-29552886-sqfsv\" (UID: \"58a6e34c-985b-4caa-9a5e-0fdbcde5c0da\") " pod="openshift-infra/auto-csr-approver-29552886-sqfsv" Mar 10 20:06:00 crc kubenswrapper[4861]: I0310 20:06:00.283498 
4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-82nqx\" (UniqueName: \"kubernetes.io/projected/58a6e34c-985b-4caa-9a5e-0fdbcde5c0da-kube-api-access-82nqx\") pod \"auto-csr-approver-29552886-sqfsv\" (UID: \"58a6e34c-985b-4caa-9a5e-0fdbcde5c0da\") " pod="openshift-infra/auto-csr-approver-29552886-sqfsv" Mar 10 20:06:00 crc kubenswrapper[4861]: I0310 20:06:00.325054 4861 generic.go:334] "Generic (PLEG): container finished" podID="d55a097c-8b6b-41ce-9e7f-72866050cfed" containerID="4244246ad3214bc13714b464fe8bb5b9114a85522aa1ec79c1c1dc2a616101d5" exitCode=0 Mar 10 20:06:00 crc kubenswrapper[4861]: I0310 20:06:00.325097 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7dt54" event={"ID":"d55a097c-8b6b-41ce-9e7f-72866050cfed","Type":"ContainerDied","Data":"4244246ad3214bc13714b464fe8bb5b9114a85522aa1ec79c1c1dc2a616101d5"} Mar 10 20:06:00 crc kubenswrapper[4861]: I0310 20:06:00.325119 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7dt54" Mar 10 20:06:00 crc kubenswrapper[4861]: I0310 20:06:00.325136 4861 scope.go:117] "RemoveContainer" containerID="4244246ad3214bc13714b464fe8bb5b9114a85522aa1ec79c1c1dc2a616101d5" Mar 10 20:06:00 crc kubenswrapper[4861]: I0310 20:06:00.325125 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7dt54" event={"ID":"d55a097c-8b6b-41ce-9e7f-72866050cfed","Type":"ContainerDied","Data":"4c7fd45af47bfd8259079623f51a37f06e15db831241e1a11b20a453431afe80"} Mar 10 20:06:00 crc kubenswrapper[4861]: I0310 20:06:00.327580 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d55a097c-8b6b-41ce-9e7f-72866050cfed-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d55a097c-8b6b-41ce-9e7f-72866050cfed" (UID: "d55a097c-8b6b-41ce-9e7f-72866050cfed"). 
InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 10 20:06:00 crc kubenswrapper[4861]: I0310 20:06:00.340105 4861 scope.go:117] "RemoveContainer" containerID="9add75921ecc3a075e07b1632763c965af1cace5d69fd0bae7a90c39677dfe55" Mar 10 20:06:00 crc kubenswrapper[4861]: I0310 20:06:00.354744 4861 scope.go:117] "RemoveContainer" containerID="ed76680bfd70b07d1c21e3c31c1f33f5e610bab4f7e9d6d6bfea1cda2ac3c299" Mar 10 20:06:00 crc kubenswrapper[4861]: I0310 20:06:00.370239 4861 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d55a097c-8b6b-41ce-9e7f-72866050cfed-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 10 20:06:00 crc kubenswrapper[4861]: I0310 20:06:00.374138 4861 scope.go:117] "RemoveContainer" containerID="4244246ad3214bc13714b464fe8bb5b9114a85522aa1ec79c1c1dc2a616101d5" Mar 10 20:06:00 crc kubenswrapper[4861]: E0310 20:06:00.374450 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4244246ad3214bc13714b464fe8bb5b9114a85522aa1ec79c1c1dc2a616101d5\": container with ID starting with 4244246ad3214bc13714b464fe8bb5b9114a85522aa1ec79c1c1dc2a616101d5 not found: ID does not exist" containerID="4244246ad3214bc13714b464fe8bb5b9114a85522aa1ec79c1c1dc2a616101d5" Mar 10 20:06:00 crc kubenswrapper[4861]: I0310 20:06:00.374481 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4244246ad3214bc13714b464fe8bb5b9114a85522aa1ec79c1c1dc2a616101d5"} err="failed to get container status \"4244246ad3214bc13714b464fe8bb5b9114a85522aa1ec79c1c1dc2a616101d5\": rpc error: code = NotFound desc = could not find container \"4244246ad3214bc13714b464fe8bb5b9114a85522aa1ec79c1c1dc2a616101d5\": container with ID starting with 4244246ad3214bc13714b464fe8bb5b9114a85522aa1ec79c1c1dc2a616101d5 not found: ID does not exist" Mar 10 20:06:00 crc 
kubenswrapper[4861]: I0310 20:06:00.374502 4861 scope.go:117] "RemoveContainer" containerID="9add75921ecc3a075e07b1632763c965af1cace5d69fd0bae7a90c39677dfe55" Mar 10 20:06:00 crc kubenswrapper[4861]: E0310 20:06:00.386144 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9add75921ecc3a075e07b1632763c965af1cace5d69fd0bae7a90c39677dfe55\": container with ID starting with 9add75921ecc3a075e07b1632763c965af1cace5d69fd0bae7a90c39677dfe55 not found: ID does not exist" containerID="9add75921ecc3a075e07b1632763c965af1cace5d69fd0bae7a90c39677dfe55" Mar 10 20:06:00 crc kubenswrapper[4861]: I0310 20:06:00.386190 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9add75921ecc3a075e07b1632763c965af1cace5d69fd0bae7a90c39677dfe55"} err="failed to get container status \"9add75921ecc3a075e07b1632763c965af1cace5d69fd0bae7a90c39677dfe55\": rpc error: code = NotFound desc = could not find container \"9add75921ecc3a075e07b1632763c965af1cace5d69fd0bae7a90c39677dfe55\": container with ID starting with 9add75921ecc3a075e07b1632763c965af1cace5d69fd0bae7a90c39677dfe55 not found: ID does not exist" Mar 10 20:06:00 crc kubenswrapper[4861]: I0310 20:06:00.386217 4861 scope.go:117] "RemoveContainer" containerID="ed76680bfd70b07d1c21e3c31c1f33f5e610bab4f7e9d6d6bfea1cda2ac3c299" Mar 10 20:06:00 crc kubenswrapper[4861]: E0310 20:06:00.386813 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ed76680bfd70b07d1c21e3c31c1f33f5e610bab4f7e9d6d6bfea1cda2ac3c299\": container with ID starting with ed76680bfd70b07d1c21e3c31c1f33f5e610bab4f7e9d6d6bfea1cda2ac3c299 not found: ID does not exist" containerID="ed76680bfd70b07d1c21e3c31c1f33f5e610bab4f7e9d6d6bfea1cda2ac3c299" Mar 10 20:06:00 crc kubenswrapper[4861]: I0310 20:06:00.386888 4861 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"ed76680bfd70b07d1c21e3c31c1f33f5e610bab4f7e9d6d6bfea1cda2ac3c299"} err="failed to get container status \"ed76680bfd70b07d1c21e3c31c1f33f5e610bab4f7e9d6d6bfea1cda2ac3c299\": rpc error: code = NotFound desc = could not find container \"ed76680bfd70b07d1c21e3c31c1f33f5e610bab4f7e9d6d6bfea1cda2ac3c299\": container with ID starting with ed76680bfd70b07d1c21e3c31c1f33f5e610bab4f7e9d6d6bfea1cda2ac3c299 not found: ID does not exist" Mar 10 20:06:00 crc kubenswrapper[4861]: I0310 20:06:00.539918 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29552886-sqfsv" Mar 10 20:06:00 crc kubenswrapper[4861]: I0310 20:06:00.674797 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-7dt54"] Mar 10 20:06:00 crc kubenswrapper[4861]: I0310 20:06:00.681900 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-7dt54"] Mar 10 20:06:00 crc kubenswrapper[4861]: I0310 20:06:00.958575 4861 scope.go:117] "RemoveContainer" containerID="66fd05d7ca4c39567b3943c90d9f448b022d1eff623e3d5891bb70d66e564943" Mar 10 20:06:00 crc kubenswrapper[4861]: E0310 20:06:00.959141 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qttbr_openshift-machine-config-operator(771189c2-452d-4204-a0b7-abfe9ba62bd0)\"" pod="openshift-machine-config-operator/machine-config-daemon-qttbr" podUID="771189c2-452d-4204-a0b7-abfe9ba62bd0" Mar 10 20:06:00 crc kubenswrapper[4861]: I0310 20:06:00.967975 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d55a097c-8b6b-41ce-9e7f-72866050cfed" path="/var/lib/kubelet/pods/d55a097c-8b6b-41ce-9e7f-72866050cfed/volumes" Mar 10 20:06:01 crc kubenswrapper[4861]: I0310 20:06:01.071328 4861 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29552886-sqfsv"] Mar 10 20:06:01 crc kubenswrapper[4861]: I0310 20:06:01.345455 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29552886-sqfsv" event={"ID":"58a6e34c-985b-4caa-9a5e-0fdbcde5c0da","Type":"ContainerStarted","Data":"a531d1ede5fd2d8ef21ff005c1dc7607819e9ab525097b7b11b61b80db6ea22f"} Mar 10 20:06:03 crc kubenswrapper[4861]: I0310 20:06:03.367958 4861 generic.go:334] "Generic (PLEG): container finished" podID="58a6e34c-985b-4caa-9a5e-0fdbcde5c0da" containerID="270c69547db9be7067c9fe9fbeb9df29e0cb05209196cbf14fd552f71a485960" exitCode=0 Mar 10 20:06:03 crc kubenswrapper[4861]: I0310 20:06:03.368083 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29552886-sqfsv" event={"ID":"58a6e34c-985b-4caa-9a5e-0fdbcde5c0da","Type":"ContainerDied","Data":"270c69547db9be7067c9fe9fbeb9df29e0cb05209196cbf14fd552f71a485960"} Mar 10 20:06:04 crc kubenswrapper[4861]: I0310 20:06:04.778025 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29552886-sqfsv" Mar 10 20:06:04 crc kubenswrapper[4861]: I0310 20:06:04.941755 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-82nqx\" (UniqueName: \"kubernetes.io/projected/58a6e34c-985b-4caa-9a5e-0fdbcde5c0da-kube-api-access-82nqx\") pod \"58a6e34c-985b-4caa-9a5e-0fdbcde5c0da\" (UID: \"58a6e34c-985b-4caa-9a5e-0fdbcde5c0da\") " Mar 10 20:06:04 crc kubenswrapper[4861]: I0310 20:06:04.948885 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/58a6e34c-985b-4caa-9a5e-0fdbcde5c0da-kube-api-access-82nqx" (OuterVolumeSpecName: "kube-api-access-82nqx") pod "58a6e34c-985b-4caa-9a5e-0fdbcde5c0da" (UID: "58a6e34c-985b-4caa-9a5e-0fdbcde5c0da"). InnerVolumeSpecName "kube-api-access-82nqx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 10 20:06:05 crc kubenswrapper[4861]: I0310 20:06:05.042833 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-82nqx\" (UniqueName: \"kubernetes.io/projected/58a6e34c-985b-4caa-9a5e-0fdbcde5c0da-kube-api-access-82nqx\") on node \"crc\" DevicePath \"\"" Mar 10 20:06:05 crc kubenswrapper[4861]: I0310 20:06:05.391080 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29552886-sqfsv" event={"ID":"58a6e34c-985b-4caa-9a5e-0fdbcde5c0da","Type":"ContainerDied","Data":"a531d1ede5fd2d8ef21ff005c1dc7607819e9ab525097b7b11b61b80db6ea22f"} Mar 10 20:06:05 crc kubenswrapper[4861]: I0310 20:06:05.391130 4861 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a531d1ede5fd2d8ef21ff005c1dc7607819e9ab525097b7b11b61b80db6ea22f" Mar 10 20:06:05 crc kubenswrapper[4861]: I0310 20:06:05.391202 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29552886-sqfsv" Mar 10 20:06:05 crc kubenswrapper[4861]: I0310 20:06:05.883926 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29552880-kvbr4"] Mar 10 20:06:05 crc kubenswrapper[4861]: I0310 20:06:05.900933 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29552880-kvbr4"] Mar 10 20:06:06 crc kubenswrapper[4861]: I0310 20:06:06.973238 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6aac7e52-0d3c-4c33-a7f2-7dbce3d9714b" path="/var/lib/kubelet/pods/6aac7e52-0d3c-4c33-a7f2-7dbce3d9714b/volumes" Mar 10 20:06:16 crc kubenswrapper[4861]: I0310 20:06:15.959197 4861 scope.go:117] "RemoveContainer" containerID="66fd05d7ca4c39567b3943c90d9f448b022d1eff623e3d5891bb70d66e564943" Mar 10 20:06:16 crc kubenswrapper[4861]: E0310 20:06:15.960063 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qttbr_openshift-machine-config-operator(771189c2-452d-4204-a0b7-abfe9ba62bd0)\"" pod="openshift-machine-config-operator/machine-config-daemon-qttbr" podUID="771189c2-452d-4204-a0b7-abfe9ba62bd0" Mar 10 20:06:16 crc kubenswrapper[4861]: I0310 20:06:16.721167 4861 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/heat-operator-controller-manager-77b6666d85-q76xg" podUID="56a6874c-f929-4490-a247-52542d4aa8f1" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.71:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 10 20:06:16 crc kubenswrapper[4861]: I0310 20:06:16.721282 4861 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/heat-operator-controller-manager-77b6666d85-q76xg" podUID="56a6874c-f929-4490-a247-52542d4aa8f1" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.71:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 10 20:06:16 crc kubenswrapper[4861]: I0310 20:06:16.721352 4861 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/barbican-operator-controller-manager-677bd678f7-rzrtw" podUID="54044555-52fe-44c6-9e47-7f2748a8b114" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.66:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 10 20:06:28 crc kubenswrapper[4861]: I0310 20:06:28.957967 4861 scope.go:117] "RemoveContainer" containerID="66fd05d7ca4c39567b3943c90d9f448b022d1eff623e3d5891bb70d66e564943" Mar 10 20:06:28 crc kubenswrapper[4861]: E0310 20:06:28.959130 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-qttbr_openshift-machine-config-operator(771189c2-452d-4204-a0b7-abfe9ba62bd0)\"" pod="openshift-machine-config-operator/machine-config-daemon-qttbr" podUID="771189c2-452d-4204-a0b7-abfe9ba62bd0" Mar 10 20:06:42 crc kubenswrapper[4861]: I0310 20:06:42.959108 4861 scope.go:117] "RemoveContainer" containerID="66fd05d7ca4c39567b3943c90d9f448b022d1eff623e3d5891bb70d66e564943" Mar 10 20:06:42 crc kubenswrapper[4861]: E0310 20:06:42.960446 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qttbr_openshift-machine-config-operator(771189c2-452d-4204-a0b7-abfe9ba62bd0)\"" pod="openshift-machine-config-operator/machine-config-daemon-qttbr" podUID="771189c2-452d-4204-a0b7-abfe9ba62bd0" Mar 10 20:06:53 crc kubenswrapper[4861]: I0310 20:06:53.958473 4861 scope.go:117] "RemoveContainer" containerID="66fd05d7ca4c39567b3943c90d9f448b022d1eff623e3d5891bb70d66e564943" Mar 10 20:06:53 crc kubenswrapper[4861]: E0310 20:06:53.959620 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qttbr_openshift-machine-config-operator(771189c2-452d-4204-a0b7-abfe9ba62bd0)\"" pod="openshift-machine-config-operator/machine-config-daemon-qttbr" podUID="771189c2-452d-4204-a0b7-abfe9ba62bd0" Mar 10 20:06:56 crc kubenswrapper[4861]: I0310 20:06:56.718194 4861 scope.go:117] "RemoveContainer" containerID="01ef61adccb484cfad0683e78815894964cf98b1de765ed00fdee4b82e75d6f3" Mar 10 20:07:04 crc kubenswrapper[4861]: I0310 20:07:04.959742 4861 scope.go:117] "RemoveContainer" containerID="66fd05d7ca4c39567b3943c90d9f448b022d1eff623e3d5891bb70d66e564943" Mar 10 20:07:04 crc 
kubenswrapper[4861]: E0310 20:07:04.960800 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qttbr_openshift-machine-config-operator(771189c2-452d-4204-a0b7-abfe9ba62bd0)\"" pod="openshift-machine-config-operator/machine-config-daemon-qttbr" podUID="771189c2-452d-4204-a0b7-abfe9ba62bd0" Mar 10 20:07:18 crc kubenswrapper[4861]: I0310 20:07:18.958866 4861 scope.go:117] "RemoveContainer" containerID="66fd05d7ca4c39567b3943c90d9f448b022d1eff623e3d5891bb70d66e564943" Mar 10 20:07:18 crc kubenswrapper[4861]: E0310 20:07:18.960315 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qttbr_openshift-machine-config-operator(771189c2-452d-4204-a0b7-abfe9ba62bd0)\"" pod="openshift-machine-config-operator/machine-config-daemon-qttbr" podUID="771189c2-452d-4204-a0b7-abfe9ba62bd0" Mar 10 20:07:32 crc kubenswrapper[4861]: I0310 20:07:32.959227 4861 scope.go:117] "RemoveContainer" containerID="66fd05d7ca4c39567b3943c90d9f448b022d1eff623e3d5891bb70d66e564943" Mar 10 20:07:32 crc kubenswrapper[4861]: E0310 20:07:32.960324 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qttbr_openshift-machine-config-operator(771189c2-452d-4204-a0b7-abfe9ba62bd0)\"" pod="openshift-machine-config-operator/machine-config-daemon-qttbr" podUID="771189c2-452d-4204-a0b7-abfe9ba62bd0" Mar 10 20:07:44 crc kubenswrapper[4861]: I0310 20:07:44.959205 4861 scope.go:117] "RemoveContainer" containerID="66fd05d7ca4c39567b3943c90d9f448b022d1eff623e3d5891bb70d66e564943" Mar 
10 20:07:44 crc kubenswrapper[4861]: E0310 20:07:44.960247 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qttbr_openshift-machine-config-operator(771189c2-452d-4204-a0b7-abfe9ba62bd0)\"" pod="openshift-machine-config-operator/machine-config-daemon-qttbr" podUID="771189c2-452d-4204-a0b7-abfe9ba62bd0" Mar 10 20:07:50 crc kubenswrapper[4861]: I0310 20:07:50.757275 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-vbmcl"] Mar 10 20:07:50 crc kubenswrapper[4861]: E0310 20:07:50.759346 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="58a6e34c-985b-4caa-9a5e-0fdbcde5c0da" containerName="oc" Mar 10 20:07:50 crc kubenswrapper[4861]: I0310 20:07:50.759425 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="58a6e34c-985b-4caa-9a5e-0fdbcde5c0da" containerName="oc" Mar 10 20:07:50 crc kubenswrapper[4861]: I0310 20:07:50.759641 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="58a6e34c-985b-4caa-9a5e-0fdbcde5c0da" containerName="oc" Mar 10 20:07:50 crc kubenswrapper[4861]: I0310 20:07:50.760636 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-vbmcl" Mar 10 20:07:50 crc kubenswrapper[4861]: I0310 20:07:50.773112 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-vbmcl"] Mar 10 20:07:50 crc kubenswrapper[4861]: I0310 20:07:50.844487 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8b3260ac-a49c-4977-9653-b73abc7fb19b-utilities\") pod \"community-operators-vbmcl\" (UID: \"8b3260ac-a49c-4977-9653-b73abc7fb19b\") " pod="openshift-marketplace/community-operators-vbmcl" Mar 10 20:07:50 crc kubenswrapper[4861]: I0310 20:07:50.844567 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8b3260ac-a49c-4977-9653-b73abc7fb19b-catalog-content\") pod \"community-operators-vbmcl\" (UID: \"8b3260ac-a49c-4977-9653-b73abc7fb19b\") " pod="openshift-marketplace/community-operators-vbmcl" Mar 10 20:07:50 crc kubenswrapper[4861]: I0310 20:07:50.844643 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mw9j4\" (UniqueName: \"kubernetes.io/projected/8b3260ac-a49c-4977-9653-b73abc7fb19b-kube-api-access-mw9j4\") pod \"community-operators-vbmcl\" (UID: \"8b3260ac-a49c-4977-9653-b73abc7fb19b\") " pod="openshift-marketplace/community-operators-vbmcl" Mar 10 20:07:50 crc kubenswrapper[4861]: I0310 20:07:50.946451 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8b3260ac-a49c-4977-9653-b73abc7fb19b-utilities\") pod \"community-operators-vbmcl\" (UID: \"8b3260ac-a49c-4977-9653-b73abc7fb19b\") " pod="openshift-marketplace/community-operators-vbmcl" Mar 10 20:07:50 crc kubenswrapper[4861]: I0310 20:07:50.946526 4861 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8b3260ac-a49c-4977-9653-b73abc7fb19b-catalog-content\") pod \"community-operators-vbmcl\" (UID: \"8b3260ac-a49c-4977-9653-b73abc7fb19b\") " pod="openshift-marketplace/community-operators-vbmcl" Mar 10 20:07:50 crc kubenswrapper[4861]: I0310 20:07:50.946595 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mw9j4\" (UniqueName: \"kubernetes.io/projected/8b3260ac-a49c-4977-9653-b73abc7fb19b-kube-api-access-mw9j4\") pod \"community-operators-vbmcl\" (UID: \"8b3260ac-a49c-4977-9653-b73abc7fb19b\") " pod="openshift-marketplace/community-operators-vbmcl" Mar 10 20:07:50 crc kubenswrapper[4861]: I0310 20:07:50.947191 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8b3260ac-a49c-4977-9653-b73abc7fb19b-utilities\") pod \"community-operators-vbmcl\" (UID: \"8b3260ac-a49c-4977-9653-b73abc7fb19b\") " pod="openshift-marketplace/community-operators-vbmcl" Mar 10 20:07:50 crc kubenswrapper[4861]: I0310 20:07:50.947345 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8b3260ac-a49c-4977-9653-b73abc7fb19b-catalog-content\") pod \"community-operators-vbmcl\" (UID: \"8b3260ac-a49c-4977-9653-b73abc7fb19b\") " pod="openshift-marketplace/community-operators-vbmcl" Mar 10 20:07:50 crc kubenswrapper[4861]: I0310 20:07:50.970353 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mw9j4\" (UniqueName: \"kubernetes.io/projected/8b3260ac-a49c-4977-9653-b73abc7fb19b-kube-api-access-mw9j4\") pod \"community-operators-vbmcl\" (UID: \"8b3260ac-a49c-4977-9653-b73abc7fb19b\") " pod="openshift-marketplace/community-operators-vbmcl" Mar 10 20:07:51 crc kubenswrapper[4861]: I0310 20:07:51.095880 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-vbmcl"
Mar 10 20:07:51 crc kubenswrapper[4861]: I0310 20:07:51.549350 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-vbmcl"]
Mar 10 20:07:52 crc kubenswrapper[4861]: I0310 20:07:52.547407 4861 generic.go:334] "Generic (PLEG): container finished" podID="8b3260ac-a49c-4977-9653-b73abc7fb19b" containerID="b0523011832ab5f32240e9cab7187d64c263d90c57179f8dd339a9d93bf1757b" exitCode=0
Mar 10 20:07:52 crc kubenswrapper[4861]: I0310 20:07:52.547534 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vbmcl" event={"ID":"8b3260ac-a49c-4977-9653-b73abc7fb19b","Type":"ContainerDied","Data":"b0523011832ab5f32240e9cab7187d64c263d90c57179f8dd339a9d93bf1757b"}
Mar 10 20:07:52 crc kubenswrapper[4861]: I0310 20:07:52.547704 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vbmcl" event={"ID":"8b3260ac-a49c-4977-9653-b73abc7fb19b","Type":"ContainerStarted","Data":"c9af3e146d3f7dba80e3bb581f47b1ceba3013e995ef0068e9f58622a032d4e4"}
Mar 10 20:07:52 crc kubenswrapper[4861]: I0310 20:07:52.550541 4861 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Mar 10 20:07:54 crc kubenswrapper[4861]: I0310 20:07:54.569918 4861 generic.go:334] "Generic (PLEG): container finished" podID="8b3260ac-a49c-4977-9653-b73abc7fb19b" containerID="1e98d1ae3b6efc3b15ed739ffd6991438347e779a31938108297842f91d55333" exitCode=0
Mar 10 20:07:54 crc kubenswrapper[4861]: I0310 20:07:54.570039 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vbmcl" event={"ID":"8b3260ac-a49c-4977-9653-b73abc7fb19b","Type":"ContainerDied","Data":"1e98d1ae3b6efc3b15ed739ffd6991438347e779a31938108297842f91d55333"}
Mar 10 20:07:55 crc kubenswrapper[4861]: I0310 20:07:55.580662 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vbmcl" event={"ID":"8b3260ac-a49c-4977-9653-b73abc7fb19b","Type":"ContainerStarted","Data":"55a22b2bcabe40637cbcefac714cbd8a324afe79cc199a1d2b83b8d010ec50b0"}
Mar 10 20:07:55 crc kubenswrapper[4861]: I0310 20:07:55.613825 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-vbmcl" podStartSLOduration=3.138003955 podStartE2EDuration="5.613800842s" podCreationTimestamp="2026-03-10 20:07:50 +0000 UTC" firstStartedPulling="2026-03-10 20:07:52.550113344 +0000 UTC m=+4816.313549344" lastFinishedPulling="2026-03-10 20:07:55.025910281 +0000 UTC m=+4818.789346231" observedRunningTime="2026-03-10 20:07:55.605742653 +0000 UTC m=+4819.369178653" watchObservedRunningTime="2026-03-10 20:07:55.613800842 +0000 UTC m=+4819.377236842"
Mar 10 20:07:57 crc kubenswrapper[4861]: I0310 20:07:57.959206 4861 scope.go:117] "RemoveContainer" containerID="66fd05d7ca4c39567b3943c90d9f448b022d1eff623e3d5891bb70d66e564943"
Mar 10 20:07:57 crc kubenswrapper[4861]: E0310 20:07:57.961589 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qttbr_openshift-machine-config-operator(771189c2-452d-4204-a0b7-abfe9ba62bd0)\"" pod="openshift-machine-config-operator/machine-config-daemon-qttbr" podUID="771189c2-452d-4204-a0b7-abfe9ba62bd0"
Mar 10 20:08:00 crc kubenswrapper[4861]: I0310 20:08:00.166792 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29552888-dkjfg"]
Mar 10 20:08:00 crc kubenswrapper[4861]: I0310 20:08:00.167943 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29552888-dkjfg"
Mar 10 20:08:00 crc kubenswrapper[4861]: I0310 20:08:00.177394 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt"
Mar 10 20:08:00 crc kubenswrapper[4861]: I0310 20:08:00.177601 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt"
Mar 10 20:08:00 crc kubenswrapper[4861]: I0310 20:08:00.177673 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-gfbj2"
Mar 10 20:08:00 crc kubenswrapper[4861]: I0310 20:08:00.178961 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29552888-dkjfg"]
Mar 10 20:08:00 crc kubenswrapper[4861]: I0310 20:08:00.310137 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-662ph\" (UniqueName: \"kubernetes.io/projected/5e5fa081-b5f9-44b3-ba83-5b8401bf883e-kube-api-access-662ph\") pod \"auto-csr-approver-29552888-dkjfg\" (UID: \"5e5fa081-b5f9-44b3-ba83-5b8401bf883e\") " pod="openshift-infra/auto-csr-approver-29552888-dkjfg"
Mar 10 20:08:00 crc kubenswrapper[4861]: I0310 20:08:00.412028 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-662ph\" (UniqueName: \"kubernetes.io/projected/5e5fa081-b5f9-44b3-ba83-5b8401bf883e-kube-api-access-662ph\") pod \"auto-csr-approver-29552888-dkjfg\" (UID: \"5e5fa081-b5f9-44b3-ba83-5b8401bf883e\") " pod="openshift-infra/auto-csr-approver-29552888-dkjfg"
Mar 10 20:08:00 crc kubenswrapper[4861]: I0310 20:08:00.433151 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-662ph\" (UniqueName: \"kubernetes.io/projected/5e5fa081-b5f9-44b3-ba83-5b8401bf883e-kube-api-access-662ph\") pod \"auto-csr-approver-29552888-dkjfg\" (UID: \"5e5fa081-b5f9-44b3-ba83-5b8401bf883e\") " pod="openshift-infra/auto-csr-approver-29552888-dkjfg"
Mar 10 20:08:00 crc kubenswrapper[4861]: I0310 20:08:00.495231 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29552888-dkjfg"
Mar 10 20:08:00 crc kubenswrapper[4861]: I0310 20:08:00.976868 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29552888-dkjfg"]
Mar 10 20:08:01 crc kubenswrapper[4861]: I0310 20:08:01.096478 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-vbmcl"
Mar 10 20:08:01 crc kubenswrapper[4861]: I0310 20:08:01.096537 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-vbmcl"
Mar 10 20:08:01 crc kubenswrapper[4861]: I0310 20:08:01.146629 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-vbmcl"
Mar 10 20:08:01 crc kubenswrapper[4861]: I0310 20:08:01.637824 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29552888-dkjfg" event={"ID":"5e5fa081-b5f9-44b3-ba83-5b8401bf883e","Type":"ContainerStarted","Data":"124222f5dd492b7dbb9ba441d775acb7540b6a32862d80f4b41aefe3131ff89a"}
Mar 10 20:08:01 crc kubenswrapper[4861]: I0310 20:08:01.710859 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-vbmcl"
Mar 10 20:08:01 crc kubenswrapper[4861]: I0310 20:08:01.782080 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-vbmcl"]
Mar 10 20:08:02 crc kubenswrapper[4861]: I0310 20:08:02.648202 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29552888-dkjfg" event={"ID":"5e5fa081-b5f9-44b3-ba83-5b8401bf883e","Type":"ContainerStarted","Data":"c5a6eb30b58f3ec8e61cd111b76488edfb38e34c70cdc59b41a00697c133a6e3"}
Mar 10 20:08:02 crc kubenswrapper[4861]: I0310 20:08:02.672548 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29552888-dkjfg" podStartSLOduration=1.44073111 podStartE2EDuration="2.672520969s" podCreationTimestamp="2026-03-10 20:08:00 +0000 UTC" firstStartedPulling="2026-03-10 20:08:00.97489221 +0000 UTC m=+4824.738328210" lastFinishedPulling="2026-03-10 20:08:02.206682079 +0000 UTC m=+4825.970118069" observedRunningTime="2026-03-10 20:08:02.663586857 +0000 UTC m=+4826.427022827" watchObservedRunningTime="2026-03-10 20:08:02.672520969 +0000 UTC m=+4826.435956939"
Mar 10 20:08:03 crc kubenswrapper[4861]: I0310 20:08:03.661516 4861 generic.go:334] "Generic (PLEG): container finished" podID="5e5fa081-b5f9-44b3-ba83-5b8401bf883e" containerID="c5a6eb30b58f3ec8e61cd111b76488edfb38e34c70cdc59b41a00697c133a6e3" exitCode=0
Mar 10 20:08:03 crc kubenswrapper[4861]: I0310 20:08:03.661799 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29552888-dkjfg" event={"ID":"5e5fa081-b5f9-44b3-ba83-5b8401bf883e","Type":"ContainerDied","Data":"c5a6eb30b58f3ec8e61cd111b76488edfb38e34c70cdc59b41a00697c133a6e3"}
Mar 10 20:08:03 crc kubenswrapper[4861]: I0310 20:08:03.662159 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-vbmcl" podUID="8b3260ac-a49c-4977-9653-b73abc7fb19b" containerName="registry-server" containerID="cri-o://55a22b2bcabe40637cbcefac714cbd8a324afe79cc199a1d2b83b8d010ec50b0" gracePeriod=2
Mar 10 20:08:04 crc kubenswrapper[4861]: I0310 20:08:04.222217 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-vbmcl"
Mar 10 20:08:04 crc kubenswrapper[4861]: I0310 20:08:04.295109 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mw9j4\" (UniqueName: \"kubernetes.io/projected/8b3260ac-a49c-4977-9653-b73abc7fb19b-kube-api-access-mw9j4\") pod \"8b3260ac-a49c-4977-9653-b73abc7fb19b\" (UID: \"8b3260ac-a49c-4977-9653-b73abc7fb19b\") "
Mar 10 20:08:04 crc kubenswrapper[4861]: I0310 20:08:04.295191 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8b3260ac-a49c-4977-9653-b73abc7fb19b-catalog-content\") pod \"8b3260ac-a49c-4977-9653-b73abc7fb19b\" (UID: \"8b3260ac-a49c-4977-9653-b73abc7fb19b\") "
Mar 10 20:08:04 crc kubenswrapper[4861]: I0310 20:08:04.295239 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8b3260ac-a49c-4977-9653-b73abc7fb19b-utilities\") pod \"8b3260ac-a49c-4977-9653-b73abc7fb19b\" (UID: \"8b3260ac-a49c-4977-9653-b73abc7fb19b\") "
Mar 10 20:08:04 crc kubenswrapper[4861]: I0310 20:08:04.296560 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8b3260ac-a49c-4977-9653-b73abc7fb19b-utilities" (OuterVolumeSpecName: "utilities") pod "8b3260ac-a49c-4977-9653-b73abc7fb19b" (UID: "8b3260ac-a49c-4977-9653-b73abc7fb19b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 10 20:08:04 crc kubenswrapper[4861]: I0310 20:08:04.301965 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8b3260ac-a49c-4977-9653-b73abc7fb19b-kube-api-access-mw9j4" (OuterVolumeSpecName: "kube-api-access-mw9j4") pod "8b3260ac-a49c-4977-9653-b73abc7fb19b" (UID: "8b3260ac-a49c-4977-9653-b73abc7fb19b"). InnerVolumeSpecName "kube-api-access-mw9j4". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 10 20:08:04 crc kubenswrapper[4861]: I0310 20:08:04.396742 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mw9j4\" (UniqueName: \"kubernetes.io/projected/8b3260ac-a49c-4977-9653-b73abc7fb19b-kube-api-access-mw9j4\") on node \"crc\" DevicePath \"\""
Mar 10 20:08:04 crc kubenswrapper[4861]: I0310 20:08:04.397003 4861 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8b3260ac-a49c-4977-9653-b73abc7fb19b-utilities\") on node \"crc\" DevicePath \"\""
Mar 10 20:08:04 crc kubenswrapper[4861]: I0310 20:08:04.419382 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8b3260ac-a49c-4977-9653-b73abc7fb19b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8b3260ac-a49c-4977-9653-b73abc7fb19b" (UID: "8b3260ac-a49c-4977-9653-b73abc7fb19b"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 10 20:08:04 crc kubenswrapper[4861]: I0310 20:08:04.498803 4861 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8b3260ac-a49c-4977-9653-b73abc7fb19b-catalog-content\") on node \"crc\" DevicePath \"\""
Mar 10 20:08:04 crc kubenswrapper[4861]: I0310 20:08:04.673432 4861 generic.go:334] "Generic (PLEG): container finished" podID="8b3260ac-a49c-4977-9653-b73abc7fb19b" containerID="55a22b2bcabe40637cbcefac714cbd8a324afe79cc199a1d2b83b8d010ec50b0" exitCode=0
Mar 10 20:08:04 crc kubenswrapper[4861]: I0310 20:08:04.673513 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vbmcl" event={"ID":"8b3260ac-a49c-4977-9653-b73abc7fb19b","Type":"ContainerDied","Data":"55a22b2bcabe40637cbcefac714cbd8a324afe79cc199a1d2b83b8d010ec50b0"}
Mar 10 20:08:04 crc kubenswrapper[4861]: I0310 20:08:04.673577 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-vbmcl"
Mar 10 20:08:04 crc kubenswrapper[4861]: I0310 20:08:04.673596 4861 scope.go:117] "RemoveContainer" containerID="55a22b2bcabe40637cbcefac714cbd8a324afe79cc199a1d2b83b8d010ec50b0"
Mar 10 20:08:04 crc kubenswrapper[4861]: I0310 20:08:04.673582 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vbmcl" event={"ID":"8b3260ac-a49c-4977-9653-b73abc7fb19b","Type":"ContainerDied","Data":"c9af3e146d3f7dba80e3bb581f47b1ceba3013e995ef0068e9f58622a032d4e4"}
Mar 10 20:08:04 crc kubenswrapper[4861]: I0310 20:08:04.704339 4861 scope.go:117] "RemoveContainer" containerID="1e98d1ae3b6efc3b15ed739ffd6991438347e779a31938108297842f91d55333"
Mar 10 20:08:04 crc kubenswrapper[4861]: I0310 20:08:04.723090 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-vbmcl"]
Mar 10 20:08:04 crc kubenswrapper[4861]: I0310 20:08:04.743502 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-vbmcl"]
Mar 10 20:08:04 crc kubenswrapper[4861]: I0310 20:08:04.745131 4861 scope.go:117] "RemoveContainer" containerID="b0523011832ab5f32240e9cab7187d64c263d90c57179f8dd339a9d93bf1757b"
Mar 10 20:08:04 crc kubenswrapper[4861]: I0310 20:08:04.769002 4861 scope.go:117] "RemoveContainer" containerID="55a22b2bcabe40637cbcefac714cbd8a324afe79cc199a1d2b83b8d010ec50b0"
Mar 10 20:08:04 crc kubenswrapper[4861]: E0310 20:08:04.769481 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"55a22b2bcabe40637cbcefac714cbd8a324afe79cc199a1d2b83b8d010ec50b0\": container with ID starting with 55a22b2bcabe40637cbcefac714cbd8a324afe79cc199a1d2b83b8d010ec50b0 not found: ID does not exist" containerID="55a22b2bcabe40637cbcefac714cbd8a324afe79cc199a1d2b83b8d010ec50b0"
Mar 10 20:08:04 crc kubenswrapper[4861]: I0310 20:08:04.769532 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"55a22b2bcabe40637cbcefac714cbd8a324afe79cc199a1d2b83b8d010ec50b0"} err="failed to get container status \"55a22b2bcabe40637cbcefac714cbd8a324afe79cc199a1d2b83b8d010ec50b0\": rpc error: code = NotFound desc = could not find container \"55a22b2bcabe40637cbcefac714cbd8a324afe79cc199a1d2b83b8d010ec50b0\": container with ID starting with 55a22b2bcabe40637cbcefac714cbd8a324afe79cc199a1d2b83b8d010ec50b0 not found: ID does not exist"
Mar 10 20:08:04 crc kubenswrapper[4861]: I0310 20:08:04.769571 4861 scope.go:117] "RemoveContainer" containerID="1e98d1ae3b6efc3b15ed739ffd6991438347e779a31938108297842f91d55333"
Mar 10 20:08:04 crc kubenswrapper[4861]: E0310 20:08:04.769929 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1e98d1ae3b6efc3b15ed739ffd6991438347e779a31938108297842f91d55333\": container with ID starting with 1e98d1ae3b6efc3b15ed739ffd6991438347e779a31938108297842f91d55333 not found: ID does not exist" containerID="1e98d1ae3b6efc3b15ed739ffd6991438347e779a31938108297842f91d55333"
Mar 10 20:08:04 crc kubenswrapper[4861]: I0310 20:08:04.769985 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1e98d1ae3b6efc3b15ed739ffd6991438347e779a31938108297842f91d55333"} err="failed to get container status \"1e98d1ae3b6efc3b15ed739ffd6991438347e779a31938108297842f91d55333\": rpc error: code = NotFound desc = could not find container \"1e98d1ae3b6efc3b15ed739ffd6991438347e779a31938108297842f91d55333\": container with ID starting with 1e98d1ae3b6efc3b15ed739ffd6991438347e779a31938108297842f91d55333 not found: ID does not exist"
Mar 10 20:08:04 crc kubenswrapper[4861]: I0310 20:08:04.770024 4861 scope.go:117] "RemoveContainer" containerID="b0523011832ab5f32240e9cab7187d64c263d90c57179f8dd339a9d93bf1757b"
Mar 10 20:08:04 crc kubenswrapper[4861]: E0310 20:08:04.770622 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b0523011832ab5f32240e9cab7187d64c263d90c57179f8dd339a9d93bf1757b\": container with ID starting with b0523011832ab5f32240e9cab7187d64c263d90c57179f8dd339a9d93bf1757b not found: ID does not exist" containerID="b0523011832ab5f32240e9cab7187d64c263d90c57179f8dd339a9d93bf1757b"
Mar 10 20:08:04 crc kubenswrapper[4861]: I0310 20:08:04.770671 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b0523011832ab5f32240e9cab7187d64c263d90c57179f8dd339a9d93bf1757b"} err="failed to get container status \"b0523011832ab5f32240e9cab7187d64c263d90c57179f8dd339a9d93bf1757b\": rpc error: code = NotFound desc = could not find container \"b0523011832ab5f32240e9cab7187d64c263d90c57179f8dd339a9d93bf1757b\": container with ID starting with b0523011832ab5f32240e9cab7187d64c263d90c57179f8dd339a9d93bf1757b not found: ID does not exist"
Mar 10 20:08:04 crc kubenswrapper[4861]: E0310 20:08:04.785148 4861 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8b3260ac_a49c_4977_9653_b73abc7fb19b.slice\": RecentStats: unable to find data in memory cache]"
Mar 10 20:08:04 crc kubenswrapper[4861]: I0310 20:08:04.966812 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8b3260ac-a49c-4977-9653-b73abc7fb19b" path="/var/lib/kubelet/pods/8b3260ac-a49c-4977-9653-b73abc7fb19b/volumes"
Mar 10 20:08:05 crc kubenswrapper[4861]: I0310 20:08:05.008950 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29552888-dkjfg"
Mar 10 20:08:05 crc kubenswrapper[4861]: I0310 20:08:05.106151 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-662ph\" (UniqueName: \"kubernetes.io/projected/5e5fa081-b5f9-44b3-ba83-5b8401bf883e-kube-api-access-662ph\") pod \"5e5fa081-b5f9-44b3-ba83-5b8401bf883e\" (UID: \"5e5fa081-b5f9-44b3-ba83-5b8401bf883e\") "
Mar 10 20:08:05 crc kubenswrapper[4861]: I0310 20:08:05.112405 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5e5fa081-b5f9-44b3-ba83-5b8401bf883e-kube-api-access-662ph" (OuterVolumeSpecName: "kube-api-access-662ph") pod "5e5fa081-b5f9-44b3-ba83-5b8401bf883e" (UID: "5e5fa081-b5f9-44b3-ba83-5b8401bf883e"). InnerVolumeSpecName "kube-api-access-662ph". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 10 20:08:05 crc kubenswrapper[4861]: I0310 20:08:05.208604 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-662ph\" (UniqueName: \"kubernetes.io/projected/5e5fa081-b5f9-44b3-ba83-5b8401bf883e-kube-api-access-662ph\") on node \"crc\" DevicePath \"\""
Mar 10 20:08:05 crc kubenswrapper[4861]: I0310 20:08:05.691828 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29552888-dkjfg" event={"ID":"5e5fa081-b5f9-44b3-ba83-5b8401bf883e","Type":"ContainerDied","Data":"124222f5dd492b7dbb9ba441d775acb7540b6a32862d80f4b41aefe3131ff89a"}
Mar 10 20:08:05 crc kubenswrapper[4861]: I0310 20:08:05.691888 4861 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="124222f5dd492b7dbb9ba441d775acb7540b6a32862d80f4b41aefe3131ff89a"
Mar 10 20:08:05 crc kubenswrapper[4861]: I0310 20:08:05.691995 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29552888-dkjfg"
Mar 10 20:08:05 crc kubenswrapper[4861]: I0310 20:08:05.757850 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29552882-dpjpg"]
Mar 10 20:08:05 crc kubenswrapper[4861]: I0310 20:08:05.765661 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29552882-dpjpg"]
Mar 10 20:08:06 crc kubenswrapper[4861]: I0310 20:08:06.991623 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4bdef02-3a9a-495f-aab2-6b79dbef7114" path="/var/lib/kubelet/pods/f4bdef02-3a9a-495f-aab2-6b79dbef7114/volumes"
Mar 10 20:08:11 crc kubenswrapper[4861]: I0310 20:08:11.958498 4861 scope.go:117] "RemoveContainer" containerID="66fd05d7ca4c39567b3943c90d9f448b022d1eff623e3d5891bb70d66e564943"
Mar 10 20:08:11 crc kubenswrapper[4861]: E0310 20:08:11.959468 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qttbr_openshift-machine-config-operator(771189c2-452d-4204-a0b7-abfe9ba62bd0)\"" pod="openshift-machine-config-operator/machine-config-daemon-qttbr" podUID="771189c2-452d-4204-a0b7-abfe9ba62bd0"
Mar 10 20:08:22 crc kubenswrapper[4861]: I0310 20:08:22.958471 4861 scope.go:117] "RemoveContainer" containerID="66fd05d7ca4c39567b3943c90d9f448b022d1eff623e3d5891bb70d66e564943"
Mar 10 20:08:22 crc kubenswrapper[4861]: E0310 20:08:22.959540 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qttbr_openshift-machine-config-operator(771189c2-452d-4204-a0b7-abfe9ba62bd0)\"" pod="openshift-machine-config-operator/machine-config-daemon-qttbr" podUID="771189c2-452d-4204-a0b7-abfe9ba62bd0"
Mar 10 20:08:27 crc kubenswrapper[4861]: I0310 20:08:27.666678 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-n52z5"]
Mar 10 20:08:27 crc kubenswrapper[4861]: E0310 20:08:27.667430 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8b3260ac-a49c-4977-9653-b73abc7fb19b" containerName="extract-utilities"
Mar 10 20:08:27 crc kubenswrapper[4861]: I0310 20:08:27.667443 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="8b3260ac-a49c-4977-9653-b73abc7fb19b" containerName="extract-utilities"
Mar 10 20:08:27 crc kubenswrapper[4861]: E0310 20:08:27.667451 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5e5fa081-b5f9-44b3-ba83-5b8401bf883e" containerName="oc"
Mar 10 20:08:27 crc kubenswrapper[4861]: I0310 20:08:27.667457 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="5e5fa081-b5f9-44b3-ba83-5b8401bf883e" containerName="oc"
Mar 10 20:08:27 crc kubenswrapper[4861]: E0310 20:08:27.667475 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8b3260ac-a49c-4977-9653-b73abc7fb19b" containerName="extract-content"
Mar 10 20:08:27 crc kubenswrapper[4861]: I0310 20:08:27.667482 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="8b3260ac-a49c-4977-9653-b73abc7fb19b" containerName="extract-content"
Mar 10 20:08:27 crc kubenswrapper[4861]: E0310 20:08:27.667499 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8b3260ac-a49c-4977-9653-b73abc7fb19b" containerName="registry-server"
Mar 10 20:08:27 crc kubenswrapper[4861]: I0310 20:08:27.667505 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="8b3260ac-a49c-4977-9653-b73abc7fb19b" containerName="registry-server"
Mar 10 20:08:27 crc kubenswrapper[4861]: I0310 20:08:27.667620 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="8b3260ac-a49c-4977-9653-b73abc7fb19b" containerName="registry-server"
Mar 10 20:08:27 crc kubenswrapper[4861]: I0310 20:08:27.667640 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="5e5fa081-b5f9-44b3-ba83-5b8401bf883e" containerName="oc"
Mar 10 20:08:27 crc kubenswrapper[4861]: I0310 20:08:27.668530 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-n52z5"
Mar 10 20:08:27 crc kubenswrapper[4861]: I0310 20:08:27.690108 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-n52z5"]
Mar 10 20:08:27 crc kubenswrapper[4861]: I0310 20:08:27.780725 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c6cb2ca1-2657-4078-86c0-ed3af7559bc7-catalog-content\") pod \"redhat-marketplace-n52z5\" (UID: \"c6cb2ca1-2657-4078-86c0-ed3af7559bc7\") " pod="openshift-marketplace/redhat-marketplace-n52z5"
Mar 10 20:08:27 crc kubenswrapper[4861]: I0310 20:08:27.781163 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bhrql\" (UniqueName: \"kubernetes.io/projected/c6cb2ca1-2657-4078-86c0-ed3af7559bc7-kube-api-access-bhrql\") pod \"redhat-marketplace-n52z5\" (UID: \"c6cb2ca1-2657-4078-86c0-ed3af7559bc7\") " pod="openshift-marketplace/redhat-marketplace-n52z5"
Mar 10 20:08:27 crc kubenswrapper[4861]: I0310 20:08:27.781369 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c6cb2ca1-2657-4078-86c0-ed3af7559bc7-utilities\") pod \"redhat-marketplace-n52z5\" (UID: \"c6cb2ca1-2657-4078-86c0-ed3af7559bc7\") " pod="openshift-marketplace/redhat-marketplace-n52z5"
Mar 10 20:08:27 crc kubenswrapper[4861]: I0310 20:08:27.882761 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c6cb2ca1-2657-4078-86c0-ed3af7559bc7-utilities\") pod \"redhat-marketplace-n52z5\" (UID: \"c6cb2ca1-2657-4078-86c0-ed3af7559bc7\") " pod="openshift-marketplace/redhat-marketplace-n52z5"
Mar 10 20:08:27 crc kubenswrapper[4861]: I0310 20:08:27.882886 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c6cb2ca1-2657-4078-86c0-ed3af7559bc7-catalog-content\") pod \"redhat-marketplace-n52z5\" (UID: \"c6cb2ca1-2657-4078-86c0-ed3af7559bc7\") " pod="openshift-marketplace/redhat-marketplace-n52z5"
Mar 10 20:08:27 crc kubenswrapper[4861]: I0310 20:08:27.883001 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bhrql\" (UniqueName: \"kubernetes.io/projected/c6cb2ca1-2657-4078-86c0-ed3af7559bc7-kube-api-access-bhrql\") pod \"redhat-marketplace-n52z5\" (UID: \"c6cb2ca1-2657-4078-86c0-ed3af7559bc7\") " pod="openshift-marketplace/redhat-marketplace-n52z5"
Mar 10 20:08:27 crc kubenswrapper[4861]: I0310 20:08:27.883793 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c6cb2ca1-2657-4078-86c0-ed3af7559bc7-utilities\") pod \"redhat-marketplace-n52z5\" (UID: \"c6cb2ca1-2657-4078-86c0-ed3af7559bc7\") " pod="openshift-marketplace/redhat-marketplace-n52z5"
Mar 10 20:08:27 crc kubenswrapper[4861]: I0310 20:08:27.884214 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c6cb2ca1-2657-4078-86c0-ed3af7559bc7-catalog-content\") pod \"redhat-marketplace-n52z5\" (UID: \"c6cb2ca1-2657-4078-86c0-ed3af7559bc7\") " pod="openshift-marketplace/redhat-marketplace-n52z5"
Mar 10 20:08:27 crc kubenswrapper[4861]: I0310 20:08:27.920351 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bhrql\" (UniqueName: \"kubernetes.io/projected/c6cb2ca1-2657-4078-86c0-ed3af7559bc7-kube-api-access-bhrql\") pod \"redhat-marketplace-n52z5\" (UID: \"c6cb2ca1-2657-4078-86c0-ed3af7559bc7\") " pod="openshift-marketplace/redhat-marketplace-n52z5"
Mar 10 20:08:28 crc kubenswrapper[4861]: I0310 20:08:28.036044 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-n52z5"
Mar 10 20:08:28 crc kubenswrapper[4861]: I0310 20:08:28.533359 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-n52z5"]
Mar 10 20:08:28 crc kubenswrapper[4861]: I0310 20:08:28.920677 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-n52z5" event={"ID":"c6cb2ca1-2657-4078-86c0-ed3af7559bc7","Type":"ContainerStarted","Data":"6b71d73bd10d4babef7c2d70f8698cd68da641f5d1f842605982c0709834019a"}
Mar 10 20:08:29 crc kubenswrapper[4861]: I0310 20:08:29.934273 4861 generic.go:334] "Generic (PLEG): container finished" podID="c6cb2ca1-2657-4078-86c0-ed3af7559bc7" containerID="3dd0341c3b8603432dc14e3f319740e0bc4f0bfe38b1c95842e19b7fd5e4a3a1" exitCode=0
Mar 10 20:08:29 crc kubenswrapper[4861]: I0310 20:08:29.934424 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-n52z5" event={"ID":"c6cb2ca1-2657-4078-86c0-ed3af7559bc7","Type":"ContainerDied","Data":"3dd0341c3b8603432dc14e3f319740e0bc4f0bfe38b1c95842e19b7fd5e4a3a1"}
Mar 10 20:08:31 crc kubenswrapper[4861]: I0310 20:08:31.957506 4861 generic.go:334] "Generic (PLEG): container finished" podID="c6cb2ca1-2657-4078-86c0-ed3af7559bc7" containerID="a29310602609244f991af646fb34d1572020d43cec3a94b18ebcd13ee0a23ff3" exitCode=0
Mar 10 20:08:31 crc kubenswrapper[4861]: I0310 20:08:31.957573 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-n52z5" event={"ID":"c6cb2ca1-2657-4078-86c0-ed3af7559bc7","Type":"ContainerDied","Data":"a29310602609244f991af646fb34d1572020d43cec3a94b18ebcd13ee0a23ff3"}
Mar 10 20:08:32 crc kubenswrapper[4861]: I0310 20:08:32.971997 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-n52z5" event={"ID":"c6cb2ca1-2657-4078-86c0-ed3af7559bc7","Type":"ContainerStarted","Data":"4c5086c4ffbc417977d45ce6378cc1085099fd019ba98abca1c580acd252175e"}
Mar 10 20:08:33 crc kubenswrapper[4861]: I0310 20:08:33.023655 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-n52z5" podStartSLOduration=3.568663038 podStartE2EDuration="6.023632475s" podCreationTimestamp="2026-03-10 20:08:27 +0000 UTC" firstStartedPulling="2026-03-10 20:08:29.937601796 +0000 UTC m=+4853.701037796" lastFinishedPulling="2026-03-10 20:08:32.392571283 +0000 UTC m=+4856.156007233" observedRunningTime="2026-03-10 20:08:33.003343924 +0000 UTC m=+4856.766779944" watchObservedRunningTime="2026-03-10 20:08:33.023632475 +0000 UTC m=+4856.787068435"
Mar 10 20:08:34 crc kubenswrapper[4861]: I0310 20:08:34.958647 4861 scope.go:117] "RemoveContainer" containerID="66fd05d7ca4c39567b3943c90d9f448b022d1eff623e3d5891bb70d66e564943"
Mar 10 20:08:34 crc kubenswrapper[4861]: E0310 20:08:34.959122 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qttbr_openshift-machine-config-operator(771189c2-452d-4204-a0b7-abfe9ba62bd0)\"" pod="openshift-machine-config-operator/machine-config-daemon-qttbr" podUID="771189c2-452d-4204-a0b7-abfe9ba62bd0"
Mar 10 20:08:38 crc kubenswrapper[4861]: I0310 20:08:38.036519 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-n52z5"
Mar 10 20:08:38 crc kubenswrapper[4861]: I0310 20:08:38.037113 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-n52z5"
Mar 10 20:08:38 crc kubenswrapper[4861]: I0310 20:08:38.146755 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-n52z5"
Mar 10 20:08:39 crc kubenswrapper[4861]: I0310 20:08:39.101584 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-n52z5"
Mar 10 20:08:39 crc kubenswrapper[4861]: I0310 20:08:39.171049 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-n52z5"]
Mar 10 20:08:41 crc kubenswrapper[4861]: I0310 20:08:41.044105 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-n52z5" podUID="c6cb2ca1-2657-4078-86c0-ed3af7559bc7" containerName="registry-server" containerID="cri-o://4c5086c4ffbc417977d45ce6378cc1085099fd019ba98abca1c580acd252175e" gracePeriod=2
Mar 10 20:08:41 crc kubenswrapper[4861]: I0310 20:08:41.801109 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-n52z5"
Mar 10 20:08:41 crc kubenswrapper[4861]: I0310 20:08:41.814426 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c6cb2ca1-2657-4078-86c0-ed3af7559bc7-utilities\") pod \"c6cb2ca1-2657-4078-86c0-ed3af7559bc7\" (UID: \"c6cb2ca1-2657-4078-86c0-ed3af7559bc7\") "
Mar 10 20:08:41 crc kubenswrapper[4861]: I0310 20:08:41.814850 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c6cb2ca1-2657-4078-86c0-ed3af7559bc7-catalog-content\") pod \"c6cb2ca1-2657-4078-86c0-ed3af7559bc7\" (UID: \"c6cb2ca1-2657-4078-86c0-ed3af7559bc7\") "
Mar 10 20:08:41 crc kubenswrapper[4861]: I0310 20:08:41.814894 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bhrql\" (UniqueName: \"kubernetes.io/projected/c6cb2ca1-2657-4078-86c0-ed3af7559bc7-kube-api-access-bhrql\") pod \"c6cb2ca1-2657-4078-86c0-ed3af7559bc7\" (UID: \"c6cb2ca1-2657-4078-86c0-ed3af7559bc7\") "
Mar 10 20:08:41 crc kubenswrapper[4861]: I0310 20:08:41.817055 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c6cb2ca1-2657-4078-86c0-ed3af7559bc7-utilities" (OuterVolumeSpecName: "utilities") pod "c6cb2ca1-2657-4078-86c0-ed3af7559bc7" (UID: "c6cb2ca1-2657-4078-86c0-ed3af7559bc7"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 10 20:08:41 crc kubenswrapper[4861]: I0310 20:08:41.831154 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c6cb2ca1-2657-4078-86c0-ed3af7559bc7-kube-api-access-bhrql" (OuterVolumeSpecName: "kube-api-access-bhrql") pod "c6cb2ca1-2657-4078-86c0-ed3af7559bc7" (UID: "c6cb2ca1-2657-4078-86c0-ed3af7559bc7"). InnerVolumeSpecName "kube-api-access-bhrql". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 10 20:08:41 crc kubenswrapper[4861]: I0310 20:08:41.874041 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c6cb2ca1-2657-4078-86c0-ed3af7559bc7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c6cb2ca1-2657-4078-86c0-ed3af7559bc7" (UID: "c6cb2ca1-2657-4078-86c0-ed3af7559bc7"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 10 20:08:41 crc kubenswrapper[4861]: I0310 20:08:41.918146 4861 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c6cb2ca1-2657-4078-86c0-ed3af7559bc7-catalog-content\") on node \"crc\" DevicePath \"\""
Mar 10 20:08:41 crc kubenswrapper[4861]: I0310 20:08:41.918769 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bhrql\" (UniqueName: \"kubernetes.io/projected/c6cb2ca1-2657-4078-86c0-ed3af7559bc7-kube-api-access-bhrql\") on node \"crc\" DevicePath \"\""
Mar 10 20:08:41 crc kubenswrapper[4861]: I0310 20:08:41.918788 4861 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c6cb2ca1-2657-4078-86c0-ed3af7559bc7-utilities\") on node \"crc\" DevicePath \"\""
Mar 10 20:08:42 crc kubenswrapper[4861]: I0310 20:08:42.058657 4861 generic.go:334] "Generic (PLEG): container finished" podID="c6cb2ca1-2657-4078-86c0-ed3af7559bc7" containerID="4c5086c4ffbc417977d45ce6378cc1085099fd019ba98abca1c580acd252175e" exitCode=0
Mar 10 20:08:42 crc kubenswrapper[4861]: I0310 20:08:42.058734 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-n52z5" event={"ID":"c6cb2ca1-2657-4078-86c0-ed3af7559bc7","Type":"ContainerDied","Data":"4c5086c4ffbc417977d45ce6378cc1085099fd019ba98abca1c580acd252175e"}
Mar 10 20:08:42 crc kubenswrapper[4861]: I0310 20:08:42.060916 4861 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/redhat-marketplace-n52z5" event={"ID":"c6cb2ca1-2657-4078-86c0-ed3af7559bc7","Type":"ContainerDied","Data":"6b71d73bd10d4babef7c2d70f8698cd68da641f5d1f842605982c0709834019a"} Mar 10 20:08:42 crc kubenswrapper[4861]: I0310 20:08:42.058932 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-n52z5" Mar 10 20:08:42 crc kubenswrapper[4861]: I0310 20:08:42.061000 4861 scope.go:117] "RemoveContainer" containerID="4c5086c4ffbc417977d45ce6378cc1085099fd019ba98abca1c580acd252175e" Mar 10 20:08:42 crc kubenswrapper[4861]: I0310 20:08:42.111285 4861 scope.go:117] "RemoveContainer" containerID="a29310602609244f991af646fb34d1572020d43cec3a94b18ebcd13ee0a23ff3" Mar 10 20:08:42 crc kubenswrapper[4861]: I0310 20:08:42.123054 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-n52z5"] Mar 10 20:08:42 crc kubenswrapper[4861]: I0310 20:08:42.132596 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-n52z5"] Mar 10 20:08:42 crc kubenswrapper[4861]: I0310 20:08:42.142393 4861 scope.go:117] "RemoveContainer" containerID="3dd0341c3b8603432dc14e3f319740e0bc4f0bfe38b1c95842e19b7fd5e4a3a1" Mar 10 20:08:42 crc kubenswrapper[4861]: I0310 20:08:42.175781 4861 scope.go:117] "RemoveContainer" containerID="4c5086c4ffbc417977d45ce6378cc1085099fd019ba98abca1c580acd252175e" Mar 10 20:08:42 crc kubenswrapper[4861]: E0310 20:08:42.176462 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4c5086c4ffbc417977d45ce6378cc1085099fd019ba98abca1c580acd252175e\": container with ID starting with 4c5086c4ffbc417977d45ce6378cc1085099fd019ba98abca1c580acd252175e not found: ID does not exist" containerID="4c5086c4ffbc417977d45ce6378cc1085099fd019ba98abca1c580acd252175e" Mar 10 20:08:42 crc kubenswrapper[4861]: I0310 20:08:42.176556 4861 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4c5086c4ffbc417977d45ce6378cc1085099fd019ba98abca1c580acd252175e"} err="failed to get container status \"4c5086c4ffbc417977d45ce6378cc1085099fd019ba98abca1c580acd252175e\": rpc error: code = NotFound desc = could not find container \"4c5086c4ffbc417977d45ce6378cc1085099fd019ba98abca1c580acd252175e\": container with ID starting with 4c5086c4ffbc417977d45ce6378cc1085099fd019ba98abca1c580acd252175e not found: ID does not exist" Mar 10 20:08:42 crc kubenswrapper[4861]: I0310 20:08:42.176612 4861 scope.go:117] "RemoveContainer" containerID="a29310602609244f991af646fb34d1572020d43cec3a94b18ebcd13ee0a23ff3" Mar 10 20:08:42 crc kubenswrapper[4861]: E0310 20:08:42.177147 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a29310602609244f991af646fb34d1572020d43cec3a94b18ebcd13ee0a23ff3\": container with ID starting with a29310602609244f991af646fb34d1572020d43cec3a94b18ebcd13ee0a23ff3 not found: ID does not exist" containerID="a29310602609244f991af646fb34d1572020d43cec3a94b18ebcd13ee0a23ff3" Mar 10 20:08:42 crc kubenswrapper[4861]: I0310 20:08:42.177213 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a29310602609244f991af646fb34d1572020d43cec3a94b18ebcd13ee0a23ff3"} err="failed to get container status \"a29310602609244f991af646fb34d1572020d43cec3a94b18ebcd13ee0a23ff3\": rpc error: code = NotFound desc = could not find container \"a29310602609244f991af646fb34d1572020d43cec3a94b18ebcd13ee0a23ff3\": container with ID starting with a29310602609244f991af646fb34d1572020d43cec3a94b18ebcd13ee0a23ff3 not found: ID does not exist" Mar 10 20:08:42 crc kubenswrapper[4861]: I0310 20:08:42.177256 4861 scope.go:117] "RemoveContainer" containerID="3dd0341c3b8603432dc14e3f319740e0bc4f0bfe38b1c95842e19b7fd5e4a3a1" Mar 10 20:08:42 crc kubenswrapper[4861]: E0310 
20:08:42.178089 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3dd0341c3b8603432dc14e3f319740e0bc4f0bfe38b1c95842e19b7fd5e4a3a1\": container with ID starting with 3dd0341c3b8603432dc14e3f319740e0bc4f0bfe38b1c95842e19b7fd5e4a3a1 not found: ID does not exist" containerID="3dd0341c3b8603432dc14e3f319740e0bc4f0bfe38b1c95842e19b7fd5e4a3a1" Mar 10 20:08:42 crc kubenswrapper[4861]: I0310 20:08:42.178156 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3dd0341c3b8603432dc14e3f319740e0bc4f0bfe38b1c95842e19b7fd5e4a3a1"} err="failed to get container status \"3dd0341c3b8603432dc14e3f319740e0bc4f0bfe38b1c95842e19b7fd5e4a3a1\": rpc error: code = NotFound desc = could not find container \"3dd0341c3b8603432dc14e3f319740e0bc4f0bfe38b1c95842e19b7fd5e4a3a1\": container with ID starting with 3dd0341c3b8603432dc14e3f319740e0bc4f0bfe38b1c95842e19b7fd5e4a3a1 not found: ID does not exist" Mar 10 20:08:42 crc kubenswrapper[4861]: I0310 20:08:42.977372 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c6cb2ca1-2657-4078-86c0-ed3af7559bc7" path="/var/lib/kubelet/pods/c6cb2ca1-2657-4078-86c0-ed3af7559bc7/volumes" Mar 10 20:08:46 crc kubenswrapper[4861]: I0310 20:08:46.963605 4861 scope.go:117] "RemoveContainer" containerID="66fd05d7ca4c39567b3943c90d9f448b022d1eff623e3d5891bb70d66e564943" Mar 10 20:08:46 crc kubenswrapper[4861]: E0310 20:08:46.964401 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qttbr_openshift-machine-config-operator(771189c2-452d-4204-a0b7-abfe9ba62bd0)\"" pod="openshift-machine-config-operator/machine-config-daemon-qttbr" podUID="771189c2-452d-4204-a0b7-abfe9ba62bd0" Mar 10 20:08:56 crc kubenswrapper[4861]: I0310 20:08:56.849062 
4861 scope.go:117] "RemoveContainer" containerID="cb0cae0ad1ffb5c10b478127b094343fc95d478829deebefcdf21840be54bf67" Mar 10 20:08:57 crc kubenswrapper[4861]: I0310 20:08:57.958602 4861 scope.go:117] "RemoveContainer" containerID="66fd05d7ca4c39567b3943c90d9f448b022d1eff623e3d5891bb70d66e564943" Mar 10 20:08:57 crc kubenswrapper[4861]: E0310 20:08:57.959543 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qttbr_openshift-machine-config-operator(771189c2-452d-4204-a0b7-abfe9ba62bd0)\"" pod="openshift-machine-config-operator/machine-config-daemon-qttbr" podUID="771189c2-452d-4204-a0b7-abfe9ba62bd0" Mar 10 20:09:09 crc kubenswrapper[4861]: I0310 20:09:09.958093 4861 scope.go:117] "RemoveContainer" containerID="66fd05d7ca4c39567b3943c90d9f448b022d1eff623e3d5891bb70d66e564943" Mar 10 20:09:09 crc kubenswrapper[4861]: E0310 20:09:09.958905 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qttbr_openshift-machine-config-operator(771189c2-452d-4204-a0b7-abfe9ba62bd0)\"" pod="openshift-machine-config-operator/machine-config-daemon-qttbr" podUID="771189c2-452d-4204-a0b7-abfe9ba62bd0" Mar 10 20:09:21 crc kubenswrapper[4861]: I0310 20:09:21.602018 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-mvljl"] Mar 10 20:09:21 crc kubenswrapper[4861]: E0310 20:09:21.603092 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c6cb2ca1-2657-4078-86c0-ed3af7559bc7" containerName="registry-server" Mar 10 20:09:21 crc kubenswrapper[4861]: I0310 20:09:21.603115 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="c6cb2ca1-2657-4078-86c0-ed3af7559bc7" 
containerName="registry-server" Mar 10 20:09:21 crc kubenswrapper[4861]: E0310 20:09:21.603160 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c6cb2ca1-2657-4078-86c0-ed3af7559bc7" containerName="extract-content" Mar 10 20:09:21 crc kubenswrapper[4861]: I0310 20:09:21.603175 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="c6cb2ca1-2657-4078-86c0-ed3af7559bc7" containerName="extract-content" Mar 10 20:09:21 crc kubenswrapper[4861]: E0310 20:09:21.603194 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c6cb2ca1-2657-4078-86c0-ed3af7559bc7" containerName="extract-utilities" Mar 10 20:09:21 crc kubenswrapper[4861]: I0310 20:09:21.603207 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="c6cb2ca1-2657-4078-86c0-ed3af7559bc7" containerName="extract-utilities" Mar 10 20:09:21 crc kubenswrapper[4861]: I0310 20:09:21.603508 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="c6cb2ca1-2657-4078-86c0-ed3af7559bc7" containerName="registry-server" Mar 10 20:09:21 crc kubenswrapper[4861]: I0310 20:09:21.605195 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-mvljl" Mar 10 20:09:21 crc kubenswrapper[4861]: I0310 20:09:21.626611 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-mvljl"] Mar 10 20:09:21 crc kubenswrapper[4861]: I0310 20:09:21.670635 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kb4cw\" (UniqueName: \"kubernetes.io/projected/a9bb1611-e3c4-489e-94ab-a0206ba457a8-kube-api-access-kb4cw\") pod \"redhat-operators-mvljl\" (UID: \"a9bb1611-e3c4-489e-94ab-a0206ba457a8\") " pod="openshift-marketplace/redhat-operators-mvljl" Mar 10 20:09:21 crc kubenswrapper[4861]: I0310 20:09:21.670765 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a9bb1611-e3c4-489e-94ab-a0206ba457a8-utilities\") pod \"redhat-operators-mvljl\" (UID: \"a9bb1611-e3c4-489e-94ab-a0206ba457a8\") " pod="openshift-marketplace/redhat-operators-mvljl" Mar 10 20:09:21 crc kubenswrapper[4861]: I0310 20:09:21.670822 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a9bb1611-e3c4-489e-94ab-a0206ba457a8-catalog-content\") pod \"redhat-operators-mvljl\" (UID: \"a9bb1611-e3c4-489e-94ab-a0206ba457a8\") " pod="openshift-marketplace/redhat-operators-mvljl" Mar 10 20:09:21 crc kubenswrapper[4861]: I0310 20:09:21.772876 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kb4cw\" (UniqueName: \"kubernetes.io/projected/a9bb1611-e3c4-489e-94ab-a0206ba457a8-kube-api-access-kb4cw\") pod \"redhat-operators-mvljl\" (UID: \"a9bb1611-e3c4-489e-94ab-a0206ba457a8\") " pod="openshift-marketplace/redhat-operators-mvljl" Mar 10 20:09:21 crc kubenswrapper[4861]: I0310 20:09:21.772992 4861 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a9bb1611-e3c4-489e-94ab-a0206ba457a8-utilities\") pod \"redhat-operators-mvljl\" (UID: \"a9bb1611-e3c4-489e-94ab-a0206ba457a8\") " pod="openshift-marketplace/redhat-operators-mvljl" Mar 10 20:09:21 crc kubenswrapper[4861]: I0310 20:09:21.773025 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a9bb1611-e3c4-489e-94ab-a0206ba457a8-catalog-content\") pod \"redhat-operators-mvljl\" (UID: \"a9bb1611-e3c4-489e-94ab-a0206ba457a8\") " pod="openshift-marketplace/redhat-operators-mvljl" Mar 10 20:09:21 crc kubenswrapper[4861]: I0310 20:09:21.773636 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a9bb1611-e3c4-489e-94ab-a0206ba457a8-catalog-content\") pod \"redhat-operators-mvljl\" (UID: \"a9bb1611-e3c4-489e-94ab-a0206ba457a8\") " pod="openshift-marketplace/redhat-operators-mvljl" Mar 10 20:09:21 crc kubenswrapper[4861]: I0310 20:09:21.773779 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a9bb1611-e3c4-489e-94ab-a0206ba457a8-utilities\") pod \"redhat-operators-mvljl\" (UID: \"a9bb1611-e3c4-489e-94ab-a0206ba457a8\") " pod="openshift-marketplace/redhat-operators-mvljl" Mar 10 20:09:21 crc kubenswrapper[4861]: I0310 20:09:21.797477 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kb4cw\" (UniqueName: \"kubernetes.io/projected/a9bb1611-e3c4-489e-94ab-a0206ba457a8-kube-api-access-kb4cw\") pod \"redhat-operators-mvljl\" (UID: \"a9bb1611-e3c4-489e-94ab-a0206ba457a8\") " pod="openshift-marketplace/redhat-operators-mvljl" Mar 10 20:09:21 crc kubenswrapper[4861]: I0310 20:09:21.935371 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-mvljl" Mar 10 20:09:22 crc kubenswrapper[4861]: I0310 20:09:22.367155 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-mvljl"] Mar 10 20:09:22 crc kubenswrapper[4861]: I0310 20:09:22.428291 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mvljl" event={"ID":"a9bb1611-e3c4-489e-94ab-a0206ba457a8","Type":"ContainerStarted","Data":"7658f03b89a5b18d00e7873e22e841e7e6d33965fccee3ba929988aeb54d0e5a"} Mar 10 20:09:22 crc kubenswrapper[4861]: I0310 20:09:22.958757 4861 scope.go:117] "RemoveContainer" containerID="66fd05d7ca4c39567b3943c90d9f448b022d1eff623e3d5891bb70d66e564943" Mar 10 20:09:22 crc kubenswrapper[4861]: E0310 20:09:22.959285 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qttbr_openshift-machine-config-operator(771189c2-452d-4204-a0b7-abfe9ba62bd0)\"" pod="openshift-machine-config-operator/machine-config-daemon-qttbr" podUID="771189c2-452d-4204-a0b7-abfe9ba62bd0" Mar 10 20:09:23 crc kubenswrapper[4861]: I0310 20:09:23.436924 4861 generic.go:334] "Generic (PLEG): container finished" podID="a9bb1611-e3c4-489e-94ab-a0206ba457a8" containerID="f7711e6ee4de35bdc23ed2c4490ca6e0a57647be030ab88ba2159877f4cceb1c" exitCode=0 Mar 10 20:09:23 crc kubenswrapper[4861]: I0310 20:09:23.436976 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mvljl" event={"ID":"a9bb1611-e3c4-489e-94ab-a0206ba457a8","Type":"ContainerDied","Data":"f7711e6ee4de35bdc23ed2c4490ca6e0a57647be030ab88ba2159877f4cceb1c"} Mar 10 20:09:25 crc kubenswrapper[4861]: I0310 20:09:25.462644 4861 generic.go:334] "Generic (PLEG): container finished" podID="a9bb1611-e3c4-489e-94ab-a0206ba457a8" 
containerID="281aac409e51015f77c9e918c42980ca3e2c8ba07db77723d672e570a89c8d14" exitCode=0 Mar 10 20:09:25 crc kubenswrapper[4861]: I0310 20:09:25.462780 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mvljl" event={"ID":"a9bb1611-e3c4-489e-94ab-a0206ba457a8","Type":"ContainerDied","Data":"281aac409e51015f77c9e918c42980ca3e2c8ba07db77723d672e570a89c8d14"} Mar 10 20:09:26 crc kubenswrapper[4861]: I0310 20:09:26.472134 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mvljl" event={"ID":"a9bb1611-e3c4-489e-94ab-a0206ba457a8","Type":"ContainerStarted","Data":"40a16fe532ae68931eb9954d52ae86e2ee322fd237e103cbdb4ea17c249c0c10"} Mar 10 20:09:26 crc kubenswrapper[4861]: I0310 20:09:26.497727 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-mvljl" podStartSLOduration=2.8895185310000002 podStartE2EDuration="5.497677615s" podCreationTimestamp="2026-03-10 20:09:21 +0000 UTC" firstStartedPulling="2026-03-10 20:09:23.438530078 +0000 UTC m=+4907.201966078" lastFinishedPulling="2026-03-10 20:09:26.046689172 +0000 UTC m=+4909.810125162" observedRunningTime="2026-03-10 20:09:26.493302186 +0000 UTC m=+4910.256738146" watchObservedRunningTime="2026-03-10 20:09:26.497677615 +0000 UTC m=+4910.261113605" Mar 10 20:09:31 crc kubenswrapper[4861]: I0310 20:09:31.936424 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-mvljl" Mar 10 20:09:31 crc kubenswrapper[4861]: I0310 20:09:31.936499 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-mvljl" Mar 10 20:09:33 crc kubenswrapper[4861]: I0310 20:09:33.012360 4861 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-mvljl" podUID="a9bb1611-e3c4-489e-94ab-a0206ba457a8" containerName="registry-server" 
probeResult="failure" output=< Mar 10 20:09:33 crc kubenswrapper[4861]: timeout: failed to connect service ":50051" within 1s Mar 10 20:09:33 crc kubenswrapper[4861]: > Mar 10 20:09:33 crc kubenswrapper[4861]: I0310 20:09:33.958805 4861 scope.go:117] "RemoveContainer" containerID="66fd05d7ca4c39567b3943c90d9f448b022d1eff623e3d5891bb70d66e564943" Mar 10 20:09:33 crc kubenswrapper[4861]: E0310 20:09:33.959225 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qttbr_openshift-machine-config-operator(771189c2-452d-4204-a0b7-abfe9ba62bd0)\"" pod="openshift-machine-config-operator/machine-config-daemon-qttbr" podUID="771189c2-452d-4204-a0b7-abfe9ba62bd0" Mar 10 20:09:42 crc kubenswrapper[4861]: I0310 20:09:42.009680 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-mvljl" Mar 10 20:09:42 crc kubenswrapper[4861]: I0310 20:09:42.062874 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-mvljl" Mar 10 20:09:43 crc kubenswrapper[4861]: I0310 20:09:43.207769 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-mvljl"] Mar 10 20:09:43 crc kubenswrapper[4861]: I0310 20:09:43.640099 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-mvljl" podUID="a9bb1611-e3c4-489e-94ab-a0206ba457a8" containerName="registry-server" containerID="cri-o://40a16fe532ae68931eb9954d52ae86e2ee322fd237e103cbdb4ea17c249c0c10" gracePeriod=2 Mar 10 20:09:44 crc kubenswrapper[4861]: I0310 20:09:44.125819 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-mvljl" Mar 10 20:09:44 crc kubenswrapper[4861]: I0310 20:09:44.241362 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kb4cw\" (UniqueName: \"kubernetes.io/projected/a9bb1611-e3c4-489e-94ab-a0206ba457a8-kube-api-access-kb4cw\") pod \"a9bb1611-e3c4-489e-94ab-a0206ba457a8\" (UID: \"a9bb1611-e3c4-489e-94ab-a0206ba457a8\") " Mar 10 20:09:44 crc kubenswrapper[4861]: I0310 20:09:44.242143 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a9bb1611-e3c4-489e-94ab-a0206ba457a8-utilities\") pod \"a9bb1611-e3c4-489e-94ab-a0206ba457a8\" (UID: \"a9bb1611-e3c4-489e-94ab-a0206ba457a8\") " Mar 10 20:09:44 crc kubenswrapper[4861]: I0310 20:09:44.242217 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a9bb1611-e3c4-489e-94ab-a0206ba457a8-catalog-content\") pod \"a9bb1611-e3c4-489e-94ab-a0206ba457a8\" (UID: \"a9bb1611-e3c4-489e-94ab-a0206ba457a8\") " Mar 10 20:09:44 crc kubenswrapper[4861]: I0310 20:09:44.244215 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a9bb1611-e3c4-489e-94ab-a0206ba457a8-utilities" (OuterVolumeSpecName: "utilities") pod "a9bb1611-e3c4-489e-94ab-a0206ba457a8" (UID: "a9bb1611-e3c4-489e-94ab-a0206ba457a8"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 10 20:09:44 crc kubenswrapper[4861]: I0310 20:09:44.244672 4861 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a9bb1611-e3c4-489e-94ab-a0206ba457a8-utilities\") on node \"crc\" DevicePath \"\"" Mar 10 20:09:44 crc kubenswrapper[4861]: I0310 20:09:44.255068 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a9bb1611-e3c4-489e-94ab-a0206ba457a8-kube-api-access-kb4cw" (OuterVolumeSpecName: "kube-api-access-kb4cw") pod "a9bb1611-e3c4-489e-94ab-a0206ba457a8" (UID: "a9bb1611-e3c4-489e-94ab-a0206ba457a8"). InnerVolumeSpecName "kube-api-access-kb4cw". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 10 20:09:44 crc kubenswrapper[4861]: I0310 20:09:44.345985 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kb4cw\" (UniqueName: \"kubernetes.io/projected/a9bb1611-e3c4-489e-94ab-a0206ba457a8-kube-api-access-kb4cw\") on node \"crc\" DevicePath \"\"" Mar 10 20:09:44 crc kubenswrapper[4861]: I0310 20:09:44.412657 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a9bb1611-e3c4-489e-94ab-a0206ba457a8-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a9bb1611-e3c4-489e-94ab-a0206ba457a8" (UID: "a9bb1611-e3c4-489e-94ab-a0206ba457a8"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 10 20:09:44 crc kubenswrapper[4861]: I0310 20:09:44.447099 4861 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a9bb1611-e3c4-489e-94ab-a0206ba457a8-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 10 20:09:44 crc kubenswrapper[4861]: I0310 20:09:44.657495 4861 generic.go:334] "Generic (PLEG): container finished" podID="a9bb1611-e3c4-489e-94ab-a0206ba457a8" containerID="40a16fe532ae68931eb9954d52ae86e2ee322fd237e103cbdb4ea17c249c0c10" exitCode=0 Mar 10 20:09:44 crc kubenswrapper[4861]: I0310 20:09:44.657579 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mvljl" event={"ID":"a9bb1611-e3c4-489e-94ab-a0206ba457a8","Type":"ContainerDied","Data":"40a16fe532ae68931eb9954d52ae86e2ee322fd237e103cbdb4ea17c249c0c10"} Mar 10 20:09:44 crc kubenswrapper[4861]: I0310 20:09:44.657645 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mvljl" event={"ID":"a9bb1611-e3c4-489e-94ab-a0206ba457a8","Type":"ContainerDied","Data":"7658f03b89a5b18d00e7873e22e841e7e6d33965fccee3ba929988aeb54d0e5a"} Mar 10 20:09:44 crc kubenswrapper[4861]: I0310 20:09:44.657677 4861 scope.go:117] "RemoveContainer" containerID="40a16fe532ae68931eb9954d52ae86e2ee322fd237e103cbdb4ea17c249c0c10" Mar 10 20:09:44 crc kubenswrapper[4861]: I0310 20:09:44.657996 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-mvljl" Mar 10 20:09:44 crc kubenswrapper[4861]: I0310 20:09:44.702365 4861 scope.go:117] "RemoveContainer" containerID="281aac409e51015f77c9e918c42980ca3e2c8ba07db77723d672e570a89c8d14" Mar 10 20:09:44 crc kubenswrapper[4861]: I0310 20:09:44.736913 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-mvljl"] Mar 10 20:09:44 crc kubenswrapper[4861]: I0310 20:09:44.741939 4861 scope.go:117] "RemoveContainer" containerID="f7711e6ee4de35bdc23ed2c4490ca6e0a57647be030ab88ba2159877f4cceb1c" Mar 10 20:09:44 crc kubenswrapper[4861]: I0310 20:09:44.745418 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-mvljl"] Mar 10 20:09:44 crc kubenswrapper[4861]: I0310 20:09:44.765214 4861 scope.go:117] "RemoveContainer" containerID="40a16fe532ae68931eb9954d52ae86e2ee322fd237e103cbdb4ea17c249c0c10" Mar 10 20:09:44 crc kubenswrapper[4861]: E0310 20:09:44.765548 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"40a16fe532ae68931eb9954d52ae86e2ee322fd237e103cbdb4ea17c249c0c10\": container with ID starting with 40a16fe532ae68931eb9954d52ae86e2ee322fd237e103cbdb4ea17c249c0c10 not found: ID does not exist" containerID="40a16fe532ae68931eb9954d52ae86e2ee322fd237e103cbdb4ea17c249c0c10" Mar 10 20:09:44 crc kubenswrapper[4861]: I0310 20:09:44.765576 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"40a16fe532ae68931eb9954d52ae86e2ee322fd237e103cbdb4ea17c249c0c10"} err="failed to get container status \"40a16fe532ae68931eb9954d52ae86e2ee322fd237e103cbdb4ea17c249c0c10\": rpc error: code = NotFound desc = could not find container \"40a16fe532ae68931eb9954d52ae86e2ee322fd237e103cbdb4ea17c249c0c10\": container with ID starting with 40a16fe532ae68931eb9954d52ae86e2ee322fd237e103cbdb4ea17c249c0c10 not found: ID does 
not exist" Mar 10 20:09:44 crc kubenswrapper[4861]: I0310 20:09:44.765596 4861 scope.go:117] "RemoveContainer" containerID="281aac409e51015f77c9e918c42980ca3e2c8ba07db77723d672e570a89c8d14" Mar 10 20:09:44 crc kubenswrapper[4861]: E0310 20:09:44.765886 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"281aac409e51015f77c9e918c42980ca3e2c8ba07db77723d672e570a89c8d14\": container with ID starting with 281aac409e51015f77c9e918c42980ca3e2c8ba07db77723d672e570a89c8d14 not found: ID does not exist" containerID="281aac409e51015f77c9e918c42980ca3e2c8ba07db77723d672e570a89c8d14" Mar 10 20:09:44 crc kubenswrapper[4861]: I0310 20:09:44.765935 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"281aac409e51015f77c9e918c42980ca3e2c8ba07db77723d672e570a89c8d14"} err="failed to get container status \"281aac409e51015f77c9e918c42980ca3e2c8ba07db77723d672e570a89c8d14\": rpc error: code = NotFound desc = could not find container \"281aac409e51015f77c9e918c42980ca3e2c8ba07db77723d672e570a89c8d14\": container with ID starting with 281aac409e51015f77c9e918c42980ca3e2c8ba07db77723d672e570a89c8d14 not found: ID does not exist" Mar 10 20:09:44 crc kubenswrapper[4861]: I0310 20:09:44.765950 4861 scope.go:117] "RemoveContainer" containerID="f7711e6ee4de35bdc23ed2c4490ca6e0a57647be030ab88ba2159877f4cceb1c" Mar 10 20:09:44 crc kubenswrapper[4861]: E0310 20:09:44.766327 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f7711e6ee4de35bdc23ed2c4490ca6e0a57647be030ab88ba2159877f4cceb1c\": container with ID starting with f7711e6ee4de35bdc23ed2c4490ca6e0a57647be030ab88ba2159877f4cceb1c not found: ID does not exist" containerID="f7711e6ee4de35bdc23ed2c4490ca6e0a57647be030ab88ba2159877f4cceb1c" Mar 10 20:09:44 crc kubenswrapper[4861]: I0310 20:09:44.766346 4861 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f7711e6ee4de35bdc23ed2c4490ca6e0a57647be030ab88ba2159877f4cceb1c"} err="failed to get container status \"f7711e6ee4de35bdc23ed2c4490ca6e0a57647be030ab88ba2159877f4cceb1c\": rpc error: code = NotFound desc = could not find container \"f7711e6ee4de35bdc23ed2c4490ca6e0a57647be030ab88ba2159877f4cceb1c\": container with ID starting with f7711e6ee4de35bdc23ed2c4490ca6e0a57647be030ab88ba2159877f4cceb1c not found: ID does not exist" Mar 10 20:09:44 crc kubenswrapper[4861]: I0310 20:09:44.973832 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a9bb1611-e3c4-489e-94ab-a0206ba457a8" path="/var/lib/kubelet/pods/a9bb1611-e3c4-489e-94ab-a0206ba457a8/volumes" Mar 10 20:09:46 crc kubenswrapper[4861]: I0310 20:09:46.963029 4861 scope.go:117] "RemoveContainer" containerID="66fd05d7ca4c39567b3943c90d9f448b022d1eff623e3d5891bb70d66e564943" Mar 10 20:09:46 crc kubenswrapper[4861]: E0310 20:09:46.964193 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qttbr_openshift-machine-config-operator(771189c2-452d-4204-a0b7-abfe9ba62bd0)\"" pod="openshift-machine-config-operator/machine-config-daemon-qttbr" podUID="771189c2-452d-4204-a0b7-abfe9ba62bd0" Mar 10 20:10:00 crc kubenswrapper[4861]: I0310 20:10:00.172031 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29552890-zcwwr"] Mar 10 20:10:00 crc kubenswrapper[4861]: E0310 20:10:00.173047 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a9bb1611-e3c4-489e-94ab-a0206ba457a8" containerName="extract-content" Mar 10 20:10:00 crc kubenswrapper[4861]: I0310 20:10:00.173068 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="a9bb1611-e3c4-489e-94ab-a0206ba457a8" containerName="extract-content" Mar 10 
20:10:00 crc kubenswrapper[4861]: E0310 20:10:00.173110 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a9bb1611-e3c4-489e-94ab-a0206ba457a8" containerName="registry-server" Mar 10 20:10:00 crc kubenswrapper[4861]: I0310 20:10:00.173125 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="a9bb1611-e3c4-489e-94ab-a0206ba457a8" containerName="registry-server" Mar 10 20:10:00 crc kubenswrapper[4861]: E0310 20:10:00.173159 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a9bb1611-e3c4-489e-94ab-a0206ba457a8" containerName="extract-utilities" Mar 10 20:10:00 crc kubenswrapper[4861]: I0310 20:10:00.173174 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="a9bb1611-e3c4-489e-94ab-a0206ba457a8" containerName="extract-utilities" Mar 10 20:10:00 crc kubenswrapper[4861]: I0310 20:10:00.173409 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="a9bb1611-e3c4-489e-94ab-a0206ba457a8" containerName="registry-server" Mar 10 20:10:00 crc kubenswrapper[4861]: I0310 20:10:00.174074 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29552890-zcwwr" Mar 10 20:10:00 crc kubenswrapper[4861]: I0310 20:10:00.177660 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-gfbj2" Mar 10 20:10:00 crc kubenswrapper[4861]: I0310 20:10:00.178560 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 10 20:10:00 crc kubenswrapper[4861]: I0310 20:10:00.186478 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29552890-zcwwr"] Mar 10 20:10:00 crc kubenswrapper[4861]: I0310 20:10:00.229663 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 10 20:10:00 crc kubenswrapper[4861]: I0310 20:10:00.230241 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kbnn5\" (UniqueName: \"kubernetes.io/projected/e0684c6e-8f32-4e42-bad1-b4014f02a12d-kube-api-access-kbnn5\") pod \"auto-csr-approver-29552890-zcwwr\" (UID: \"e0684c6e-8f32-4e42-bad1-b4014f02a12d\") " pod="openshift-infra/auto-csr-approver-29552890-zcwwr" Mar 10 20:10:00 crc kubenswrapper[4861]: I0310 20:10:00.332444 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kbnn5\" (UniqueName: \"kubernetes.io/projected/e0684c6e-8f32-4e42-bad1-b4014f02a12d-kube-api-access-kbnn5\") pod \"auto-csr-approver-29552890-zcwwr\" (UID: \"e0684c6e-8f32-4e42-bad1-b4014f02a12d\") " pod="openshift-infra/auto-csr-approver-29552890-zcwwr" Mar 10 20:10:00 crc kubenswrapper[4861]: I0310 20:10:00.363634 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kbnn5\" (UniqueName: \"kubernetes.io/projected/e0684c6e-8f32-4e42-bad1-b4014f02a12d-kube-api-access-kbnn5\") pod \"auto-csr-approver-29552890-zcwwr\" (UID: \"e0684c6e-8f32-4e42-bad1-b4014f02a12d\") " 
pod="openshift-infra/auto-csr-approver-29552890-zcwwr" Mar 10 20:10:00 crc kubenswrapper[4861]: I0310 20:10:00.549933 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29552890-zcwwr" Mar 10 20:10:00 crc kubenswrapper[4861]: I0310 20:10:00.863234 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29552890-zcwwr"] Mar 10 20:10:01 crc kubenswrapper[4861]: I0310 20:10:01.820471 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29552890-zcwwr" event={"ID":"e0684c6e-8f32-4e42-bad1-b4014f02a12d","Type":"ContainerStarted","Data":"4f7a9765c064f92d7e85021e069982f6091377175a42097bbd4e6ff87e3bfaa3"} Mar 10 20:10:01 crc kubenswrapper[4861]: I0310 20:10:01.958143 4861 scope.go:117] "RemoveContainer" containerID="66fd05d7ca4c39567b3943c90d9f448b022d1eff623e3d5891bb70d66e564943" Mar 10 20:10:01 crc kubenswrapper[4861]: E0310 20:10:01.958947 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qttbr_openshift-machine-config-operator(771189c2-452d-4204-a0b7-abfe9ba62bd0)\"" pod="openshift-machine-config-operator/machine-config-daemon-qttbr" podUID="771189c2-452d-4204-a0b7-abfe9ba62bd0" Mar 10 20:10:02 crc kubenswrapper[4861]: I0310 20:10:02.826451 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29552890-zcwwr" event={"ID":"e0684c6e-8f32-4e42-bad1-b4014f02a12d","Type":"ContainerStarted","Data":"af1684d1783b9e105b959d5e5e85496c9aa22b0ed33724e09b6813e9e32acfe3"} Mar 10 20:10:02 crc kubenswrapper[4861]: I0310 20:10:02.849480 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29552890-zcwwr" podStartSLOduration=1.474850928 
podStartE2EDuration="2.849466405s" podCreationTimestamp="2026-03-10 20:10:00 +0000 UTC" firstStartedPulling="2026-03-10 20:10:00.870318537 +0000 UTC m=+4944.633754497" lastFinishedPulling="2026-03-10 20:10:02.244933984 +0000 UTC m=+4946.008369974" observedRunningTime="2026-03-10 20:10:02.847579573 +0000 UTC m=+4946.611015533" watchObservedRunningTime="2026-03-10 20:10:02.849466405 +0000 UTC m=+4946.612902365" Mar 10 20:10:03 crc kubenswrapper[4861]: I0310 20:10:03.838093 4861 generic.go:334] "Generic (PLEG): container finished" podID="e0684c6e-8f32-4e42-bad1-b4014f02a12d" containerID="af1684d1783b9e105b959d5e5e85496c9aa22b0ed33724e09b6813e9e32acfe3" exitCode=0 Mar 10 20:10:03 crc kubenswrapper[4861]: I0310 20:10:03.838157 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29552890-zcwwr" event={"ID":"e0684c6e-8f32-4e42-bad1-b4014f02a12d","Type":"ContainerDied","Data":"af1684d1783b9e105b959d5e5e85496c9aa22b0ed33724e09b6813e9e32acfe3"} Mar 10 20:10:05 crc kubenswrapper[4861]: I0310 20:10:05.236339 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29552890-zcwwr" Mar 10 20:10:05 crc kubenswrapper[4861]: I0310 20:10:05.425677 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kbnn5\" (UniqueName: \"kubernetes.io/projected/e0684c6e-8f32-4e42-bad1-b4014f02a12d-kube-api-access-kbnn5\") pod \"e0684c6e-8f32-4e42-bad1-b4014f02a12d\" (UID: \"e0684c6e-8f32-4e42-bad1-b4014f02a12d\") " Mar 10 20:10:05 crc kubenswrapper[4861]: I0310 20:10:05.437904 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e0684c6e-8f32-4e42-bad1-b4014f02a12d-kube-api-access-kbnn5" (OuterVolumeSpecName: "kube-api-access-kbnn5") pod "e0684c6e-8f32-4e42-bad1-b4014f02a12d" (UID: "e0684c6e-8f32-4e42-bad1-b4014f02a12d"). InnerVolumeSpecName "kube-api-access-kbnn5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 10 20:10:05 crc kubenswrapper[4861]: I0310 20:10:05.528771 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kbnn5\" (UniqueName: \"kubernetes.io/projected/e0684c6e-8f32-4e42-bad1-b4014f02a12d-kube-api-access-kbnn5\") on node \"crc\" DevicePath \"\"" Mar 10 20:10:05 crc kubenswrapper[4861]: I0310 20:10:05.857499 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29552890-zcwwr" event={"ID":"e0684c6e-8f32-4e42-bad1-b4014f02a12d","Type":"ContainerDied","Data":"4f7a9765c064f92d7e85021e069982f6091377175a42097bbd4e6ff87e3bfaa3"} Mar 10 20:10:05 crc kubenswrapper[4861]: I0310 20:10:05.857556 4861 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4f7a9765c064f92d7e85021e069982f6091377175a42097bbd4e6ff87e3bfaa3" Mar 10 20:10:05 crc kubenswrapper[4861]: I0310 20:10:05.857628 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29552890-zcwwr" Mar 10 20:10:06 crc kubenswrapper[4861]: I0310 20:10:06.336244 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29552884-qzfsx"] Mar 10 20:10:06 crc kubenswrapper[4861]: I0310 20:10:06.342549 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29552884-qzfsx"] Mar 10 20:10:06 crc kubenswrapper[4861]: I0310 20:10:06.972276 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="99b5c29d-c540-4a03-b6d3-369340c81359" path="/var/lib/kubelet/pods/99b5c29d-c540-4a03-b6d3-369340c81359/volumes" Mar 10 20:10:15 crc kubenswrapper[4861]: I0310 20:10:15.959104 4861 scope.go:117] "RemoveContainer" containerID="66fd05d7ca4c39567b3943c90d9f448b022d1eff623e3d5891bb70d66e564943" Mar 10 20:10:15 crc kubenswrapper[4861]: E0310 20:10:15.961186 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qttbr_openshift-machine-config-operator(771189c2-452d-4204-a0b7-abfe9ba62bd0)\"" pod="openshift-machine-config-operator/machine-config-daemon-qttbr" podUID="771189c2-452d-4204-a0b7-abfe9ba62bd0" Mar 10 20:10:29 crc kubenswrapper[4861]: I0310 20:10:29.958211 4861 scope.go:117] "RemoveContainer" containerID="66fd05d7ca4c39567b3943c90d9f448b022d1eff623e3d5891bb70d66e564943" Mar 10 20:10:31 crc kubenswrapper[4861]: I0310 20:10:31.091733 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qttbr" event={"ID":"771189c2-452d-4204-a0b7-abfe9ba62bd0","Type":"ContainerStarted","Data":"3089d97ac4451c4b87e72224e8c126b31f30fe397bed57e36232876359b0059f"} Mar 10 20:10:49 crc kubenswrapper[4861]: I0310 20:10:49.263165 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["crc-storage/crc-storage-crc-jptgj"] Mar 10 20:10:49 crc kubenswrapper[4861]: I0310 20:10:49.277915 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["crc-storage/crc-storage-crc-jptgj"] Mar 10 20:10:49 crc kubenswrapper[4861]: I0310 20:10:49.398211 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["crc-storage/crc-storage-crc-qchxm"] Mar 10 20:10:49 crc kubenswrapper[4861]: E0310 20:10:49.398702 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e0684c6e-8f32-4e42-bad1-b4014f02a12d" containerName="oc" Mar 10 20:10:49 crc kubenswrapper[4861]: I0310 20:10:49.398764 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="e0684c6e-8f32-4e42-bad1-b4014f02a12d" containerName="oc" Mar 10 20:10:49 crc kubenswrapper[4861]: I0310 20:10:49.399007 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="e0684c6e-8f32-4e42-bad1-b4014f02a12d" containerName="oc" Mar 10 20:10:49 crc kubenswrapper[4861]: I0310 20:10:49.399682 4861 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-qchxm" Mar 10 20:10:49 crc kubenswrapper[4861]: I0310 20:10:49.403461 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"crc-storage" Mar 10 20:10:49 crc kubenswrapper[4861]: I0310 20:10:49.403768 4861 reflector.go:368] Caches populated for *v1.Secret from object-"crc-storage"/"crc-storage-dockercfg-2l2g6" Mar 10 20:10:49 crc kubenswrapper[4861]: I0310 20:10:49.404185 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"kube-root-ca.crt" Mar 10 20:10:49 crc kubenswrapper[4861]: I0310 20:10:49.404258 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"openshift-service-ca.crt" Mar 10 20:10:49 crc kubenswrapper[4861]: I0310 20:10:49.417285 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["crc-storage/crc-storage-crc-qchxm"] Mar 10 20:10:49 crc kubenswrapper[4861]: I0310 20:10:49.541427 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/179aa84f-ab93-4763-aee9-05e0c7a50aa5-node-mnt\") pod \"crc-storage-crc-qchxm\" (UID: \"179aa84f-ab93-4763-aee9-05e0c7a50aa5\") " pod="crc-storage/crc-storage-crc-qchxm" Mar 10 20:10:49 crc kubenswrapper[4861]: I0310 20:10:49.541776 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/179aa84f-ab93-4763-aee9-05e0c7a50aa5-crc-storage\") pod \"crc-storage-crc-qchxm\" (UID: \"179aa84f-ab93-4763-aee9-05e0c7a50aa5\") " pod="crc-storage/crc-storage-crc-qchxm" Mar 10 20:10:49 crc kubenswrapper[4861]: I0310 20:10:49.542054 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x9lsr\" (UniqueName: \"kubernetes.io/projected/179aa84f-ab93-4763-aee9-05e0c7a50aa5-kube-api-access-x9lsr\") pod 
\"crc-storage-crc-qchxm\" (UID: \"179aa84f-ab93-4763-aee9-05e0c7a50aa5\") " pod="crc-storage/crc-storage-crc-qchxm" Mar 10 20:10:49 crc kubenswrapper[4861]: I0310 20:10:49.642983 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/179aa84f-ab93-4763-aee9-05e0c7a50aa5-crc-storage\") pod \"crc-storage-crc-qchxm\" (UID: \"179aa84f-ab93-4763-aee9-05e0c7a50aa5\") " pod="crc-storage/crc-storage-crc-qchxm" Mar 10 20:10:49 crc kubenswrapper[4861]: I0310 20:10:49.643128 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x9lsr\" (UniqueName: \"kubernetes.io/projected/179aa84f-ab93-4763-aee9-05e0c7a50aa5-kube-api-access-x9lsr\") pod \"crc-storage-crc-qchxm\" (UID: \"179aa84f-ab93-4763-aee9-05e0c7a50aa5\") " pod="crc-storage/crc-storage-crc-qchxm" Mar 10 20:10:49 crc kubenswrapper[4861]: I0310 20:10:49.643192 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/179aa84f-ab93-4763-aee9-05e0c7a50aa5-node-mnt\") pod \"crc-storage-crc-qchxm\" (UID: \"179aa84f-ab93-4763-aee9-05e0c7a50aa5\") " pod="crc-storage/crc-storage-crc-qchxm" Mar 10 20:10:49 crc kubenswrapper[4861]: I0310 20:10:49.643589 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/179aa84f-ab93-4763-aee9-05e0c7a50aa5-node-mnt\") pod \"crc-storage-crc-qchxm\" (UID: \"179aa84f-ab93-4763-aee9-05e0c7a50aa5\") " pod="crc-storage/crc-storage-crc-qchxm" Mar 10 20:10:49 crc kubenswrapper[4861]: I0310 20:10:49.644328 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/179aa84f-ab93-4763-aee9-05e0c7a50aa5-crc-storage\") pod \"crc-storage-crc-qchxm\" (UID: \"179aa84f-ab93-4763-aee9-05e0c7a50aa5\") " pod="crc-storage/crc-storage-crc-qchxm" Mar 10 20:10:49 crc 
kubenswrapper[4861]: I0310 20:10:49.676126 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x9lsr\" (UniqueName: \"kubernetes.io/projected/179aa84f-ab93-4763-aee9-05e0c7a50aa5-kube-api-access-x9lsr\") pod \"crc-storage-crc-qchxm\" (UID: \"179aa84f-ab93-4763-aee9-05e0c7a50aa5\") " pod="crc-storage/crc-storage-crc-qchxm" Mar 10 20:10:49 crc kubenswrapper[4861]: I0310 20:10:49.756009 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-qchxm" Mar 10 20:10:50 crc kubenswrapper[4861]: I0310 20:10:50.285000 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["crc-storage/crc-storage-crc-qchxm"] Mar 10 20:10:50 crc kubenswrapper[4861]: W0310 20:10:50.297454 4861 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod179aa84f_ab93_4763_aee9_05e0c7a50aa5.slice/crio-8392756d5dab68e44757c07b9a622a502073f4bbd2805b78cc36e4398af0e5ff WatchSource:0}: Error finding container 8392756d5dab68e44757c07b9a622a502073f4bbd2805b78cc36e4398af0e5ff: Status 404 returned error can't find the container with id 8392756d5dab68e44757c07b9a622a502073f4bbd2805b78cc36e4398af0e5ff Mar 10 20:10:50 crc kubenswrapper[4861]: I0310 20:10:50.970570 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0a196210-c92a-4c67-9441-9ffa06be32be" path="/var/lib/kubelet/pods/0a196210-c92a-4c67-9441-9ffa06be32be/volumes" Mar 10 20:10:51 crc kubenswrapper[4861]: I0310 20:10:51.306180 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-qchxm" event={"ID":"179aa84f-ab93-4763-aee9-05e0c7a50aa5","Type":"ContainerStarted","Data":"8392756d5dab68e44757c07b9a622a502073f4bbd2805b78cc36e4398af0e5ff"} Mar 10 20:10:52 crc kubenswrapper[4861]: I0310 20:10:52.318231 4861 generic.go:334] "Generic (PLEG): container finished" podID="179aa84f-ab93-4763-aee9-05e0c7a50aa5" 
containerID="7c6b6e251055447fe35a44135e97cf62ba19e29b8e300773118daed6903c4694" exitCode=0 Mar 10 20:10:52 crc kubenswrapper[4861]: I0310 20:10:52.318343 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-qchxm" event={"ID":"179aa84f-ab93-4763-aee9-05e0c7a50aa5","Type":"ContainerDied","Data":"7c6b6e251055447fe35a44135e97cf62ba19e29b8e300773118daed6903c4694"} Mar 10 20:10:53 crc kubenswrapper[4861]: I0310 20:10:53.698636 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-qchxm" Mar 10 20:10:53 crc kubenswrapper[4861]: I0310 20:10:53.823164 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x9lsr\" (UniqueName: \"kubernetes.io/projected/179aa84f-ab93-4763-aee9-05e0c7a50aa5-kube-api-access-x9lsr\") pod \"179aa84f-ab93-4763-aee9-05e0c7a50aa5\" (UID: \"179aa84f-ab93-4763-aee9-05e0c7a50aa5\") " Mar 10 20:10:53 crc kubenswrapper[4861]: I0310 20:10:53.823232 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/179aa84f-ab93-4763-aee9-05e0c7a50aa5-node-mnt\") pod \"179aa84f-ab93-4763-aee9-05e0c7a50aa5\" (UID: \"179aa84f-ab93-4763-aee9-05e0c7a50aa5\") " Mar 10 20:10:53 crc kubenswrapper[4861]: I0310 20:10:53.823338 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/179aa84f-ab93-4763-aee9-05e0c7a50aa5-crc-storage\") pod \"179aa84f-ab93-4763-aee9-05e0c7a50aa5\" (UID: \"179aa84f-ab93-4763-aee9-05e0c7a50aa5\") " Mar 10 20:10:53 crc kubenswrapper[4861]: I0310 20:10:53.823941 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/179aa84f-ab93-4763-aee9-05e0c7a50aa5-node-mnt" (OuterVolumeSpecName: "node-mnt") pod "179aa84f-ab93-4763-aee9-05e0c7a50aa5" (UID: "179aa84f-ab93-4763-aee9-05e0c7a50aa5"). 
InnerVolumeSpecName "node-mnt". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 10 20:10:53 crc kubenswrapper[4861]: I0310 20:10:53.832041 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/179aa84f-ab93-4763-aee9-05e0c7a50aa5-kube-api-access-x9lsr" (OuterVolumeSpecName: "kube-api-access-x9lsr") pod "179aa84f-ab93-4763-aee9-05e0c7a50aa5" (UID: "179aa84f-ab93-4763-aee9-05e0c7a50aa5"). InnerVolumeSpecName "kube-api-access-x9lsr". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 10 20:10:53 crc kubenswrapper[4861]: I0310 20:10:53.855091 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/179aa84f-ab93-4763-aee9-05e0c7a50aa5-crc-storage" (OuterVolumeSpecName: "crc-storage") pod "179aa84f-ab93-4763-aee9-05e0c7a50aa5" (UID: "179aa84f-ab93-4763-aee9-05e0c7a50aa5"). InnerVolumeSpecName "crc-storage". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 10 20:10:53 crc kubenswrapper[4861]: I0310 20:10:53.925371 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x9lsr\" (UniqueName: \"kubernetes.io/projected/179aa84f-ab93-4763-aee9-05e0c7a50aa5-kube-api-access-x9lsr\") on node \"crc\" DevicePath \"\"" Mar 10 20:10:53 crc kubenswrapper[4861]: I0310 20:10:53.925479 4861 reconciler_common.go:293] "Volume detached for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/179aa84f-ab93-4763-aee9-05e0c7a50aa5-node-mnt\") on node \"crc\" DevicePath \"\"" Mar 10 20:10:53 crc kubenswrapper[4861]: I0310 20:10:53.925504 4861 reconciler_common.go:293] "Volume detached for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/179aa84f-ab93-4763-aee9-05e0c7a50aa5-crc-storage\") on node \"crc\" DevicePath \"\"" Mar 10 20:10:54 crc kubenswrapper[4861]: I0310 20:10:54.347497 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-qchxm" 
event={"ID":"179aa84f-ab93-4763-aee9-05e0c7a50aa5","Type":"ContainerDied","Data":"8392756d5dab68e44757c07b9a622a502073f4bbd2805b78cc36e4398af0e5ff"} Mar 10 20:10:54 crc kubenswrapper[4861]: I0310 20:10:54.347554 4861 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8392756d5dab68e44757c07b9a622a502073f4bbd2805b78cc36e4398af0e5ff" Mar 10 20:10:54 crc kubenswrapper[4861]: I0310 20:10:54.347584 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-qchxm" Mar 10 20:10:56 crc kubenswrapper[4861]: I0310 20:10:56.207669 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["crc-storage/crc-storage-crc-qchxm"] Mar 10 20:10:56 crc kubenswrapper[4861]: I0310 20:10:56.217285 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["crc-storage/crc-storage-crc-qchxm"] Mar 10 20:10:56 crc kubenswrapper[4861]: I0310 20:10:56.320347 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["crc-storage/crc-storage-crc-2ctqp"] Mar 10 20:10:56 crc kubenswrapper[4861]: E0310 20:10:56.320859 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="179aa84f-ab93-4763-aee9-05e0c7a50aa5" containerName="storage" Mar 10 20:10:56 crc kubenswrapper[4861]: I0310 20:10:56.320890 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="179aa84f-ab93-4763-aee9-05e0c7a50aa5" containerName="storage" Mar 10 20:10:56 crc kubenswrapper[4861]: I0310 20:10:56.321220 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="179aa84f-ab93-4763-aee9-05e0c7a50aa5" containerName="storage" Mar 10 20:10:56 crc kubenswrapper[4861]: I0310 20:10:56.322131 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="crc-storage/crc-storage-crc-2ctqp" Mar 10 20:10:56 crc kubenswrapper[4861]: I0310 20:10:56.326884 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"crc-storage" Mar 10 20:10:56 crc kubenswrapper[4861]: I0310 20:10:56.328089 4861 reflector.go:368] Caches populated for *v1.Secret from object-"crc-storage"/"crc-storage-dockercfg-2l2g6" Mar 10 20:10:56 crc kubenswrapper[4861]: I0310 20:10:56.328201 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"openshift-service-ca.crt" Mar 10 20:10:56 crc kubenswrapper[4861]: I0310 20:10:56.329474 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"kube-root-ca.crt" Mar 10 20:10:56 crc kubenswrapper[4861]: I0310 20:10:56.337148 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["crc-storage/crc-storage-crc-2ctqp"] Mar 10 20:10:56 crc kubenswrapper[4861]: I0310 20:10:56.472878 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/69531f11-6187-472f-985d-5c3baa125800-node-mnt\") pod \"crc-storage-crc-2ctqp\" (UID: \"69531f11-6187-472f-985d-5c3baa125800\") " pod="crc-storage/crc-storage-crc-2ctqp" Mar 10 20:10:56 crc kubenswrapper[4861]: I0310 20:10:56.472941 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/69531f11-6187-472f-985d-5c3baa125800-crc-storage\") pod \"crc-storage-crc-2ctqp\" (UID: \"69531f11-6187-472f-985d-5c3baa125800\") " pod="crc-storage/crc-storage-crc-2ctqp" Mar 10 20:10:56 crc kubenswrapper[4861]: I0310 20:10:56.472991 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-78hwn\" (UniqueName: \"kubernetes.io/projected/69531f11-6187-472f-985d-5c3baa125800-kube-api-access-78hwn\") pod \"crc-storage-crc-2ctqp\" (UID: 
\"69531f11-6187-472f-985d-5c3baa125800\") " pod="crc-storage/crc-storage-crc-2ctqp" Mar 10 20:10:56 crc kubenswrapper[4861]: I0310 20:10:56.575226 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/69531f11-6187-472f-985d-5c3baa125800-node-mnt\") pod \"crc-storage-crc-2ctqp\" (UID: \"69531f11-6187-472f-985d-5c3baa125800\") " pod="crc-storage/crc-storage-crc-2ctqp" Mar 10 20:10:56 crc kubenswrapper[4861]: I0310 20:10:56.575358 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/69531f11-6187-472f-985d-5c3baa125800-crc-storage\") pod \"crc-storage-crc-2ctqp\" (UID: \"69531f11-6187-472f-985d-5c3baa125800\") " pod="crc-storage/crc-storage-crc-2ctqp" Mar 10 20:10:56 crc kubenswrapper[4861]: I0310 20:10:56.575442 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-78hwn\" (UniqueName: \"kubernetes.io/projected/69531f11-6187-472f-985d-5c3baa125800-kube-api-access-78hwn\") pod \"crc-storage-crc-2ctqp\" (UID: \"69531f11-6187-472f-985d-5c3baa125800\") " pod="crc-storage/crc-storage-crc-2ctqp" Mar 10 20:10:56 crc kubenswrapper[4861]: I0310 20:10:56.575603 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/69531f11-6187-472f-985d-5c3baa125800-node-mnt\") pod \"crc-storage-crc-2ctqp\" (UID: \"69531f11-6187-472f-985d-5c3baa125800\") " pod="crc-storage/crc-storage-crc-2ctqp" Mar 10 20:10:56 crc kubenswrapper[4861]: I0310 20:10:56.576695 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/69531f11-6187-472f-985d-5c3baa125800-crc-storage\") pod \"crc-storage-crc-2ctqp\" (UID: \"69531f11-6187-472f-985d-5c3baa125800\") " pod="crc-storage/crc-storage-crc-2ctqp" Mar 10 20:10:56 crc kubenswrapper[4861]: I0310 20:10:56.606692 4861 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-78hwn\" (UniqueName: \"kubernetes.io/projected/69531f11-6187-472f-985d-5c3baa125800-kube-api-access-78hwn\") pod \"crc-storage-crc-2ctqp\" (UID: \"69531f11-6187-472f-985d-5c3baa125800\") " pod="crc-storage/crc-storage-crc-2ctqp" Mar 10 20:10:56 crc kubenswrapper[4861]: I0310 20:10:56.699759 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-2ctqp" Mar 10 20:10:56 crc kubenswrapper[4861]: I0310 20:10:56.976012 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="179aa84f-ab93-4763-aee9-05e0c7a50aa5" path="/var/lib/kubelet/pods/179aa84f-ab93-4763-aee9-05e0c7a50aa5/volumes" Mar 10 20:10:57 crc kubenswrapper[4861]: I0310 20:10:57.004657 4861 scope.go:117] "RemoveContainer" containerID="79b66d70f838551cd15f4213fcdae99951625b0358a9c05f9e04cfe1faf7dc08" Mar 10 20:10:57 crc kubenswrapper[4861]: I0310 20:10:57.009151 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["crc-storage/crc-storage-crc-2ctqp"] Mar 10 20:10:57 crc kubenswrapper[4861]: I0310 20:10:57.057171 4861 scope.go:117] "RemoveContainer" containerID="5ad05c468f856e454f51904b827c281cab64cd11815768f5170a3f8fb8952fb9" Mar 10 20:10:57 crc kubenswrapper[4861]: I0310 20:10:57.393645 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-2ctqp" event={"ID":"69531f11-6187-472f-985d-5c3baa125800","Type":"ContainerStarted","Data":"cc0b58e4c2010bf5936525822cf446ee68d969fd5d7fa2de0555e868b955c024"} Mar 10 20:10:59 crc kubenswrapper[4861]: I0310 20:10:59.420827 4861 generic.go:334] "Generic (PLEG): container finished" podID="69531f11-6187-472f-985d-5c3baa125800" containerID="baf5141460a6fe61807e289ad5beebe216adaae8c89d1134d52ee35280d1fbb6" exitCode=0 Mar 10 20:10:59 crc kubenswrapper[4861]: I0310 20:10:59.420912 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-2ctqp" 
event={"ID":"69531f11-6187-472f-985d-5c3baa125800","Type":"ContainerDied","Data":"baf5141460a6fe61807e289ad5beebe216adaae8c89d1134d52ee35280d1fbb6"} Mar 10 20:11:00 crc kubenswrapper[4861]: I0310 20:11:00.835789 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-2ctqp" Mar 10 20:11:00 crc kubenswrapper[4861]: I0310 20:11:00.948248 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/69531f11-6187-472f-985d-5c3baa125800-crc-storage\") pod \"69531f11-6187-472f-985d-5c3baa125800\" (UID: \"69531f11-6187-472f-985d-5c3baa125800\") " Mar 10 20:11:00 crc kubenswrapper[4861]: I0310 20:11:00.948549 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-78hwn\" (UniqueName: \"kubernetes.io/projected/69531f11-6187-472f-985d-5c3baa125800-kube-api-access-78hwn\") pod \"69531f11-6187-472f-985d-5c3baa125800\" (UID: \"69531f11-6187-472f-985d-5c3baa125800\") " Mar 10 20:11:00 crc kubenswrapper[4861]: I0310 20:11:00.948599 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/69531f11-6187-472f-985d-5c3baa125800-node-mnt\") pod \"69531f11-6187-472f-985d-5c3baa125800\" (UID: \"69531f11-6187-472f-985d-5c3baa125800\") " Mar 10 20:11:00 crc kubenswrapper[4861]: I0310 20:11:00.948932 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/69531f11-6187-472f-985d-5c3baa125800-node-mnt" (OuterVolumeSpecName: "node-mnt") pod "69531f11-6187-472f-985d-5c3baa125800" (UID: "69531f11-6187-472f-985d-5c3baa125800"). InnerVolumeSpecName "node-mnt". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 10 20:11:00 crc kubenswrapper[4861]: I0310 20:11:00.957878 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/69531f11-6187-472f-985d-5c3baa125800-kube-api-access-78hwn" (OuterVolumeSpecName: "kube-api-access-78hwn") pod "69531f11-6187-472f-985d-5c3baa125800" (UID: "69531f11-6187-472f-985d-5c3baa125800"). InnerVolumeSpecName "kube-api-access-78hwn". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 10 20:11:00 crc kubenswrapper[4861]: I0310 20:11:00.980853 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/69531f11-6187-472f-985d-5c3baa125800-crc-storage" (OuterVolumeSpecName: "crc-storage") pod "69531f11-6187-472f-985d-5c3baa125800" (UID: "69531f11-6187-472f-985d-5c3baa125800"). InnerVolumeSpecName "crc-storage". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 10 20:11:01 crc kubenswrapper[4861]: I0310 20:11:01.050429 4861 reconciler_common.go:293] "Volume detached for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/69531f11-6187-472f-985d-5c3baa125800-crc-storage\") on node \"crc\" DevicePath \"\"" Mar 10 20:11:01 crc kubenswrapper[4861]: I0310 20:11:01.050476 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-78hwn\" (UniqueName: \"kubernetes.io/projected/69531f11-6187-472f-985d-5c3baa125800-kube-api-access-78hwn\") on node \"crc\" DevicePath \"\"" Mar 10 20:11:01 crc kubenswrapper[4861]: I0310 20:11:01.050498 4861 reconciler_common.go:293] "Volume detached for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/69531f11-6187-472f-985d-5c3baa125800-node-mnt\") on node \"crc\" DevicePath \"\"" Mar 10 20:11:01 crc kubenswrapper[4861]: I0310 20:11:01.442200 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-2ctqp" 
event={"ID":"69531f11-6187-472f-985d-5c3baa125800","Type":"ContainerDied","Data":"cc0b58e4c2010bf5936525822cf446ee68d969fd5d7fa2de0555e868b955c024"} Mar 10 20:11:01 crc kubenswrapper[4861]: I0310 20:11:01.442251 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-2ctqp" Mar 10 20:11:01 crc kubenswrapper[4861]: I0310 20:11:01.442266 4861 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cc0b58e4c2010bf5936525822cf446ee68d969fd5d7fa2de0555e868b955c024" Mar 10 20:12:00 crc kubenswrapper[4861]: I0310 20:12:00.166496 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29552892-c5kpk"] Mar 10 20:12:00 crc kubenswrapper[4861]: E0310 20:12:00.168009 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="69531f11-6187-472f-985d-5c3baa125800" containerName="storage" Mar 10 20:12:00 crc kubenswrapper[4861]: I0310 20:12:00.168034 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="69531f11-6187-472f-985d-5c3baa125800" containerName="storage" Mar 10 20:12:00 crc kubenswrapper[4861]: I0310 20:12:00.168265 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="69531f11-6187-472f-985d-5c3baa125800" containerName="storage" Mar 10 20:12:00 crc kubenswrapper[4861]: I0310 20:12:00.168915 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29552892-c5kpk" Mar 10 20:12:00 crc kubenswrapper[4861]: I0310 20:12:00.172775 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-gfbj2" Mar 10 20:12:00 crc kubenswrapper[4861]: I0310 20:12:00.173559 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 10 20:12:00 crc kubenswrapper[4861]: I0310 20:12:00.173870 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 10 20:12:00 crc kubenswrapper[4861]: I0310 20:12:00.176013 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29552892-c5kpk"] Mar 10 20:12:00 crc kubenswrapper[4861]: I0310 20:12:00.318056 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xtz9x\" (UniqueName: \"kubernetes.io/projected/fa1859cc-10d1-4d25-8c3e-f17a18cd4ba0-kube-api-access-xtz9x\") pod \"auto-csr-approver-29552892-c5kpk\" (UID: \"fa1859cc-10d1-4d25-8c3e-f17a18cd4ba0\") " pod="openshift-infra/auto-csr-approver-29552892-c5kpk" Mar 10 20:12:00 crc kubenswrapper[4861]: I0310 20:12:00.419590 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xtz9x\" (UniqueName: \"kubernetes.io/projected/fa1859cc-10d1-4d25-8c3e-f17a18cd4ba0-kube-api-access-xtz9x\") pod \"auto-csr-approver-29552892-c5kpk\" (UID: \"fa1859cc-10d1-4d25-8c3e-f17a18cd4ba0\") " pod="openshift-infra/auto-csr-approver-29552892-c5kpk" Mar 10 20:12:00 crc kubenswrapper[4861]: I0310 20:12:00.446262 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xtz9x\" (UniqueName: \"kubernetes.io/projected/fa1859cc-10d1-4d25-8c3e-f17a18cd4ba0-kube-api-access-xtz9x\") pod \"auto-csr-approver-29552892-c5kpk\" (UID: \"fa1859cc-10d1-4d25-8c3e-f17a18cd4ba0\") " 
pod="openshift-infra/auto-csr-approver-29552892-c5kpk" Mar 10 20:12:00 crc kubenswrapper[4861]: I0310 20:12:00.519185 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29552892-c5kpk" Mar 10 20:12:01 crc kubenswrapper[4861]: I0310 20:12:01.018438 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29552892-c5kpk"] Mar 10 20:12:02 crc kubenswrapper[4861]: I0310 20:12:02.048550 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29552892-c5kpk" event={"ID":"fa1859cc-10d1-4d25-8c3e-f17a18cd4ba0","Type":"ContainerStarted","Data":"4e820992e1de5daed99ba3629192a46e157030fb4faa709354f8ef23f07d4b18"} Mar 10 20:12:03 crc kubenswrapper[4861]: I0310 20:12:03.061327 4861 generic.go:334] "Generic (PLEG): container finished" podID="fa1859cc-10d1-4d25-8c3e-f17a18cd4ba0" containerID="851f789b57a61b9ef74c21f4bde4fdef2aaf5b5e25f18ae080512c61e60a3572" exitCode=0 Mar 10 20:12:03 crc kubenswrapper[4861]: I0310 20:12:03.061433 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29552892-c5kpk" event={"ID":"fa1859cc-10d1-4d25-8c3e-f17a18cd4ba0","Type":"ContainerDied","Data":"851f789b57a61b9ef74c21f4bde4fdef2aaf5b5e25f18ae080512c61e60a3572"} Mar 10 20:12:04 crc kubenswrapper[4861]: I0310 20:12:04.491500 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29552892-c5kpk" Mar 10 20:12:04 crc kubenswrapper[4861]: I0310 20:12:04.603011 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xtz9x\" (UniqueName: \"kubernetes.io/projected/fa1859cc-10d1-4d25-8c3e-f17a18cd4ba0-kube-api-access-xtz9x\") pod \"fa1859cc-10d1-4d25-8c3e-f17a18cd4ba0\" (UID: \"fa1859cc-10d1-4d25-8c3e-f17a18cd4ba0\") " Mar 10 20:12:04 crc kubenswrapper[4861]: I0310 20:12:04.612396 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fa1859cc-10d1-4d25-8c3e-f17a18cd4ba0-kube-api-access-xtz9x" (OuterVolumeSpecName: "kube-api-access-xtz9x") pod "fa1859cc-10d1-4d25-8c3e-f17a18cd4ba0" (UID: "fa1859cc-10d1-4d25-8c3e-f17a18cd4ba0"). InnerVolumeSpecName "kube-api-access-xtz9x". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 10 20:12:04 crc kubenswrapper[4861]: I0310 20:12:04.704932 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xtz9x\" (UniqueName: \"kubernetes.io/projected/fa1859cc-10d1-4d25-8c3e-f17a18cd4ba0-kube-api-access-xtz9x\") on node \"crc\" DevicePath \"\"" Mar 10 20:12:05 crc kubenswrapper[4861]: I0310 20:12:05.084424 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29552892-c5kpk" event={"ID":"fa1859cc-10d1-4d25-8c3e-f17a18cd4ba0","Type":"ContainerDied","Data":"4e820992e1de5daed99ba3629192a46e157030fb4faa709354f8ef23f07d4b18"} Mar 10 20:12:05 crc kubenswrapper[4861]: I0310 20:12:05.084485 4861 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4e820992e1de5daed99ba3629192a46e157030fb4faa709354f8ef23f07d4b18" Mar 10 20:12:05 crc kubenswrapper[4861]: I0310 20:12:05.084561 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29552892-c5kpk" Mar 10 20:12:05 crc kubenswrapper[4861]: I0310 20:12:05.604158 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29552886-sqfsv"] Mar 10 20:12:05 crc kubenswrapper[4861]: I0310 20:12:05.613258 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29552886-sqfsv"] Mar 10 20:12:06 crc kubenswrapper[4861]: I0310 20:12:06.975628 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="58a6e34c-985b-4caa-9a5e-0fdbcde5c0da" path="/var/lib/kubelet/pods/58a6e34c-985b-4caa-9a5e-0fdbcde5c0da/volumes" Mar 10 20:12:51 crc kubenswrapper[4861]: I0310 20:12:51.991843 4861 patch_prober.go:28] interesting pod/machine-config-daemon-qttbr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 10 20:12:51 crc kubenswrapper[4861]: I0310 20:12:51.992517 4861 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qttbr" podUID="771189c2-452d-4204-a0b7-abfe9ba62bd0" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 10 20:12:57 crc kubenswrapper[4861]: I0310 20:12:57.180630 4861 scope.go:117] "RemoveContainer" containerID="270c69547db9be7067c9fe9fbeb9df29e0cb05209196cbf14fd552f71a485960" Mar 10 20:13:02 crc kubenswrapper[4861]: I0310 20:13:02.897597 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-c44667757-9pdjw"] Mar 10 20:13:02 crc kubenswrapper[4861]: E0310 20:13:02.898166 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fa1859cc-10d1-4d25-8c3e-f17a18cd4ba0" containerName="oc" Mar 10 20:13:02 crc kubenswrapper[4861]: 
I0310 20:13:02.898180 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="fa1859cc-10d1-4d25-8c3e-f17a18cd4ba0" containerName="oc" Mar 10 20:13:02 crc kubenswrapper[4861]: I0310 20:13:02.898312 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="fa1859cc-10d1-4d25-8c3e-f17a18cd4ba0" containerName="oc" Mar 10 20:13:02 crc kubenswrapper[4861]: I0310 20:13:02.899075 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-c44667757-9pdjw" Mar 10 20:13:02 crc kubenswrapper[4861]: I0310 20:13:02.903176 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-55c76fd6b7-7qqk7"] Mar 10 20:13:02 crc kubenswrapper[4861]: I0310 20:13:02.903837 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns" Mar 10 20:13:02 crc kubenswrapper[4861]: I0310 20:13:02.903914 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-pvmrj" Mar 10 20:13:02 crc kubenswrapper[4861]: I0310 20:13:02.903954 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt" Mar 10 20:13:02 crc kubenswrapper[4861]: I0310 20:13:02.903957 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt" Mar 10 20:13:02 crc kubenswrapper[4861]: I0310 20:13:02.904574 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-55c76fd6b7-7qqk7" Mar 10 20:13:02 crc kubenswrapper[4861]: I0310 20:13:02.907054 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc" Mar 10 20:13:02 crc kubenswrapper[4861]: I0310 20:13:02.920948 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-c44667757-9pdjw"] Mar 10 20:13:02 crc kubenswrapper[4861]: I0310 20:13:02.928360 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-55c76fd6b7-7qqk7"] Mar 10 20:13:03 crc kubenswrapper[4861]: I0310 20:13:03.061335 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b22af4e8-0be9-4dff-9979-bf8f72d3f017-dns-svc\") pod \"dnsmasq-dns-55c76fd6b7-7qqk7\" (UID: \"b22af4e8-0be9-4dff-9979-bf8f72d3f017\") " pod="openstack/dnsmasq-dns-55c76fd6b7-7qqk7" Mar 10 20:13:03 crc kubenswrapper[4861]: I0310 20:13:03.061412 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j7sph\" (UniqueName: \"kubernetes.io/projected/14c26f8a-53c9-457c-b4fc-28b55cdedc9a-kube-api-access-j7sph\") pod \"dnsmasq-dns-c44667757-9pdjw\" (UID: \"14c26f8a-53c9-457c-b4fc-28b55cdedc9a\") " pod="openstack/dnsmasq-dns-c44667757-9pdjw" Mar 10 20:13:03 crc kubenswrapper[4861]: I0310 20:13:03.061453 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/14c26f8a-53c9-457c-b4fc-28b55cdedc9a-config\") pod \"dnsmasq-dns-c44667757-9pdjw\" (UID: \"14c26f8a-53c9-457c-b4fc-28b55cdedc9a\") " pod="openstack/dnsmasq-dns-c44667757-9pdjw" Mar 10 20:13:03 crc kubenswrapper[4861]: I0310 20:13:03.061475 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q6lwn\" (UniqueName: 
\"kubernetes.io/projected/b22af4e8-0be9-4dff-9979-bf8f72d3f017-kube-api-access-q6lwn\") pod \"dnsmasq-dns-55c76fd6b7-7qqk7\" (UID: \"b22af4e8-0be9-4dff-9979-bf8f72d3f017\") " pod="openstack/dnsmasq-dns-55c76fd6b7-7qqk7" Mar 10 20:13:03 crc kubenswrapper[4861]: I0310 20:13:03.061537 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b22af4e8-0be9-4dff-9979-bf8f72d3f017-config\") pod \"dnsmasq-dns-55c76fd6b7-7qqk7\" (UID: \"b22af4e8-0be9-4dff-9979-bf8f72d3f017\") " pod="openstack/dnsmasq-dns-55c76fd6b7-7qqk7" Mar 10 20:13:03 crc kubenswrapper[4861]: I0310 20:13:03.163093 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/14c26f8a-53c9-457c-b4fc-28b55cdedc9a-config\") pod \"dnsmasq-dns-c44667757-9pdjw\" (UID: \"14c26f8a-53c9-457c-b4fc-28b55cdedc9a\") " pod="openstack/dnsmasq-dns-c44667757-9pdjw" Mar 10 20:13:03 crc kubenswrapper[4861]: I0310 20:13:03.163363 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q6lwn\" (UniqueName: \"kubernetes.io/projected/b22af4e8-0be9-4dff-9979-bf8f72d3f017-kube-api-access-q6lwn\") pod \"dnsmasq-dns-55c76fd6b7-7qqk7\" (UID: \"b22af4e8-0be9-4dff-9979-bf8f72d3f017\") " pod="openstack/dnsmasq-dns-55c76fd6b7-7qqk7" Mar 10 20:13:03 crc kubenswrapper[4861]: I0310 20:13:03.163466 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b22af4e8-0be9-4dff-9979-bf8f72d3f017-config\") pod \"dnsmasq-dns-55c76fd6b7-7qqk7\" (UID: \"b22af4e8-0be9-4dff-9979-bf8f72d3f017\") " pod="openstack/dnsmasq-dns-55c76fd6b7-7qqk7" Mar 10 20:13:03 crc kubenswrapper[4861]: I0310 20:13:03.163558 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/b22af4e8-0be9-4dff-9979-bf8f72d3f017-dns-svc\") pod \"dnsmasq-dns-55c76fd6b7-7qqk7\" (UID: \"b22af4e8-0be9-4dff-9979-bf8f72d3f017\") " pod="openstack/dnsmasq-dns-55c76fd6b7-7qqk7" Mar 10 20:13:03 crc kubenswrapper[4861]: I0310 20:13:03.163663 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j7sph\" (UniqueName: \"kubernetes.io/projected/14c26f8a-53c9-457c-b4fc-28b55cdedc9a-kube-api-access-j7sph\") pod \"dnsmasq-dns-c44667757-9pdjw\" (UID: \"14c26f8a-53c9-457c-b4fc-28b55cdedc9a\") " pod="openstack/dnsmasq-dns-c44667757-9pdjw" Mar 10 20:13:03 crc kubenswrapper[4861]: I0310 20:13:03.164110 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/14c26f8a-53c9-457c-b4fc-28b55cdedc9a-config\") pod \"dnsmasq-dns-c44667757-9pdjw\" (UID: \"14c26f8a-53c9-457c-b4fc-28b55cdedc9a\") " pod="openstack/dnsmasq-dns-c44667757-9pdjw" Mar 10 20:13:03 crc kubenswrapper[4861]: I0310 20:13:03.164512 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b22af4e8-0be9-4dff-9979-bf8f72d3f017-config\") pod \"dnsmasq-dns-55c76fd6b7-7qqk7\" (UID: \"b22af4e8-0be9-4dff-9979-bf8f72d3f017\") " pod="openstack/dnsmasq-dns-55c76fd6b7-7qqk7" Mar 10 20:13:03 crc kubenswrapper[4861]: I0310 20:13:03.164664 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b22af4e8-0be9-4dff-9979-bf8f72d3f017-dns-svc\") pod \"dnsmasq-dns-55c76fd6b7-7qqk7\" (UID: \"b22af4e8-0be9-4dff-9979-bf8f72d3f017\") " pod="openstack/dnsmasq-dns-55c76fd6b7-7qqk7" Mar 10 20:13:03 crc kubenswrapper[4861]: I0310 20:13:03.195162 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q6lwn\" (UniqueName: \"kubernetes.io/projected/b22af4e8-0be9-4dff-9979-bf8f72d3f017-kube-api-access-q6lwn\") pod 
\"dnsmasq-dns-55c76fd6b7-7qqk7\" (UID: \"b22af4e8-0be9-4dff-9979-bf8f72d3f017\") " pod="openstack/dnsmasq-dns-55c76fd6b7-7qqk7" Mar 10 20:13:03 crc kubenswrapper[4861]: I0310 20:13:03.199969 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-55c76fd6b7-7qqk7"] Mar 10 20:13:03 crc kubenswrapper[4861]: I0310 20:13:03.200443 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-55c76fd6b7-7qqk7" Mar 10 20:13:03 crc kubenswrapper[4861]: I0310 20:13:03.215032 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j7sph\" (UniqueName: \"kubernetes.io/projected/14c26f8a-53c9-457c-b4fc-28b55cdedc9a-kube-api-access-j7sph\") pod \"dnsmasq-dns-c44667757-9pdjw\" (UID: \"14c26f8a-53c9-457c-b4fc-28b55cdedc9a\") " pod="openstack/dnsmasq-dns-c44667757-9pdjw" Mar 10 20:13:03 crc kubenswrapper[4861]: I0310 20:13:03.215383 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-c44667757-9pdjw" Mar 10 20:13:03 crc kubenswrapper[4861]: I0310 20:13:03.234657 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5fb77f9685-qxpzs"] Mar 10 20:13:03 crc kubenswrapper[4861]: I0310 20:13:03.236654 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5fb77f9685-qxpzs" Mar 10 20:13:03 crc kubenswrapper[4861]: I0310 20:13:03.253033 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5fb77f9685-qxpzs"] Mar 10 20:13:03 crc kubenswrapper[4861]: I0310 20:13:03.366779 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b28c3324-39f9-4a5f-b120-d20a44a48a9c-dns-svc\") pod \"dnsmasq-dns-5fb77f9685-qxpzs\" (UID: \"b28c3324-39f9-4a5f-b120-d20a44a48a9c\") " pod="openstack/dnsmasq-dns-5fb77f9685-qxpzs" Mar 10 20:13:03 crc kubenswrapper[4861]: I0310 20:13:03.367081 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-62j6m\" (UniqueName: \"kubernetes.io/projected/b28c3324-39f9-4a5f-b120-d20a44a48a9c-kube-api-access-62j6m\") pod \"dnsmasq-dns-5fb77f9685-qxpzs\" (UID: \"b28c3324-39f9-4a5f-b120-d20a44a48a9c\") " pod="openstack/dnsmasq-dns-5fb77f9685-qxpzs" Mar 10 20:13:03 crc kubenswrapper[4861]: I0310 20:13:03.367117 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b28c3324-39f9-4a5f-b120-d20a44a48a9c-config\") pod \"dnsmasq-dns-5fb77f9685-qxpzs\" (UID: \"b28c3324-39f9-4a5f-b120-d20a44a48a9c\") " pod="openstack/dnsmasq-dns-5fb77f9685-qxpzs" Mar 10 20:13:03 crc kubenswrapper[4861]: I0310 20:13:03.471317 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-62j6m\" (UniqueName: \"kubernetes.io/projected/b28c3324-39f9-4a5f-b120-d20a44a48a9c-kube-api-access-62j6m\") pod \"dnsmasq-dns-5fb77f9685-qxpzs\" (UID: \"b28c3324-39f9-4a5f-b120-d20a44a48a9c\") " pod="openstack/dnsmasq-dns-5fb77f9685-qxpzs" Mar 10 20:13:03 crc kubenswrapper[4861]: I0310 20:13:03.471384 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" 
(UniqueName: \"kubernetes.io/configmap/b28c3324-39f9-4a5f-b120-d20a44a48a9c-config\") pod \"dnsmasq-dns-5fb77f9685-qxpzs\" (UID: \"b28c3324-39f9-4a5f-b120-d20a44a48a9c\") " pod="openstack/dnsmasq-dns-5fb77f9685-qxpzs" Mar 10 20:13:03 crc kubenswrapper[4861]: I0310 20:13:03.471450 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b28c3324-39f9-4a5f-b120-d20a44a48a9c-dns-svc\") pod \"dnsmasq-dns-5fb77f9685-qxpzs\" (UID: \"b28c3324-39f9-4a5f-b120-d20a44a48a9c\") " pod="openstack/dnsmasq-dns-5fb77f9685-qxpzs" Mar 10 20:13:03 crc kubenswrapper[4861]: I0310 20:13:03.472302 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b28c3324-39f9-4a5f-b120-d20a44a48a9c-dns-svc\") pod \"dnsmasq-dns-5fb77f9685-qxpzs\" (UID: \"b28c3324-39f9-4a5f-b120-d20a44a48a9c\") " pod="openstack/dnsmasq-dns-5fb77f9685-qxpzs" Mar 10 20:13:03 crc kubenswrapper[4861]: I0310 20:13:03.473823 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b28c3324-39f9-4a5f-b120-d20a44a48a9c-config\") pod \"dnsmasq-dns-5fb77f9685-qxpzs\" (UID: \"b28c3324-39f9-4a5f-b120-d20a44a48a9c\") " pod="openstack/dnsmasq-dns-5fb77f9685-qxpzs" Mar 10 20:13:03 crc kubenswrapper[4861]: I0310 20:13:03.500476 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-62j6m\" (UniqueName: \"kubernetes.io/projected/b28c3324-39f9-4a5f-b120-d20a44a48a9c-kube-api-access-62j6m\") pod \"dnsmasq-dns-5fb77f9685-qxpzs\" (UID: \"b28c3324-39f9-4a5f-b120-d20a44a48a9c\") " pod="openstack/dnsmasq-dns-5fb77f9685-qxpzs" Mar 10 20:13:03 crc kubenswrapper[4861]: I0310 20:13:03.524652 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-c44667757-9pdjw"] Mar 10 20:13:03 crc kubenswrapper[4861]: I0310 20:13:03.549270 4861 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/dnsmasq-dns-ff89b6977-wwftj"] Mar 10 20:13:03 crc kubenswrapper[4861]: I0310 20:13:03.550553 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-ff89b6977-wwftj" Mar 10 20:13:03 crc kubenswrapper[4861]: I0310 20:13:03.562414 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-ff89b6977-wwftj"] Mar 10 20:13:03 crc kubenswrapper[4861]: I0310 20:13:03.610385 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5fb77f9685-qxpzs" Mar 10 20:13:03 crc kubenswrapper[4861]: I0310 20:13:03.674119 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/31f9d3df-2233-452c-835c-2da6c3ffb61e-dns-svc\") pod \"dnsmasq-dns-ff89b6977-wwftj\" (UID: \"31f9d3df-2233-452c-835c-2da6c3ffb61e\") " pod="openstack/dnsmasq-dns-ff89b6977-wwftj" Mar 10 20:13:03 crc kubenswrapper[4861]: I0310 20:13:03.674169 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-txhjk\" (UniqueName: \"kubernetes.io/projected/31f9d3df-2233-452c-835c-2da6c3ffb61e-kube-api-access-txhjk\") pod \"dnsmasq-dns-ff89b6977-wwftj\" (UID: \"31f9d3df-2233-452c-835c-2da6c3ffb61e\") " pod="openstack/dnsmasq-dns-ff89b6977-wwftj" Mar 10 20:13:03 crc kubenswrapper[4861]: I0310 20:13:03.674532 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/31f9d3df-2233-452c-835c-2da6c3ffb61e-config\") pod \"dnsmasq-dns-ff89b6977-wwftj\" (UID: \"31f9d3df-2233-452c-835c-2da6c3ffb61e\") " pod="openstack/dnsmasq-dns-ff89b6977-wwftj" Mar 10 20:13:03 crc kubenswrapper[4861]: I0310 20:13:03.751959 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-55c76fd6b7-7qqk7"] Mar 10 20:13:03 crc kubenswrapper[4861]: I0310 
20:13:03.757220 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-c44667757-9pdjw"] Mar 10 20:13:03 crc kubenswrapper[4861]: I0310 20:13:03.775339 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/31f9d3df-2233-452c-835c-2da6c3ffb61e-config\") pod \"dnsmasq-dns-ff89b6977-wwftj\" (UID: \"31f9d3df-2233-452c-835c-2da6c3ffb61e\") " pod="openstack/dnsmasq-dns-ff89b6977-wwftj" Mar 10 20:13:03 crc kubenswrapper[4861]: I0310 20:13:03.775376 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/31f9d3df-2233-452c-835c-2da6c3ffb61e-dns-svc\") pod \"dnsmasq-dns-ff89b6977-wwftj\" (UID: \"31f9d3df-2233-452c-835c-2da6c3ffb61e\") " pod="openstack/dnsmasq-dns-ff89b6977-wwftj" Mar 10 20:13:03 crc kubenswrapper[4861]: I0310 20:13:03.775402 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-txhjk\" (UniqueName: \"kubernetes.io/projected/31f9d3df-2233-452c-835c-2da6c3ffb61e-kube-api-access-txhjk\") pod \"dnsmasq-dns-ff89b6977-wwftj\" (UID: \"31f9d3df-2233-452c-835c-2da6c3ffb61e\") " pod="openstack/dnsmasq-dns-ff89b6977-wwftj" Mar 10 20:13:03 crc kubenswrapper[4861]: I0310 20:13:03.776741 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/31f9d3df-2233-452c-835c-2da6c3ffb61e-dns-svc\") pod \"dnsmasq-dns-ff89b6977-wwftj\" (UID: \"31f9d3df-2233-452c-835c-2da6c3ffb61e\") " pod="openstack/dnsmasq-dns-ff89b6977-wwftj" Mar 10 20:13:03 crc kubenswrapper[4861]: I0310 20:13:03.776933 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/31f9d3df-2233-452c-835c-2da6c3ffb61e-config\") pod \"dnsmasq-dns-ff89b6977-wwftj\" (UID: \"31f9d3df-2233-452c-835c-2da6c3ffb61e\") " pod="openstack/dnsmasq-dns-ff89b6977-wwftj" Mar 10 
20:13:03 crc kubenswrapper[4861]: I0310 20:13:03.790142 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-txhjk\" (UniqueName: \"kubernetes.io/projected/31f9d3df-2233-452c-835c-2da6c3ffb61e-kube-api-access-txhjk\") pod \"dnsmasq-dns-ff89b6977-wwftj\" (UID: \"31f9d3df-2233-452c-835c-2da6c3ffb61e\") " pod="openstack/dnsmasq-dns-ff89b6977-wwftj" Mar 10 20:13:03 crc kubenswrapper[4861]: I0310 20:13:03.873166 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-ff89b6977-wwftj" Mar 10 20:13:04 crc kubenswrapper[4861]: I0310 20:13:04.080215 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5fb77f9685-qxpzs"] Mar 10 20:13:04 crc kubenswrapper[4861]: W0310 20:13:04.094364 4861 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb28c3324_39f9_4a5f_b120_d20a44a48a9c.slice/crio-0aa7b38e38e64b2dfc0e50dc59e0246837675cde68fae545498192f1dcaebe75 WatchSource:0}: Error finding container 0aa7b38e38e64b2dfc0e50dc59e0246837675cde68fae545498192f1dcaebe75: Status 404 returned error can't find the container with id 0aa7b38e38e64b2dfc0e50dc59e0246837675cde68fae545498192f1dcaebe75 Mar 10 20:13:04 crc kubenswrapper[4861]: I0310 20:13:04.283604 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-ff89b6977-wwftj"] Mar 10 20:13:04 crc kubenswrapper[4861]: W0310 20:13:04.297438 4861 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod31f9d3df_2233_452c_835c_2da6c3ffb61e.slice/crio-b799a14c4d62b4884c156fad65475eedf8749aaab8dcfec24a54e2dc81ae89a5 WatchSource:0}: Error finding container b799a14c4d62b4884c156fad65475eedf8749aaab8dcfec24a54e2dc81ae89a5: Status 404 returned error can't find the container with id b799a14c4d62b4884c156fad65475eedf8749aaab8dcfec24a54e2dc81ae89a5 Mar 10 20:13:04 crc 
kubenswrapper[4861]: I0310 20:13:04.420765 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Mar 10 20:13:04 crc kubenswrapper[4861]: I0310 20:13:04.421823 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Mar 10 20:13:04 crc kubenswrapper[4861]: I0310 20:13:04.426146 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Mar 10 20:13:04 crc kubenswrapper[4861]: I0310 20:13:04.426501 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Mar 10 20:13:04 crc kubenswrapper[4861]: I0310 20:13:04.426772 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Mar 10 20:13:04 crc kubenswrapper[4861]: I0310 20:13:04.426928 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Mar 10 20:13:04 crc kubenswrapper[4861]: I0310 20:13:04.427006 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Mar 10 20:13:04 crc kubenswrapper[4861]: I0310 20:13:04.427009 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Mar 10 20:13:04 crc kubenswrapper[4861]: I0310 20:13:04.427345 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-k86vm" Mar 10 20:13:04 crc kubenswrapper[4861]: I0310 20:13:04.438509 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Mar 10 20:13:04 crc kubenswrapper[4861]: I0310 20:13:04.587452 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/daacc843-cac9-4039-a082-56a2fd9391f9-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: 
\"daacc843-cac9-4039-a082-56a2fd9391f9\") " pod="openstack/rabbitmq-cell1-server-0" Mar 10 20:13:04 crc kubenswrapper[4861]: I0310 20:13:04.587822 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-0d291659-ee22-478e-ad65-e15c40684608\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0d291659-ee22-478e-ad65-e15c40684608\") pod \"rabbitmq-cell1-server-0\" (UID: \"daacc843-cac9-4039-a082-56a2fd9391f9\") " pod="openstack/rabbitmq-cell1-server-0" Mar 10 20:13:04 crc kubenswrapper[4861]: I0310 20:13:04.587852 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/daacc843-cac9-4039-a082-56a2fd9391f9-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"daacc843-cac9-4039-a082-56a2fd9391f9\") " pod="openstack/rabbitmq-cell1-server-0" Mar 10 20:13:04 crc kubenswrapper[4861]: I0310 20:13:04.587884 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zxzzw\" (UniqueName: \"kubernetes.io/projected/daacc843-cac9-4039-a082-56a2fd9391f9-kube-api-access-zxzzw\") pod \"rabbitmq-cell1-server-0\" (UID: \"daacc843-cac9-4039-a082-56a2fd9391f9\") " pod="openstack/rabbitmq-cell1-server-0" Mar 10 20:13:04 crc kubenswrapper[4861]: I0310 20:13:04.587970 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/daacc843-cac9-4039-a082-56a2fd9391f9-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"daacc843-cac9-4039-a082-56a2fd9391f9\") " pod="openstack/rabbitmq-cell1-server-0" Mar 10 20:13:04 crc kubenswrapper[4861]: I0310 20:13:04.588000 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: 
\"kubernetes.io/empty-dir/daacc843-cac9-4039-a082-56a2fd9391f9-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"daacc843-cac9-4039-a082-56a2fd9391f9\") " pod="openstack/rabbitmq-cell1-server-0" Mar 10 20:13:04 crc kubenswrapper[4861]: I0310 20:13:04.588032 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/daacc843-cac9-4039-a082-56a2fd9391f9-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"daacc843-cac9-4039-a082-56a2fd9391f9\") " pod="openstack/rabbitmq-cell1-server-0" Mar 10 20:13:04 crc kubenswrapper[4861]: I0310 20:13:04.588057 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/daacc843-cac9-4039-a082-56a2fd9391f9-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"daacc843-cac9-4039-a082-56a2fd9391f9\") " pod="openstack/rabbitmq-cell1-server-0" Mar 10 20:13:04 crc kubenswrapper[4861]: I0310 20:13:04.588087 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/daacc843-cac9-4039-a082-56a2fd9391f9-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"daacc843-cac9-4039-a082-56a2fd9391f9\") " pod="openstack/rabbitmq-cell1-server-0" Mar 10 20:13:04 crc kubenswrapper[4861]: I0310 20:13:04.588129 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/daacc843-cac9-4039-a082-56a2fd9391f9-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"daacc843-cac9-4039-a082-56a2fd9391f9\") " pod="openstack/rabbitmq-cell1-server-0" Mar 10 20:13:04 crc kubenswrapper[4861]: I0310 20:13:04.588159 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" 
(UniqueName: \"kubernetes.io/projected/daacc843-cac9-4039-a082-56a2fd9391f9-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"daacc843-cac9-4039-a082-56a2fd9391f9\") " pod="openstack/rabbitmq-cell1-server-0" Mar 10 20:13:04 crc kubenswrapper[4861]: I0310 20:13:04.692377 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Mar 10 20:13:04 crc kubenswrapper[4861]: I0310 20:13:04.693390 4861 generic.go:334] "Generic (PLEG): container finished" podID="14c26f8a-53c9-457c-b4fc-28b55cdedc9a" containerID="7d84ad25fe8c63fd22f9530440f14c40828f0ba32c15bfbe90d2b22fb0695dac" exitCode=0 Mar 10 20:13:04 crc kubenswrapper[4861]: I0310 20:13:04.695122 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-c44667757-9pdjw" event={"ID":"14c26f8a-53c9-457c-b4fc-28b55cdedc9a","Type":"ContainerDied","Data":"7d84ad25fe8c63fd22f9530440f14c40828f0ba32c15bfbe90d2b22fb0695dac"} Mar 10 20:13:04 crc kubenswrapper[4861]: I0310 20:13:04.695150 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-c44667757-9pdjw" event={"ID":"14c26f8a-53c9-457c-b4fc-28b55cdedc9a","Type":"ContainerStarted","Data":"42687f69a9420384e3047ec5eb3f835c5f58e9388e1cc53f9a2a68677aace242"} Mar 10 20:13:04 crc kubenswrapper[4861]: I0310 20:13:04.695220 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Mar 10 20:13:04 crc kubenswrapper[4861]: I0310 20:13:04.695318 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/daacc843-cac9-4039-a082-56a2fd9391f9-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"daacc843-cac9-4039-a082-56a2fd9391f9\") " pod="openstack/rabbitmq-cell1-server-0" Mar 10 20:13:04 crc kubenswrapper[4861]: I0310 20:13:04.695398 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/daacc843-cac9-4039-a082-56a2fd9391f9-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"daacc843-cac9-4039-a082-56a2fd9391f9\") " pod="openstack/rabbitmq-cell1-server-0" Mar 10 20:13:04 crc kubenswrapper[4861]: I0310 20:13:04.699047 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/daacc843-cac9-4039-a082-56a2fd9391f9-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"daacc843-cac9-4039-a082-56a2fd9391f9\") " pod="openstack/rabbitmq-cell1-server-0" Mar 10 20:13:04 crc kubenswrapper[4861]: I0310 20:13:04.699096 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/daacc843-cac9-4039-a082-56a2fd9391f9-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"daacc843-cac9-4039-a082-56a2fd9391f9\") " pod="openstack/rabbitmq-cell1-server-0" Mar 10 20:13:04 crc kubenswrapper[4861]: I0310 20:13:04.699122 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/daacc843-cac9-4039-a082-56a2fd9391f9-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"daacc843-cac9-4039-a082-56a2fd9391f9\") " pod="openstack/rabbitmq-cell1-server-0" Mar 10 20:13:04 crc 
kubenswrapper[4861]: I0310 20:13:04.699138 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/daacc843-cac9-4039-a082-56a2fd9391f9-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"daacc843-cac9-4039-a082-56a2fd9391f9\") " pod="openstack/rabbitmq-cell1-server-0" Mar 10 20:13:04 crc kubenswrapper[4861]: I0310 20:13:04.699171 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/daacc843-cac9-4039-a082-56a2fd9391f9-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"daacc843-cac9-4039-a082-56a2fd9391f9\") " pod="openstack/rabbitmq-cell1-server-0" Mar 10 20:13:04 crc kubenswrapper[4861]: I0310 20:13:04.699190 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/daacc843-cac9-4039-a082-56a2fd9391f9-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"daacc843-cac9-4039-a082-56a2fd9391f9\") " pod="openstack/rabbitmq-cell1-server-0" Mar 10 20:13:04 crc kubenswrapper[4861]: I0310 20:13:04.699212 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/daacc843-cac9-4039-a082-56a2fd9391f9-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"daacc843-cac9-4039-a082-56a2fd9391f9\") " pod="openstack/rabbitmq-cell1-server-0" Mar 10 20:13:04 crc kubenswrapper[4861]: I0310 20:13:04.699234 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-0d291659-ee22-478e-ad65-e15c40684608\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0d291659-ee22-478e-ad65-e15c40684608\") pod \"rabbitmq-cell1-server-0\" (UID: \"daacc843-cac9-4039-a082-56a2fd9391f9\") " pod="openstack/rabbitmq-cell1-server-0" Mar 10 20:13:04 crc kubenswrapper[4861]: I0310 20:13:04.699258 4861 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/daacc843-cac9-4039-a082-56a2fd9391f9-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"daacc843-cac9-4039-a082-56a2fd9391f9\") " pod="openstack/rabbitmq-cell1-server-0" Mar 10 20:13:04 crc kubenswrapper[4861]: I0310 20:13:04.699285 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zxzzw\" (UniqueName: \"kubernetes.io/projected/daacc843-cac9-4039-a082-56a2fd9391f9-kube-api-access-zxzzw\") pod \"rabbitmq-cell1-server-0\" (UID: \"daacc843-cac9-4039-a082-56a2fd9391f9\") " pod="openstack/rabbitmq-cell1-server-0" Mar 10 20:13:04 crc kubenswrapper[4861]: I0310 20:13:04.699757 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/daacc843-cac9-4039-a082-56a2fd9391f9-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"daacc843-cac9-4039-a082-56a2fd9391f9\") " pod="openstack/rabbitmq-cell1-server-0" Mar 10 20:13:04 crc kubenswrapper[4861]: I0310 20:13:04.700627 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/daacc843-cac9-4039-a082-56a2fd9391f9-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"daacc843-cac9-4039-a082-56a2fd9391f9\") " pod="openstack/rabbitmq-cell1-server-0" Mar 10 20:13:04 crc kubenswrapper[4861]: I0310 20:13:04.701134 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/daacc843-cac9-4039-a082-56a2fd9391f9-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"daacc843-cac9-4039-a082-56a2fd9391f9\") " pod="openstack/rabbitmq-cell1-server-0" Mar 10 20:13:04 crc kubenswrapper[4861]: I0310 20:13:04.704125 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Mar 10 20:13:04 crc 
kubenswrapper[4861]: I0310 20:13:04.704384 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Mar 10 20:13:04 crc kubenswrapper[4861]: I0310 20:13:04.704558 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Mar 10 20:13:04 crc kubenswrapper[4861]: I0310 20:13:04.704748 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Mar 10 20:13:04 crc kubenswrapper[4861]: I0310 20:13:04.704909 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Mar 10 20:13:04 crc kubenswrapper[4861]: I0310 20:13:04.705081 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Mar 10 20:13:04 crc kubenswrapper[4861]: I0310 20:13:04.705372 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-zhvjm" Mar 10 20:13:04 crc kubenswrapper[4861]: I0310 20:13:04.716298 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5fb77f9685-qxpzs" event={"ID":"b28c3324-39f9-4a5f-b120-d20a44a48a9c","Type":"ContainerDied","Data":"0cd8ebcb900167d93cea5b742f467078d15681a421efb4943383d6c443af30c7"} Mar 10 20:13:04 crc kubenswrapper[4861]: I0310 20:13:04.716519 4861 generic.go:334] "Generic (PLEG): container finished" podID="b28c3324-39f9-4a5f-b120-d20a44a48a9c" containerID="0cd8ebcb900167d93cea5b742f467078d15681a421efb4943383d6c443af30c7" exitCode=0 Mar 10 20:13:04 crc kubenswrapper[4861]: I0310 20:13:04.716616 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5fb77f9685-qxpzs" event={"ID":"b28c3324-39f9-4a5f-b120-d20a44a48a9c","Type":"ContainerStarted","Data":"0aa7b38e38e64b2dfc0e50dc59e0246837675cde68fae545498192f1dcaebe75"} Mar 10 20:13:04 crc kubenswrapper[4861]: I0310 20:13:04.717841 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/daacc843-cac9-4039-a082-56a2fd9391f9-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"daacc843-cac9-4039-a082-56a2fd9391f9\") " pod="openstack/rabbitmq-cell1-server-0" Mar 10 20:13:04 crc kubenswrapper[4861]: I0310 20:13:04.719346 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/daacc843-cac9-4039-a082-56a2fd9391f9-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"daacc843-cac9-4039-a082-56a2fd9391f9\") " pod="openstack/rabbitmq-cell1-server-0" Mar 10 20:13:04 crc kubenswrapper[4861]: I0310 20:13:04.719417 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/daacc843-cac9-4039-a082-56a2fd9391f9-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"daacc843-cac9-4039-a082-56a2fd9391f9\") " pod="openstack/rabbitmq-cell1-server-0" Mar 10 20:13:04 crc kubenswrapper[4861]: I0310 20:13:04.726175 4861 generic.go:334] "Generic (PLEG): container finished" podID="31f9d3df-2233-452c-835c-2da6c3ffb61e" containerID="c77c852521cd266e9338df6772f0b8f210bb220cfa0ac0c2ba2ab96ebea71922" exitCode=0 Mar 10 20:13:04 crc kubenswrapper[4861]: I0310 20:13:04.726245 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-ff89b6977-wwftj" event={"ID":"31f9d3df-2233-452c-835c-2da6c3ffb61e","Type":"ContainerDied","Data":"c77c852521cd266e9338df6772f0b8f210bb220cfa0ac0c2ba2ab96ebea71922"} Mar 10 20:13:04 crc kubenswrapper[4861]: I0310 20:13:04.726278 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-ff89b6977-wwftj" event={"ID":"31f9d3df-2233-452c-835c-2da6c3ffb61e","Type":"ContainerStarted","Data":"b799a14c4d62b4884c156fad65475eedf8749aaab8dcfec24a54e2dc81ae89a5"} Mar 10 20:13:04 crc kubenswrapper[4861]: I0310 20:13:04.728229 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/daacc843-cac9-4039-a082-56a2fd9391f9-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"daacc843-cac9-4039-a082-56a2fd9391f9\") " pod="openstack/rabbitmq-cell1-server-0" Mar 10 20:13:04 crc kubenswrapper[4861]: I0310 20:13:04.738522 4861 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Mar 10 20:13:04 crc kubenswrapper[4861]: I0310 20:13:04.738566 4861 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-0d291659-ee22-478e-ad65-e15c40684608\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0d291659-ee22-478e-ad65-e15c40684608\") pod \"rabbitmq-cell1-server-0\" (UID: \"daacc843-cac9-4039-a082-56a2fd9391f9\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/d88da1940136618f2f85840ee5cac31587a56be4a4528ff96ecb79185fdc0bfd/globalmount\"" pod="openstack/rabbitmq-cell1-server-0" Mar 10 20:13:04 crc kubenswrapper[4861]: I0310 20:13:04.750533 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/daacc843-cac9-4039-a082-56a2fd9391f9-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"daacc843-cac9-4039-a082-56a2fd9391f9\") " pod="openstack/rabbitmq-cell1-server-0" Mar 10 20:13:04 crc kubenswrapper[4861]: I0310 20:13:04.782583 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zxzzw\" (UniqueName: \"kubernetes.io/projected/daacc843-cac9-4039-a082-56a2fd9391f9-kube-api-access-zxzzw\") pod \"rabbitmq-cell1-server-0\" (UID: \"daacc843-cac9-4039-a082-56a2fd9391f9\") " pod="openstack/rabbitmq-cell1-server-0" Mar 10 20:13:04 crc kubenswrapper[4861]: I0310 20:13:04.783785 4861 generic.go:334] "Generic (PLEG): container finished" podID="b22af4e8-0be9-4dff-9979-bf8f72d3f017" 
containerID="882306c5d626eb69aa339a80b43182497deb9c4fd69b25de6250c0eb09c2dee4" exitCode=0 Mar 10 20:13:04 crc kubenswrapper[4861]: I0310 20:13:04.783820 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55c76fd6b7-7qqk7" event={"ID":"b22af4e8-0be9-4dff-9979-bf8f72d3f017","Type":"ContainerDied","Data":"882306c5d626eb69aa339a80b43182497deb9c4fd69b25de6250c0eb09c2dee4"} Mar 10 20:13:04 crc kubenswrapper[4861]: I0310 20:13:04.783843 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55c76fd6b7-7qqk7" event={"ID":"b22af4e8-0be9-4dff-9979-bf8f72d3f017","Type":"ContainerStarted","Data":"c04395822994311a3d82af9dd6dcff90d565c1c74ec5520283e493dc23d75080"} Mar 10 20:13:04 crc kubenswrapper[4861]: I0310 20:13:04.800771 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Mar 10 20:13:04 crc kubenswrapper[4861]: I0310 20:13:04.801451 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/435b131b-9c41-4320-9bf9-47bdb8011c07-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"435b131b-9c41-4320-9bf9-47bdb8011c07\") " pod="openstack/rabbitmq-server-0" Mar 10 20:13:04 crc kubenswrapper[4861]: I0310 20:13:04.801478 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/435b131b-9c41-4320-9bf9-47bdb8011c07-pod-info\") pod \"rabbitmq-server-0\" (UID: \"435b131b-9c41-4320-9bf9-47bdb8011c07\") " pod="openstack/rabbitmq-server-0" Mar 10 20:13:04 crc kubenswrapper[4861]: I0310 20:13:04.801507 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/435b131b-9c41-4320-9bf9-47bdb8011c07-server-conf\") pod \"rabbitmq-server-0\" (UID: \"435b131b-9c41-4320-9bf9-47bdb8011c07\") " 
pod="openstack/rabbitmq-server-0" Mar 10 20:13:04 crc kubenswrapper[4861]: I0310 20:13:04.801545 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/435b131b-9c41-4320-9bf9-47bdb8011c07-config-data\") pod \"rabbitmq-server-0\" (UID: \"435b131b-9c41-4320-9bf9-47bdb8011c07\") " pod="openstack/rabbitmq-server-0" Mar 10 20:13:04 crc kubenswrapper[4861]: I0310 20:13:04.801563 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/435b131b-9c41-4320-9bf9-47bdb8011c07-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"435b131b-9c41-4320-9bf9-47bdb8011c07\") " pod="openstack/rabbitmq-server-0" Mar 10 20:13:04 crc kubenswrapper[4861]: I0310 20:13:04.801592 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nl4zf\" (UniqueName: \"kubernetes.io/projected/435b131b-9c41-4320-9bf9-47bdb8011c07-kube-api-access-nl4zf\") pod \"rabbitmq-server-0\" (UID: \"435b131b-9c41-4320-9bf9-47bdb8011c07\") " pod="openstack/rabbitmq-server-0" Mar 10 20:13:04 crc kubenswrapper[4861]: I0310 20:13:04.801609 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/435b131b-9c41-4320-9bf9-47bdb8011c07-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"435b131b-9c41-4320-9bf9-47bdb8011c07\") " pod="openstack/rabbitmq-server-0" Mar 10 20:13:04 crc kubenswrapper[4861]: I0310 20:13:04.801626 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-c1aca7ff-fbec-4ab6-8748-d984f7c669f7\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c1aca7ff-fbec-4ab6-8748-d984f7c669f7\") pod \"rabbitmq-server-0\" (UID: \"435b131b-9c41-4320-9bf9-47bdb8011c07\") " 
pod="openstack/rabbitmq-server-0" Mar 10 20:13:04 crc kubenswrapper[4861]: I0310 20:13:04.801642 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/435b131b-9c41-4320-9bf9-47bdb8011c07-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"435b131b-9c41-4320-9bf9-47bdb8011c07\") " pod="openstack/rabbitmq-server-0" Mar 10 20:13:04 crc kubenswrapper[4861]: I0310 20:13:04.801686 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/435b131b-9c41-4320-9bf9-47bdb8011c07-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"435b131b-9c41-4320-9bf9-47bdb8011c07\") " pod="openstack/rabbitmq-server-0" Mar 10 20:13:04 crc kubenswrapper[4861]: I0310 20:13:04.801747 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/435b131b-9c41-4320-9bf9-47bdb8011c07-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"435b131b-9c41-4320-9bf9-47bdb8011c07\") " pod="openstack/rabbitmq-server-0" Mar 10 20:13:04 crc kubenswrapper[4861]: I0310 20:13:04.907503 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/435b131b-9c41-4320-9bf9-47bdb8011c07-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"435b131b-9c41-4320-9bf9-47bdb8011c07\") " pod="openstack/rabbitmq-server-0" Mar 10 20:13:04 crc kubenswrapper[4861]: I0310 20:13:04.907543 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/435b131b-9c41-4320-9bf9-47bdb8011c07-pod-info\") pod \"rabbitmq-server-0\" (UID: \"435b131b-9c41-4320-9bf9-47bdb8011c07\") " pod="openstack/rabbitmq-server-0" Mar 10 20:13:04 crc kubenswrapper[4861]: I0310 20:13:04.907583 4861 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/435b131b-9c41-4320-9bf9-47bdb8011c07-server-conf\") pod \"rabbitmq-server-0\" (UID: \"435b131b-9c41-4320-9bf9-47bdb8011c07\") " pod="openstack/rabbitmq-server-0" Mar 10 20:13:04 crc kubenswrapper[4861]: I0310 20:13:04.907624 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/435b131b-9c41-4320-9bf9-47bdb8011c07-config-data\") pod \"rabbitmq-server-0\" (UID: \"435b131b-9c41-4320-9bf9-47bdb8011c07\") " pod="openstack/rabbitmq-server-0" Mar 10 20:13:04 crc kubenswrapper[4861]: I0310 20:13:04.907651 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/435b131b-9c41-4320-9bf9-47bdb8011c07-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"435b131b-9c41-4320-9bf9-47bdb8011c07\") " pod="openstack/rabbitmq-server-0" Mar 10 20:13:04 crc kubenswrapper[4861]: I0310 20:13:04.907680 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nl4zf\" (UniqueName: \"kubernetes.io/projected/435b131b-9c41-4320-9bf9-47bdb8011c07-kube-api-access-nl4zf\") pod \"rabbitmq-server-0\" (UID: \"435b131b-9c41-4320-9bf9-47bdb8011c07\") " pod="openstack/rabbitmq-server-0" Mar 10 20:13:04 crc kubenswrapper[4861]: I0310 20:13:04.907702 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/435b131b-9c41-4320-9bf9-47bdb8011c07-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"435b131b-9c41-4320-9bf9-47bdb8011c07\") " pod="openstack/rabbitmq-server-0" Mar 10 20:13:04 crc kubenswrapper[4861]: I0310 20:13:04.907732 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-c1aca7ff-fbec-4ab6-8748-d984f7c669f7\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c1aca7ff-fbec-4ab6-8748-d984f7c669f7\") pod \"rabbitmq-server-0\" (UID: \"435b131b-9c41-4320-9bf9-47bdb8011c07\") " pod="openstack/rabbitmq-server-0" Mar 10 20:13:04 crc kubenswrapper[4861]: I0310 20:13:04.907750 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/435b131b-9c41-4320-9bf9-47bdb8011c07-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"435b131b-9c41-4320-9bf9-47bdb8011c07\") " pod="openstack/rabbitmq-server-0" Mar 10 20:13:04 crc kubenswrapper[4861]: I0310 20:13:04.907782 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/435b131b-9c41-4320-9bf9-47bdb8011c07-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"435b131b-9c41-4320-9bf9-47bdb8011c07\") " pod="openstack/rabbitmq-server-0" Mar 10 20:13:04 crc kubenswrapper[4861]: I0310 20:13:04.907816 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/435b131b-9c41-4320-9bf9-47bdb8011c07-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"435b131b-9c41-4320-9bf9-47bdb8011c07\") " pod="openstack/rabbitmq-server-0" Mar 10 20:13:04 crc kubenswrapper[4861]: I0310 20:13:04.908560 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/435b131b-9c41-4320-9bf9-47bdb8011c07-config-data\") pod \"rabbitmq-server-0\" (UID: \"435b131b-9c41-4320-9bf9-47bdb8011c07\") " pod="openstack/rabbitmq-server-0" Mar 10 20:13:04 crc kubenswrapper[4861]: I0310 20:13:04.908658 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/435b131b-9c41-4320-9bf9-47bdb8011c07-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: 
\"435b131b-9c41-4320-9bf9-47bdb8011c07\") " pod="openstack/rabbitmq-server-0" Mar 10 20:13:04 crc kubenswrapper[4861]: I0310 20:13:04.908817 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/435b131b-9c41-4320-9bf9-47bdb8011c07-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"435b131b-9c41-4320-9bf9-47bdb8011c07\") " pod="openstack/rabbitmq-server-0" Mar 10 20:13:04 crc kubenswrapper[4861]: I0310 20:13:04.909577 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/435b131b-9c41-4320-9bf9-47bdb8011c07-server-conf\") pod \"rabbitmq-server-0\" (UID: \"435b131b-9c41-4320-9bf9-47bdb8011c07\") " pod="openstack/rabbitmq-server-0" Mar 10 20:13:04 crc kubenswrapper[4861]: I0310 20:13:04.912524 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/435b131b-9c41-4320-9bf9-47bdb8011c07-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"435b131b-9c41-4320-9bf9-47bdb8011c07\") " pod="openstack/rabbitmq-server-0" Mar 10 20:13:04 crc kubenswrapper[4861]: I0310 20:13:04.914587 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/435b131b-9c41-4320-9bf9-47bdb8011c07-pod-info\") pod \"rabbitmq-server-0\" (UID: \"435b131b-9c41-4320-9bf9-47bdb8011c07\") " pod="openstack/rabbitmq-server-0" Mar 10 20:13:04 crc kubenswrapper[4861]: I0310 20:13:04.915299 4861 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Mar 10 20:13:04 crc kubenswrapper[4861]: I0310 20:13:04.915319 4861 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-c1aca7ff-fbec-4ab6-8748-d984f7c669f7\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c1aca7ff-fbec-4ab6-8748-d984f7c669f7\") pod \"rabbitmq-server-0\" (UID: \"435b131b-9c41-4320-9bf9-47bdb8011c07\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/6ecf906be4f2ad685f642f2ece948ef22761add8f5cb32610f54e9cfa66403aa/globalmount\"" pod="openstack/rabbitmq-server-0" Mar 10 20:13:04 crc kubenswrapper[4861]: I0310 20:13:04.920386 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/435b131b-9c41-4320-9bf9-47bdb8011c07-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"435b131b-9c41-4320-9bf9-47bdb8011c07\") " pod="openstack/rabbitmq-server-0" Mar 10 20:13:04 crc kubenswrapper[4861]: I0310 20:13:04.920770 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/435b131b-9c41-4320-9bf9-47bdb8011c07-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"435b131b-9c41-4320-9bf9-47bdb8011c07\") " pod="openstack/rabbitmq-server-0" Mar 10 20:13:04 crc kubenswrapper[4861]: I0310 20:13:04.925875 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-0d291659-ee22-478e-ad65-e15c40684608\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0d291659-ee22-478e-ad65-e15c40684608\") pod \"rabbitmq-cell1-server-0\" (UID: \"daacc843-cac9-4039-a082-56a2fd9391f9\") " pod="openstack/rabbitmq-cell1-server-0" Mar 10 20:13:04 crc kubenswrapper[4861]: I0310 20:13:04.929391 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/435b131b-9c41-4320-9bf9-47bdb8011c07-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: 
\"435b131b-9c41-4320-9bf9-47bdb8011c07\") " pod="openstack/rabbitmq-server-0" Mar 10 20:13:04 crc kubenswrapper[4861]: I0310 20:13:04.932588 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nl4zf\" (UniqueName: \"kubernetes.io/projected/435b131b-9c41-4320-9bf9-47bdb8011c07-kube-api-access-nl4zf\") pod \"rabbitmq-server-0\" (UID: \"435b131b-9c41-4320-9bf9-47bdb8011c07\") " pod="openstack/rabbitmq-server-0" Mar 10 20:13:04 crc kubenswrapper[4861]: I0310 20:13:04.964661 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-c1aca7ff-fbec-4ab6-8748-d984f7c669f7\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c1aca7ff-fbec-4ab6-8748-d984f7c669f7\") pod \"rabbitmq-server-0\" (UID: \"435b131b-9c41-4320-9bf9-47bdb8011c07\") " pod="openstack/rabbitmq-server-0" Mar 10 20:13:05 crc kubenswrapper[4861]: I0310 20:13:05.051303 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-c44667757-9pdjw" Mar 10 20:13:05 crc kubenswrapper[4861]: I0310 20:13:05.063224 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Mar 10 20:13:05 crc kubenswrapper[4861]: I0310 20:13:05.109491 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/14c26f8a-53c9-457c-b4fc-28b55cdedc9a-config\") pod \"14c26f8a-53c9-457c-b4fc-28b55cdedc9a\" (UID: \"14c26f8a-53c9-457c-b4fc-28b55cdedc9a\") " Mar 10 20:13:05 crc kubenswrapper[4861]: I0310 20:13:05.109544 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j7sph\" (UniqueName: \"kubernetes.io/projected/14c26f8a-53c9-457c-b4fc-28b55cdedc9a-kube-api-access-j7sph\") pod \"14c26f8a-53c9-457c-b4fc-28b55cdedc9a\" (UID: \"14c26f8a-53c9-457c-b4fc-28b55cdedc9a\") " Mar 10 20:13:05 crc kubenswrapper[4861]: I0310 20:13:05.112978 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/14c26f8a-53c9-457c-b4fc-28b55cdedc9a-kube-api-access-j7sph" (OuterVolumeSpecName: "kube-api-access-j7sph") pod "14c26f8a-53c9-457c-b4fc-28b55cdedc9a" (UID: "14c26f8a-53c9-457c-b4fc-28b55cdedc9a"). InnerVolumeSpecName "kube-api-access-j7sph". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 10 20:13:05 crc kubenswrapper[4861]: I0310 20:13:05.129391 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/14c26f8a-53c9-457c-b4fc-28b55cdedc9a-config" (OuterVolumeSpecName: "config") pod "14c26f8a-53c9-457c-b4fc-28b55cdedc9a" (UID: "14c26f8a-53c9-457c-b4fc-28b55cdedc9a"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 10 20:13:05 crc kubenswrapper[4861]: I0310 20:13:05.138721 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Mar 10 20:13:05 crc kubenswrapper[4861]: I0310 20:13:05.154467 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-55c76fd6b7-7qqk7" Mar 10 20:13:05 crc kubenswrapper[4861]: I0310 20:13:05.211385 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b22af4e8-0be9-4dff-9979-bf8f72d3f017-dns-svc\") pod \"b22af4e8-0be9-4dff-9979-bf8f72d3f017\" (UID: \"b22af4e8-0be9-4dff-9979-bf8f72d3f017\") " Mar 10 20:13:05 crc kubenswrapper[4861]: I0310 20:13:05.211716 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q6lwn\" (UniqueName: \"kubernetes.io/projected/b22af4e8-0be9-4dff-9979-bf8f72d3f017-kube-api-access-q6lwn\") pod \"b22af4e8-0be9-4dff-9979-bf8f72d3f017\" (UID: \"b22af4e8-0be9-4dff-9979-bf8f72d3f017\") " Mar 10 20:13:05 crc kubenswrapper[4861]: I0310 20:13:05.211773 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b22af4e8-0be9-4dff-9979-bf8f72d3f017-config\") pod \"b22af4e8-0be9-4dff-9979-bf8f72d3f017\" (UID: \"b22af4e8-0be9-4dff-9979-bf8f72d3f017\") " Mar 10 20:13:05 crc kubenswrapper[4861]: I0310 20:13:05.212040 4861 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/14c26f8a-53c9-457c-b4fc-28b55cdedc9a-config\") on node \"crc\" DevicePath \"\"" Mar 10 20:13:05 crc kubenswrapper[4861]: I0310 20:13:05.212057 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j7sph\" (UniqueName: \"kubernetes.io/projected/14c26f8a-53c9-457c-b4fc-28b55cdedc9a-kube-api-access-j7sph\") on node \"crc\" DevicePath \"\"" Mar 10 20:13:05 crc kubenswrapper[4861]: I0310 20:13:05.216558 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b22af4e8-0be9-4dff-9979-bf8f72d3f017-kube-api-access-q6lwn" (OuterVolumeSpecName: "kube-api-access-q6lwn") pod "b22af4e8-0be9-4dff-9979-bf8f72d3f017" (UID: 
"b22af4e8-0be9-4dff-9979-bf8f72d3f017"). InnerVolumeSpecName "kube-api-access-q6lwn". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 10 20:13:05 crc kubenswrapper[4861]: I0310 20:13:05.229443 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b22af4e8-0be9-4dff-9979-bf8f72d3f017-config" (OuterVolumeSpecName: "config") pod "b22af4e8-0be9-4dff-9979-bf8f72d3f017" (UID: "b22af4e8-0be9-4dff-9979-bf8f72d3f017"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 10 20:13:05 crc kubenswrapper[4861]: I0310 20:13:05.233414 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b22af4e8-0be9-4dff-9979-bf8f72d3f017-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "b22af4e8-0be9-4dff-9979-bf8f72d3f017" (UID: "b22af4e8-0be9-4dff-9979-bf8f72d3f017"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 10 20:13:05 crc kubenswrapper[4861]: I0310 20:13:05.278394 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Mar 10 20:13:05 crc kubenswrapper[4861]: W0310 20:13:05.289100 4861 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddaacc843_cac9_4039_a082_56a2fd9391f9.slice/crio-faacdf17c0b3c86b0cc3fa18ee549e644826179529f137d7a208d609ff90c23f WatchSource:0}: Error finding container faacdf17c0b3c86b0cc3fa18ee549e644826179529f137d7a208d609ff90c23f: Status 404 returned error can't find the container with id faacdf17c0b3c86b0cc3fa18ee549e644826179529f137d7a208d609ff90c23f Mar 10 20:13:05 crc kubenswrapper[4861]: I0310 20:13:05.313242 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q6lwn\" (UniqueName: \"kubernetes.io/projected/b22af4e8-0be9-4dff-9979-bf8f72d3f017-kube-api-access-q6lwn\") on node \"crc\" DevicePath \"\"" Mar 10 20:13:05 crc 
kubenswrapper[4861]: I0310 20:13:05.313269 4861 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b22af4e8-0be9-4dff-9979-bf8f72d3f017-config\") on node \"crc\" DevicePath \"\"" Mar 10 20:13:05 crc kubenswrapper[4861]: I0310 20:13:05.313281 4861 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b22af4e8-0be9-4dff-9979-bf8f72d3f017-dns-svc\") on node \"crc\" DevicePath \"\"" Mar 10 20:13:05 crc kubenswrapper[4861]: I0310 20:13:05.622131 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Mar 10 20:13:05 crc kubenswrapper[4861]: W0310 20:13:05.632918 4861 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod435b131b_9c41_4320_9bf9_47bdb8011c07.slice/crio-82e3530046c8422a29d6f4cdf19dbcb9121b379c0b31e834f402769fa32fe4a8 WatchSource:0}: Error finding container 82e3530046c8422a29d6f4cdf19dbcb9121b379c0b31e834f402769fa32fe4a8: Status 404 returned error can't find the container with id 82e3530046c8422a29d6f4cdf19dbcb9121b379c0b31e834f402769fa32fe4a8 Mar 10 20:13:05 crc kubenswrapper[4861]: I0310 20:13:05.647335 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"] Mar 10 20:13:05 crc kubenswrapper[4861]: E0310 20:13:05.647749 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b22af4e8-0be9-4dff-9979-bf8f72d3f017" containerName="init" Mar 10 20:13:05 crc kubenswrapper[4861]: I0310 20:13:05.647771 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="b22af4e8-0be9-4dff-9979-bf8f72d3f017" containerName="init" Mar 10 20:13:05 crc kubenswrapper[4861]: E0310 20:13:05.647797 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="14c26f8a-53c9-457c-b4fc-28b55cdedc9a" containerName="init" Mar 10 20:13:05 crc kubenswrapper[4861]: I0310 20:13:05.647808 4861 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="14c26f8a-53c9-457c-b4fc-28b55cdedc9a" containerName="init" Mar 10 20:13:05 crc kubenswrapper[4861]: I0310 20:13:05.648003 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="14c26f8a-53c9-457c-b4fc-28b55cdedc9a" containerName="init" Mar 10 20:13:05 crc kubenswrapper[4861]: I0310 20:13:05.648029 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="b22af4e8-0be9-4dff-9979-bf8f72d3f017" containerName="init" Mar 10 20:13:05 crc kubenswrapper[4861]: I0310 20:13:05.648946 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Mar 10 20:13:05 crc kubenswrapper[4861]: I0310 20:13:05.650601 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc" Mar 10 20:13:05 crc kubenswrapper[4861]: I0310 20:13:05.651668 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-t596r" Mar 10 20:13:05 crc kubenswrapper[4861]: I0310 20:13:05.652002 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data" Mar 10 20:13:05 crc kubenswrapper[4861]: I0310 20:13:05.652385 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts" Mar 10 20:13:05 crc kubenswrapper[4861]: I0310 20:13:05.652726 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Mar 10 20:13:05 crc kubenswrapper[4861]: I0310 20:13:05.666235 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle" Mar 10 20:13:05 crc kubenswrapper[4861]: I0310 20:13:05.719244 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2b59736f-62c4-499c-abad-c9ab4705c2ed-operator-scripts\") pod \"openstack-galera-0\" (UID: \"2b59736f-62c4-499c-abad-c9ab4705c2ed\") " 
pod="openstack/openstack-galera-0" Mar 10 20:13:05 crc kubenswrapper[4861]: I0310 20:13:05.719414 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b59736f-62c4-499c-abad-c9ab4705c2ed-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"2b59736f-62c4-499c-abad-c9ab4705c2ed\") " pod="openstack/openstack-galera-0" Mar 10 20:13:05 crc kubenswrapper[4861]: I0310 20:13:05.719553 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/2b59736f-62c4-499c-abad-c9ab4705c2ed-config-data-default\") pod \"openstack-galera-0\" (UID: \"2b59736f-62c4-499c-abad-c9ab4705c2ed\") " pod="openstack/openstack-galera-0" Mar 10 20:13:05 crc kubenswrapper[4861]: I0310 20:13:05.719617 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/2b59736f-62c4-499c-abad-c9ab4705c2ed-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"2b59736f-62c4-499c-abad-c9ab4705c2ed\") " pod="openstack/openstack-galera-0" Mar 10 20:13:05 crc kubenswrapper[4861]: I0310 20:13:05.719843 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-859df\" (UniqueName: \"kubernetes.io/projected/2b59736f-62c4-499c-abad-c9ab4705c2ed-kube-api-access-859df\") pod \"openstack-galera-0\" (UID: \"2b59736f-62c4-499c-abad-c9ab4705c2ed\") " pod="openstack/openstack-galera-0" Mar 10 20:13:05 crc kubenswrapper[4861]: I0310 20:13:05.719934 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-9bc97f1c-9fcb-4467-8b82-574ed6a3aa56\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-9bc97f1c-9fcb-4467-8b82-574ed6a3aa56\") pod \"openstack-galera-0\" (UID: 
\"2b59736f-62c4-499c-abad-c9ab4705c2ed\") " pod="openstack/openstack-galera-0" Mar 10 20:13:05 crc kubenswrapper[4861]: I0310 20:13:05.720082 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/2b59736f-62c4-499c-abad-c9ab4705c2ed-kolla-config\") pod \"openstack-galera-0\" (UID: \"2b59736f-62c4-499c-abad-c9ab4705c2ed\") " pod="openstack/openstack-galera-0" Mar 10 20:13:05 crc kubenswrapper[4861]: I0310 20:13:05.720204 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/2b59736f-62c4-499c-abad-c9ab4705c2ed-config-data-generated\") pod \"openstack-galera-0\" (UID: \"2b59736f-62c4-499c-abad-c9ab4705c2ed\") " pod="openstack/openstack-galera-0" Mar 10 20:13:05 crc kubenswrapper[4861]: I0310 20:13:05.800748 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-ff89b6977-wwftj" event={"ID":"31f9d3df-2233-452c-835c-2da6c3ffb61e","Type":"ContainerStarted","Data":"3772bc760d1752ab2f0f8d7856cfb60b4e8f89139e63c6bf12d948820f32a3fe"} Mar 10 20:13:05 crc kubenswrapper[4861]: I0310 20:13:05.801799 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-ff89b6977-wwftj" Mar 10 20:13:05 crc kubenswrapper[4861]: I0310 20:13:05.805020 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"435b131b-9c41-4320-9bf9-47bdb8011c07","Type":"ContainerStarted","Data":"82e3530046c8422a29d6f4cdf19dbcb9121b379c0b31e834f402769fa32fe4a8"} Mar 10 20:13:05 crc kubenswrapper[4861]: I0310 20:13:05.807406 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-55c76fd6b7-7qqk7" Mar 10 20:13:05 crc kubenswrapper[4861]: I0310 20:13:05.807398 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55c76fd6b7-7qqk7" event={"ID":"b22af4e8-0be9-4dff-9979-bf8f72d3f017","Type":"ContainerDied","Data":"c04395822994311a3d82af9dd6dcff90d565c1c74ec5520283e493dc23d75080"} Mar 10 20:13:05 crc kubenswrapper[4861]: I0310 20:13:05.807653 4861 scope.go:117] "RemoveContainer" containerID="882306c5d626eb69aa339a80b43182497deb9c4fd69b25de6250c0eb09c2dee4" Mar 10 20:13:05 crc kubenswrapper[4861]: I0310 20:13:05.809821 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-c44667757-9pdjw" event={"ID":"14c26f8a-53c9-457c-b4fc-28b55cdedc9a","Type":"ContainerDied","Data":"42687f69a9420384e3047ec5eb3f835c5f58e9388e1cc53f9a2a68677aace242"} Mar 10 20:13:05 crc kubenswrapper[4861]: I0310 20:13:05.809875 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-c44667757-9pdjw" Mar 10 20:13:05 crc kubenswrapper[4861]: I0310 20:13:05.812456 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"daacc843-cac9-4039-a082-56a2fd9391f9","Type":"ContainerStarted","Data":"faacdf17c0b3c86b0cc3fa18ee549e644826179529f137d7a208d609ff90c23f"} Mar 10 20:13:05 crc kubenswrapper[4861]: I0310 20:13:05.815571 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5fb77f9685-qxpzs" event={"ID":"b28c3324-39f9-4a5f-b120-d20a44a48a9c","Type":"ContainerStarted","Data":"a10788e5dd867d0178a5072b0a2e31b8ab8a1a5de11eefffa3d1565e1ac726ed"} Mar 10 20:13:05 crc kubenswrapper[4861]: I0310 20:13:05.815738 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5fb77f9685-qxpzs" Mar 10 20:13:05 crc kubenswrapper[4861]: I0310 20:13:05.821214 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b59736f-62c4-499c-abad-c9ab4705c2ed-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"2b59736f-62c4-499c-abad-c9ab4705c2ed\") " pod="openstack/openstack-galera-0" Mar 10 20:13:05 crc kubenswrapper[4861]: I0310 20:13:05.821255 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/2b59736f-62c4-499c-abad-c9ab4705c2ed-config-data-default\") pod \"openstack-galera-0\" (UID: \"2b59736f-62c4-499c-abad-c9ab4705c2ed\") " pod="openstack/openstack-galera-0" Mar 10 20:13:05 crc kubenswrapper[4861]: I0310 20:13:05.821280 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/2b59736f-62c4-499c-abad-c9ab4705c2ed-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"2b59736f-62c4-499c-abad-c9ab4705c2ed\") " pod="openstack/openstack-galera-0" Mar 10 20:13:05 crc kubenswrapper[4861]: I0310 20:13:05.821314 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-859df\" (UniqueName: \"kubernetes.io/projected/2b59736f-62c4-499c-abad-c9ab4705c2ed-kube-api-access-859df\") pod \"openstack-galera-0\" (UID: \"2b59736f-62c4-499c-abad-c9ab4705c2ed\") " pod="openstack/openstack-galera-0" Mar 10 20:13:05 crc kubenswrapper[4861]: I0310 20:13:05.821339 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-9bc97f1c-9fcb-4467-8b82-574ed6a3aa56\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-9bc97f1c-9fcb-4467-8b82-574ed6a3aa56\") pod \"openstack-galera-0\" (UID: \"2b59736f-62c4-499c-abad-c9ab4705c2ed\") " pod="openstack/openstack-galera-0" Mar 10 20:13:05 crc kubenswrapper[4861]: I0310 20:13:05.821388 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: 
\"kubernetes.io/configmap/2b59736f-62c4-499c-abad-c9ab4705c2ed-kolla-config\") pod \"openstack-galera-0\" (UID: \"2b59736f-62c4-499c-abad-c9ab4705c2ed\") " pod="openstack/openstack-galera-0" Mar 10 20:13:05 crc kubenswrapper[4861]: I0310 20:13:05.821421 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/2b59736f-62c4-499c-abad-c9ab4705c2ed-config-data-generated\") pod \"openstack-galera-0\" (UID: \"2b59736f-62c4-499c-abad-c9ab4705c2ed\") " pod="openstack/openstack-galera-0" Mar 10 20:13:05 crc kubenswrapper[4861]: I0310 20:13:05.821445 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2b59736f-62c4-499c-abad-c9ab4705c2ed-operator-scripts\") pod \"openstack-galera-0\" (UID: \"2b59736f-62c4-499c-abad-c9ab4705c2ed\") " pod="openstack/openstack-galera-0" Mar 10 20:13:05 crc kubenswrapper[4861]: I0310 20:13:05.823074 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2b59736f-62c4-499c-abad-c9ab4705c2ed-operator-scripts\") pod \"openstack-galera-0\" (UID: \"2b59736f-62c4-499c-abad-c9ab4705c2ed\") " pod="openstack/openstack-galera-0" Mar 10 20:13:05 crc kubenswrapper[4861]: I0310 20:13:05.823513 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/2b59736f-62c4-499c-abad-c9ab4705c2ed-config-data-default\") pod \"openstack-galera-0\" (UID: \"2b59736f-62c4-499c-abad-c9ab4705c2ed\") " pod="openstack/openstack-galera-0" Mar 10 20:13:05 crc kubenswrapper[4861]: I0310 20:13:05.823550 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/2b59736f-62c4-499c-abad-c9ab4705c2ed-kolla-config\") pod \"openstack-galera-0\" (UID: 
\"2b59736f-62c4-499c-abad-c9ab4705c2ed\") " pod="openstack/openstack-galera-0" Mar 10 20:13:05 crc kubenswrapper[4861]: I0310 20:13:05.823876 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/2b59736f-62c4-499c-abad-c9ab4705c2ed-config-data-generated\") pod \"openstack-galera-0\" (UID: \"2b59736f-62c4-499c-abad-c9ab4705c2ed\") " pod="openstack/openstack-galera-0" Mar 10 20:13:05 crc kubenswrapper[4861]: I0310 20:13:05.826240 4861 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Mar 10 20:13:05 crc kubenswrapper[4861]: I0310 20:13:05.826271 4861 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-9bc97f1c-9fcb-4467-8b82-574ed6a3aa56\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-9bc97f1c-9fcb-4467-8b82-574ed6a3aa56\") pod \"openstack-galera-0\" (UID: \"2b59736f-62c4-499c-abad-c9ab4705c2ed\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/c3e262f920b33a2f75564e0431d1721db3846ef49d37b518e685e6a08b7d99e7/globalmount\"" pod="openstack/openstack-galera-0" Mar 10 20:13:05 crc kubenswrapper[4861]: I0310 20:13:05.828476 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/2b59736f-62c4-499c-abad-c9ab4705c2ed-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"2b59736f-62c4-499c-abad-c9ab4705c2ed\") " pod="openstack/openstack-galera-0" Mar 10 20:13:05 crc kubenswrapper[4861]: I0310 20:13:05.837538 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-ff89b6977-wwftj" podStartSLOduration=2.8375221550000003 podStartE2EDuration="2.837522155s" podCreationTimestamp="2026-03-10 20:13:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 
UTC" observedRunningTime="2026-03-10 20:13:05.834801661 +0000 UTC m=+5129.598237661" watchObservedRunningTime="2026-03-10 20:13:05.837522155 +0000 UTC m=+5129.600958115" Mar 10 20:13:05 crc kubenswrapper[4861]: I0310 20:13:05.840069 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b59736f-62c4-499c-abad-c9ab4705c2ed-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"2b59736f-62c4-499c-abad-c9ab4705c2ed\") " pod="openstack/openstack-galera-0" Mar 10 20:13:05 crc kubenswrapper[4861]: I0310 20:13:05.844094 4861 scope.go:117] "RemoveContainer" containerID="7d84ad25fe8c63fd22f9530440f14c40828f0ba32c15bfbe90d2b22fb0695dac" Mar 10 20:13:05 crc kubenswrapper[4861]: I0310 20:13:05.855515 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-859df\" (UniqueName: \"kubernetes.io/projected/2b59736f-62c4-499c-abad-c9ab4705c2ed-kube-api-access-859df\") pod \"openstack-galera-0\" (UID: \"2b59736f-62c4-499c-abad-c9ab4705c2ed\") " pod="openstack/openstack-galera-0" Mar 10 20:13:05 crc kubenswrapper[4861]: I0310 20:13:05.876965 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5fb77f9685-qxpzs" podStartSLOduration=2.876943178 podStartE2EDuration="2.876943178s" podCreationTimestamp="2026-03-10 20:13:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-10 20:13:05.871019717 +0000 UTC m=+5129.634455687" watchObservedRunningTime="2026-03-10 20:13:05.876943178 +0000 UTC m=+5129.640379148" Mar 10 20:13:05 crc kubenswrapper[4861]: I0310 20:13:05.907837 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-c44667757-9pdjw"] Mar 10 20:13:05 crc kubenswrapper[4861]: I0310 20:13:05.914015 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"pvc-9bc97f1c-9fcb-4467-8b82-574ed6a3aa56\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-9bc97f1c-9fcb-4467-8b82-574ed6a3aa56\") pod \"openstack-galera-0\" (UID: \"2b59736f-62c4-499c-abad-c9ab4705c2ed\") " pod="openstack/openstack-galera-0" Mar 10 20:13:05 crc kubenswrapper[4861]: I0310 20:13:05.915477 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-c44667757-9pdjw"] Mar 10 20:13:05 crc kubenswrapper[4861]: I0310 20:13:05.951101 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-55c76fd6b7-7qqk7"] Mar 10 20:13:05 crc kubenswrapper[4861]: I0310 20:13:05.954229 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-55c76fd6b7-7qqk7"] Mar 10 20:13:05 crc kubenswrapper[4861]: I0310 20:13:05.968510 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Mar 10 20:13:06 crc kubenswrapper[4861]: I0310 20:13:06.443288 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Mar 10 20:13:06 crc kubenswrapper[4861]: W0310 20:13:06.447391 4861 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2b59736f_62c4_499c_abad_c9ab4705c2ed.slice/crio-7d4a79e5297369640b2f1a285b70e4f1fdbceb7ea2b7995ad9e889ee30a25e4a WatchSource:0}: Error finding container 7d4a79e5297369640b2f1a285b70e4f1fdbceb7ea2b7995ad9e889ee30a25e4a: Status 404 returned error can't find the container with id 7d4a79e5297369640b2f1a285b70e4f1fdbceb7ea2b7995ad9e889ee30a25e4a Mar 10 20:13:06 crc kubenswrapper[4861]: I0310 20:13:06.830587 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"2b59736f-62c4-499c-abad-c9ab4705c2ed","Type":"ContainerStarted","Data":"bae83ca78af669aa64f1c7fa25966a6f58e4657d0c14c7eb4b896cfc8a518051"} Mar 10 20:13:06 crc kubenswrapper[4861]: I0310 20:13:06.830968 4861 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"2b59736f-62c4-499c-abad-c9ab4705c2ed","Type":"ContainerStarted","Data":"7d4a79e5297369640b2f1a285b70e4f1fdbceb7ea2b7995ad9e889ee30a25e4a"} Mar 10 20:13:06 crc kubenswrapper[4861]: I0310 20:13:06.836595 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"daacc843-cac9-4039-a082-56a2fd9391f9","Type":"ContainerStarted","Data":"a1f2fa374fcde2b1a6400111146c13b03076a5c88c4417be4d60516309c63112"} Mar 10 20:13:06 crc kubenswrapper[4861]: I0310 20:13:06.838022 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"435b131b-9c41-4320-9bf9-47bdb8011c07","Type":"ContainerStarted","Data":"7ae1e23c33c5be7e877fcdb77d99857c6890d4835a90260c0555fc46e0fe0878"} Mar 10 20:13:06 crc kubenswrapper[4861]: I0310 20:13:06.965698 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="14c26f8a-53c9-457c-b4fc-28b55cdedc9a" path="/var/lib/kubelet/pods/14c26f8a-53c9-457c-b4fc-28b55cdedc9a/volumes" Mar 10 20:13:06 crc kubenswrapper[4861]: I0310 20:13:06.966232 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b22af4e8-0be9-4dff-9979-bf8f72d3f017" path="/var/lib/kubelet/pods/b22af4e8-0be9-4dff-9979-bf8f72d3f017/volumes" Mar 10 20:13:07 crc kubenswrapper[4861]: I0310 20:13:07.111079 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"] Mar 10 20:13:07 crc kubenswrapper[4861]: I0310 20:13:07.112189 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-cell1-galera-0" Mar 10 20:13:07 crc kubenswrapper[4861]: I0310 20:13:07.114467 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc" Mar 10 20:13:07 crc kubenswrapper[4861]: I0310 20:13:07.114468 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-cjz5n" Mar 10 20:13:07 crc kubenswrapper[4861]: I0310 20:13:07.116435 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data" Mar 10 20:13:07 crc kubenswrapper[4861]: I0310 20:13:07.119983 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts" Mar 10 20:13:07 crc kubenswrapper[4861]: I0310 20:13:07.131647 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Mar 10 20:13:07 crc kubenswrapper[4861]: I0310 20:13:07.149039 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/e13982bd-7f6a-4a64-8d3f-9a9ea29a4918-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"e13982bd-7f6a-4a64-8d3f-9a9ea29a4918\") " pod="openstack/openstack-cell1-galera-0" Mar 10 20:13:07 crc kubenswrapper[4861]: I0310 20:13:07.149118 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/e13982bd-7f6a-4a64-8d3f-9a9ea29a4918-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"e13982bd-7f6a-4a64-8d3f-9a9ea29a4918\") " pod="openstack/openstack-cell1-galera-0" Mar 10 20:13:07 crc kubenswrapper[4861]: I0310 20:13:07.149161 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/e13982bd-7f6a-4a64-8d3f-9a9ea29a4918-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"e13982bd-7f6a-4a64-8d3f-9a9ea29a4918\") " pod="openstack/openstack-cell1-galera-0" Mar 10 20:13:07 crc kubenswrapper[4861]: I0310 20:13:07.149200 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-f5d35dde-b202-4fe0-b540-be5f8066b239\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f5d35dde-b202-4fe0-b540-be5f8066b239\") pod \"openstack-cell1-galera-0\" (UID: \"e13982bd-7f6a-4a64-8d3f-9a9ea29a4918\") " pod="openstack/openstack-cell1-galera-0" Mar 10 20:13:07 crc kubenswrapper[4861]: I0310 20:13:07.149249 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e13982bd-7f6a-4a64-8d3f-9a9ea29a4918-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"e13982bd-7f6a-4a64-8d3f-9a9ea29a4918\") " pod="openstack/openstack-cell1-galera-0" Mar 10 20:13:07 crc kubenswrapper[4861]: I0310 20:13:07.149369 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/e13982bd-7f6a-4a64-8d3f-9a9ea29a4918-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"e13982bd-7f6a-4a64-8d3f-9a9ea29a4918\") " pod="openstack/openstack-cell1-galera-0" Mar 10 20:13:07 crc kubenswrapper[4861]: I0310 20:13:07.149456 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/e13982bd-7f6a-4a64-8d3f-9a9ea29a4918-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"e13982bd-7f6a-4a64-8d3f-9a9ea29a4918\") " pod="openstack/openstack-cell1-galera-0" Mar 10 20:13:07 crc kubenswrapper[4861]: I0310 20:13:07.149550 4861 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xxt6n\" (UniqueName: \"kubernetes.io/projected/e13982bd-7f6a-4a64-8d3f-9a9ea29a4918-kube-api-access-xxt6n\") pod \"openstack-cell1-galera-0\" (UID: \"e13982bd-7f6a-4a64-8d3f-9a9ea29a4918\") " pod="openstack/openstack-cell1-galera-0" Mar 10 20:13:07 crc kubenswrapper[4861]: I0310 20:13:07.251957 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e13982bd-7f6a-4a64-8d3f-9a9ea29a4918-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"e13982bd-7f6a-4a64-8d3f-9a9ea29a4918\") " pod="openstack/openstack-cell1-galera-0" Mar 10 20:13:07 crc kubenswrapper[4861]: I0310 20:13:07.252066 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/e13982bd-7f6a-4a64-8d3f-9a9ea29a4918-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"e13982bd-7f6a-4a64-8d3f-9a9ea29a4918\") " pod="openstack/openstack-cell1-galera-0" Mar 10 20:13:07 crc kubenswrapper[4861]: I0310 20:13:07.252117 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/e13982bd-7f6a-4a64-8d3f-9a9ea29a4918-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"e13982bd-7f6a-4a64-8d3f-9a9ea29a4918\") " pod="openstack/openstack-cell1-galera-0" Mar 10 20:13:07 crc kubenswrapper[4861]: I0310 20:13:07.252161 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xxt6n\" (UniqueName: \"kubernetes.io/projected/e13982bd-7f6a-4a64-8d3f-9a9ea29a4918-kube-api-access-xxt6n\") pod \"openstack-cell1-galera-0\" (UID: \"e13982bd-7f6a-4a64-8d3f-9a9ea29a4918\") " pod="openstack/openstack-cell1-galera-0" Mar 10 20:13:07 crc kubenswrapper[4861]: I0310 20:13:07.252285 4861 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/e13982bd-7f6a-4a64-8d3f-9a9ea29a4918-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"e13982bd-7f6a-4a64-8d3f-9a9ea29a4918\") " pod="openstack/openstack-cell1-galera-0" Mar 10 20:13:07 crc kubenswrapper[4861]: I0310 20:13:07.252337 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/e13982bd-7f6a-4a64-8d3f-9a9ea29a4918-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"e13982bd-7f6a-4a64-8d3f-9a9ea29a4918\") " pod="openstack/openstack-cell1-galera-0" Mar 10 20:13:07 crc kubenswrapper[4861]: I0310 20:13:07.252374 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e13982bd-7f6a-4a64-8d3f-9a9ea29a4918-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"e13982bd-7f6a-4a64-8d3f-9a9ea29a4918\") " pod="openstack/openstack-cell1-galera-0" Mar 10 20:13:07 crc kubenswrapper[4861]: I0310 20:13:07.252419 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-f5d35dde-b202-4fe0-b540-be5f8066b239\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f5d35dde-b202-4fe0-b540-be5f8066b239\") pod \"openstack-cell1-galera-0\" (UID: \"e13982bd-7f6a-4a64-8d3f-9a9ea29a4918\") " pod="openstack/openstack-cell1-galera-0" Mar 10 20:13:07 crc kubenswrapper[4861]: I0310 20:13:07.253881 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/e13982bd-7f6a-4a64-8d3f-9a9ea29a4918-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"e13982bd-7f6a-4a64-8d3f-9a9ea29a4918\") " pod="openstack/openstack-cell1-galera-0" Mar 10 20:13:07 crc kubenswrapper[4861]: I0310 20:13:07.254337 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" 
(UniqueName: \"kubernetes.io/configmap/e13982bd-7f6a-4a64-8d3f-9a9ea29a4918-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"e13982bd-7f6a-4a64-8d3f-9a9ea29a4918\") " pod="openstack/openstack-cell1-galera-0" Mar 10 20:13:07 crc kubenswrapper[4861]: I0310 20:13:07.256460 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/e13982bd-7f6a-4a64-8d3f-9a9ea29a4918-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"e13982bd-7f6a-4a64-8d3f-9a9ea29a4918\") " pod="openstack/openstack-cell1-galera-0" Mar 10 20:13:07 crc kubenswrapper[4861]: I0310 20:13:07.258382 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e13982bd-7f6a-4a64-8d3f-9a9ea29a4918-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"e13982bd-7f6a-4a64-8d3f-9a9ea29a4918\") " pod="openstack/openstack-cell1-galera-0" Mar 10 20:13:07 crc kubenswrapper[4861]: I0310 20:13:07.259331 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/e13982bd-7f6a-4a64-8d3f-9a9ea29a4918-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"e13982bd-7f6a-4a64-8d3f-9a9ea29a4918\") " pod="openstack/openstack-cell1-galera-0" Mar 10 20:13:07 crc kubenswrapper[4861]: I0310 20:13:07.260083 4861 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Mar 10 20:13:07 crc kubenswrapper[4861]: I0310 20:13:07.260128 4861 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-f5d35dde-b202-4fe0-b540-be5f8066b239\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f5d35dde-b202-4fe0-b540-be5f8066b239\") pod \"openstack-cell1-galera-0\" (UID: \"e13982bd-7f6a-4a64-8d3f-9a9ea29a4918\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/c5ab3f7fda42c05beb08b573d6a789289c81e3b360303abdda1f972319d41d1c/globalmount\"" pod="openstack/openstack-cell1-galera-0" Mar 10 20:13:07 crc kubenswrapper[4861]: I0310 20:13:07.260466 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e13982bd-7f6a-4a64-8d3f-9a9ea29a4918-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"e13982bd-7f6a-4a64-8d3f-9a9ea29a4918\") " pod="openstack/openstack-cell1-galera-0" Mar 10 20:13:07 crc kubenswrapper[4861]: I0310 20:13:07.288182 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xxt6n\" (UniqueName: \"kubernetes.io/projected/e13982bd-7f6a-4a64-8d3f-9a9ea29a4918-kube-api-access-xxt6n\") pod \"openstack-cell1-galera-0\" (UID: \"e13982bd-7f6a-4a64-8d3f-9a9ea29a4918\") " pod="openstack/openstack-cell1-galera-0" Mar 10 20:13:07 crc kubenswrapper[4861]: I0310 20:13:07.608303 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"] Mar 10 20:13:07 crc kubenswrapper[4861]: I0310 20:13:07.610184 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/memcached-0" Mar 10 20:13:07 crc kubenswrapper[4861]: I0310 20:13:07.612899 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data" Mar 10 20:13:07 crc kubenswrapper[4861]: I0310 20:13:07.613014 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-pjpc6" Mar 10 20:13:07 crc kubenswrapper[4861]: I0310 20:13:07.612919 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc" Mar 10 20:13:07 crc kubenswrapper[4861]: I0310 20:13:07.620691 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Mar 10 20:13:07 crc kubenswrapper[4861]: I0310 20:13:07.658629 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/87e211e3-d481-4f80-939c-ace6358f3851-memcached-tls-certs\") pod \"memcached-0\" (UID: \"87e211e3-d481-4f80-939c-ace6358f3851\") " pod="openstack/memcached-0" Mar 10 20:13:07 crc kubenswrapper[4861]: I0310 20:13:07.658681 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dxt78\" (UniqueName: \"kubernetes.io/projected/87e211e3-d481-4f80-939c-ace6358f3851-kube-api-access-dxt78\") pod \"memcached-0\" (UID: \"87e211e3-d481-4f80-939c-ace6358f3851\") " pod="openstack/memcached-0" Mar 10 20:13:07 crc kubenswrapper[4861]: I0310 20:13:07.658933 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/87e211e3-d481-4f80-939c-ace6358f3851-combined-ca-bundle\") pod \"memcached-0\" (UID: \"87e211e3-d481-4f80-939c-ace6358f3851\") " pod="openstack/memcached-0" Mar 10 20:13:07 crc kubenswrapper[4861]: I0310 20:13:07.659105 4861 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/87e211e3-d481-4f80-939c-ace6358f3851-config-data\") pod \"memcached-0\" (UID: \"87e211e3-d481-4f80-939c-ace6358f3851\") " pod="openstack/memcached-0" Mar 10 20:13:07 crc kubenswrapper[4861]: I0310 20:13:07.659155 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/87e211e3-d481-4f80-939c-ace6358f3851-kolla-config\") pod \"memcached-0\" (UID: \"87e211e3-d481-4f80-939c-ace6358f3851\") " pod="openstack/memcached-0" Mar 10 20:13:07 crc kubenswrapper[4861]: I0310 20:13:07.759956 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/87e211e3-d481-4f80-939c-ace6358f3851-memcached-tls-certs\") pod \"memcached-0\" (UID: \"87e211e3-d481-4f80-939c-ace6358f3851\") " pod="openstack/memcached-0" Mar 10 20:13:07 crc kubenswrapper[4861]: I0310 20:13:07.760013 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dxt78\" (UniqueName: \"kubernetes.io/projected/87e211e3-d481-4f80-939c-ace6358f3851-kube-api-access-dxt78\") pod \"memcached-0\" (UID: \"87e211e3-d481-4f80-939c-ace6358f3851\") " pod="openstack/memcached-0" Mar 10 20:13:07 crc kubenswrapper[4861]: I0310 20:13:07.760128 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/87e211e3-d481-4f80-939c-ace6358f3851-combined-ca-bundle\") pod \"memcached-0\" (UID: \"87e211e3-d481-4f80-939c-ace6358f3851\") " pod="openstack/memcached-0" Mar 10 20:13:07 crc kubenswrapper[4861]: I0310 20:13:07.760213 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/87e211e3-d481-4f80-939c-ace6358f3851-config-data\") pod 
\"memcached-0\" (UID: \"87e211e3-d481-4f80-939c-ace6358f3851\") " pod="openstack/memcached-0" Mar 10 20:13:07 crc kubenswrapper[4861]: I0310 20:13:07.760246 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/87e211e3-d481-4f80-939c-ace6358f3851-kolla-config\") pod \"memcached-0\" (UID: \"87e211e3-d481-4f80-939c-ace6358f3851\") " pod="openstack/memcached-0" Mar 10 20:13:07 crc kubenswrapper[4861]: I0310 20:13:07.761220 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/87e211e3-d481-4f80-939c-ace6358f3851-kolla-config\") pod \"memcached-0\" (UID: \"87e211e3-d481-4f80-939c-ace6358f3851\") " pod="openstack/memcached-0" Mar 10 20:13:07 crc kubenswrapper[4861]: I0310 20:13:07.761595 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/87e211e3-d481-4f80-939c-ace6358f3851-config-data\") pod \"memcached-0\" (UID: \"87e211e3-d481-4f80-939c-ace6358f3851\") " pod="openstack/memcached-0" Mar 10 20:13:07 crc kubenswrapper[4861]: I0310 20:13:07.774904 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/87e211e3-d481-4f80-939c-ace6358f3851-combined-ca-bundle\") pod \"memcached-0\" (UID: \"87e211e3-d481-4f80-939c-ace6358f3851\") " pod="openstack/memcached-0" Mar 10 20:13:07 crc kubenswrapper[4861]: I0310 20:13:07.791967 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dxt78\" (UniqueName: \"kubernetes.io/projected/87e211e3-d481-4f80-939c-ace6358f3851-kube-api-access-dxt78\") pod \"memcached-0\" (UID: \"87e211e3-d481-4f80-939c-ace6358f3851\") " pod="openstack/memcached-0" Mar 10 20:13:07 crc kubenswrapper[4861]: I0310 20:13:07.800367 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/87e211e3-d481-4f80-939c-ace6358f3851-memcached-tls-certs\") pod \"memcached-0\" (UID: \"87e211e3-d481-4f80-939c-ace6358f3851\") " pod="openstack/memcached-0" Mar 10 20:13:07 crc kubenswrapper[4861]: I0310 20:13:07.801931 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-f5d35dde-b202-4fe0-b540-be5f8066b239\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f5d35dde-b202-4fe0-b540-be5f8066b239\") pod \"openstack-cell1-galera-0\" (UID: \"e13982bd-7f6a-4a64-8d3f-9a9ea29a4918\") " pod="openstack/openstack-cell1-galera-0" Mar 10 20:13:08 crc kubenswrapper[4861]: I0310 20:13:08.037902 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Mar 10 20:13:08 crc kubenswrapper[4861]: I0310 20:13:08.042832 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Mar 10 20:13:08 crc kubenswrapper[4861]: I0310 20:13:08.624228 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Mar 10 20:13:08 crc kubenswrapper[4861]: W0310 20:13:08.633933 4861 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode13982bd_7f6a_4a64_8d3f_9a9ea29a4918.slice/crio-dd506a4ef1e37716e12eac91a4e0e6e9d8490592118bd8b2459dee80fdd1ea2c WatchSource:0}: Error finding container dd506a4ef1e37716e12eac91a4e0e6e9d8490592118bd8b2459dee80fdd1ea2c: Status 404 returned error can't find the container with id dd506a4ef1e37716e12eac91a4e0e6e9d8490592118bd8b2459dee80fdd1ea2c Mar 10 20:13:08 crc kubenswrapper[4861]: I0310 20:13:08.641449 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Mar 10 20:13:08 crc kubenswrapper[4861]: W0310 20:13:08.646495 4861 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod87e211e3_d481_4f80_939c_ace6358f3851.slice/crio-3470852ebad40a61870e2a2839691b67ff4a219ddf13fb2e4f81194968594fea WatchSource:0}: Error finding container 3470852ebad40a61870e2a2839691b67ff4a219ddf13fb2e4f81194968594fea: Status 404 returned error can't find the container with id 3470852ebad40a61870e2a2839691b67ff4a219ddf13fb2e4f81194968594fea Mar 10 20:13:08 crc kubenswrapper[4861]: I0310 20:13:08.854542 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"87e211e3-d481-4f80-939c-ace6358f3851","Type":"ContainerStarted","Data":"3470852ebad40a61870e2a2839691b67ff4a219ddf13fb2e4f81194968594fea"} Mar 10 20:13:08 crc kubenswrapper[4861]: I0310 20:13:08.856425 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"e13982bd-7f6a-4a64-8d3f-9a9ea29a4918","Type":"ContainerStarted","Data":"3ffac7b3b8b204c501779e030151f338e485e2f75a690ed8b3bd66c4ed3e6f3e"} Mar 10 20:13:08 crc kubenswrapper[4861]: I0310 20:13:08.856504 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"e13982bd-7f6a-4a64-8d3f-9a9ea29a4918","Type":"ContainerStarted","Data":"dd506a4ef1e37716e12eac91a4e0e6e9d8490592118bd8b2459dee80fdd1ea2c"} Mar 10 20:13:09 crc kubenswrapper[4861]: I0310 20:13:09.869891 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"87e211e3-d481-4f80-939c-ace6358f3851","Type":"ContainerStarted","Data":"60a27292b84b9ef04cc768941b618b46d43956f2c5219d03a7633b3d647d71ab"} Mar 10 20:13:10 crc kubenswrapper[4861]: I0310 20:13:10.878959 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0" Mar 10 20:13:11 crc kubenswrapper[4861]: I0310 20:13:11.892493 4861 generic.go:334] "Generic (PLEG): container finished" podID="2b59736f-62c4-499c-abad-c9ab4705c2ed" 
containerID="bae83ca78af669aa64f1c7fa25966a6f58e4657d0c14c7eb4b896cfc8a518051" exitCode=0 Mar 10 20:13:11 crc kubenswrapper[4861]: I0310 20:13:11.892627 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"2b59736f-62c4-499c-abad-c9ab4705c2ed","Type":"ContainerDied","Data":"bae83ca78af669aa64f1c7fa25966a6f58e4657d0c14c7eb4b896cfc8a518051"} Mar 10 20:13:11 crc kubenswrapper[4861]: I0310 20:13:11.938883 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=4.938854167 podStartE2EDuration="4.938854167s" podCreationTimestamp="2026-03-10 20:13:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-10 20:13:09.896493369 +0000 UTC m=+5133.659929359" watchObservedRunningTime="2026-03-10 20:13:11.938854167 +0000 UTC m=+5135.702290167" Mar 10 20:13:12 crc kubenswrapper[4861]: I0310 20:13:12.916332 4861 generic.go:334] "Generic (PLEG): container finished" podID="e13982bd-7f6a-4a64-8d3f-9a9ea29a4918" containerID="3ffac7b3b8b204c501779e030151f338e485e2f75a690ed8b3bd66c4ed3e6f3e" exitCode=0 Mar 10 20:13:12 crc kubenswrapper[4861]: I0310 20:13:12.916435 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"e13982bd-7f6a-4a64-8d3f-9a9ea29a4918","Type":"ContainerDied","Data":"3ffac7b3b8b204c501779e030151f338e485e2f75a690ed8b3bd66c4ed3e6f3e"} Mar 10 20:13:12 crc kubenswrapper[4861]: I0310 20:13:12.926200 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"2b59736f-62c4-499c-abad-c9ab4705c2ed","Type":"ContainerStarted","Data":"5f4900a6ddeafc782b1e5b2efd09b5d658c9d1aea1fdd4a1969ffb9726d59757"} Mar 10 20:13:13 crc kubenswrapper[4861]: I0310 20:13:13.000527 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" 
podStartSLOduration=9.000507837 podStartE2EDuration="9.000507837s" podCreationTimestamp="2026-03-10 20:13:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-10 20:13:12.999071148 +0000 UTC m=+5136.762507168" watchObservedRunningTime="2026-03-10 20:13:13.000507837 +0000 UTC m=+5136.763943807" Mar 10 20:13:13 crc kubenswrapper[4861]: I0310 20:13:13.044811 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0" Mar 10 20:13:13 crc kubenswrapper[4861]: I0310 20:13:13.611967 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5fb77f9685-qxpzs" Mar 10 20:13:13 crc kubenswrapper[4861]: I0310 20:13:13.875340 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-ff89b6977-wwftj" Mar 10 20:13:13 crc kubenswrapper[4861]: I0310 20:13:13.938891 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"e13982bd-7f6a-4a64-8d3f-9a9ea29a4918","Type":"ContainerStarted","Data":"ea18f36b65afbc7a4449b95822f07625c365a2c1172adc953eb40cc481c2131c"} Mar 10 20:13:14 crc kubenswrapper[4861]: I0310 20:13:14.534360 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=8.534325506 podStartE2EDuration="8.534325506s" podCreationTimestamp="2026-03-10 20:13:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-10 20:13:14.519095721 +0000 UTC m=+5138.282531711" watchObservedRunningTime="2026-03-10 20:13:14.534325506 +0000 UTC m=+5138.297761496" Mar 10 20:13:14 crc kubenswrapper[4861]: I0310 20:13:14.552882 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5fb77f9685-qxpzs"] Mar 10 20:13:14 crc kubenswrapper[4861]: 
I0310 20:13:14.553661 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5fb77f9685-qxpzs" podUID="b28c3324-39f9-4a5f-b120-d20a44a48a9c" containerName="dnsmasq-dns" containerID="cri-o://a10788e5dd867d0178a5072b0a2e31b8ab8a1a5de11eefffa3d1565e1ac726ed" gracePeriod=10 Mar 10 20:13:14 crc kubenswrapper[4861]: I0310 20:13:14.957682 4861 generic.go:334] "Generic (PLEG): container finished" podID="b28c3324-39f9-4a5f-b120-d20a44a48a9c" containerID="a10788e5dd867d0178a5072b0a2e31b8ab8a1a5de11eefffa3d1565e1ac726ed" exitCode=0 Mar 10 20:13:14 crc kubenswrapper[4861]: I0310 20:13:14.974983 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5fb77f9685-qxpzs" event={"ID":"b28c3324-39f9-4a5f-b120-d20a44a48a9c","Type":"ContainerDied","Data":"a10788e5dd867d0178a5072b0a2e31b8ab8a1a5de11eefffa3d1565e1ac726ed"} Mar 10 20:13:15 crc kubenswrapper[4861]: I0310 20:13:15.071105 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5fb77f9685-qxpzs" Mar 10 20:13:15 crc kubenswrapper[4861]: I0310 20:13:15.115201 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b28c3324-39f9-4a5f-b120-d20a44a48a9c-config\") pod \"b28c3324-39f9-4a5f-b120-d20a44a48a9c\" (UID: \"b28c3324-39f9-4a5f-b120-d20a44a48a9c\") " Mar 10 20:13:15 crc kubenswrapper[4861]: I0310 20:13:15.115289 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-62j6m\" (UniqueName: \"kubernetes.io/projected/b28c3324-39f9-4a5f-b120-d20a44a48a9c-kube-api-access-62j6m\") pod \"b28c3324-39f9-4a5f-b120-d20a44a48a9c\" (UID: \"b28c3324-39f9-4a5f-b120-d20a44a48a9c\") " Mar 10 20:13:15 crc kubenswrapper[4861]: I0310 20:13:15.115317 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b28c3324-39f9-4a5f-b120-d20a44a48a9c-dns-svc\") pod \"b28c3324-39f9-4a5f-b120-d20a44a48a9c\" (UID: \"b28c3324-39f9-4a5f-b120-d20a44a48a9c\") " Mar 10 20:13:15 crc kubenswrapper[4861]: I0310 20:13:15.123954 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b28c3324-39f9-4a5f-b120-d20a44a48a9c-kube-api-access-62j6m" (OuterVolumeSpecName: "kube-api-access-62j6m") pod "b28c3324-39f9-4a5f-b120-d20a44a48a9c" (UID: "b28c3324-39f9-4a5f-b120-d20a44a48a9c"). InnerVolumeSpecName "kube-api-access-62j6m". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 10 20:13:15 crc kubenswrapper[4861]: I0310 20:13:15.147440 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b28c3324-39f9-4a5f-b120-d20a44a48a9c-config" (OuterVolumeSpecName: "config") pod "b28c3324-39f9-4a5f-b120-d20a44a48a9c" (UID: "b28c3324-39f9-4a5f-b120-d20a44a48a9c"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 10 20:13:15 crc kubenswrapper[4861]: I0310 20:13:15.162330 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b28c3324-39f9-4a5f-b120-d20a44a48a9c-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "b28c3324-39f9-4a5f-b120-d20a44a48a9c" (UID: "b28c3324-39f9-4a5f-b120-d20a44a48a9c"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 10 20:13:15 crc kubenswrapper[4861]: I0310 20:13:15.218493 4861 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b28c3324-39f9-4a5f-b120-d20a44a48a9c-config\") on node \"crc\" DevicePath \"\"" Mar 10 20:13:15 crc kubenswrapper[4861]: I0310 20:13:15.218568 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-62j6m\" (UniqueName: \"kubernetes.io/projected/b28c3324-39f9-4a5f-b120-d20a44a48a9c-kube-api-access-62j6m\") on node \"crc\" DevicePath \"\"" Mar 10 20:13:15 crc kubenswrapper[4861]: I0310 20:13:15.218597 4861 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b28c3324-39f9-4a5f-b120-d20a44a48a9c-dns-svc\") on node \"crc\" DevicePath \"\"" Mar 10 20:13:15 crc kubenswrapper[4861]: I0310 20:13:15.968834 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0" Mar 10 20:13:15 crc kubenswrapper[4861]: I0310 20:13:15.968898 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0" Mar 10 20:13:15 crc kubenswrapper[4861]: I0310 20:13:15.972147 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5fb77f9685-qxpzs" event={"ID":"b28c3324-39f9-4a5f-b120-d20a44a48a9c","Type":"ContainerDied","Data":"0aa7b38e38e64b2dfc0e50dc59e0246837675cde68fae545498192f1dcaebe75"} Mar 10 20:13:15 crc kubenswrapper[4861]: I0310 20:13:15.972228 4861 scope.go:117] 
"RemoveContainer" containerID="a10788e5dd867d0178a5072b0a2e31b8ab8a1a5de11eefffa3d1565e1ac726ed" Mar 10 20:13:15 crc kubenswrapper[4861]: I0310 20:13:15.972247 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5fb77f9685-qxpzs" Mar 10 20:13:16 crc kubenswrapper[4861]: I0310 20:13:16.020689 4861 scope.go:117] "RemoveContainer" containerID="0cd8ebcb900167d93cea5b742f467078d15681a421efb4943383d6c443af30c7" Mar 10 20:13:16 crc kubenswrapper[4861]: I0310 20:13:16.026556 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5fb77f9685-qxpzs"] Mar 10 20:13:16 crc kubenswrapper[4861]: I0310 20:13:16.039126 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5fb77f9685-qxpzs"] Mar 10 20:13:16 crc kubenswrapper[4861]: I0310 20:13:16.973406 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b28c3324-39f9-4a5f-b120-d20a44a48a9c" path="/var/lib/kubelet/pods/b28c3324-39f9-4a5f-b120-d20a44a48a9c/volumes" Mar 10 20:13:18 crc kubenswrapper[4861]: I0310 20:13:18.038765 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0" Mar 10 20:13:18 crc kubenswrapper[4861]: I0310 20:13:18.038828 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Mar 10 20:13:18 crc kubenswrapper[4861]: I0310 20:13:18.142326 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0" Mar 10 20:13:18 crc kubenswrapper[4861]: I0310 20:13:18.424406 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0" Mar 10 20:13:18 crc kubenswrapper[4861]: I0310 20:13:18.535890 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0" Mar 10 20:13:19 crc kubenswrapper[4861]: I0310 20:13:19.127090 4861 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0" Mar 10 20:13:21 crc kubenswrapper[4861]: I0310 20:13:21.991928 4861 patch_prober.go:28] interesting pod/machine-config-daemon-qttbr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 10 20:13:21 crc kubenswrapper[4861]: I0310 20:13:21.992229 4861 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qttbr" podUID="771189c2-452d-4204-a0b7-abfe9ba62bd0" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 10 20:13:24 crc kubenswrapper[4861]: I0310 20:13:24.624974 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-hfnm9"] Mar 10 20:13:24 crc kubenswrapper[4861]: E0310 20:13:24.626094 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b28c3324-39f9-4a5f-b120-d20a44a48a9c" containerName="init" Mar 10 20:13:24 crc kubenswrapper[4861]: I0310 20:13:24.626127 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="b28c3324-39f9-4a5f-b120-d20a44a48a9c" containerName="init" Mar 10 20:13:24 crc kubenswrapper[4861]: E0310 20:13:24.626167 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b28c3324-39f9-4a5f-b120-d20a44a48a9c" containerName="dnsmasq-dns" Mar 10 20:13:24 crc kubenswrapper[4861]: I0310 20:13:24.626185 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="b28c3324-39f9-4a5f-b120-d20a44a48a9c" containerName="dnsmasq-dns" Mar 10 20:13:24 crc kubenswrapper[4861]: I0310 20:13:24.626624 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="b28c3324-39f9-4a5f-b120-d20a44a48a9c" containerName="dnsmasq-dns" Mar 10 
20:13:24 crc kubenswrapper[4861]: I0310 20:13:24.627671 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-hfnm9" Mar 10 20:13:24 crc kubenswrapper[4861]: I0310 20:13:24.631490 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret" Mar 10 20:13:24 crc kubenswrapper[4861]: I0310 20:13:24.636480 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-hfnm9"] Mar 10 20:13:24 crc kubenswrapper[4861]: I0310 20:13:24.704672 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ts9cj\" (UniqueName: \"kubernetes.io/projected/aa214fbc-3139-4b2e-bdfc-8c63b962e5d6-kube-api-access-ts9cj\") pod \"root-account-create-update-hfnm9\" (UID: \"aa214fbc-3139-4b2e-bdfc-8c63b962e5d6\") " pod="openstack/root-account-create-update-hfnm9" Mar 10 20:13:24 crc kubenswrapper[4861]: I0310 20:13:24.705154 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/aa214fbc-3139-4b2e-bdfc-8c63b962e5d6-operator-scripts\") pod \"root-account-create-update-hfnm9\" (UID: \"aa214fbc-3139-4b2e-bdfc-8c63b962e5d6\") " pod="openstack/root-account-create-update-hfnm9" Mar 10 20:13:24 crc kubenswrapper[4861]: I0310 20:13:24.807247 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/aa214fbc-3139-4b2e-bdfc-8c63b962e5d6-operator-scripts\") pod \"root-account-create-update-hfnm9\" (UID: \"aa214fbc-3139-4b2e-bdfc-8c63b962e5d6\") " pod="openstack/root-account-create-update-hfnm9" Mar 10 20:13:24 crc kubenswrapper[4861]: I0310 20:13:24.807327 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ts9cj\" (UniqueName: 
\"kubernetes.io/projected/aa214fbc-3139-4b2e-bdfc-8c63b962e5d6-kube-api-access-ts9cj\") pod \"root-account-create-update-hfnm9\" (UID: \"aa214fbc-3139-4b2e-bdfc-8c63b962e5d6\") " pod="openstack/root-account-create-update-hfnm9" Mar 10 20:13:24 crc kubenswrapper[4861]: I0310 20:13:24.808299 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/aa214fbc-3139-4b2e-bdfc-8c63b962e5d6-operator-scripts\") pod \"root-account-create-update-hfnm9\" (UID: \"aa214fbc-3139-4b2e-bdfc-8c63b962e5d6\") " pod="openstack/root-account-create-update-hfnm9" Mar 10 20:13:24 crc kubenswrapper[4861]: I0310 20:13:24.839852 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ts9cj\" (UniqueName: \"kubernetes.io/projected/aa214fbc-3139-4b2e-bdfc-8c63b962e5d6-kube-api-access-ts9cj\") pod \"root-account-create-update-hfnm9\" (UID: \"aa214fbc-3139-4b2e-bdfc-8c63b962e5d6\") " pod="openstack/root-account-create-update-hfnm9" Mar 10 20:13:24 crc kubenswrapper[4861]: I0310 20:13:24.951730 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-hfnm9" Mar 10 20:13:25 crc kubenswrapper[4861]: I0310 20:13:25.558354 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-hfnm9"] Mar 10 20:13:26 crc kubenswrapper[4861]: I0310 20:13:26.095294 4861 generic.go:334] "Generic (PLEG): container finished" podID="aa214fbc-3139-4b2e-bdfc-8c63b962e5d6" containerID="57a01b824f16d9553619d67c9453569cd148dda8988e1125dfc8226bcfc9e055" exitCode=0 Mar 10 20:13:26 crc kubenswrapper[4861]: I0310 20:13:26.095565 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-hfnm9" event={"ID":"aa214fbc-3139-4b2e-bdfc-8c63b962e5d6","Type":"ContainerDied","Data":"57a01b824f16d9553619d67c9453569cd148dda8988e1125dfc8226bcfc9e055"} Mar 10 20:13:26 crc kubenswrapper[4861]: I0310 20:13:26.095835 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-hfnm9" event={"ID":"aa214fbc-3139-4b2e-bdfc-8c63b962e5d6","Type":"ContainerStarted","Data":"b457c5af01e05b3fdf5091b9859d3afbe0eedd4e3e2b4ec78a7ea5df6a1bacc3"} Mar 10 20:13:27 crc kubenswrapper[4861]: I0310 20:13:27.494451 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-hfnm9" Mar 10 20:13:27 crc kubenswrapper[4861]: I0310 20:13:27.661482 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ts9cj\" (UniqueName: \"kubernetes.io/projected/aa214fbc-3139-4b2e-bdfc-8c63b962e5d6-kube-api-access-ts9cj\") pod \"aa214fbc-3139-4b2e-bdfc-8c63b962e5d6\" (UID: \"aa214fbc-3139-4b2e-bdfc-8c63b962e5d6\") " Mar 10 20:13:27 crc kubenswrapper[4861]: I0310 20:13:27.661576 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/aa214fbc-3139-4b2e-bdfc-8c63b962e5d6-operator-scripts\") pod \"aa214fbc-3139-4b2e-bdfc-8c63b962e5d6\" (UID: \"aa214fbc-3139-4b2e-bdfc-8c63b962e5d6\") " Mar 10 20:13:27 crc kubenswrapper[4861]: I0310 20:13:27.662676 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aa214fbc-3139-4b2e-bdfc-8c63b962e5d6-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "aa214fbc-3139-4b2e-bdfc-8c63b962e5d6" (UID: "aa214fbc-3139-4b2e-bdfc-8c63b962e5d6"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 10 20:13:27 crc kubenswrapper[4861]: I0310 20:13:27.670247 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aa214fbc-3139-4b2e-bdfc-8c63b962e5d6-kube-api-access-ts9cj" (OuterVolumeSpecName: "kube-api-access-ts9cj") pod "aa214fbc-3139-4b2e-bdfc-8c63b962e5d6" (UID: "aa214fbc-3139-4b2e-bdfc-8c63b962e5d6"). InnerVolumeSpecName "kube-api-access-ts9cj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 10 20:13:27 crc kubenswrapper[4861]: I0310 20:13:27.764844 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ts9cj\" (UniqueName: \"kubernetes.io/projected/aa214fbc-3139-4b2e-bdfc-8c63b962e5d6-kube-api-access-ts9cj\") on node \"crc\" DevicePath \"\"" Mar 10 20:13:27 crc kubenswrapper[4861]: I0310 20:13:27.764899 4861 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/aa214fbc-3139-4b2e-bdfc-8c63b962e5d6-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 10 20:13:28 crc kubenswrapper[4861]: I0310 20:13:28.114913 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-hfnm9" event={"ID":"aa214fbc-3139-4b2e-bdfc-8c63b962e5d6","Type":"ContainerDied","Data":"b457c5af01e05b3fdf5091b9859d3afbe0eedd4e3e2b4ec78a7ea5df6a1bacc3"} Mar 10 20:13:28 crc kubenswrapper[4861]: I0310 20:13:28.114998 4861 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b457c5af01e05b3fdf5091b9859d3afbe0eedd4e3e2b4ec78a7ea5df6a1bacc3" Mar 10 20:13:28 crc kubenswrapper[4861]: I0310 20:13:28.115028 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-hfnm9" Mar 10 20:13:31 crc kubenswrapper[4861]: I0310 20:13:31.105481 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-hfnm9"] Mar 10 20:13:31 crc kubenswrapper[4861]: I0310 20:13:31.117115 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-hfnm9"] Mar 10 20:13:32 crc kubenswrapper[4861]: I0310 20:13:32.974842 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aa214fbc-3139-4b2e-bdfc-8c63b962e5d6" path="/var/lib/kubelet/pods/aa214fbc-3139-4b2e-bdfc-8c63b962e5d6/volumes" Mar 10 20:13:36 crc kubenswrapper[4861]: I0310 20:13:36.127241 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-c4kgh"] Mar 10 20:13:36 crc kubenswrapper[4861]: E0310 20:13:36.127980 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aa214fbc-3139-4b2e-bdfc-8c63b962e5d6" containerName="mariadb-account-create-update" Mar 10 20:13:36 crc kubenswrapper[4861]: I0310 20:13:36.127997 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="aa214fbc-3139-4b2e-bdfc-8c63b962e5d6" containerName="mariadb-account-create-update" Mar 10 20:13:36 crc kubenswrapper[4861]: I0310 20:13:36.128238 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="aa214fbc-3139-4b2e-bdfc-8c63b962e5d6" containerName="mariadb-account-create-update" Mar 10 20:13:36 crc kubenswrapper[4861]: I0310 20:13:36.128972 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-c4kgh" Mar 10 20:13:36 crc kubenswrapper[4861]: I0310 20:13:36.131539 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-mariadb-root-db-secret" Mar 10 20:13:36 crc kubenswrapper[4861]: I0310 20:13:36.144082 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-c4kgh"] Mar 10 20:13:36 crc kubenswrapper[4861]: I0310 20:13:36.220949 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9x8cn\" (UniqueName: \"kubernetes.io/projected/60616fa5-d557-49dc-850c-54b29c8080c7-kube-api-access-9x8cn\") pod \"root-account-create-update-c4kgh\" (UID: \"60616fa5-d557-49dc-850c-54b29c8080c7\") " pod="openstack/root-account-create-update-c4kgh" Mar 10 20:13:36 crc kubenswrapper[4861]: I0310 20:13:36.221031 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/60616fa5-d557-49dc-850c-54b29c8080c7-operator-scripts\") pod \"root-account-create-update-c4kgh\" (UID: \"60616fa5-d557-49dc-850c-54b29c8080c7\") " pod="openstack/root-account-create-update-c4kgh" Mar 10 20:13:36 crc kubenswrapper[4861]: I0310 20:13:36.322847 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9x8cn\" (UniqueName: \"kubernetes.io/projected/60616fa5-d557-49dc-850c-54b29c8080c7-kube-api-access-9x8cn\") pod \"root-account-create-update-c4kgh\" (UID: \"60616fa5-d557-49dc-850c-54b29c8080c7\") " pod="openstack/root-account-create-update-c4kgh" Mar 10 20:13:36 crc kubenswrapper[4861]: I0310 20:13:36.322889 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/60616fa5-d557-49dc-850c-54b29c8080c7-operator-scripts\") pod \"root-account-create-update-c4kgh\" (UID: 
\"60616fa5-d557-49dc-850c-54b29c8080c7\") " pod="openstack/root-account-create-update-c4kgh" Mar 10 20:13:36 crc kubenswrapper[4861]: I0310 20:13:36.323685 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/60616fa5-d557-49dc-850c-54b29c8080c7-operator-scripts\") pod \"root-account-create-update-c4kgh\" (UID: \"60616fa5-d557-49dc-850c-54b29c8080c7\") " pod="openstack/root-account-create-update-c4kgh" Mar 10 20:13:36 crc kubenswrapper[4861]: I0310 20:13:36.345375 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9x8cn\" (UniqueName: \"kubernetes.io/projected/60616fa5-d557-49dc-850c-54b29c8080c7-kube-api-access-9x8cn\") pod \"root-account-create-update-c4kgh\" (UID: \"60616fa5-d557-49dc-850c-54b29c8080c7\") " pod="openstack/root-account-create-update-c4kgh" Mar 10 20:13:36 crc kubenswrapper[4861]: I0310 20:13:36.450796 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-c4kgh" Mar 10 20:13:36 crc kubenswrapper[4861]: I0310 20:13:36.701986 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-c4kgh"] Mar 10 20:13:37 crc kubenswrapper[4861]: I0310 20:13:37.194241 4861 generic.go:334] "Generic (PLEG): container finished" podID="60616fa5-d557-49dc-850c-54b29c8080c7" containerID="d0258047871923cfb0a0138e1b3ba6c4da5cd5fc860714ac875e342fd4a05195" exitCode=0 Mar 10 20:13:37 crc kubenswrapper[4861]: I0310 20:13:37.194370 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-c4kgh" event={"ID":"60616fa5-d557-49dc-850c-54b29c8080c7","Type":"ContainerDied","Data":"d0258047871923cfb0a0138e1b3ba6c4da5cd5fc860714ac875e342fd4a05195"} Mar 10 20:13:37 crc kubenswrapper[4861]: I0310 20:13:37.196044 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-c4kgh" event={"ID":"60616fa5-d557-49dc-850c-54b29c8080c7","Type":"ContainerStarted","Data":"c89974f2c1662b8acfa68272d8c32eb6aeccfd8eacfbdc4ac61fc52ca3a331f9"} Mar 10 20:13:38 crc kubenswrapper[4861]: I0310 20:13:38.552646 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-c4kgh" Mar 10 20:13:38 crc kubenswrapper[4861]: I0310 20:13:38.659626 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/60616fa5-d557-49dc-850c-54b29c8080c7-operator-scripts\") pod \"60616fa5-d557-49dc-850c-54b29c8080c7\" (UID: \"60616fa5-d557-49dc-850c-54b29c8080c7\") " Mar 10 20:13:38 crc kubenswrapper[4861]: I0310 20:13:38.660814 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9x8cn\" (UniqueName: \"kubernetes.io/projected/60616fa5-d557-49dc-850c-54b29c8080c7-kube-api-access-9x8cn\") pod \"60616fa5-d557-49dc-850c-54b29c8080c7\" (UID: \"60616fa5-d557-49dc-850c-54b29c8080c7\") " Mar 10 20:13:38 crc kubenswrapper[4861]: I0310 20:13:38.660619 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/60616fa5-d557-49dc-850c-54b29c8080c7-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "60616fa5-d557-49dc-850c-54b29c8080c7" (UID: "60616fa5-d557-49dc-850c-54b29c8080c7"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 10 20:13:38 crc kubenswrapper[4861]: I0310 20:13:38.666606 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/60616fa5-d557-49dc-850c-54b29c8080c7-kube-api-access-9x8cn" (OuterVolumeSpecName: "kube-api-access-9x8cn") pod "60616fa5-d557-49dc-850c-54b29c8080c7" (UID: "60616fa5-d557-49dc-850c-54b29c8080c7"). InnerVolumeSpecName "kube-api-access-9x8cn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 10 20:13:38 crc kubenswrapper[4861]: I0310 20:13:38.763467 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9x8cn\" (UniqueName: \"kubernetes.io/projected/60616fa5-d557-49dc-850c-54b29c8080c7-kube-api-access-9x8cn\") on node \"crc\" DevicePath \"\"" Mar 10 20:13:38 crc kubenswrapper[4861]: I0310 20:13:38.763528 4861 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/60616fa5-d557-49dc-850c-54b29c8080c7-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 10 20:13:39 crc kubenswrapper[4861]: I0310 20:13:39.210963 4861 generic.go:334] "Generic (PLEG): container finished" podID="daacc843-cac9-4039-a082-56a2fd9391f9" containerID="a1f2fa374fcde2b1a6400111146c13b03076a5c88c4417be4d60516309c63112" exitCode=0 Mar 10 20:13:39 crc kubenswrapper[4861]: I0310 20:13:39.211020 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"daacc843-cac9-4039-a082-56a2fd9391f9","Type":"ContainerDied","Data":"a1f2fa374fcde2b1a6400111146c13b03076a5c88c4417be4d60516309c63112"} Mar 10 20:13:39 crc kubenswrapper[4861]: I0310 20:13:39.213994 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-c4kgh" event={"ID":"60616fa5-d557-49dc-850c-54b29c8080c7","Type":"ContainerDied","Data":"c89974f2c1662b8acfa68272d8c32eb6aeccfd8eacfbdc4ac61fc52ca3a331f9"} Mar 10 20:13:39 crc kubenswrapper[4861]: I0310 20:13:39.214276 4861 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c89974f2c1662b8acfa68272d8c32eb6aeccfd8eacfbdc4ac61fc52ca3a331f9" Mar 10 20:13:39 crc kubenswrapper[4861]: I0310 20:13:39.214212 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-c4kgh" Mar 10 20:13:40 crc kubenswrapper[4861]: I0310 20:13:40.225061 4861 generic.go:334] "Generic (PLEG): container finished" podID="435b131b-9c41-4320-9bf9-47bdb8011c07" containerID="7ae1e23c33c5be7e877fcdb77d99857c6890d4835a90260c0555fc46e0fe0878" exitCode=0 Mar 10 20:13:40 crc kubenswrapper[4861]: I0310 20:13:40.225223 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"435b131b-9c41-4320-9bf9-47bdb8011c07","Type":"ContainerDied","Data":"7ae1e23c33c5be7e877fcdb77d99857c6890d4835a90260c0555fc46e0fe0878"} Mar 10 20:13:40 crc kubenswrapper[4861]: I0310 20:13:40.228419 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"daacc843-cac9-4039-a082-56a2fd9391f9","Type":"ContainerStarted","Data":"bb8f21d6ca8c4e3f7abf208bbe2396bbce10f7fdbfc86657add4e36e97a38032"} Mar 10 20:13:40 crc kubenswrapper[4861]: I0310 20:13:40.229268 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Mar 10 20:13:40 crc kubenswrapper[4861]: I0310 20:13:40.334053 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=37.334024341 podStartE2EDuration="37.334024341s" podCreationTimestamp="2026-03-10 20:13:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-10 20:13:40.32883516 +0000 UTC m=+5164.092271170" watchObservedRunningTime="2026-03-10 20:13:40.334024341 +0000 UTC m=+5164.097460371" Mar 10 20:13:41 crc kubenswrapper[4861]: I0310 20:13:41.240243 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"435b131b-9c41-4320-9bf9-47bdb8011c07","Type":"ContainerStarted","Data":"fbaa0664200b14e2b861110b786873b8b950e88def72c41f5e152a5c5c4eaf9e"} Mar 10 20:13:41 
crc kubenswrapper[4861]: I0310 20:13:41.240925 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Mar 10 20:13:51 crc kubenswrapper[4861]: I0310 20:13:51.992147 4861 patch_prober.go:28] interesting pod/machine-config-daemon-qttbr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 10 20:13:51 crc kubenswrapper[4861]: I0310 20:13:51.992847 4861 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qttbr" podUID="771189c2-452d-4204-a0b7-abfe9ba62bd0" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 10 20:13:51 crc kubenswrapper[4861]: I0310 20:13:51.992908 4861 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-qttbr" Mar 10 20:13:51 crc kubenswrapper[4861]: I0310 20:13:51.993925 4861 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"3089d97ac4451c4b87e72224e8c126b31f30fe397bed57e36232876359b0059f"} pod="openshift-machine-config-operator/machine-config-daemon-qttbr" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 10 20:13:51 crc kubenswrapper[4861]: I0310 20:13:51.994026 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-qttbr" podUID="771189c2-452d-4204-a0b7-abfe9ba62bd0" containerName="machine-config-daemon" containerID="cri-o://3089d97ac4451c4b87e72224e8c126b31f30fe397bed57e36232876359b0059f" gracePeriod=600 Mar 10 20:13:52 crc kubenswrapper[4861]: E0310 20:13:52.229944 4861 
cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod771189c2_452d_4204_a0b7_abfe9ba62bd0.slice/crio-conmon-3089d97ac4451c4b87e72224e8c126b31f30fe397bed57e36232876359b0059f.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod771189c2_452d_4204_a0b7_abfe9ba62bd0.slice/crio-3089d97ac4451c4b87e72224e8c126b31f30fe397bed57e36232876359b0059f.scope\": RecentStats: unable to find data in memory cache]" Mar 10 20:13:52 crc kubenswrapper[4861]: I0310 20:13:52.332235 4861 generic.go:334] "Generic (PLEG): container finished" podID="771189c2-452d-4204-a0b7-abfe9ba62bd0" containerID="3089d97ac4451c4b87e72224e8c126b31f30fe397bed57e36232876359b0059f" exitCode=0 Mar 10 20:13:52 crc kubenswrapper[4861]: I0310 20:13:52.332285 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qttbr" event={"ID":"771189c2-452d-4204-a0b7-abfe9ba62bd0","Type":"ContainerDied","Data":"3089d97ac4451c4b87e72224e8c126b31f30fe397bed57e36232876359b0059f"} Mar 10 20:13:52 crc kubenswrapper[4861]: I0310 20:13:52.332383 4861 scope.go:117] "RemoveContainer" containerID="66fd05d7ca4c39567b3943c90d9f448b022d1eff623e3d5891bb70d66e564943" Mar 10 20:13:53 crc kubenswrapper[4861]: I0310 20:13:53.347868 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qttbr" event={"ID":"771189c2-452d-4204-a0b7-abfe9ba62bd0","Type":"ContainerStarted","Data":"cd2259ee04441075924dcee5b49ce2f64778d63f5c1b13672065b616c80497ca"} Mar 10 20:13:53 crc kubenswrapper[4861]: I0310 20:13:53.377047 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=50.377030422 podStartE2EDuration="50.377030422s" podCreationTimestamp="2026-03-10 20:13:03 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-10 20:13:41.279073539 +0000 UTC m=+5165.042509549" watchObservedRunningTime="2026-03-10 20:13:53.377030422 +0000 UTC m=+5177.140466382" Mar 10 20:13:55 crc kubenswrapper[4861]: I0310 20:13:55.069019 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Mar 10 20:13:55 crc kubenswrapper[4861]: I0310 20:13:55.142927 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Mar 10 20:14:00 crc kubenswrapper[4861]: I0310 20:14:00.160806 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29552894-sz8dg"] Mar 10 20:14:00 crc kubenswrapper[4861]: E0310 20:14:00.162001 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="60616fa5-d557-49dc-850c-54b29c8080c7" containerName="mariadb-account-create-update" Mar 10 20:14:00 crc kubenswrapper[4861]: I0310 20:14:00.162024 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="60616fa5-d557-49dc-850c-54b29c8080c7" containerName="mariadb-account-create-update" Mar 10 20:14:00 crc kubenswrapper[4861]: I0310 20:14:00.162303 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="60616fa5-d557-49dc-850c-54b29c8080c7" containerName="mariadb-account-create-update" Mar 10 20:14:00 crc kubenswrapper[4861]: I0310 20:14:00.163117 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29552894-sz8dg" Mar 10 20:14:00 crc kubenswrapper[4861]: I0310 20:14:00.233388 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 10 20:14:00 crc kubenswrapper[4861]: I0310 20:14:00.234593 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 10 20:14:00 crc kubenswrapper[4861]: I0310 20:14:00.234823 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-gfbj2" Mar 10 20:14:00 crc kubenswrapper[4861]: I0310 20:14:00.245141 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29552894-sz8dg"] Mar 10 20:14:00 crc kubenswrapper[4861]: I0310 20:14:00.249295 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7n6f4\" (UniqueName: \"kubernetes.io/projected/1758ee36-7dff-46e2-b597-434fbf60334d-kube-api-access-7n6f4\") pod \"auto-csr-approver-29552894-sz8dg\" (UID: \"1758ee36-7dff-46e2-b597-434fbf60334d\") " pod="openshift-infra/auto-csr-approver-29552894-sz8dg" Mar 10 20:14:00 crc kubenswrapper[4861]: I0310 20:14:00.351453 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7n6f4\" (UniqueName: \"kubernetes.io/projected/1758ee36-7dff-46e2-b597-434fbf60334d-kube-api-access-7n6f4\") pod \"auto-csr-approver-29552894-sz8dg\" (UID: \"1758ee36-7dff-46e2-b597-434fbf60334d\") " pod="openshift-infra/auto-csr-approver-29552894-sz8dg" Mar 10 20:14:00 crc kubenswrapper[4861]: I0310 20:14:00.370902 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7n6f4\" (UniqueName: \"kubernetes.io/projected/1758ee36-7dff-46e2-b597-434fbf60334d-kube-api-access-7n6f4\") pod \"auto-csr-approver-29552894-sz8dg\" (UID: \"1758ee36-7dff-46e2-b597-434fbf60334d\") " 
pod="openshift-infra/auto-csr-approver-29552894-sz8dg" Mar 10 20:14:00 crc kubenswrapper[4861]: I0310 20:14:00.551344 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29552894-sz8dg" Mar 10 20:14:00 crc kubenswrapper[4861]: I0310 20:14:00.838909 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29552894-sz8dg"] Mar 10 20:14:00 crc kubenswrapper[4861]: W0310 20:14:00.840816 4861 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1758ee36_7dff_46e2_b597_434fbf60334d.slice/crio-0e3005ec8bc912c2b11d38142f42a1608b1038a60972bd2f7d3967bdc4d29341 WatchSource:0}: Error finding container 0e3005ec8bc912c2b11d38142f42a1608b1038a60972bd2f7d3967bdc4d29341: Status 404 returned error can't find the container with id 0e3005ec8bc912c2b11d38142f42a1608b1038a60972bd2f7d3967bdc4d29341 Mar 10 20:14:00 crc kubenswrapper[4861]: I0310 20:14:00.843764 4861 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 10 20:14:01 crc kubenswrapper[4861]: I0310 20:14:01.415280 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29552894-sz8dg" event={"ID":"1758ee36-7dff-46e2-b597-434fbf60334d","Type":"ContainerStarted","Data":"0e3005ec8bc912c2b11d38142f42a1608b1038a60972bd2f7d3967bdc4d29341"} Mar 10 20:14:02 crc kubenswrapper[4861]: I0310 20:14:02.426547 4861 generic.go:334] "Generic (PLEG): container finished" podID="1758ee36-7dff-46e2-b597-434fbf60334d" containerID="605c81f9b82753ffa9fc02eeb7df041b3925d1b11c15ec85c21f773ec9c9ff4e" exitCode=0 Mar 10 20:14:02 crc kubenswrapper[4861]: I0310 20:14:02.426673 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29552894-sz8dg" 
event={"ID":"1758ee36-7dff-46e2-b597-434fbf60334d","Type":"ContainerDied","Data":"605c81f9b82753ffa9fc02eeb7df041b3925d1b11c15ec85c21f773ec9c9ff4e"} Mar 10 20:14:03 crc kubenswrapper[4861]: I0310 20:14:03.822276 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29552894-sz8dg" Mar 10 20:14:04 crc kubenswrapper[4861]: I0310 20:14:04.003811 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7n6f4\" (UniqueName: \"kubernetes.io/projected/1758ee36-7dff-46e2-b597-434fbf60334d-kube-api-access-7n6f4\") pod \"1758ee36-7dff-46e2-b597-434fbf60334d\" (UID: \"1758ee36-7dff-46e2-b597-434fbf60334d\") " Mar 10 20:14:04 crc kubenswrapper[4861]: I0310 20:14:04.013090 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1758ee36-7dff-46e2-b597-434fbf60334d-kube-api-access-7n6f4" (OuterVolumeSpecName: "kube-api-access-7n6f4") pod "1758ee36-7dff-46e2-b597-434fbf60334d" (UID: "1758ee36-7dff-46e2-b597-434fbf60334d"). InnerVolumeSpecName "kube-api-access-7n6f4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 10 20:14:04 crc kubenswrapper[4861]: I0310 20:14:04.106959 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7n6f4\" (UniqueName: \"kubernetes.io/projected/1758ee36-7dff-46e2-b597-434fbf60334d-kube-api-access-7n6f4\") on node \"crc\" DevicePath \"\"" Mar 10 20:14:04 crc kubenswrapper[4861]: I0310 20:14:04.465798 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29552894-sz8dg" event={"ID":"1758ee36-7dff-46e2-b597-434fbf60334d","Type":"ContainerDied","Data":"0e3005ec8bc912c2b11d38142f42a1608b1038a60972bd2f7d3967bdc4d29341"} Mar 10 20:14:04 crc kubenswrapper[4861]: I0310 20:14:04.465889 4861 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0e3005ec8bc912c2b11d38142f42a1608b1038a60972bd2f7d3967bdc4d29341" Mar 10 20:14:04 crc kubenswrapper[4861]: I0310 20:14:04.465891 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29552894-sz8dg" Mar 10 20:14:04 crc kubenswrapper[4861]: I0310 20:14:04.688495 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-66d5bf7c87-hzp8x"] Mar 10 20:14:04 crc kubenswrapper[4861]: E0310 20:14:04.688979 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1758ee36-7dff-46e2-b597-434fbf60334d" containerName="oc" Mar 10 20:14:04 crc kubenswrapper[4861]: I0310 20:14:04.689005 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="1758ee36-7dff-46e2-b597-434fbf60334d" containerName="oc" Mar 10 20:14:04 crc kubenswrapper[4861]: I0310 20:14:04.689267 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="1758ee36-7dff-46e2-b597-434fbf60334d" containerName="oc" Mar 10 20:14:04 crc kubenswrapper[4861]: I0310 20:14:04.690509 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-66d5bf7c87-hzp8x" Mar 10 20:14:04 crc kubenswrapper[4861]: I0310 20:14:04.713977 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-66d5bf7c87-hzp8x"] Mar 10 20:14:04 crc kubenswrapper[4861]: I0310 20:14:04.816824 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3aa249cd-45ec-4593-a974-3b909abdbd57-config\") pod \"dnsmasq-dns-66d5bf7c87-hzp8x\" (UID: \"3aa249cd-45ec-4593-a974-3b909abdbd57\") " pod="openstack/dnsmasq-dns-66d5bf7c87-hzp8x" Mar 10 20:14:04 crc kubenswrapper[4861]: I0310 20:14:04.816879 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zjtv5\" (UniqueName: \"kubernetes.io/projected/3aa249cd-45ec-4593-a974-3b909abdbd57-kube-api-access-zjtv5\") pod \"dnsmasq-dns-66d5bf7c87-hzp8x\" (UID: \"3aa249cd-45ec-4593-a974-3b909abdbd57\") " pod="openstack/dnsmasq-dns-66d5bf7c87-hzp8x" Mar 10 20:14:04 crc kubenswrapper[4861]: I0310 20:14:04.816983 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3aa249cd-45ec-4593-a974-3b909abdbd57-dns-svc\") pod \"dnsmasq-dns-66d5bf7c87-hzp8x\" (UID: \"3aa249cd-45ec-4593-a974-3b909abdbd57\") " pod="openstack/dnsmasq-dns-66d5bf7c87-hzp8x" Mar 10 20:14:04 crc kubenswrapper[4861]: I0310 20:14:04.893982 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29552888-dkjfg"] Mar 10 20:14:04 crc kubenswrapper[4861]: I0310 20:14:04.894029 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29552888-dkjfg"] Mar 10 20:14:04 crc kubenswrapper[4861]: I0310 20:14:04.918234 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/3aa249cd-45ec-4593-a974-3b909abdbd57-dns-svc\") pod \"dnsmasq-dns-66d5bf7c87-hzp8x\" (UID: \"3aa249cd-45ec-4593-a974-3b909abdbd57\") " pod="openstack/dnsmasq-dns-66d5bf7c87-hzp8x" Mar 10 20:14:04 crc kubenswrapper[4861]: I0310 20:14:04.918323 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3aa249cd-45ec-4593-a974-3b909abdbd57-config\") pod \"dnsmasq-dns-66d5bf7c87-hzp8x\" (UID: \"3aa249cd-45ec-4593-a974-3b909abdbd57\") " pod="openstack/dnsmasq-dns-66d5bf7c87-hzp8x" Mar 10 20:14:04 crc kubenswrapper[4861]: I0310 20:14:04.918357 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zjtv5\" (UniqueName: \"kubernetes.io/projected/3aa249cd-45ec-4593-a974-3b909abdbd57-kube-api-access-zjtv5\") pod \"dnsmasq-dns-66d5bf7c87-hzp8x\" (UID: \"3aa249cd-45ec-4593-a974-3b909abdbd57\") " pod="openstack/dnsmasq-dns-66d5bf7c87-hzp8x" Mar 10 20:14:04 crc kubenswrapper[4861]: I0310 20:14:04.919408 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3aa249cd-45ec-4593-a974-3b909abdbd57-config\") pod \"dnsmasq-dns-66d5bf7c87-hzp8x\" (UID: \"3aa249cd-45ec-4593-a974-3b909abdbd57\") " pod="openstack/dnsmasq-dns-66d5bf7c87-hzp8x" Mar 10 20:14:04 crc kubenswrapper[4861]: I0310 20:14:04.919459 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3aa249cd-45ec-4593-a974-3b909abdbd57-dns-svc\") pod \"dnsmasq-dns-66d5bf7c87-hzp8x\" (UID: \"3aa249cd-45ec-4593-a974-3b909abdbd57\") " pod="openstack/dnsmasq-dns-66d5bf7c87-hzp8x" Mar 10 20:14:04 crc kubenswrapper[4861]: I0310 20:14:04.946672 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zjtv5\" (UniqueName: \"kubernetes.io/projected/3aa249cd-45ec-4593-a974-3b909abdbd57-kube-api-access-zjtv5\") pod 
\"dnsmasq-dns-66d5bf7c87-hzp8x\" (UID: \"3aa249cd-45ec-4593-a974-3b909abdbd57\") " pod="openstack/dnsmasq-dns-66d5bf7c87-hzp8x" Mar 10 20:14:04 crc kubenswrapper[4861]: I0310 20:14:04.966685 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5e5fa081-b5f9-44b3-ba83-5b8401bf883e" path="/var/lib/kubelet/pods/5e5fa081-b5f9-44b3-ba83-5b8401bf883e/volumes" Mar 10 20:14:05 crc kubenswrapper[4861]: I0310 20:14:05.010685 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-66d5bf7c87-hzp8x" Mar 10 20:14:05 crc kubenswrapper[4861]: I0310 20:14:05.316565 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-66d5bf7c87-hzp8x"] Mar 10 20:14:05 crc kubenswrapper[4861]: I0310 20:14:05.476781 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-66d5bf7c87-hzp8x" event={"ID":"3aa249cd-45ec-4593-a974-3b909abdbd57","Type":"ContainerStarted","Data":"7d9a1b43a1054f6d030ceca2eab34de72ce61a0b243b6f44b2e15aff0cf722f6"} Mar 10 20:14:06 crc kubenswrapper[4861]: I0310 20:14:06.010802 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Mar 10 20:14:06 crc kubenswrapper[4861]: I0310 20:14:06.074476 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Mar 10 20:14:06 crc kubenswrapper[4861]: I0310 20:14:06.485578 4861 generic.go:334] "Generic (PLEG): container finished" podID="3aa249cd-45ec-4593-a974-3b909abdbd57" containerID="03eed1693bb2f311edae051070bb719aaa307c00134c5f9a77b2c1122c211c9e" exitCode=0 Mar 10 20:14:06 crc kubenswrapper[4861]: I0310 20:14:06.485620 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-66d5bf7c87-hzp8x" event={"ID":"3aa249cd-45ec-4593-a974-3b909abdbd57","Type":"ContainerDied","Data":"03eed1693bb2f311edae051070bb719aaa307c00134c5f9a77b2c1122c211c9e"} Mar 10 20:14:07 crc kubenswrapper[4861]: I0310 20:14:07.495778 4861 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-66d5bf7c87-hzp8x" event={"ID":"3aa249cd-45ec-4593-a974-3b909abdbd57","Type":"ContainerStarted","Data":"68019a020b1d65d7bb0ce56083c934c12e0c843cbd4083fb25a4b84ab1783111"} Mar 10 20:14:07 crc kubenswrapper[4861]: I0310 20:14:07.496338 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-66d5bf7c87-hzp8x" Mar 10 20:14:07 crc kubenswrapper[4861]: I0310 20:14:07.520723 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-66d5bf7c87-hzp8x" podStartSLOduration=3.520687036 podStartE2EDuration="3.520687036s" podCreationTimestamp="2026-03-10 20:14:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-10 20:14:07.515566807 +0000 UTC m=+5191.279002807" watchObservedRunningTime="2026-03-10 20:14:07.520687036 +0000 UTC m=+5191.284123006" Mar 10 20:14:10 crc kubenswrapper[4861]: I0310 20:14:10.024782 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-cell1-server-0" podUID="daacc843-cac9-4039-a082-56a2fd9391f9" containerName="rabbitmq" containerID="cri-o://bb8f21d6ca8c4e3f7abf208bbe2396bbce10f7fdbfc86657add4e36e97a38032" gracePeriod=604797 Mar 10 20:14:10 crc kubenswrapper[4861]: I0310 20:14:10.112175 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-0" podUID="435b131b-9c41-4320-9bf9-47bdb8011c07" containerName="rabbitmq" containerID="cri-o://fbaa0664200b14e2b861110b786873b8b950e88def72c41f5e152a5c5c4eaf9e" gracePeriod=604796 Mar 10 20:14:15 crc kubenswrapper[4861]: I0310 20:14:15.011878 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-66d5bf7c87-hzp8x" Mar 10 20:14:15 crc kubenswrapper[4861]: I0310 20:14:15.071129 4861 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openstack/rabbitmq-cell1-server-0" podUID="daacc843-cac9-4039-a082-56a2fd9391f9" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.1.30:5671: connect: connection refused" Mar 10 20:14:15 crc kubenswrapper[4861]: I0310 20:14:15.078480 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-ff89b6977-wwftj"] Mar 10 20:14:15 crc kubenswrapper[4861]: I0310 20:14:15.078922 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-ff89b6977-wwftj" podUID="31f9d3df-2233-452c-835c-2da6c3ffb61e" containerName="dnsmasq-dns" containerID="cri-o://3772bc760d1752ab2f0f8d7856cfb60b4e8f89139e63c6bf12d948820f32a3fe" gracePeriod=10 Mar 10 20:14:15 crc kubenswrapper[4861]: I0310 20:14:15.140508 4861 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="435b131b-9c41-4320-9bf9-47bdb8011c07" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.1.31:5671: connect: connection refused" Mar 10 20:14:15 crc kubenswrapper[4861]: I0310 20:14:15.520201 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-ff89b6977-wwftj" Mar 10 20:14:15 crc kubenswrapper[4861]: I0310 20:14:15.609992 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/31f9d3df-2233-452c-835c-2da6c3ffb61e-config\") pod \"31f9d3df-2233-452c-835c-2da6c3ffb61e\" (UID: \"31f9d3df-2233-452c-835c-2da6c3ffb61e\") " Mar 10 20:14:15 crc kubenswrapper[4861]: I0310 20:14:15.610038 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-txhjk\" (UniqueName: \"kubernetes.io/projected/31f9d3df-2233-452c-835c-2da6c3ffb61e-kube-api-access-txhjk\") pod \"31f9d3df-2233-452c-835c-2da6c3ffb61e\" (UID: \"31f9d3df-2233-452c-835c-2da6c3ffb61e\") " Mar 10 20:14:15 crc kubenswrapper[4861]: I0310 20:14:15.610065 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/31f9d3df-2233-452c-835c-2da6c3ffb61e-dns-svc\") pod \"31f9d3df-2233-452c-835c-2da6c3ffb61e\" (UID: \"31f9d3df-2233-452c-835c-2da6c3ffb61e\") " Mar 10 20:14:15 crc kubenswrapper[4861]: I0310 20:14:15.615058 4861 generic.go:334] "Generic (PLEG): container finished" podID="31f9d3df-2233-452c-835c-2da6c3ffb61e" containerID="3772bc760d1752ab2f0f8d7856cfb60b4e8f89139e63c6bf12d948820f32a3fe" exitCode=0 Mar 10 20:14:15 crc kubenswrapper[4861]: I0310 20:14:15.615133 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-ff89b6977-wwftj" event={"ID":"31f9d3df-2233-452c-835c-2da6c3ffb61e","Type":"ContainerDied","Data":"3772bc760d1752ab2f0f8d7856cfb60b4e8f89139e63c6bf12d948820f32a3fe"} Mar 10 20:14:15 crc kubenswrapper[4861]: I0310 20:14:15.615329 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-ff89b6977-wwftj" event={"ID":"31f9d3df-2233-452c-835c-2da6c3ffb61e","Type":"ContainerDied","Data":"b799a14c4d62b4884c156fad65475eedf8749aaab8dcfec24a54e2dc81ae89a5"} Mar 10 
20:14:15 crc kubenswrapper[4861]: I0310 20:14:15.615151 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-ff89b6977-wwftj" Mar 10 20:14:15 crc kubenswrapper[4861]: I0310 20:14:15.615352 4861 scope.go:117] "RemoveContainer" containerID="3772bc760d1752ab2f0f8d7856cfb60b4e8f89139e63c6bf12d948820f32a3fe" Mar 10 20:14:15 crc kubenswrapper[4861]: I0310 20:14:15.623920 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31f9d3df-2233-452c-835c-2da6c3ffb61e-kube-api-access-txhjk" (OuterVolumeSpecName: "kube-api-access-txhjk") pod "31f9d3df-2233-452c-835c-2da6c3ffb61e" (UID: "31f9d3df-2233-452c-835c-2da6c3ffb61e"). InnerVolumeSpecName "kube-api-access-txhjk". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 10 20:14:15 crc kubenswrapper[4861]: I0310 20:14:15.649493 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31f9d3df-2233-452c-835c-2da6c3ffb61e-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "31f9d3df-2233-452c-835c-2da6c3ffb61e" (UID: "31f9d3df-2233-452c-835c-2da6c3ffb61e"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 10 20:14:15 crc kubenswrapper[4861]: I0310 20:14:15.677316 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31f9d3df-2233-452c-835c-2da6c3ffb61e-config" (OuterVolumeSpecName: "config") pod "31f9d3df-2233-452c-835c-2da6c3ffb61e" (UID: "31f9d3df-2233-452c-835c-2da6c3ffb61e"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 10 20:14:15 crc kubenswrapper[4861]: I0310 20:14:15.700600 4861 scope.go:117] "RemoveContainer" containerID="c77c852521cd266e9338df6772f0b8f210bb220cfa0ac0c2ba2ab96ebea71922" Mar 10 20:14:15 crc kubenswrapper[4861]: I0310 20:14:15.713611 4861 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/31f9d3df-2233-452c-835c-2da6c3ffb61e-config\") on node \"crc\" DevicePath \"\"" Mar 10 20:14:15 crc kubenswrapper[4861]: I0310 20:14:15.713914 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-txhjk\" (UniqueName: \"kubernetes.io/projected/31f9d3df-2233-452c-835c-2da6c3ffb61e-kube-api-access-txhjk\") on node \"crc\" DevicePath \"\"" Mar 10 20:14:15 crc kubenswrapper[4861]: I0310 20:14:15.713929 4861 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/31f9d3df-2233-452c-835c-2da6c3ffb61e-dns-svc\") on node \"crc\" DevicePath \"\"" Mar 10 20:14:15 crc kubenswrapper[4861]: I0310 20:14:15.727894 4861 scope.go:117] "RemoveContainer" containerID="3772bc760d1752ab2f0f8d7856cfb60b4e8f89139e63c6bf12d948820f32a3fe" Mar 10 20:14:15 crc kubenswrapper[4861]: E0310 20:14:15.728501 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3772bc760d1752ab2f0f8d7856cfb60b4e8f89139e63c6bf12d948820f32a3fe\": container with ID starting with 3772bc760d1752ab2f0f8d7856cfb60b4e8f89139e63c6bf12d948820f32a3fe not found: ID does not exist" containerID="3772bc760d1752ab2f0f8d7856cfb60b4e8f89139e63c6bf12d948820f32a3fe" Mar 10 20:14:15 crc kubenswrapper[4861]: I0310 20:14:15.728564 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3772bc760d1752ab2f0f8d7856cfb60b4e8f89139e63c6bf12d948820f32a3fe"} err="failed to get container status 
\"3772bc760d1752ab2f0f8d7856cfb60b4e8f89139e63c6bf12d948820f32a3fe\": rpc error: code = NotFound desc = could not find container \"3772bc760d1752ab2f0f8d7856cfb60b4e8f89139e63c6bf12d948820f32a3fe\": container with ID starting with 3772bc760d1752ab2f0f8d7856cfb60b4e8f89139e63c6bf12d948820f32a3fe not found: ID does not exist" Mar 10 20:14:15 crc kubenswrapper[4861]: I0310 20:14:15.728607 4861 scope.go:117] "RemoveContainer" containerID="c77c852521cd266e9338df6772f0b8f210bb220cfa0ac0c2ba2ab96ebea71922" Mar 10 20:14:15 crc kubenswrapper[4861]: E0310 20:14:15.729200 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c77c852521cd266e9338df6772f0b8f210bb220cfa0ac0c2ba2ab96ebea71922\": container with ID starting with c77c852521cd266e9338df6772f0b8f210bb220cfa0ac0c2ba2ab96ebea71922 not found: ID does not exist" containerID="c77c852521cd266e9338df6772f0b8f210bb220cfa0ac0c2ba2ab96ebea71922" Mar 10 20:14:15 crc kubenswrapper[4861]: I0310 20:14:15.729328 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c77c852521cd266e9338df6772f0b8f210bb220cfa0ac0c2ba2ab96ebea71922"} err="failed to get container status \"c77c852521cd266e9338df6772f0b8f210bb220cfa0ac0c2ba2ab96ebea71922\": rpc error: code = NotFound desc = could not find container \"c77c852521cd266e9338df6772f0b8f210bb220cfa0ac0c2ba2ab96ebea71922\": container with ID starting with c77c852521cd266e9338df6772f0b8f210bb220cfa0ac0c2ba2ab96ebea71922 not found: ID does not exist" Mar 10 20:14:15 crc kubenswrapper[4861]: I0310 20:14:15.967084 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-ff89b6977-wwftj"] Mar 10 20:14:15 crc kubenswrapper[4861]: I0310 20:14:15.977565 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-ff89b6977-wwftj"] Mar 10 20:14:16 crc kubenswrapper[4861]: I0310 20:14:16.627772 4861 generic.go:334] "Generic (PLEG): container 
finished" podID="435b131b-9c41-4320-9bf9-47bdb8011c07" containerID="fbaa0664200b14e2b861110b786873b8b950e88def72c41f5e152a5c5c4eaf9e" exitCode=0 Mar 10 20:14:16 crc kubenswrapper[4861]: I0310 20:14:16.627813 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"435b131b-9c41-4320-9bf9-47bdb8011c07","Type":"ContainerDied","Data":"fbaa0664200b14e2b861110b786873b8b950e88def72c41f5e152a5c5c4eaf9e"} Mar 10 20:14:16 crc kubenswrapper[4861]: I0310 20:14:16.630262 4861 generic.go:334] "Generic (PLEG): container finished" podID="daacc843-cac9-4039-a082-56a2fd9391f9" containerID="bb8f21d6ca8c4e3f7abf208bbe2396bbce10f7fdbfc86657add4e36e97a38032" exitCode=0 Mar 10 20:14:16 crc kubenswrapper[4861]: I0310 20:14:16.630306 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"daacc843-cac9-4039-a082-56a2fd9391f9","Type":"ContainerDied","Data":"bb8f21d6ca8c4e3f7abf208bbe2396bbce10f7fdbfc86657add4e36e97a38032"} Mar 10 20:14:16 crc kubenswrapper[4861]: I0310 20:14:16.688496 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Mar 10 20:14:16 crc kubenswrapper[4861]: I0310 20:14:16.749217 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Mar 10 20:14:16 crc kubenswrapper[4861]: I0310 20:14:16.837102 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0d291659-ee22-478e-ad65-e15c40684608\") pod \"daacc843-cac9-4039-a082-56a2fd9391f9\" (UID: \"daacc843-cac9-4039-a082-56a2fd9391f9\") " Mar 10 20:14:16 crc kubenswrapper[4861]: I0310 20:14:16.837147 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zxzzw\" (UniqueName: \"kubernetes.io/projected/daacc843-cac9-4039-a082-56a2fd9391f9-kube-api-access-zxzzw\") pod \"daacc843-cac9-4039-a082-56a2fd9391f9\" (UID: \"daacc843-cac9-4039-a082-56a2fd9391f9\") " Mar 10 20:14:16 crc kubenswrapper[4861]: I0310 20:14:16.837178 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/daacc843-cac9-4039-a082-56a2fd9391f9-config-data\") pod \"daacc843-cac9-4039-a082-56a2fd9391f9\" (UID: \"daacc843-cac9-4039-a082-56a2fd9391f9\") " Mar 10 20:14:16 crc kubenswrapper[4861]: I0310 20:14:16.837261 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/daacc843-cac9-4039-a082-56a2fd9391f9-rabbitmq-tls\") pod \"daacc843-cac9-4039-a082-56a2fd9391f9\" (UID: \"daacc843-cac9-4039-a082-56a2fd9391f9\") " Mar 10 20:14:16 crc kubenswrapper[4861]: I0310 20:14:16.837286 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/daacc843-cac9-4039-a082-56a2fd9391f9-rabbitmq-erlang-cookie\") pod \"daacc843-cac9-4039-a082-56a2fd9391f9\" (UID: \"daacc843-cac9-4039-a082-56a2fd9391f9\") " Mar 10 20:14:16 crc kubenswrapper[4861]: I0310 20:14:16.837315 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started 
for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/daacc843-cac9-4039-a082-56a2fd9391f9-erlang-cookie-secret\") pod \"daacc843-cac9-4039-a082-56a2fd9391f9\" (UID: \"daacc843-cac9-4039-a082-56a2fd9391f9\") " Mar 10 20:14:16 crc kubenswrapper[4861]: I0310 20:14:16.837360 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/daacc843-cac9-4039-a082-56a2fd9391f9-rabbitmq-confd\") pod \"daacc843-cac9-4039-a082-56a2fd9391f9\" (UID: \"daacc843-cac9-4039-a082-56a2fd9391f9\") " Mar 10 20:14:16 crc kubenswrapper[4861]: I0310 20:14:16.837381 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/daacc843-cac9-4039-a082-56a2fd9391f9-rabbitmq-plugins\") pod \"daacc843-cac9-4039-a082-56a2fd9391f9\" (UID: \"daacc843-cac9-4039-a082-56a2fd9391f9\") " Mar 10 20:14:16 crc kubenswrapper[4861]: I0310 20:14:16.837435 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/daacc843-cac9-4039-a082-56a2fd9391f9-pod-info\") pod \"daacc843-cac9-4039-a082-56a2fd9391f9\" (UID: \"daacc843-cac9-4039-a082-56a2fd9391f9\") " Mar 10 20:14:16 crc kubenswrapper[4861]: I0310 20:14:16.837461 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/daacc843-cac9-4039-a082-56a2fd9391f9-plugins-conf\") pod \"daacc843-cac9-4039-a082-56a2fd9391f9\" (UID: \"daacc843-cac9-4039-a082-56a2fd9391f9\") " Mar 10 20:14:16 crc kubenswrapper[4861]: I0310 20:14:16.837503 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/daacc843-cac9-4039-a082-56a2fd9391f9-server-conf\") pod \"daacc843-cac9-4039-a082-56a2fd9391f9\" (UID: \"daacc843-cac9-4039-a082-56a2fd9391f9\") " Mar 10 20:14:16 
crc kubenswrapper[4861]: I0310 20:14:16.838353 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/daacc843-cac9-4039-a082-56a2fd9391f9-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "daacc843-cac9-4039-a082-56a2fd9391f9" (UID: "daacc843-cac9-4039-a082-56a2fd9391f9"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 10 20:14:16 crc kubenswrapper[4861]: I0310 20:14:16.838740 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/daacc843-cac9-4039-a082-56a2fd9391f9-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "daacc843-cac9-4039-a082-56a2fd9391f9" (UID: "daacc843-cac9-4039-a082-56a2fd9391f9"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 10 20:14:16 crc kubenswrapper[4861]: I0310 20:14:16.839089 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/daacc843-cac9-4039-a082-56a2fd9391f9-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "daacc843-cac9-4039-a082-56a2fd9391f9" (UID: "daacc843-cac9-4039-a082-56a2fd9391f9"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 10 20:14:16 crc kubenswrapper[4861]: I0310 20:14:16.841528 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/daacc843-cac9-4039-a082-56a2fd9391f9-pod-info" (OuterVolumeSpecName: "pod-info") pod "daacc843-cac9-4039-a082-56a2fd9391f9" (UID: "daacc843-cac9-4039-a082-56a2fd9391f9"). InnerVolumeSpecName "pod-info". 
PluginName "kubernetes.io/downward-api", VolumeGidValue "" Mar 10 20:14:16 crc kubenswrapper[4861]: I0310 20:14:16.843173 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/daacc843-cac9-4039-a082-56a2fd9391f9-kube-api-access-zxzzw" (OuterVolumeSpecName: "kube-api-access-zxzzw") pod "daacc843-cac9-4039-a082-56a2fd9391f9" (UID: "daacc843-cac9-4039-a082-56a2fd9391f9"). InnerVolumeSpecName "kube-api-access-zxzzw". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 10 20:14:16 crc kubenswrapper[4861]: I0310 20:14:16.843918 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/daacc843-cac9-4039-a082-56a2fd9391f9-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "daacc843-cac9-4039-a082-56a2fd9391f9" (UID: "daacc843-cac9-4039-a082-56a2fd9391f9"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 10 20:14:16 crc kubenswrapper[4861]: I0310 20:14:16.843925 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/daacc843-cac9-4039-a082-56a2fd9391f9-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "daacc843-cac9-4039-a082-56a2fd9391f9" (UID: "daacc843-cac9-4039-a082-56a2fd9391f9"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 10 20:14:16 crc kubenswrapper[4861]: I0310 20:14:16.853395 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0d291659-ee22-478e-ad65-e15c40684608" (OuterVolumeSpecName: "persistence") pod "daacc843-cac9-4039-a082-56a2fd9391f9" (UID: "daacc843-cac9-4039-a082-56a2fd9391f9"). InnerVolumeSpecName "pvc-0d291659-ee22-478e-ad65-e15c40684608". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Mar 10 20:14:16 crc kubenswrapper[4861]: I0310 20:14:16.860021 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/daacc843-cac9-4039-a082-56a2fd9391f9-config-data" (OuterVolumeSpecName: "config-data") pod "daacc843-cac9-4039-a082-56a2fd9391f9" (UID: "daacc843-cac9-4039-a082-56a2fd9391f9"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 10 20:14:16 crc kubenswrapper[4861]: I0310 20:14:16.882172 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/daacc843-cac9-4039-a082-56a2fd9391f9-server-conf" (OuterVolumeSpecName: "server-conf") pod "daacc843-cac9-4039-a082-56a2fd9391f9" (UID: "daacc843-cac9-4039-a082-56a2fd9391f9"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 10 20:14:16 crc kubenswrapper[4861]: I0310 20:14:16.914644 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/daacc843-cac9-4039-a082-56a2fd9391f9-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "daacc843-cac9-4039-a082-56a2fd9391f9" (UID: "daacc843-cac9-4039-a082-56a2fd9391f9"). InnerVolumeSpecName "rabbitmq-confd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 10 20:14:16 crc kubenswrapper[4861]: I0310 20:14:16.938251 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/435b131b-9c41-4320-9bf9-47bdb8011c07-config-data\") pod \"435b131b-9c41-4320-9bf9-47bdb8011c07\" (UID: \"435b131b-9c41-4320-9bf9-47bdb8011c07\") " Mar 10 20:14:16 crc kubenswrapper[4861]: I0310 20:14:16.938392 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c1aca7ff-fbec-4ab6-8748-d984f7c669f7\") pod \"435b131b-9c41-4320-9bf9-47bdb8011c07\" (UID: \"435b131b-9c41-4320-9bf9-47bdb8011c07\") " Mar 10 20:14:16 crc kubenswrapper[4861]: I0310 20:14:16.938429 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/435b131b-9c41-4320-9bf9-47bdb8011c07-rabbitmq-plugins\") pod \"435b131b-9c41-4320-9bf9-47bdb8011c07\" (UID: \"435b131b-9c41-4320-9bf9-47bdb8011c07\") " Mar 10 20:14:16 crc kubenswrapper[4861]: I0310 20:14:16.938467 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/435b131b-9c41-4320-9bf9-47bdb8011c07-rabbitmq-tls\") pod \"435b131b-9c41-4320-9bf9-47bdb8011c07\" (UID: \"435b131b-9c41-4320-9bf9-47bdb8011c07\") " Mar 10 20:14:16 crc kubenswrapper[4861]: I0310 20:14:16.938551 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/435b131b-9c41-4320-9bf9-47bdb8011c07-pod-info\") pod \"435b131b-9c41-4320-9bf9-47bdb8011c07\" (UID: \"435b131b-9c41-4320-9bf9-47bdb8011c07\") " Mar 10 20:14:16 crc kubenswrapper[4861]: I0310 20:14:16.938577 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" 
(UniqueName: \"kubernetes.io/secret/435b131b-9c41-4320-9bf9-47bdb8011c07-erlang-cookie-secret\") pod \"435b131b-9c41-4320-9bf9-47bdb8011c07\" (UID: \"435b131b-9c41-4320-9bf9-47bdb8011c07\") " Mar 10 20:14:16 crc kubenswrapper[4861]: I0310 20:14:16.938603 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/435b131b-9c41-4320-9bf9-47bdb8011c07-rabbitmq-confd\") pod \"435b131b-9c41-4320-9bf9-47bdb8011c07\" (UID: \"435b131b-9c41-4320-9bf9-47bdb8011c07\") " Mar 10 20:14:16 crc kubenswrapper[4861]: I0310 20:14:16.938639 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/435b131b-9c41-4320-9bf9-47bdb8011c07-plugins-conf\") pod \"435b131b-9c41-4320-9bf9-47bdb8011c07\" (UID: \"435b131b-9c41-4320-9bf9-47bdb8011c07\") " Mar 10 20:14:16 crc kubenswrapper[4861]: I0310 20:14:16.938654 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/435b131b-9c41-4320-9bf9-47bdb8011c07-server-conf\") pod \"435b131b-9c41-4320-9bf9-47bdb8011c07\" (UID: \"435b131b-9c41-4320-9bf9-47bdb8011c07\") " Mar 10 20:14:16 crc kubenswrapper[4861]: I0310 20:14:16.938677 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nl4zf\" (UniqueName: \"kubernetes.io/projected/435b131b-9c41-4320-9bf9-47bdb8011c07-kube-api-access-nl4zf\") pod \"435b131b-9c41-4320-9bf9-47bdb8011c07\" (UID: \"435b131b-9c41-4320-9bf9-47bdb8011c07\") " Mar 10 20:14:16 crc kubenswrapper[4861]: I0310 20:14:16.938692 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/435b131b-9c41-4320-9bf9-47bdb8011c07-rabbitmq-erlang-cookie\") pod \"435b131b-9c41-4320-9bf9-47bdb8011c07\" (UID: \"435b131b-9c41-4320-9bf9-47bdb8011c07\") " Mar 10 20:14:16 
crc kubenswrapper[4861]: I0310 20:14:16.941259 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/435b131b-9c41-4320-9bf9-47bdb8011c07-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "435b131b-9c41-4320-9bf9-47bdb8011c07" (UID: "435b131b-9c41-4320-9bf9-47bdb8011c07"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 10 20:14:16 crc kubenswrapper[4861]: I0310 20:14:16.943151 4861 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/daacc843-cac9-4039-a082-56a2fd9391f9-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Mar 10 20:14:16 crc kubenswrapper[4861]: I0310 20:14:16.943180 4861 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/daacc843-cac9-4039-a082-56a2fd9391f9-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Mar 10 20:14:16 crc kubenswrapper[4861]: I0310 20:14:16.943192 4861 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/daacc843-cac9-4039-a082-56a2fd9391f9-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Mar 10 20:14:16 crc kubenswrapper[4861]: I0310 20:14:16.943201 4861 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/435b131b-9c41-4320-9bf9-47bdb8011c07-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Mar 10 20:14:16 crc kubenswrapper[4861]: I0310 20:14:16.943209 4861 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/daacc843-cac9-4039-a082-56a2fd9391f9-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Mar 10 20:14:16 crc kubenswrapper[4861]: I0310 20:14:16.943218 4861 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: 
\"kubernetes.io/empty-dir/daacc843-cac9-4039-a082-56a2fd9391f9-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Mar 10 20:14:16 crc kubenswrapper[4861]: I0310 20:14:16.943226 4861 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/daacc843-cac9-4039-a082-56a2fd9391f9-pod-info\") on node \"crc\" DevicePath \"\"" Mar 10 20:14:16 crc kubenswrapper[4861]: I0310 20:14:16.943234 4861 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/daacc843-cac9-4039-a082-56a2fd9391f9-plugins-conf\") on node \"crc\" DevicePath \"\"" Mar 10 20:14:16 crc kubenswrapper[4861]: I0310 20:14:16.943243 4861 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/daacc843-cac9-4039-a082-56a2fd9391f9-server-conf\") on node \"crc\" DevicePath \"\"" Mar 10 20:14:16 crc kubenswrapper[4861]: I0310 20:14:16.943251 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zxzzw\" (UniqueName: \"kubernetes.io/projected/daacc843-cac9-4039-a082-56a2fd9391f9-kube-api-access-zxzzw\") on node \"crc\" DevicePath \"\"" Mar 10 20:14:16 crc kubenswrapper[4861]: I0310 20:14:16.943282 4861 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-0d291659-ee22-478e-ad65-e15c40684608\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0d291659-ee22-478e-ad65-e15c40684608\") on node \"crc\" " Mar 10 20:14:16 crc kubenswrapper[4861]: I0310 20:14:16.943293 4861 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/daacc843-cac9-4039-a082-56a2fd9391f9-config-data\") on node \"crc\" DevicePath \"\"" Mar 10 20:14:16 crc kubenswrapper[4861]: I0310 20:14:16.943290 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/435b131b-9c41-4320-9bf9-47bdb8011c07-plugins-conf" 
(OuterVolumeSpecName: "plugins-conf") pod "435b131b-9c41-4320-9bf9-47bdb8011c07" (UID: "435b131b-9c41-4320-9bf9-47bdb8011c07"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 10 20:14:16 crc kubenswrapper[4861]: I0310 20:14:16.944887 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/435b131b-9c41-4320-9bf9-47bdb8011c07-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "435b131b-9c41-4320-9bf9-47bdb8011c07" (UID: "435b131b-9c41-4320-9bf9-47bdb8011c07"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 10 20:14:16 crc kubenswrapper[4861]: I0310 20:14:16.946724 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/435b131b-9c41-4320-9bf9-47bdb8011c07-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "435b131b-9c41-4320-9bf9-47bdb8011c07" (UID: "435b131b-9c41-4320-9bf9-47bdb8011c07"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 10 20:14:16 crc kubenswrapper[4861]: I0310 20:14:16.948590 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/435b131b-9c41-4320-9bf9-47bdb8011c07-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "435b131b-9c41-4320-9bf9-47bdb8011c07" (UID: "435b131b-9c41-4320-9bf9-47bdb8011c07"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 10 20:14:16 crc kubenswrapper[4861]: I0310 20:14:16.949076 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/435b131b-9c41-4320-9bf9-47bdb8011c07-kube-api-access-nl4zf" (OuterVolumeSpecName: "kube-api-access-nl4zf") pod "435b131b-9c41-4320-9bf9-47bdb8011c07" (UID: "435b131b-9c41-4320-9bf9-47bdb8011c07"). InnerVolumeSpecName "kube-api-access-nl4zf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 10 20:14:16 crc kubenswrapper[4861]: I0310 20:14:16.950303 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c1aca7ff-fbec-4ab6-8748-d984f7c669f7" (OuterVolumeSpecName: "persistence") pod "435b131b-9c41-4320-9bf9-47bdb8011c07" (UID: "435b131b-9c41-4320-9bf9-47bdb8011c07"). InnerVolumeSpecName "pvc-c1aca7ff-fbec-4ab6-8748-d984f7c669f7". PluginName "kubernetes.io/csi", VolumeGidValue "" Mar 10 20:14:16 crc kubenswrapper[4861]: I0310 20:14:16.959052 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/435b131b-9c41-4320-9bf9-47bdb8011c07-pod-info" (OuterVolumeSpecName: "pod-info") pod "435b131b-9c41-4320-9bf9-47bdb8011c07" (UID: "435b131b-9c41-4320-9bf9-47bdb8011c07"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Mar 10 20:14:16 crc kubenswrapper[4861]: I0310 20:14:16.968689 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31f9d3df-2233-452c-835c-2da6c3ffb61e" path="/var/lib/kubelet/pods/31f9d3df-2233-452c-835c-2da6c3ffb61e/volumes" Mar 10 20:14:16 crc kubenswrapper[4861]: I0310 20:14:16.970433 4861 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... 
Mar 10 20:14:16 crc kubenswrapper[4861]: I0310 20:14:16.970612 4861 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-0d291659-ee22-478e-ad65-e15c40684608" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0d291659-ee22-478e-ad65-e15c40684608") on node "crc" Mar 10 20:14:16 crc kubenswrapper[4861]: I0310 20:14:16.972225 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/435b131b-9c41-4320-9bf9-47bdb8011c07-config-data" (OuterVolumeSpecName: "config-data") pod "435b131b-9c41-4320-9bf9-47bdb8011c07" (UID: "435b131b-9c41-4320-9bf9-47bdb8011c07"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 10 20:14:16 crc kubenswrapper[4861]: I0310 20:14:16.989376 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/435b131b-9c41-4320-9bf9-47bdb8011c07-server-conf" (OuterVolumeSpecName: "server-conf") pod "435b131b-9c41-4320-9bf9-47bdb8011c07" (UID: "435b131b-9c41-4320-9bf9-47bdb8011c07"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 10 20:14:17 crc kubenswrapper[4861]: I0310 20:14:17.029834 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/435b131b-9c41-4320-9bf9-47bdb8011c07-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "435b131b-9c41-4320-9bf9-47bdb8011c07" (UID: "435b131b-9c41-4320-9bf9-47bdb8011c07"). InnerVolumeSpecName "rabbitmq-confd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 10 20:14:17 crc kubenswrapper[4861]: I0310 20:14:17.044447 4861 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/435b131b-9c41-4320-9bf9-47bdb8011c07-pod-info\") on node \"crc\" DevicePath \"\"" Mar 10 20:14:17 crc kubenswrapper[4861]: I0310 20:14:17.044490 4861 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/435b131b-9c41-4320-9bf9-47bdb8011c07-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Mar 10 20:14:17 crc kubenswrapper[4861]: I0310 20:14:17.044504 4861 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/435b131b-9c41-4320-9bf9-47bdb8011c07-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Mar 10 20:14:17 crc kubenswrapper[4861]: I0310 20:14:17.044518 4861 reconciler_common.go:293] "Volume detached for volume \"pvc-0d291659-ee22-478e-ad65-e15c40684608\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0d291659-ee22-478e-ad65-e15c40684608\") on node \"crc\" DevicePath \"\"" Mar 10 20:14:17 crc kubenswrapper[4861]: I0310 20:14:17.044530 4861 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/435b131b-9c41-4320-9bf9-47bdb8011c07-plugins-conf\") on node \"crc\" DevicePath \"\"" Mar 10 20:14:17 crc kubenswrapper[4861]: I0310 20:14:17.044545 4861 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/435b131b-9c41-4320-9bf9-47bdb8011c07-server-conf\") on node \"crc\" DevicePath \"\"" Mar 10 20:14:17 crc kubenswrapper[4861]: I0310 20:14:17.044557 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nl4zf\" (UniqueName: \"kubernetes.io/projected/435b131b-9c41-4320-9bf9-47bdb8011c07-kube-api-access-nl4zf\") on node \"crc\" DevicePath \"\"" Mar 10 20:14:17 crc 
kubenswrapper[4861]: I0310 20:14:17.044569 4861 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/435b131b-9c41-4320-9bf9-47bdb8011c07-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Mar 10 20:14:17 crc kubenswrapper[4861]: I0310 20:14:17.044580 4861 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/435b131b-9c41-4320-9bf9-47bdb8011c07-config-data\") on node \"crc\" DevicePath \"\"" Mar 10 20:14:17 crc kubenswrapper[4861]: I0310 20:14:17.044622 4861 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-c1aca7ff-fbec-4ab6-8748-d984f7c669f7\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c1aca7ff-fbec-4ab6-8748-d984f7c669f7\") on node \"crc\" " Mar 10 20:14:17 crc kubenswrapper[4861]: I0310 20:14:17.044637 4861 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/435b131b-9c41-4320-9bf9-47bdb8011c07-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Mar 10 20:14:17 crc kubenswrapper[4861]: I0310 20:14:17.063548 4861 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... 
Mar 10 20:14:17 crc kubenswrapper[4861]: I0310 20:14:17.063735 4861 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-c1aca7ff-fbec-4ab6-8748-d984f7c669f7" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c1aca7ff-fbec-4ab6-8748-d984f7c669f7") on node "crc"
Mar 10 20:14:17 crc kubenswrapper[4861]: I0310 20:14:17.146193 4861 reconciler_common.go:293] "Volume detached for volume \"pvc-c1aca7ff-fbec-4ab6-8748-d984f7c669f7\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c1aca7ff-fbec-4ab6-8748-d984f7c669f7\") on node \"crc\" DevicePath \"\""
Mar 10 20:14:17 crc kubenswrapper[4861]: I0310 20:14:17.644965 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"daacc843-cac9-4039-a082-56a2fd9391f9","Type":"ContainerDied","Data":"faacdf17c0b3c86b0cc3fa18ee549e644826179529f137d7a208d609ff90c23f"}
Mar 10 20:14:17 crc kubenswrapper[4861]: I0310 20:14:17.645083 4861 scope.go:117] "RemoveContainer" containerID="bb8f21d6ca8c4e3f7abf208bbe2396bbce10f7fdbfc86657add4e36e97a38032"
Mar 10 20:14:17 crc kubenswrapper[4861]: I0310 20:14:17.645262 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0"
Mar 10 20:14:17 crc kubenswrapper[4861]: I0310 20:14:17.656124 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"435b131b-9c41-4320-9bf9-47bdb8011c07","Type":"ContainerDied","Data":"82e3530046c8422a29d6f4cdf19dbcb9121b379c0b31e834f402769fa32fe4a8"}
Mar 10 20:14:17 crc kubenswrapper[4861]: I0310 20:14:17.656223 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0"
Mar 10 20:14:17 crc kubenswrapper[4861]: I0310 20:14:17.798445 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"]
Mar 10 20:14:17 crc kubenswrapper[4861]: I0310 20:14:17.802443 4861 scope.go:117] "RemoveContainer" containerID="a1f2fa374fcde2b1a6400111146c13b03076a5c88c4417be4d60516309c63112"
Mar 10 20:14:17 crc kubenswrapper[4861]: I0310 20:14:17.806204 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-0"]
Mar 10 20:14:17 crc kubenswrapper[4861]: I0310 20:14:17.823279 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Mar 10 20:14:17 crc kubenswrapper[4861]: I0310 20:14:17.830207 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Mar 10 20:14:17 crc kubenswrapper[4861]: I0310 20:14:17.843120 4861 scope.go:117] "RemoveContainer" containerID="fbaa0664200b14e2b861110b786873b8b950e88def72c41f5e152a5c5c4eaf9e"
Mar 10 20:14:17 crc kubenswrapper[4861]: I0310 20:14:17.850177 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"]
Mar 10 20:14:17 crc kubenswrapper[4861]: E0310 20:14:17.850563 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="31f9d3df-2233-452c-835c-2da6c3ffb61e" containerName="dnsmasq-dns"
Mar 10 20:14:17 crc kubenswrapper[4861]: I0310 20:14:17.850575 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="31f9d3df-2233-452c-835c-2da6c3ffb61e" containerName="dnsmasq-dns"
Mar 10 20:14:17 crc kubenswrapper[4861]: E0310 20:14:17.850595 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="435b131b-9c41-4320-9bf9-47bdb8011c07" containerName="rabbitmq"
Mar 10 20:14:17 crc kubenswrapper[4861]: I0310 20:14:17.850601 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="435b131b-9c41-4320-9bf9-47bdb8011c07" containerName="rabbitmq"
Mar 10 20:14:17 crc kubenswrapper[4861]: E0310 20:14:17.850609 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="31f9d3df-2233-452c-835c-2da6c3ffb61e" containerName="init"
Mar 10 20:14:17 crc kubenswrapper[4861]: I0310 20:14:17.850615 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="31f9d3df-2233-452c-835c-2da6c3ffb61e" containerName="init"
Mar 10 20:14:17 crc kubenswrapper[4861]: E0310 20:14:17.850630 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="daacc843-cac9-4039-a082-56a2fd9391f9" containerName="rabbitmq"
Mar 10 20:14:17 crc kubenswrapper[4861]: I0310 20:14:17.850635 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="daacc843-cac9-4039-a082-56a2fd9391f9" containerName="rabbitmq"
Mar 10 20:14:17 crc kubenswrapper[4861]: E0310 20:14:17.850654 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="435b131b-9c41-4320-9bf9-47bdb8011c07" containerName="setup-container"
Mar 10 20:14:17 crc kubenswrapper[4861]: I0310 20:14:17.850660 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="435b131b-9c41-4320-9bf9-47bdb8011c07" containerName="setup-container"
Mar 10 20:14:17 crc kubenswrapper[4861]: E0310 20:14:17.850668 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="daacc843-cac9-4039-a082-56a2fd9391f9" containerName="setup-container"
Mar 10 20:14:17 crc kubenswrapper[4861]: I0310 20:14:17.850674 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="daacc843-cac9-4039-a082-56a2fd9391f9" containerName="setup-container"
Mar 10 20:14:17 crc kubenswrapper[4861]: I0310 20:14:17.850829 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="31f9d3df-2233-452c-835c-2da6c3ffb61e" containerName="dnsmasq-dns"
Mar 10 20:14:17 crc kubenswrapper[4861]: I0310 20:14:17.850845 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="daacc843-cac9-4039-a082-56a2fd9391f9" containerName="rabbitmq"
Mar 10 20:14:17 crc kubenswrapper[4861]: I0310 20:14:17.850865 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="435b131b-9c41-4320-9bf9-47bdb8011c07" containerName="rabbitmq"
Mar 10 20:14:17 crc kubenswrapper[4861]: I0310 20:14:17.851624 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0"
Mar 10 20:14:17 crc kubenswrapper[4861]: I0310 20:14:17.857259 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-zhvjm"
Mar 10 20:14:17 crc kubenswrapper[4861]: I0310 20:14:17.857415 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie"
Mar 10 20:14:17 crc kubenswrapper[4861]: I0310 20:14:17.857440 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf"
Mar 10 20:14:17 crc kubenswrapper[4861]: I0310 20:14:17.857443 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data"
Mar 10 20:14:17 crc kubenswrapper[4861]: I0310 20:14:17.857490 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc"
Mar 10 20:14:17 crc kubenswrapper[4861]: I0310 20:14:17.857444 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user"
Mar 10 20:14:17 crc kubenswrapper[4861]: I0310 20:14:17.857589 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf"
Mar 10 20:14:17 crc kubenswrapper[4861]: I0310 20:14:17.868018 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Mar 10 20:14:17 crc kubenswrapper[4861]: I0310 20:14:17.869984 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0"
Mar 10 20:14:17 crc kubenswrapper[4861]: I0310 20:14:17.875689 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie"
Mar 10 20:14:17 crc kubenswrapper[4861]: I0310 20:14:17.875932 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data"
Mar 10 20:14:17 crc kubenswrapper[4861]: I0310 20:14:17.876097 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user"
Mar 10 20:14:17 crc kubenswrapper[4861]: I0310 20:14:17.876387 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-k86vm"
Mar 10 20:14:17 crc kubenswrapper[4861]: I0310 20:14:17.876559 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc"
Mar 10 20:14:17 crc kubenswrapper[4861]: I0310 20:14:17.876748 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf"
Mar 10 20:14:17 crc kubenswrapper[4861]: I0310 20:14:17.890633 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf"
Mar 10 20:14:17 crc kubenswrapper[4861]: I0310 20:14:17.891982 4861 scope.go:117] "RemoveContainer" containerID="7ae1e23c33c5be7e877fcdb77d99857c6890d4835a90260c0555fc46e0fe0878"
Mar 10 20:14:17 crc kubenswrapper[4861]: I0310 20:14:17.892730 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Mar 10 20:14:17 crc kubenswrapper[4861]: I0310 20:14:17.899732 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"]
Mar 10 20:14:17 crc kubenswrapper[4861]: I0310 20:14:17.959667 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/c06bc3bf-61a5-4fe5-a9d4-2f37b4b85ca1-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"c06bc3bf-61a5-4fe5-a9d4-2f37b4b85ca1\") " pod="openstack/rabbitmq-server-0"
Mar 10 20:14:17 crc kubenswrapper[4861]: I0310 20:14:17.959767 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zhhbl\" (UniqueName: \"kubernetes.io/projected/c06bc3bf-61a5-4fe5-a9d4-2f37b4b85ca1-kube-api-access-zhhbl\") pod \"rabbitmq-server-0\" (UID: \"c06bc3bf-61a5-4fe5-a9d4-2f37b4b85ca1\") " pod="openstack/rabbitmq-server-0"
Mar 10 20:14:17 crc kubenswrapper[4861]: I0310 20:14:17.959799 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/c06bc3bf-61a5-4fe5-a9d4-2f37b4b85ca1-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"c06bc3bf-61a5-4fe5-a9d4-2f37b4b85ca1\") " pod="openstack/rabbitmq-server-0"
Mar 10 20:14:17 crc kubenswrapper[4861]: I0310 20:14:17.959816 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/c06bc3bf-61a5-4fe5-a9d4-2f37b4b85ca1-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"c06bc3bf-61a5-4fe5-a9d4-2f37b4b85ca1\") " pod="openstack/rabbitmq-server-0"
Mar 10 20:14:17 crc kubenswrapper[4861]: I0310 20:14:17.959834 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/c06bc3bf-61a5-4fe5-a9d4-2f37b4b85ca1-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"c06bc3bf-61a5-4fe5-a9d4-2f37b4b85ca1\") " pod="openstack/rabbitmq-server-0"
Mar 10 20:14:17 crc kubenswrapper[4861]: I0310 20:14:17.959887 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/c06bc3bf-61a5-4fe5-a9d4-2f37b4b85ca1-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"c06bc3bf-61a5-4fe5-a9d4-2f37b4b85ca1\") " pod="openstack/rabbitmq-server-0"
Mar 10 20:14:17 crc kubenswrapper[4861]: I0310 20:14:17.959904 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/c06bc3bf-61a5-4fe5-a9d4-2f37b4b85ca1-server-conf\") pod \"rabbitmq-server-0\" (UID: \"c06bc3bf-61a5-4fe5-a9d4-2f37b4b85ca1\") " pod="openstack/rabbitmq-server-0"
Mar 10 20:14:17 crc kubenswrapper[4861]: I0310 20:14:17.959974 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c06bc3bf-61a5-4fe5-a9d4-2f37b4b85ca1-config-data\") pod \"rabbitmq-server-0\" (UID: \"c06bc3bf-61a5-4fe5-a9d4-2f37b4b85ca1\") " pod="openstack/rabbitmq-server-0"
Mar 10 20:14:17 crc kubenswrapper[4861]: I0310 20:14:17.959992 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/c06bc3bf-61a5-4fe5-a9d4-2f37b4b85ca1-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"c06bc3bf-61a5-4fe5-a9d4-2f37b4b85ca1\") " pod="openstack/rabbitmq-server-0"
Mar 10 20:14:17 crc kubenswrapper[4861]: I0310 20:14:17.960008 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/c06bc3bf-61a5-4fe5-a9d4-2f37b4b85ca1-pod-info\") pod \"rabbitmq-server-0\" (UID: \"c06bc3bf-61a5-4fe5-a9d4-2f37b4b85ca1\") " pod="openstack/rabbitmq-server-0"
Mar 10 20:14:17 crc kubenswrapper[4861]: I0310 20:14:17.960043 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-c1aca7ff-fbec-4ab6-8748-d984f7c669f7\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c1aca7ff-fbec-4ab6-8748-d984f7c669f7\") pod \"rabbitmq-server-0\" (UID: \"c06bc3bf-61a5-4fe5-a9d4-2f37b4b85ca1\") " pod="openstack/rabbitmq-server-0"
Mar 10 20:14:18 crc kubenswrapper[4861]: I0310 20:14:18.061265 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/4e85b342-2781-46f8-94e7-c36fd2de2f23-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"4e85b342-2781-46f8-94e7-c36fd2de2f23\") " pod="openstack/rabbitmq-cell1-server-0"
Mar 10 20:14:18 crc kubenswrapper[4861]: I0310 20:14:18.061300 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/4e85b342-2781-46f8-94e7-c36fd2de2f23-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"4e85b342-2781-46f8-94e7-c36fd2de2f23\") " pod="openstack/rabbitmq-cell1-server-0"
Mar 10 20:14:18 crc kubenswrapper[4861]: I0310 20:14:18.061320 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4e85b342-2781-46f8-94e7-c36fd2de2f23-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"4e85b342-2781-46f8-94e7-c36fd2de2f23\") " pod="openstack/rabbitmq-cell1-server-0"
Mar 10 20:14:18 crc kubenswrapper[4861]: I0310 20:14:18.061341 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/4e85b342-2781-46f8-94e7-c36fd2de2f23-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"4e85b342-2781-46f8-94e7-c36fd2de2f23\") " pod="openstack/rabbitmq-cell1-server-0"
Mar 10 20:14:18 crc kubenswrapper[4861]: I0310 20:14:18.061367 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/c06bc3bf-61a5-4fe5-a9d4-2f37b4b85ca1-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"c06bc3bf-61a5-4fe5-a9d4-2f37b4b85ca1\") " pod="openstack/rabbitmq-server-0"
Mar 10 20:14:18 crc kubenswrapper[4861]: I0310 20:14:18.061502 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6zbrm\" (UniqueName: \"kubernetes.io/projected/4e85b342-2781-46f8-94e7-c36fd2de2f23-kube-api-access-6zbrm\") pod \"rabbitmq-cell1-server-0\" (UID: \"4e85b342-2781-46f8-94e7-c36fd2de2f23\") " pod="openstack/rabbitmq-cell1-server-0"
Mar 10 20:14:18 crc kubenswrapper[4861]: I0310 20:14:18.061559 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/4e85b342-2781-46f8-94e7-c36fd2de2f23-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"4e85b342-2781-46f8-94e7-c36fd2de2f23\") " pod="openstack/rabbitmq-cell1-server-0"
Mar 10 20:14:18 crc kubenswrapper[4861]: I0310 20:14:18.061599 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zhhbl\" (UniqueName: \"kubernetes.io/projected/c06bc3bf-61a5-4fe5-a9d4-2f37b4b85ca1-kube-api-access-zhhbl\") pod \"rabbitmq-server-0\" (UID: \"c06bc3bf-61a5-4fe5-a9d4-2f37b4b85ca1\") " pod="openstack/rabbitmq-server-0"
Mar 10 20:14:18 crc kubenswrapper[4861]: I0310 20:14:18.061626 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/4e85b342-2781-46f8-94e7-c36fd2de2f23-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"4e85b342-2781-46f8-94e7-c36fd2de2f23\") " pod="openstack/rabbitmq-cell1-server-0"
Mar 10 20:14:18 crc kubenswrapper[4861]: I0310 20:14:18.061650 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/4e85b342-2781-46f8-94e7-c36fd2de2f23-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"4e85b342-2781-46f8-94e7-c36fd2de2f23\") " pod="openstack/rabbitmq-cell1-server-0"
Mar 10 20:14:18 crc kubenswrapper[4861]: I0310 20:14:18.061714 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/c06bc3bf-61a5-4fe5-a9d4-2f37b4b85ca1-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"c06bc3bf-61a5-4fe5-a9d4-2f37b4b85ca1\") " pod="openstack/rabbitmq-server-0"
Mar 10 20:14:18 crc kubenswrapper[4861]: I0310 20:14:18.061751 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/c06bc3bf-61a5-4fe5-a9d4-2f37b4b85ca1-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"c06bc3bf-61a5-4fe5-a9d4-2f37b4b85ca1\") " pod="openstack/rabbitmq-server-0"
Mar 10 20:14:18 crc kubenswrapper[4861]: I0310 20:14:18.061774 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/4e85b342-2781-46f8-94e7-c36fd2de2f23-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"4e85b342-2781-46f8-94e7-c36fd2de2f23\") " pod="openstack/rabbitmq-cell1-server-0"
Mar 10 20:14:18 crc kubenswrapper[4861]: I0310 20:14:18.061793 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/c06bc3bf-61a5-4fe5-a9d4-2f37b4b85ca1-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"c06bc3bf-61a5-4fe5-a9d4-2f37b4b85ca1\") " pod="openstack/rabbitmq-server-0"
Mar 10 20:14:18 crc kubenswrapper[4861]: I0310 20:14:18.061810 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/4e85b342-2781-46f8-94e7-c36fd2de2f23-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"4e85b342-2781-46f8-94e7-c36fd2de2f23\") " pod="openstack/rabbitmq-cell1-server-0"
Mar 10 20:14:18 crc kubenswrapper[4861]: I0310 20:14:18.061831 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/c06bc3bf-61a5-4fe5-a9d4-2f37b4b85ca1-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"c06bc3bf-61a5-4fe5-a9d4-2f37b4b85ca1\") " pod="openstack/rabbitmq-server-0"
Mar 10 20:14:18 crc kubenswrapper[4861]: I0310 20:14:18.061851 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/c06bc3bf-61a5-4fe5-a9d4-2f37b4b85ca1-server-conf\") pod \"rabbitmq-server-0\" (UID: \"c06bc3bf-61a5-4fe5-a9d4-2f37b4b85ca1\") " pod="openstack/rabbitmq-server-0"
Mar 10 20:14:18 crc kubenswrapper[4861]: I0310 20:14:18.061879 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-0d291659-ee22-478e-ad65-e15c40684608\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0d291659-ee22-478e-ad65-e15c40684608\") pod \"rabbitmq-cell1-server-0\" (UID: \"4e85b342-2781-46f8-94e7-c36fd2de2f23\") " pod="openstack/rabbitmq-cell1-server-0"
Mar 10 20:14:18 crc kubenswrapper[4861]: I0310 20:14:18.061904 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c06bc3bf-61a5-4fe5-a9d4-2f37b4b85ca1-config-data\") pod \"rabbitmq-server-0\" (UID: \"c06bc3bf-61a5-4fe5-a9d4-2f37b4b85ca1\") " pod="openstack/rabbitmq-server-0"
Mar 10 20:14:18 crc kubenswrapper[4861]: I0310 20:14:18.061919 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/c06bc3bf-61a5-4fe5-a9d4-2f37b4b85ca1-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"c06bc3bf-61a5-4fe5-a9d4-2f37b4b85ca1\") " pod="openstack/rabbitmq-server-0"
Mar 10 20:14:18 crc kubenswrapper[4861]: I0310 20:14:18.061934 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/c06bc3bf-61a5-4fe5-a9d4-2f37b4b85ca1-pod-info\") pod \"rabbitmq-server-0\" (UID: \"c06bc3bf-61a5-4fe5-a9d4-2f37b4b85ca1\") " pod="openstack/rabbitmq-server-0"
Mar 10 20:14:18 crc kubenswrapper[4861]: I0310 20:14:18.062011 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-c1aca7ff-fbec-4ab6-8748-d984f7c669f7\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c1aca7ff-fbec-4ab6-8748-d984f7c669f7\") pod \"rabbitmq-server-0\" (UID: \"c06bc3bf-61a5-4fe5-a9d4-2f37b4b85ca1\") " pod="openstack/rabbitmq-server-0"
Mar 10 20:14:18 crc kubenswrapper[4861]: I0310 20:14:18.062333 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/c06bc3bf-61a5-4fe5-a9d4-2f37b4b85ca1-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"c06bc3bf-61a5-4fe5-a9d4-2f37b4b85ca1\") " pod="openstack/rabbitmq-server-0"
Mar 10 20:14:18 crc kubenswrapper[4861]: I0310 20:14:18.062611 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/c06bc3bf-61a5-4fe5-a9d4-2f37b4b85ca1-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"c06bc3bf-61a5-4fe5-a9d4-2f37b4b85ca1\") " pod="openstack/rabbitmq-server-0"
Mar 10 20:14:18 crc kubenswrapper[4861]: I0310 20:14:18.063139 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/c06bc3bf-61a5-4fe5-a9d4-2f37b4b85ca1-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"c06bc3bf-61a5-4fe5-a9d4-2f37b4b85ca1\") " pod="openstack/rabbitmq-server-0"
Mar 10 20:14:18 crc kubenswrapper[4861]: I0310 20:14:18.063676 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c06bc3bf-61a5-4fe5-a9d4-2f37b4b85ca1-config-data\") pod \"rabbitmq-server-0\" (UID: \"c06bc3bf-61a5-4fe5-a9d4-2f37b4b85ca1\") " pod="openstack/rabbitmq-server-0"
Mar 10 20:14:18 crc kubenswrapper[4861]: I0310 20:14:18.064849 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/c06bc3bf-61a5-4fe5-a9d4-2f37b4b85ca1-server-conf\") pod \"rabbitmq-server-0\" (UID: \"c06bc3bf-61a5-4fe5-a9d4-2f37b4b85ca1\") " pod="openstack/rabbitmq-server-0"
Mar 10 20:14:18 crc kubenswrapper[4861]: I0310 20:14:18.066458 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/c06bc3bf-61a5-4fe5-a9d4-2f37b4b85ca1-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"c06bc3bf-61a5-4fe5-a9d4-2f37b4b85ca1\") " pod="openstack/rabbitmq-server-0"
Mar 10 20:14:18 crc kubenswrapper[4861]: I0310 20:14:18.067060 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/c06bc3bf-61a5-4fe5-a9d4-2f37b4b85ca1-pod-info\") pod \"rabbitmq-server-0\" (UID: \"c06bc3bf-61a5-4fe5-a9d4-2f37b4b85ca1\") " pod="openstack/rabbitmq-server-0"
Mar 10 20:14:18 crc kubenswrapper[4861]: I0310 20:14:18.068769 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/c06bc3bf-61a5-4fe5-a9d4-2f37b4b85ca1-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"c06bc3bf-61a5-4fe5-a9d4-2f37b4b85ca1\") " pod="openstack/rabbitmq-server-0"
Mar 10 20:14:18 crc kubenswrapper[4861]: I0310 20:14:18.069251 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/c06bc3bf-61a5-4fe5-a9d4-2f37b4b85ca1-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"c06bc3bf-61a5-4fe5-a9d4-2f37b4b85ca1\") " pod="openstack/rabbitmq-server-0"
Mar 10 20:14:18 crc kubenswrapper[4861]: I0310 20:14:18.075764 4861 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Mar 10 20:14:18 crc kubenswrapper[4861]: I0310 20:14:18.075801 4861 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-c1aca7ff-fbec-4ab6-8748-d984f7c669f7\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c1aca7ff-fbec-4ab6-8748-d984f7c669f7\") pod \"rabbitmq-server-0\" (UID: \"c06bc3bf-61a5-4fe5-a9d4-2f37b4b85ca1\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/6ecf906be4f2ad685f642f2ece948ef22761add8f5cb32610f54e9cfa66403aa/globalmount\"" pod="openstack/rabbitmq-server-0"
Mar 10 20:14:18 crc kubenswrapper[4861]: I0310 20:14:18.077782 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zhhbl\" (UniqueName: \"kubernetes.io/projected/c06bc3bf-61a5-4fe5-a9d4-2f37b4b85ca1-kube-api-access-zhhbl\") pod \"rabbitmq-server-0\" (UID: \"c06bc3bf-61a5-4fe5-a9d4-2f37b4b85ca1\") " pod="openstack/rabbitmq-server-0"
Mar 10 20:14:18 crc kubenswrapper[4861]: I0310 20:14:18.117016 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-c1aca7ff-fbec-4ab6-8748-d984f7c669f7\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c1aca7ff-fbec-4ab6-8748-d984f7c669f7\") pod \"rabbitmq-server-0\" (UID: \"c06bc3bf-61a5-4fe5-a9d4-2f37b4b85ca1\") " pod="openstack/rabbitmq-server-0"
Mar 10 20:14:18 crc kubenswrapper[4861]: I0310 20:14:18.163207 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6zbrm\" (UniqueName: \"kubernetes.io/projected/4e85b342-2781-46f8-94e7-c36fd2de2f23-kube-api-access-6zbrm\") pod \"rabbitmq-cell1-server-0\" (UID: \"4e85b342-2781-46f8-94e7-c36fd2de2f23\") " pod="openstack/rabbitmq-cell1-server-0"
Mar 10 20:14:18 crc kubenswrapper[4861]: I0310 20:14:18.163252 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/4e85b342-2781-46f8-94e7-c36fd2de2f23-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"4e85b342-2781-46f8-94e7-c36fd2de2f23\") " pod="openstack/rabbitmq-cell1-server-0"
Mar 10 20:14:18 crc kubenswrapper[4861]: I0310 20:14:18.163275 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/4e85b342-2781-46f8-94e7-c36fd2de2f23-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"4e85b342-2781-46f8-94e7-c36fd2de2f23\") " pod="openstack/rabbitmq-cell1-server-0"
Mar 10 20:14:18 crc kubenswrapper[4861]: I0310 20:14:18.163293 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/4e85b342-2781-46f8-94e7-c36fd2de2f23-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"4e85b342-2781-46f8-94e7-c36fd2de2f23\") " pod="openstack/rabbitmq-cell1-server-0"
Mar 10 20:14:18 crc kubenswrapper[4861]: I0310 20:14:18.163321 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/4e85b342-2781-46f8-94e7-c36fd2de2f23-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"4e85b342-2781-46f8-94e7-c36fd2de2f23\") " pod="openstack/rabbitmq-cell1-server-0"
Mar 10 20:14:18 crc kubenswrapper[4861]: I0310 20:14:18.163340 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/4e85b342-2781-46f8-94e7-c36fd2de2f23-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"4e85b342-2781-46f8-94e7-c36fd2de2f23\") " pod="openstack/rabbitmq-cell1-server-0"
Mar 10 20:14:18 crc kubenswrapper[4861]: I0310 20:14:18.163362 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-0d291659-ee22-478e-ad65-e15c40684608\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0d291659-ee22-478e-ad65-e15c40684608\") pod \"rabbitmq-cell1-server-0\" (UID: \"4e85b342-2781-46f8-94e7-c36fd2de2f23\") " pod="openstack/rabbitmq-cell1-server-0"
Mar 10 20:14:18 crc kubenswrapper[4861]: I0310 20:14:18.163425 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/4e85b342-2781-46f8-94e7-c36fd2de2f23-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"4e85b342-2781-46f8-94e7-c36fd2de2f23\") " pod="openstack/rabbitmq-cell1-server-0"
Mar 10 20:14:18 crc kubenswrapper[4861]: I0310 20:14:18.163454 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/4e85b342-2781-46f8-94e7-c36fd2de2f23-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"4e85b342-2781-46f8-94e7-c36fd2de2f23\") " pod="openstack/rabbitmq-cell1-server-0"
Mar 10 20:14:18 crc kubenswrapper[4861]: I0310 20:14:18.163471 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4e85b342-2781-46f8-94e7-c36fd2de2f23-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"4e85b342-2781-46f8-94e7-c36fd2de2f23\") " pod="openstack/rabbitmq-cell1-server-0"
Mar 10 20:14:18 crc kubenswrapper[4861]: I0310 20:14:18.163490 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/4e85b342-2781-46f8-94e7-c36fd2de2f23-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"4e85b342-2781-46f8-94e7-c36fd2de2f23\") " pod="openstack/rabbitmq-cell1-server-0"
Mar 10 20:14:18 crc kubenswrapper[4861]: I0310 20:14:18.164763 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/4e85b342-2781-46f8-94e7-c36fd2de2f23-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"4e85b342-2781-46f8-94e7-c36fd2de2f23\") " pod="openstack/rabbitmq-cell1-server-0"
Mar 10 20:14:18 crc kubenswrapper[4861]: I0310 20:14:18.165503 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4e85b342-2781-46f8-94e7-c36fd2de2f23-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"4e85b342-2781-46f8-94e7-c36fd2de2f23\") " pod="openstack/rabbitmq-cell1-server-0"
Mar 10 20:14:18 crc kubenswrapper[4861]: I0310 20:14:18.165889 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/4e85b342-2781-46f8-94e7-c36fd2de2f23-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"4e85b342-2781-46f8-94e7-c36fd2de2f23\") " pod="openstack/rabbitmq-cell1-server-0"
Mar 10 20:14:18 crc kubenswrapper[4861]: I0310 20:14:18.166977 4861 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Mar 10 20:14:18 crc kubenswrapper[4861]: I0310 20:14:18.167005 4861 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-0d291659-ee22-478e-ad65-e15c40684608\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0d291659-ee22-478e-ad65-e15c40684608\") pod \"rabbitmq-cell1-server-0\" (UID: \"4e85b342-2781-46f8-94e7-c36fd2de2f23\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/d88da1940136618f2f85840ee5cac31587a56be4a4528ff96ecb79185fdc0bfd/globalmount\"" pod="openstack/rabbitmq-cell1-server-0"
Mar 10 20:14:18 crc kubenswrapper[4861]: I0310 20:14:18.167485 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/4e85b342-2781-46f8-94e7-c36fd2de2f23-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"4e85b342-2781-46f8-94e7-c36fd2de2f23\") " pod="openstack/rabbitmq-cell1-server-0"
Mar 10 20:14:18 crc kubenswrapper[4861]: I0310 20:14:18.167564 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/4e85b342-2781-46f8-94e7-c36fd2de2f23-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"4e85b342-2781-46f8-94e7-c36fd2de2f23\") " pod="openstack/rabbitmq-cell1-server-0"
Mar 10 20:14:18 crc kubenswrapper[4861]: I0310 20:14:18.168305 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/4e85b342-2781-46f8-94e7-c36fd2de2f23-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"4e85b342-2781-46f8-94e7-c36fd2de2f23\") " pod="openstack/rabbitmq-cell1-server-0"
Mar 10 20:14:18 crc kubenswrapper[4861]: I0310 20:14:18.168890 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/4e85b342-2781-46f8-94e7-c36fd2de2f23-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"4e85b342-2781-46f8-94e7-c36fd2de2f23\") " pod="openstack/rabbitmq-cell1-server-0"
Mar 10 20:14:18 crc kubenswrapper[4861]: I0310 20:14:18.169657 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/4e85b342-2781-46f8-94e7-c36fd2de2f23-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"4e85b342-2781-46f8-94e7-c36fd2de2f23\") " pod="openstack/rabbitmq-cell1-server-0"
Mar 10 20:14:18 crc kubenswrapper[4861]: I0310 20:14:18.169758 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/4e85b342-2781-46f8-94e7-c36fd2de2f23-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"4e85b342-2781-46f8-94e7-c36fd2de2f23\") " pod="openstack/rabbitmq-cell1-server-0"
Mar 10 20:14:18 crc kubenswrapper[4861]: I0310 20:14:18.179343 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6zbrm\" (UniqueName: \"kubernetes.io/projected/4e85b342-2781-46f8-94e7-c36fd2de2f23-kube-api-access-6zbrm\") pod \"rabbitmq-cell1-server-0\" (UID: \"4e85b342-2781-46f8-94e7-c36fd2de2f23\") " pod="openstack/rabbitmq-cell1-server-0"
Mar 10 20:14:18 crc kubenswrapper[4861]: I0310 20:14:18.207043 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-0d291659-ee22-478e-ad65-e15c40684608\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0d291659-ee22-478e-ad65-e15c40684608\") pod \"rabbitmq-cell1-server-0\" (UID: \"4e85b342-2781-46f8-94e7-c36fd2de2f23\") " pod="openstack/rabbitmq-cell1-server-0"
Mar 10 20:14:18 crc kubenswrapper[4861]: I0310 20:14:18.228685 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0"
Mar 10 20:14:18 crc kubenswrapper[4861]: I0310 20:14:18.260239 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0"
Mar 10 20:14:18 crc kubenswrapper[4861]: I0310 20:14:18.560974 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Mar 10 20:14:18 crc kubenswrapper[4861]: I0310 20:14:18.681201 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"]
Mar 10 20:14:18 crc kubenswrapper[4861]: I0310 20:14:18.682136 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"4e85b342-2781-46f8-94e7-c36fd2de2f23","Type":"ContainerStarted","Data":"921d9998ea8c7a224546d9313e692eeb04020827d62b045de0881bdbbd693aa4"}
Mar 10 20:14:18 crc kubenswrapper[4861]: I0310 20:14:18.969064 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="435b131b-9c41-4320-9bf9-47bdb8011c07" path="/var/lib/kubelet/pods/435b131b-9c41-4320-9bf9-47bdb8011c07/volumes"
Mar 10 20:14:18 crc kubenswrapper[4861]: I0310 20:14:18.970171 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="daacc843-cac9-4039-a082-56a2fd9391f9" path="/var/lib/kubelet/pods/daacc843-cac9-4039-a082-56a2fd9391f9/volumes"
Mar 10 20:14:19 crc kubenswrapper[4861]: W0310 20:14:19.003385 4861 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc06bc3bf_61a5_4fe5_a9d4_2f37b4b85ca1.slice/crio-c7c4cfe53903f9ce6287da8e1714260c00f0b5bd729e3167bed84b364730ff4c WatchSource:0}: Error finding container c7c4cfe53903f9ce6287da8e1714260c00f0b5bd729e3167bed84b364730ff4c: Status 404 returned error can't find the container with id c7c4cfe53903f9ce6287da8e1714260c00f0b5bd729e3167bed84b364730ff4c
Mar 10 20:14:19 crc kubenswrapper[4861]: I0310 20:14:19.722454 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"c06bc3bf-61a5-4fe5-a9d4-2f37b4b85ca1","Type":"ContainerStarted","Data":"c7c4cfe53903f9ce6287da8e1714260c00f0b5bd729e3167bed84b364730ff4c"}
Mar 10 20:14:20 crc kubenswrapper[4861]: I0310 20:14:20.747053 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"c06bc3bf-61a5-4fe5-a9d4-2f37b4b85ca1","Type":"ContainerStarted","Data":"9b684f76a60faabc4925c11c2d97bfc6189449c3da72129109ab82adb791d3e0"}
Mar 10 20:14:20 crc kubenswrapper[4861]: I0310 20:14:20.752772 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"4e85b342-2781-46f8-94e7-c36fd2de2f23","Type":"ContainerStarted","Data":"c3e70e29f77111b33c7606df2c16ef496019aee7fc1062d176d74f09bdb583f4"}
Mar 10 20:14:54 crc kubenswrapper[4861]: I0310 20:14:54.065070 4861 generic.go:334] "Generic (PLEG): container finished" podID="4e85b342-2781-46f8-94e7-c36fd2de2f23" containerID="c3e70e29f77111b33c7606df2c16ef496019aee7fc1062d176d74f09bdb583f4" exitCode=0
Mar 10 20:14:54 crc kubenswrapper[4861]: I0310 20:14:54.065199 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"4e85b342-2781-46f8-94e7-c36fd2de2f23","Type":"ContainerDied","Data":"c3e70e29f77111b33c7606df2c16ef496019aee7fc1062d176d74f09bdb583f4"}
Mar 10 20:14:55 crc kubenswrapper[4861]: I0310 20:14:55.074839 4861 generic.go:334] "Generic (PLEG): container finished" podID="c06bc3bf-61a5-4fe5-a9d4-2f37b4b85ca1" containerID="9b684f76a60faabc4925c11c2d97bfc6189449c3da72129109ab82adb791d3e0" exitCode=0
Mar 10 20:14:55 crc kubenswrapper[4861]: I0310 20:14:55.074957 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"c06bc3bf-61a5-4fe5-a9d4-2f37b4b85ca1","Type":"ContainerDied","Data":"9b684f76a60faabc4925c11c2d97bfc6189449c3da72129109ab82adb791d3e0"}
Mar 10 20:14:55 crc kubenswrapper[4861]: I0310 20:14:55.081890 4861 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"4e85b342-2781-46f8-94e7-c36fd2de2f23","Type":"ContainerStarted","Data":"e0f09bbafedcc32dbd217a98b223801a513c66235cf7c6b2cdf294f341b11886"} Mar 10 20:14:55 crc kubenswrapper[4861]: I0310 20:14:55.082305 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Mar 10 20:14:55 crc kubenswrapper[4861]: I0310 20:14:55.146969 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=38.146941793 podStartE2EDuration="38.146941793s" podCreationTimestamp="2026-03-10 20:14:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-10 20:14:55.140644792 +0000 UTC m=+5238.904080762" watchObservedRunningTime="2026-03-10 20:14:55.146941793 +0000 UTC m=+5238.910377763" Mar 10 20:14:56 crc kubenswrapper[4861]: I0310 20:14:56.092064 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"c06bc3bf-61a5-4fe5-a9d4-2f37b4b85ca1","Type":"ContainerStarted","Data":"95bb2528d1661932335a30dd9e951a9468db4c89acfc05bf61cb169caf526889"} Mar 10 20:14:56 crc kubenswrapper[4861]: I0310 20:14:56.092757 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Mar 10 20:14:56 crc kubenswrapper[4861]: I0310 20:14:56.129198 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=39.129173172 podStartE2EDuration="39.129173172s" podCreationTimestamp="2026-03-10 20:14:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-10 20:14:56.126566351 +0000 UTC m=+5239.890002311" watchObservedRunningTime="2026-03-10 20:14:56.129173172 +0000 UTC m=+5239.892609142" Mar 10 20:14:57 crc 
kubenswrapper[4861]: I0310 20:14:57.512074 4861 scope.go:117] "RemoveContainer" containerID="c5a6eb30b58f3ec8e61cd111b76488edfb38e34c70cdc59b41a00697c133a6e3" Mar 10 20:15:00 crc kubenswrapper[4861]: I0310 20:15:00.153151 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29552895-fqbfv"] Mar 10 20:15:00 crc kubenswrapper[4861]: I0310 20:15:00.155968 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29552895-fqbfv" Mar 10 20:15:00 crc kubenswrapper[4861]: I0310 20:15:00.162060 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Mar 10 20:15:00 crc kubenswrapper[4861]: I0310 20:15:00.162179 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Mar 10 20:15:00 crc kubenswrapper[4861]: I0310 20:15:00.184277 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29552895-fqbfv"] Mar 10 20:15:00 crc kubenswrapper[4861]: I0310 20:15:00.252528 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7bb18524-6637-4fa0-bdde-a2825707c2a6-config-volume\") pod \"collect-profiles-29552895-fqbfv\" (UID: \"7bb18524-6637-4fa0-bdde-a2825707c2a6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29552895-fqbfv" Mar 10 20:15:00 crc kubenswrapper[4861]: I0310 20:15:00.252581 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7bb18524-6637-4fa0-bdde-a2825707c2a6-secret-volume\") pod \"collect-profiles-29552895-fqbfv\" (UID: \"7bb18524-6637-4fa0-bdde-a2825707c2a6\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29552895-fqbfv" Mar 10 20:15:00 crc kubenswrapper[4861]: I0310 20:15:00.252613 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-drvm7\" (UniqueName: \"kubernetes.io/projected/7bb18524-6637-4fa0-bdde-a2825707c2a6-kube-api-access-drvm7\") pod \"collect-profiles-29552895-fqbfv\" (UID: \"7bb18524-6637-4fa0-bdde-a2825707c2a6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29552895-fqbfv" Mar 10 20:15:00 crc kubenswrapper[4861]: I0310 20:15:00.354351 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7bb18524-6637-4fa0-bdde-a2825707c2a6-config-volume\") pod \"collect-profiles-29552895-fqbfv\" (UID: \"7bb18524-6637-4fa0-bdde-a2825707c2a6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29552895-fqbfv" Mar 10 20:15:00 crc kubenswrapper[4861]: I0310 20:15:00.354606 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7bb18524-6637-4fa0-bdde-a2825707c2a6-secret-volume\") pod \"collect-profiles-29552895-fqbfv\" (UID: \"7bb18524-6637-4fa0-bdde-a2825707c2a6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29552895-fqbfv" Mar 10 20:15:00 crc kubenswrapper[4861]: I0310 20:15:00.354639 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-drvm7\" (UniqueName: \"kubernetes.io/projected/7bb18524-6637-4fa0-bdde-a2825707c2a6-kube-api-access-drvm7\") pod \"collect-profiles-29552895-fqbfv\" (UID: \"7bb18524-6637-4fa0-bdde-a2825707c2a6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29552895-fqbfv" Mar 10 20:15:00 crc kubenswrapper[4861]: I0310 20:15:00.355560 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/7bb18524-6637-4fa0-bdde-a2825707c2a6-config-volume\") pod \"collect-profiles-29552895-fqbfv\" (UID: \"7bb18524-6637-4fa0-bdde-a2825707c2a6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29552895-fqbfv" Mar 10 20:15:00 crc kubenswrapper[4861]: I0310 20:15:00.402049 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7bb18524-6637-4fa0-bdde-a2825707c2a6-secret-volume\") pod \"collect-profiles-29552895-fqbfv\" (UID: \"7bb18524-6637-4fa0-bdde-a2825707c2a6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29552895-fqbfv" Mar 10 20:15:00 crc kubenswrapper[4861]: I0310 20:15:00.402222 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-drvm7\" (UniqueName: \"kubernetes.io/projected/7bb18524-6637-4fa0-bdde-a2825707c2a6-kube-api-access-drvm7\") pod \"collect-profiles-29552895-fqbfv\" (UID: \"7bb18524-6637-4fa0-bdde-a2825707c2a6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29552895-fqbfv" Mar 10 20:15:00 crc kubenswrapper[4861]: I0310 20:15:00.531105 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29552895-fqbfv" Mar 10 20:15:00 crc kubenswrapper[4861]: I0310 20:15:00.975278 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29552895-fqbfv"] Mar 10 20:15:01 crc kubenswrapper[4861]: I0310 20:15:01.141533 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29552895-fqbfv" event={"ID":"7bb18524-6637-4fa0-bdde-a2825707c2a6","Type":"ContainerStarted","Data":"d3cd3aba3c423a51b13fcf42e27c10dc5abc9673d382bc9fe897e9476dff3188"} Mar 10 20:15:01 crc kubenswrapper[4861]: I0310 20:15:01.142032 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29552895-fqbfv" event={"ID":"7bb18524-6637-4fa0-bdde-a2825707c2a6","Type":"ContainerStarted","Data":"6d0220404ad2ac5f3ff54b4d13d5aee034852b7cabb121d78b72e9317ca77e6e"} Mar 10 20:15:01 crc kubenswrapper[4861]: I0310 20:15:01.162413 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29552895-fqbfv" podStartSLOduration=1.162398408 podStartE2EDuration="1.162398408s" podCreationTimestamp="2026-03-10 20:15:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-10 20:15:01.159795437 +0000 UTC m=+5244.923231397" watchObservedRunningTime="2026-03-10 20:15:01.162398408 +0000 UTC m=+5244.925834358" Mar 10 20:15:02 crc kubenswrapper[4861]: I0310 20:15:02.161818 4861 generic.go:334] "Generic (PLEG): container finished" podID="7bb18524-6637-4fa0-bdde-a2825707c2a6" containerID="d3cd3aba3c423a51b13fcf42e27c10dc5abc9673d382bc9fe897e9476dff3188" exitCode=0 Mar 10 20:15:02 crc kubenswrapper[4861]: I0310 20:15:02.161935 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-operator-lifecycle-manager/collect-profiles-29552895-fqbfv" event={"ID":"7bb18524-6637-4fa0-bdde-a2825707c2a6","Type":"ContainerDied","Data":"d3cd3aba3c423a51b13fcf42e27c10dc5abc9673d382bc9fe897e9476dff3188"} Mar 10 20:15:03 crc kubenswrapper[4861]: I0310 20:15:03.606622 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29552895-fqbfv" Mar 10 20:15:03 crc kubenswrapper[4861]: I0310 20:15:03.609171 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7bb18524-6637-4fa0-bdde-a2825707c2a6-config-volume\") pod \"7bb18524-6637-4fa0-bdde-a2825707c2a6\" (UID: \"7bb18524-6637-4fa0-bdde-a2825707c2a6\") " Mar 10 20:15:03 crc kubenswrapper[4861]: I0310 20:15:03.609272 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-drvm7\" (UniqueName: \"kubernetes.io/projected/7bb18524-6637-4fa0-bdde-a2825707c2a6-kube-api-access-drvm7\") pod \"7bb18524-6637-4fa0-bdde-a2825707c2a6\" (UID: \"7bb18524-6637-4fa0-bdde-a2825707c2a6\") " Mar 10 20:15:03 crc kubenswrapper[4861]: I0310 20:15:03.610087 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb18524-6637-4fa0-bdde-a2825707c2a6-config-volume" (OuterVolumeSpecName: "config-volume") pod "7bb18524-6637-4fa0-bdde-a2825707c2a6" (UID: "7bb18524-6637-4fa0-bdde-a2825707c2a6"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 10 20:15:03 crc kubenswrapper[4861]: I0310 20:15:03.626786 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb18524-6637-4fa0-bdde-a2825707c2a6-kube-api-access-drvm7" (OuterVolumeSpecName: "kube-api-access-drvm7") pod "7bb18524-6637-4fa0-bdde-a2825707c2a6" (UID: "7bb18524-6637-4fa0-bdde-a2825707c2a6"). 
InnerVolumeSpecName "kube-api-access-drvm7". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 10 20:15:03 crc kubenswrapper[4861]: I0310 20:15:03.710503 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7bb18524-6637-4fa0-bdde-a2825707c2a6-secret-volume\") pod \"7bb18524-6637-4fa0-bdde-a2825707c2a6\" (UID: \"7bb18524-6637-4fa0-bdde-a2825707c2a6\") " Mar 10 20:15:03 crc kubenswrapper[4861]: I0310 20:15:03.710790 4861 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7bb18524-6637-4fa0-bdde-a2825707c2a6-config-volume\") on node \"crc\" DevicePath \"\"" Mar 10 20:15:03 crc kubenswrapper[4861]: I0310 20:15:03.710806 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-drvm7\" (UniqueName: \"kubernetes.io/projected/7bb18524-6637-4fa0-bdde-a2825707c2a6-kube-api-access-drvm7\") on node \"crc\" DevicePath \"\"" Mar 10 20:15:03 crc kubenswrapper[4861]: I0310 20:15:03.713560 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7bb18524-6637-4fa0-bdde-a2825707c2a6-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "7bb18524-6637-4fa0-bdde-a2825707c2a6" (UID: "7bb18524-6637-4fa0-bdde-a2825707c2a6"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 10 20:15:03 crc kubenswrapper[4861]: I0310 20:15:03.812054 4861 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7bb18524-6637-4fa0-bdde-a2825707c2a6-secret-volume\") on node \"crc\" DevicePath \"\"" Mar 10 20:15:04 crc kubenswrapper[4861]: I0310 20:15:04.180964 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29552895-fqbfv" event={"ID":"7bb18524-6637-4fa0-bdde-a2825707c2a6","Type":"ContainerDied","Data":"6d0220404ad2ac5f3ff54b4d13d5aee034852b7cabb121d78b72e9317ca77e6e"} Mar 10 20:15:04 crc kubenswrapper[4861]: I0310 20:15:04.181057 4861 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6d0220404ad2ac5f3ff54b4d13d5aee034852b7cabb121d78b72e9317ca77e6e" Mar 10 20:15:04 crc kubenswrapper[4861]: I0310 20:15:04.181416 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29552895-fqbfv" Mar 10 20:15:04 crc kubenswrapper[4861]: I0310 20:15:04.285304 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29552850-8nzzv"] Mar 10 20:15:04 crc kubenswrapper[4861]: I0310 20:15:04.296890 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29552850-8nzzv"] Mar 10 20:15:04 crc kubenswrapper[4861]: I0310 20:15:04.978451 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ab095e33-d713-4053-a55a-e7da9a9d1587" path="/var/lib/kubelet/pods/ab095e33-d713-4053-a55a-e7da9a9d1587/volumes" Mar 10 20:15:08 crc kubenswrapper[4861]: I0310 20:15:08.234253 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Mar 10 20:15:08 crc kubenswrapper[4861]: I0310 20:15:08.266397 4861 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Mar 10 20:15:11 crc kubenswrapper[4861]: I0310 20:15:11.973183 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mariadb-client"] Mar 10 20:15:11 crc kubenswrapper[4861]: E0310 20:15:11.974038 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7bb18524-6637-4fa0-bdde-a2825707c2a6" containerName="collect-profiles" Mar 10 20:15:11 crc kubenswrapper[4861]: I0310 20:15:11.974061 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="7bb18524-6637-4fa0-bdde-a2825707c2a6" containerName="collect-profiles" Mar 10 20:15:11 crc kubenswrapper[4861]: I0310 20:15:11.974358 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="7bb18524-6637-4fa0-bdde-a2825707c2a6" containerName="collect-profiles" Mar 10 20:15:11 crc kubenswrapper[4861]: I0310 20:15:11.975121 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client" Mar 10 20:15:11 crc kubenswrapper[4861]: I0310 20:15:11.981028 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-kz9gf" Mar 10 20:15:11 crc kubenswrapper[4861]: I0310 20:15:11.987822 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client"] Mar 10 20:15:12 crc kubenswrapper[4861]: I0310 20:15:12.065451 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zj595\" (UniqueName: \"kubernetes.io/projected/d7756074-6252-4f1e-a895-028e316da80b-kube-api-access-zj595\") pod \"mariadb-client\" (UID: \"d7756074-6252-4f1e-a895-028e316da80b\") " pod="openstack/mariadb-client" Mar 10 20:15:12 crc kubenswrapper[4861]: I0310 20:15:12.167600 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zj595\" (UniqueName: \"kubernetes.io/projected/d7756074-6252-4f1e-a895-028e316da80b-kube-api-access-zj595\") pod 
\"mariadb-client\" (UID: \"d7756074-6252-4f1e-a895-028e316da80b\") " pod="openstack/mariadb-client" Mar 10 20:15:12 crc kubenswrapper[4861]: I0310 20:15:12.203757 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zj595\" (UniqueName: \"kubernetes.io/projected/d7756074-6252-4f1e-a895-028e316da80b-kube-api-access-zj595\") pod \"mariadb-client\" (UID: \"d7756074-6252-4f1e-a895-028e316da80b\") " pod="openstack/mariadb-client" Mar 10 20:15:12 crc kubenswrapper[4861]: I0310 20:15:12.304888 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client" Mar 10 20:15:12 crc kubenswrapper[4861]: I0310 20:15:12.862681 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client"] Mar 10 20:15:12 crc kubenswrapper[4861]: W0310 20:15:12.871251 4861 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd7756074_6252_4f1e_a895_028e316da80b.slice/crio-14ec862b7f261d56d342b6fe89853e273658fb05905c6b69594ec2f9c32c2a83 WatchSource:0}: Error finding container 14ec862b7f261d56d342b6fe89853e273658fb05905c6b69594ec2f9c32c2a83: Status 404 returned error can't find the container with id 14ec862b7f261d56d342b6fe89853e273658fb05905c6b69594ec2f9c32c2a83 Mar 10 20:15:13 crc kubenswrapper[4861]: I0310 20:15:13.278932 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client" event={"ID":"d7756074-6252-4f1e-a895-028e316da80b","Type":"ContainerStarted","Data":"14ec862b7f261d56d342b6fe89853e273658fb05905c6b69594ec2f9c32c2a83"} Mar 10 20:15:21 crc kubenswrapper[4861]: I0310 20:15:21.365927 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client" event={"ID":"d7756074-6252-4f1e-a895-028e316da80b","Type":"ContainerStarted","Data":"6b9cf6530280d3c4db44a6f12f9a4eda315a4503bdb38493cc9fc4bf6c58687a"} Mar 10 20:15:21 crc kubenswrapper[4861]: I0310 20:15:21.390474 4861 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/mariadb-client" podStartSLOduration=3.104636666 podStartE2EDuration="10.390450762s" podCreationTimestamp="2026-03-10 20:15:11 +0000 UTC" firstStartedPulling="2026-03-10 20:15:12.875032726 +0000 UTC m=+5256.638468706" lastFinishedPulling="2026-03-10 20:15:20.160846812 +0000 UTC m=+5263.924282802" observedRunningTime="2026-03-10 20:15:21.388291703 +0000 UTC m=+5265.151727693" watchObservedRunningTime="2026-03-10 20:15:21.390450762 +0000 UTC m=+5265.153886762" Mar 10 20:15:35 crc kubenswrapper[4861]: I0310 20:15:35.279739 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mariadb-client"] Mar 10 20:15:35 crc kubenswrapper[4861]: I0310 20:15:35.280865 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/mariadb-client" podUID="d7756074-6252-4f1e-a895-028e316da80b" containerName="mariadb-client" containerID="cri-o://6b9cf6530280d3c4db44a6f12f9a4eda315a4503bdb38493cc9fc4bf6c58687a" gracePeriod=30 Mar 10 20:15:35 crc kubenswrapper[4861]: I0310 20:15:35.514998 4861 generic.go:334] "Generic (PLEG): container finished" podID="d7756074-6252-4f1e-a895-028e316da80b" containerID="6b9cf6530280d3c4db44a6f12f9a4eda315a4503bdb38493cc9fc4bf6c58687a" exitCode=143 Mar 10 20:15:35 crc kubenswrapper[4861]: I0310 20:15:35.515110 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client" event={"ID":"d7756074-6252-4f1e-a895-028e316da80b","Type":"ContainerDied","Data":"6b9cf6530280d3c4db44a6f12f9a4eda315a4503bdb38493cc9fc4bf6c58687a"} Mar 10 20:15:35 crc kubenswrapper[4861]: I0310 20:15:35.964042 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client" Mar 10 20:15:36 crc kubenswrapper[4861]: I0310 20:15:36.021807 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zj595\" (UniqueName: \"kubernetes.io/projected/d7756074-6252-4f1e-a895-028e316da80b-kube-api-access-zj595\") pod \"d7756074-6252-4f1e-a895-028e316da80b\" (UID: \"d7756074-6252-4f1e-a895-028e316da80b\") " Mar 10 20:15:36 crc kubenswrapper[4861]: I0310 20:15:36.033185 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d7756074-6252-4f1e-a895-028e316da80b-kube-api-access-zj595" (OuterVolumeSpecName: "kube-api-access-zj595") pod "d7756074-6252-4f1e-a895-028e316da80b" (UID: "d7756074-6252-4f1e-a895-028e316da80b"). InnerVolumeSpecName "kube-api-access-zj595". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 10 20:15:36 crc kubenswrapper[4861]: I0310 20:15:36.124472 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zj595\" (UniqueName: \"kubernetes.io/projected/d7756074-6252-4f1e-a895-028e316da80b-kube-api-access-zj595\") on node \"crc\" DevicePath \"\"" Mar 10 20:15:36 crc kubenswrapper[4861]: I0310 20:15:36.528206 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client" event={"ID":"d7756074-6252-4f1e-a895-028e316da80b","Type":"ContainerDied","Data":"14ec862b7f261d56d342b6fe89853e273658fb05905c6b69594ec2f9c32c2a83"} Mar 10 20:15:36 crc kubenswrapper[4861]: I0310 20:15:36.528276 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client" Mar 10 20:15:36 crc kubenswrapper[4861]: I0310 20:15:36.528286 4861 scope.go:117] "RemoveContainer" containerID="6b9cf6530280d3c4db44a6f12f9a4eda315a4503bdb38493cc9fc4bf6c58687a" Mar 10 20:15:36 crc kubenswrapper[4861]: I0310 20:15:36.575022 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mariadb-client"] Mar 10 20:15:36 crc kubenswrapper[4861]: I0310 20:15:36.585009 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mariadb-client"] Mar 10 20:15:36 crc kubenswrapper[4861]: I0310 20:15:36.976528 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d7756074-6252-4f1e-a895-028e316da80b" path="/var/lib/kubelet/pods/d7756074-6252-4f1e-a895-028e316da80b/volumes" Mar 10 20:15:57 crc kubenswrapper[4861]: I0310 20:15:57.635950 4861 scope.go:117] "RemoveContainer" containerID="b54c280a869472f44fa9a43c1c742443d37a059381310a2c6315ca52d30d7667" Mar 10 20:16:00 crc kubenswrapper[4861]: I0310 20:16:00.163835 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29552896-ft86l"] Mar 10 20:16:00 crc kubenswrapper[4861]: E0310 20:16:00.164750 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d7756074-6252-4f1e-a895-028e316da80b" containerName="mariadb-client" Mar 10 20:16:00 crc kubenswrapper[4861]: I0310 20:16:00.164772 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="d7756074-6252-4f1e-a895-028e316da80b" containerName="mariadb-client" Mar 10 20:16:00 crc kubenswrapper[4861]: I0310 20:16:00.165087 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="d7756074-6252-4f1e-a895-028e316da80b" containerName="mariadb-client" Mar 10 20:16:00 crc kubenswrapper[4861]: I0310 20:16:00.165965 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29552896-ft86l" Mar 10 20:16:00 crc kubenswrapper[4861]: I0310 20:16:00.168982 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 10 20:16:00 crc kubenswrapper[4861]: I0310 20:16:00.169419 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-gfbj2" Mar 10 20:16:00 crc kubenswrapper[4861]: I0310 20:16:00.175823 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 10 20:16:00 crc kubenswrapper[4861]: I0310 20:16:00.182437 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29552896-ft86l"] Mar 10 20:16:00 crc kubenswrapper[4861]: I0310 20:16:00.295612 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-khpt5\" (UniqueName: \"kubernetes.io/projected/4b73f6e2-2d34-4f5b-a38a-4cc667165792-kube-api-access-khpt5\") pod \"auto-csr-approver-29552896-ft86l\" (UID: \"4b73f6e2-2d34-4f5b-a38a-4cc667165792\") " pod="openshift-infra/auto-csr-approver-29552896-ft86l" Mar 10 20:16:00 crc kubenswrapper[4861]: I0310 20:16:00.397298 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-khpt5\" (UniqueName: \"kubernetes.io/projected/4b73f6e2-2d34-4f5b-a38a-4cc667165792-kube-api-access-khpt5\") pod \"auto-csr-approver-29552896-ft86l\" (UID: \"4b73f6e2-2d34-4f5b-a38a-4cc667165792\") " pod="openshift-infra/auto-csr-approver-29552896-ft86l" Mar 10 20:16:00 crc kubenswrapper[4861]: I0310 20:16:00.425982 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-khpt5\" (UniqueName: \"kubernetes.io/projected/4b73f6e2-2d34-4f5b-a38a-4cc667165792-kube-api-access-khpt5\") pod \"auto-csr-approver-29552896-ft86l\" (UID: \"4b73f6e2-2d34-4f5b-a38a-4cc667165792\") " 
pod="openshift-infra/auto-csr-approver-29552896-ft86l" Mar 10 20:16:00 crc kubenswrapper[4861]: I0310 20:16:00.506922 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29552896-ft86l" Mar 10 20:16:01 crc kubenswrapper[4861]: I0310 20:16:01.097239 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29552896-ft86l"] Mar 10 20:16:01 crc kubenswrapper[4861]: I0310 20:16:01.814202 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29552896-ft86l" event={"ID":"4b73f6e2-2d34-4f5b-a38a-4cc667165792","Type":"ContainerStarted","Data":"c87821574ab2fd4da5155e0c637e89b4e400632b2cfae6bd9a3a0e40f215fa86"} Mar 10 20:16:02 crc kubenswrapper[4861]: I0310 20:16:02.827436 4861 generic.go:334] "Generic (PLEG): container finished" podID="4b73f6e2-2d34-4f5b-a38a-4cc667165792" containerID="223675cdda1bf4fe85773a91fa23f535f2e9146aa14a9114bb4a2c1174a6c340" exitCode=0 Mar 10 20:16:02 crc kubenswrapper[4861]: I0310 20:16:02.827668 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29552896-ft86l" event={"ID":"4b73f6e2-2d34-4f5b-a38a-4cc667165792","Type":"ContainerDied","Data":"223675cdda1bf4fe85773a91fa23f535f2e9146aa14a9114bb4a2c1174a6c340"} Mar 10 20:16:04 crc kubenswrapper[4861]: I0310 20:16:04.208660 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29552896-ft86l" Mar 10 20:16:04 crc kubenswrapper[4861]: I0310 20:16:04.366310 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-khpt5\" (UniqueName: \"kubernetes.io/projected/4b73f6e2-2d34-4f5b-a38a-4cc667165792-kube-api-access-khpt5\") pod \"4b73f6e2-2d34-4f5b-a38a-4cc667165792\" (UID: \"4b73f6e2-2d34-4f5b-a38a-4cc667165792\") " Mar 10 20:16:04 crc kubenswrapper[4861]: I0310 20:16:04.372499 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4b73f6e2-2d34-4f5b-a38a-4cc667165792-kube-api-access-khpt5" (OuterVolumeSpecName: "kube-api-access-khpt5") pod "4b73f6e2-2d34-4f5b-a38a-4cc667165792" (UID: "4b73f6e2-2d34-4f5b-a38a-4cc667165792"). InnerVolumeSpecName "kube-api-access-khpt5". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 10 20:16:04 crc kubenswrapper[4861]: I0310 20:16:04.467722 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-khpt5\" (UniqueName: \"kubernetes.io/projected/4b73f6e2-2d34-4f5b-a38a-4cc667165792-kube-api-access-khpt5\") on node \"crc\" DevicePath \"\"" Mar 10 20:16:04 crc kubenswrapper[4861]: I0310 20:16:04.848452 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29552896-ft86l" event={"ID":"4b73f6e2-2d34-4f5b-a38a-4cc667165792","Type":"ContainerDied","Data":"c87821574ab2fd4da5155e0c637e89b4e400632b2cfae6bd9a3a0e40f215fa86"} Mar 10 20:16:04 crc kubenswrapper[4861]: I0310 20:16:04.848515 4861 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c87821574ab2fd4da5155e0c637e89b4e400632b2cfae6bd9a3a0e40f215fa86" Mar 10 20:16:04 crc kubenswrapper[4861]: I0310 20:16:04.848589 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29552896-ft86l" Mar 10 20:16:05 crc kubenswrapper[4861]: I0310 20:16:05.295761 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29552890-zcwwr"] Mar 10 20:16:05 crc kubenswrapper[4861]: I0310 20:16:05.302631 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29552890-zcwwr"] Mar 10 20:16:06 crc kubenswrapper[4861]: I0310 20:16:06.973334 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e0684c6e-8f32-4e42-bad1-b4014f02a12d" path="/var/lib/kubelet/pods/e0684c6e-8f32-4e42-bad1-b4014f02a12d/volumes" Mar 10 20:16:10 crc kubenswrapper[4861]: I0310 20:16:10.572692 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-f5jmm"] Mar 10 20:16:10 crc kubenswrapper[4861]: E0310 20:16:10.578467 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4b73f6e2-2d34-4f5b-a38a-4cc667165792" containerName="oc" Mar 10 20:16:10 crc kubenswrapper[4861]: I0310 20:16:10.578548 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="4b73f6e2-2d34-4f5b-a38a-4cc667165792" containerName="oc" Mar 10 20:16:10 crc kubenswrapper[4861]: I0310 20:16:10.579274 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="4b73f6e2-2d34-4f5b-a38a-4cc667165792" containerName="oc" Mar 10 20:16:10 crc kubenswrapper[4861]: I0310 20:16:10.582422 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-f5jmm" Mar 10 20:16:10 crc kubenswrapper[4861]: I0310 20:16:10.603686 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-f5jmm"] Mar 10 20:16:10 crc kubenswrapper[4861]: I0310 20:16:10.685764 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7r5tv\" (UniqueName: \"kubernetes.io/projected/606707c4-5147-46fe-8ad9-f93568abcd26-kube-api-access-7r5tv\") pod \"certified-operators-f5jmm\" (UID: \"606707c4-5147-46fe-8ad9-f93568abcd26\") " pod="openshift-marketplace/certified-operators-f5jmm" Mar 10 20:16:10 crc kubenswrapper[4861]: I0310 20:16:10.685848 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/606707c4-5147-46fe-8ad9-f93568abcd26-catalog-content\") pod \"certified-operators-f5jmm\" (UID: \"606707c4-5147-46fe-8ad9-f93568abcd26\") " pod="openshift-marketplace/certified-operators-f5jmm" Mar 10 20:16:10 crc kubenswrapper[4861]: I0310 20:16:10.685920 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/606707c4-5147-46fe-8ad9-f93568abcd26-utilities\") pod \"certified-operators-f5jmm\" (UID: \"606707c4-5147-46fe-8ad9-f93568abcd26\") " pod="openshift-marketplace/certified-operators-f5jmm" Mar 10 20:16:10 crc kubenswrapper[4861]: I0310 20:16:10.787493 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/606707c4-5147-46fe-8ad9-f93568abcd26-utilities\") pod \"certified-operators-f5jmm\" (UID: \"606707c4-5147-46fe-8ad9-f93568abcd26\") " pod="openshift-marketplace/certified-operators-f5jmm" Mar 10 20:16:10 crc kubenswrapper[4861]: I0310 20:16:10.787736 4861 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-7r5tv\" (UniqueName: \"kubernetes.io/projected/606707c4-5147-46fe-8ad9-f93568abcd26-kube-api-access-7r5tv\") pod \"certified-operators-f5jmm\" (UID: \"606707c4-5147-46fe-8ad9-f93568abcd26\") " pod="openshift-marketplace/certified-operators-f5jmm" Mar 10 20:16:10 crc kubenswrapper[4861]: I0310 20:16:10.787786 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/606707c4-5147-46fe-8ad9-f93568abcd26-catalog-content\") pod \"certified-operators-f5jmm\" (UID: \"606707c4-5147-46fe-8ad9-f93568abcd26\") " pod="openshift-marketplace/certified-operators-f5jmm" Mar 10 20:16:10 crc kubenswrapper[4861]: I0310 20:16:10.788393 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/606707c4-5147-46fe-8ad9-f93568abcd26-utilities\") pod \"certified-operators-f5jmm\" (UID: \"606707c4-5147-46fe-8ad9-f93568abcd26\") " pod="openshift-marketplace/certified-operators-f5jmm" Mar 10 20:16:10 crc kubenswrapper[4861]: I0310 20:16:10.788464 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/606707c4-5147-46fe-8ad9-f93568abcd26-catalog-content\") pod \"certified-operators-f5jmm\" (UID: \"606707c4-5147-46fe-8ad9-f93568abcd26\") " pod="openshift-marketplace/certified-operators-f5jmm" Mar 10 20:16:10 crc kubenswrapper[4861]: I0310 20:16:10.898436 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7r5tv\" (UniqueName: \"kubernetes.io/projected/606707c4-5147-46fe-8ad9-f93568abcd26-kube-api-access-7r5tv\") pod \"certified-operators-f5jmm\" (UID: \"606707c4-5147-46fe-8ad9-f93568abcd26\") " pod="openshift-marketplace/certified-operators-f5jmm" Mar 10 20:16:10 crc kubenswrapper[4861]: I0310 20:16:10.920124 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-f5jmm" Mar 10 20:16:11 crc kubenswrapper[4861]: I0310 20:16:11.474736 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-f5jmm"] Mar 10 20:16:11 crc kubenswrapper[4861]: I0310 20:16:11.921276 4861 generic.go:334] "Generic (PLEG): container finished" podID="606707c4-5147-46fe-8ad9-f93568abcd26" containerID="22a8a37bd1c70ae62d01ca3fa708040d247a050594841baae1abb4d67541d58e" exitCode=0 Mar 10 20:16:11 crc kubenswrapper[4861]: I0310 20:16:11.921342 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-f5jmm" event={"ID":"606707c4-5147-46fe-8ad9-f93568abcd26","Type":"ContainerDied","Data":"22a8a37bd1c70ae62d01ca3fa708040d247a050594841baae1abb4d67541d58e"} Mar 10 20:16:11 crc kubenswrapper[4861]: I0310 20:16:11.921383 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-f5jmm" event={"ID":"606707c4-5147-46fe-8ad9-f93568abcd26","Type":"ContainerStarted","Data":"d1da167ec9f2166376593694fa5b0c32f91c24b4a65260862bb9f5a9fadf9d9e"} Mar 10 20:16:13 crc kubenswrapper[4861]: I0310 20:16:13.950346 4861 generic.go:334] "Generic (PLEG): container finished" podID="606707c4-5147-46fe-8ad9-f93568abcd26" containerID="2ce304f12abe0759cb2acca97b2356309654821f550d23486b63209a010dee22" exitCode=0 Mar 10 20:16:13 crc kubenswrapper[4861]: I0310 20:16:13.950535 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-f5jmm" event={"ID":"606707c4-5147-46fe-8ad9-f93568abcd26","Type":"ContainerDied","Data":"2ce304f12abe0759cb2acca97b2356309654821f550d23486b63209a010dee22"} Mar 10 20:16:14 crc kubenswrapper[4861]: I0310 20:16:14.974316 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-f5jmm" 
event={"ID":"606707c4-5147-46fe-8ad9-f93568abcd26","Type":"ContainerStarted","Data":"078ed8b90146ef3568fa2e6bace7ac90b4b0b53a8fd015eb6658dc98b539fd8a"} Mar 10 20:16:15 crc kubenswrapper[4861]: I0310 20:16:15.014256 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-f5jmm" podStartSLOduration=2.51505776 podStartE2EDuration="5.014216818s" podCreationTimestamp="2026-03-10 20:16:10 +0000 UTC" firstStartedPulling="2026-03-10 20:16:11.924098669 +0000 UTC m=+5315.687534669" lastFinishedPulling="2026-03-10 20:16:14.423257727 +0000 UTC m=+5318.186693727" observedRunningTime="2026-03-10 20:16:15.005981734 +0000 UTC m=+5318.769417724" watchObservedRunningTime="2026-03-10 20:16:15.014216818 +0000 UTC m=+5318.777652828" Mar 10 20:16:20 crc kubenswrapper[4861]: I0310 20:16:20.921174 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-f5jmm" Mar 10 20:16:20 crc kubenswrapper[4861]: I0310 20:16:20.922064 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-f5jmm" Mar 10 20:16:21 crc kubenswrapper[4861]: I0310 20:16:21.016331 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-f5jmm" Mar 10 20:16:21 crc kubenswrapper[4861]: I0310 20:16:21.219906 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-f5jmm" Mar 10 20:16:21 crc kubenswrapper[4861]: I0310 20:16:21.288909 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-f5jmm"] Mar 10 20:16:21 crc kubenswrapper[4861]: I0310 20:16:21.991836 4861 patch_prober.go:28] interesting pod/machine-config-daemon-qttbr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 10 20:16:21 crc kubenswrapper[4861]: I0310 20:16:21.991917 4861 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qttbr" podUID="771189c2-452d-4204-a0b7-abfe9ba62bd0" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 10 20:16:23 crc kubenswrapper[4861]: I0310 20:16:23.162284 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-f5jmm" podUID="606707c4-5147-46fe-8ad9-f93568abcd26" containerName="registry-server" containerID="cri-o://078ed8b90146ef3568fa2e6bace7ac90b4b0b53a8fd015eb6658dc98b539fd8a" gracePeriod=2 Mar 10 20:16:23 crc kubenswrapper[4861]: I0310 20:16:23.656619 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-f5jmm" Mar 10 20:16:23 crc kubenswrapper[4861]: I0310 20:16:23.740168 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7r5tv\" (UniqueName: \"kubernetes.io/projected/606707c4-5147-46fe-8ad9-f93568abcd26-kube-api-access-7r5tv\") pod \"606707c4-5147-46fe-8ad9-f93568abcd26\" (UID: \"606707c4-5147-46fe-8ad9-f93568abcd26\") " Mar 10 20:16:23 crc kubenswrapper[4861]: I0310 20:16:23.740291 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/606707c4-5147-46fe-8ad9-f93568abcd26-catalog-content\") pod \"606707c4-5147-46fe-8ad9-f93568abcd26\" (UID: \"606707c4-5147-46fe-8ad9-f93568abcd26\") " Mar 10 20:16:23 crc kubenswrapper[4861]: I0310 20:16:23.740366 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/606707c4-5147-46fe-8ad9-f93568abcd26-utilities\") pod \"606707c4-5147-46fe-8ad9-f93568abcd26\" (UID: \"606707c4-5147-46fe-8ad9-f93568abcd26\") " Mar 10 20:16:23 crc kubenswrapper[4861]: I0310 20:16:23.742233 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/606707c4-5147-46fe-8ad9-f93568abcd26-utilities" (OuterVolumeSpecName: "utilities") pod "606707c4-5147-46fe-8ad9-f93568abcd26" (UID: "606707c4-5147-46fe-8ad9-f93568abcd26"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 10 20:16:23 crc kubenswrapper[4861]: I0310 20:16:23.747465 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/606707c4-5147-46fe-8ad9-f93568abcd26-kube-api-access-7r5tv" (OuterVolumeSpecName: "kube-api-access-7r5tv") pod "606707c4-5147-46fe-8ad9-f93568abcd26" (UID: "606707c4-5147-46fe-8ad9-f93568abcd26"). InnerVolumeSpecName "kube-api-access-7r5tv". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 10 20:16:23 crc kubenswrapper[4861]: I0310 20:16:23.825910 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/606707c4-5147-46fe-8ad9-f93568abcd26-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "606707c4-5147-46fe-8ad9-f93568abcd26" (UID: "606707c4-5147-46fe-8ad9-f93568abcd26"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 10 20:16:23 crc kubenswrapper[4861]: I0310 20:16:23.841654 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7r5tv\" (UniqueName: \"kubernetes.io/projected/606707c4-5147-46fe-8ad9-f93568abcd26-kube-api-access-7r5tv\") on node \"crc\" DevicePath \"\"" Mar 10 20:16:23 crc kubenswrapper[4861]: I0310 20:16:23.841677 4861 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/606707c4-5147-46fe-8ad9-f93568abcd26-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 10 20:16:23 crc kubenswrapper[4861]: I0310 20:16:23.841687 4861 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/606707c4-5147-46fe-8ad9-f93568abcd26-utilities\") on node \"crc\" DevicePath \"\"" Mar 10 20:16:24 crc kubenswrapper[4861]: I0310 20:16:24.176214 4861 generic.go:334] "Generic (PLEG): container finished" podID="606707c4-5147-46fe-8ad9-f93568abcd26" containerID="078ed8b90146ef3568fa2e6bace7ac90b4b0b53a8fd015eb6658dc98b539fd8a" exitCode=0 Mar 10 20:16:24 crc kubenswrapper[4861]: I0310 20:16:24.176282 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-f5jmm" event={"ID":"606707c4-5147-46fe-8ad9-f93568abcd26","Type":"ContainerDied","Data":"078ed8b90146ef3568fa2e6bace7ac90b4b0b53a8fd015eb6658dc98b539fd8a"} Mar 10 20:16:24 crc kubenswrapper[4861]: I0310 20:16:24.176307 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-f5jmm" Mar 10 20:16:24 crc kubenswrapper[4861]: I0310 20:16:24.176335 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-f5jmm" event={"ID":"606707c4-5147-46fe-8ad9-f93568abcd26","Type":"ContainerDied","Data":"d1da167ec9f2166376593694fa5b0c32f91c24b4a65260862bb9f5a9fadf9d9e"} Mar 10 20:16:24 crc kubenswrapper[4861]: I0310 20:16:24.176363 4861 scope.go:117] "RemoveContainer" containerID="078ed8b90146ef3568fa2e6bace7ac90b4b0b53a8fd015eb6658dc98b539fd8a" Mar 10 20:16:24 crc kubenswrapper[4861]: I0310 20:16:24.207777 4861 scope.go:117] "RemoveContainer" containerID="2ce304f12abe0759cb2acca97b2356309654821f550d23486b63209a010dee22" Mar 10 20:16:24 crc kubenswrapper[4861]: I0310 20:16:24.239816 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-f5jmm"] Mar 10 20:16:24 crc kubenswrapper[4861]: I0310 20:16:24.251341 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-f5jmm"] Mar 10 20:16:24 crc kubenswrapper[4861]: I0310 20:16:24.252968 4861 scope.go:117] "RemoveContainer" containerID="22a8a37bd1c70ae62d01ca3fa708040d247a050594841baae1abb4d67541d58e" Mar 10 20:16:24 crc kubenswrapper[4861]: I0310 20:16:24.294386 4861 scope.go:117] "RemoveContainer" containerID="078ed8b90146ef3568fa2e6bace7ac90b4b0b53a8fd015eb6658dc98b539fd8a" Mar 10 20:16:24 crc kubenswrapper[4861]: E0310 20:16:24.295067 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"078ed8b90146ef3568fa2e6bace7ac90b4b0b53a8fd015eb6658dc98b539fd8a\": container with ID starting with 078ed8b90146ef3568fa2e6bace7ac90b4b0b53a8fd015eb6658dc98b539fd8a not found: ID does not exist" containerID="078ed8b90146ef3568fa2e6bace7ac90b4b0b53a8fd015eb6658dc98b539fd8a" Mar 10 20:16:24 crc kubenswrapper[4861]: I0310 20:16:24.295123 4861 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"078ed8b90146ef3568fa2e6bace7ac90b4b0b53a8fd015eb6658dc98b539fd8a"} err="failed to get container status \"078ed8b90146ef3568fa2e6bace7ac90b4b0b53a8fd015eb6658dc98b539fd8a\": rpc error: code = NotFound desc = could not find container \"078ed8b90146ef3568fa2e6bace7ac90b4b0b53a8fd015eb6658dc98b539fd8a\": container with ID starting with 078ed8b90146ef3568fa2e6bace7ac90b4b0b53a8fd015eb6658dc98b539fd8a not found: ID does not exist" Mar 10 20:16:24 crc kubenswrapper[4861]: I0310 20:16:24.295158 4861 scope.go:117] "RemoveContainer" containerID="2ce304f12abe0759cb2acca97b2356309654821f550d23486b63209a010dee22" Mar 10 20:16:24 crc kubenswrapper[4861]: E0310 20:16:24.295769 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2ce304f12abe0759cb2acca97b2356309654821f550d23486b63209a010dee22\": container with ID starting with 2ce304f12abe0759cb2acca97b2356309654821f550d23486b63209a010dee22 not found: ID does not exist" containerID="2ce304f12abe0759cb2acca97b2356309654821f550d23486b63209a010dee22" Mar 10 20:16:24 crc kubenswrapper[4861]: I0310 20:16:24.295816 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2ce304f12abe0759cb2acca97b2356309654821f550d23486b63209a010dee22"} err="failed to get container status \"2ce304f12abe0759cb2acca97b2356309654821f550d23486b63209a010dee22\": rpc error: code = NotFound desc = could not find container \"2ce304f12abe0759cb2acca97b2356309654821f550d23486b63209a010dee22\": container with ID starting with 2ce304f12abe0759cb2acca97b2356309654821f550d23486b63209a010dee22 not found: ID does not exist" Mar 10 20:16:24 crc kubenswrapper[4861]: I0310 20:16:24.295843 4861 scope.go:117] "RemoveContainer" containerID="22a8a37bd1c70ae62d01ca3fa708040d247a050594841baae1abb4d67541d58e" Mar 10 20:16:24 crc kubenswrapper[4861]: E0310 
20:16:24.296606 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"22a8a37bd1c70ae62d01ca3fa708040d247a050594841baae1abb4d67541d58e\": container with ID starting with 22a8a37bd1c70ae62d01ca3fa708040d247a050594841baae1abb4d67541d58e not found: ID does not exist" containerID="22a8a37bd1c70ae62d01ca3fa708040d247a050594841baae1abb4d67541d58e" Mar 10 20:16:24 crc kubenswrapper[4861]: I0310 20:16:24.296675 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"22a8a37bd1c70ae62d01ca3fa708040d247a050594841baae1abb4d67541d58e"} err="failed to get container status \"22a8a37bd1c70ae62d01ca3fa708040d247a050594841baae1abb4d67541d58e\": rpc error: code = NotFound desc = could not find container \"22a8a37bd1c70ae62d01ca3fa708040d247a050594841baae1abb4d67541d58e\": container with ID starting with 22a8a37bd1c70ae62d01ca3fa708040d247a050594841baae1abb4d67541d58e not found: ID does not exist" Mar 10 20:16:24 crc kubenswrapper[4861]: I0310 20:16:24.970727 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="606707c4-5147-46fe-8ad9-f93568abcd26" path="/var/lib/kubelet/pods/606707c4-5147-46fe-8ad9-f93568abcd26/volumes" Mar 10 20:16:51 crc kubenswrapper[4861]: I0310 20:16:51.991952 4861 patch_prober.go:28] interesting pod/machine-config-daemon-qttbr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 10 20:16:51 crc kubenswrapper[4861]: I0310 20:16:51.992644 4861 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qttbr" podUID="771189c2-452d-4204-a0b7-abfe9ba62bd0" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection 
refused" Mar 10 20:16:57 crc kubenswrapper[4861]: I0310 20:16:57.746044 4861 scope.go:117] "RemoveContainer" containerID="af1684d1783b9e105b959d5e5e85496c9aa22b0ed33724e09b6813e9e32acfe3" Mar 10 20:16:57 crc kubenswrapper[4861]: I0310 20:16:57.808967 4861 scope.go:117] "RemoveContainer" containerID="7c6b6e251055447fe35a44135e97cf62ba19e29b8e300773118daed6903c4694" Mar 10 20:17:21 crc kubenswrapper[4861]: I0310 20:17:21.992648 4861 patch_prober.go:28] interesting pod/machine-config-daemon-qttbr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 10 20:17:21 crc kubenswrapper[4861]: I0310 20:17:21.993539 4861 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qttbr" podUID="771189c2-452d-4204-a0b7-abfe9ba62bd0" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 10 20:17:21 crc kubenswrapper[4861]: I0310 20:17:21.993617 4861 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-qttbr" Mar 10 20:17:21 crc kubenswrapper[4861]: I0310 20:17:21.994769 4861 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"cd2259ee04441075924dcee5b49ce2f64778d63f5c1b13672065b616c80497ca"} pod="openshift-machine-config-operator/machine-config-daemon-qttbr" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 10 20:17:21 crc kubenswrapper[4861]: I0310 20:17:21.994888 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-qttbr" 
podUID="771189c2-452d-4204-a0b7-abfe9ba62bd0" containerName="machine-config-daemon" containerID="cri-o://cd2259ee04441075924dcee5b49ce2f64778d63f5c1b13672065b616c80497ca" gracePeriod=600 Mar 10 20:17:22 crc kubenswrapper[4861]: E0310 20:17:22.129303 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qttbr_openshift-machine-config-operator(771189c2-452d-4204-a0b7-abfe9ba62bd0)\"" pod="openshift-machine-config-operator/machine-config-daemon-qttbr" podUID="771189c2-452d-4204-a0b7-abfe9ba62bd0" Mar 10 20:17:23 crc kubenswrapper[4861]: I0310 20:17:23.100250 4861 generic.go:334] "Generic (PLEG): container finished" podID="771189c2-452d-4204-a0b7-abfe9ba62bd0" containerID="cd2259ee04441075924dcee5b49ce2f64778d63f5c1b13672065b616c80497ca" exitCode=0 Mar 10 20:17:23 crc kubenswrapper[4861]: I0310 20:17:23.100313 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qttbr" event={"ID":"771189c2-452d-4204-a0b7-abfe9ba62bd0","Type":"ContainerDied","Data":"cd2259ee04441075924dcee5b49ce2f64778d63f5c1b13672065b616c80497ca"} Mar 10 20:17:23 crc kubenswrapper[4861]: I0310 20:17:23.100362 4861 scope.go:117] "RemoveContainer" containerID="3089d97ac4451c4b87e72224e8c126b31f30fe397bed57e36232876359b0059f" Mar 10 20:17:23 crc kubenswrapper[4861]: I0310 20:17:23.101195 4861 scope.go:117] "RemoveContainer" containerID="cd2259ee04441075924dcee5b49ce2f64778d63f5c1b13672065b616c80497ca" Mar 10 20:17:23 crc kubenswrapper[4861]: E0310 20:17:23.101753 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-qttbr_openshift-machine-config-operator(771189c2-452d-4204-a0b7-abfe9ba62bd0)\"" pod="openshift-machine-config-operator/machine-config-daemon-qttbr" podUID="771189c2-452d-4204-a0b7-abfe9ba62bd0" Mar 10 20:17:35 crc kubenswrapper[4861]: I0310 20:17:35.958688 4861 scope.go:117] "RemoveContainer" containerID="cd2259ee04441075924dcee5b49ce2f64778d63f5c1b13672065b616c80497ca" Mar 10 20:17:35 crc kubenswrapper[4861]: E0310 20:17:35.959797 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qttbr_openshift-machine-config-operator(771189c2-452d-4204-a0b7-abfe9ba62bd0)\"" pod="openshift-machine-config-operator/machine-config-daemon-qttbr" podUID="771189c2-452d-4204-a0b7-abfe9ba62bd0" Mar 10 20:17:50 crc kubenswrapper[4861]: I0310 20:17:50.958495 4861 scope.go:117] "RemoveContainer" containerID="cd2259ee04441075924dcee5b49ce2f64778d63f5c1b13672065b616c80497ca" Mar 10 20:17:50 crc kubenswrapper[4861]: E0310 20:17:50.959324 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qttbr_openshift-machine-config-operator(771189c2-452d-4204-a0b7-abfe9ba62bd0)\"" pod="openshift-machine-config-operator/machine-config-daemon-qttbr" podUID="771189c2-452d-4204-a0b7-abfe9ba62bd0" Mar 10 20:17:51 crc kubenswrapper[4861]: I0310 20:17:51.024640 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-dnj69"] Mar 10 20:17:51 crc kubenswrapper[4861]: E0310 20:17:51.025127 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="606707c4-5147-46fe-8ad9-f93568abcd26" containerName="extract-utilities" Mar 10 20:17:51 crc kubenswrapper[4861]: I0310 
20:17:51.025161 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="606707c4-5147-46fe-8ad9-f93568abcd26" containerName="extract-utilities" Mar 10 20:17:51 crc kubenswrapper[4861]: E0310 20:17:51.025182 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="606707c4-5147-46fe-8ad9-f93568abcd26" containerName="extract-content" Mar 10 20:17:51 crc kubenswrapper[4861]: I0310 20:17:51.025194 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="606707c4-5147-46fe-8ad9-f93568abcd26" containerName="extract-content" Mar 10 20:17:51 crc kubenswrapper[4861]: E0310 20:17:51.025222 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="606707c4-5147-46fe-8ad9-f93568abcd26" containerName="registry-server" Mar 10 20:17:51 crc kubenswrapper[4861]: I0310 20:17:51.025233 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="606707c4-5147-46fe-8ad9-f93568abcd26" containerName="registry-server" Mar 10 20:17:51 crc kubenswrapper[4861]: I0310 20:17:51.025484 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="606707c4-5147-46fe-8ad9-f93568abcd26" containerName="registry-server" Mar 10 20:17:51 crc kubenswrapper[4861]: I0310 20:17:51.027271 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-dnj69" Mar 10 20:17:51 crc kubenswrapper[4861]: I0310 20:17:51.032537 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-dnj69"] Mar 10 20:17:51 crc kubenswrapper[4861]: I0310 20:17:51.219511 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b5f6343a-2421-48f5-94ca-ad3d78d7d9d0-catalog-content\") pod \"community-operators-dnj69\" (UID: \"b5f6343a-2421-48f5-94ca-ad3d78d7d9d0\") " pod="openshift-marketplace/community-operators-dnj69" Mar 10 20:17:51 crc kubenswrapper[4861]: I0310 20:17:51.219570 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nlpkt\" (UniqueName: \"kubernetes.io/projected/b5f6343a-2421-48f5-94ca-ad3d78d7d9d0-kube-api-access-nlpkt\") pod \"community-operators-dnj69\" (UID: \"b5f6343a-2421-48f5-94ca-ad3d78d7d9d0\") " pod="openshift-marketplace/community-operators-dnj69" Mar 10 20:17:51 crc kubenswrapper[4861]: I0310 20:17:51.219864 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b5f6343a-2421-48f5-94ca-ad3d78d7d9d0-utilities\") pod \"community-operators-dnj69\" (UID: \"b5f6343a-2421-48f5-94ca-ad3d78d7d9d0\") " pod="openshift-marketplace/community-operators-dnj69" Mar 10 20:17:51 crc kubenswrapper[4861]: I0310 20:17:51.321590 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b5f6343a-2421-48f5-94ca-ad3d78d7d9d0-catalog-content\") pod \"community-operators-dnj69\" (UID: \"b5f6343a-2421-48f5-94ca-ad3d78d7d9d0\") " pod="openshift-marketplace/community-operators-dnj69" Mar 10 20:17:51 crc kubenswrapper[4861]: I0310 20:17:51.321646 4861 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-nlpkt\" (UniqueName: \"kubernetes.io/projected/b5f6343a-2421-48f5-94ca-ad3d78d7d9d0-kube-api-access-nlpkt\") pod \"community-operators-dnj69\" (UID: \"b5f6343a-2421-48f5-94ca-ad3d78d7d9d0\") " pod="openshift-marketplace/community-operators-dnj69" Mar 10 20:17:51 crc kubenswrapper[4861]: I0310 20:17:51.321728 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b5f6343a-2421-48f5-94ca-ad3d78d7d9d0-utilities\") pod \"community-operators-dnj69\" (UID: \"b5f6343a-2421-48f5-94ca-ad3d78d7d9d0\") " pod="openshift-marketplace/community-operators-dnj69" Mar 10 20:17:51 crc kubenswrapper[4861]: I0310 20:17:51.322947 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b5f6343a-2421-48f5-94ca-ad3d78d7d9d0-catalog-content\") pod \"community-operators-dnj69\" (UID: \"b5f6343a-2421-48f5-94ca-ad3d78d7d9d0\") " pod="openshift-marketplace/community-operators-dnj69" Mar 10 20:17:51 crc kubenswrapper[4861]: I0310 20:17:51.323382 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b5f6343a-2421-48f5-94ca-ad3d78d7d9d0-utilities\") pod \"community-operators-dnj69\" (UID: \"b5f6343a-2421-48f5-94ca-ad3d78d7d9d0\") " pod="openshift-marketplace/community-operators-dnj69" Mar 10 20:17:51 crc kubenswrapper[4861]: I0310 20:17:51.351220 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nlpkt\" (UniqueName: \"kubernetes.io/projected/b5f6343a-2421-48f5-94ca-ad3d78d7d9d0-kube-api-access-nlpkt\") pod \"community-operators-dnj69\" (UID: \"b5f6343a-2421-48f5-94ca-ad3d78d7d9d0\") " pod="openshift-marketplace/community-operators-dnj69" Mar 10 20:17:51 crc kubenswrapper[4861]: I0310 20:17:51.648143 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-dnj69" Mar 10 20:17:52 crc kubenswrapper[4861]: I0310 20:17:52.149820 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-dnj69"] Mar 10 20:17:52 crc kubenswrapper[4861]: W0310 20:17:52.155638 4861 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb5f6343a_2421_48f5_94ca_ad3d78d7d9d0.slice/crio-d2911f61d25cbe431faa3f7cffd9ed910e93e7485255321dcee32d1985129ae2 WatchSource:0}: Error finding container d2911f61d25cbe431faa3f7cffd9ed910e93e7485255321dcee32d1985129ae2: Status 404 returned error can't find the container with id d2911f61d25cbe431faa3f7cffd9ed910e93e7485255321dcee32d1985129ae2 Mar 10 20:17:52 crc kubenswrapper[4861]: I0310 20:17:52.396476 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dnj69" event={"ID":"b5f6343a-2421-48f5-94ca-ad3d78d7d9d0","Type":"ContainerStarted","Data":"40ec7fab39767b03910fa819ae3feaeb7915363d2b8fb9b829308da70f4a536e"} Mar 10 20:17:52 crc kubenswrapper[4861]: I0310 20:17:52.396979 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dnj69" event={"ID":"b5f6343a-2421-48f5-94ca-ad3d78d7d9d0","Type":"ContainerStarted","Data":"d2911f61d25cbe431faa3f7cffd9ed910e93e7485255321dcee32d1985129ae2"} Mar 10 20:17:53 crc kubenswrapper[4861]: I0310 20:17:53.406551 4861 generic.go:334] "Generic (PLEG): container finished" podID="b5f6343a-2421-48f5-94ca-ad3d78d7d9d0" containerID="40ec7fab39767b03910fa819ae3feaeb7915363d2b8fb9b829308da70f4a536e" exitCode=0 Mar 10 20:17:53 crc kubenswrapper[4861]: I0310 20:17:53.406640 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dnj69" 
event={"ID":"b5f6343a-2421-48f5-94ca-ad3d78d7d9d0","Type":"ContainerDied","Data":"40ec7fab39767b03910fa819ae3feaeb7915363d2b8fb9b829308da70f4a536e"} Mar 10 20:17:53 crc kubenswrapper[4861]: I0310 20:17:53.406874 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dnj69" event={"ID":"b5f6343a-2421-48f5-94ca-ad3d78d7d9d0","Type":"ContainerStarted","Data":"7719ef49d672c5e15c5b315272d67661386d56ccaed79923d356e95f41110804"} Mar 10 20:17:54 crc kubenswrapper[4861]: I0310 20:17:54.420061 4861 generic.go:334] "Generic (PLEG): container finished" podID="b5f6343a-2421-48f5-94ca-ad3d78d7d9d0" containerID="7719ef49d672c5e15c5b315272d67661386d56ccaed79923d356e95f41110804" exitCode=0 Mar 10 20:17:54 crc kubenswrapper[4861]: I0310 20:17:54.420124 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dnj69" event={"ID":"b5f6343a-2421-48f5-94ca-ad3d78d7d9d0","Type":"ContainerDied","Data":"7719ef49d672c5e15c5b315272d67661386d56ccaed79923d356e95f41110804"} Mar 10 20:17:55 crc kubenswrapper[4861]: I0310 20:17:55.434956 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dnj69" event={"ID":"b5f6343a-2421-48f5-94ca-ad3d78d7d9d0","Type":"ContainerStarted","Data":"15bb6bc63ba2dc91b91a0d66f13748f359933945f5f20b81615d8c6406e9c1df"} Mar 10 20:17:55 crc kubenswrapper[4861]: I0310 20:17:55.469407 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-dnj69" podStartSLOduration=2.95027983 podStartE2EDuration="5.469380091s" podCreationTimestamp="2026-03-10 20:17:50 +0000 UTC" firstStartedPulling="2026-03-10 20:17:52.399563533 +0000 UTC m=+5416.162999543" lastFinishedPulling="2026-03-10 20:17:54.918663814 +0000 UTC m=+5418.682099804" observedRunningTime="2026-03-10 20:17:55.460689254 +0000 UTC m=+5419.224125284" watchObservedRunningTime="2026-03-10 20:17:55.469380091 +0000 UTC 
m=+5419.232816081" Mar 10 20:18:00 crc kubenswrapper[4861]: I0310 20:18:00.151993 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29552898-rhvsc"] Mar 10 20:18:00 crc kubenswrapper[4861]: I0310 20:18:00.154207 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29552898-rhvsc" Mar 10 20:18:00 crc kubenswrapper[4861]: I0310 20:18:00.161511 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 10 20:18:00 crc kubenswrapper[4861]: I0310 20:18:00.161860 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-gfbj2" Mar 10 20:18:00 crc kubenswrapper[4861]: I0310 20:18:00.163901 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 10 20:18:00 crc kubenswrapper[4861]: I0310 20:18:00.166794 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29552898-rhvsc"] Mar 10 20:18:00 crc kubenswrapper[4861]: I0310 20:18:00.200460 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l7vmg\" (UniqueName: \"kubernetes.io/projected/5265e2ef-073c-4e73-9930-782e7535c43d-kube-api-access-l7vmg\") pod \"auto-csr-approver-29552898-rhvsc\" (UID: \"5265e2ef-073c-4e73-9930-782e7535c43d\") " pod="openshift-infra/auto-csr-approver-29552898-rhvsc" Mar 10 20:18:00 crc kubenswrapper[4861]: I0310 20:18:00.302576 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l7vmg\" (UniqueName: \"kubernetes.io/projected/5265e2ef-073c-4e73-9930-782e7535c43d-kube-api-access-l7vmg\") pod \"auto-csr-approver-29552898-rhvsc\" (UID: \"5265e2ef-073c-4e73-9930-782e7535c43d\") " pod="openshift-infra/auto-csr-approver-29552898-rhvsc" Mar 10 20:18:00 crc kubenswrapper[4861]: I0310 
20:18:00.335084 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l7vmg\" (UniqueName: \"kubernetes.io/projected/5265e2ef-073c-4e73-9930-782e7535c43d-kube-api-access-l7vmg\") pod \"auto-csr-approver-29552898-rhvsc\" (UID: \"5265e2ef-073c-4e73-9930-782e7535c43d\") " pod="openshift-infra/auto-csr-approver-29552898-rhvsc" Mar 10 20:18:00 crc kubenswrapper[4861]: I0310 20:18:00.482072 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29552898-rhvsc" Mar 10 20:18:01 crc kubenswrapper[4861]: I0310 20:18:01.008323 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29552898-rhvsc"] Mar 10 20:18:01 crc kubenswrapper[4861]: W0310 20:18:01.011976 4861 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5265e2ef_073c_4e73_9930_782e7535c43d.slice/crio-bd5a101bcb801beadb08997c10e72bf70834e67c4f15ae35d455540e5da0cda0 WatchSource:0}: Error finding container bd5a101bcb801beadb08997c10e72bf70834e67c4f15ae35d455540e5da0cda0: Status 404 returned error can't find the container with id bd5a101bcb801beadb08997c10e72bf70834e67c4f15ae35d455540e5da0cda0 Mar 10 20:18:01 crc kubenswrapper[4861]: I0310 20:18:01.512784 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29552898-rhvsc" event={"ID":"5265e2ef-073c-4e73-9930-782e7535c43d","Type":"ContainerStarted","Data":"bd5a101bcb801beadb08997c10e72bf70834e67c4f15ae35d455540e5da0cda0"} Mar 10 20:18:01 crc kubenswrapper[4861]: I0310 20:18:01.649140 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-dnj69" Mar 10 20:18:01 crc kubenswrapper[4861]: I0310 20:18:01.649219 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-dnj69" Mar 10 20:18:01 crc 
kubenswrapper[4861]: I0310 20:18:01.734487 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-dnj69" Mar 10 20:18:02 crc kubenswrapper[4861]: I0310 20:18:02.526238 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29552898-rhvsc" event={"ID":"5265e2ef-073c-4e73-9930-782e7535c43d","Type":"ContainerStarted","Data":"bc0b22c0b0456ba8391bb8d54dbb2a34da190bd66d5e29e970cc9d84d5570068"} Mar 10 20:18:02 crc kubenswrapper[4861]: I0310 20:18:02.550962 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29552898-rhvsc" podStartSLOduration=1.616012196 podStartE2EDuration="2.550935577s" podCreationTimestamp="2026-03-10 20:18:00 +0000 UTC" firstStartedPulling="2026-03-10 20:18:01.015639898 +0000 UTC m=+5424.779075888" lastFinishedPulling="2026-03-10 20:18:01.950563269 +0000 UTC m=+5425.713999269" observedRunningTime="2026-03-10 20:18:02.546249549 +0000 UTC m=+5426.309685549" watchObservedRunningTime="2026-03-10 20:18:02.550935577 +0000 UTC m=+5426.314371567" Mar 10 20:18:02 crc kubenswrapper[4861]: I0310 20:18:02.592627 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-dnj69" Mar 10 20:18:02 crc kubenswrapper[4861]: I0310 20:18:02.654975 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-dnj69"] Mar 10 20:18:02 crc kubenswrapper[4861]: I0310 20:18:02.958327 4861 scope.go:117] "RemoveContainer" containerID="cd2259ee04441075924dcee5b49ce2f64778d63f5c1b13672065b616c80497ca" Mar 10 20:18:02 crc kubenswrapper[4861]: E0310 20:18:02.958750 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-qttbr_openshift-machine-config-operator(771189c2-452d-4204-a0b7-abfe9ba62bd0)\"" pod="openshift-machine-config-operator/machine-config-daemon-qttbr" podUID="771189c2-452d-4204-a0b7-abfe9ba62bd0" Mar 10 20:18:03 crc kubenswrapper[4861]: I0310 20:18:03.535113 4861 generic.go:334] "Generic (PLEG): container finished" podID="5265e2ef-073c-4e73-9930-782e7535c43d" containerID="bc0b22c0b0456ba8391bb8d54dbb2a34da190bd66d5e29e970cc9d84d5570068" exitCode=0 Mar 10 20:18:03 crc kubenswrapper[4861]: I0310 20:18:03.535226 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29552898-rhvsc" event={"ID":"5265e2ef-073c-4e73-9930-782e7535c43d","Type":"ContainerDied","Data":"bc0b22c0b0456ba8391bb8d54dbb2a34da190bd66d5e29e970cc9d84d5570068"} Mar 10 20:18:04 crc kubenswrapper[4861]: I0310 20:18:04.544631 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-dnj69" podUID="b5f6343a-2421-48f5-94ca-ad3d78d7d9d0" containerName="registry-server" containerID="cri-o://15bb6bc63ba2dc91b91a0d66f13748f359933945f5f20b81615d8c6406e9c1df" gracePeriod=2 Mar 10 20:18:05 crc kubenswrapper[4861]: I0310 20:18:04.932094 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29552898-rhvsc" Mar 10 20:18:05 crc kubenswrapper[4861]: I0310 20:18:04.979679 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l7vmg\" (UniqueName: \"kubernetes.io/projected/5265e2ef-073c-4e73-9930-782e7535c43d-kube-api-access-l7vmg\") pod \"5265e2ef-073c-4e73-9930-782e7535c43d\" (UID: \"5265e2ef-073c-4e73-9930-782e7535c43d\") " Mar 10 20:18:05 crc kubenswrapper[4861]: I0310 20:18:04.985853 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5265e2ef-073c-4e73-9930-782e7535c43d-kube-api-access-l7vmg" (OuterVolumeSpecName: "kube-api-access-l7vmg") pod "5265e2ef-073c-4e73-9930-782e7535c43d" (UID: "5265e2ef-073c-4e73-9930-782e7535c43d"). InnerVolumeSpecName "kube-api-access-l7vmg". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 10 20:18:05 crc kubenswrapper[4861]: I0310 20:18:05.081563 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l7vmg\" (UniqueName: \"kubernetes.io/projected/5265e2ef-073c-4e73-9930-782e7535c43d-kube-api-access-l7vmg\") on node \"crc\" DevicePath \"\"" Mar 10 20:18:05 crc kubenswrapper[4861]: I0310 20:18:05.567639 4861 generic.go:334] "Generic (PLEG): container finished" podID="b5f6343a-2421-48f5-94ca-ad3d78d7d9d0" containerID="15bb6bc63ba2dc91b91a0d66f13748f359933945f5f20b81615d8c6406e9c1df" exitCode=0 Mar 10 20:18:05 crc kubenswrapper[4861]: I0310 20:18:05.567872 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dnj69" event={"ID":"b5f6343a-2421-48f5-94ca-ad3d78d7d9d0","Type":"ContainerDied","Data":"15bb6bc63ba2dc91b91a0d66f13748f359933945f5f20b81615d8c6406e9c1df"} Mar 10 20:18:05 crc kubenswrapper[4861]: I0310 20:18:05.586222 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29552898-rhvsc" 
event={"ID":"5265e2ef-073c-4e73-9930-782e7535c43d","Type":"ContainerDied","Data":"bd5a101bcb801beadb08997c10e72bf70834e67c4f15ae35d455540e5da0cda0"} Mar 10 20:18:05 crc kubenswrapper[4861]: I0310 20:18:05.586760 4861 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bd5a101bcb801beadb08997c10e72bf70834e67c4f15ae35d455540e5da0cda0" Mar 10 20:18:05 crc kubenswrapper[4861]: I0310 20:18:05.587837 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29552898-rhvsc" Mar 10 20:18:05 crc kubenswrapper[4861]: I0310 20:18:05.630257 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29552892-c5kpk"] Mar 10 20:18:05 crc kubenswrapper[4861]: I0310 20:18:05.641681 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29552892-c5kpk"] Mar 10 20:18:05 crc kubenswrapper[4861]: I0310 20:18:05.656322 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-dnj69" Mar 10 20:18:05 crc kubenswrapper[4861]: I0310 20:18:05.834990 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b5f6343a-2421-48f5-94ca-ad3d78d7d9d0-utilities\") pod \"b5f6343a-2421-48f5-94ca-ad3d78d7d9d0\" (UID: \"b5f6343a-2421-48f5-94ca-ad3d78d7d9d0\") " Mar 10 20:18:05 crc kubenswrapper[4861]: I0310 20:18:05.835065 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nlpkt\" (UniqueName: \"kubernetes.io/projected/b5f6343a-2421-48f5-94ca-ad3d78d7d9d0-kube-api-access-nlpkt\") pod \"b5f6343a-2421-48f5-94ca-ad3d78d7d9d0\" (UID: \"b5f6343a-2421-48f5-94ca-ad3d78d7d9d0\") " Mar 10 20:18:05 crc kubenswrapper[4861]: I0310 20:18:05.835144 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b5f6343a-2421-48f5-94ca-ad3d78d7d9d0-catalog-content\") pod \"b5f6343a-2421-48f5-94ca-ad3d78d7d9d0\" (UID: \"b5f6343a-2421-48f5-94ca-ad3d78d7d9d0\") " Mar 10 20:18:05 crc kubenswrapper[4861]: I0310 20:18:05.835796 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b5f6343a-2421-48f5-94ca-ad3d78d7d9d0-utilities" (OuterVolumeSpecName: "utilities") pod "b5f6343a-2421-48f5-94ca-ad3d78d7d9d0" (UID: "b5f6343a-2421-48f5-94ca-ad3d78d7d9d0"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 10 20:18:05 crc kubenswrapper[4861]: I0310 20:18:05.842861 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b5f6343a-2421-48f5-94ca-ad3d78d7d9d0-kube-api-access-nlpkt" (OuterVolumeSpecName: "kube-api-access-nlpkt") pod "b5f6343a-2421-48f5-94ca-ad3d78d7d9d0" (UID: "b5f6343a-2421-48f5-94ca-ad3d78d7d9d0"). InnerVolumeSpecName "kube-api-access-nlpkt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 10 20:18:05 crc kubenswrapper[4861]: I0310 20:18:05.888287 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b5f6343a-2421-48f5-94ca-ad3d78d7d9d0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b5f6343a-2421-48f5-94ca-ad3d78d7d9d0" (UID: "b5f6343a-2421-48f5-94ca-ad3d78d7d9d0"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 10 20:18:05 crc kubenswrapper[4861]: I0310 20:18:05.937517 4861 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b5f6343a-2421-48f5-94ca-ad3d78d7d9d0-utilities\") on node \"crc\" DevicePath \"\"" Mar 10 20:18:05 crc kubenswrapper[4861]: I0310 20:18:05.937898 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nlpkt\" (UniqueName: \"kubernetes.io/projected/b5f6343a-2421-48f5-94ca-ad3d78d7d9d0-kube-api-access-nlpkt\") on node \"crc\" DevicePath \"\"" Mar 10 20:18:05 crc kubenswrapper[4861]: I0310 20:18:05.937921 4861 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b5f6343a-2421-48f5-94ca-ad3d78d7d9d0-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 10 20:18:06 crc kubenswrapper[4861]: I0310 20:18:06.594390 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dnj69" event={"ID":"b5f6343a-2421-48f5-94ca-ad3d78d7d9d0","Type":"ContainerDied","Data":"d2911f61d25cbe431faa3f7cffd9ed910e93e7485255321dcee32d1985129ae2"} Mar 10 20:18:06 crc kubenswrapper[4861]: I0310 20:18:06.594444 4861 scope.go:117] "RemoveContainer" containerID="15bb6bc63ba2dc91b91a0d66f13748f359933945f5f20b81615d8c6406e9c1df" Mar 10 20:18:06 crc kubenswrapper[4861]: I0310 20:18:06.594465 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-dnj69" Mar 10 20:18:06 crc kubenswrapper[4861]: I0310 20:18:06.611366 4861 scope.go:117] "RemoveContainer" containerID="7719ef49d672c5e15c5b315272d67661386d56ccaed79923d356e95f41110804" Mar 10 20:18:06 crc kubenswrapper[4861]: I0310 20:18:06.628561 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-dnj69"] Mar 10 20:18:06 crc kubenswrapper[4861]: I0310 20:18:06.635148 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-dnj69"] Mar 10 20:18:06 crc kubenswrapper[4861]: I0310 20:18:06.642308 4861 scope.go:117] "RemoveContainer" containerID="40ec7fab39767b03910fa819ae3feaeb7915363d2b8fb9b829308da70f4a536e" Mar 10 20:18:06 crc kubenswrapper[4861]: I0310 20:18:06.973376 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b5f6343a-2421-48f5-94ca-ad3d78d7d9d0" path="/var/lib/kubelet/pods/b5f6343a-2421-48f5-94ca-ad3d78d7d9d0/volumes" Mar 10 20:18:06 crc kubenswrapper[4861]: I0310 20:18:06.974649 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fa1859cc-10d1-4d25-8c3e-f17a18cd4ba0" path="/var/lib/kubelet/pods/fa1859cc-10d1-4d25-8c3e-f17a18cd4ba0/volumes" Mar 10 20:18:15 crc kubenswrapper[4861]: I0310 20:18:15.958008 4861 scope.go:117] "RemoveContainer" containerID="cd2259ee04441075924dcee5b49ce2f64778d63f5c1b13672065b616c80497ca" Mar 10 20:18:15 crc kubenswrapper[4861]: E0310 20:18:15.958570 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qttbr_openshift-machine-config-operator(771189c2-452d-4204-a0b7-abfe9ba62bd0)\"" pod="openshift-machine-config-operator/machine-config-daemon-qttbr" podUID="771189c2-452d-4204-a0b7-abfe9ba62bd0" Mar 10 20:18:29 crc kubenswrapper[4861]: 
I0310 20:18:29.958150 4861 scope.go:117] "RemoveContainer" containerID="cd2259ee04441075924dcee5b49ce2f64778d63f5c1b13672065b616c80497ca" Mar 10 20:18:29 crc kubenswrapper[4861]: E0310 20:18:29.959255 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qttbr_openshift-machine-config-operator(771189c2-452d-4204-a0b7-abfe9ba62bd0)\"" pod="openshift-machine-config-operator/machine-config-daemon-qttbr" podUID="771189c2-452d-4204-a0b7-abfe9ba62bd0" Mar 10 20:18:41 crc kubenswrapper[4861]: I0310 20:18:41.958458 4861 scope.go:117] "RemoveContainer" containerID="cd2259ee04441075924dcee5b49ce2f64778d63f5c1b13672065b616c80497ca" Mar 10 20:18:41 crc kubenswrapper[4861]: E0310 20:18:41.959450 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qttbr_openshift-machine-config-operator(771189c2-452d-4204-a0b7-abfe9ba62bd0)\"" pod="openshift-machine-config-operator/machine-config-daemon-qttbr" podUID="771189c2-452d-4204-a0b7-abfe9ba62bd0" Mar 10 20:18:55 crc kubenswrapper[4861]: I0310 20:18:55.960968 4861 scope.go:117] "RemoveContainer" containerID="cd2259ee04441075924dcee5b49ce2f64778d63f5c1b13672065b616c80497ca" Mar 10 20:18:55 crc kubenswrapper[4861]: E0310 20:18:55.961990 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qttbr_openshift-machine-config-operator(771189c2-452d-4204-a0b7-abfe9ba62bd0)\"" pod="openshift-machine-config-operator/machine-config-daemon-qttbr" podUID="771189c2-452d-4204-a0b7-abfe9ba62bd0" Mar 10 20:18:57 crc 
kubenswrapper[4861]: I0310 20:18:57.924248 4861 scope.go:117] "RemoveContainer" containerID="851f789b57a61b9ef74c21f4bde4fdef2aaf5b5e25f18ae080512c61e60a3572" Mar 10 20:19:10 crc kubenswrapper[4861]: I0310 20:19:10.958761 4861 scope.go:117] "RemoveContainer" containerID="cd2259ee04441075924dcee5b49ce2f64778d63f5c1b13672065b616c80497ca" Mar 10 20:19:10 crc kubenswrapper[4861]: E0310 20:19:10.959754 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qttbr_openshift-machine-config-operator(771189c2-452d-4204-a0b7-abfe9ba62bd0)\"" pod="openshift-machine-config-operator/machine-config-daemon-qttbr" podUID="771189c2-452d-4204-a0b7-abfe9ba62bd0" Mar 10 20:19:11 crc kubenswrapper[4861]: I0310 20:19:11.800140 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mariadb-copy-data"] Mar 10 20:19:11 crc kubenswrapper[4861]: E0310 20:19:11.800750 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b5f6343a-2421-48f5-94ca-ad3d78d7d9d0" containerName="extract-content" Mar 10 20:19:11 crc kubenswrapper[4861]: I0310 20:19:11.800796 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="b5f6343a-2421-48f5-94ca-ad3d78d7d9d0" containerName="extract-content" Mar 10 20:19:11 crc kubenswrapper[4861]: E0310 20:19:11.800866 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b5f6343a-2421-48f5-94ca-ad3d78d7d9d0" containerName="extract-utilities" Mar 10 20:19:11 crc kubenswrapper[4861]: I0310 20:19:11.800887 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="b5f6343a-2421-48f5-94ca-ad3d78d7d9d0" containerName="extract-utilities" Mar 10 20:19:11 crc kubenswrapper[4861]: E0310 20:19:11.800909 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b5f6343a-2421-48f5-94ca-ad3d78d7d9d0" containerName="registry-server" Mar 10 20:19:11 
crc kubenswrapper[4861]: I0310 20:19:11.800947 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="b5f6343a-2421-48f5-94ca-ad3d78d7d9d0" containerName="registry-server" Mar 10 20:19:11 crc kubenswrapper[4861]: E0310 20:19:11.800980 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5265e2ef-073c-4e73-9930-782e7535c43d" containerName="oc" Mar 10 20:19:11 crc kubenswrapper[4861]: I0310 20:19:11.800998 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="5265e2ef-073c-4e73-9930-782e7535c43d" containerName="oc" Mar 10 20:19:11 crc kubenswrapper[4861]: I0310 20:19:11.801357 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="5265e2ef-073c-4e73-9930-782e7535c43d" containerName="oc" Mar 10 20:19:11 crc kubenswrapper[4861]: I0310 20:19:11.801405 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="b5f6343a-2421-48f5-94ca-ad3d78d7d9d0" containerName="registry-server" Mar 10 20:19:11 crc kubenswrapper[4861]: I0310 20:19:11.802505 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-copy-data" Mar 10 20:19:11 crc kubenswrapper[4861]: I0310 20:19:11.805795 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-kz9gf" Mar 10 20:19:11 crc kubenswrapper[4861]: I0310 20:19:11.812554 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-copy-data"] Mar 10 20:19:11 crc kubenswrapper[4861]: I0310 20:19:11.934517 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-a5157b25-0c56-47b6-99e4-851f7ff5b69c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a5157b25-0c56-47b6-99e4-851f7ff5b69c\") pod \"mariadb-copy-data\" (UID: \"40721062-412e-4a4f-83b6-2e0692251cc0\") " pod="openstack/mariadb-copy-data" Mar 10 20:19:11 crc kubenswrapper[4861]: I0310 20:19:11.935080 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m94jb\" (UniqueName: \"kubernetes.io/projected/40721062-412e-4a4f-83b6-2e0692251cc0-kube-api-access-m94jb\") pod \"mariadb-copy-data\" (UID: \"40721062-412e-4a4f-83b6-2e0692251cc0\") " pod="openstack/mariadb-copy-data" Mar 10 20:19:12 crc kubenswrapper[4861]: I0310 20:19:12.036980 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-a5157b25-0c56-47b6-99e4-851f7ff5b69c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a5157b25-0c56-47b6-99e4-851f7ff5b69c\") pod \"mariadb-copy-data\" (UID: \"40721062-412e-4a4f-83b6-2e0692251cc0\") " pod="openstack/mariadb-copy-data" Mar 10 20:19:12 crc kubenswrapper[4861]: I0310 20:19:12.037155 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m94jb\" (UniqueName: \"kubernetes.io/projected/40721062-412e-4a4f-83b6-2e0692251cc0-kube-api-access-m94jb\") pod \"mariadb-copy-data\" (UID: \"40721062-412e-4a4f-83b6-2e0692251cc0\") " pod="openstack/mariadb-copy-data" 
Mar 10 20:19:12 crc kubenswrapper[4861]: I0310 20:19:12.041568 4861 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Mar 10 20:19:12 crc kubenswrapper[4861]: I0310 20:19:12.041633 4861 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-a5157b25-0c56-47b6-99e4-851f7ff5b69c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a5157b25-0c56-47b6-99e4-851f7ff5b69c\") pod \"mariadb-copy-data\" (UID: \"40721062-412e-4a4f-83b6-2e0692251cc0\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/a48f3a9669289c61f032348ec680f832cfe02e53ceeafa219253084ca235b6ea/globalmount\"" pod="openstack/mariadb-copy-data" Mar 10 20:19:12 crc kubenswrapper[4861]: I0310 20:19:12.081111 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m94jb\" (UniqueName: \"kubernetes.io/projected/40721062-412e-4a4f-83b6-2e0692251cc0-kube-api-access-m94jb\") pod \"mariadb-copy-data\" (UID: \"40721062-412e-4a4f-83b6-2e0692251cc0\") " pod="openstack/mariadb-copy-data" Mar 10 20:19:12 crc kubenswrapper[4861]: I0310 20:19:12.088590 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-a5157b25-0c56-47b6-99e4-851f7ff5b69c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a5157b25-0c56-47b6-99e4-851f7ff5b69c\") pod \"mariadb-copy-data\" (UID: \"40721062-412e-4a4f-83b6-2e0692251cc0\") " pod="openstack/mariadb-copy-data" Mar 10 20:19:12 crc kubenswrapper[4861]: I0310 20:19:12.140778 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-copy-data" Mar 10 20:19:12 crc kubenswrapper[4861]: I0310 20:19:12.522317 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-copy-data"] Mar 10 20:19:13 crc kubenswrapper[4861]: I0310 20:19:13.225228 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-copy-data" event={"ID":"40721062-412e-4a4f-83b6-2e0692251cc0","Type":"ContainerStarted","Data":"a8e3b4b2a8cebf353d44abab4747ba60015b4b906702e0d1f87ee0d14c9001c4"} Mar 10 20:19:13 crc kubenswrapper[4861]: I0310 20:19:13.225517 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-copy-data" event={"ID":"40721062-412e-4a4f-83b6-2e0692251cc0","Type":"ContainerStarted","Data":"f9e7971ae21b1d8de25803eb73ea5d11365f40f3fcf641ee7d71e40d9dfa8631"} Mar 10 20:19:13 crc kubenswrapper[4861]: I0310 20:19:13.264200 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/mariadb-copy-data" podStartSLOduration=3.264180857 podStartE2EDuration="3.264180857s" podCreationTimestamp="2026-03-10 20:19:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-10 20:19:13.259930172 +0000 UTC m=+5497.023366132" watchObservedRunningTime="2026-03-10 20:19:13.264180857 +0000 UTC m=+5497.027616817" Mar 10 20:19:16 crc kubenswrapper[4861]: I0310 20:19:16.107762 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mariadb-client"] Mar 10 20:19:16 crc kubenswrapper[4861]: I0310 20:19:16.109528 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client" Mar 10 20:19:16 crc kubenswrapper[4861]: I0310 20:19:16.116260 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client"] Mar 10 20:19:16 crc kubenswrapper[4861]: I0310 20:19:16.217964 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6ff69\" (UniqueName: \"kubernetes.io/projected/ffeca2e7-c928-46dc-a54a-44a59c1964e6-kube-api-access-6ff69\") pod \"mariadb-client\" (UID: \"ffeca2e7-c928-46dc-a54a-44a59c1964e6\") " pod="openstack/mariadb-client" Mar 10 20:19:16 crc kubenswrapper[4861]: I0310 20:19:16.319781 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6ff69\" (UniqueName: \"kubernetes.io/projected/ffeca2e7-c928-46dc-a54a-44a59c1964e6-kube-api-access-6ff69\") pod \"mariadb-client\" (UID: \"ffeca2e7-c928-46dc-a54a-44a59c1964e6\") " pod="openstack/mariadb-client" Mar 10 20:19:16 crc kubenswrapper[4861]: I0310 20:19:16.356096 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6ff69\" (UniqueName: \"kubernetes.io/projected/ffeca2e7-c928-46dc-a54a-44a59c1964e6-kube-api-access-6ff69\") pod \"mariadb-client\" (UID: \"ffeca2e7-c928-46dc-a54a-44a59c1964e6\") " pod="openstack/mariadb-client" Mar 10 20:19:16 crc kubenswrapper[4861]: I0310 20:19:16.447829 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client" Mar 10 20:19:16 crc kubenswrapper[4861]: I0310 20:19:16.793548 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client"] Mar 10 20:19:16 crc kubenswrapper[4861]: W0310 20:19:16.800906 4861 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podffeca2e7_c928_46dc_a54a_44a59c1964e6.slice/crio-15d0c51a5958ec74bb88cb17282e79a6c11c8e768845900febe4b7e08392fc49 WatchSource:0}: Error finding container 15d0c51a5958ec74bb88cb17282e79a6c11c8e768845900febe4b7e08392fc49: Status 404 returned error can't find the container with id 15d0c51a5958ec74bb88cb17282e79a6c11c8e768845900febe4b7e08392fc49 Mar 10 20:19:17 crc kubenswrapper[4861]: I0310 20:19:17.267050 4861 generic.go:334] "Generic (PLEG): container finished" podID="ffeca2e7-c928-46dc-a54a-44a59c1964e6" containerID="cad6766ee8c3b97b1ec21539b2877b104e6d141b82eb3b41da18f877bfe97612" exitCode=0 Mar 10 20:19:17 crc kubenswrapper[4861]: I0310 20:19:17.267143 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client" event={"ID":"ffeca2e7-c928-46dc-a54a-44a59c1964e6","Type":"ContainerDied","Data":"cad6766ee8c3b97b1ec21539b2877b104e6d141b82eb3b41da18f877bfe97612"} Mar 10 20:19:17 crc kubenswrapper[4861]: I0310 20:19:17.267216 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client" event={"ID":"ffeca2e7-c928-46dc-a54a-44a59c1964e6","Type":"ContainerStarted","Data":"15d0c51a5958ec74bb88cb17282e79a6c11c8e768845900febe4b7e08392fc49"} Mar 10 20:19:18 crc kubenswrapper[4861]: I0310 20:19:18.544170 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client" Mar 10 20:19:18 crc kubenswrapper[4861]: I0310 20:19:18.570070 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_mariadb-client_ffeca2e7-c928-46dc-a54a-44a59c1964e6/mariadb-client/0.log" Mar 10 20:19:18 crc kubenswrapper[4861]: I0310 20:19:18.606497 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mariadb-client"] Mar 10 20:19:18 crc kubenswrapper[4861]: I0310 20:19:18.614617 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mariadb-client"] Mar 10 20:19:18 crc kubenswrapper[4861]: I0310 20:19:18.662361 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ff69\" (UniqueName: \"kubernetes.io/projected/ffeca2e7-c928-46dc-a54a-44a59c1964e6-kube-api-access-6ff69\") pod \"ffeca2e7-c928-46dc-a54a-44a59c1964e6\" (UID: \"ffeca2e7-c928-46dc-a54a-44a59c1964e6\") " Mar 10 20:19:18 crc kubenswrapper[4861]: I0310 20:19:18.670116 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ffeca2e7-c928-46dc-a54a-44a59c1964e6-kube-api-access-6ff69" (OuterVolumeSpecName: "kube-api-access-6ff69") pod "ffeca2e7-c928-46dc-a54a-44a59c1964e6" (UID: "ffeca2e7-c928-46dc-a54a-44a59c1964e6"). InnerVolumeSpecName "kube-api-access-6ff69". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 10 20:19:18 crc kubenswrapper[4861]: I0310 20:19:18.765464 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ff69\" (UniqueName: \"kubernetes.io/projected/ffeca2e7-c928-46dc-a54a-44a59c1964e6-kube-api-access-6ff69\") on node \"crc\" DevicePath \"\"" Mar 10 20:19:18 crc kubenswrapper[4861]: I0310 20:19:18.779381 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mariadb-client"] Mar 10 20:19:18 crc kubenswrapper[4861]: E0310 20:19:18.779852 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ffeca2e7-c928-46dc-a54a-44a59c1964e6" containerName="mariadb-client" Mar 10 20:19:18 crc kubenswrapper[4861]: I0310 20:19:18.779885 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="ffeca2e7-c928-46dc-a54a-44a59c1964e6" containerName="mariadb-client" Mar 10 20:19:18 crc kubenswrapper[4861]: I0310 20:19:18.780168 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="ffeca2e7-c928-46dc-a54a-44a59c1964e6" containerName="mariadb-client" Mar 10 20:19:18 crc kubenswrapper[4861]: I0310 20:19:18.781826 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client" Mar 10 20:19:18 crc kubenswrapper[4861]: I0310 20:19:18.833815 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client"] Mar 10 20:19:18 crc kubenswrapper[4861]: I0310 20:19:18.866992 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6t7t6\" (UniqueName: \"kubernetes.io/projected/74aafd10-ffc9-46ea-afd3-659b208306b3-kube-api-access-6t7t6\") pod \"mariadb-client\" (UID: \"74aafd10-ffc9-46ea-afd3-659b208306b3\") " pod="openstack/mariadb-client" Mar 10 20:19:18 crc kubenswrapper[4861]: I0310 20:19:18.969288 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6t7t6\" (UniqueName: \"kubernetes.io/projected/74aafd10-ffc9-46ea-afd3-659b208306b3-kube-api-access-6t7t6\") pod \"mariadb-client\" (UID: \"74aafd10-ffc9-46ea-afd3-659b208306b3\") " pod="openstack/mariadb-client" Mar 10 20:19:18 crc kubenswrapper[4861]: I0310 20:19:18.977392 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ffeca2e7-c928-46dc-a54a-44a59c1964e6" path="/var/lib/kubelet/pods/ffeca2e7-c928-46dc-a54a-44a59c1964e6/volumes" Mar 10 20:19:19 crc kubenswrapper[4861]: I0310 20:19:19.009301 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6t7t6\" (UniqueName: \"kubernetes.io/projected/74aafd10-ffc9-46ea-afd3-659b208306b3-kube-api-access-6t7t6\") pod \"mariadb-client\" (UID: \"74aafd10-ffc9-46ea-afd3-659b208306b3\") " pod="openstack/mariadb-client" Mar 10 20:19:19 crc kubenswrapper[4861]: I0310 20:19:19.109431 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client" Mar 10 20:19:19 crc kubenswrapper[4861]: I0310 20:19:19.293910 4861 scope.go:117] "RemoveContainer" containerID="cad6766ee8c3b97b1ec21539b2877b104e6d141b82eb3b41da18f877bfe97612" Mar 10 20:19:19 crc kubenswrapper[4861]: I0310 20:19:19.294164 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client" Mar 10 20:19:19 crc kubenswrapper[4861]: W0310 20:19:19.416020 4861 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod74aafd10_ffc9_46ea_afd3_659b208306b3.slice/crio-1892aa1f1749c9752086d2f9239f4dcf76363fe3a9e3fcee7440c838d16d802c WatchSource:0}: Error finding container 1892aa1f1749c9752086d2f9239f4dcf76363fe3a9e3fcee7440c838d16d802c: Status 404 returned error can't find the container with id 1892aa1f1749c9752086d2f9239f4dcf76363fe3a9e3fcee7440c838d16d802c Mar 10 20:19:19 crc kubenswrapper[4861]: I0310 20:19:19.422501 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client"] Mar 10 20:19:20 crc kubenswrapper[4861]: I0310 20:19:20.309688 4861 generic.go:334] "Generic (PLEG): container finished" podID="74aafd10-ffc9-46ea-afd3-659b208306b3" containerID="75f834c089d67fb9db0e20e967a2da241011c41b1a8541cb50cdbb1cdbe9f41d" exitCode=0 Mar 10 20:19:20 crc kubenswrapper[4861]: I0310 20:19:20.309857 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client" event={"ID":"74aafd10-ffc9-46ea-afd3-659b208306b3","Type":"ContainerDied","Data":"75f834c089d67fb9db0e20e967a2da241011c41b1a8541cb50cdbb1cdbe9f41d"} Mar 10 20:19:20 crc kubenswrapper[4861]: I0310 20:19:20.310325 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client" event={"ID":"74aafd10-ffc9-46ea-afd3-659b208306b3","Type":"ContainerStarted","Data":"1892aa1f1749c9752086d2f9239f4dcf76363fe3a9e3fcee7440c838d16d802c"} Mar 10 20:19:21 crc kubenswrapper[4861]: 
I0310 20:19:21.706692 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client" Mar 10 20:19:21 crc kubenswrapper[4861]: I0310 20:19:21.732588 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_mariadb-client_74aafd10-ffc9-46ea-afd3-659b208306b3/mariadb-client/0.log" Mar 10 20:19:21 crc kubenswrapper[4861]: I0310 20:19:21.771058 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mariadb-client"] Mar 10 20:19:21 crc kubenswrapper[4861]: I0310 20:19:21.781298 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mariadb-client"] Mar 10 20:19:21 crc kubenswrapper[4861]: I0310 20:19:21.821329 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6t7t6\" (UniqueName: \"kubernetes.io/projected/74aafd10-ffc9-46ea-afd3-659b208306b3-kube-api-access-6t7t6\") pod \"74aafd10-ffc9-46ea-afd3-659b208306b3\" (UID: \"74aafd10-ffc9-46ea-afd3-659b208306b3\") " Mar 10 20:19:21 crc kubenswrapper[4861]: I0310 20:19:21.829545 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/74aafd10-ffc9-46ea-afd3-659b208306b3-kube-api-access-6t7t6" (OuterVolumeSpecName: "kube-api-access-6t7t6") pod "74aafd10-ffc9-46ea-afd3-659b208306b3" (UID: "74aafd10-ffc9-46ea-afd3-659b208306b3"). InnerVolumeSpecName "kube-api-access-6t7t6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 10 20:19:21 crc kubenswrapper[4861]: I0310 20:19:21.924024 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6t7t6\" (UniqueName: \"kubernetes.io/projected/74aafd10-ffc9-46ea-afd3-659b208306b3-kube-api-access-6t7t6\") on node \"crc\" DevicePath \"\"" Mar 10 20:19:22 crc kubenswrapper[4861]: I0310 20:19:22.329222 4861 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1892aa1f1749c9752086d2f9239f4dcf76363fe3a9e3fcee7440c838d16d802c" Mar 10 20:19:22 crc kubenswrapper[4861]: I0310 20:19:22.329328 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client" Mar 10 20:19:22 crc kubenswrapper[4861]: I0310 20:19:22.973766 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="74aafd10-ffc9-46ea-afd3-659b208306b3" path="/var/lib/kubelet/pods/74aafd10-ffc9-46ea-afd3-659b208306b3/volumes" Mar 10 20:19:23 crc kubenswrapper[4861]: I0310 20:19:23.958030 4861 scope.go:117] "RemoveContainer" containerID="cd2259ee04441075924dcee5b49ce2f64778d63f5c1b13672065b616c80497ca" Mar 10 20:19:23 crc kubenswrapper[4861]: E0310 20:19:23.959470 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qttbr_openshift-machine-config-operator(771189c2-452d-4204-a0b7-abfe9ba62bd0)\"" pod="openshift-machine-config-operator/machine-config-daemon-qttbr" podUID="771189c2-452d-4204-a0b7-abfe9ba62bd0" Mar 10 20:19:36 crc kubenswrapper[4861]: I0310 20:19:36.971999 4861 scope.go:117] "RemoveContainer" containerID="cd2259ee04441075924dcee5b49ce2f64778d63f5c1b13672065b616c80497ca" Mar 10 20:19:36 crc kubenswrapper[4861]: E0310 20:19:36.973286 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qttbr_openshift-machine-config-operator(771189c2-452d-4204-a0b7-abfe9ba62bd0)\"" pod="openshift-machine-config-operator/machine-config-daemon-qttbr" podUID="771189c2-452d-4204-a0b7-abfe9ba62bd0" Mar 10 20:19:47 crc kubenswrapper[4861]: I0310 20:19:47.958084 4861 scope.go:117] "RemoveContainer" containerID="cd2259ee04441075924dcee5b49ce2f64778d63f5c1b13672065b616c80497ca" Mar 10 20:19:47 crc kubenswrapper[4861]: E0310 20:19:47.959138 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qttbr_openshift-machine-config-operator(771189c2-452d-4204-a0b7-abfe9ba62bd0)\"" pod="openshift-machine-config-operator/machine-config-daemon-qttbr" podUID="771189c2-452d-4204-a0b7-abfe9ba62bd0" Mar 10 20:19:54 crc kubenswrapper[4861]: I0310 20:19:54.892777 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"] Mar 10 20:19:54 crc kubenswrapper[4861]: E0310 20:19:54.895565 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="74aafd10-ffc9-46ea-afd3-659b208306b3" containerName="mariadb-client" Mar 10 20:19:54 crc kubenswrapper[4861]: I0310 20:19:54.895745 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="74aafd10-ffc9-46ea-afd3-659b208306b3" containerName="mariadb-client" Mar 10 20:19:54 crc kubenswrapper[4861]: I0310 20:19:54.896145 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="74aafd10-ffc9-46ea-afd3-659b208306b3" containerName="mariadb-client" Mar 10 20:19:54 crc kubenswrapper[4861]: I0310 20:19:54.897657 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-0" Mar 10 20:19:54 crc kubenswrapper[4861]: I0310 20:19:54.911816 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts" Mar 10 20:19:54 crc kubenswrapper[4861]: I0310 20:19:54.912199 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics" Mar 10 20:19:54 crc kubenswrapper[4861]: I0310 20:19:54.912535 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-96kdv" Mar 10 20:19:54 crc kubenswrapper[4861]: I0310 20:19:54.912803 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config" Mar 10 20:19:54 crc kubenswrapper[4861]: I0310 20:19:54.913211 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs" Mar 10 20:19:54 crc kubenswrapper[4861]: I0310 20:19:54.929690 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-2"] Mar 10 20:19:54 crc kubenswrapper[4861]: I0310 20:19:54.946421 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-2" Mar 10 20:19:54 crc kubenswrapper[4861]: I0310 20:19:54.966646 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/6937f060-bbcc-427e-9717-8c96952c10c0-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"6937f060-bbcc-427e-9717-8c96952c10c0\") " pod="openstack/ovsdbserver-nb-0" Mar 10 20:19:54 crc kubenswrapper[4861]: I0310 20:19:54.966761 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6937f060-bbcc-427e-9717-8c96952c10c0-config\") pod \"ovsdbserver-nb-0\" (UID: \"6937f060-bbcc-427e-9717-8c96952c10c0\") " pod="openstack/ovsdbserver-nb-0" Mar 10 20:19:54 crc kubenswrapper[4861]: I0310 20:19:54.967902 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/6937f060-bbcc-427e-9717-8c96952c10c0-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"6937f060-bbcc-427e-9717-8c96952c10c0\") " pod="openstack/ovsdbserver-nb-0" Mar 10 20:19:54 crc kubenswrapper[4861]: I0310 20:19:54.967975 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/6937f060-bbcc-427e-9717-8c96952c10c0-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"6937f060-bbcc-427e-9717-8c96952c10c0\") " pod="openstack/ovsdbserver-nb-0" Mar 10 20:19:54 crc kubenswrapper[4861]: I0310 20:19:54.968048 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-3d7e2a99-c584-47be-bab2-3d39af057240\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-3d7e2a99-c584-47be-bab2-3d39af057240\") pod \"ovsdbserver-nb-0\" (UID: \"6937f060-bbcc-427e-9717-8c96952c10c0\") " 
pod="openstack/ovsdbserver-nb-0" Mar 10 20:19:54 crc kubenswrapper[4861]: I0310 20:19:54.968084 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sxw2g\" (UniqueName: \"kubernetes.io/projected/6937f060-bbcc-427e-9717-8c96952c10c0-kube-api-access-sxw2g\") pod \"ovsdbserver-nb-0\" (UID: \"6937f060-bbcc-427e-9717-8c96952c10c0\") " pod="openstack/ovsdbserver-nb-0" Mar 10 20:19:54 crc kubenswrapper[4861]: I0310 20:19:54.968120 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6937f060-bbcc-427e-9717-8c96952c10c0-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"6937f060-bbcc-427e-9717-8c96952c10c0\") " pod="openstack/ovsdbserver-nb-0" Mar 10 20:19:54 crc kubenswrapper[4861]: I0310 20:19:54.968152 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6937f060-bbcc-427e-9717-8c96952c10c0-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"6937f060-bbcc-427e-9717-8c96952c10c0\") " pod="openstack/ovsdbserver-nb-0" Mar 10 20:19:54 crc kubenswrapper[4861]: I0310 20:19:54.981125 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-1"] Mar 10 20:19:54 crc kubenswrapper[4861]: I0310 20:19:54.983519 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-1" Mar 10 20:19:55 crc kubenswrapper[4861]: I0310 20:19:55.007449 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Mar 10 20:19:55 crc kubenswrapper[4861]: I0310 20:19:55.016004 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-2"] Mar 10 20:19:55 crc kubenswrapper[4861]: I0310 20:19:55.024809 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-1"] Mar 10 20:19:55 crc kubenswrapper[4861]: I0310 20:19:55.069533 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-3d7e2a99-c584-47be-bab2-3d39af057240\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-3d7e2a99-c584-47be-bab2-3d39af057240\") pod \"ovsdbserver-nb-0\" (UID: \"6937f060-bbcc-427e-9717-8c96952c10c0\") " pod="openstack/ovsdbserver-nb-0" Mar 10 20:19:55 crc kubenswrapper[4861]: I0310 20:19:55.069933 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nx86k\" (UniqueName: \"kubernetes.io/projected/c6987688-b05a-4f80-bfbb-7062f55147d8-kube-api-access-nx86k\") pod \"ovsdbserver-nb-2\" (UID: \"c6987688-b05a-4f80-bfbb-7062f55147d8\") " pod="openstack/ovsdbserver-nb-2" Mar 10 20:19:55 crc kubenswrapper[4861]: I0310 20:19:55.069964 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-9121c901-7d7b-424a-8a9a-d2e0a357a64d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-9121c901-7d7b-424a-8a9a-d2e0a357a64d\") pod \"ovsdbserver-nb-2\" (UID: \"c6987688-b05a-4f80-bfbb-7062f55147d8\") " pod="openstack/ovsdbserver-nb-2" Mar 10 20:19:55 crc kubenswrapper[4861]: I0310 20:19:55.069985 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sxw2g\" (UniqueName: 
\"kubernetes.io/projected/6937f060-bbcc-427e-9717-8c96952c10c0-kube-api-access-sxw2g\") pod \"ovsdbserver-nb-0\" (UID: \"6937f060-bbcc-427e-9717-8c96952c10c0\") " pod="openstack/ovsdbserver-nb-0" Mar 10 20:19:55 crc kubenswrapper[4861]: I0310 20:19:55.070050 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6937f060-bbcc-427e-9717-8c96952c10c0-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"6937f060-bbcc-427e-9717-8c96952c10c0\") " pod="openstack/ovsdbserver-nb-0" Mar 10 20:19:55 crc kubenswrapper[4861]: I0310 20:19:55.070084 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6937f060-bbcc-427e-9717-8c96952c10c0-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"6937f060-bbcc-427e-9717-8c96952c10c0\") " pod="openstack/ovsdbserver-nb-0" Mar 10 20:19:55 crc kubenswrapper[4861]: I0310 20:19:55.070123 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-b42f0f3f-f868-4b14-8b25-d6d054f25fe6\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b42f0f3f-f868-4b14-8b25-d6d054f25fe6\") pod \"ovsdbserver-nb-1\" (UID: \"597cad59-a9fa-4933-83cd-9df822b8c2c7\") " pod="openstack/ovsdbserver-nb-1" Mar 10 20:19:55 crc kubenswrapper[4861]: I0310 20:19:55.070144 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6jstm\" (UniqueName: \"kubernetes.io/projected/597cad59-a9fa-4933-83cd-9df822b8c2c7-kube-api-access-6jstm\") pod \"ovsdbserver-nb-1\" (UID: \"597cad59-a9fa-4933-83cd-9df822b8c2c7\") " pod="openstack/ovsdbserver-nb-1" Mar 10 20:19:55 crc kubenswrapper[4861]: I0310 20:19:55.070169 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/c6987688-b05a-4f80-bfbb-7062f55147d8-combined-ca-bundle\") pod \"ovsdbserver-nb-2\" (UID: \"c6987688-b05a-4f80-bfbb-7062f55147d8\") " pod="openstack/ovsdbserver-nb-2" Mar 10 20:19:55 crc kubenswrapper[4861]: I0310 20:19:55.070203 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/6937f060-bbcc-427e-9717-8c96952c10c0-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"6937f060-bbcc-427e-9717-8c96952c10c0\") " pod="openstack/ovsdbserver-nb-0" Mar 10 20:19:55 crc kubenswrapper[4861]: I0310 20:19:55.070226 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/597cad59-a9fa-4933-83cd-9df822b8c2c7-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-1\" (UID: \"597cad59-a9fa-4933-83cd-9df822b8c2c7\") " pod="openstack/ovsdbserver-nb-1" Mar 10 20:19:55 crc kubenswrapper[4861]: I0310 20:19:55.070247 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/597cad59-a9fa-4933-83cd-9df822b8c2c7-config\") pod \"ovsdbserver-nb-1\" (UID: \"597cad59-a9fa-4933-83cd-9df822b8c2c7\") " pod="openstack/ovsdbserver-nb-1" Mar 10 20:19:55 crc kubenswrapper[4861]: I0310 20:19:55.070268 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/597cad59-a9fa-4933-83cd-9df822b8c2c7-scripts\") pod \"ovsdbserver-nb-1\" (UID: \"597cad59-a9fa-4933-83cd-9df822b8c2c7\") " pod="openstack/ovsdbserver-nb-1" Mar 10 20:19:55 crc kubenswrapper[4861]: I0310 20:19:55.070290 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6937f060-bbcc-427e-9717-8c96952c10c0-config\") pod \"ovsdbserver-nb-0\" (UID: 
\"6937f060-bbcc-427e-9717-8c96952c10c0\") " pod="openstack/ovsdbserver-nb-0" Mar 10 20:19:55 crc kubenswrapper[4861]: I0310 20:19:55.070311 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c6987688-b05a-4f80-bfbb-7062f55147d8-scripts\") pod \"ovsdbserver-nb-2\" (UID: \"c6987688-b05a-4f80-bfbb-7062f55147d8\") " pod="openstack/ovsdbserver-nb-2" Mar 10 20:19:55 crc kubenswrapper[4861]: I0310 20:19:55.070332 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/597cad59-a9fa-4933-83cd-9df822b8c2c7-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-1\" (UID: \"597cad59-a9fa-4933-83cd-9df822b8c2c7\") " pod="openstack/ovsdbserver-nb-1" Mar 10 20:19:55 crc kubenswrapper[4861]: I0310 20:19:55.070353 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/597cad59-a9fa-4933-83cd-9df822b8c2c7-combined-ca-bundle\") pod \"ovsdbserver-nb-1\" (UID: \"597cad59-a9fa-4933-83cd-9df822b8c2c7\") " pod="openstack/ovsdbserver-nb-1" Mar 10 20:19:55 crc kubenswrapper[4861]: I0310 20:19:55.070375 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/c6987688-b05a-4f80-bfbb-7062f55147d8-ovsdb-rundir\") pod \"ovsdbserver-nb-2\" (UID: \"c6987688-b05a-4f80-bfbb-7062f55147d8\") " pod="openstack/ovsdbserver-nb-2" Mar 10 20:19:55 crc kubenswrapper[4861]: I0310 20:19:55.070406 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/6937f060-bbcc-427e-9717-8c96952c10c0-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"6937f060-bbcc-427e-9717-8c96952c10c0\") " pod="openstack/ovsdbserver-nb-0" 
Mar 10 20:19:55 crc kubenswrapper[4861]: I0310 20:19:55.070426 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/c6987688-b05a-4f80-bfbb-7062f55147d8-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-2\" (UID: \"c6987688-b05a-4f80-bfbb-7062f55147d8\") " pod="openstack/ovsdbserver-nb-2" Mar 10 20:19:55 crc kubenswrapper[4861]: I0310 20:19:55.070451 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c6987688-b05a-4f80-bfbb-7062f55147d8-config\") pod \"ovsdbserver-nb-2\" (UID: \"c6987688-b05a-4f80-bfbb-7062f55147d8\") " pod="openstack/ovsdbserver-nb-2" Mar 10 20:19:55 crc kubenswrapper[4861]: I0310 20:19:55.070472 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/597cad59-a9fa-4933-83cd-9df822b8c2c7-ovsdb-rundir\") pod \"ovsdbserver-nb-1\" (UID: \"597cad59-a9fa-4933-83cd-9df822b8c2c7\") " pod="openstack/ovsdbserver-nb-1" Mar 10 20:19:55 crc kubenswrapper[4861]: I0310 20:19:55.070489 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/6937f060-bbcc-427e-9717-8c96952c10c0-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"6937f060-bbcc-427e-9717-8c96952c10c0\") " pod="openstack/ovsdbserver-nb-0" Mar 10 20:19:55 crc kubenswrapper[4861]: I0310 20:19:55.070520 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/c6987688-b05a-4f80-bfbb-7062f55147d8-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-2\" (UID: \"c6987688-b05a-4f80-bfbb-7062f55147d8\") " pod="openstack/ovsdbserver-nb-2" Mar 10 20:19:55 crc kubenswrapper[4861]: I0310 20:19:55.072090 4861 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6937f060-bbcc-427e-9717-8c96952c10c0-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"6937f060-bbcc-427e-9717-8c96952c10c0\") " pod="openstack/ovsdbserver-nb-0" Mar 10 20:19:55 crc kubenswrapper[4861]: I0310 20:19:55.073387 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/6937f060-bbcc-427e-9717-8c96952c10c0-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"6937f060-bbcc-427e-9717-8c96952c10c0\") " pod="openstack/ovsdbserver-nb-0" Mar 10 20:19:55 crc kubenswrapper[4861]: I0310 20:19:55.074251 4861 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Mar 10 20:19:55 crc kubenswrapper[4861]: I0310 20:19:55.074283 4861 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-3d7e2a99-c584-47be-bab2-3d39af057240\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-3d7e2a99-c584-47be-bab2-3d39af057240\") pod \"ovsdbserver-nb-0\" (UID: \"6937f060-bbcc-427e-9717-8c96952c10c0\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/a7ef94b804c722d1476ebcf43f22734657e6eab3c3287fd359c8d052603a9908/globalmount\"" pod="openstack/ovsdbserver-nb-0" Mar 10 20:19:55 crc kubenswrapper[4861]: I0310 20:19:55.077015 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6937f060-bbcc-427e-9717-8c96952c10c0-config\") pod \"ovsdbserver-nb-0\" (UID: \"6937f060-bbcc-427e-9717-8c96952c10c0\") " pod="openstack/ovsdbserver-nb-0" Mar 10 20:19:55 crc kubenswrapper[4861]: I0310 20:19:55.078729 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/6937f060-bbcc-427e-9717-8c96952c10c0-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"6937f060-bbcc-427e-9717-8c96952c10c0\") " pod="openstack/ovsdbserver-nb-0" Mar 10 20:19:55 crc kubenswrapper[4861]: I0310 20:19:55.079049 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/6937f060-bbcc-427e-9717-8c96952c10c0-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"6937f060-bbcc-427e-9717-8c96952c10c0\") " pod="openstack/ovsdbserver-nb-0" Mar 10 20:19:55 crc kubenswrapper[4861]: I0310 20:19:55.088202 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6937f060-bbcc-427e-9717-8c96952c10c0-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"6937f060-bbcc-427e-9717-8c96952c10c0\") " pod="openstack/ovsdbserver-nb-0" Mar 10 20:19:55 crc kubenswrapper[4861]: I0310 20:19:55.090468 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sxw2g\" (UniqueName: \"kubernetes.io/projected/6937f060-bbcc-427e-9717-8c96952c10c0-kube-api-access-sxw2g\") pod \"ovsdbserver-nb-0\" (UID: \"6937f060-bbcc-427e-9717-8c96952c10c0\") " pod="openstack/ovsdbserver-nb-0" Mar 10 20:19:55 crc kubenswrapper[4861]: I0310 20:19:55.110031 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-3d7e2a99-c584-47be-bab2-3d39af057240\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-3d7e2a99-c584-47be-bab2-3d39af057240\") pod \"ovsdbserver-nb-0\" (UID: \"6937f060-bbcc-427e-9717-8c96952c10c0\") " pod="openstack/ovsdbserver-nb-0" Mar 10 20:19:55 crc kubenswrapper[4861]: I0310 20:19:55.171883 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/c6987688-b05a-4f80-bfbb-7062f55147d8-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-2\" 
(UID: \"c6987688-b05a-4f80-bfbb-7062f55147d8\") " pod="openstack/ovsdbserver-nb-2" Mar 10 20:19:55 crc kubenswrapper[4861]: I0310 20:19:55.172187 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nx86k\" (UniqueName: \"kubernetes.io/projected/c6987688-b05a-4f80-bfbb-7062f55147d8-kube-api-access-nx86k\") pod \"ovsdbserver-nb-2\" (UID: \"c6987688-b05a-4f80-bfbb-7062f55147d8\") " pod="openstack/ovsdbserver-nb-2" Mar 10 20:19:55 crc kubenswrapper[4861]: I0310 20:19:55.172322 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-9121c901-7d7b-424a-8a9a-d2e0a357a64d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-9121c901-7d7b-424a-8a9a-d2e0a357a64d\") pod \"ovsdbserver-nb-2\" (UID: \"c6987688-b05a-4f80-bfbb-7062f55147d8\") " pod="openstack/ovsdbserver-nb-2" Mar 10 20:19:55 crc kubenswrapper[4861]: I0310 20:19:55.172469 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-b42f0f3f-f868-4b14-8b25-d6d054f25fe6\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b42f0f3f-f868-4b14-8b25-d6d054f25fe6\") pod \"ovsdbserver-nb-1\" (UID: \"597cad59-a9fa-4933-83cd-9df822b8c2c7\") " pod="openstack/ovsdbserver-nb-1" Mar 10 20:19:55 crc kubenswrapper[4861]: I0310 20:19:55.172589 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6jstm\" (UniqueName: \"kubernetes.io/projected/597cad59-a9fa-4933-83cd-9df822b8c2c7-kube-api-access-6jstm\") pod \"ovsdbserver-nb-1\" (UID: \"597cad59-a9fa-4933-83cd-9df822b8c2c7\") " pod="openstack/ovsdbserver-nb-1" Mar 10 20:19:55 crc kubenswrapper[4861]: I0310 20:19:55.172692 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c6987688-b05a-4f80-bfbb-7062f55147d8-combined-ca-bundle\") pod \"ovsdbserver-nb-2\" (UID: \"c6987688-b05a-4f80-bfbb-7062f55147d8\") " 
pod="openstack/ovsdbserver-nb-2" Mar 10 20:19:55 crc kubenswrapper[4861]: I0310 20:19:55.172849 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/597cad59-a9fa-4933-83cd-9df822b8c2c7-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-1\" (UID: \"597cad59-a9fa-4933-83cd-9df822b8c2c7\") " pod="openstack/ovsdbserver-nb-1" Mar 10 20:19:55 crc kubenswrapper[4861]: I0310 20:19:55.172956 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/597cad59-a9fa-4933-83cd-9df822b8c2c7-config\") pod \"ovsdbserver-nb-1\" (UID: \"597cad59-a9fa-4933-83cd-9df822b8c2c7\") " pod="openstack/ovsdbserver-nb-1" Mar 10 20:19:55 crc kubenswrapper[4861]: I0310 20:19:55.173061 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/597cad59-a9fa-4933-83cd-9df822b8c2c7-scripts\") pod \"ovsdbserver-nb-1\" (UID: \"597cad59-a9fa-4933-83cd-9df822b8c2c7\") " pod="openstack/ovsdbserver-nb-1" Mar 10 20:19:55 crc kubenswrapper[4861]: I0310 20:19:55.173174 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c6987688-b05a-4f80-bfbb-7062f55147d8-scripts\") pod \"ovsdbserver-nb-2\" (UID: \"c6987688-b05a-4f80-bfbb-7062f55147d8\") " pod="openstack/ovsdbserver-nb-2" Mar 10 20:19:55 crc kubenswrapper[4861]: I0310 20:19:55.173307 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/597cad59-a9fa-4933-83cd-9df822b8c2c7-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-1\" (UID: \"597cad59-a9fa-4933-83cd-9df822b8c2c7\") " pod="openstack/ovsdbserver-nb-1" Mar 10 20:19:55 crc kubenswrapper[4861]: I0310 20:19:55.173410 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/597cad59-a9fa-4933-83cd-9df822b8c2c7-combined-ca-bundle\") pod \"ovsdbserver-nb-1\" (UID: \"597cad59-a9fa-4933-83cd-9df822b8c2c7\") " pod="openstack/ovsdbserver-nb-1" Mar 10 20:19:55 crc kubenswrapper[4861]: I0310 20:19:55.173534 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/c6987688-b05a-4f80-bfbb-7062f55147d8-ovsdb-rundir\") pod \"ovsdbserver-nb-2\" (UID: \"c6987688-b05a-4f80-bfbb-7062f55147d8\") " pod="openstack/ovsdbserver-nb-2" Mar 10 20:19:55 crc kubenswrapper[4861]: I0310 20:19:55.174084 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/597cad59-a9fa-4933-83cd-9df822b8c2c7-config\") pod \"ovsdbserver-nb-1\" (UID: \"597cad59-a9fa-4933-83cd-9df822b8c2c7\") " pod="openstack/ovsdbserver-nb-1" Mar 10 20:19:55 crc kubenswrapper[4861]: I0310 20:19:55.174784 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/c6987688-b05a-4f80-bfbb-7062f55147d8-ovsdb-rundir\") pod \"ovsdbserver-nb-2\" (UID: \"c6987688-b05a-4f80-bfbb-7062f55147d8\") " pod="openstack/ovsdbserver-nb-2" Mar 10 20:19:55 crc kubenswrapper[4861]: I0310 20:19:55.174799 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c6987688-b05a-4f80-bfbb-7062f55147d8-scripts\") pod \"ovsdbserver-nb-2\" (UID: \"c6987688-b05a-4f80-bfbb-7062f55147d8\") " pod="openstack/ovsdbserver-nb-2" Mar 10 20:19:55 crc kubenswrapper[4861]: I0310 20:19:55.174931 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/597cad59-a9fa-4933-83cd-9df822b8c2c7-scripts\") pod \"ovsdbserver-nb-1\" (UID: \"597cad59-a9fa-4933-83cd-9df822b8c2c7\") " pod="openstack/ovsdbserver-nb-1" Mar 10 20:19:55 crc 
kubenswrapper[4861]: I0310 20:19:55.175044 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/c6987688-b05a-4f80-bfbb-7062f55147d8-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-2\" (UID: \"c6987688-b05a-4f80-bfbb-7062f55147d8\") " pod="openstack/ovsdbserver-nb-2" Mar 10 20:19:55 crc kubenswrapper[4861]: I0310 20:19:55.175255 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c6987688-b05a-4f80-bfbb-7062f55147d8-config\") pod \"ovsdbserver-nb-2\" (UID: \"c6987688-b05a-4f80-bfbb-7062f55147d8\") " pod="openstack/ovsdbserver-nb-2" Mar 10 20:19:55 crc kubenswrapper[4861]: I0310 20:19:55.175314 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/597cad59-a9fa-4933-83cd-9df822b8c2c7-ovsdb-rundir\") pod \"ovsdbserver-nb-1\" (UID: \"597cad59-a9fa-4933-83cd-9df822b8c2c7\") " pod="openstack/ovsdbserver-nb-1" Mar 10 20:19:55 crc kubenswrapper[4861]: I0310 20:19:55.175614 4861 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Mar 10 20:19:55 crc kubenswrapper[4861]: I0310 20:19:55.175649 4861 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-b42f0f3f-f868-4b14-8b25-d6d054f25fe6\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b42f0f3f-f868-4b14-8b25-d6d054f25fe6\") pod \"ovsdbserver-nb-1\" (UID: \"597cad59-a9fa-4933-83cd-9df822b8c2c7\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/59397a5ebd854574aa610e5fb01a77496dace83842c39108f2c20f4156b776c4/globalmount\"" pod="openstack/ovsdbserver-nb-1" Mar 10 20:19:55 crc kubenswrapper[4861]: I0310 20:19:55.175808 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/597cad59-a9fa-4933-83cd-9df822b8c2c7-ovsdb-rundir\") pod \"ovsdbserver-nb-1\" (UID: \"597cad59-a9fa-4933-83cd-9df822b8c2c7\") " pod="openstack/ovsdbserver-nb-1" Mar 10 20:19:55 crc kubenswrapper[4861]: I0310 20:19:55.176221 4861 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Mar 10 20:19:55 crc kubenswrapper[4861]: I0310 20:19:55.176254 4861 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-9121c901-7d7b-424a-8a9a-d2e0a357a64d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-9121c901-7d7b-424a-8a9a-d2e0a357a64d\") pod \"ovsdbserver-nb-2\" (UID: \"c6987688-b05a-4f80-bfbb-7062f55147d8\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/d404604ae095cf3563e1cb76737da40e18e329a37e51ab851bf21bed5448388f/globalmount\"" pod="openstack/ovsdbserver-nb-2" Mar 10 20:19:55 crc kubenswrapper[4861]: I0310 20:19:55.176389 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c6987688-b05a-4f80-bfbb-7062f55147d8-config\") pod \"ovsdbserver-nb-2\" (UID: \"c6987688-b05a-4f80-bfbb-7062f55147d8\") " pod="openstack/ovsdbserver-nb-2" Mar 10 20:19:55 crc kubenswrapper[4861]: I0310 20:19:55.176410 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/c6987688-b05a-4f80-bfbb-7062f55147d8-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-2\" (UID: \"c6987688-b05a-4f80-bfbb-7062f55147d8\") " pod="openstack/ovsdbserver-nb-2" Mar 10 20:19:55 crc kubenswrapper[4861]: I0310 20:19:55.179699 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c6987688-b05a-4f80-bfbb-7062f55147d8-combined-ca-bundle\") pod \"ovsdbserver-nb-2\" (UID: \"c6987688-b05a-4f80-bfbb-7062f55147d8\") " pod="openstack/ovsdbserver-nb-2" Mar 10 20:19:55 crc kubenswrapper[4861]: I0310 20:19:55.179822 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/597cad59-a9fa-4933-83cd-9df822b8c2c7-combined-ca-bundle\") pod \"ovsdbserver-nb-1\" (UID: \"597cad59-a9fa-4933-83cd-9df822b8c2c7\") " 
pod="openstack/ovsdbserver-nb-1" Mar 10 20:19:55 crc kubenswrapper[4861]: I0310 20:19:55.180305 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/597cad59-a9fa-4933-83cd-9df822b8c2c7-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-1\" (UID: \"597cad59-a9fa-4933-83cd-9df822b8c2c7\") " pod="openstack/ovsdbserver-nb-1" Mar 10 20:19:55 crc kubenswrapper[4861]: I0310 20:19:55.181511 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/c6987688-b05a-4f80-bfbb-7062f55147d8-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-2\" (UID: \"c6987688-b05a-4f80-bfbb-7062f55147d8\") " pod="openstack/ovsdbserver-nb-2" Mar 10 20:19:55 crc kubenswrapper[4861]: I0310 20:19:55.183155 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/597cad59-a9fa-4933-83cd-9df822b8c2c7-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-1\" (UID: \"597cad59-a9fa-4933-83cd-9df822b8c2c7\") " pod="openstack/ovsdbserver-nb-1" Mar 10 20:19:55 crc kubenswrapper[4861]: I0310 20:19:55.193198 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nx86k\" (UniqueName: \"kubernetes.io/projected/c6987688-b05a-4f80-bfbb-7062f55147d8-kube-api-access-nx86k\") pod \"ovsdbserver-nb-2\" (UID: \"c6987688-b05a-4f80-bfbb-7062f55147d8\") " pod="openstack/ovsdbserver-nb-2" Mar 10 20:19:55 crc kubenswrapper[4861]: I0310 20:19:55.198048 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6jstm\" (UniqueName: \"kubernetes.io/projected/597cad59-a9fa-4933-83cd-9df822b8c2c7-kube-api-access-6jstm\") pod \"ovsdbserver-nb-1\" (UID: \"597cad59-a9fa-4933-83cd-9df822b8c2c7\") " pod="openstack/ovsdbserver-nb-1" Mar 10 20:19:55 crc kubenswrapper[4861]: I0310 20:19:55.210644 4861 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"pvc-b42f0f3f-f868-4b14-8b25-d6d054f25fe6\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b42f0f3f-f868-4b14-8b25-d6d054f25fe6\") pod \"ovsdbserver-nb-1\" (UID: \"597cad59-a9fa-4933-83cd-9df822b8c2c7\") " pod="openstack/ovsdbserver-nb-1" Mar 10 20:19:55 crc kubenswrapper[4861]: I0310 20:19:55.211889 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-9121c901-7d7b-424a-8a9a-d2e0a357a64d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-9121c901-7d7b-424a-8a9a-d2e0a357a64d\") pod \"ovsdbserver-nb-2\" (UID: \"c6987688-b05a-4f80-bfbb-7062f55147d8\") " pod="openstack/ovsdbserver-nb-2" Mar 10 20:19:55 crc kubenswrapper[4861]: I0310 20:19:55.246688 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Mar 10 20:19:55 crc kubenswrapper[4861]: I0310 20:19:55.283512 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-2" Mar 10 20:19:55 crc kubenswrapper[4861]: I0310 20:19:55.308170 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-1" Mar 10 20:19:55 crc kubenswrapper[4861]: I0310 20:19:55.845341 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Mar 10 20:19:55 crc kubenswrapper[4861]: I0310 20:19:55.929310 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-1"] Mar 10 20:19:56 crc kubenswrapper[4861]: I0310 20:19:56.657983 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-1" event={"ID":"597cad59-a9fa-4933-83cd-9df822b8c2c7","Type":"ContainerStarted","Data":"6d5030a133dc464a37252deffcd6ed303bc85de904cbfa78656dbdef24204061"} Mar 10 20:19:56 crc kubenswrapper[4861]: I0310 20:19:56.658967 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-1" event={"ID":"597cad59-a9fa-4933-83cd-9df822b8c2c7","Type":"ContainerStarted","Data":"5e25ce249fd818fa64880d66a0927c47be3a750e5899ebf752c17195c11f06cd"} Mar 10 20:19:56 crc kubenswrapper[4861]: I0310 20:19:56.658982 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-1" event={"ID":"597cad59-a9fa-4933-83cd-9df822b8c2c7","Type":"ContainerStarted","Data":"0ac59fa31727baf7a46b1f5eb205742aa4dab69a02d27073bf27c07e0ff9e551"} Mar 10 20:19:56 crc kubenswrapper[4861]: I0310 20:19:56.662109 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"6937f060-bbcc-427e-9717-8c96952c10c0","Type":"ContainerStarted","Data":"29dc796ee723d1e5bc6ba4877fef8ba7c0e04c47f774131a7d0091adad7349cd"} Mar 10 20:19:56 crc kubenswrapper[4861]: I0310 20:19:56.662139 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"6937f060-bbcc-427e-9717-8c96952c10c0","Type":"ContainerStarted","Data":"81c5f885dae4903c2450a5d843b7cf1f5054d4e49e4f8b77eb7552ac5c68dba4"} Mar 10 20:19:56 crc kubenswrapper[4861]: I0310 20:19:56.662152 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/ovsdbserver-nb-0" event={"ID":"6937f060-bbcc-427e-9717-8c96952c10c0","Type":"ContainerStarted","Data":"9d55b12bcbc75d0db8f19d5b20201ffd1bbae528cfb41516ee2eb1bd89cf113b"} Mar 10 20:19:56 crc kubenswrapper[4861]: I0310 20:19:56.678954 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-1" podStartSLOduration=3.678929338 podStartE2EDuration="3.678929338s" podCreationTimestamp="2026-03-10 20:19:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-10 20:19:56.67460714 +0000 UTC m=+5540.438043130" watchObservedRunningTime="2026-03-10 20:19:56.678929338 +0000 UTC m=+5540.442365338" Mar 10 20:19:56 crc kubenswrapper[4861]: I0310 20:19:56.705602 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=3.705562672 podStartE2EDuration="3.705562672s" podCreationTimestamp="2026-03-10 20:19:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-10 20:19:56.695564021 +0000 UTC m=+5540.459000011" watchObservedRunningTime="2026-03-10 20:19:56.705562672 +0000 UTC m=+5540.468998672" Mar 10 20:19:56 crc kubenswrapper[4861]: W0310 20:19:56.970083 4861 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc6987688_b05a_4f80_bfbb_7062f55147d8.slice/crio-3deed3570ba3e1cc04890abe2b46d671d59bd863c1b8ee375480a994fe621952 WatchSource:0}: Error finding container 3deed3570ba3e1cc04890abe2b46d671d59bd863c1b8ee375480a994fe621952: Status 404 returned error can't find the container with id 3deed3570ba3e1cc04890abe2b46d671d59bd863c1b8ee375480a994fe621952 Mar 10 20:19:56 crc kubenswrapper[4861]: I0310 20:19:56.976369 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-2"] Mar 10 
20:19:57 crc kubenswrapper[4861]: I0310 20:19:57.210180 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"] Mar 10 20:19:57 crc kubenswrapper[4861]: I0310 20:19:57.213139 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Mar 10 20:19:57 crc kubenswrapper[4861]: I0310 20:19:57.216138 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs" Mar 10 20:19:57 crc kubenswrapper[4861]: I0310 20:19:57.216364 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config" Mar 10 20:19:57 crc kubenswrapper[4861]: I0310 20:19:57.216428 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts" Mar 10 20:19:57 crc kubenswrapper[4861]: I0310 20:19:57.232644 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-q58nm" Mar 10 20:19:57 crc kubenswrapper[4861]: I0310 20:19:57.281654 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Mar 10 20:19:57 crc kubenswrapper[4861]: I0310 20:19:57.294929 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-1"] Mar 10 20:19:57 crc kubenswrapper[4861]: I0310 20:19:57.297231 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-1" Mar 10 20:19:57 crc kubenswrapper[4861]: I0310 20:19:57.305291 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-2"] Mar 10 20:19:57 crc kubenswrapper[4861]: I0310 20:19:57.308068 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-2" Mar 10 20:19:57 crc kubenswrapper[4861]: I0310 20:19:57.313412 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-1"] Mar 10 20:19:57 crc kubenswrapper[4861]: I0310 20:19:57.319461 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-2"] Mar 10 20:19:57 crc kubenswrapper[4861]: I0310 20:19:57.419766 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/28253164-2f24-45d4-8efd-26627728ca52-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-1\" (UID: \"28253164-2f24-45d4-8efd-26627728ca52\") " pod="openstack/ovsdbserver-sb-1" Mar 10 20:19:57 crc kubenswrapper[4861]: I0310 20:19:57.419846 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f3163e8d-4aaa-4b26-bb68-40bfed734f01-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"f3163e8d-4aaa-4b26-bb68-40bfed734f01\") " pod="openstack/ovsdbserver-sb-0" Mar 10 20:19:57 crc kubenswrapper[4861]: I0310 20:19:57.419967 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/2d5db324-92cd-450e-8f74-0d2df72352ba-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-2\" (UID: \"2d5db324-92cd-450e-8f74-0d2df72352ba\") " pod="openstack/ovsdbserver-sb-2" Mar 10 20:19:57 crc kubenswrapper[4861]: I0310 20:19:57.420013 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/f3163e8d-4aaa-4b26-bb68-40bfed734f01-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"f3163e8d-4aaa-4b26-bb68-40bfed734f01\") " pod="openstack/ovsdbserver-sb-0" Mar 10 20:19:57 crc kubenswrapper[4861]: I0310 20:19:57.420114 
4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2d5db324-92cd-450e-8f74-0d2df72352ba-scripts\") pod \"ovsdbserver-sb-2\" (UID: \"2d5db324-92cd-450e-8f74-0d2df72352ba\") " pod="openstack/ovsdbserver-sb-2" Mar 10 20:19:57 crc kubenswrapper[4861]: I0310 20:19:57.420165 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/28253164-2f24-45d4-8efd-26627728ca52-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-1\" (UID: \"28253164-2f24-45d4-8efd-26627728ca52\") " pod="openstack/ovsdbserver-sb-1" Mar 10 20:19:57 crc kubenswrapper[4861]: I0310 20:19:57.420201 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-d303ebf6-9506-4089-9fb6-aa447e46eb69\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d303ebf6-9506-4089-9fb6-aa447e46eb69\") pod \"ovsdbserver-sb-2\" (UID: \"2d5db324-92cd-450e-8f74-0d2df72352ba\") " pod="openstack/ovsdbserver-sb-2" Mar 10 20:19:57 crc kubenswrapper[4861]: I0310 20:19:57.420228 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/2d5db324-92cd-450e-8f74-0d2df72352ba-ovsdb-rundir\") pod \"ovsdbserver-sb-2\" (UID: \"2d5db324-92cd-450e-8f74-0d2df72352ba\") " pod="openstack/ovsdbserver-sb-2" Mar 10 20:19:57 crc kubenswrapper[4861]: I0310 20:19:57.420331 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/28253164-2f24-45d4-8efd-26627728ca52-combined-ca-bundle\") pod \"ovsdbserver-sb-1\" (UID: \"28253164-2f24-45d4-8efd-26627728ca52\") " pod="openstack/ovsdbserver-sb-1" Mar 10 20:19:57 crc kubenswrapper[4861]: I0310 20:19:57.420442 4861 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2d5db324-92cd-450e-8f74-0d2df72352ba-config\") pod \"ovsdbserver-sb-2\" (UID: \"2d5db324-92cd-450e-8f74-0d2df72352ba\") " pod="openstack/ovsdbserver-sb-2" Mar 10 20:19:57 crc kubenswrapper[4861]: I0310 20:19:57.420480 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4rpd2\" (UniqueName: \"kubernetes.io/projected/28253164-2f24-45d4-8efd-26627728ca52-kube-api-access-4rpd2\") pod \"ovsdbserver-sb-1\" (UID: \"28253164-2f24-45d4-8efd-26627728ca52\") " pod="openstack/ovsdbserver-sb-1" Mar 10 20:19:57 crc kubenswrapper[4861]: I0310 20:19:57.420527 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/28253164-2f24-45d4-8efd-26627728ca52-ovsdb-rundir\") pod \"ovsdbserver-sb-1\" (UID: \"28253164-2f24-45d4-8efd-26627728ca52\") " pod="openstack/ovsdbserver-sb-1" Mar 10 20:19:57 crc kubenswrapper[4861]: I0310 20:19:57.420630 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f3163e8d-4aaa-4b26-bb68-40bfed734f01-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"f3163e8d-4aaa-4b26-bb68-40bfed734f01\") " pod="openstack/ovsdbserver-sb-0" Mar 10 20:19:57 crc kubenswrapper[4861]: I0310 20:19:57.420658 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/28253164-2f24-45d4-8efd-26627728ca52-config\") pod \"ovsdbserver-sb-1\" (UID: \"28253164-2f24-45d4-8efd-26627728ca52\") " pod="openstack/ovsdbserver-sb-1" Mar 10 20:19:57 crc kubenswrapper[4861]: I0310 20:19:57.420762 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4brfr\" 
(UniqueName: \"kubernetes.io/projected/f3163e8d-4aaa-4b26-bb68-40bfed734f01-kube-api-access-4brfr\") pod \"ovsdbserver-sb-0\" (UID: \"f3163e8d-4aaa-4b26-bb68-40bfed734f01\") " pod="openstack/ovsdbserver-sb-0" Mar 10 20:19:57 crc kubenswrapper[4861]: I0310 20:19:57.420825 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/2d5db324-92cd-450e-8f74-0d2df72352ba-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-2\" (UID: \"2d5db324-92cd-450e-8f74-0d2df72352ba\") " pod="openstack/ovsdbserver-sb-2" Mar 10 20:19:57 crc kubenswrapper[4861]: I0310 20:19:57.420851 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-16dbd007-4428-42c3-a8ef-db04c2667370\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-16dbd007-4428-42c3-a8ef-db04c2667370\") pod \"ovsdbserver-sb-1\" (UID: \"28253164-2f24-45d4-8efd-26627728ca52\") " pod="openstack/ovsdbserver-sb-1" Mar 10 20:19:57 crc kubenswrapper[4861]: I0310 20:19:57.421133 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d5db324-92cd-450e-8f74-0d2df72352ba-combined-ca-bundle\") pod \"ovsdbserver-sb-2\" (UID: \"2d5db324-92cd-450e-8f74-0d2df72352ba\") " pod="openstack/ovsdbserver-sb-2" Mar 10 20:19:57 crc kubenswrapper[4861]: I0310 20:19:57.421161 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f3163e8d-4aaa-4b26-bb68-40bfed734f01-config\") pod \"ovsdbserver-sb-0\" (UID: \"f3163e8d-4aaa-4b26-bb68-40bfed734f01\") " pod="openstack/ovsdbserver-sb-0" Mar 10 20:19:57 crc kubenswrapper[4861]: I0310 20:19:57.421234 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-e8f6637b-d88d-4bef-9745-3ddd74cb3e2b\" 
(UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e8f6637b-d88d-4bef-9745-3ddd74cb3e2b\") pod \"ovsdbserver-sb-0\" (UID: \"f3163e8d-4aaa-4b26-bb68-40bfed734f01\") " pod="openstack/ovsdbserver-sb-0" Mar 10 20:19:57 crc kubenswrapper[4861]: I0310 20:19:57.421299 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/f3163e8d-4aaa-4b26-bb68-40bfed734f01-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"f3163e8d-4aaa-4b26-bb68-40bfed734f01\") " pod="openstack/ovsdbserver-sb-0" Mar 10 20:19:57 crc kubenswrapper[4861]: I0310 20:19:57.421327 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/f3163e8d-4aaa-4b26-bb68-40bfed734f01-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"f3163e8d-4aaa-4b26-bb68-40bfed734f01\") " pod="openstack/ovsdbserver-sb-0" Mar 10 20:19:57 crc kubenswrapper[4861]: I0310 20:19:57.421376 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-79nfn\" (UniqueName: \"kubernetes.io/projected/2d5db324-92cd-450e-8f74-0d2df72352ba-kube-api-access-79nfn\") pod \"ovsdbserver-sb-2\" (UID: \"2d5db324-92cd-450e-8f74-0d2df72352ba\") " pod="openstack/ovsdbserver-sb-2" Mar 10 20:19:57 crc kubenswrapper[4861]: I0310 20:19:57.421474 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/28253164-2f24-45d4-8efd-26627728ca52-scripts\") pod \"ovsdbserver-sb-1\" (UID: \"28253164-2f24-45d4-8efd-26627728ca52\") " pod="openstack/ovsdbserver-sb-1" Mar 10 20:19:57 crc kubenswrapper[4861]: I0310 20:19:57.523741 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4brfr\" (UniqueName: 
\"kubernetes.io/projected/f3163e8d-4aaa-4b26-bb68-40bfed734f01-kube-api-access-4brfr\") pod \"ovsdbserver-sb-0\" (UID: \"f3163e8d-4aaa-4b26-bb68-40bfed734f01\") " pod="openstack/ovsdbserver-sb-0" Mar 10 20:19:57 crc kubenswrapper[4861]: I0310 20:19:57.524110 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/2d5db324-92cd-450e-8f74-0d2df72352ba-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-2\" (UID: \"2d5db324-92cd-450e-8f74-0d2df72352ba\") " pod="openstack/ovsdbserver-sb-2" Mar 10 20:19:57 crc kubenswrapper[4861]: I0310 20:19:57.524142 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-16dbd007-4428-42c3-a8ef-db04c2667370\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-16dbd007-4428-42c3-a8ef-db04c2667370\") pod \"ovsdbserver-sb-1\" (UID: \"28253164-2f24-45d4-8efd-26627728ca52\") " pod="openstack/ovsdbserver-sb-1" Mar 10 20:19:57 crc kubenswrapper[4861]: I0310 20:19:57.524173 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d5db324-92cd-450e-8f74-0d2df72352ba-combined-ca-bundle\") pod \"ovsdbserver-sb-2\" (UID: \"2d5db324-92cd-450e-8f74-0d2df72352ba\") " pod="openstack/ovsdbserver-sb-2" Mar 10 20:19:57 crc kubenswrapper[4861]: I0310 20:19:57.524196 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f3163e8d-4aaa-4b26-bb68-40bfed734f01-config\") pod \"ovsdbserver-sb-0\" (UID: \"f3163e8d-4aaa-4b26-bb68-40bfed734f01\") " pod="openstack/ovsdbserver-sb-0" Mar 10 20:19:57 crc kubenswrapper[4861]: I0310 20:19:57.524245 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-e8f6637b-d88d-4bef-9745-3ddd74cb3e2b\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e8f6637b-d88d-4bef-9745-3ddd74cb3e2b\") pod \"ovsdbserver-sb-0\" (UID: \"f3163e8d-4aaa-4b26-bb68-40bfed734f01\") " pod="openstack/ovsdbserver-sb-0" Mar 10 20:19:57 crc kubenswrapper[4861]: I0310 20:19:57.524279 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/f3163e8d-4aaa-4b26-bb68-40bfed734f01-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"f3163e8d-4aaa-4b26-bb68-40bfed734f01\") " pod="openstack/ovsdbserver-sb-0" Mar 10 20:19:57 crc kubenswrapper[4861]: I0310 20:19:57.524307 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/f3163e8d-4aaa-4b26-bb68-40bfed734f01-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"f3163e8d-4aaa-4b26-bb68-40bfed734f01\") " pod="openstack/ovsdbserver-sb-0" Mar 10 20:19:57 crc kubenswrapper[4861]: I0310 20:19:57.524343 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-79nfn\" (UniqueName: \"kubernetes.io/projected/2d5db324-92cd-450e-8f74-0d2df72352ba-kube-api-access-79nfn\") pod \"ovsdbserver-sb-2\" (UID: \"2d5db324-92cd-450e-8f74-0d2df72352ba\") " pod="openstack/ovsdbserver-sb-2" Mar 10 20:19:57 crc kubenswrapper[4861]: I0310 20:19:57.524386 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/28253164-2f24-45d4-8efd-26627728ca52-scripts\") pod \"ovsdbserver-sb-1\" (UID: \"28253164-2f24-45d4-8efd-26627728ca52\") " pod="openstack/ovsdbserver-sb-1" Mar 10 20:19:57 crc kubenswrapper[4861]: I0310 20:19:57.524412 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/28253164-2f24-45d4-8efd-26627728ca52-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-1\" (UID: 
\"28253164-2f24-45d4-8efd-26627728ca52\") " pod="openstack/ovsdbserver-sb-1" Mar 10 20:19:57 crc kubenswrapper[4861]: I0310 20:19:57.524444 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f3163e8d-4aaa-4b26-bb68-40bfed734f01-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"f3163e8d-4aaa-4b26-bb68-40bfed734f01\") " pod="openstack/ovsdbserver-sb-0" Mar 10 20:19:57 crc kubenswrapper[4861]: I0310 20:19:57.524474 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/2d5db324-92cd-450e-8f74-0d2df72352ba-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-2\" (UID: \"2d5db324-92cd-450e-8f74-0d2df72352ba\") " pod="openstack/ovsdbserver-sb-2" Mar 10 20:19:57 crc kubenswrapper[4861]: I0310 20:19:57.524502 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/f3163e8d-4aaa-4b26-bb68-40bfed734f01-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"f3163e8d-4aaa-4b26-bb68-40bfed734f01\") " pod="openstack/ovsdbserver-sb-0" Mar 10 20:19:57 crc kubenswrapper[4861]: I0310 20:19:57.524538 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2d5db324-92cd-450e-8f74-0d2df72352ba-scripts\") pod \"ovsdbserver-sb-2\" (UID: \"2d5db324-92cd-450e-8f74-0d2df72352ba\") " pod="openstack/ovsdbserver-sb-2" Mar 10 20:19:57 crc kubenswrapper[4861]: I0310 20:19:57.524566 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/28253164-2f24-45d4-8efd-26627728ca52-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-1\" (UID: \"28253164-2f24-45d4-8efd-26627728ca52\") " pod="openstack/ovsdbserver-sb-1" Mar 10 20:19:57 crc kubenswrapper[4861]: I0310 20:19:57.524596 4861 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-d303ebf6-9506-4089-9fb6-aa447e46eb69\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d303ebf6-9506-4089-9fb6-aa447e46eb69\") pod \"ovsdbserver-sb-2\" (UID: \"2d5db324-92cd-450e-8f74-0d2df72352ba\") " pod="openstack/ovsdbserver-sb-2" Mar 10 20:19:57 crc kubenswrapper[4861]: I0310 20:19:57.524619 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/2d5db324-92cd-450e-8f74-0d2df72352ba-ovsdb-rundir\") pod \"ovsdbserver-sb-2\" (UID: \"2d5db324-92cd-450e-8f74-0d2df72352ba\") " pod="openstack/ovsdbserver-sb-2" Mar 10 20:19:57 crc kubenswrapper[4861]: I0310 20:19:57.524647 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/28253164-2f24-45d4-8efd-26627728ca52-combined-ca-bundle\") pod \"ovsdbserver-sb-1\" (UID: \"28253164-2f24-45d4-8efd-26627728ca52\") " pod="openstack/ovsdbserver-sb-1" Mar 10 20:19:57 crc kubenswrapper[4861]: I0310 20:19:57.524673 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2d5db324-92cd-450e-8f74-0d2df72352ba-config\") pod \"ovsdbserver-sb-2\" (UID: \"2d5db324-92cd-450e-8f74-0d2df72352ba\") " pod="openstack/ovsdbserver-sb-2" Mar 10 20:19:57 crc kubenswrapper[4861]: I0310 20:19:57.524693 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4rpd2\" (UniqueName: \"kubernetes.io/projected/28253164-2f24-45d4-8efd-26627728ca52-kube-api-access-4rpd2\") pod \"ovsdbserver-sb-1\" (UID: \"28253164-2f24-45d4-8efd-26627728ca52\") " pod="openstack/ovsdbserver-sb-1" Mar 10 20:19:57 crc kubenswrapper[4861]: I0310 20:19:57.524764 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: 
\"kubernetes.io/empty-dir/28253164-2f24-45d4-8efd-26627728ca52-ovsdb-rundir\") pod \"ovsdbserver-sb-1\" (UID: \"28253164-2f24-45d4-8efd-26627728ca52\") " pod="openstack/ovsdbserver-sb-1" Mar 10 20:19:57 crc kubenswrapper[4861]: I0310 20:19:57.524787 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f3163e8d-4aaa-4b26-bb68-40bfed734f01-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"f3163e8d-4aaa-4b26-bb68-40bfed734f01\") " pod="openstack/ovsdbserver-sb-0" Mar 10 20:19:57 crc kubenswrapper[4861]: I0310 20:19:57.524812 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/28253164-2f24-45d4-8efd-26627728ca52-config\") pod \"ovsdbserver-sb-1\" (UID: \"28253164-2f24-45d4-8efd-26627728ca52\") " pod="openstack/ovsdbserver-sb-1" Mar 10 20:19:57 crc kubenswrapper[4861]: I0310 20:19:57.525911 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/28253164-2f24-45d4-8efd-26627728ca52-config\") pod \"ovsdbserver-sb-1\" (UID: \"28253164-2f24-45d4-8efd-26627728ca52\") " pod="openstack/ovsdbserver-sb-1" Mar 10 20:19:57 crc kubenswrapper[4861]: I0310 20:19:57.526224 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f3163e8d-4aaa-4b26-bb68-40bfed734f01-config\") pod \"ovsdbserver-sb-0\" (UID: \"f3163e8d-4aaa-4b26-bb68-40bfed734f01\") " pod="openstack/ovsdbserver-sb-0" Mar 10 20:19:57 crc kubenswrapper[4861]: I0310 20:19:57.526393 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2d5db324-92cd-450e-8f74-0d2df72352ba-scripts\") pod \"ovsdbserver-sb-2\" (UID: \"2d5db324-92cd-450e-8f74-0d2df72352ba\") " pod="openstack/ovsdbserver-sb-2" Mar 10 20:19:57 crc kubenswrapper[4861]: I0310 20:19:57.527260 4861 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/f3163e8d-4aaa-4b26-bb68-40bfed734f01-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"f3163e8d-4aaa-4b26-bb68-40bfed734f01\") " pod="openstack/ovsdbserver-sb-0" Mar 10 20:19:57 crc kubenswrapper[4861]: I0310 20:19:57.527928 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/2d5db324-92cd-450e-8f74-0d2df72352ba-ovsdb-rundir\") pod \"ovsdbserver-sb-2\" (UID: \"2d5db324-92cd-450e-8f74-0d2df72352ba\") " pod="openstack/ovsdbserver-sb-2" Mar 10 20:19:57 crc kubenswrapper[4861]: I0310 20:19:57.528883 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2d5db324-92cd-450e-8f74-0d2df72352ba-config\") pod \"ovsdbserver-sb-2\" (UID: \"2d5db324-92cd-450e-8f74-0d2df72352ba\") " pod="openstack/ovsdbserver-sb-2" Mar 10 20:19:57 crc kubenswrapper[4861]: I0310 20:19:57.529698 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f3163e8d-4aaa-4b26-bb68-40bfed734f01-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"f3163e8d-4aaa-4b26-bb68-40bfed734f01\") " pod="openstack/ovsdbserver-sb-0" Mar 10 20:19:57 crc kubenswrapper[4861]: I0310 20:19:57.530260 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/28253164-2f24-45d4-8efd-26627728ca52-scripts\") pod \"ovsdbserver-sb-1\" (UID: \"28253164-2f24-45d4-8efd-26627728ca52\") " pod="openstack/ovsdbserver-sb-1" Mar 10 20:19:57 crc kubenswrapper[4861]: I0310 20:19:57.530544 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/2d5db324-92cd-450e-8f74-0d2df72352ba-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-2\" (UID: 
\"2d5db324-92cd-450e-8f74-0d2df72352ba\") " pod="openstack/ovsdbserver-sb-2" Mar 10 20:19:57 crc kubenswrapper[4861]: I0310 20:19:57.530806 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/28253164-2f24-45d4-8efd-26627728ca52-ovsdb-rundir\") pod \"ovsdbserver-sb-1\" (UID: \"28253164-2f24-45d4-8efd-26627728ca52\") " pod="openstack/ovsdbserver-sb-1" Mar 10 20:19:57 crc kubenswrapper[4861]: I0310 20:19:57.531566 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/28253164-2f24-45d4-8efd-26627728ca52-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-1\" (UID: \"28253164-2f24-45d4-8efd-26627728ca52\") " pod="openstack/ovsdbserver-sb-1" Mar 10 20:19:57 crc kubenswrapper[4861]: I0310 20:19:57.533852 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/28253164-2f24-45d4-8efd-26627728ca52-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-1\" (UID: \"28253164-2f24-45d4-8efd-26627728ca52\") " pod="openstack/ovsdbserver-sb-1" Mar 10 20:19:57 crc kubenswrapper[4861]: I0310 20:19:57.533893 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/28253164-2f24-45d4-8efd-26627728ca52-combined-ca-bundle\") pod \"ovsdbserver-sb-1\" (UID: \"28253164-2f24-45d4-8efd-26627728ca52\") " pod="openstack/ovsdbserver-sb-1" Mar 10 20:19:57 crc kubenswrapper[4861]: I0310 20:19:57.535112 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/f3163e8d-4aaa-4b26-bb68-40bfed734f01-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"f3163e8d-4aaa-4b26-bb68-40bfed734f01\") " pod="openstack/ovsdbserver-sb-0" Mar 10 20:19:57 crc kubenswrapper[4861]: I0310 20:19:57.538678 4861 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/f3163e8d-4aaa-4b26-bb68-40bfed734f01-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"f3163e8d-4aaa-4b26-bb68-40bfed734f01\") " pod="openstack/ovsdbserver-sb-0" Mar 10 20:19:57 crc kubenswrapper[4861]: I0310 20:19:57.538946 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f3163e8d-4aaa-4b26-bb68-40bfed734f01-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"f3163e8d-4aaa-4b26-bb68-40bfed734f01\") " pod="openstack/ovsdbserver-sb-0" Mar 10 20:19:57 crc kubenswrapper[4861]: I0310 20:19:57.545448 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/2d5db324-92cd-450e-8f74-0d2df72352ba-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-2\" (UID: \"2d5db324-92cd-450e-8f74-0d2df72352ba\") " pod="openstack/ovsdbserver-sb-2" Mar 10 20:19:57 crc kubenswrapper[4861]: I0310 20:19:57.547598 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d5db324-92cd-450e-8f74-0d2df72352ba-combined-ca-bundle\") pod \"ovsdbserver-sb-2\" (UID: \"2d5db324-92cd-450e-8f74-0d2df72352ba\") " pod="openstack/ovsdbserver-sb-2" Mar 10 20:19:57 crc kubenswrapper[4861]: I0310 20:19:57.551289 4861 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Mar 10 20:19:57 crc kubenswrapper[4861]: I0310 20:19:57.551344 4861 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-d303ebf6-9506-4089-9fb6-aa447e46eb69\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d303ebf6-9506-4089-9fb6-aa447e46eb69\") pod \"ovsdbserver-sb-2\" (UID: \"2d5db324-92cd-450e-8f74-0d2df72352ba\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/d9b57cccfd4a6d6aad884442ad0e6173fdde049e66aa3afe9c7f875315c335ce/globalmount\"" pod="openstack/ovsdbserver-sb-2" Mar 10 20:19:57 crc kubenswrapper[4861]: I0310 20:19:57.551435 4861 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Mar 10 20:19:57 crc kubenswrapper[4861]: I0310 20:19:57.551525 4861 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-16dbd007-4428-42c3-a8ef-db04c2667370\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-16dbd007-4428-42c3-a8ef-db04c2667370\") pod \"ovsdbserver-sb-1\" (UID: \"28253164-2f24-45d4-8efd-26627728ca52\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/5ce747a438f25fe097ee26f373bd9c67561c19beb2053f905b113771a93a0ef4/globalmount\"" pod="openstack/ovsdbserver-sb-1" Mar 10 20:19:57 crc kubenswrapper[4861]: I0310 20:19:57.552226 4861 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Mar 10 20:19:57 crc kubenswrapper[4861]: I0310 20:19:57.552271 4861 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-e8f6637b-d88d-4bef-9745-3ddd74cb3e2b\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e8f6637b-d88d-4bef-9745-3ddd74cb3e2b\") pod \"ovsdbserver-sb-0\" (UID: \"f3163e8d-4aaa-4b26-bb68-40bfed734f01\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/6a484c8ecc2849baf71eea4e8b5355280c4623f45c998f702a0e68887f5274d2/globalmount\"" pod="openstack/ovsdbserver-sb-0" Mar 10 20:19:57 crc kubenswrapper[4861]: I0310 20:19:57.554112 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-79nfn\" (UniqueName: \"kubernetes.io/projected/2d5db324-92cd-450e-8f74-0d2df72352ba-kube-api-access-79nfn\") pod \"ovsdbserver-sb-2\" (UID: \"2d5db324-92cd-450e-8f74-0d2df72352ba\") " pod="openstack/ovsdbserver-sb-2" Mar 10 20:19:57 crc kubenswrapper[4861]: I0310 20:19:57.559533 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4brfr\" (UniqueName: \"kubernetes.io/projected/f3163e8d-4aaa-4b26-bb68-40bfed734f01-kube-api-access-4brfr\") pod \"ovsdbserver-sb-0\" (UID: \"f3163e8d-4aaa-4b26-bb68-40bfed734f01\") " pod="openstack/ovsdbserver-sb-0" Mar 10 20:19:57 crc kubenswrapper[4861]: I0310 20:19:57.564365 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4rpd2\" (UniqueName: \"kubernetes.io/projected/28253164-2f24-45d4-8efd-26627728ca52-kube-api-access-4rpd2\") pod \"ovsdbserver-sb-1\" (UID: \"28253164-2f24-45d4-8efd-26627728ca52\") " pod="openstack/ovsdbserver-sb-1" Mar 10 20:19:57 crc kubenswrapper[4861]: I0310 20:19:57.596060 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-e8f6637b-d88d-4bef-9745-3ddd74cb3e2b\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e8f6637b-d88d-4bef-9745-3ddd74cb3e2b\") pod 
\"ovsdbserver-sb-0\" (UID: \"f3163e8d-4aaa-4b26-bb68-40bfed734f01\") " pod="openstack/ovsdbserver-sb-0" Mar 10 20:19:57 crc kubenswrapper[4861]: I0310 20:19:57.613018 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-16dbd007-4428-42c3-a8ef-db04c2667370\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-16dbd007-4428-42c3-a8ef-db04c2667370\") pod \"ovsdbserver-sb-1\" (UID: \"28253164-2f24-45d4-8efd-26627728ca52\") " pod="openstack/ovsdbserver-sb-1" Mar 10 20:19:57 crc kubenswrapper[4861]: I0310 20:19:57.613120 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-d303ebf6-9506-4089-9fb6-aa447e46eb69\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d303ebf6-9506-4089-9fb6-aa447e46eb69\") pod \"ovsdbserver-sb-2\" (UID: \"2d5db324-92cd-450e-8f74-0d2df72352ba\") " pod="openstack/ovsdbserver-sb-2" Mar 10 20:19:57 crc kubenswrapper[4861]: I0310 20:19:57.678598 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-2" event={"ID":"c6987688-b05a-4f80-bfbb-7062f55147d8","Type":"ContainerStarted","Data":"c67e926f86bd1e9876a61a6bae3df77d8fcb17a4dce7aff90048a84475e70304"} Mar 10 20:19:57 crc kubenswrapper[4861]: I0310 20:19:57.678698 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-2" event={"ID":"c6987688-b05a-4f80-bfbb-7062f55147d8","Type":"ContainerStarted","Data":"1e2faffefec2e30d51bc0d41bc0b965d35f419f17a7c587e29bcb3401f6c0bb5"} Mar 10 20:19:57 crc kubenswrapper[4861]: I0310 20:19:57.678756 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-2" event={"ID":"c6987688-b05a-4f80-bfbb-7062f55147d8","Type":"ContainerStarted","Data":"3deed3570ba3e1cc04890abe2b46d671d59bd863c1b8ee375480a994fe621952"} Mar 10 20:19:57 crc kubenswrapper[4861]: I0310 20:19:57.694889 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-1" Mar 10 20:19:57 crc kubenswrapper[4861]: I0310 20:19:57.701679 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-2" Mar 10 20:19:57 crc kubenswrapper[4861]: I0310 20:19:57.707067 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-2" podStartSLOduration=4.707051465 podStartE2EDuration="4.707051465s" podCreationTimestamp="2026-03-10 20:19:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-10 20:19:57.70064737 +0000 UTC m=+5541.464083330" watchObservedRunningTime="2026-03-10 20:19:57.707051465 +0000 UTC m=+5541.470487435" Mar 10 20:19:57 crc kubenswrapper[4861]: I0310 20:19:57.865635 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Mar 10 20:19:58 crc kubenswrapper[4861]: I0310 20:19:58.033732 4861 scope.go:117] "RemoveContainer" containerID="57a01b824f16d9553619d67c9453569cd148dda8988e1125dfc8226bcfc9e055" Mar 10 20:19:58 crc kubenswrapper[4861]: I0310 20:19:58.217234 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Mar 10 20:19:58 crc kubenswrapper[4861]: W0310 20:19:58.225984 4861 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf3163e8d_4aaa_4b26_bb68_40bfed734f01.slice/crio-b783d173382362eaf38be22ffe8175c38760344fec9f81517f014b7d3ebe84be WatchSource:0}: Error finding container b783d173382362eaf38be22ffe8175c38760344fec9f81517f014b7d3ebe84be: Status 404 returned error can't find the container with id b783d173382362eaf38be22ffe8175c38760344fec9f81517f014b7d3ebe84be Mar 10 20:19:58 crc kubenswrapper[4861]: I0310 20:19:58.247617 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openstack/ovsdbserver-nb-0" Mar 10 20:19:58 crc kubenswrapper[4861]: I0310 20:19:58.283987 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-2" Mar 10 20:19:58 crc kubenswrapper[4861]: I0310 20:19:58.309433 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-1" Mar 10 20:19:58 crc kubenswrapper[4861]: I0310 20:19:58.333877 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-1"] Mar 10 20:19:58 crc kubenswrapper[4861]: W0310 20:19:58.335237 4861 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod28253164_2f24_45d4_8efd_26627728ca52.slice/crio-22491f2ccd71446e0afad774c5eec04a8feb1d39791000cff7fe1fd631f672c0 WatchSource:0}: Error finding container 22491f2ccd71446e0afad774c5eec04a8feb1d39791000cff7fe1fd631f672c0: Status 404 returned error can't find the container with id 22491f2ccd71446e0afad774c5eec04a8feb1d39791000cff7fe1fd631f672c0 Mar 10 20:19:58 crc kubenswrapper[4861]: I0310 20:19:58.695148 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-1" event={"ID":"28253164-2f24-45d4-8efd-26627728ca52","Type":"ContainerStarted","Data":"e91afde35a6f043b74ff47fac502b3afc243aab050ad73bc34efa436482f96fc"} Mar 10 20:19:58 crc kubenswrapper[4861]: I0310 20:19:58.695691 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-1" event={"ID":"28253164-2f24-45d4-8efd-26627728ca52","Type":"ContainerStarted","Data":"22491f2ccd71446e0afad774c5eec04a8feb1d39791000cff7fe1fd631f672c0"} Mar 10 20:19:58 crc kubenswrapper[4861]: I0310 20:19:58.698131 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"f3163e8d-4aaa-4b26-bb68-40bfed734f01","Type":"ContainerStarted","Data":"919b410e72ec1546b27afaa67d92a78fb01af9d6ab0f0b7969ef010d612f80e9"} Mar 10 20:19:58 crc 
kubenswrapper[4861]: I0310 20:19:58.698195 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"f3163e8d-4aaa-4b26-bb68-40bfed734f01","Type":"ContainerStarted","Data":"b783d173382362eaf38be22ffe8175c38760344fec9f81517f014b7d3ebe84be"} Mar 10 20:19:58 crc kubenswrapper[4861]: I0310 20:19:58.725009 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=2.724986255 podStartE2EDuration="2.724986255s" podCreationTimestamp="2026-03-10 20:19:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-10 20:19:58.718700785 +0000 UTC m=+5542.482136745" watchObservedRunningTime="2026-03-10 20:19:58.724986255 +0000 UTC m=+5542.488422205" Mar 10 20:19:59 crc kubenswrapper[4861]: I0310 20:19:59.003747 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-2"] Mar 10 20:19:59 crc kubenswrapper[4861]: I0310 20:19:59.711219 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"f3163e8d-4aaa-4b26-bb68-40bfed734f01","Type":"ContainerStarted","Data":"6f611d7276a60dec96b686bbba4f15c0c1e3cd694942fd1774e2275cf8d6188d"} Mar 10 20:19:59 crc kubenswrapper[4861]: I0310 20:19:59.714298 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-2" event={"ID":"2d5db324-92cd-450e-8f74-0d2df72352ba","Type":"ContainerStarted","Data":"0ec04e0f7be54f8def0360b977742aa6245d7bc300fcd752d24f502783a61d0e"} Mar 10 20:19:59 crc kubenswrapper[4861]: I0310 20:19:59.714364 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-2" event={"ID":"2d5db324-92cd-450e-8f74-0d2df72352ba","Type":"ContainerStarted","Data":"affe643a07b70091f7be6552bfa849510cc237657932b9ee015f3fd6bd2fb183"} Mar 10 20:19:59 crc kubenswrapper[4861]: I0310 20:19:59.714386 4861 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/ovsdbserver-sb-2" event={"ID":"2d5db324-92cd-450e-8f74-0d2df72352ba","Type":"ContainerStarted","Data":"899bd0f5cf02da69a5ebd49253a4b6d9399383f17760012d709fb511eb7e8baf"} Mar 10 20:19:59 crc kubenswrapper[4861]: I0310 20:19:59.717569 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-1" event={"ID":"28253164-2f24-45d4-8efd-26627728ca52","Type":"ContainerStarted","Data":"04962abc55fd81aca70ac9dcd0b7943ecd00513613136c9a5c9d81dc3e9fbd5a"} Mar 10 20:19:59 crc kubenswrapper[4861]: I0310 20:19:59.757486 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-2" podStartSLOduration=3.757460792 podStartE2EDuration="3.757460792s" podCreationTimestamp="2026-03-10 20:19:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-10 20:19:59.746147604 +0000 UTC m=+5543.509583564" watchObservedRunningTime="2026-03-10 20:19:59.757460792 +0000 UTC m=+5543.520896792" Mar 10 20:19:59 crc kubenswrapper[4861]: I0310 20:19:59.781741 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-1" podStartSLOduration=3.781686531 podStartE2EDuration="3.781686531s" podCreationTimestamp="2026-03-10 20:19:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-10 20:19:59.767955897 +0000 UTC m=+5543.531391857" watchObservedRunningTime="2026-03-10 20:19:59.781686531 +0000 UTC m=+5543.545122521" Mar 10 20:20:00 crc kubenswrapper[4861]: I0310 20:20:00.180103 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29552900-knzjx"] Mar 10 20:20:00 crc kubenswrapper[4861]: I0310 20:20:00.181864 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29552900-knzjx" Mar 10 20:20:00 crc kubenswrapper[4861]: I0310 20:20:00.184157 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 10 20:20:00 crc kubenswrapper[4861]: I0310 20:20:00.184262 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-gfbj2" Mar 10 20:20:00 crc kubenswrapper[4861]: I0310 20:20:00.184482 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 10 20:20:00 crc kubenswrapper[4861]: I0310 20:20:00.187267 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29552900-knzjx"] Mar 10 20:20:00 crc kubenswrapper[4861]: I0310 20:20:00.247777 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0" Mar 10 20:20:00 crc kubenswrapper[4861]: I0310 20:20:00.284475 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-2" Mar 10 20:20:00 crc kubenswrapper[4861]: I0310 20:20:00.289325 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ndghm\" (UniqueName: \"kubernetes.io/projected/ab5d8759-dfce-43dc-9511-dd4d55b23b00-kube-api-access-ndghm\") pod \"auto-csr-approver-29552900-knzjx\" (UID: \"ab5d8759-dfce-43dc-9511-dd4d55b23b00\") " pod="openshift-infra/auto-csr-approver-29552900-knzjx" Mar 10 20:20:00 crc kubenswrapper[4861]: I0310 20:20:00.308614 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-1" Mar 10 20:20:00 crc kubenswrapper[4861]: I0310 20:20:00.391113 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ndghm\" (UniqueName: \"kubernetes.io/projected/ab5d8759-dfce-43dc-9511-dd4d55b23b00-kube-api-access-ndghm\") pod 
\"auto-csr-approver-29552900-knzjx\" (UID: \"ab5d8759-dfce-43dc-9511-dd4d55b23b00\") " pod="openshift-infra/auto-csr-approver-29552900-knzjx" Mar 10 20:20:00 crc kubenswrapper[4861]: I0310 20:20:00.428120 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ndghm\" (UniqueName: \"kubernetes.io/projected/ab5d8759-dfce-43dc-9511-dd4d55b23b00-kube-api-access-ndghm\") pod \"auto-csr-approver-29552900-knzjx\" (UID: \"ab5d8759-dfce-43dc-9511-dd4d55b23b00\") " pod="openshift-infra/auto-csr-approver-29552900-knzjx" Mar 10 20:20:00 crc kubenswrapper[4861]: I0310 20:20:00.505174 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29552900-knzjx" Mar 10 20:20:00 crc kubenswrapper[4861]: I0310 20:20:00.695004 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-1" Mar 10 20:20:00 crc kubenswrapper[4861]: I0310 20:20:00.702248 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-2" Mar 10 20:20:00 crc kubenswrapper[4861]: I0310 20:20:00.747037 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29552900-knzjx"] Mar 10 20:20:00 crc kubenswrapper[4861]: W0310 20:20:00.754992 4861 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podab5d8759_dfce_43dc_9511_dd4d55b23b00.slice/crio-f844aa9b5a52b790643fe2660e3e80afe3dbacdf210b29c5819607355e0e627a WatchSource:0}: Error finding container f844aa9b5a52b790643fe2660e3e80afe3dbacdf210b29c5819607355e0e627a: Status 404 returned error can't find the container with id f844aa9b5a52b790643fe2660e3e80afe3dbacdf210b29c5819607355e0e627a Mar 10 20:20:00 crc kubenswrapper[4861]: I0310 20:20:00.757560 4861 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 10 20:20:00 crc 
kubenswrapper[4861]: I0310 20:20:00.866606 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0" Mar 10 20:20:01 crc kubenswrapper[4861]: I0310 20:20:01.318153 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0" Mar 10 20:20:01 crc kubenswrapper[4861]: I0310 20:20:01.375428 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-2" Mar 10 20:20:01 crc kubenswrapper[4861]: I0310 20:20:01.385326 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-1" Mar 10 20:20:01 crc kubenswrapper[4861]: I0310 20:20:01.401076 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0" Mar 10 20:20:01 crc kubenswrapper[4861]: I0310 20:20:01.472079 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-1" Mar 10 20:20:01 crc kubenswrapper[4861]: I0310 20:20:01.639055 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-75554b46df-wnksv"] Mar 10 20:20:01 crc kubenswrapper[4861]: I0310 20:20:01.640311 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-75554b46df-wnksv" Mar 10 20:20:01 crc kubenswrapper[4861]: I0310 20:20:01.642739 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb" Mar 10 20:20:01 crc kubenswrapper[4861]: I0310 20:20:01.664552 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-75554b46df-wnksv"] Mar 10 20:20:01 crc kubenswrapper[4861]: I0310 20:20:01.715653 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z5zsc\" (UniqueName: \"kubernetes.io/projected/2d9a7aed-a70a-4390-8eda-8c893880d8c5-kube-api-access-z5zsc\") pod \"dnsmasq-dns-75554b46df-wnksv\" (UID: \"2d9a7aed-a70a-4390-8eda-8c893880d8c5\") " pod="openstack/dnsmasq-dns-75554b46df-wnksv" Mar 10 20:20:01 crc kubenswrapper[4861]: I0310 20:20:01.715720 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2d9a7aed-a70a-4390-8eda-8c893880d8c5-dns-svc\") pod \"dnsmasq-dns-75554b46df-wnksv\" (UID: \"2d9a7aed-a70a-4390-8eda-8c893880d8c5\") " pod="openstack/dnsmasq-dns-75554b46df-wnksv" Mar 10 20:20:01 crc kubenswrapper[4861]: I0310 20:20:01.715746 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2d9a7aed-a70a-4390-8eda-8c893880d8c5-ovsdbserver-nb\") pod \"dnsmasq-dns-75554b46df-wnksv\" (UID: \"2d9a7aed-a70a-4390-8eda-8c893880d8c5\") " pod="openstack/dnsmasq-dns-75554b46df-wnksv" Mar 10 20:20:01 crc kubenswrapper[4861]: I0310 20:20:01.715770 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2d9a7aed-a70a-4390-8eda-8c893880d8c5-config\") pod \"dnsmasq-dns-75554b46df-wnksv\" (UID: \"2d9a7aed-a70a-4390-8eda-8c893880d8c5\") " 
pod="openstack/dnsmasq-dns-75554b46df-wnksv" Mar 10 20:20:01 crc kubenswrapper[4861]: I0310 20:20:01.737691 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29552900-knzjx" event={"ID":"ab5d8759-dfce-43dc-9511-dd4d55b23b00","Type":"ContainerStarted","Data":"f844aa9b5a52b790643fe2660e3e80afe3dbacdf210b29c5819607355e0e627a"} Mar 10 20:20:01 crc kubenswrapper[4861]: I0310 20:20:01.816793 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z5zsc\" (UniqueName: \"kubernetes.io/projected/2d9a7aed-a70a-4390-8eda-8c893880d8c5-kube-api-access-z5zsc\") pod \"dnsmasq-dns-75554b46df-wnksv\" (UID: \"2d9a7aed-a70a-4390-8eda-8c893880d8c5\") " pod="openstack/dnsmasq-dns-75554b46df-wnksv" Mar 10 20:20:01 crc kubenswrapper[4861]: I0310 20:20:01.816871 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2d9a7aed-a70a-4390-8eda-8c893880d8c5-dns-svc\") pod \"dnsmasq-dns-75554b46df-wnksv\" (UID: \"2d9a7aed-a70a-4390-8eda-8c893880d8c5\") " pod="openstack/dnsmasq-dns-75554b46df-wnksv" Mar 10 20:20:01 crc kubenswrapper[4861]: I0310 20:20:01.816909 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2d9a7aed-a70a-4390-8eda-8c893880d8c5-ovsdbserver-nb\") pod \"dnsmasq-dns-75554b46df-wnksv\" (UID: \"2d9a7aed-a70a-4390-8eda-8c893880d8c5\") " pod="openstack/dnsmasq-dns-75554b46df-wnksv" Mar 10 20:20:01 crc kubenswrapper[4861]: I0310 20:20:01.816944 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2d9a7aed-a70a-4390-8eda-8c893880d8c5-config\") pod \"dnsmasq-dns-75554b46df-wnksv\" (UID: \"2d9a7aed-a70a-4390-8eda-8c893880d8c5\") " pod="openstack/dnsmasq-dns-75554b46df-wnksv" Mar 10 20:20:01 crc kubenswrapper[4861]: I0310 20:20:01.817867 4861 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2d9a7aed-a70a-4390-8eda-8c893880d8c5-dns-svc\") pod \"dnsmasq-dns-75554b46df-wnksv\" (UID: \"2d9a7aed-a70a-4390-8eda-8c893880d8c5\") " pod="openstack/dnsmasq-dns-75554b46df-wnksv" Mar 10 20:20:01 crc kubenswrapper[4861]: I0310 20:20:01.818209 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2d9a7aed-a70a-4390-8eda-8c893880d8c5-ovsdbserver-nb\") pod \"dnsmasq-dns-75554b46df-wnksv\" (UID: \"2d9a7aed-a70a-4390-8eda-8c893880d8c5\") " pod="openstack/dnsmasq-dns-75554b46df-wnksv" Mar 10 20:20:01 crc kubenswrapper[4861]: I0310 20:20:01.818991 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2d9a7aed-a70a-4390-8eda-8c893880d8c5-config\") pod \"dnsmasq-dns-75554b46df-wnksv\" (UID: \"2d9a7aed-a70a-4390-8eda-8c893880d8c5\") " pod="openstack/dnsmasq-dns-75554b46df-wnksv" Mar 10 20:20:01 crc kubenswrapper[4861]: I0310 20:20:01.836286 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z5zsc\" (UniqueName: \"kubernetes.io/projected/2d9a7aed-a70a-4390-8eda-8c893880d8c5-kube-api-access-z5zsc\") pod \"dnsmasq-dns-75554b46df-wnksv\" (UID: \"2d9a7aed-a70a-4390-8eda-8c893880d8c5\") " pod="openstack/dnsmasq-dns-75554b46df-wnksv" Mar 10 20:20:01 crc kubenswrapper[4861]: I0310 20:20:01.976247 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-75554b46df-wnksv" Mar 10 20:20:02 crc kubenswrapper[4861]: I0310 20:20:02.231402 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-75554b46df-wnksv"] Mar 10 20:20:02 crc kubenswrapper[4861]: W0310 20:20:02.239369 4861 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2d9a7aed_a70a_4390_8eda_8c893880d8c5.slice/crio-68eedd7bd6f05d58740613e6704b42d303c58e5311a8c584923e4e0f3cea36e1 WatchSource:0}: Error finding container 68eedd7bd6f05d58740613e6704b42d303c58e5311a8c584923e4e0f3cea36e1: Status 404 returned error can't find the container with id 68eedd7bd6f05d58740613e6704b42d303c58e5311a8c584923e4e0f3cea36e1 Mar 10 20:20:02 crc kubenswrapper[4861]: I0310 20:20:02.476994 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-8gkzl"] Mar 10 20:20:02 crc kubenswrapper[4861]: I0310 20:20:02.478911 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-8gkzl" Mar 10 20:20:02 crc kubenswrapper[4861]: I0310 20:20:02.493503 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-8gkzl"] Mar 10 20:20:02 crc kubenswrapper[4861]: I0310 20:20:02.528672 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0d359be0-9080-42cc-9d49-2e7604bab469-utilities\") pod \"redhat-operators-8gkzl\" (UID: \"0d359be0-9080-42cc-9d49-2e7604bab469\") " pod="openshift-marketplace/redhat-operators-8gkzl" Mar 10 20:20:02 crc kubenswrapper[4861]: I0310 20:20:02.528796 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gtrt4\" (UniqueName: \"kubernetes.io/projected/0d359be0-9080-42cc-9d49-2e7604bab469-kube-api-access-gtrt4\") pod \"redhat-operators-8gkzl\" (UID: \"0d359be0-9080-42cc-9d49-2e7604bab469\") " pod="openshift-marketplace/redhat-operators-8gkzl" Mar 10 20:20:02 crc kubenswrapper[4861]: I0310 20:20:02.528874 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0d359be0-9080-42cc-9d49-2e7604bab469-catalog-content\") pod \"redhat-operators-8gkzl\" (UID: \"0d359be0-9080-42cc-9d49-2e7604bab469\") " pod="openshift-marketplace/redhat-operators-8gkzl" Mar 10 20:20:02 crc kubenswrapper[4861]: I0310 20:20:02.629908 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0d359be0-9080-42cc-9d49-2e7604bab469-utilities\") pod \"redhat-operators-8gkzl\" (UID: \"0d359be0-9080-42cc-9d49-2e7604bab469\") " pod="openshift-marketplace/redhat-operators-8gkzl" Mar 10 20:20:02 crc kubenswrapper[4861]: I0310 20:20:02.630661 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-gtrt4\" (UniqueName: \"kubernetes.io/projected/0d359be0-9080-42cc-9d49-2e7604bab469-kube-api-access-gtrt4\") pod \"redhat-operators-8gkzl\" (UID: \"0d359be0-9080-42cc-9d49-2e7604bab469\") " pod="openshift-marketplace/redhat-operators-8gkzl" Mar 10 20:20:02 crc kubenswrapper[4861]: I0310 20:20:02.630590 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0d359be0-9080-42cc-9d49-2e7604bab469-utilities\") pod \"redhat-operators-8gkzl\" (UID: \"0d359be0-9080-42cc-9d49-2e7604bab469\") " pod="openshift-marketplace/redhat-operators-8gkzl" Mar 10 20:20:02 crc kubenswrapper[4861]: I0310 20:20:02.631093 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0d359be0-9080-42cc-9d49-2e7604bab469-catalog-content\") pod \"redhat-operators-8gkzl\" (UID: \"0d359be0-9080-42cc-9d49-2e7604bab469\") " pod="openshift-marketplace/redhat-operators-8gkzl" Mar 10 20:20:02 crc kubenswrapper[4861]: I0310 20:20:02.631317 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0d359be0-9080-42cc-9d49-2e7604bab469-catalog-content\") pod \"redhat-operators-8gkzl\" (UID: \"0d359be0-9080-42cc-9d49-2e7604bab469\") " pod="openshift-marketplace/redhat-operators-8gkzl" Mar 10 20:20:02 crc kubenswrapper[4861]: I0310 20:20:02.652923 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gtrt4\" (UniqueName: \"kubernetes.io/projected/0d359be0-9080-42cc-9d49-2e7604bab469-kube-api-access-gtrt4\") pod \"redhat-operators-8gkzl\" (UID: \"0d359be0-9080-42cc-9d49-2e7604bab469\") " pod="openshift-marketplace/redhat-operators-8gkzl" Mar 10 20:20:02 crc kubenswrapper[4861]: I0310 20:20:02.695073 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-1" Mar 10 20:20:02 crc 
kubenswrapper[4861]: I0310 20:20:02.702542 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-2" Mar 10 20:20:02 crc kubenswrapper[4861]: I0310 20:20:02.746268 4861 generic.go:334] "Generic (PLEG): container finished" podID="ab5d8759-dfce-43dc-9511-dd4d55b23b00" containerID="d3a7b1fb7b52202ae49c22aafea068ec87b86130fe3adc846096983a0e3605f1" exitCode=0 Mar 10 20:20:02 crc kubenswrapper[4861]: I0310 20:20:02.746315 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29552900-knzjx" event={"ID":"ab5d8759-dfce-43dc-9511-dd4d55b23b00","Type":"ContainerDied","Data":"d3a7b1fb7b52202ae49c22aafea068ec87b86130fe3adc846096983a0e3605f1"} Mar 10 20:20:02 crc kubenswrapper[4861]: I0310 20:20:02.748083 4861 generic.go:334] "Generic (PLEG): container finished" podID="2d9a7aed-a70a-4390-8eda-8c893880d8c5" containerID="824c349b13d1c8766c1150e5161ea66f2c7ccb045634bc51af74929f3dbf16e7" exitCode=0 Mar 10 20:20:02 crc kubenswrapper[4861]: I0310 20:20:02.748186 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-75554b46df-wnksv" event={"ID":"2d9a7aed-a70a-4390-8eda-8c893880d8c5","Type":"ContainerDied","Data":"824c349b13d1c8766c1150e5161ea66f2c7ccb045634bc51af74929f3dbf16e7"} Mar 10 20:20:02 crc kubenswrapper[4861]: I0310 20:20:02.748224 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-75554b46df-wnksv" event={"ID":"2d9a7aed-a70a-4390-8eda-8c893880d8c5","Type":"ContainerStarted","Data":"68eedd7bd6f05d58740613e6704b42d303c58e5311a8c584923e4e0f3cea36e1"} Mar 10 20:20:02 crc kubenswrapper[4861]: I0310 20:20:02.807609 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-8gkzl" Mar 10 20:20:02 crc kubenswrapper[4861]: I0310 20:20:02.867162 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0" Mar 10 20:20:02 crc kubenswrapper[4861]: I0310 20:20:02.958737 4861 scope.go:117] "RemoveContainer" containerID="cd2259ee04441075924dcee5b49ce2f64778d63f5c1b13672065b616c80497ca" Mar 10 20:20:02 crc kubenswrapper[4861]: E0310 20:20:02.959595 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qttbr_openshift-machine-config-operator(771189c2-452d-4204-a0b7-abfe9ba62bd0)\"" pod="openshift-machine-config-operator/machine-config-daemon-qttbr" podUID="771189c2-452d-4204-a0b7-abfe9ba62bd0" Mar 10 20:20:03 crc kubenswrapper[4861]: I0310 20:20:03.277292 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-8gkzl"] Mar 10 20:20:03 crc kubenswrapper[4861]: I0310 20:20:03.744286 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-1" Mar 10 20:20:03 crc kubenswrapper[4861]: I0310 20:20:03.745926 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-2" Mar 10 20:20:03 crc kubenswrapper[4861]: I0310 20:20:03.826889 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-75554b46df-wnksv" event={"ID":"2d9a7aed-a70a-4390-8eda-8c893880d8c5","Type":"ContainerStarted","Data":"d0b3727548eea71a19cf33e3f7d0beeee4156cbfd5f29659e5312b5bb01e30e7"} Mar 10 20:20:03 crc kubenswrapper[4861]: I0310 20:20:03.827115 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-75554b46df-wnksv" Mar 10 20:20:03 crc kubenswrapper[4861]: I0310 20:20:03.838925 4861 generic.go:334] 
"Generic (PLEG): container finished" podID="0d359be0-9080-42cc-9d49-2e7604bab469" containerID="afa8cc3c66ce8eab8cb0b7586cff43a5112507dcea9fac77ac45ce99e59329d5" exitCode=0 Mar 10 20:20:03 crc kubenswrapper[4861]: I0310 20:20:03.840987 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8gkzl" event={"ID":"0d359be0-9080-42cc-9d49-2e7604bab469","Type":"ContainerDied","Data":"afa8cc3c66ce8eab8cb0b7586cff43a5112507dcea9fac77ac45ce99e59329d5"} Mar 10 20:20:03 crc kubenswrapper[4861]: I0310 20:20:03.841015 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8gkzl" event={"ID":"0d359be0-9080-42cc-9d49-2e7604bab469","Type":"ContainerStarted","Data":"fc9181b4415ae8009c3cea1a41c0a9a0238c5d057726540009af452b67bba7d4"} Mar 10 20:20:03 crc kubenswrapper[4861]: I0310 20:20:03.871038 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-75554b46df-wnksv" podStartSLOduration=2.871024171 podStartE2EDuration="2.871024171s" podCreationTimestamp="2026-03-10 20:20:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-10 20:20:03.868249096 +0000 UTC m=+5547.631685066" watchObservedRunningTime="2026-03-10 20:20:03.871024171 +0000 UTC m=+5547.634460131" Mar 10 20:20:03 crc kubenswrapper[4861]: I0310 20:20:03.898984 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-1" Mar 10 20:20:03 crc kubenswrapper[4861]: I0310 20:20:03.932384 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0" Mar 10 20:20:04 crc kubenswrapper[4861]: I0310 20:20:04.020329 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0" Mar 10 20:20:04 crc kubenswrapper[4861]: I0310 20:20:04.183794 4861 kubelet.go:2437] "SyncLoop DELETE" 
source="api" pods=["openstack/dnsmasq-dns-75554b46df-wnksv"] Mar 10 20:20:04 crc kubenswrapper[4861]: I0310 20:20:04.221402 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-f59dd4759-z89lm"] Mar 10 20:20:04 crc kubenswrapper[4861]: I0310 20:20:04.222753 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-f59dd4759-z89lm" Mar 10 20:20:04 crc kubenswrapper[4861]: I0310 20:20:04.239105 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb" Mar 10 20:20:04 crc kubenswrapper[4861]: I0310 20:20:04.247351 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-f59dd4759-z89lm"] Mar 10 20:20:04 crc kubenswrapper[4861]: I0310 20:20:04.356571 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29552900-knzjx" Mar 10 20:20:04 crc kubenswrapper[4861]: I0310 20:20:04.368557 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5ddf2cea-0c33-4026-9c21-21cf8a1c3dbf-ovsdbserver-nb\") pod \"dnsmasq-dns-f59dd4759-z89lm\" (UID: \"5ddf2cea-0c33-4026-9c21-21cf8a1c3dbf\") " pod="openstack/dnsmasq-dns-f59dd4759-z89lm" Mar 10 20:20:04 crc kubenswrapper[4861]: I0310 20:20:04.368630 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5ddf2cea-0c33-4026-9c21-21cf8a1c3dbf-dns-svc\") pod \"dnsmasq-dns-f59dd4759-z89lm\" (UID: \"5ddf2cea-0c33-4026-9c21-21cf8a1c3dbf\") " pod="openstack/dnsmasq-dns-f59dd4759-z89lm" Mar 10 20:20:04 crc kubenswrapper[4861]: I0310 20:20:04.368653 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pwxr2\" (UniqueName: 
\"kubernetes.io/projected/5ddf2cea-0c33-4026-9c21-21cf8a1c3dbf-kube-api-access-pwxr2\") pod \"dnsmasq-dns-f59dd4759-z89lm\" (UID: \"5ddf2cea-0c33-4026-9c21-21cf8a1c3dbf\") " pod="openstack/dnsmasq-dns-f59dd4759-z89lm" Mar 10 20:20:04 crc kubenswrapper[4861]: I0310 20:20:04.368689 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5ddf2cea-0c33-4026-9c21-21cf8a1c3dbf-config\") pod \"dnsmasq-dns-f59dd4759-z89lm\" (UID: \"5ddf2cea-0c33-4026-9c21-21cf8a1c3dbf\") " pod="openstack/dnsmasq-dns-f59dd4759-z89lm" Mar 10 20:20:04 crc kubenswrapper[4861]: I0310 20:20:04.368733 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5ddf2cea-0c33-4026-9c21-21cf8a1c3dbf-ovsdbserver-sb\") pod \"dnsmasq-dns-f59dd4759-z89lm\" (UID: \"5ddf2cea-0c33-4026-9c21-21cf8a1c3dbf\") " pod="openstack/dnsmasq-dns-f59dd4759-z89lm" Mar 10 20:20:04 crc kubenswrapper[4861]: I0310 20:20:04.469332 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ndghm\" (UniqueName: \"kubernetes.io/projected/ab5d8759-dfce-43dc-9511-dd4d55b23b00-kube-api-access-ndghm\") pod \"ab5d8759-dfce-43dc-9511-dd4d55b23b00\" (UID: \"ab5d8759-dfce-43dc-9511-dd4d55b23b00\") " Mar 10 20:20:04 crc kubenswrapper[4861]: I0310 20:20:04.469728 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5ddf2cea-0c33-4026-9c21-21cf8a1c3dbf-config\") pod \"dnsmasq-dns-f59dd4759-z89lm\" (UID: \"5ddf2cea-0c33-4026-9c21-21cf8a1c3dbf\") " pod="openstack/dnsmasq-dns-f59dd4759-z89lm" Mar 10 20:20:04 crc kubenswrapper[4861]: I0310 20:20:04.469773 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/5ddf2cea-0c33-4026-9c21-21cf8a1c3dbf-ovsdbserver-sb\") pod \"dnsmasq-dns-f59dd4759-z89lm\" (UID: \"5ddf2cea-0c33-4026-9c21-21cf8a1c3dbf\") " pod="openstack/dnsmasq-dns-f59dd4759-z89lm" Mar 10 20:20:04 crc kubenswrapper[4861]: I0310 20:20:04.469834 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5ddf2cea-0c33-4026-9c21-21cf8a1c3dbf-ovsdbserver-nb\") pod \"dnsmasq-dns-f59dd4759-z89lm\" (UID: \"5ddf2cea-0c33-4026-9c21-21cf8a1c3dbf\") " pod="openstack/dnsmasq-dns-f59dd4759-z89lm" Mar 10 20:20:04 crc kubenswrapper[4861]: I0310 20:20:04.469883 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5ddf2cea-0c33-4026-9c21-21cf8a1c3dbf-dns-svc\") pod \"dnsmasq-dns-f59dd4759-z89lm\" (UID: \"5ddf2cea-0c33-4026-9c21-21cf8a1c3dbf\") " pod="openstack/dnsmasq-dns-f59dd4759-z89lm" Mar 10 20:20:04 crc kubenswrapper[4861]: I0310 20:20:04.469899 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pwxr2\" (UniqueName: \"kubernetes.io/projected/5ddf2cea-0c33-4026-9c21-21cf8a1c3dbf-kube-api-access-pwxr2\") pod \"dnsmasq-dns-f59dd4759-z89lm\" (UID: \"5ddf2cea-0c33-4026-9c21-21cf8a1c3dbf\") " pod="openstack/dnsmasq-dns-f59dd4759-z89lm" Mar 10 20:20:04 crc kubenswrapper[4861]: I0310 20:20:04.470900 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5ddf2cea-0c33-4026-9c21-21cf8a1c3dbf-config\") pod \"dnsmasq-dns-f59dd4759-z89lm\" (UID: \"5ddf2cea-0c33-4026-9c21-21cf8a1c3dbf\") " pod="openstack/dnsmasq-dns-f59dd4759-z89lm" Mar 10 20:20:04 crc kubenswrapper[4861]: I0310 20:20:04.471141 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5ddf2cea-0c33-4026-9c21-21cf8a1c3dbf-dns-svc\") pod 
\"dnsmasq-dns-f59dd4759-z89lm\" (UID: \"5ddf2cea-0c33-4026-9c21-21cf8a1c3dbf\") " pod="openstack/dnsmasq-dns-f59dd4759-z89lm" Mar 10 20:20:04 crc kubenswrapper[4861]: I0310 20:20:04.471559 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5ddf2cea-0c33-4026-9c21-21cf8a1c3dbf-ovsdbserver-nb\") pod \"dnsmasq-dns-f59dd4759-z89lm\" (UID: \"5ddf2cea-0c33-4026-9c21-21cf8a1c3dbf\") " pod="openstack/dnsmasq-dns-f59dd4759-z89lm" Mar 10 20:20:04 crc kubenswrapper[4861]: I0310 20:20:04.472139 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5ddf2cea-0c33-4026-9c21-21cf8a1c3dbf-ovsdbserver-sb\") pod \"dnsmasq-dns-f59dd4759-z89lm\" (UID: \"5ddf2cea-0c33-4026-9c21-21cf8a1c3dbf\") " pod="openstack/dnsmasq-dns-f59dd4759-z89lm" Mar 10 20:20:04 crc kubenswrapper[4861]: I0310 20:20:04.474650 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ab5d8759-dfce-43dc-9511-dd4d55b23b00-kube-api-access-ndghm" (OuterVolumeSpecName: "kube-api-access-ndghm") pod "ab5d8759-dfce-43dc-9511-dd4d55b23b00" (UID: "ab5d8759-dfce-43dc-9511-dd4d55b23b00"). InnerVolumeSpecName "kube-api-access-ndghm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 10 20:20:04 crc kubenswrapper[4861]: I0310 20:20:04.489481 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pwxr2\" (UniqueName: \"kubernetes.io/projected/5ddf2cea-0c33-4026-9c21-21cf8a1c3dbf-kube-api-access-pwxr2\") pod \"dnsmasq-dns-f59dd4759-z89lm\" (UID: \"5ddf2cea-0c33-4026-9c21-21cf8a1c3dbf\") " pod="openstack/dnsmasq-dns-f59dd4759-z89lm" Mar 10 20:20:04 crc kubenswrapper[4861]: I0310 20:20:04.571502 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ndghm\" (UniqueName: \"kubernetes.io/projected/ab5d8759-dfce-43dc-9511-dd4d55b23b00-kube-api-access-ndghm\") on node \"crc\" DevicePath \"\"" Mar 10 20:20:04 crc kubenswrapper[4861]: I0310 20:20:04.574436 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-f59dd4759-z89lm" Mar 10 20:20:04 crc kubenswrapper[4861]: I0310 20:20:04.855519 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29552900-knzjx" event={"ID":"ab5d8759-dfce-43dc-9511-dd4d55b23b00","Type":"ContainerDied","Data":"f844aa9b5a52b790643fe2660e3e80afe3dbacdf210b29c5819607355e0e627a"} Mar 10 20:20:04 crc kubenswrapper[4861]: I0310 20:20:04.855565 4861 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f844aa9b5a52b790643fe2660e3e80afe3dbacdf210b29c5819607355e0e627a" Mar 10 20:20:04 crc kubenswrapper[4861]: I0310 20:20:04.855626 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29552900-knzjx" Mar 10 20:20:05 crc kubenswrapper[4861]: I0310 20:20:05.087341 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-f59dd4759-z89lm"] Mar 10 20:20:05 crc kubenswrapper[4861]: W0310 20:20:05.093787 4861 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5ddf2cea_0c33_4026_9c21_21cf8a1c3dbf.slice/crio-72247731b85473b4504f1634cbded5d51a810edc50e7b5c21b3b6adda25030fd WatchSource:0}: Error finding container 72247731b85473b4504f1634cbded5d51a810edc50e7b5c21b3b6adda25030fd: Status 404 returned error can't find the container with id 72247731b85473b4504f1634cbded5d51a810edc50e7b5c21b3b6adda25030fd Mar 10 20:20:05 crc kubenswrapper[4861]: I0310 20:20:05.332250 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-2" Mar 10 20:20:05 crc kubenswrapper[4861]: I0310 20:20:05.421263 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29552894-sz8dg"] Mar 10 20:20:05 crc kubenswrapper[4861]: I0310 20:20:05.425557 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29552894-sz8dg"] Mar 10 20:20:05 crc kubenswrapper[4861]: I0310 20:20:05.863780 4861 generic.go:334] "Generic (PLEG): container finished" podID="5ddf2cea-0c33-4026-9c21-21cf8a1c3dbf" containerID="2e84493bb17694d5f4bb95cd76eac170d6fb9d586a811af119a656b496b843a2" exitCode=0 Mar 10 20:20:05 crc kubenswrapper[4861]: I0310 20:20:05.863844 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-f59dd4759-z89lm" event={"ID":"5ddf2cea-0c33-4026-9c21-21cf8a1c3dbf","Type":"ContainerDied","Data":"2e84493bb17694d5f4bb95cd76eac170d6fb9d586a811af119a656b496b843a2"} Mar 10 20:20:05 crc kubenswrapper[4861]: I0310 20:20:05.864141 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/dnsmasq-dns-f59dd4759-z89lm" event={"ID":"5ddf2cea-0c33-4026-9c21-21cf8a1c3dbf","Type":"ContainerStarted","Data":"72247731b85473b4504f1634cbded5d51a810edc50e7b5c21b3b6adda25030fd"} Mar 10 20:20:05 crc kubenswrapper[4861]: I0310 20:20:05.864322 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-75554b46df-wnksv" podUID="2d9a7aed-a70a-4390-8eda-8c893880d8c5" containerName="dnsmasq-dns" containerID="cri-o://d0b3727548eea71a19cf33e3f7d0beeee4156cbfd5f29659e5312b5bb01e30e7" gracePeriod=10 Mar 10 20:20:06 crc kubenswrapper[4861]: I0310 20:20:06.385966 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-75554b46df-wnksv" Mar 10 20:20:06 crc kubenswrapper[4861]: I0310 20:20:06.508363 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2d9a7aed-a70a-4390-8eda-8c893880d8c5-ovsdbserver-nb\") pod \"2d9a7aed-a70a-4390-8eda-8c893880d8c5\" (UID: \"2d9a7aed-a70a-4390-8eda-8c893880d8c5\") " Mar 10 20:20:06 crc kubenswrapper[4861]: I0310 20:20:06.508504 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2d9a7aed-a70a-4390-8eda-8c893880d8c5-dns-svc\") pod \"2d9a7aed-a70a-4390-8eda-8c893880d8c5\" (UID: \"2d9a7aed-a70a-4390-8eda-8c893880d8c5\") " Mar 10 20:20:06 crc kubenswrapper[4861]: I0310 20:20:06.508529 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z5zsc\" (UniqueName: \"kubernetes.io/projected/2d9a7aed-a70a-4390-8eda-8c893880d8c5-kube-api-access-z5zsc\") pod \"2d9a7aed-a70a-4390-8eda-8c893880d8c5\" (UID: \"2d9a7aed-a70a-4390-8eda-8c893880d8c5\") " Mar 10 20:20:06 crc kubenswrapper[4861]: I0310 20:20:06.508576 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/2d9a7aed-a70a-4390-8eda-8c893880d8c5-config\") pod \"2d9a7aed-a70a-4390-8eda-8c893880d8c5\" (UID: \"2d9a7aed-a70a-4390-8eda-8c893880d8c5\") " Mar 10 20:20:06 crc kubenswrapper[4861]: I0310 20:20:06.518662 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2d9a7aed-a70a-4390-8eda-8c893880d8c5-kube-api-access-z5zsc" (OuterVolumeSpecName: "kube-api-access-z5zsc") pod "2d9a7aed-a70a-4390-8eda-8c893880d8c5" (UID: "2d9a7aed-a70a-4390-8eda-8c893880d8c5"). InnerVolumeSpecName "kube-api-access-z5zsc". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 10 20:20:06 crc kubenswrapper[4861]: I0310 20:20:06.548143 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2d9a7aed-a70a-4390-8eda-8c893880d8c5-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "2d9a7aed-a70a-4390-8eda-8c893880d8c5" (UID: "2d9a7aed-a70a-4390-8eda-8c893880d8c5"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 10 20:20:06 crc kubenswrapper[4861]: I0310 20:20:06.548938 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2d9a7aed-a70a-4390-8eda-8c893880d8c5-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "2d9a7aed-a70a-4390-8eda-8c893880d8c5" (UID: "2d9a7aed-a70a-4390-8eda-8c893880d8c5"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 10 20:20:06 crc kubenswrapper[4861]: I0310 20:20:06.553052 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2d9a7aed-a70a-4390-8eda-8c893880d8c5-config" (OuterVolumeSpecName: "config") pod "2d9a7aed-a70a-4390-8eda-8c893880d8c5" (UID: "2d9a7aed-a70a-4390-8eda-8c893880d8c5"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 10 20:20:06 crc kubenswrapper[4861]: I0310 20:20:06.611235 4861 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2d9a7aed-a70a-4390-8eda-8c893880d8c5-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Mar 10 20:20:06 crc kubenswrapper[4861]: I0310 20:20:06.611306 4861 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2d9a7aed-a70a-4390-8eda-8c893880d8c5-dns-svc\") on node \"crc\" DevicePath \"\"" Mar 10 20:20:06 crc kubenswrapper[4861]: I0310 20:20:06.611331 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z5zsc\" (UniqueName: \"kubernetes.io/projected/2d9a7aed-a70a-4390-8eda-8c893880d8c5-kube-api-access-z5zsc\") on node \"crc\" DevicePath \"\"" Mar 10 20:20:06 crc kubenswrapper[4861]: I0310 20:20:06.611352 4861 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2d9a7aed-a70a-4390-8eda-8c893880d8c5-config\") on node \"crc\" DevicePath \"\"" Mar 10 20:20:06 crc kubenswrapper[4861]: I0310 20:20:06.874188 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-f59dd4759-z89lm" event={"ID":"5ddf2cea-0c33-4026-9c21-21cf8a1c3dbf","Type":"ContainerStarted","Data":"c2c1781d48ddf7189d349a9f830c02132e8f43fb091df1d6799a103f60740edf"} Mar 10 20:20:06 crc kubenswrapper[4861]: I0310 20:20:06.874298 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-f59dd4759-z89lm" Mar 10 20:20:06 crc kubenswrapper[4861]: I0310 20:20:06.877804 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-75554b46df-wnksv" Mar 10 20:20:06 crc kubenswrapper[4861]: I0310 20:20:06.882906 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-75554b46df-wnksv" event={"ID":"2d9a7aed-a70a-4390-8eda-8c893880d8c5","Type":"ContainerDied","Data":"d0b3727548eea71a19cf33e3f7d0beeee4156cbfd5f29659e5312b5bb01e30e7"} Mar 10 20:20:06 crc kubenswrapper[4861]: I0310 20:20:06.882960 4861 scope.go:117] "RemoveContainer" containerID="d0b3727548eea71a19cf33e3f7d0beeee4156cbfd5f29659e5312b5bb01e30e7" Mar 10 20:20:06 crc kubenswrapper[4861]: I0310 20:20:06.877775 4861 generic.go:334] "Generic (PLEG): container finished" podID="2d9a7aed-a70a-4390-8eda-8c893880d8c5" containerID="d0b3727548eea71a19cf33e3f7d0beeee4156cbfd5f29659e5312b5bb01e30e7" exitCode=0 Mar 10 20:20:06 crc kubenswrapper[4861]: I0310 20:20:06.894617 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-75554b46df-wnksv" event={"ID":"2d9a7aed-a70a-4390-8eda-8c893880d8c5","Type":"ContainerDied","Data":"68eedd7bd6f05d58740613e6704b42d303c58e5311a8c584923e4e0f3cea36e1"} Mar 10 20:20:06 crc kubenswrapper[4861]: I0310 20:20:06.917806 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-f59dd4759-z89lm" podStartSLOduration=2.917781441 podStartE2EDuration="2.917781441s" podCreationTimestamp="2026-03-10 20:20:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-10 20:20:06.90084566 +0000 UTC m=+5550.664281660" watchObservedRunningTime="2026-03-10 20:20:06.917781441 +0000 UTC m=+5550.681217431" Mar 10 20:20:06 crc kubenswrapper[4861]: I0310 20:20:06.932621 4861 scope.go:117] "RemoveContainer" containerID="824c349b13d1c8766c1150e5161ea66f2c7ccb045634bc51af74929f3dbf16e7" Mar 10 20:20:06 crc kubenswrapper[4861]: I0310 20:20:06.934362 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/dnsmasq-dns-75554b46df-wnksv"] Mar 10 20:20:06 crc kubenswrapper[4861]: I0310 20:20:06.945473 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-75554b46df-wnksv"] Mar 10 20:20:06 crc kubenswrapper[4861]: I0310 20:20:06.953344 4861 scope.go:117] "RemoveContainer" containerID="d0b3727548eea71a19cf33e3f7d0beeee4156cbfd5f29659e5312b5bb01e30e7" Mar 10 20:20:06 crc kubenswrapper[4861]: E0310 20:20:06.954224 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d0b3727548eea71a19cf33e3f7d0beeee4156cbfd5f29659e5312b5bb01e30e7\": container with ID starting with d0b3727548eea71a19cf33e3f7d0beeee4156cbfd5f29659e5312b5bb01e30e7 not found: ID does not exist" containerID="d0b3727548eea71a19cf33e3f7d0beeee4156cbfd5f29659e5312b5bb01e30e7" Mar 10 20:20:06 crc kubenswrapper[4861]: I0310 20:20:06.954266 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d0b3727548eea71a19cf33e3f7d0beeee4156cbfd5f29659e5312b5bb01e30e7"} err="failed to get container status \"d0b3727548eea71a19cf33e3f7d0beeee4156cbfd5f29659e5312b5bb01e30e7\": rpc error: code = NotFound desc = could not find container \"d0b3727548eea71a19cf33e3f7d0beeee4156cbfd5f29659e5312b5bb01e30e7\": container with ID starting with d0b3727548eea71a19cf33e3f7d0beeee4156cbfd5f29659e5312b5bb01e30e7 not found: ID does not exist" Mar 10 20:20:06 crc kubenswrapper[4861]: I0310 20:20:06.954291 4861 scope.go:117] "RemoveContainer" containerID="824c349b13d1c8766c1150e5161ea66f2c7ccb045634bc51af74929f3dbf16e7" Mar 10 20:20:06 crc kubenswrapper[4861]: E0310 20:20:06.955559 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"824c349b13d1c8766c1150e5161ea66f2c7ccb045634bc51af74929f3dbf16e7\": container with ID starting with 824c349b13d1c8766c1150e5161ea66f2c7ccb045634bc51af74929f3dbf16e7 not found: ID 
does not exist" containerID="824c349b13d1c8766c1150e5161ea66f2c7ccb045634bc51af74929f3dbf16e7" Mar 10 20:20:06 crc kubenswrapper[4861]: I0310 20:20:06.955600 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"824c349b13d1c8766c1150e5161ea66f2c7ccb045634bc51af74929f3dbf16e7"} err="failed to get container status \"824c349b13d1c8766c1150e5161ea66f2c7ccb045634bc51af74929f3dbf16e7\": rpc error: code = NotFound desc = could not find container \"824c349b13d1c8766c1150e5161ea66f2c7ccb045634bc51af74929f3dbf16e7\": container with ID starting with 824c349b13d1c8766c1150e5161ea66f2c7ccb045634bc51af74929f3dbf16e7 not found: ID does not exist" Mar 10 20:20:06 crc kubenswrapper[4861]: I0310 20:20:06.969175 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1758ee36-7dff-46e2-b597-434fbf60334d" path="/var/lib/kubelet/pods/1758ee36-7dff-46e2-b597-434fbf60334d/volumes" Mar 10 20:20:06 crc kubenswrapper[4861]: I0310 20:20:06.970116 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2d9a7aed-a70a-4390-8eda-8c893880d8c5" path="/var/lib/kubelet/pods/2d9a7aed-a70a-4390-8eda-8c893880d8c5/volumes" Mar 10 20:20:07 crc kubenswrapper[4861]: I0310 20:20:07.751369 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-2" Mar 10 20:20:10 crc kubenswrapper[4861]: I0310 20:20:10.115439 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-copy-data"] Mar 10 20:20:10 crc kubenswrapper[4861]: E0310 20:20:10.117905 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d9a7aed-a70a-4390-8eda-8c893880d8c5" containerName="dnsmasq-dns" Mar 10 20:20:10 crc kubenswrapper[4861]: I0310 20:20:10.117975 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d9a7aed-a70a-4390-8eda-8c893880d8c5" containerName="dnsmasq-dns" Mar 10 20:20:10 crc kubenswrapper[4861]: E0310 20:20:10.118029 4861 cpu_manager.go:410] "RemoveStaleState: 
removing container" podUID="ab5d8759-dfce-43dc-9511-dd4d55b23b00" containerName="oc" Mar 10 20:20:10 crc kubenswrapper[4861]: I0310 20:20:10.118076 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="ab5d8759-dfce-43dc-9511-dd4d55b23b00" containerName="oc" Mar 10 20:20:10 crc kubenswrapper[4861]: E0310 20:20:10.118135 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d9a7aed-a70a-4390-8eda-8c893880d8c5" containerName="init" Mar 10 20:20:10 crc kubenswrapper[4861]: I0310 20:20:10.118180 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d9a7aed-a70a-4390-8eda-8c893880d8c5" containerName="init" Mar 10 20:20:10 crc kubenswrapper[4861]: I0310 20:20:10.118367 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="ab5d8759-dfce-43dc-9511-dd4d55b23b00" containerName="oc" Mar 10 20:20:10 crc kubenswrapper[4861]: I0310 20:20:10.118435 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="2d9a7aed-a70a-4390-8eda-8c893880d8c5" containerName="dnsmasq-dns" Mar 10 20:20:10 crc kubenswrapper[4861]: I0310 20:20:10.119001 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-copy-data" Mar 10 20:20:10 crc kubenswrapper[4861]: I0310 20:20:10.123258 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovn-data-cert" Mar 10 20:20:10 crc kubenswrapper[4861]: I0310 20:20:10.133877 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-copy-data"] Mar 10 20:20:10 crc kubenswrapper[4861]: I0310 20:20:10.173186 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-data-cert\" (UniqueName: \"kubernetes.io/secret/515ad971-29e5-4b2a-ab58-2e06d4d29c99-ovn-data-cert\") pod \"ovn-copy-data\" (UID: \"515ad971-29e5-4b2a-ab58-2e06d4d29c99\") " pod="openstack/ovn-copy-data" Mar 10 20:20:10 crc kubenswrapper[4861]: I0310 20:20:10.173397 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-2694f129-bd0c-4028-a860-6f1fb7f4278e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2694f129-bd0c-4028-a860-6f1fb7f4278e\") pod \"ovn-copy-data\" (UID: \"515ad971-29e5-4b2a-ab58-2e06d4d29c99\") " pod="openstack/ovn-copy-data" Mar 10 20:20:10 crc kubenswrapper[4861]: I0310 20:20:10.173509 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w2pkh\" (UniqueName: \"kubernetes.io/projected/515ad971-29e5-4b2a-ab58-2e06d4d29c99-kube-api-access-w2pkh\") pod \"ovn-copy-data\" (UID: \"515ad971-29e5-4b2a-ab58-2e06d4d29c99\") " pod="openstack/ovn-copy-data" Mar 10 20:20:10 crc kubenswrapper[4861]: I0310 20:20:10.275476 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-data-cert\" (UniqueName: \"kubernetes.io/secret/515ad971-29e5-4b2a-ab58-2e06d4d29c99-ovn-data-cert\") pod \"ovn-copy-data\" (UID: \"515ad971-29e5-4b2a-ab58-2e06d4d29c99\") " pod="openstack/ovn-copy-data" Mar 10 20:20:10 crc kubenswrapper[4861]: I0310 20:20:10.275578 4861 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"pvc-2694f129-bd0c-4028-a860-6f1fb7f4278e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2694f129-bd0c-4028-a860-6f1fb7f4278e\") pod \"ovn-copy-data\" (UID: \"515ad971-29e5-4b2a-ab58-2e06d4d29c99\") " pod="openstack/ovn-copy-data" Mar 10 20:20:10 crc kubenswrapper[4861]: I0310 20:20:10.275638 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w2pkh\" (UniqueName: \"kubernetes.io/projected/515ad971-29e5-4b2a-ab58-2e06d4d29c99-kube-api-access-w2pkh\") pod \"ovn-copy-data\" (UID: \"515ad971-29e5-4b2a-ab58-2e06d4d29c99\") " pod="openstack/ovn-copy-data" Mar 10 20:20:10 crc kubenswrapper[4861]: I0310 20:20:10.279283 4861 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Mar 10 20:20:10 crc kubenswrapper[4861]: I0310 20:20:10.279333 4861 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-2694f129-bd0c-4028-a860-6f1fb7f4278e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2694f129-bd0c-4028-a860-6f1fb7f4278e\") pod \"ovn-copy-data\" (UID: \"515ad971-29e5-4b2a-ab58-2e06d4d29c99\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/2133d7956cea2a029706636677189e592bcfa82811c6aa21ed6f629d16e149e9/globalmount\"" pod="openstack/ovn-copy-data" Mar 10 20:20:10 crc kubenswrapper[4861]: I0310 20:20:10.296096 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-data-cert\" (UniqueName: \"kubernetes.io/secret/515ad971-29e5-4b2a-ab58-2e06d4d29c99-ovn-data-cert\") pod \"ovn-copy-data\" (UID: \"515ad971-29e5-4b2a-ab58-2e06d4d29c99\") " pod="openstack/ovn-copy-data" Mar 10 20:20:10 crc kubenswrapper[4861]: I0310 20:20:10.297412 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w2pkh\" (UniqueName: 
\"kubernetes.io/projected/515ad971-29e5-4b2a-ab58-2e06d4d29c99-kube-api-access-w2pkh\") pod \"ovn-copy-data\" (UID: \"515ad971-29e5-4b2a-ab58-2e06d4d29c99\") " pod="openstack/ovn-copy-data" Mar 10 20:20:10 crc kubenswrapper[4861]: I0310 20:20:10.334447 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-2694f129-bd0c-4028-a860-6f1fb7f4278e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2694f129-bd0c-4028-a860-6f1fb7f4278e\") pod \"ovn-copy-data\" (UID: \"515ad971-29e5-4b2a-ab58-2e06d4d29c99\") " pod="openstack/ovn-copy-data" Mar 10 20:20:10 crc kubenswrapper[4861]: I0310 20:20:10.461542 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-copy-data" Mar 10 20:20:12 crc kubenswrapper[4861]: I0310 20:20:12.951104 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8gkzl" event={"ID":"0d359be0-9080-42cc-9d49-2e7604bab469","Type":"ContainerStarted","Data":"80ad9aeb1b62a4f3dcbf1e7620aadae81ec663241865cb7277d1fc82a2ca5ad6"} Mar 10 20:20:12 crc kubenswrapper[4861]: I0310 20:20:12.985779 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-copy-data"] Mar 10 20:20:12 crc kubenswrapper[4861]: W0310 20:20:12.989524 4861 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod515ad971_29e5_4b2a_ab58_2e06d4d29c99.slice/crio-3d7debbad7a694236c44279b0e33078631c2e0de19288ed885fa871fdd67a437 WatchSource:0}: Error finding container 3d7debbad7a694236c44279b0e33078631c2e0de19288ed885fa871fdd67a437: Status 404 returned error can't find the container with id 3d7debbad7a694236c44279b0e33078631c2e0de19288ed885fa871fdd67a437 Mar 10 20:20:13 crc kubenswrapper[4861]: I0310 20:20:13.964831 4861 generic.go:334] "Generic (PLEG): container finished" podID="0d359be0-9080-42cc-9d49-2e7604bab469" 
containerID="80ad9aeb1b62a4f3dcbf1e7620aadae81ec663241865cb7277d1fc82a2ca5ad6" exitCode=0 Mar 10 20:20:13 crc kubenswrapper[4861]: I0310 20:20:13.964901 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8gkzl" event={"ID":"0d359be0-9080-42cc-9d49-2e7604bab469","Type":"ContainerDied","Data":"80ad9aeb1b62a4f3dcbf1e7620aadae81ec663241865cb7277d1fc82a2ca5ad6"} Mar 10 20:20:13 crc kubenswrapper[4861]: I0310 20:20:13.966787 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-copy-data" event={"ID":"515ad971-29e5-4b2a-ab58-2e06d4d29c99","Type":"ContainerStarted","Data":"3d7debbad7a694236c44279b0e33078631c2e0de19288ed885fa871fdd67a437"} Mar 10 20:20:14 crc kubenswrapper[4861]: I0310 20:20:14.575976 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-f59dd4759-z89lm" Mar 10 20:20:14 crc kubenswrapper[4861]: I0310 20:20:14.657519 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-66d5bf7c87-hzp8x"] Mar 10 20:20:14 crc kubenswrapper[4861]: I0310 20:20:14.657867 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-66d5bf7c87-hzp8x" podUID="3aa249cd-45ec-4593-a974-3b909abdbd57" containerName="dnsmasq-dns" containerID="cri-o://68019a020b1d65d7bb0ce56083c934c12e0c843cbd4083fb25a4b84ab1783111" gracePeriod=10 Mar 10 20:20:14 crc kubenswrapper[4861]: I0310 20:20:14.984461 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8gkzl" event={"ID":"0d359be0-9080-42cc-9d49-2e7604bab469","Type":"ContainerStarted","Data":"1f600fa0a58224dd42ebf8be0a81c70d4225efe975a778c7b2cda667fadb53aa"} Mar 10 20:20:14 crc kubenswrapper[4861]: I0310 20:20:14.988565 4861 generic.go:334] "Generic (PLEG): container finished" podID="3aa249cd-45ec-4593-a974-3b909abdbd57" containerID="68019a020b1d65d7bb0ce56083c934c12e0c843cbd4083fb25a4b84ab1783111" exitCode=0 Mar 10 
20:20:14 crc kubenswrapper[4861]: I0310 20:20:14.988594 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-66d5bf7c87-hzp8x" event={"ID":"3aa249cd-45ec-4593-a974-3b909abdbd57","Type":"ContainerDied","Data":"68019a020b1d65d7bb0ce56083c934c12e0c843cbd4083fb25a4b84ab1783111"} Mar 10 20:20:15 crc kubenswrapper[4861]: I0310 20:20:15.008921 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-8gkzl" podStartSLOduration=2.469712593 podStartE2EDuration="13.008907872s" podCreationTimestamp="2026-03-10 20:20:02 +0000 UTC" firstStartedPulling="2026-03-10 20:20:03.8415814 +0000 UTC m=+5547.605017360" lastFinishedPulling="2026-03-10 20:20:14.380776679 +0000 UTC m=+5558.144212639" observedRunningTime="2026-03-10 20:20:15.005751236 +0000 UTC m=+5558.769187196" watchObservedRunningTime="2026-03-10 20:20:15.008907872 +0000 UTC m=+5558.772343832" Mar 10 20:20:16 crc kubenswrapper[4861]: I0310 20:20:16.395278 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-66d5bf7c87-hzp8x" Mar 10 20:20:16 crc kubenswrapper[4861]: I0310 20:20:16.489166 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3aa249cd-45ec-4593-a974-3b909abdbd57-config\") pod \"3aa249cd-45ec-4593-a974-3b909abdbd57\" (UID: \"3aa249cd-45ec-4593-a974-3b909abdbd57\") " Mar 10 20:20:16 crc kubenswrapper[4861]: I0310 20:20:16.489430 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3aa249cd-45ec-4593-a974-3b909abdbd57-dns-svc\") pod \"3aa249cd-45ec-4593-a974-3b909abdbd57\" (UID: \"3aa249cd-45ec-4593-a974-3b909abdbd57\") " Mar 10 20:20:16 crc kubenswrapper[4861]: I0310 20:20:16.489508 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zjtv5\" (UniqueName: \"kubernetes.io/projected/3aa249cd-45ec-4593-a974-3b909abdbd57-kube-api-access-zjtv5\") pod \"3aa249cd-45ec-4593-a974-3b909abdbd57\" (UID: \"3aa249cd-45ec-4593-a974-3b909abdbd57\") " Mar 10 20:20:16 crc kubenswrapper[4861]: I0310 20:20:16.495020 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3aa249cd-45ec-4593-a974-3b909abdbd57-kube-api-access-zjtv5" (OuterVolumeSpecName: "kube-api-access-zjtv5") pod "3aa249cd-45ec-4593-a974-3b909abdbd57" (UID: "3aa249cd-45ec-4593-a974-3b909abdbd57"). InnerVolumeSpecName "kube-api-access-zjtv5". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 10 20:20:16 crc kubenswrapper[4861]: I0310 20:20:16.545395 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3aa249cd-45ec-4593-a974-3b909abdbd57-config" (OuterVolumeSpecName: "config") pod "3aa249cd-45ec-4593-a974-3b909abdbd57" (UID: "3aa249cd-45ec-4593-a974-3b909abdbd57"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 10 20:20:16 crc kubenswrapper[4861]: I0310 20:20:16.557295 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3aa249cd-45ec-4593-a974-3b909abdbd57-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "3aa249cd-45ec-4593-a974-3b909abdbd57" (UID: "3aa249cd-45ec-4593-a974-3b909abdbd57"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 10 20:20:16 crc kubenswrapper[4861]: I0310 20:20:16.590775 4861 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3aa249cd-45ec-4593-a974-3b909abdbd57-config\") on node \"crc\" DevicePath \"\"" Mar 10 20:20:16 crc kubenswrapper[4861]: I0310 20:20:16.590806 4861 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3aa249cd-45ec-4593-a974-3b909abdbd57-dns-svc\") on node \"crc\" DevicePath \"\"" Mar 10 20:20:16 crc kubenswrapper[4861]: I0310 20:20:16.590821 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zjtv5\" (UniqueName: \"kubernetes.io/projected/3aa249cd-45ec-4593-a974-3b909abdbd57-kube-api-access-zjtv5\") on node \"crc\" DevicePath \"\"" Mar 10 20:20:17 crc kubenswrapper[4861]: I0310 20:20:17.008970 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-copy-data" event={"ID":"515ad971-29e5-4b2a-ab58-2e06d4d29c99","Type":"ContainerStarted","Data":"372ad1b24030db4e642e96661fc9b0796b936e7d256bbd393dda56e6701eea1d"} Mar 10 20:20:17 crc kubenswrapper[4861]: I0310 20:20:17.011686 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-66d5bf7c87-hzp8x" event={"ID":"3aa249cd-45ec-4593-a974-3b909abdbd57","Type":"ContainerDied","Data":"7d9a1b43a1054f6d030ceca2eab34de72ce61a0b243b6f44b2e15aff0cf722f6"} Mar 10 20:20:17 crc kubenswrapper[4861]: I0310 20:20:17.011777 4861 scope.go:117] "RemoveContainer" 
containerID="68019a020b1d65d7bb0ce56083c934c12e0c843cbd4083fb25a4b84ab1783111" Mar 10 20:20:17 crc kubenswrapper[4861]: I0310 20:20:17.011897 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-66d5bf7c87-hzp8x" Mar 10 20:20:17 crc kubenswrapper[4861]: I0310 20:20:17.038352 4861 scope.go:117] "RemoveContainer" containerID="03eed1693bb2f311edae051070bb719aaa307c00134c5f9a77b2c1122c211c9e" Mar 10 20:20:17 crc kubenswrapper[4861]: I0310 20:20:17.046006 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-copy-data" podStartSLOduration=4.693848995 podStartE2EDuration="8.045980395s" podCreationTimestamp="2026-03-10 20:20:09 +0000 UTC" firstStartedPulling="2026-03-10 20:20:12.992030637 +0000 UTC m=+5556.755466607" lastFinishedPulling="2026-03-10 20:20:16.344162037 +0000 UTC m=+5560.107598007" observedRunningTime="2026-03-10 20:20:17.033739322 +0000 UTC m=+5560.797175312" watchObservedRunningTime="2026-03-10 20:20:17.045980395 +0000 UTC m=+5560.809416395" Mar 10 20:20:17 crc kubenswrapper[4861]: I0310 20:20:17.061250 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-66d5bf7c87-hzp8x"] Mar 10 20:20:17 crc kubenswrapper[4861]: I0310 20:20:17.067935 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-66d5bf7c87-hzp8x"] Mar 10 20:20:17 crc kubenswrapper[4861]: I0310 20:20:17.960876 4861 scope.go:117] "RemoveContainer" containerID="cd2259ee04441075924dcee5b49ce2f64778d63f5c1b13672065b616c80497ca" Mar 10 20:20:17 crc kubenswrapper[4861]: E0310 20:20:17.961451 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qttbr_openshift-machine-config-operator(771189c2-452d-4204-a0b7-abfe9ba62bd0)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-qttbr" podUID="771189c2-452d-4204-a0b7-abfe9ba62bd0" Mar 10 20:20:18 crc kubenswrapper[4861]: I0310 20:20:18.978310 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3aa249cd-45ec-4593-a974-3b909abdbd57" path="/var/lib/kubelet/pods/3aa249cd-45ec-4593-a974-3b909abdbd57/volumes" Mar 10 20:20:20 crc kubenswrapper[4861]: I0310 20:20:20.011884 4861 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-66d5bf7c87-hzp8x" podUID="3aa249cd-45ec-4593-a974-3b909abdbd57" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.1.38:5353: i/o timeout" Mar 10 20:20:22 crc kubenswrapper[4861]: I0310 20:20:22.808326 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-8gkzl" Mar 10 20:20:22 crc kubenswrapper[4861]: I0310 20:20:22.808478 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-8gkzl" Mar 10 20:20:22 crc kubenswrapper[4861]: I0310 20:20:22.886492 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-8gkzl" Mar 10 20:20:23 crc kubenswrapper[4861]: I0310 20:20:23.144256 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-8gkzl" Mar 10 20:20:23 crc kubenswrapper[4861]: I0310 20:20:23.239388 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-8gkzl"] Mar 10 20:20:23 crc kubenswrapper[4861]: I0310 20:20:23.317781 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-9fhh8"] Mar 10 20:20:23 crc kubenswrapper[4861]: I0310 20:20:23.318184 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-9fhh8" podUID="022d8016-dfa5-401c-8b42-4dbf2fd7a6ac" 
containerName="registry-server" containerID="cri-o://59f908c36b4314496d8c9fae7f59e7ef2bc005d479b2c6ed9bfc27e8aa325dec" gracePeriod=2 Mar 10 20:20:23 crc kubenswrapper[4861]: I0310 20:20:23.330732 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"] Mar 10 20:20:23 crc kubenswrapper[4861]: E0310 20:20:23.331074 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3aa249cd-45ec-4593-a974-3b909abdbd57" containerName="dnsmasq-dns" Mar 10 20:20:23 crc kubenswrapper[4861]: I0310 20:20:23.331090 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="3aa249cd-45ec-4593-a974-3b909abdbd57" containerName="dnsmasq-dns" Mar 10 20:20:23 crc kubenswrapper[4861]: E0310 20:20:23.331104 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3aa249cd-45ec-4593-a974-3b909abdbd57" containerName="init" Mar 10 20:20:23 crc kubenswrapper[4861]: I0310 20:20:23.331109 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="3aa249cd-45ec-4593-a974-3b909abdbd57" containerName="init" Mar 10 20:20:23 crc kubenswrapper[4861]: I0310 20:20:23.331930 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="3aa249cd-45ec-4593-a974-3b909abdbd57" containerName="dnsmasq-dns" Mar 10 20:20:23 crc kubenswrapper[4861]: I0310 20:20:23.332785 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-northd-0" Mar 10 20:20:23 crc kubenswrapper[4861]: I0310 20:20:23.337512 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs" Mar 10 20:20:23 crc kubenswrapper[4861]: I0310 20:20:23.337763 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-svcdv" Mar 10 20:20:23 crc kubenswrapper[4861]: I0310 20:20:23.337896 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config" Mar 10 20:20:23 crc kubenswrapper[4861]: I0310 20:20:23.338013 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts" Mar 10 20:20:23 crc kubenswrapper[4861]: I0310 20:20:23.349229 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Mar 10 20:20:23 crc kubenswrapper[4861]: I0310 20:20:23.427918 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/da0f7a17-f584-4acc-8fdd-96bb631bce12-config\") pod \"ovn-northd-0\" (UID: \"da0f7a17-f584-4acc-8fdd-96bb631bce12\") " pod="openstack/ovn-northd-0" Mar 10 20:20:23 crc kubenswrapper[4861]: I0310 20:20:23.428220 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/da0f7a17-f584-4acc-8fdd-96bb631bce12-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"da0f7a17-f584-4acc-8fdd-96bb631bce12\") " pod="openstack/ovn-northd-0" Mar 10 20:20:23 crc kubenswrapper[4861]: I0310 20:20:23.428266 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/da0f7a17-f584-4acc-8fdd-96bb631bce12-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"da0f7a17-f584-4acc-8fdd-96bb631bce12\") " 
pod="openstack/ovn-northd-0" Mar 10 20:20:23 crc kubenswrapper[4861]: I0310 20:20:23.428338 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mbwbb\" (UniqueName: \"kubernetes.io/projected/da0f7a17-f584-4acc-8fdd-96bb631bce12-kube-api-access-mbwbb\") pod \"ovn-northd-0\" (UID: \"da0f7a17-f584-4acc-8fdd-96bb631bce12\") " pod="openstack/ovn-northd-0" Mar 10 20:20:23 crc kubenswrapper[4861]: I0310 20:20:23.428379 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/da0f7a17-f584-4acc-8fdd-96bb631bce12-scripts\") pod \"ovn-northd-0\" (UID: \"da0f7a17-f584-4acc-8fdd-96bb631bce12\") " pod="openstack/ovn-northd-0" Mar 10 20:20:23 crc kubenswrapper[4861]: I0310 20:20:23.428407 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/da0f7a17-f584-4acc-8fdd-96bb631bce12-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"da0f7a17-f584-4acc-8fdd-96bb631bce12\") " pod="openstack/ovn-northd-0" Mar 10 20:20:23 crc kubenswrapper[4861]: I0310 20:20:23.428435 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/da0f7a17-f584-4acc-8fdd-96bb631bce12-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"da0f7a17-f584-4acc-8fdd-96bb631bce12\") " pod="openstack/ovn-northd-0" Mar 10 20:20:23 crc kubenswrapper[4861]: I0310 20:20:23.529681 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mbwbb\" (UniqueName: \"kubernetes.io/projected/da0f7a17-f584-4acc-8fdd-96bb631bce12-kube-api-access-mbwbb\") pod \"ovn-northd-0\" (UID: \"da0f7a17-f584-4acc-8fdd-96bb631bce12\") " pod="openstack/ovn-northd-0" Mar 10 20:20:23 crc kubenswrapper[4861]: I0310 20:20:23.529769 4861 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/da0f7a17-f584-4acc-8fdd-96bb631bce12-scripts\") pod \"ovn-northd-0\" (UID: \"da0f7a17-f584-4acc-8fdd-96bb631bce12\") " pod="openstack/ovn-northd-0" Mar 10 20:20:23 crc kubenswrapper[4861]: I0310 20:20:23.529796 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/da0f7a17-f584-4acc-8fdd-96bb631bce12-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"da0f7a17-f584-4acc-8fdd-96bb631bce12\") " pod="openstack/ovn-northd-0" Mar 10 20:20:23 crc kubenswrapper[4861]: I0310 20:20:23.529818 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/da0f7a17-f584-4acc-8fdd-96bb631bce12-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"da0f7a17-f584-4acc-8fdd-96bb631bce12\") " pod="openstack/ovn-northd-0" Mar 10 20:20:23 crc kubenswrapper[4861]: I0310 20:20:23.529858 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/da0f7a17-f584-4acc-8fdd-96bb631bce12-config\") pod \"ovn-northd-0\" (UID: \"da0f7a17-f584-4acc-8fdd-96bb631bce12\") " pod="openstack/ovn-northd-0" Mar 10 20:20:23 crc kubenswrapper[4861]: I0310 20:20:23.529907 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/da0f7a17-f584-4acc-8fdd-96bb631bce12-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"da0f7a17-f584-4acc-8fdd-96bb631bce12\") " pod="openstack/ovn-northd-0" Mar 10 20:20:23 crc kubenswrapper[4861]: I0310 20:20:23.529927 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/da0f7a17-f584-4acc-8fdd-96bb631bce12-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: 
\"da0f7a17-f584-4acc-8fdd-96bb631bce12\") " pod="openstack/ovn-northd-0" Mar 10 20:20:23 crc kubenswrapper[4861]: I0310 20:20:23.530638 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/da0f7a17-f584-4acc-8fdd-96bb631bce12-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"da0f7a17-f584-4acc-8fdd-96bb631bce12\") " pod="openstack/ovn-northd-0" Mar 10 20:20:23 crc kubenswrapper[4861]: I0310 20:20:23.531398 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/da0f7a17-f584-4acc-8fdd-96bb631bce12-config\") pod \"ovn-northd-0\" (UID: \"da0f7a17-f584-4acc-8fdd-96bb631bce12\") " pod="openstack/ovn-northd-0" Mar 10 20:20:23 crc kubenswrapper[4861]: I0310 20:20:23.533627 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/da0f7a17-f584-4acc-8fdd-96bb631bce12-scripts\") pod \"ovn-northd-0\" (UID: \"da0f7a17-f584-4acc-8fdd-96bb631bce12\") " pod="openstack/ovn-northd-0" Mar 10 20:20:23 crc kubenswrapper[4861]: I0310 20:20:23.536494 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/da0f7a17-f584-4acc-8fdd-96bb631bce12-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"da0f7a17-f584-4acc-8fdd-96bb631bce12\") " pod="openstack/ovn-northd-0" Mar 10 20:20:23 crc kubenswrapper[4861]: I0310 20:20:23.537240 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/da0f7a17-f584-4acc-8fdd-96bb631bce12-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"da0f7a17-f584-4acc-8fdd-96bb631bce12\") " pod="openstack/ovn-northd-0" Mar 10 20:20:23 crc kubenswrapper[4861]: I0310 20:20:23.541635 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/da0f7a17-f584-4acc-8fdd-96bb631bce12-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"da0f7a17-f584-4acc-8fdd-96bb631bce12\") " pod="openstack/ovn-northd-0" Mar 10 20:20:23 crc kubenswrapper[4861]: I0310 20:20:23.560499 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mbwbb\" (UniqueName: \"kubernetes.io/projected/da0f7a17-f584-4acc-8fdd-96bb631bce12-kube-api-access-mbwbb\") pod \"ovn-northd-0\" (UID: \"da0f7a17-f584-4acc-8fdd-96bb631bce12\") " pod="openstack/ovn-northd-0" Mar 10 20:20:23 crc kubenswrapper[4861]: I0310 20:20:23.659972 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Mar 10 20:20:24 crc kubenswrapper[4861]: I0310 20:20:24.093700 4861 generic.go:334] "Generic (PLEG): container finished" podID="022d8016-dfa5-401c-8b42-4dbf2fd7a6ac" containerID="59f908c36b4314496d8c9fae7f59e7ef2bc005d479b2c6ed9bfc27e8aa325dec" exitCode=0 Mar 10 20:20:24 crc kubenswrapper[4861]: I0310 20:20:24.093756 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9fhh8" event={"ID":"022d8016-dfa5-401c-8b42-4dbf2fd7a6ac","Type":"ContainerDied","Data":"59f908c36b4314496d8c9fae7f59e7ef2bc005d479b2c6ed9bfc27e8aa325dec"} Mar 10 20:20:24 crc kubenswrapper[4861]: I0310 20:20:24.127377 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Mar 10 20:20:24 crc kubenswrapper[4861]: I0310 20:20:24.969262 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-9fhh8" Mar 10 20:20:25 crc kubenswrapper[4861]: I0310 20:20:25.053354 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/022d8016-dfa5-401c-8b42-4dbf2fd7a6ac-catalog-content\") pod \"022d8016-dfa5-401c-8b42-4dbf2fd7a6ac\" (UID: \"022d8016-dfa5-401c-8b42-4dbf2fd7a6ac\") " Mar 10 20:20:25 crc kubenswrapper[4861]: I0310 20:20:25.053419 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wm7r8\" (UniqueName: \"kubernetes.io/projected/022d8016-dfa5-401c-8b42-4dbf2fd7a6ac-kube-api-access-wm7r8\") pod \"022d8016-dfa5-401c-8b42-4dbf2fd7a6ac\" (UID: \"022d8016-dfa5-401c-8b42-4dbf2fd7a6ac\") " Mar 10 20:20:25 crc kubenswrapper[4861]: I0310 20:20:25.053460 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/022d8016-dfa5-401c-8b42-4dbf2fd7a6ac-utilities\") pod \"022d8016-dfa5-401c-8b42-4dbf2fd7a6ac\" (UID: \"022d8016-dfa5-401c-8b42-4dbf2fd7a6ac\") " Mar 10 20:20:25 crc kubenswrapper[4861]: I0310 20:20:25.055016 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/022d8016-dfa5-401c-8b42-4dbf2fd7a6ac-utilities" (OuterVolumeSpecName: "utilities") pod "022d8016-dfa5-401c-8b42-4dbf2fd7a6ac" (UID: "022d8016-dfa5-401c-8b42-4dbf2fd7a6ac"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 10 20:20:25 crc kubenswrapper[4861]: I0310 20:20:25.060897 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/022d8016-dfa5-401c-8b42-4dbf2fd7a6ac-kube-api-access-wm7r8" (OuterVolumeSpecName: "kube-api-access-wm7r8") pod "022d8016-dfa5-401c-8b42-4dbf2fd7a6ac" (UID: "022d8016-dfa5-401c-8b42-4dbf2fd7a6ac"). InnerVolumeSpecName "kube-api-access-wm7r8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 10 20:20:25 crc kubenswrapper[4861]: I0310 20:20:25.105936 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"da0f7a17-f584-4acc-8fdd-96bb631bce12","Type":"ContainerStarted","Data":"344ae8a0d51ed058240fabb6ba5a46b45f0f499ac5e4b60edb772c874e38577a"} Mar 10 20:20:25 crc kubenswrapper[4861]: I0310 20:20:25.105992 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"da0f7a17-f584-4acc-8fdd-96bb631bce12","Type":"ContainerStarted","Data":"f9289f93ca3446365d817b0936bf145e376ab06d72ee0e4a2d2b0ad1240626eb"} Mar 10 20:20:25 crc kubenswrapper[4861]: I0310 20:20:25.106002 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"da0f7a17-f584-4acc-8fdd-96bb631bce12","Type":"ContainerStarted","Data":"0cc0fa2774c350fa75fdf5fc3dfbbadf17034868651ab40b15d1ac9606d067ef"} Mar 10 20:20:25 crc kubenswrapper[4861]: I0310 20:20:25.107019 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0" Mar 10 20:20:25 crc kubenswrapper[4861]: I0310 20:20:25.109461 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9fhh8" event={"ID":"022d8016-dfa5-401c-8b42-4dbf2fd7a6ac","Type":"ContainerDied","Data":"c0fa3fc3c82323f6fc71e47d7583aa9fe298d6b9413530a13391b455164e97c7"} Mar 10 20:20:25 crc kubenswrapper[4861]: I0310 20:20:25.109504 4861 scope.go:117] "RemoveContainer" containerID="59f908c36b4314496d8c9fae7f59e7ef2bc005d479b2c6ed9bfc27e8aa325dec" Mar 10 20:20:25 crc kubenswrapper[4861]: I0310 20:20:25.109505 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-9fhh8" Mar 10 20:20:25 crc kubenswrapper[4861]: I0310 20:20:25.137155 4861 scope.go:117] "RemoveContainer" containerID="62eeef287ea1d5338f9a36786acaa07e630440104b486ae5e45ee40de3434cc5" Mar 10 20:20:25 crc kubenswrapper[4861]: I0310 20:20:25.155284 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wm7r8\" (UniqueName: \"kubernetes.io/projected/022d8016-dfa5-401c-8b42-4dbf2fd7a6ac-kube-api-access-wm7r8\") on node \"crc\" DevicePath \"\"" Mar 10 20:20:25 crc kubenswrapper[4861]: I0310 20:20:25.155323 4861 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/022d8016-dfa5-401c-8b42-4dbf2fd7a6ac-utilities\") on node \"crc\" DevicePath \"\"" Mar 10 20:20:25 crc kubenswrapper[4861]: I0310 20:20:25.170525 4861 scope.go:117] "RemoveContainer" containerID="be23695f9c2c0fdf8937e1c26a55dc6a6f1cd18510e84be0293688fe726ea67d" Mar 10 20:20:25 crc kubenswrapper[4861]: I0310 20:20:25.202539 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/022d8016-dfa5-401c-8b42-4dbf2fd7a6ac-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "022d8016-dfa5-401c-8b42-4dbf2fd7a6ac" (UID: "022d8016-dfa5-401c-8b42-4dbf2fd7a6ac"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 10 20:20:25 crc kubenswrapper[4861]: I0310 20:20:25.256808 4861 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/022d8016-dfa5-401c-8b42-4dbf2fd7a6ac-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 10 20:20:25 crc kubenswrapper[4861]: I0310 20:20:25.438422 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=2.438405823 podStartE2EDuration="2.438405823s" podCreationTimestamp="2026-03-10 20:20:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-10 20:20:25.129844897 +0000 UTC m=+5568.893280877" watchObservedRunningTime="2026-03-10 20:20:25.438405823 +0000 UTC m=+5569.201841783" Mar 10 20:20:25 crc kubenswrapper[4861]: I0310 20:20:25.439964 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-9fhh8"] Mar 10 20:20:25 crc kubenswrapper[4861]: I0310 20:20:25.445115 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-9fhh8"] Mar 10 20:20:26 crc kubenswrapper[4861]: I0310 20:20:26.968790 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="022d8016-dfa5-401c-8b42-4dbf2fd7a6ac" path="/var/lib/kubelet/pods/022d8016-dfa5-401c-8b42-4dbf2fd7a6ac/volumes" Mar 10 20:20:28 crc kubenswrapper[4861]: I0310 20:20:28.528500 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-mqv7b"] Mar 10 20:20:28 crc kubenswrapper[4861]: E0310 20:20:28.529006 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="022d8016-dfa5-401c-8b42-4dbf2fd7a6ac" containerName="extract-utilities" Mar 10 20:20:28 crc kubenswrapper[4861]: I0310 20:20:28.529028 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="022d8016-dfa5-401c-8b42-4dbf2fd7a6ac" 
containerName="extract-utilities" Mar 10 20:20:28 crc kubenswrapper[4861]: E0310 20:20:28.529055 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="022d8016-dfa5-401c-8b42-4dbf2fd7a6ac" containerName="extract-content" Mar 10 20:20:28 crc kubenswrapper[4861]: I0310 20:20:28.529067 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="022d8016-dfa5-401c-8b42-4dbf2fd7a6ac" containerName="extract-content" Mar 10 20:20:28 crc kubenswrapper[4861]: E0310 20:20:28.529086 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="022d8016-dfa5-401c-8b42-4dbf2fd7a6ac" containerName="registry-server" Mar 10 20:20:28 crc kubenswrapper[4861]: I0310 20:20:28.529099 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="022d8016-dfa5-401c-8b42-4dbf2fd7a6ac" containerName="registry-server" Mar 10 20:20:28 crc kubenswrapper[4861]: I0310 20:20:28.529371 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="022d8016-dfa5-401c-8b42-4dbf2fd7a6ac" containerName="registry-server" Mar 10 20:20:28 crc kubenswrapper[4861]: I0310 20:20:28.530185 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-mqv7b" Mar 10 20:20:28 crc kubenswrapper[4861]: I0310 20:20:28.540367 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-231e-account-create-update-f5zv6"] Mar 10 20:20:28 crc kubenswrapper[4861]: I0310 20:20:28.541604 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-231e-account-create-update-f5zv6" Mar 10 20:20:28 crc kubenswrapper[4861]: I0310 20:20:28.544441 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret" Mar 10 20:20:28 crc kubenswrapper[4861]: I0310 20:20:28.574158 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-231e-account-create-update-f5zv6"] Mar 10 20:20:28 crc kubenswrapper[4861]: I0310 20:20:28.603134 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-mqv7b"] Mar 10 20:20:28 crc kubenswrapper[4861]: I0310 20:20:28.621564 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6d173a3f-df88-4daf-87af-62cf32d77b78-operator-scripts\") pod \"keystone-231e-account-create-update-f5zv6\" (UID: \"6d173a3f-df88-4daf-87af-62cf32d77b78\") " pod="openstack/keystone-231e-account-create-update-f5zv6" Mar 10 20:20:28 crc kubenswrapper[4861]: I0310 20:20:28.621912 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/26bb2aa0-94b7-44a0-bff4-268c1c365a71-operator-scripts\") pod \"keystone-db-create-mqv7b\" (UID: \"26bb2aa0-94b7-44a0-bff4-268c1c365a71\") " pod="openstack/keystone-db-create-mqv7b" Mar 10 20:20:28 crc kubenswrapper[4861]: I0310 20:20:28.621962 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qrck2\" (UniqueName: \"kubernetes.io/projected/26bb2aa0-94b7-44a0-bff4-268c1c365a71-kube-api-access-qrck2\") pod \"keystone-db-create-mqv7b\" (UID: \"26bb2aa0-94b7-44a0-bff4-268c1c365a71\") " pod="openstack/keystone-db-create-mqv7b" Mar 10 20:20:28 crc kubenswrapper[4861]: I0310 20:20:28.621990 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"kube-api-access-v7ldj\" (UniqueName: \"kubernetes.io/projected/6d173a3f-df88-4daf-87af-62cf32d77b78-kube-api-access-v7ldj\") pod \"keystone-231e-account-create-update-f5zv6\" (UID: \"6d173a3f-df88-4daf-87af-62cf32d77b78\") " pod="openstack/keystone-231e-account-create-update-f5zv6" Mar 10 20:20:28 crc kubenswrapper[4861]: I0310 20:20:28.723007 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6d173a3f-df88-4daf-87af-62cf32d77b78-operator-scripts\") pod \"keystone-231e-account-create-update-f5zv6\" (UID: \"6d173a3f-df88-4daf-87af-62cf32d77b78\") " pod="openstack/keystone-231e-account-create-update-f5zv6" Mar 10 20:20:28 crc kubenswrapper[4861]: I0310 20:20:28.723092 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/26bb2aa0-94b7-44a0-bff4-268c1c365a71-operator-scripts\") pod \"keystone-db-create-mqv7b\" (UID: \"26bb2aa0-94b7-44a0-bff4-268c1c365a71\") " pod="openstack/keystone-db-create-mqv7b" Mar 10 20:20:28 crc kubenswrapper[4861]: I0310 20:20:28.723139 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qrck2\" (UniqueName: \"kubernetes.io/projected/26bb2aa0-94b7-44a0-bff4-268c1c365a71-kube-api-access-qrck2\") pod \"keystone-db-create-mqv7b\" (UID: \"26bb2aa0-94b7-44a0-bff4-268c1c365a71\") " pod="openstack/keystone-db-create-mqv7b" Mar 10 20:20:28 crc kubenswrapper[4861]: I0310 20:20:28.723167 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v7ldj\" (UniqueName: \"kubernetes.io/projected/6d173a3f-df88-4daf-87af-62cf32d77b78-kube-api-access-v7ldj\") pod \"keystone-231e-account-create-update-f5zv6\" (UID: \"6d173a3f-df88-4daf-87af-62cf32d77b78\") " pod="openstack/keystone-231e-account-create-update-f5zv6" Mar 10 20:20:28 crc kubenswrapper[4861]: I0310 20:20:28.723645 4861 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6d173a3f-df88-4daf-87af-62cf32d77b78-operator-scripts\") pod \"keystone-231e-account-create-update-f5zv6\" (UID: \"6d173a3f-df88-4daf-87af-62cf32d77b78\") " pod="openstack/keystone-231e-account-create-update-f5zv6" Mar 10 20:20:28 crc kubenswrapper[4861]: I0310 20:20:28.724097 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/26bb2aa0-94b7-44a0-bff4-268c1c365a71-operator-scripts\") pod \"keystone-db-create-mqv7b\" (UID: \"26bb2aa0-94b7-44a0-bff4-268c1c365a71\") " pod="openstack/keystone-db-create-mqv7b" Mar 10 20:20:28 crc kubenswrapper[4861]: I0310 20:20:28.746888 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qrck2\" (UniqueName: \"kubernetes.io/projected/26bb2aa0-94b7-44a0-bff4-268c1c365a71-kube-api-access-qrck2\") pod \"keystone-db-create-mqv7b\" (UID: \"26bb2aa0-94b7-44a0-bff4-268c1c365a71\") " pod="openstack/keystone-db-create-mqv7b" Mar 10 20:20:28 crc kubenswrapper[4861]: I0310 20:20:28.760400 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v7ldj\" (UniqueName: \"kubernetes.io/projected/6d173a3f-df88-4daf-87af-62cf32d77b78-kube-api-access-v7ldj\") pod \"keystone-231e-account-create-update-f5zv6\" (UID: \"6d173a3f-df88-4daf-87af-62cf32d77b78\") " pod="openstack/keystone-231e-account-create-update-f5zv6" Mar 10 20:20:28 crc kubenswrapper[4861]: I0310 20:20:28.864663 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-mqv7b" Mar 10 20:20:28 crc kubenswrapper[4861]: I0310 20:20:28.884648 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-231e-account-create-update-f5zv6" Mar 10 20:20:28 crc kubenswrapper[4861]: I0310 20:20:28.958327 4861 scope.go:117] "RemoveContainer" containerID="cd2259ee04441075924dcee5b49ce2f64778d63f5c1b13672065b616c80497ca" Mar 10 20:20:28 crc kubenswrapper[4861]: E0310 20:20:28.958617 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qttbr_openshift-machine-config-operator(771189c2-452d-4204-a0b7-abfe9ba62bd0)\"" pod="openshift-machine-config-operator/machine-config-daemon-qttbr" podUID="771189c2-452d-4204-a0b7-abfe9ba62bd0" Mar 10 20:20:29 crc kubenswrapper[4861]: I0310 20:20:29.381088 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-231e-account-create-update-f5zv6"] Mar 10 20:20:29 crc kubenswrapper[4861]: W0310 20:20:29.387173 4861 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6d173a3f_df88_4daf_87af_62cf32d77b78.slice/crio-fdfac97533fecfb1c12e0a3b9036c5b0e95c9876321f6aee0ba2c195aa55732e WatchSource:0}: Error finding container fdfac97533fecfb1c12e0a3b9036c5b0e95c9876321f6aee0ba2c195aa55732e: Status 404 returned error can't find the container with id fdfac97533fecfb1c12e0a3b9036c5b0e95c9876321f6aee0ba2c195aa55732e Mar 10 20:20:29 crc kubenswrapper[4861]: I0310 20:20:29.464245 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-mqv7b"] Mar 10 20:20:29 crc kubenswrapper[4861]: W0310 20:20:29.483585 4861 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod26bb2aa0_94b7_44a0_bff4_268c1c365a71.slice/crio-70afc3135f4f7569f91c2cda5cf2466142239dcb42498bbc7cc633b4eb236355 WatchSource:0}: Error finding container 
70afc3135f4f7569f91c2cda5cf2466142239dcb42498bbc7cc633b4eb236355: Status 404 returned error can't find the container with id 70afc3135f4f7569f91c2cda5cf2466142239dcb42498bbc7cc633b4eb236355 Mar 10 20:20:30 crc kubenswrapper[4861]: I0310 20:20:30.161063 4861 generic.go:334] "Generic (PLEG): container finished" podID="26bb2aa0-94b7-44a0-bff4-268c1c365a71" containerID="41cea49b4bfa514e9bcd1f72a50ce7b73d71a3ffbfb12d737c4d3e5f1dc39ba6" exitCode=0 Mar 10 20:20:30 crc kubenswrapper[4861]: I0310 20:20:30.161151 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-mqv7b" event={"ID":"26bb2aa0-94b7-44a0-bff4-268c1c365a71","Type":"ContainerDied","Data":"41cea49b4bfa514e9bcd1f72a50ce7b73d71a3ffbfb12d737c4d3e5f1dc39ba6"} Mar 10 20:20:30 crc kubenswrapper[4861]: I0310 20:20:30.161466 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-mqv7b" event={"ID":"26bb2aa0-94b7-44a0-bff4-268c1c365a71","Type":"ContainerStarted","Data":"70afc3135f4f7569f91c2cda5cf2466142239dcb42498bbc7cc633b4eb236355"} Mar 10 20:20:30 crc kubenswrapper[4861]: I0310 20:20:30.164289 4861 generic.go:334] "Generic (PLEG): container finished" podID="6d173a3f-df88-4daf-87af-62cf32d77b78" containerID="3be3ba0f15434ec9ff7f293cbfe0ad98f563d01d3f49498d42323756f4bdc2a0" exitCode=0 Mar 10 20:20:30 crc kubenswrapper[4861]: I0310 20:20:30.164330 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-231e-account-create-update-f5zv6" event={"ID":"6d173a3f-df88-4daf-87af-62cf32d77b78","Type":"ContainerDied","Data":"3be3ba0f15434ec9ff7f293cbfe0ad98f563d01d3f49498d42323756f4bdc2a0"} Mar 10 20:20:30 crc kubenswrapper[4861]: I0310 20:20:30.164357 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-231e-account-create-update-f5zv6" event={"ID":"6d173a3f-df88-4daf-87af-62cf32d77b78","Type":"ContainerStarted","Data":"fdfac97533fecfb1c12e0a3b9036c5b0e95c9876321f6aee0ba2c195aa55732e"} Mar 10 20:20:31 crc 
kubenswrapper[4861]: I0310 20:20:31.700930 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-231e-account-create-update-f5zv6" Mar 10 20:20:31 crc kubenswrapper[4861]: I0310 20:20:31.711173 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-mqv7b" Mar 10 20:20:31 crc kubenswrapper[4861]: I0310 20:20:31.804789 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6d173a3f-df88-4daf-87af-62cf32d77b78-operator-scripts\") pod \"6d173a3f-df88-4daf-87af-62cf32d77b78\" (UID: \"6d173a3f-df88-4daf-87af-62cf32d77b78\") " Mar 10 20:20:31 crc kubenswrapper[4861]: I0310 20:20:31.804862 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v7ldj\" (UniqueName: \"kubernetes.io/projected/6d173a3f-df88-4daf-87af-62cf32d77b78-kube-api-access-v7ldj\") pod \"6d173a3f-df88-4daf-87af-62cf32d77b78\" (UID: \"6d173a3f-df88-4daf-87af-62cf32d77b78\") " Mar 10 20:20:31 crc kubenswrapper[4861]: I0310 20:20:31.804898 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/26bb2aa0-94b7-44a0-bff4-268c1c365a71-operator-scripts\") pod \"26bb2aa0-94b7-44a0-bff4-268c1c365a71\" (UID: \"26bb2aa0-94b7-44a0-bff4-268c1c365a71\") " Mar 10 20:20:31 crc kubenswrapper[4861]: I0310 20:20:31.804936 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qrck2\" (UniqueName: \"kubernetes.io/projected/26bb2aa0-94b7-44a0-bff4-268c1c365a71-kube-api-access-qrck2\") pod \"26bb2aa0-94b7-44a0-bff4-268c1c365a71\" (UID: \"26bb2aa0-94b7-44a0-bff4-268c1c365a71\") " Mar 10 20:20:31 crc kubenswrapper[4861]: I0310 20:20:31.806198 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/6d173a3f-df88-4daf-87af-62cf32d77b78-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "6d173a3f-df88-4daf-87af-62cf32d77b78" (UID: "6d173a3f-df88-4daf-87af-62cf32d77b78"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 10 20:20:31 crc kubenswrapper[4861]: I0310 20:20:31.806206 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/26bb2aa0-94b7-44a0-bff4-268c1c365a71-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "26bb2aa0-94b7-44a0-bff4-268c1c365a71" (UID: "26bb2aa0-94b7-44a0-bff4-268c1c365a71"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 10 20:20:31 crc kubenswrapper[4861]: I0310 20:20:31.816031 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/26bb2aa0-94b7-44a0-bff4-268c1c365a71-kube-api-access-qrck2" (OuterVolumeSpecName: "kube-api-access-qrck2") pod "26bb2aa0-94b7-44a0-bff4-268c1c365a71" (UID: "26bb2aa0-94b7-44a0-bff4-268c1c365a71"). InnerVolumeSpecName "kube-api-access-qrck2". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 10 20:20:31 crc kubenswrapper[4861]: I0310 20:20:31.816174 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6d173a3f-df88-4daf-87af-62cf32d77b78-kube-api-access-v7ldj" (OuterVolumeSpecName: "kube-api-access-v7ldj") pod "6d173a3f-df88-4daf-87af-62cf32d77b78" (UID: "6d173a3f-df88-4daf-87af-62cf32d77b78"). InnerVolumeSpecName "kube-api-access-v7ldj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 10 20:20:31 crc kubenswrapper[4861]: I0310 20:20:31.907126 4861 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6d173a3f-df88-4daf-87af-62cf32d77b78-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 10 20:20:31 crc kubenswrapper[4861]: I0310 20:20:31.907181 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v7ldj\" (UniqueName: \"kubernetes.io/projected/6d173a3f-df88-4daf-87af-62cf32d77b78-kube-api-access-v7ldj\") on node \"crc\" DevicePath \"\"" Mar 10 20:20:31 crc kubenswrapper[4861]: I0310 20:20:31.907209 4861 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/26bb2aa0-94b7-44a0-bff4-268c1c365a71-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 10 20:20:31 crc kubenswrapper[4861]: I0310 20:20:31.907229 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qrck2\" (UniqueName: \"kubernetes.io/projected/26bb2aa0-94b7-44a0-bff4-268c1c365a71-kube-api-access-qrck2\") on node \"crc\" DevicePath \"\"" Mar 10 20:20:32 crc kubenswrapper[4861]: I0310 20:20:32.187886 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-mqv7b" event={"ID":"26bb2aa0-94b7-44a0-bff4-268c1c365a71","Type":"ContainerDied","Data":"70afc3135f4f7569f91c2cda5cf2466142239dcb42498bbc7cc633b4eb236355"} Mar 10 20:20:32 crc kubenswrapper[4861]: I0310 20:20:32.188557 4861 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="70afc3135f4f7569f91c2cda5cf2466142239dcb42498bbc7cc633b4eb236355" Mar 10 20:20:32 crc kubenswrapper[4861]: I0310 20:20:32.187945 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-mqv7b" Mar 10 20:20:32 crc kubenswrapper[4861]: I0310 20:20:32.190106 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-231e-account-create-update-f5zv6" event={"ID":"6d173a3f-df88-4daf-87af-62cf32d77b78","Type":"ContainerDied","Data":"fdfac97533fecfb1c12e0a3b9036c5b0e95c9876321f6aee0ba2c195aa55732e"} Mar 10 20:20:32 crc kubenswrapper[4861]: I0310 20:20:32.190160 4861 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fdfac97533fecfb1c12e0a3b9036c5b0e95c9876321f6aee0ba2c195aa55732e" Mar 10 20:20:32 crc kubenswrapper[4861]: I0310 20:20:32.190220 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-231e-account-create-update-f5zv6" Mar 10 20:20:34 crc kubenswrapper[4861]: I0310 20:20:34.010568 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-nfr8l"] Mar 10 20:20:34 crc kubenswrapper[4861]: E0310 20:20:34.010963 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="26bb2aa0-94b7-44a0-bff4-268c1c365a71" containerName="mariadb-database-create" Mar 10 20:20:34 crc kubenswrapper[4861]: I0310 20:20:34.010982 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="26bb2aa0-94b7-44a0-bff4-268c1c365a71" containerName="mariadb-database-create" Mar 10 20:20:34 crc kubenswrapper[4861]: E0310 20:20:34.011016 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6d173a3f-df88-4daf-87af-62cf32d77b78" containerName="mariadb-account-create-update" Mar 10 20:20:34 crc kubenswrapper[4861]: I0310 20:20:34.011024 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="6d173a3f-df88-4daf-87af-62cf32d77b78" containerName="mariadb-account-create-update" Mar 10 20:20:34 crc kubenswrapper[4861]: I0310 20:20:34.011205 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="26bb2aa0-94b7-44a0-bff4-268c1c365a71" 
containerName="mariadb-database-create" Mar 10 20:20:34 crc kubenswrapper[4861]: I0310 20:20:34.011231 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="6d173a3f-df88-4daf-87af-62cf32d77b78" containerName="mariadb-account-create-update" Mar 10 20:20:34 crc kubenswrapper[4861]: I0310 20:20:34.011841 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-nfr8l" Mar 10 20:20:34 crc kubenswrapper[4861]: I0310 20:20:34.016935 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-mnn46" Mar 10 20:20:34 crc kubenswrapper[4861]: I0310 20:20:34.017158 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Mar 10 20:20:34 crc kubenswrapper[4861]: I0310 20:20:34.017370 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Mar 10 20:20:34 crc kubenswrapper[4861]: I0310 20:20:34.017588 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Mar 10 20:20:34 crc kubenswrapper[4861]: I0310 20:20:34.022177 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-nfr8l"] Mar 10 20:20:34 crc kubenswrapper[4861]: I0310 20:20:34.158551 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z6w54\" (UniqueName: \"kubernetes.io/projected/cdb2632a-e4c0-43f6-93f8-ae18f587ffda-kube-api-access-z6w54\") pod \"keystone-db-sync-nfr8l\" (UID: \"cdb2632a-e4c0-43f6-93f8-ae18f587ffda\") " pod="openstack/keystone-db-sync-nfr8l" Mar 10 20:20:34 crc kubenswrapper[4861]: I0310 20:20:34.158739 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cdb2632a-e4c0-43f6-93f8-ae18f587ffda-combined-ca-bundle\") pod \"keystone-db-sync-nfr8l\" (UID: 
\"cdb2632a-e4c0-43f6-93f8-ae18f587ffda\") " pod="openstack/keystone-db-sync-nfr8l" Mar 10 20:20:34 crc kubenswrapper[4861]: I0310 20:20:34.158794 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cdb2632a-e4c0-43f6-93f8-ae18f587ffda-config-data\") pod \"keystone-db-sync-nfr8l\" (UID: \"cdb2632a-e4c0-43f6-93f8-ae18f587ffda\") " pod="openstack/keystone-db-sync-nfr8l" Mar 10 20:20:34 crc kubenswrapper[4861]: I0310 20:20:34.261242 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z6w54\" (UniqueName: \"kubernetes.io/projected/cdb2632a-e4c0-43f6-93f8-ae18f587ffda-kube-api-access-z6w54\") pod \"keystone-db-sync-nfr8l\" (UID: \"cdb2632a-e4c0-43f6-93f8-ae18f587ffda\") " pod="openstack/keystone-db-sync-nfr8l" Mar 10 20:20:34 crc kubenswrapper[4861]: I0310 20:20:34.261488 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cdb2632a-e4c0-43f6-93f8-ae18f587ffda-combined-ca-bundle\") pod \"keystone-db-sync-nfr8l\" (UID: \"cdb2632a-e4c0-43f6-93f8-ae18f587ffda\") " pod="openstack/keystone-db-sync-nfr8l" Mar 10 20:20:34 crc kubenswrapper[4861]: I0310 20:20:34.261575 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cdb2632a-e4c0-43f6-93f8-ae18f587ffda-config-data\") pod \"keystone-db-sync-nfr8l\" (UID: \"cdb2632a-e4c0-43f6-93f8-ae18f587ffda\") " pod="openstack/keystone-db-sync-nfr8l" Mar 10 20:20:34 crc kubenswrapper[4861]: I0310 20:20:34.268833 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cdb2632a-e4c0-43f6-93f8-ae18f587ffda-combined-ca-bundle\") pod \"keystone-db-sync-nfr8l\" (UID: \"cdb2632a-e4c0-43f6-93f8-ae18f587ffda\") " pod="openstack/keystone-db-sync-nfr8l" Mar 10 
20:20:34 crc kubenswrapper[4861]: I0310 20:20:34.269494 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cdb2632a-e4c0-43f6-93f8-ae18f587ffda-config-data\") pod \"keystone-db-sync-nfr8l\" (UID: \"cdb2632a-e4c0-43f6-93f8-ae18f587ffda\") " pod="openstack/keystone-db-sync-nfr8l" Mar 10 20:20:34 crc kubenswrapper[4861]: I0310 20:20:34.294771 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z6w54\" (UniqueName: \"kubernetes.io/projected/cdb2632a-e4c0-43f6-93f8-ae18f587ffda-kube-api-access-z6w54\") pod \"keystone-db-sync-nfr8l\" (UID: \"cdb2632a-e4c0-43f6-93f8-ae18f587ffda\") " pod="openstack/keystone-db-sync-nfr8l" Mar 10 20:20:34 crc kubenswrapper[4861]: I0310 20:20:34.334806 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-nfr8l" Mar 10 20:20:34 crc kubenswrapper[4861]: I0310 20:20:34.816505 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-nfr8l"] Mar 10 20:20:34 crc kubenswrapper[4861]: W0310 20:20:34.822366 4861 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcdb2632a_e4c0_43f6_93f8_ae18f587ffda.slice/crio-f1754bc2ae2dd843660b62259566aa10c31a31447e050c135328f2f6c4a9708d WatchSource:0}: Error finding container f1754bc2ae2dd843660b62259566aa10c31a31447e050c135328f2f6c4a9708d: Status 404 returned error can't find the container with id f1754bc2ae2dd843660b62259566aa10c31a31447e050c135328f2f6c4a9708d Mar 10 20:20:35 crc kubenswrapper[4861]: I0310 20:20:35.218727 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-nfr8l" event={"ID":"cdb2632a-e4c0-43f6-93f8-ae18f587ffda","Type":"ContainerStarted","Data":"559f8e05057af6f9df4da671de3dc76cceb4dcbbf6429e9ffdb6f702155a7787"} Mar 10 20:20:35 crc kubenswrapper[4861]: I0310 20:20:35.218777 4861 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-nfr8l" event={"ID":"cdb2632a-e4c0-43f6-93f8-ae18f587ffda","Type":"ContainerStarted","Data":"f1754bc2ae2dd843660b62259566aa10c31a31447e050c135328f2f6c4a9708d"} Mar 10 20:20:35 crc kubenswrapper[4861]: I0310 20:20:35.273921 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-sync-nfr8l" podStartSLOduration=2.27389078 podStartE2EDuration="2.27389078s" podCreationTimestamp="2026-03-10 20:20:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-10 20:20:35.264370891 +0000 UTC m=+5579.027806891" watchObservedRunningTime="2026-03-10 20:20:35.27389078 +0000 UTC m=+5579.037326770" Mar 10 20:20:37 crc kubenswrapper[4861]: I0310 20:20:37.241809 4861 generic.go:334] "Generic (PLEG): container finished" podID="cdb2632a-e4c0-43f6-93f8-ae18f587ffda" containerID="559f8e05057af6f9df4da671de3dc76cceb4dcbbf6429e9ffdb6f702155a7787" exitCode=0 Mar 10 20:20:37 crc kubenswrapper[4861]: I0310 20:20:37.241879 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-nfr8l" event={"ID":"cdb2632a-e4c0-43f6-93f8-ae18f587ffda","Type":"ContainerDied","Data":"559f8e05057af6f9df4da671de3dc76cceb4dcbbf6429e9ffdb6f702155a7787"} Mar 10 20:20:38 crc kubenswrapper[4861]: I0310 20:20:38.675087 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-nfr8l" Mar 10 20:20:38 crc kubenswrapper[4861]: I0310 20:20:38.757315 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cdb2632a-e4c0-43f6-93f8-ae18f587ffda-config-data\") pod \"cdb2632a-e4c0-43f6-93f8-ae18f587ffda\" (UID: \"cdb2632a-e4c0-43f6-93f8-ae18f587ffda\") " Mar 10 20:20:38 crc kubenswrapper[4861]: I0310 20:20:38.757479 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cdb2632a-e4c0-43f6-93f8-ae18f587ffda-combined-ca-bundle\") pod \"cdb2632a-e4c0-43f6-93f8-ae18f587ffda\" (UID: \"cdb2632a-e4c0-43f6-93f8-ae18f587ffda\") " Mar 10 20:20:38 crc kubenswrapper[4861]: I0310 20:20:38.757798 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z6w54\" (UniqueName: \"kubernetes.io/projected/cdb2632a-e4c0-43f6-93f8-ae18f587ffda-kube-api-access-z6w54\") pod \"cdb2632a-e4c0-43f6-93f8-ae18f587ffda\" (UID: \"cdb2632a-e4c0-43f6-93f8-ae18f587ffda\") " Mar 10 20:20:38 crc kubenswrapper[4861]: I0310 20:20:38.769085 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cdb2632a-e4c0-43f6-93f8-ae18f587ffda-kube-api-access-z6w54" (OuterVolumeSpecName: "kube-api-access-z6w54") pod "cdb2632a-e4c0-43f6-93f8-ae18f587ffda" (UID: "cdb2632a-e4c0-43f6-93f8-ae18f587ffda"). InnerVolumeSpecName "kube-api-access-z6w54". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 10 20:20:38 crc kubenswrapper[4861]: I0310 20:20:38.790855 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cdb2632a-e4c0-43f6-93f8-ae18f587ffda-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "cdb2632a-e4c0-43f6-93f8-ae18f587ffda" (UID: "cdb2632a-e4c0-43f6-93f8-ae18f587ffda"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 10 20:20:38 crc kubenswrapper[4861]: I0310 20:20:38.824412 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cdb2632a-e4c0-43f6-93f8-ae18f587ffda-config-data" (OuterVolumeSpecName: "config-data") pod "cdb2632a-e4c0-43f6-93f8-ae18f587ffda" (UID: "cdb2632a-e4c0-43f6-93f8-ae18f587ffda"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 10 20:20:38 crc kubenswrapper[4861]: I0310 20:20:38.860159 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z6w54\" (UniqueName: \"kubernetes.io/projected/cdb2632a-e4c0-43f6-93f8-ae18f587ffda-kube-api-access-z6w54\") on node \"crc\" DevicePath \"\"" Mar 10 20:20:38 crc kubenswrapper[4861]: I0310 20:20:38.860203 4861 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cdb2632a-e4c0-43f6-93f8-ae18f587ffda-config-data\") on node \"crc\" DevicePath \"\"" Mar 10 20:20:38 crc kubenswrapper[4861]: I0310 20:20:38.860219 4861 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cdb2632a-e4c0-43f6-93f8-ae18f587ffda-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 10 20:20:39 crc kubenswrapper[4861]: I0310 20:20:39.265911 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-nfr8l" event={"ID":"cdb2632a-e4c0-43f6-93f8-ae18f587ffda","Type":"ContainerDied","Data":"f1754bc2ae2dd843660b62259566aa10c31a31447e050c135328f2f6c4a9708d"} Mar 10 20:20:39 crc kubenswrapper[4861]: I0310 20:20:39.265967 4861 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f1754bc2ae2dd843660b62259566aa10c31a31447e050c135328f2f6c4a9708d" Mar 10 20:20:39 crc kubenswrapper[4861]: I0310 20:20:39.266045 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-nfr8l" Mar 10 20:20:39 crc kubenswrapper[4861]: I0310 20:20:39.953080 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-56bc48dc8c-cqg68"] Mar 10 20:20:39 crc kubenswrapper[4861]: E0310 20:20:39.953932 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cdb2632a-e4c0-43f6-93f8-ae18f587ffda" containerName="keystone-db-sync" Mar 10 20:20:39 crc kubenswrapper[4861]: I0310 20:20:39.953950 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="cdb2632a-e4c0-43f6-93f8-ae18f587ffda" containerName="keystone-db-sync" Mar 10 20:20:39 crc kubenswrapper[4861]: I0310 20:20:39.954107 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="cdb2632a-e4c0-43f6-93f8-ae18f587ffda" containerName="keystone-db-sync" Mar 10 20:20:39 crc kubenswrapper[4861]: I0310 20:20:39.954952 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-56bc48dc8c-cqg68" Mar 10 20:20:39 crc kubenswrapper[4861]: I0310 20:20:39.960409 4861 scope.go:117] "RemoveContainer" containerID="cd2259ee04441075924dcee5b49ce2f64778d63f5c1b13672065b616c80497ca" Mar 10 20:20:39 crc kubenswrapper[4861]: E0310 20:20:39.960895 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qttbr_openshift-machine-config-operator(771189c2-452d-4204-a0b7-abfe9ba62bd0)\"" pod="openshift-machine-config-operator/machine-config-daemon-qttbr" podUID="771189c2-452d-4204-a0b7-abfe9ba62bd0" Mar 10 20:20:39 crc kubenswrapper[4861]: I0310 20:20:39.972793 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-ggkjn"] Mar 10 20:20:39 crc kubenswrapper[4861]: I0310 20:20:39.973847 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-ggkjn" Mar 10 20:20:39 crc kubenswrapper[4861]: I0310 20:20:39.977287 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Mar 10 20:20:39 crc kubenswrapper[4861]: I0310 20:20:39.977728 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-mnn46" Mar 10 20:20:39 crc kubenswrapper[4861]: I0310 20:20:39.977933 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Mar 10 20:20:39 crc kubenswrapper[4861]: I0310 20:20:39.978507 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Mar 10 20:20:39 crc kubenswrapper[4861]: I0310 20:20:39.979460 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/33217fff-0cbb-41fc-bf58-cbe8324c4aab-config\") pod \"dnsmasq-dns-56bc48dc8c-cqg68\" (UID: \"33217fff-0cbb-41fc-bf58-cbe8324c4aab\") " pod="openstack/dnsmasq-dns-56bc48dc8c-cqg68" Mar 10 20:20:39 crc kubenswrapper[4861]: I0310 20:20:39.979492 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/33217fff-0cbb-41fc-bf58-cbe8324c4aab-ovsdbserver-nb\") pod \"dnsmasq-dns-56bc48dc8c-cqg68\" (UID: \"33217fff-0cbb-41fc-bf58-cbe8324c4aab\") " pod="openstack/dnsmasq-dns-56bc48dc8c-cqg68" Mar 10 20:20:39 crc kubenswrapper[4861]: I0310 20:20:39.979588 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/33217fff-0cbb-41fc-bf58-cbe8324c4aab-ovsdbserver-sb\") pod \"dnsmasq-dns-56bc48dc8c-cqg68\" (UID: \"33217fff-0cbb-41fc-bf58-cbe8324c4aab\") " pod="openstack/dnsmasq-dns-56bc48dc8c-cqg68" Mar 10 20:20:39 crc kubenswrapper[4861]: I0310 20:20:39.979614 4861 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r7php\" (UniqueName: \"kubernetes.io/projected/33217fff-0cbb-41fc-bf58-cbe8324c4aab-kube-api-access-r7php\") pod \"dnsmasq-dns-56bc48dc8c-cqg68\" (UID: \"33217fff-0cbb-41fc-bf58-cbe8324c4aab\") " pod="openstack/dnsmasq-dns-56bc48dc8c-cqg68" Mar 10 20:20:39 crc kubenswrapper[4861]: I0310 20:20:39.979646 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/33217fff-0cbb-41fc-bf58-cbe8324c4aab-dns-svc\") pod \"dnsmasq-dns-56bc48dc8c-cqg68\" (UID: \"33217fff-0cbb-41fc-bf58-cbe8324c4aab\") " pod="openstack/dnsmasq-dns-56bc48dc8c-cqg68" Mar 10 20:20:39 crc kubenswrapper[4861]: I0310 20:20:39.979768 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-56bc48dc8c-cqg68"] Mar 10 20:20:39 crc kubenswrapper[4861]: I0310 20:20:39.987432 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Mar 10 20:20:39 crc kubenswrapper[4861]: I0310 20:20:39.991539 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-ggkjn"] Mar 10 20:20:40 crc kubenswrapper[4861]: I0310 20:20:40.081037 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/33217fff-0cbb-41fc-bf58-cbe8324c4aab-ovsdbserver-sb\") pod \"dnsmasq-dns-56bc48dc8c-cqg68\" (UID: \"33217fff-0cbb-41fc-bf58-cbe8324c4aab\") " pod="openstack/dnsmasq-dns-56bc48dc8c-cqg68" Mar 10 20:20:40 crc kubenswrapper[4861]: I0310 20:20:40.081076 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r7php\" (UniqueName: \"kubernetes.io/projected/33217fff-0cbb-41fc-bf58-cbe8324c4aab-kube-api-access-r7php\") pod \"dnsmasq-dns-56bc48dc8c-cqg68\" (UID: \"33217fff-0cbb-41fc-bf58-cbe8324c4aab\") " 
pod="openstack/dnsmasq-dns-56bc48dc8c-cqg68" Mar 10 20:20:40 crc kubenswrapper[4861]: I0310 20:20:40.081098 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76304917-3c63-443d-b65a-ba75395de407-combined-ca-bundle\") pod \"keystone-bootstrap-ggkjn\" (UID: \"76304917-3c63-443d-b65a-ba75395de407\") " pod="openstack/keystone-bootstrap-ggkjn" Mar 10 20:20:40 crc kubenswrapper[4861]: I0310 20:20:40.081127 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/33217fff-0cbb-41fc-bf58-cbe8324c4aab-dns-svc\") pod \"dnsmasq-dns-56bc48dc8c-cqg68\" (UID: \"33217fff-0cbb-41fc-bf58-cbe8324c4aab\") " pod="openstack/dnsmasq-dns-56bc48dc8c-cqg68" Mar 10 20:20:40 crc kubenswrapper[4861]: I0310 20:20:40.081144 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/76304917-3c63-443d-b65a-ba75395de407-config-data\") pod \"keystone-bootstrap-ggkjn\" (UID: \"76304917-3c63-443d-b65a-ba75395de407\") " pod="openstack/keystone-bootstrap-ggkjn" Mar 10 20:20:40 crc kubenswrapper[4861]: I0310 20:20:40.081931 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pxw65\" (UniqueName: \"kubernetes.io/projected/76304917-3c63-443d-b65a-ba75395de407-kube-api-access-pxw65\") pod \"keystone-bootstrap-ggkjn\" (UID: \"76304917-3c63-443d-b65a-ba75395de407\") " pod="openstack/keystone-bootstrap-ggkjn" Mar 10 20:20:40 crc kubenswrapper[4861]: I0310 20:20:40.081881 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/33217fff-0cbb-41fc-bf58-cbe8324c4aab-dns-svc\") pod \"dnsmasq-dns-56bc48dc8c-cqg68\" (UID: \"33217fff-0cbb-41fc-bf58-cbe8324c4aab\") " pod="openstack/dnsmasq-dns-56bc48dc8c-cqg68" 
Mar 10 20:20:40 crc kubenswrapper[4861]: I0310 20:20:40.081961 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/76304917-3c63-443d-b65a-ba75395de407-scripts\") pod \"keystone-bootstrap-ggkjn\" (UID: \"76304917-3c63-443d-b65a-ba75395de407\") " pod="openstack/keystone-bootstrap-ggkjn" Mar 10 20:20:40 crc kubenswrapper[4861]: I0310 20:20:40.082021 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/76304917-3c63-443d-b65a-ba75395de407-fernet-keys\") pod \"keystone-bootstrap-ggkjn\" (UID: \"76304917-3c63-443d-b65a-ba75395de407\") " pod="openstack/keystone-bootstrap-ggkjn" Mar 10 20:20:40 crc kubenswrapper[4861]: I0310 20:20:40.082100 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/33217fff-0cbb-41fc-bf58-cbe8324c4aab-config\") pod \"dnsmasq-dns-56bc48dc8c-cqg68\" (UID: \"33217fff-0cbb-41fc-bf58-cbe8324c4aab\") " pod="openstack/dnsmasq-dns-56bc48dc8c-cqg68" Mar 10 20:20:40 crc kubenswrapper[4861]: I0310 20:20:40.082125 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/33217fff-0cbb-41fc-bf58-cbe8324c4aab-ovsdbserver-nb\") pod \"dnsmasq-dns-56bc48dc8c-cqg68\" (UID: \"33217fff-0cbb-41fc-bf58-cbe8324c4aab\") " pod="openstack/dnsmasq-dns-56bc48dc8c-cqg68" Mar 10 20:20:40 crc kubenswrapper[4861]: I0310 20:20:40.082136 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/33217fff-0cbb-41fc-bf58-cbe8324c4aab-ovsdbserver-sb\") pod \"dnsmasq-dns-56bc48dc8c-cqg68\" (UID: \"33217fff-0cbb-41fc-bf58-cbe8324c4aab\") " pod="openstack/dnsmasq-dns-56bc48dc8c-cqg68" Mar 10 20:20:40 crc kubenswrapper[4861]: I0310 20:20:40.082161 4861 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/76304917-3c63-443d-b65a-ba75395de407-credential-keys\") pod \"keystone-bootstrap-ggkjn\" (UID: \"76304917-3c63-443d-b65a-ba75395de407\") " pod="openstack/keystone-bootstrap-ggkjn" Mar 10 20:20:40 crc kubenswrapper[4861]: I0310 20:20:40.082665 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/33217fff-0cbb-41fc-bf58-cbe8324c4aab-config\") pod \"dnsmasq-dns-56bc48dc8c-cqg68\" (UID: \"33217fff-0cbb-41fc-bf58-cbe8324c4aab\") " pod="openstack/dnsmasq-dns-56bc48dc8c-cqg68" Mar 10 20:20:40 crc kubenswrapper[4861]: I0310 20:20:40.082752 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/33217fff-0cbb-41fc-bf58-cbe8324c4aab-ovsdbserver-nb\") pod \"dnsmasq-dns-56bc48dc8c-cqg68\" (UID: \"33217fff-0cbb-41fc-bf58-cbe8324c4aab\") " pod="openstack/dnsmasq-dns-56bc48dc8c-cqg68" Mar 10 20:20:40 crc kubenswrapper[4861]: I0310 20:20:40.096663 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r7php\" (UniqueName: \"kubernetes.io/projected/33217fff-0cbb-41fc-bf58-cbe8324c4aab-kube-api-access-r7php\") pod \"dnsmasq-dns-56bc48dc8c-cqg68\" (UID: \"33217fff-0cbb-41fc-bf58-cbe8324c4aab\") " pod="openstack/dnsmasq-dns-56bc48dc8c-cqg68" Mar 10 20:20:40 crc kubenswrapper[4861]: I0310 20:20:40.183211 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/76304917-3c63-443d-b65a-ba75395de407-credential-keys\") pod \"keystone-bootstrap-ggkjn\" (UID: \"76304917-3c63-443d-b65a-ba75395de407\") " pod="openstack/keystone-bootstrap-ggkjn" Mar 10 20:20:40 crc kubenswrapper[4861]: I0310 20:20:40.183298 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76304917-3c63-443d-b65a-ba75395de407-combined-ca-bundle\") pod \"keystone-bootstrap-ggkjn\" (UID: \"76304917-3c63-443d-b65a-ba75395de407\") " pod="openstack/keystone-bootstrap-ggkjn" Mar 10 20:20:40 crc kubenswrapper[4861]: I0310 20:20:40.183324 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/76304917-3c63-443d-b65a-ba75395de407-config-data\") pod \"keystone-bootstrap-ggkjn\" (UID: \"76304917-3c63-443d-b65a-ba75395de407\") " pod="openstack/keystone-bootstrap-ggkjn" Mar 10 20:20:40 crc kubenswrapper[4861]: I0310 20:20:40.183341 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pxw65\" (UniqueName: \"kubernetes.io/projected/76304917-3c63-443d-b65a-ba75395de407-kube-api-access-pxw65\") pod \"keystone-bootstrap-ggkjn\" (UID: \"76304917-3c63-443d-b65a-ba75395de407\") " pod="openstack/keystone-bootstrap-ggkjn" Mar 10 20:20:40 crc kubenswrapper[4861]: I0310 20:20:40.183369 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/76304917-3c63-443d-b65a-ba75395de407-scripts\") pod \"keystone-bootstrap-ggkjn\" (UID: \"76304917-3c63-443d-b65a-ba75395de407\") " pod="openstack/keystone-bootstrap-ggkjn" Mar 10 20:20:40 crc kubenswrapper[4861]: I0310 20:20:40.183384 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/76304917-3c63-443d-b65a-ba75395de407-fernet-keys\") pod \"keystone-bootstrap-ggkjn\" (UID: \"76304917-3c63-443d-b65a-ba75395de407\") " pod="openstack/keystone-bootstrap-ggkjn" Mar 10 20:20:40 crc kubenswrapper[4861]: I0310 20:20:40.193167 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/76304917-3c63-443d-b65a-ba75395de407-credential-keys\") pod 
\"keystone-bootstrap-ggkjn\" (UID: \"76304917-3c63-443d-b65a-ba75395de407\") " pod="openstack/keystone-bootstrap-ggkjn" Mar 10 20:20:40 crc kubenswrapper[4861]: I0310 20:20:40.193262 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/76304917-3c63-443d-b65a-ba75395de407-scripts\") pod \"keystone-bootstrap-ggkjn\" (UID: \"76304917-3c63-443d-b65a-ba75395de407\") " pod="openstack/keystone-bootstrap-ggkjn" Mar 10 20:20:40 crc kubenswrapper[4861]: I0310 20:20:40.193470 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/76304917-3c63-443d-b65a-ba75395de407-config-data\") pod \"keystone-bootstrap-ggkjn\" (UID: \"76304917-3c63-443d-b65a-ba75395de407\") " pod="openstack/keystone-bootstrap-ggkjn" Mar 10 20:20:40 crc kubenswrapper[4861]: I0310 20:20:40.197446 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76304917-3c63-443d-b65a-ba75395de407-combined-ca-bundle\") pod \"keystone-bootstrap-ggkjn\" (UID: \"76304917-3c63-443d-b65a-ba75395de407\") " pod="openstack/keystone-bootstrap-ggkjn" Mar 10 20:20:40 crc kubenswrapper[4861]: I0310 20:20:40.197847 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/76304917-3c63-443d-b65a-ba75395de407-fernet-keys\") pod \"keystone-bootstrap-ggkjn\" (UID: \"76304917-3c63-443d-b65a-ba75395de407\") " pod="openstack/keystone-bootstrap-ggkjn" Mar 10 20:20:40 crc kubenswrapper[4861]: I0310 20:20:40.198236 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pxw65\" (UniqueName: \"kubernetes.io/projected/76304917-3c63-443d-b65a-ba75395de407-kube-api-access-pxw65\") pod \"keystone-bootstrap-ggkjn\" (UID: \"76304917-3c63-443d-b65a-ba75395de407\") " pod="openstack/keystone-bootstrap-ggkjn" Mar 10 20:20:40 crc 
kubenswrapper[4861]: I0310 20:20:40.290405 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-56bc48dc8c-cqg68" Mar 10 20:20:40 crc kubenswrapper[4861]: I0310 20:20:40.299140 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-ggkjn" Mar 10 20:20:40 crc kubenswrapper[4861]: I0310 20:20:40.765326 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-ggkjn"] Mar 10 20:20:40 crc kubenswrapper[4861]: I0310 20:20:40.855099 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-56bc48dc8c-cqg68"] Mar 10 20:20:41 crc kubenswrapper[4861]: I0310 20:20:41.281099 4861 generic.go:334] "Generic (PLEG): container finished" podID="33217fff-0cbb-41fc-bf58-cbe8324c4aab" containerID="933e8decf51de706e5a3e7986b9c415bd62c2a0a185c354d04b2515cc8e80308" exitCode=0 Mar 10 20:20:41 crc kubenswrapper[4861]: I0310 20:20:41.281129 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56bc48dc8c-cqg68" event={"ID":"33217fff-0cbb-41fc-bf58-cbe8324c4aab","Type":"ContainerDied","Data":"933e8decf51de706e5a3e7986b9c415bd62c2a0a185c354d04b2515cc8e80308"} Mar 10 20:20:41 crc kubenswrapper[4861]: I0310 20:20:41.281163 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56bc48dc8c-cqg68" event={"ID":"33217fff-0cbb-41fc-bf58-cbe8324c4aab","Type":"ContainerStarted","Data":"5f0fe37f137d15e7ae103a2b881ede29d4fbbba732c4dbec951c67a6f33a3376"} Mar 10 20:20:41 crc kubenswrapper[4861]: I0310 20:20:41.284000 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-ggkjn" event={"ID":"76304917-3c63-443d-b65a-ba75395de407","Type":"ContainerStarted","Data":"4f313c638a0446b35cc8842c73be9a1de4a9f41dd55b18002e82b64513ec8b37"} Mar 10 20:20:41 crc kubenswrapper[4861]: I0310 20:20:41.284063 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/keystone-bootstrap-ggkjn" event={"ID":"76304917-3c63-443d-b65a-ba75395de407","Type":"ContainerStarted","Data":"10988e6fcde45241c96fa634e222ea3fcd5fee31480fe4c342e99d873e6a240a"} Mar 10 20:20:41 crc kubenswrapper[4861]: I0310 20:20:41.332616 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-ggkjn" podStartSLOduration=2.3325968120000002 podStartE2EDuration="2.332596812s" podCreationTimestamp="2026-03-10 20:20:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-10 20:20:41.328673265 +0000 UTC m=+5585.092109215" watchObservedRunningTime="2026-03-10 20:20:41.332596812 +0000 UTC m=+5585.096032762" Mar 10 20:20:42 crc kubenswrapper[4861]: I0310 20:20:42.298267 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56bc48dc8c-cqg68" event={"ID":"33217fff-0cbb-41fc-bf58-cbe8324c4aab","Type":"ContainerStarted","Data":"2858b60d8b8fb53a2a5e6f5d30aae723f36c16302c1e2aef6de43e5b0f6b0696"} Mar 10 20:20:42 crc kubenswrapper[4861]: I0310 20:20:42.299964 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-56bc48dc8c-cqg68" Mar 10 20:20:42 crc kubenswrapper[4861]: I0310 20:20:42.327439 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-56bc48dc8c-cqg68" podStartSLOduration=3.327412704 podStartE2EDuration="3.327412704s" podCreationTimestamp="2026-03-10 20:20:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-10 20:20:42.324979798 +0000 UTC m=+5586.088415798" watchObservedRunningTime="2026-03-10 20:20:42.327412704 +0000 UTC m=+5586.090848694" Mar 10 20:20:43 crc kubenswrapper[4861]: I0310 20:20:43.748260 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0" Mar 
10 20:20:44 crc kubenswrapper[4861]: I0310 20:20:44.323525 4861 generic.go:334] "Generic (PLEG): container finished" podID="76304917-3c63-443d-b65a-ba75395de407" containerID="4f313c638a0446b35cc8842c73be9a1de4a9f41dd55b18002e82b64513ec8b37" exitCode=0 Mar 10 20:20:44 crc kubenswrapper[4861]: I0310 20:20:44.323661 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-ggkjn" event={"ID":"76304917-3c63-443d-b65a-ba75395de407","Type":"ContainerDied","Data":"4f313c638a0446b35cc8842c73be9a1de4a9f41dd55b18002e82b64513ec8b37"} Mar 10 20:20:45 crc kubenswrapper[4861]: I0310 20:20:45.837476 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-ggkjn" Mar 10 20:20:45 crc kubenswrapper[4861]: I0310 20:20:45.995116 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/76304917-3c63-443d-b65a-ba75395de407-fernet-keys\") pod \"76304917-3c63-443d-b65a-ba75395de407\" (UID: \"76304917-3c63-443d-b65a-ba75395de407\") " Mar 10 20:20:45 crc kubenswrapper[4861]: I0310 20:20:45.995421 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/76304917-3c63-443d-b65a-ba75395de407-config-data\") pod \"76304917-3c63-443d-b65a-ba75395de407\" (UID: \"76304917-3c63-443d-b65a-ba75395de407\") " Mar 10 20:20:45 crc kubenswrapper[4861]: I0310 20:20:45.995616 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76304917-3c63-443d-b65a-ba75395de407-combined-ca-bundle\") pod \"76304917-3c63-443d-b65a-ba75395de407\" (UID: \"76304917-3c63-443d-b65a-ba75395de407\") " Mar 10 20:20:45 crc kubenswrapper[4861]: I0310 20:20:45.995783 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: 
\"kubernetes.io/secret/76304917-3c63-443d-b65a-ba75395de407-credential-keys\") pod \"76304917-3c63-443d-b65a-ba75395de407\" (UID: \"76304917-3c63-443d-b65a-ba75395de407\") " Mar 10 20:20:45 crc kubenswrapper[4861]: I0310 20:20:45.996571 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pxw65\" (UniqueName: \"kubernetes.io/projected/76304917-3c63-443d-b65a-ba75395de407-kube-api-access-pxw65\") pod \"76304917-3c63-443d-b65a-ba75395de407\" (UID: \"76304917-3c63-443d-b65a-ba75395de407\") " Mar 10 20:20:45 crc kubenswrapper[4861]: I0310 20:20:45.996743 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/76304917-3c63-443d-b65a-ba75395de407-scripts\") pod \"76304917-3c63-443d-b65a-ba75395de407\" (UID: \"76304917-3c63-443d-b65a-ba75395de407\") " Mar 10 20:20:46 crc kubenswrapper[4861]: I0310 20:20:46.002774 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/76304917-3c63-443d-b65a-ba75395de407-scripts" (OuterVolumeSpecName: "scripts") pod "76304917-3c63-443d-b65a-ba75395de407" (UID: "76304917-3c63-443d-b65a-ba75395de407"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 10 20:20:46 crc kubenswrapper[4861]: I0310 20:20:46.004131 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/76304917-3c63-443d-b65a-ba75395de407-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "76304917-3c63-443d-b65a-ba75395de407" (UID: "76304917-3c63-443d-b65a-ba75395de407"). InnerVolumeSpecName "credential-keys". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 10 20:20:46 crc kubenswrapper[4861]: I0310 20:20:46.004136 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/76304917-3c63-443d-b65a-ba75395de407-kube-api-access-pxw65" (OuterVolumeSpecName: "kube-api-access-pxw65") pod "76304917-3c63-443d-b65a-ba75395de407" (UID: "76304917-3c63-443d-b65a-ba75395de407"). InnerVolumeSpecName "kube-api-access-pxw65". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 10 20:20:46 crc kubenswrapper[4861]: I0310 20:20:46.016009 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/76304917-3c63-443d-b65a-ba75395de407-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "76304917-3c63-443d-b65a-ba75395de407" (UID: "76304917-3c63-443d-b65a-ba75395de407"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 10 20:20:46 crc kubenswrapper[4861]: I0310 20:20:46.037977 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/76304917-3c63-443d-b65a-ba75395de407-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "76304917-3c63-443d-b65a-ba75395de407" (UID: "76304917-3c63-443d-b65a-ba75395de407"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 10 20:20:46 crc kubenswrapper[4861]: I0310 20:20:46.043403 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/76304917-3c63-443d-b65a-ba75395de407-config-data" (OuterVolumeSpecName: "config-data") pod "76304917-3c63-443d-b65a-ba75395de407" (UID: "76304917-3c63-443d-b65a-ba75395de407"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 10 20:20:46 crc kubenswrapper[4861]: I0310 20:20:46.099456 4861 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76304917-3c63-443d-b65a-ba75395de407-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 10 20:20:46 crc kubenswrapper[4861]: I0310 20:20:46.099484 4861 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/76304917-3c63-443d-b65a-ba75395de407-credential-keys\") on node \"crc\" DevicePath \"\"" Mar 10 20:20:46 crc kubenswrapper[4861]: I0310 20:20:46.099493 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pxw65\" (UniqueName: \"kubernetes.io/projected/76304917-3c63-443d-b65a-ba75395de407-kube-api-access-pxw65\") on node \"crc\" DevicePath \"\"" Mar 10 20:20:46 crc kubenswrapper[4861]: I0310 20:20:46.099502 4861 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/76304917-3c63-443d-b65a-ba75395de407-scripts\") on node \"crc\" DevicePath \"\"" Mar 10 20:20:46 crc kubenswrapper[4861]: I0310 20:20:46.099511 4861 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/76304917-3c63-443d-b65a-ba75395de407-fernet-keys\") on node \"crc\" DevicePath \"\"" Mar 10 20:20:46 crc kubenswrapper[4861]: I0310 20:20:46.099519 4861 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/76304917-3c63-443d-b65a-ba75395de407-config-data\") on node \"crc\" DevicePath \"\"" Mar 10 20:20:46 crc kubenswrapper[4861]: I0310 20:20:46.354217 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-ggkjn" Mar 10 20:20:46 crc kubenswrapper[4861]: I0310 20:20:46.354086 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-ggkjn" event={"ID":"76304917-3c63-443d-b65a-ba75395de407","Type":"ContainerDied","Data":"10988e6fcde45241c96fa634e222ea3fcd5fee31480fe4c342e99d873e6a240a"} Mar 10 20:20:46 crc kubenswrapper[4861]: I0310 20:20:46.354759 4861 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="10988e6fcde45241c96fa634e222ea3fcd5fee31480fe4c342e99d873e6a240a" Mar 10 20:20:46 crc kubenswrapper[4861]: I0310 20:20:46.458539 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-ggkjn"] Mar 10 20:20:46 crc kubenswrapper[4861]: I0310 20:20:46.465661 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-ggkjn"] Mar 10 20:20:46 crc kubenswrapper[4861]: I0310 20:20:46.536055 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-f66jm"] Mar 10 20:20:46 crc kubenswrapper[4861]: E0310 20:20:46.537402 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="76304917-3c63-443d-b65a-ba75395de407" containerName="keystone-bootstrap" Mar 10 20:20:46 crc kubenswrapper[4861]: I0310 20:20:46.537436 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="76304917-3c63-443d-b65a-ba75395de407" containerName="keystone-bootstrap" Mar 10 20:20:46 crc kubenswrapper[4861]: I0310 20:20:46.537776 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="76304917-3c63-443d-b65a-ba75395de407" containerName="keystone-bootstrap" Mar 10 20:20:46 crc kubenswrapper[4861]: I0310 20:20:46.539162 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-f66jm" Mar 10 20:20:46 crc kubenswrapper[4861]: I0310 20:20:46.542524 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Mar 10 20:20:46 crc kubenswrapper[4861]: I0310 20:20:46.543931 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Mar 10 20:20:46 crc kubenswrapper[4861]: I0310 20:20:46.544583 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Mar 10 20:20:46 crc kubenswrapper[4861]: I0310 20:20:46.544884 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-mnn46" Mar 10 20:20:46 crc kubenswrapper[4861]: I0310 20:20:46.545376 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Mar 10 20:20:46 crc kubenswrapper[4861]: I0310 20:20:46.553947 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-f66jm"] Mar 10 20:20:46 crc kubenswrapper[4861]: I0310 20:20:46.717446 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/04798b5b-ca22-4161-8eec-fb253a6920ae-credential-keys\") pod \"keystone-bootstrap-f66jm\" (UID: \"04798b5b-ca22-4161-8eec-fb253a6920ae\") " pod="openstack/keystone-bootstrap-f66jm" Mar 10 20:20:46 crc kubenswrapper[4861]: I0310 20:20:46.717905 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/04798b5b-ca22-4161-8eec-fb253a6920ae-scripts\") pod \"keystone-bootstrap-f66jm\" (UID: \"04798b5b-ca22-4161-8eec-fb253a6920ae\") " pod="openstack/keystone-bootstrap-f66jm" Mar 10 20:20:46 crc kubenswrapper[4861]: I0310 20:20:46.718099 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/04798b5b-ca22-4161-8eec-fb253a6920ae-config-data\") pod \"keystone-bootstrap-f66jm\" (UID: \"04798b5b-ca22-4161-8eec-fb253a6920ae\") " pod="openstack/keystone-bootstrap-f66jm" Mar 10 20:20:46 crc kubenswrapper[4861]: I0310 20:20:46.718221 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/04798b5b-ca22-4161-8eec-fb253a6920ae-fernet-keys\") pod \"keystone-bootstrap-f66jm\" (UID: \"04798b5b-ca22-4161-8eec-fb253a6920ae\") " pod="openstack/keystone-bootstrap-f66jm" Mar 10 20:20:46 crc kubenswrapper[4861]: I0310 20:20:46.718394 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04798b5b-ca22-4161-8eec-fb253a6920ae-combined-ca-bundle\") pod \"keystone-bootstrap-f66jm\" (UID: \"04798b5b-ca22-4161-8eec-fb253a6920ae\") " pod="openstack/keystone-bootstrap-f66jm" Mar 10 20:20:46 crc kubenswrapper[4861]: I0310 20:20:46.718448 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7k29n\" (UniqueName: \"kubernetes.io/projected/04798b5b-ca22-4161-8eec-fb253a6920ae-kube-api-access-7k29n\") pod \"keystone-bootstrap-f66jm\" (UID: \"04798b5b-ca22-4161-8eec-fb253a6920ae\") " pod="openstack/keystone-bootstrap-f66jm" Mar 10 20:20:46 crc kubenswrapper[4861]: I0310 20:20:46.820080 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/04798b5b-ca22-4161-8eec-fb253a6920ae-scripts\") pod \"keystone-bootstrap-f66jm\" (UID: \"04798b5b-ca22-4161-8eec-fb253a6920ae\") " pod="openstack/keystone-bootstrap-f66jm" Mar 10 20:20:46 crc kubenswrapper[4861]: I0310 20:20:46.820153 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/04798b5b-ca22-4161-8eec-fb253a6920ae-config-data\") pod \"keystone-bootstrap-f66jm\" (UID: \"04798b5b-ca22-4161-8eec-fb253a6920ae\") " pod="openstack/keystone-bootstrap-f66jm" Mar 10 20:20:46 crc kubenswrapper[4861]: I0310 20:20:46.820199 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/04798b5b-ca22-4161-8eec-fb253a6920ae-fernet-keys\") pod \"keystone-bootstrap-f66jm\" (UID: \"04798b5b-ca22-4161-8eec-fb253a6920ae\") " pod="openstack/keystone-bootstrap-f66jm" Mar 10 20:20:46 crc kubenswrapper[4861]: I0310 20:20:46.820229 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04798b5b-ca22-4161-8eec-fb253a6920ae-combined-ca-bundle\") pod \"keystone-bootstrap-f66jm\" (UID: \"04798b5b-ca22-4161-8eec-fb253a6920ae\") " pod="openstack/keystone-bootstrap-f66jm" Mar 10 20:20:46 crc kubenswrapper[4861]: I0310 20:20:46.820254 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7k29n\" (UniqueName: \"kubernetes.io/projected/04798b5b-ca22-4161-8eec-fb253a6920ae-kube-api-access-7k29n\") pod \"keystone-bootstrap-f66jm\" (UID: \"04798b5b-ca22-4161-8eec-fb253a6920ae\") " pod="openstack/keystone-bootstrap-f66jm" Mar 10 20:20:46 crc kubenswrapper[4861]: I0310 20:20:46.820316 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/04798b5b-ca22-4161-8eec-fb253a6920ae-credential-keys\") pod \"keystone-bootstrap-f66jm\" (UID: \"04798b5b-ca22-4161-8eec-fb253a6920ae\") " pod="openstack/keystone-bootstrap-f66jm" Mar 10 20:20:46 crc kubenswrapper[4861]: I0310 20:20:46.826073 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/04798b5b-ca22-4161-8eec-fb253a6920ae-config-data\") pod 
\"keystone-bootstrap-f66jm\" (UID: \"04798b5b-ca22-4161-8eec-fb253a6920ae\") " pod="openstack/keystone-bootstrap-f66jm" Mar 10 20:20:46 crc kubenswrapper[4861]: I0310 20:20:46.829580 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/04798b5b-ca22-4161-8eec-fb253a6920ae-scripts\") pod \"keystone-bootstrap-f66jm\" (UID: \"04798b5b-ca22-4161-8eec-fb253a6920ae\") " pod="openstack/keystone-bootstrap-f66jm" Mar 10 20:20:46 crc kubenswrapper[4861]: I0310 20:20:46.830056 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/04798b5b-ca22-4161-8eec-fb253a6920ae-credential-keys\") pod \"keystone-bootstrap-f66jm\" (UID: \"04798b5b-ca22-4161-8eec-fb253a6920ae\") " pod="openstack/keystone-bootstrap-f66jm" Mar 10 20:20:46 crc kubenswrapper[4861]: I0310 20:20:46.836945 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/04798b5b-ca22-4161-8eec-fb253a6920ae-fernet-keys\") pod \"keystone-bootstrap-f66jm\" (UID: \"04798b5b-ca22-4161-8eec-fb253a6920ae\") " pod="openstack/keystone-bootstrap-f66jm" Mar 10 20:20:46 crc kubenswrapper[4861]: I0310 20:20:46.838039 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04798b5b-ca22-4161-8eec-fb253a6920ae-combined-ca-bundle\") pod \"keystone-bootstrap-f66jm\" (UID: \"04798b5b-ca22-4161-8eec-fb253a6920ae\") " pod="openstack/keystone-bootstrap-f66jm" Mar 10 20:20:46 crc kubenswrapper[4861]: I0310 20:20:46.851852 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7k29n\" (UniqueName: \"kubernetes.io/projected/04798b5b-ca22-4161-8eec-fb253a6920ae-kube-api-access-7k29n\") pod \"keystone-bootstrap-f66jm\" (UID: \"04798b5b-ca22-4161-8eec-fb253a6920ae\") " pod="openstack/keystone-bootstrap-f66jm" Mar 10 20:20:46 crc 
kubenswrapper[4861]: I0310 20:20:46.909919 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-f66jm" Mar 10 20:20:46 crc kubenswrapper[4861]: I0310 20:20:46.979025 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="76304917-3c63-443d-b65a-ba75395de407" path="/var/lib/kubelet/pods/76304917-3c63-443d-b65a-ba75395de407/volumes" Mar 10 20:20:47 crc kubenswrapper[4861]: I0310 20:20:47.197566 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-f66jm"] Mar 10 20:20:47 crc kubenswrapper[4861]: W0310 20:20:47.201927 4861 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod04798b5b_ca22_4161_8eec_fb253a6920ae.slice/crio-9fec69cccb2f94abe53de6a627a3209c0937de6994d077f518c6fb4783717541 WatchSource:0}: Error finding container 9fec69cccb2f94abe53de6a627a3209c0937de6994d077f518c6fb4783717541: Status 404 returned error can't find the container with id 9fec69cccb2f94abe53de6a627a3209c0937de6994d077f518c6fb4783717541 Mar 10 20:20:47 crc kubenswrapper[4861]: I0310 20:20:47.361081 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-f66jm" event={"ID":"04798b5b-ca22-4161-8eec-fb253a6920ae","Type":"ContainerStarted","Data":"9fec69cccb2f94abe53de6a627a3209c0937de6994d077f518c6fb4783717541"} Mar 10 20:20:48 crc kubenswrapper[4861]: I0310 20:20:48.373598 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-f66jm" event={"ID":"04798b5b-ca22-4161-8eec-fb253a6920ae","Type":"ContainerStarted","Data":"ce2c58116f4db9d5d03c7573d3e5071182570159afbba2679d4b6f18be416dce"} Mar 10 20:20:50 crc kubenswrapper[4861]: I0310 20:20:50.297011 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-56bc48dc8c-cqg68" Mar 10 20:20:50 crc kubenswrapper[4861]: I0310 20:20:50.330273 4861 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-f66jm" podStartSLOduration=4.33024241 podStartE2EDuration="4.33024241s" podCreationTimestamp="2026-03-10 20:20:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-10 20:20:48.403581521 +0000 UTC m=+5592.167017531" watchObservedRunningTime="2026-03-10 20:20:50.33024241 +0000 UTC m=+5594.093678410" Mar 10 20:20:50 crc kubenswrapper[4861]: I0310 20:20:50.392232 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-f59dd4759-z89lm"] Mar 10 20:20:50 crc kubenswrapper[4861]: I0310 20:20:50.392565 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-f59dd4759-z89lm" podUID="5ddf2cea-0c33-4026-9c21-21cf8a1c3dbf" containerName="dnsmasq-dns" containerID="cri-o://c2c1781d48ddf7189d349a9f830c02132e8f43fb091df1d6799a103f60740edf" gracePeriod=10 Mar 10 20:20:50 crc kubenswrapper[4861]: I0310 20:20:50.410810 4861 generic.go:334] "Generic (PLEG): container finished" podID="04798b5b-ca22-4161-8eec-fb253a6920ae" containerID="ce2c58116f4db9d5d03c7573d3e5071182570159afbba2679d4b6f18be416dce" exitCode=0 Mar 10 20:20:50 crc kubenswrapper[4861]: I0310 20:20:50.410897 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-f66jm" event={"ID":"04798b5b-ca22-4161-8eec-fb253a6920ae","Type":"ContainerDied","Data":"ce2c58116f4db9d5d03c7573d3e5071182570159afbba2679d4b6f18be416dce"} Mar 10 20:20:50 crc kubenswrapper[4861]: I0310 20:20:50.846772 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-f59dd4759-z89lm" Mar 10 20:20:50 crc kubenswrapper[4861]: I0310 20:20:50.998679 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5ddf2cea-0c33-4026-9c21-21cf8a1c3dbf-ovsdbserver-nb\") pod \"5ddf2cea-0c33-4026-9c21-21cf8a1c3dbf\" (UID: \"5ddf2cea-0c33-4026-9c21-21cf8a1c3dbf\") " Mar 10 20:20:50 crc kubenswrapper[4861]: I0310 20:20:50.998802 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pwxr2\" (UniqueName: \"kubernetes.io/projected/5ddf2cea-0c33-4026-9c21-21cf8a1c3dbf-kube-api-access-pwxr2\") pod \"5ddf2cea-0c33-4026-9c21-21cf8a1c3dbf\" (UID: \"5ddf2cea-0c33-4026-9c21-21cf8a1c3dbf\") " Mar 10 20:20:50 crc kubenswrapper[4861]: I0310 20:20:50.998832 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5ddf2cea-0c33-4026-9c21-21cf8a1c3dbf-ovsdbserver-sb\") pod \"5ddf2cea-0c33-4026-9c21-21cf8a1c3dbf\" (UID: \"5ddf2cea-0c33-4026-9c21-21cf8a1c3dbf\") " Mar 10 20:20:50 crc kubenswrapper[4861]: I0310 20:20:50.998900 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5ddf2cea-0c33-4026-9c21-21cf8a1c3dbf-config\") pod \"5ddf2cea-0c33-4026-9c21-21cf8a1c3dbf\" (UID: \"5ddf2cea-0c33-4026-9c21-21cf8a1c3dbf\") " Mar 10 20:20:50 crc kubenswrapper[4861]: I0310 20:20:50.998970 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5ddf2cea-0c33-4026-9c21-21cf8a1c3dbf-dns-svc\") pod \"5ddf2cea-0c33-4026-9c21-21cf8a1c3dbf\" (UID: \"5ddf2cea-0c33-4026-9c21-21cf8a1c3dbf\") " Mar 10 20:20:51 crc kubenswrapper[4861]: I0310 20:20:51.007992 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/5ddf2cea-0c33-4026-9c21-21cf8a1c3dbf-kube-api-access-pwxr2" (OuterVolumeSpecName: "kube-api-access-pwxr2") pod "5ddf2cea-0c33-4026-9c21-21cf8a1c3dbf" (UID: "5ddf2cea-0c33-4026-9c21-21cf8a1c3dbf"). InnerVolumeSpecName "kube-api-access-pwxr2". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 10 20:20:51 crc kubenswrapper[4861]: I0310 20:20:51.042292 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5ddf2cea-0c33-4026-9c21-21cf8a1c3dbf-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "5ddf2cea-0c33-4026-9c21-21cf8a1c3dbf" (UID: "5ddf2cea-0c33-4026-9c21-21cf8a1c3dbf"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 10 20:20:51 crc kubenswrapper[4861]: I0310 20:20:51.057099 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5ddf2cea-0c33-4026-9c21-21cf8a1c3dbf-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "5ddf2cea-0c33-4026-9c21-21cf8a1c3dbf" (UID: "5ddf2cea-0c33-4026-9c21-21cf8a1c3dbf"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 10 20:20:51 crc kubenswrapper[4861]: I0310 20:20:51.060812 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5ddf2cea-0c33-4026-9c21-21cf8a1c3dbf-config" (OuterVolumeSpecName: "config") pod "5ddf2cea-0c33-4026-9c21-21cf8a1c3dbf" (UID: "5ddf2cea-0c33-4026-9c21-21cf8a1c3dbf"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 10 20:20:51 crc kubenswrapper[4861]: I0310 20:20:51.071029 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5ddf2cea-0c33-4026-9c21-21cf8a1c3dbf-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "5ddf2cea-0c33-4026-9c21-21cf8a1c3dbf" (UID: "5ddf2cea-0c33-4026-9c21-21cf8a1c3dbf"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 10 20:20:51 crc kubenswrapper[4861]: I0310 20:20:51.100430 4861 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5ddf2cea-0c33-4026-9c21-21cf8a1c3dbf-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Mar 10 20:20:51 crc kubenswrapper[4861]: I0310 20:20:51.100458 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pwxr2\" (UniqueName: \"kubernetes.io/projected/5ddf2cea-0c33-4026-9c21-21cf8a1c3dbf-kube-api-access-pwxr2\") on node \"crc\" DevicePath \"\"" Mar 10 20:20:51 crc kubenswrapper[4861]: I0310 20:20:51.100468 4861 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5ddf2cea-0c33-4026-9c21-21cf8a1c3dbf-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Mar 10 20:20:51 crc kubenswrapper[4861]: I0310 20:20:51.100480 4861 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5ddf2cea-0c33-4026-9c21-21cf8a1c3dbf-config\") on node \"crc\" DevicePath \"\"" Mar 10 20:20:51 crc kubenswrapper[4861]: I0310 20:20:51.100490 4861 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5ddf2cea-0c33-4026-9c21-21cf8a1c3dbf-dns-svc\") on node \"crc\" DevicePath \"\"" Mar 10 20:20:51 crc kubenswrapper[4861]: I0310 20:20:51.425015 4861 generic.go:334] "Generic (PLEG): container finished" podID="5ddf2cea-0c33-4026-9c21-21cf8a1c3dbf" containerID="c2c1781d48ddf7189d349a9f830c02132e8f43fb091df1d6799a103f60740edf" exitCode=0 Mar 10 20:20:51 crc kubenswrapper[4861]: I0310 20:20:51.425099 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-f59dd4759-z89lm" event={"ID":"5ddf2cea-0c33-4026-9c21-21cf8a1c3dbf","Type":"ContainerDied","Data":"c2c1781d48ddf7189d349a9f830c02132e8f43fb091df1d6799a103f60740edf"} Mar 10 20:20:51 crc kubenswrapper[4861]: I0310 
20:20:51.425153 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-f59dd4759-z89lm" event={"ID":"5ddf2cea-0c33-4026-9c21-21cf8a1c3dbf","Type":"ContainerDied","Data":"72247731b85473b4504f1634cbded5d51a810edc50e7b5c21b3b6adda25030fd"} Mar 10 20:20:51 crc kubenswrapper[4861]: I0310 20:20:51.425072 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-f59dd4759-z89lm" Mar 10 20:20:51 crc kubenswrapper[4861]: I0310 20:20:51.425192 4861 scope.go:117] "RemoveContainer" containerID="c2c1781d48ddf7189d349a9f830c02132e8f43fb091df1d6799a103f60740edf" Mar 10 20:20:51 crc kubenswrapper[4861]: I0310 20:20:51.462685 4861 scope.go:117] "RemoveContainer" containerID="2e84493bb17694d5f4bb95cd76eac170d6fb9d586a811af119a656b496b843a2" Mar 10 20:20:51 crc kubenswrapper[4861]: I0310 20:20:51.483408 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-f59dd4759-z89lm"] Mar 10 20:20:51 crc kubenswrapper[4861]: I0310 20:20:51.493929 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-f59dd4759-z89lm"] Mar 10 20:20:51 crc kubenswrapper[4861]: I0310 20:20:51.513633 4861 scope.go:117] "RemoveContainer" containerID="c2c1781d48ddf7189d349a9f830c02132e8f43fb091df1d6799a103f60740edf" Mar 10 20:20:51 crc kubenswrapper[4861]: E0310 20:20:51.514353 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c2c1781d48ddf7189d349a9f830c02132e8f43fb091df1d6799a103f60740edf\": container with ID starting with c2c1781d48ddf7189d349a9f830c02132e8f43fb091df1d6799a103f60740edf not found: ID does not exist" containerID="c2c1781d48ddf7189d349a9f830c02132e8f43fb091df1d6799a103f60740edf" Mar 10 20:20:51 crc kubenswrapper[4861]: I0310 20:20:51.514435 4861 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"c2c1781d48ddf7189d349a9f830c02132e8f43fb091df1d6799a103f60740edf"} err="failed to get container status \"c2c1781d48ddf7189d349a9f830c02132e8f43fb091df1d6799a103f60740edf\": rpc error: code = NotFound desc = could not find container \"c2c1781d48ddf7189d349a9f830c02132e8f43fb091df1d6799a103f60740edf\": container with ID starting with c2c1781d48ddf7189d349a9f830c02132e8f43fb091df1d6799a103f60740edf not found: ID does not exist" Mar 10 20:20:51 crc kubenswrapper[4861]: I0310 20:20:51.514483 4861 scope.go:117] "RemoveContainer" containerID="2e84493bb17694d5f4bb95cd76eac170d6fb9d586a811af119a656b496b843a2" Mar 10 20:20:51 crc kubenswrapper[4861]: E0310 20:20:51.514984 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2e84493bb17694d5f4bb95cd76eac170d6fb9d586a811af119a656b496b843a2\": container with ID starting with 2e84493bb17694d5f4bb95cd76eac170d6fb9d586a811af119a656b496b843a2 not found: ID does not exist" containerID="2e84493bb17694d5f4bb95cd76eac170d6fb9d586a811af119a656b496b843a2" Mar 10 20:20:51 crc kubenswrapper[4861]: I0310 20:20:51.515045 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2e84493bb17694d5f4bb95cd76eac170d6fb9d586a811af119a656b496b843a2"} err="failed to get container status \"2e84493bb17694d5f4bb95cd76eac170d6fb9d586a811af119a656b496b843a2\": rpc error: code = NotFound desc = could not find container \"2e84493bb17694d5f4bb95cd76eac170d6fb9d586a811af119a656b496b843a2\": container with ID starting with 2e84493bb17694d5f4bb95cd76eac170d6fb9d586a811af119a656b496b843a2 not found: ID does not exist" Mar 10 20:20:51 crc kubenswrapper[4861]: I0310 20:20:51.876769 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-f66jm" Mar 10 20:20:51 crc kubenswrapper[4861]: I0310 20:20:51.958468 4861 scope.go:117] "RemoveContainer" containerID="cd2259ee04441075924dcee5b49ce2f64778d63f5c1b13672065b616c80497ca" Mar 10 20:20:51 crc kubenswrapper[4861]: E0310 20:20:51.958930 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qttbr_openshift-machine-config-operator(771189c2-452d-4204-a0b7-abfe9ba62bd0)\"" pod="openshift-machine-config-operator/machine-config-daemon-qttbr" podUID="771189c2-452d-4204-a0b7-abfe9ba62bd0" Mar 10 20:20:52 crc kubenswrapper[4861]: I0310 20:20:52.021446 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04798b5b-ca22-4161-8eec-fb253a6920ae-combined-ca-bundle\") pod \"04798b5b-ca22-4161-8eec-fb253a6920ae\" (UID: \"04798b5b-ca22-4161-8eec-fb253a6920ae\") " Mar 10 20:20:52 crc kubenswrapper[4861]: I0310 20:20:52.021625 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7k29n\" (UniqueName: \"kubernetes.io/projected/04798b5b-ca22-4161-8eec-fb253a6920ae-kube-api-access-7k29n\") pod \"04798b5b-ca22-4161-8eec-fb253a6920ae\" (UID: \"04798b5b-ca22-4161-8eec-fb253a6920ae\") " Mar 10 20:20:52 crc kubenswrapper[4861]: I0310 20:20:52.021655 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/04798b5b-ca22-4161-8eec-fb253a6920ae-scripts\") pod \"04798b5b-ca22-4161-8eec-fb253a6920ae\" (UID: \"04798b5b-ca22-4161-8eec-fb253a6920ae\") " Mar 10 20:20:52 crc kubenswrapper[4861]: I0310 20:20:52.021697 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/04798b5b-ca22-4161-8eec-fb253a6920ae-config-data\") pod \"04798b5b-ca22-4161-8eec-fb253a6920ae\" (UID: \"04798b5b-ca22-4161-8eec-fb253a6920ae\") " Mar 10 20:20:52 crc kubenswrapper[4861]: I0310 20:20:52.021791 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/04798b5b-ca22-4161-8eec-fb253a6920ae-credential-keys\") pod \"04798b5b-ca22-4161-8eec-fb253a6920ae\" (UID: \"04798b5b-ca22-4161-8eec-fb253a6920ae\") " Mar 10 20:20:52 crc kubenswrapper[4861]: I0310 20:20:52.021831 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/04798b5b-ca22-4161-8eec-fb253a6920ae-fernet-keys\") pod \"04798b5b-ca22-4161-8eec-fb253a6920ae\" (UID: \"04798b5b-ca22-4161-8eec-fb253a6920ae\") " Mar 10 20:20:52 crc kubenswrapper[4861]: I0310 20:20:52.027412 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/04798b5b-ca22-4161-8eec-fb253a6920ae-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "04798b5b-ca22-4161-8eec-fb253a6920ae" (UID: "04798b5b-ca22-4161-8eec-fb253a6920ae"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 10 20:20:52 crc kubenswrapper[4861]: I0310 20:20:52.027625 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/04798b5b-ca22-4161-8eec-fb253a6920ae-scripts" (OuterVolumeSpecName: "scripts") pod "04798b5b-ca22-4161-8eec-fb253a6920ae" (UID: "04798b5b-ca22-4161-8eec-fb253a6920ae"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 10 20:20:52 crc kubenswrapper[4861]: I0310 20:20:52.028274 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/04798b5b-ca22-4161-8eec-fb253a6920ae-kube-api-access-7k29n" (OuterVolumeSpecName: "kube-api-access-7k29n") pod "04798b5b-ca22-4161-8eec-fb253a6920ae" (UID: "04798b5b-ca22-4161-8eec-fb253a6920ae"). InnerVolumeSpecName "kube-api-access-7k29n". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 10 20:20:52 crc kubenswrapper[4861]: I0310 20:20:52.028840 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/04798b5b-ca22-4161-8eec-fb253a6920ae-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "04798b5b-ca22-4161-8eec-fb253a6920ae" (UID: "04798b5b-ca22-4161-8eec-fb253a6920ae"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 10 20:20:52 crc kubenswrapper[4861]: I0310 20:20:52.045000 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/04798b5b-ca22-4161-8eec-fb253a6920ae-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "04798b5b-ca22-4161-8eec-fb253a6920ae" (UID: "04798b5b-ca22-4161-8eec-fb253a6920ae"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 10 20:20:52 crc kubenswrapper[4861]: I0310 20:20:52.054548 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/04798b5b-ca22-4161-8eec-fb253a6920ae-config-data" (OuterVolumeSpecName: "config-data") pod "04798b5b-ca22-4161-8eec-fb253a6920ae" (UID: "04798b5b-ca22-4161-8eec-fb253a6920ae"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 10 20:20:52 crc kubenswrapper[4861]: I0310 20:20:52.125848 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7k29n\" (UniqueName: \"kubernetes.io/projected/04798b5b-ca22-4161-8eec-fb253a6920ae-kube-api-access-7k29n\") on node \"crc\" DevicePath \"\"" Mar 10 20:20:52 crc kubenswrapper[4861]: I0310 20:20:52.125895 4861 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/04798b5b-ca22-4161-8eec-fb253a6920ae-scripts\") on node \"crc\" DevicePath \"\"" Mar 10 20:20:52 crc kubenswrapper[4861]: I0310 20:20:52.125916 4861 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/04798b5b-ca22-4161-8eec-fb253a6920ae-config-data\") on node \"crc\" DevicePath \"\"" Mar 10 20:20:52 crc kubenswrapper[4861]: I0310 20:20:52.125935 4861 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/04798b5b-ca22-4161-8eec-fb253a6920ae-credential-keys\") on node \"crc\" DevicePath \"\"" Mar 10 20:20:52 crc kubenswrapper[4861]: I0310 20:20:52.125952 4861 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/04798b5b-ca22-4161-8eec-fb253a6920ae-fernet-keys\") on node \"crc\" DevicePath \"\"" Mar 10 20:20:52 crc kubenswrapper[4861]: I0310 20:20:52.125969 4861 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04798b5b-ca22-4161-8eec-fb253a6920ae-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 10 20:20:52 crc kubenswrapper[4861]: I0310 20:20:52.439367 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-f66jm" event={"ID":"04798b5b-ca22-4161-8eec-fb253a6920ae","Type":"ContainerDied","Data":"9fec69cccb2f94abe53de6a627a3209c0937de6994d077f518c6fb4783717541"} Mar 10 20:20:52 crc kubenswrapper[4861]: I0310 
20:20:52.439452 4861 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9fec69cccb2f94abe53de6a627a3209c0937de6994d077f518c6fb4783717541" Mar 10 20:20:52 crc kubenswrapper[4861]: I0310 20:20:52.439528 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-f66jm" Mar 10 20:20:52 crc kubenswrapper[4861]: I0310 20:20:52.570683 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-5f4454dc7f-r2rxb"] Mar 10 20:20:52 crc kubenswrapper[4861]: E0310 20:20:52.571153 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5ddf2cea-0c33-4026-9c21-21cf8a1c3dbf" containerName="dnsmasq-dns" Mar 10 20:20:52 crc kubenswrapper[4861]: I0310 20:20:52.571184 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="5ddf2cea-0c33-4026-9c21-21cf8a1c3dbf" containerName="dnsmasq-dns" Mar 10 20:20:52 crc kubenswrapper[4861]: E0310 20:20:52.571221 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="04798b5b-ca22-4161-8eec-fb253a6920ae" containerName="keystone-bootstrap" Mar 10 20:20:52 crc kubenswrapper[4861]: I0310 20:20:52.571235 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="04798b5b-ca22-4161-8eec-fb253a6920ae" containerName="keystone-bootstrap" Mar 10 20:20:52 crc kubenswrapper[4861]: E0310 20:20:52.571256 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5ddf2cea-0c33-4026-9c21-21cf8a1c3dbf" containerName="init" Mar 10 20:20:52 crc kubenswrapper[4861]: I0310 20:20:52.571269 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="5ddf2cea-0c33-4026-9c21-21cf8a1c3dbf" containerName="init" Mar 10 20:20:52 crc kubenswrapper[4861]: I0310 20:20:52.571559 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="04798b5b-ca22-4161-8eec-fb253a6920ae" containerName="keystone-bootstrap" Mar 10 20:20:52 crc kubenswrapper[4861]: I0310 20:20:52.571613 4861 memory_manager.go:354] "RemoveStaleState 
removing state" podUID="5ddf2cea-0c33-4026-9c21-21cf8a1c3dbf" containerName="dnsmasq-dns" Mar 10 20:20:52 crc kubenswrapper[4861]: I0310 20:20:52.572461 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-5f4454dc7f-r2rxb" Mar 10 20:20:52 crc kubenswrapper[4861]: I0310 20:20:52.576321 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc" Mar 10 20:20:52 crc kubenswrapper[4861]: I0310 20:20:52.576401 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Mar 10 20:20:52 crc kubenswrapper[4861]: I0310 20:20:52.576703 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-mnn46" Mar 10 20:20:52 crc kubenswrapper[4861]: I0310 20:20:52.576933 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc" Mar 10 20:20:52 crc kubenswrapper[4861]: I0310 20:20:52.577396 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Mar 10 20:20:52 crc kubenswrapper[4861]: I0310 20:20:52.578282 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Mar 10 20:20:52 crc kubenswrapper[4861]: I0310 20:20:52.595372 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-5f4454dc7f-r2rxb"] Mar 10 20:20:52 crc kubenswrapper[4861]: I0310 20:20:52.738011 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a1c3c19f-4213-434d-add9-eae0f9cdaadc-internal-tls-certs\") pod \"keystone-5f4454dc7f-r2rxb\" (UID: \"a1c3c19f-4213-434d-add9-eae0f9cdaadc\") " pod="openstack/keystone-5f4454dc7f-r2rxb" Mar 10 20:20:52 crc kubenswrapper[4861]: I0310 20:20:52.738090 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a1c3c19f-4213-434d-add9-eae0f9cdaadc-public-tls-certs\") pod \"keystone-5f4454dc7f-r2rxb\" (UID: \"a1c3c19f-4213-434d-add9-eae0f9cdaadc\") " pod="openstack/keystone-5f4454dc7f-r2rxb" Mar 10 20:20:52 crc kubenswrapper[4861]: I0310 20:20:52.738129 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a1c3c19f-4213-434d-add9-eae0f9cdaadc-config-data\") pod \"keystone-5f4454dc7f-r2rxb\" (UID: \"a1c3c19f-4213-434d-add9-eae0f9cdaadc\") " pod="openstack/keystone-5f4454dc7f-r2rxb" Mar 10 20:20:52 crc kubenswrapper[4861]: I0310 20:20:52.738151 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nhk5g\" (UniqueName: \"kubernetes.io/projected/a1c3c19f-4213-434d-add9-eae0f9cdaadc-kube-api-access-nhk5g\") pod \"keystone-5f4454dc7f-r2rxb\" (UID: \"a1c3c19f-4213-434d-add9-eae0f9cdaadc\") " pod="openstack/keystone-5f4454dc7f-r2rxb" Mar 10 20:20:52 crc kubenswrapper[4861]: I0310 20:20:52.738317 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a1c3c19f-4213-434d-add9-eae0f9cdaadc-scripts\") pod \"keystone-5f4454dc7f-r2rxb\" (UID: \"a1c3c19f-4213-434d-add9-eae0f9cdaadc\") " pod="openstack/keystone-5f4454dc7f-r2rxb" Mar 10 20:20:52 crc kubenswrapper[4861]: I0310 20:20:52.738392 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/a1c3c19f-4213-434d-add9-eae0f9cdaadc-fernet-keys\") pod \"keystone-5f4454dc7f-r2rxb\" (UID: \"a1c3c19f-4213-434d-add9-eae0f9cdaadc\") " pod="openstack/keystone-5f4454dc7f-r2rxb" Mar 10 20:20:52 crc kubenswrapper[4861]: I0310 20:20:52.738432 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"credential-keys\" (UniqueName: \"kubernetes.io/secret/a1c3c19f-4213-434d-add9-eae0f9cdaadc-credential-keys\") pod \"keystone-5f4454dc7f-r2rxb\" (UID: \"a1c3c19f-4213-434d-add9-eae0f9cdaadc\") " pod="openstack/keystone-5f4454dc7f-r2rxb" Mar 10 20:20:52 crc kubenswrapper[4861]: I0310 20:20:52.738457 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a1c3c19f-4213-434d-add9-eae0f9cdaadc-combined-ca-bundle\") pod \"keystone-5f4454dc7f-r2rxb\" (UID: \"a1c3c19f-4213-434d-add9-eae0f9cdaadc\") " pod="openstack/keystone-5f4454dc7f-r2rxb" Mar 10 20:20:52 crc kubenswrapper[4861]: I0310 20:20:52.840175 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/a1c3c19f-4213-434d-add9-eae0f9cdaadc-credential-keys\") pod \"keystone-5f4454dc7f-r2rxb\" (UID: \"a1c3c19f-4213-434d-add9-eae0f9cdaadc\") " pod="openstack/keystone-5f4454dc7f-r2rxb" Mar 10 20:20:52 crc kubenswrapper[4861]: I0310 20:20:52.840222 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a1c3c19f-4213-434d-add9-eae0f9cdaadc-combined-ca-bundle\") pod \"keystone-5f4454dc7f-r2rxb\" (UID: \"a1c3c19f-4213-434d-add9-eae0f9cdaadc\") " pod="openstack/keystone-5f4454dc7f-r2rxb" Mar 10 20:20:52 crc kubenswrapper[4861]: I0310 20:20:52.840245 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a1c3c19f-4213-434d-add9-eae0f9cdaadc-internal-tls-certs\") pod \"keystone-5f4454dc7f-r2rxb\" (UID: \"a1c3c19f-4213-434d-add9-eae0f9cdaadc\") " pod="openstack/keystone-5f4454dc7f-r2rxb" Mar 10 20:20:52 crc kubenswrapper[4861]: I0310 20:20:52.840291 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/a1c3c19f-4213-434d-add9-eae0f9cdaadc-public-tls-certs\") pod \"keystone-5f4454dc7f-r2rxb\" (UID: \"a1c3c19f-4213-434d-add9-eae0f9cdaadc\") " pod="openstack/keystone-5f4454dc7f-r2rxb" Mar 10 20:20:52 crc kubenswrapper[4861]: I0310 20:20:52.840329 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a1c3c19f-4213-434d-add9-eae0f9cdaadc-config-data\") pod \"keystone-5f4454dc7f-r2rxb\" (UID: \"a1c3c19f-4213-434d-add9-eae0f9cdaadc\") " pod="openstack/keystone-5f4454dc7f-r2rxb" Mar 10 20:20:52 crc kubenswrapper[4861]: I0310 20:20:52.840349 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nhk5g\" (UniqueName: \"kubernetes.io/projected/a1c3c19f-4213-434d-add9-eae0f9cdaadc-kube-api-access-nhk5g\") pod \"keystone-5f4454dc7f-r2rxb\" (UID: \"a1c3c19f-4213-434d-add9-eae0f9cdaadc\") " pod="openstack/keystone-5f4454dc7f-r2rxb" Mar 10 20:20:52 crc kubenswrapper[4861]: I0310 20:20:52.840401 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a1c3c19f-4213-434d-add9-eae0f9cdaadc-scripts\") pod \"keystone-5f4454dc7f-r2rxb\" (UID: \"a1c3c19f-4213-434d-add9-eae0f9cdaadc\") " pod="openstack/keystone-5f4454dc7f-r2rxb" Mar 10 20:20:52 crc kubenswrapper[4861]: I0310 20:20:52.840421 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/a1c3c19f-4213-434d-add9-eae0f9cdaadc-fernet-keys\") pod \"keystone-5f4454dc7f-r2rxb\" (UID: \"a1c3c19f-4213-434d-add9-eae0f9cdaadc\") " pod="openstack/keystone-5f4454dc7f-r2rxb" Mar 10 20:20:52 crc kubenswrapper[4861]: I0310 20:20:52.844083 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a1c3c19f-4213-434d-add9-eae0f9cdaadc-scripts\") pod \"keystone-5f4454dc7f-r2rxb\" (UID: 
\"a1c3c19f-4213-434d-add9-eae0f9cdaadc\") " pod="openstack/keystone-5f4454dc7f-r2rxb" Mar 10 20:20:52 crc kubenswrapper[4861]: I0310 20:20:52.844400 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/a1c3c19f-4213-434d-add9-eae0f9cdaadc-fernet-keys\") pod \"keystone-5f4454dc7f-r2rxb\" (UID: \"a1c3c19f-4213-434d-add9-eae0f9cdaadc\") " pod="openstack/keystone-5f4454dc7f-r2rxb" Mar 10 20:20:52 crc kubenswrapper[4861]: I0310 20:20:52.847153 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/a1c3c19f-4213-434d-add9-eae0f9cdaadc-credential-keys\") pod \"keystone-5f4454dc7f-r2rxb\" (UID: \"a1c3c19f-4213-434d-add9-eae0f9cdaadc\") " pod="openstack/keystone-5f4454dc7f-r2rxb" Mar 10 20:20:52 crc kubenswrapper[4861]: I0310 20:20:52.847664 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a1c3c19f-4213-434d-add9-eae0f9cdaadc-public-tls-certs\") pod \"keystone-5f4454dc7f-r2rxb\" (UID: \"a1c3c19f-4213-434d-add9-eae0f9cdaadc\") " pod="openstack/keystone-5f4454dc7f-r2rxb" Mar 10 20:20:52 crc kubenswrapper[4861]: I0310 20:20:52.847699 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a1c3c19f-4213-434d-add9-eae0f9cdaadc-config-data\") pod \"keystone-5f4454dc7f-r2rxb\" (UID: \"a1c3c19f-4213-434d-add9-eae0f9cdaadc\") " pod="openstack/keystone-5f4454dc7f-r2rxb" Mar 10 20:20:52 crc kubenswrapper[4861]: I0310 20:20:52.851343 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a1c3c19f-4213-434d-add9-eae0f9cdaadc-combined-ca-bundle\") pod \"keystone-5f4454dc7f-r2rxb\" (UID: \"a1c3c19f-4213-434d-add9-eae0f9cdaadc\") " pod="openstack/keystone-5f4454dc7f-r2rxb" Mar 10 20:20:52 crc kubenswrapper[4861]: I0310 
20:20:52.860261 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nhk5g\" (UniqueName: \"kubernetes.io/projected/a1c3c19f-4213-434d-add9-eae0f9cdaadc-kube-api-access-nhk5g\") pod \"keystone-5f4454dc7f-r2rxb\" (UID: \"a1c3c19f-4213-434d-add9-eae0f9cdaadc\") " pod="openstack/keystone-5f4454dc7f-r2rxb" Mar 10 20:20:52 crc kubenswrapper[4861]: I0310 20:20:52.865101 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a1c3c19f-4213-434d-add9-eae0f9cdaadc-internal-tls-certs\") pod \"keystone-5f4454dc7f-r2rxb\" (UID: \"a1c3c19f-4213-434d-add9-eae0f9cdaadc\") " pod="openstack/keystone-5f4454dc7f-r2rxb" Mar 10 20:20:52 crc kubenswrapper[4861]: I0310 20:20:52.893185 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-5f4454dc7f-r2rxb" Mar 10 20:20:52 crc kubenswrapper[4861]: I0310 20:20:52.967416 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5ddf2cea-0c33-4026-9c21-21cf8a1c3dbf" path="/var/lib/kubelet/pods/5ddf2cea-0c33-4026-9c21-21cf8a1c3dbf/volumes" Mar 10 20:20:53 crc kubenswrapper[4861]: I0310 20:20:53.319680 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-5f4454dc7f-r2rxb"] Mar 10 20:20:53 crc kubenswrapper[4861]: I0310 20:20:53.454024 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-5f4454dc7f-r2rxb" event={"ID":"a1c3c19f-4213-434d-add9-eae0f9cdaadc","Type":"ContainerStarted","Data":"ec1895babad7b7a22c73cb43188eafec1b4b7d4b0e376f882617932d00c6e56d"} Mar 10 20:20:54 crc kubenswrapper[4861]: I0310 20:20:54.467372 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-5f4454dc7f-r2rxb" event={"ID":"a1c3c19f-4213-434d-add9-eae0f9cdaadc","Type":"ContainerStarted","Data":"cf91e0c3d113c7a537b6cd6a48078d2c590879911ccbb29e808c4f6911d5f1c8"} Mar 10 20:20:54 crc kubenswrapper[4861]: I0310 
20:20:54.467813 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-5f4454dc7f-r2rxb" Mar 10 20:20:54 crc kubenswrapper[4861]: I0310 20:20:54.500145 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-5f4454dc7f-r2rxb" podStartSLOduration=2.500121163 podStartE2EDuration="2.500121163s" podCreationTimestamp="2026-03-10 20:20:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-10 20:20:54.491895549 +0000 UTC m=+5598.255331519" watchObservedRunningTime="2026-03-10 20:20:54.500121163 +0000 UTC m=+5598.263557163" Mar 10 20:20:58 crc kubenswrapper[4861]: I0310 20:20:58.153486 4861 scope.go:117] "RemoveContainer" containerID="605c81f9b82753ffa9fc02eeb7df041b3925d1b11c15ec85c21f773ec9c9ff4e" Mar 10 20:21:02 crc kubenswrapper[4861]: I0310 20:21:02.958382 4861 scope.go:117] "RemoveContainer" containerID="cd2259ee04441075924dcee5b49ce2f64778d63f5c1b13672065b616c80497ca" Mar 10 20:21:02 crc kubenswrapper[4861]: E0310 20:21:02.959566 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qttbr_openshift-machine-config-operator(771189c2-452d-4204-a0b7-abfe9ba62bd0)\"" pod="openshift-machine-config-operator/machine-config-daemon-qttbr" podUID="771189c2-452d-4204-a0b7-abfe9ba62bd0" Mar 10 20:21:16 crc kubenswrapper[4861]: I0310 20:21:16.972824 4861 scope.go:117] "RemoveContainer" containerID="cd2259ee04441075924dcee5b49ce2f64778d63f5c1b13672065b616c80497ca" Mar 10 20:21:16 crc kubenswrapper[4861]: E0310 20:21:16.974221 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-qttbr_openshift-machine-config-operator(771189c2-452d-4204-a0b7-abfe9ba62bd0)\"" pod="openshift-machine-config-operator/machine-config-daemon-qttbr" podUID="771189c2-452d-4204-a0b7-abfe9ba62bd0" Mar 10 20:21:24 crc kubenswrapper[4861]: I0310 20:21:24.266773 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-5f4454dc7f-r2rxb" Mar 10 20:21:27 crc kubenswrapper[4861]: I0310 20:21:27.959287 4861 scope.go:117] "RemoveContainer" containerID="cd2259ee04441075924dcee5b49ce2f64778d63f5c1b13672065b616c80497ca" Mar 10 20:21:27 crc kubenswrapper[4861]: E0310 20:21:27.960502 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qttbr_openshift-machine-config-operator(771189c2-452d-4204-a0b7-abfe9ba62bd0)\"" pod="openshift-machine-config-operator/machine-config-daemon-qttbr" podUID="771189c2-452d-4204-a0b7-abfe9ba62bd0" Mar 10 20:21:28 crc kubenswrapper[4861]: I0310 20:21:28.484921 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Mar 10 20:21:28 crc kubenswrapper[4861]: I0310 20:21:28.487003 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Mar 10 20:21:28 crc kubenswrapper[4861]: I0310 20:21:28.490150 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret" Mar 10 20:21:28 crc kubenswrapper[4861]: I0310 20:21:28.491263 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-m75t7" Mar 10 20:21:28 crc kubenswrapper[4861]: I0310 20:21:28.491605 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config" Mar 10 20:21:28 crc kubenswrapper[4861]: I0310 20:21:28.507975 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Mar 10 20:21:28 crc kubenswrapper[4861]: I0310 20:21:28.563221 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstackclient"] Mar 10 20:21:28 crc kubenswrapper[4861]: E0310 20:21:28.563925 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[combined-ca-bundle kube-api-access-nxkdt openstack-config openstack-config-secret], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openstack/openstackclient" podUID="77b348a2-7d07-4404-be13-7f505804b6bc" Mar 10 20:21:28 crc kubenswrapper[4861]: I0310 20:21:28.569051 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/openstackclient"] Mar 10 20:21:28 crc kubenswrapper[4861]: I0310 20:21:28.627270 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Mar 10 20:21:28 crc kubenswrapper[4861]: I0310 20:21:28.628220 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Mar 10 20:21:28 crc kubenswrapper[4861]: I0310 20:21:28.650475 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nxkdt\" (UniqueName: \"kubernetes.io/projected/77b348a2-7d07-4404-be13-7f505804b6bc-kube-api-access-nxkdt\") pod \"openstackclient\" (UID: \"77b348a2-7d07-4404-be13-7f505804b6bc\") " pod="openstack/openstackclient" Mar 10 20:21:28 crc kubenswrapper[4861]: I0310 20:21:28.650551 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/77b348a2-7d07-4404-be13-7f505804b6bc-openstack-config-secret\") pod \"openstackclient\" (UID: \"77b348a2-7d07-4404-be13-7f505804b6bc\") " pod="openstack/openstackclient" Mar 10 20:21:28 crc kubenswrapper[4861]: I0310 20:21:28.650795 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/77b348a2-7d07-4404-be13-7f505804b6bc-openstack-config\") pod \"openstackclient\" (UID: \"77b348a2-7d07-4404-be13-7f505804b6bc\") " pod="openstack/openstackclient" Mar 10 20:21:28 crc kubenswrapper[4861]: I0310 20:21:28.651067 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/77b348a2-7d07-4404-be13-7f505804b6bc-combined-ca-bundle\") pod \"openstackclient\" (UID: \"77b348a2-7d07-4404-be13-7f505804b6bc\") " pod="openstack/openstackclient" Mar 10 20:21:28 crc kubenswrapper[4861]: I0310 20:21:28.654978 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Mar 10 20:21:28 crc kubenswrapper[4861]: I0310 20:21:28.753310 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-87xq7\" (UniqueName: 
\"kubernetes.io/projected/e79e436f-8808-4985-8359-2d6c89e65aef-kube-api-access-87xq7\") pod \"openstackclient\" (UID: \"e79e436f-8808-4985-8359-2d6c89e65aef\") " pod="openstack/openstackclient" Mar 10 20:21:28 crc kubenswrapper[4861]: I0310 20:21:28.753397 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/e79e436f-8808-4985-8359-2d6c89e65aef-openstack-config\") pod \"openstackclient\" (UID: \"e79e436f-8808-4985-8359-2d6c89e65aef\") " pod="openstack/openstackclient" Mar 10 20:21:28 crc kubenswrapper[4861]: I0310 20:21:28.753595 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nxkdt\" (UniqueName: \"kubernetes.io/projected/77b348a2-7d07-4404-be13-7f505804b6bc-kube-api-access-nxkdt\") pod \"openstackclient\" (UID: \"77b348a2-7d07-4404-be13-7f505804b6bc\") " pod="openstack/openstackclient" Mar 10 20:21:28 crc kubenswrapper[4861]: I0310 20:21:28.753665 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/77b348a2-7d07-4404-be13-7f505804b6bc-openstack-config-secret\") pod \"openstackclient\" (UID: \"77b348a2-7d07-4404-be13-7f505804b6bc\") " pod="openstack/openstackclient" Mar 10 20:21:28 crc kubenswrapper[4861]: I0310 20:21:28.753880 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/77b348a2-7d07-4404-be13-7f505804b6bc-openstack-config\") pod \"openstackclient\" (UID: \"77b348a2-7d07-4404-be13-7f505804b6bc\") " pod="openstack/openstackclient" Mar 10 20:21:28 crc kubenswrapper[4861]: I0310 20:21:28.754069 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e79e436f-8808-4985-8359-2d6c89e65aef-combined-ca-bundle\") pod 
\"openstackclient\" (UID: \"e79e436f-8808-4985-8359-2d6c89e65aef\") " pod="openstack/openstackclient" Mar 10 20:21:28 crc kubenswrapper[4861]: I0310 20:21:28.754193 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/e79e436f-8808-4985-8359-2d6c89e65aef-openstack-config-secret\") pod \"openstackclient\" (UID: \"e79e436f-8808-4985-8359-2d6c89e65aef\") " pod="openstack/openstackclient" Mar 10 20:21:28 crc kubenswrapper[4861]: I0310 20:21:28.754290 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/77b348a2-7d07-4404-be13-7f505804b6bc-combined-ca-bundle\") pod \"openstackclient\" (UID: \"77b348a2-7d07-4404-be13-7f505804b6bc\") " pod="openstack/openstackclient" Mar 10 20:21:28 crc kubenswrapper[4861]: I0310 20:21:28.754988 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/77b348a2-7d07-4404-be13-7f505804b6bc-openstack-config\") pod \"openstackclient\" (UID: \"77b348a2-7d07-4404-be13-7f505804b6bc\") " pod="openstack/openstackclient" Mar 10 20:21:28 crc kubenswrapper[4861]: E0310 20:21:28.757270 4861 projected.go:194] Error preparing data for projected volume kube-api-access-nxkdt for pod openstack/openstackclient: failed to fetch token: serviceaccounts "openstackclient-openstackclient" is forbidden: the UID in the bound object reference (77b348a2-7d07-4404-be13-7f505804b6bc) does not match the UID in record. The object might have been deleted and then recreated Mar 10 20:21:28 crc kubenswrapper[4861]: E0310 20:21:28.757374 4861 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/77b348a2-7d07-4404-be13-7f505804b6bc-kube-api-access-nxkdt podName:77b348a2-7d07-4404-be13-7f505804b6bc nodeName:}" failed. 
No retries permitted until 2026-03-10 20:21:29.257350179 +0000 UTC m=+5633.020786149 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-nxkdt" (UniqueName: "kubernetes.io/projected/77b348a2-7d07-4404-be13-7f505804b6bc-kube-api-access-nxkdt") pod "openstackclient" (UID: "77b348a2-7d07-4404-be13-7f505804b6bc") : failed to fetch token: serviceaccounts "openstackclient-openstackclient" is forbidden: the UID in the bound object reference (77b348a2-7d07-4404-be13-7f505804b6bc) does not match the UID in record. The object might have been deleted and then recreated Mar 10 20:21:28 crc kubenswrapper[4861]: I0310 20:21:28.769260 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/77b348a2-7d07-4404-be13-7f505804b6bc-combined-ca-bundle\") pod \"openstackclient\" (UID: \"77b348a2-7d07-4404-be13-7f505804b6bc\") " pod="openstack/openstackclient" Mar 10 20:21:28 crc kubenswrapper[4861]: I0310 20:21:28.775760 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/77b348a2-7d07-4404-be13-7f505804b6bc-openstack-config-secret\") pod \"openstackclient\" (UID: \"77b348a2-7d07-4404-be13-7f505804b6bc\") " pod="openstack/openstackclient" Mar 10 20:21:28 crc kubenswrapper[4861]: I0310 20:21:28.822444 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Mar 10 20:21:28 crc kubenswrapper[4861]: I0310 20:21:28.826020 4861 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="77b348a2-7d07-4404-be13-7f505804b6bc" podUID="e79e436f-8808-4985-8359-2d6c89e65aef" Mar 10 20:21:28 crc kubenswrapper[4861]: I0310 20:21:28.856638 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e79e436f-8808-4985-8359-2d6c89e65aef-combined-ca-bundle\") pod \"openstackclient\" (UID: \"e79e436f-8808-4985-8359-2d6c89e65aef\") " pod="openstack/openstackclient" Mar 10 20:21:28 crc kubenswrapper[4861]: I0310 20:21:28.856771 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/e79e436f-8808-4985-8359-2d6c89e65aef-openstack-config-secret\") pod \"openstackclient\" (UID: \"e79e436f-8808-4985-8359-2d6c89e65aef\") " pod="openstack/openstackclient" Mar 10 20:21:28 crc kubenswrapper[4861]: I0310 20:21:28.856865 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-87xq7\" (UniqueName: \"kubernetes.io/projected/e79e436f-8808-4985-8359-2d6c89e65aef-kube-api-access-87xq7\") pod \"openstackclient\" (UID: \"e79e436f-8808-4985-8359-2d6c89e65aef\") " pod="openstack/openstackclient" Mar 10 20:21:28 crc kubenswrapper[4861]: I0310 20:21:28.856899 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/e79e436f-8808-4985-8359-2d6c89e65aef-openstack-config\") pod \"openstackclient\" (UID: \"e79e436f-8808-4985-8359-2d6c89e65aef\") " pod="openstack/openstackclient" Mar 10 20:21:28 crc kubenswrapper[4861]: I0310 20:21:28.858386 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: 
\"kubernetes.io/configmap/e79e436f-8808-4985-8359-2d6c89e65aef-openstack-config\") pod \"openstackclient\" (UID: \"e79e436f-8808-4985-8359-2d6c89e65aef\") " pod="openstack/openstackclient" Mar 10 20:21:28 crc kubenswrapper[4861]: I0310 20:21:28.863221 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/e79e436f-8808-4985-8359-2d6c89e65aef-openstack-config-secret\") pod \"openstackclient\" (UID: \"e79e436f-8808-4985-8359-2d6c89e65aef\") " pod="openstack/openstackclient" Mar 10 20:21:28 crc kubenswrapper[4861]: I0310 20:21:28.863926 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e79e436f-8808-4985-8359-2d6c89e65aef-combined-ca-bundle\") pod \"openstackclient\" (UID: \"e79e436f-8808-4985-8359-2d6c89e65aef\") " pod="openstack/openstackclient" Mar 10 20:21:28 crc kubenswrapper[4861]: I0310 20:21:28.886190 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-87xq7\" (UniqueName: \"kubernetes.io/projected/e79e436f-8808-4985-8359-2d6c89e65aef-kube-api-access-87xq7\") pod \"openstackclient\" (UID: \"e79e436f-8808-4985-8359-2d6c89e65aef\") " pod="openstack/openstackclient" Mar 10 20:21:28 crc kubenswrapper[4861]: I0310 20:21:28.921331 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Mar 10 20:21:28 crc kubenswrapper[4861]: I0310 20:21:28.953121 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Mar 10 20:21:28 crc kubenswrapper[4861]: I0310 20:21:28.958776 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/77b348a2-7d07-4404-be13-7f505804b6bc-combined-ca-bundle\") pod \"77b348a2-7d07-4404-be13-7f505804b6bc\" (UID: \"77b348a2-7d07-4404-be13-7f505804b6bc\") " Mar 10 20:21:28 crc kubenswrapper[4861]: I0310 20:21:28.958870 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/77b348a2-7d07-4404-be13-7f505804b6bc-openstack-config\") pod \"77b348a2-7d07-4404-be13-7f505804b6bc\" (UID: \"77b348a2-7d07-4404-be13-7f505804b6bc\") " Mar 10 20:21:28 crc kubenswrapper[4861]: I0310 20:21:28.958989 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/77b348a2-7d07-4404-be13-7f505804b6bc-openstack-config-secret\") pod \"77b348a2-7d07-4404-be13-7f505804b6bc\" (UID: \"77b348a2-7d07-4404-be13-7f505804b6bc\") " Mar 10 20:21:28 crc kubenswrapper[4861]: I0310 20:21:28.959277 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nxkdt\" (UniqueName: \"kubernetes.io/projected/77b348a2-7d07-4404-be13-7f505804b6bc-kube-api-access-nxkdt\") on node \"crc\" DevicePath \"\"" Mar 10 20:21:28 crc kubenswrapper[4861]: I0310 20:21:28.960552 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/77b348a2-7d07-4404-be13-7f505804b6bc-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "77b348a2-7d07-4404-be13-7f505804b6bc" (UID: "77b348a2-7d07-4404-be13-7f505804b6bc"). InnerVolumeSpecName "openstack-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 10 20:21:28 crc kubenswrapper[4861]: I0310 20:21:28.962580 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/77b348a2-7d07-4404-be13-7f505804b6bc-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "77b348a2-7d07-4404-be13-7f505804b6bc" (UID: "77b348a2-7d07-4404-be13-7f505804b6bc"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 10 20:21:28 crc kubenswrapper[4861]: I0310 20:21:28.964423 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/77b348a2-7d07-4404-be13-7f505804b6bc-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "77b348a2-7d07-4404-be13-7f505804b6bc" (UID: "77b348a2-7d07-4404-be13-7f505804b6bc"). InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 10 20:21:28 crc kubenswrapper[4861]: I0310 20:21:28.977402 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="77b348a2-7d07-4404-be13-7f505804b6bc" path="/var/lib/kubelet/pods/77b348a2-7d07-4404-be13-7f505804b6bc/volumes" Mar 10 20:21:29 crc kubenswrapper[4861]: I0310 20:21:29.060365 4861 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/77b348a2-7d07-4404-be13-7f505804b6bc-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Mar 10 20:21:29 crc kubenswrapper[4861]: I0310 20:21:29.060755 4861 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/77b348a2-7d07-4404-be13-7f505804b6bc-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 10 20:21:29 crc kubenswrapper[4861]: I0310 20:21:29.060771 4861 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: 
\"kubernetes.io/configmap/77b348a2-7d07-4404-be13-7f505804b6bc-openstack-config\") on node \"crc\" DevicePath \"\"" Mar 10 20:21:29 crc kubenswrapper[4861]: I0310 20:21:29.252532 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Mar 10 20:21:29 crc kubenswrapper[4861]: I0310 20:21:29.835734 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Mar 10 20:21:29 crc kubenswrapper[4861]: I0310 20:21:29.837184 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"e79e436f-8808-4985-8359-2d6c89e65aef","Type":"ContainerStarted","Data":"5b5f83ef16f9b3aa521d849aaddfe2f22d6f574a3fdb192bab0f826e708eec9c"} Mar 10 20:21:29 crc kubenswrapper[4861]: I0310 20:21:29.837220 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"e79e436f-8808-4985-8359-2d6c89e65aef","Type":"ContainerStarted","Data":"d2ee6f689d3a3d1cdef23a800e6909026ff9dd0dec0887b1abdb0bc38157666f"} Mar 10 20:21:29 crc kubenswrapper[4861]: I0310 20:21:29.861837 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=1.861810923 podStartE2EDuration="1.861810923s" podCreationTimestamp="2026-03-10 20:21:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-10 20:21:29.858567565 +0000 UTC m=+5633.622003545" watchObservedRunningTime="2026-03-10 20:21:29.861810923 +0000 UTC m=+5633.625246893" Mar 10 20:21:29 crc kubenswrapper[4861]: I0310 20:21:29.862892 4861 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="77b348a2-7d07-4404-be13-7f505804b6bc" podUID="e79e436f-8808-4985-8359-2d6c89e65aef" Mar 10 20:21:39 crc kubenswrapper[4861]: I0310 20:21:39.691831 4861 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-marketplace/redhat-marketplace-f6nwd"] Mar 10 20:21:39 crc kubenswrapper[4861]: I0310 20:21:39.696843 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-f6nwd" Mar 10 20:21:39 crc kubenswrapper[4861]: I0310 20:21:39.707774 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-f6nwd"] Mar 10 20:21:39 crc kubenswrapper[4861]: I0310 20:21:39.791279 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sc4t6\" (UniqueName: \"kubernetes.io/projected/1afee241-5afc-4557-8be8-8dbebecfe9fe-kube-api-access-sc4t6\") pod \"redhat-marketplace-f6nwd\" (UID: \"1afee241-5afc-4557-8be8-8dbebecfe9fe\") " pod="openshift-marketplace/redhat-marketplace-f6nwd" Mar 10 20:21:39 crc kubenswrapper[4861]: I0310 20:21:39.791366 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1afee241-5afc-4557-8be8-8dbebecfe9fe-catalog-content\") pod \"redhat-marketplace-f6nwd\" (UID: \"1afee241-5afc-4557-8be8-8dbebecfe9fe\") " pod="openshift-marketplace/redhat-marketplace-f6nwd" Mar 10 20:21:39 crc kubenswrapper[4861]: I0310 20:21:39.791525 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1afee241-5afc-4557-8be8-8dbebecfe9fe-utilities\") pod \"redhat-marketplace-f6nwd\" (UID: \"1afee241-5afc-4557-8be8-8dbebecfe9fe\") " pod="openshift-marketplace/redhat-marketplace-f6nwd" Mar 10 20:21:39 crc kubenswrapper[4861]: I0310 20:21:39.892334 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1afee241-5afc-4557-8be8-8dbebecfe9fe-utilities\") pod \"redhat-marketplace-f6nwd\" (UID: \"1afee241-5afc-4557-8be8-8dbebecfe9fe\") " 
pod="openshift-marketplace/redhat-marketplace-f6nwd" Mar 10 20:21:39 crc kubenswrapper[4861]: I0310 20:21:39.892451 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sc4t6\" (UniqueName: \"kubernetes.io/projected/1afee241-5afc-4557-8be8-8dbebecfe9fe-kube-api-access-sc4t6\") pod \"redhat-marketplace-f6nwd\" (UID: \"1afee241-5afc-4557-8be8-8dbebecfe9fe\") " pod="openshift-marketplace/redhat-marketplace-f6nwd" Mar 10 20:21:39 crc kubenswrapper[4861]: I0310 20:21:39.892522 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1afee241-5afc-4557-8be8-8dbebecfe9fe-catalog-content\") pod \"redhat-marketplace-f6nwd\" (UID: \"1afee241-5afc-4557-8be8-8dbebecfe9fe\") " pod="openshift-marketplace/redhat-marketplace-f6nwd" Mar 10 20:21:39 crc kubenswrapper[4861]: I0310 20:21:39.892946 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1afee241-5afc-4557-8be8-8dbebecfe9fe-utilities\") pod \"redhat-marketplace-f6nwd\" (UID: \"1afee241-5afc-4557-8be8-8dbebecfe9fe\") " pod="openshift-marketplace/redhat-marketplace-f6nwd" Mar 10 20:21:39 crc kubenswrapper[4861]: I0310 20:21:39.892960 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1afee241-5afc-4557-8be8-8dbebecfe9fe-catalog-content\") pod \"redhat-marketplace-f6nwd\" (UID: \"1afee241-5afc-4557-8be8-8dbebecfe9fe\") " pod="openshift-marketplace/redhat-marketplace-f6nwd" Mar 10 20:21:39 crc kubenswrapper[4861]: I0310 20:21:39.927181 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sc4t6\" (UniqueName: \"kubernetes.io/projected/1afee241-5afc-4557-8be8-8dbebecfe9fe-kube-api-access-sc4t6\") pod \"redhat-marketplace-f6nwd\" (UID: \"1afee241-5afc-4557-8be8-8dbebecfe9fe\") " 
pod="openshift-marketplace/redhat-marketplace-f6nwd" Mar 10 20:21:40 crc kubenswrapper[4861]: I0310 20:21:40.034374 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-f6nwd" Mar 10 20:21:40 crc kubenswrapper[4861]: I0310 20:21:40.522341 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-f6nwd"] Mar 10 20:21:40 crc kubenswrapper[4861]: I0310 20:21:40.949437 4861 generic.go:334] "Generic (PLEG): container finished" podID="1afee241-5afc-4557-8be8-8dbebecfe9fe" containerID="7c8022e5d1c7d321432a12420e14da594e505b77fda2a4282f31bd38c93fd5f0" exitCode=0 Mar 10 20:21:40 crc kubenswrapper[4861]: I0310 20:21:40.949504 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-f6nwd" event={"ID":"1afee241-5afc-4557-8be8-8dbebecfe9fe","Type":"ContainerDied","Data":"7c8022e5d1c7d321432a12420e14da594e505b77fda2a4282f31bd38c93fd5f0"} Mar 10 20:21:40 crc kubenswrapper[4861]: I0310 20:21:40.949545 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-f6nwd" event={"ID":"1afee241-5afc-4557-8be8-8dbebecfe9fe","Type":"ContainerStarted","Data":"9776c478e3187f1f0f51521241fc1fa10108126982e4d80f3c10f097b6921caa"} Mar 10 20:21:40 crc kubenswrapper[4861]: I0310 20:21:40.958231 4861 scope.go:117] "RemoveContainer" containerID="cd2259ee04441075924dcee5b49ce2f64778d63f5c1b13672065b616c80497ca" Mar 10 20:21:40 crc kubenswrapper[4861]: E0310 20:21:40.958877 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qttbr_openshift-machine-config-operator(771189c2-452d-4204-a0b7-abfe9ba62bd0)\"" pod="openshift-machine-config-operator/machine-config-daemon-qttbr" podUID="771189c2-452d-4204-a0b7-abfe9ba62bd0" Mar 10 
20:21:41 crc kubenswrapper[4861]: I0310 20:21:41.963924 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-f6nwd" event={"ID":"1afee241-5afc-4557-8be8-8dbebecfe9fe","Type":"ContainerStarted","Data":"040e4e59b072cd3c378d18be4e234f54a3e11c3a56ab93a19e0d8497ed5c525d"} Mar 10 20:21:42 crc kubenswrapper[4861]: I0310 20:21:42.974175 4861 generic.go:334] "Generic (PLEG): container finished" podID="1afee241-5afc-4557-8be8-8dbebecfe9fe" containerID="040e4e59b072cd3c378d18be4e234f54a3e11c3a56ab93a19e0d8497ed5c525d" exitCode=0 Mar 10 20:21:42 crc kubenswrapper[4861]: I0310 20:21:42.974263 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-f6nwd" event={"ID":"1afee241-5afc-4557-8be8-8dbebecfe9fe","Type":"ContainerDied","Data":"040e4e59b072cd3c378d18be4e234f54a3e11c3a56ab93a19e0d8497ed5c525d"} Mar 10 20:21:43 crc kubenswrapper[4861]: I0310 20:21:43.985962 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-f6nwd" event={"ID":"1afee241-5afc-4557-8be8-8dbebecfe9fe","Type":"ContainerStarted","Data":"9b83a1b40b5baf52c2437558ae90cb711b5fbc188a02c1a5b7f565379c17260e"} Mar 10 20:21:44 crc kubenswrapper[4861]: I0310 20:21:44.020624 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-f6nwd" podStartSLOduration=2.363101982 podStartE2EDuration="5.020605419s" podCreationTimestamp="2026-03-10 20:21:39 +0000 UTC" firstStartedPulling="2026-03-10 20:21:40.951761619 +0000 UTC m=+5644.715197609" lastFinishedPulling="2026-03-10 20:21:43.609265056 +0000 UTC m=+5647.372701046" observedRunningTime="2026-03-10 20:21:44.013060674 +0000 UTC m=+5647.776496664" watchObservedRunningTime="2026-03-10 20:21:44.020605419 +0000 UTC m=+5647.784041389" Mar 10 20:21:50 crc kubenswrapper[4861]: I0310 20:21:50.035153 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-marketplace/redhat-marketplace-f6nwd" Mar 10 20:21:50 crc kubenswrapper[4861]: I0310 20:21:50.035665 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-f6nwd" Mar 10 20:21:50 crc kubenswrapper[4861]: I0310 20:21:50.135082 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-f6nwd" Mar 10 20:21:50 crc kubenswrapper[4861]: I0310 20:21:50.270174 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-f6nwd" Mar 10 20:21:51 crc kubenswrapper[4861]: I0310 20:21:51.462786 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-f6nwd"] Mar 10 20:21:52 crc kubenswrapper[4861]: I0310 20:21:52.089890 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-f6nwd" podUID="1afee241-5afc-4557-8be8-8dbebecfe9fe" containerName="registry-server" containerID="cri-o://9b83a1b40b5baf52c2437558ae90cb711b5fbc188a02c1a5b7f565379c17260e" gracePeriod=2 Mar 10 20:21:52 crc kubenswrapper[4861]: I0310 20:21:52.659674 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-f6nwd" Mar 10 20:21:52 crc kubenswrapper[4861]: I0310 20:21:52.741032 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sc4t6\" (UniqueName: \"kubernetes.io/projected/1afee241-5afc-4557-8be8-8dbebecfe9fe-kube-api-access-sc4t6\") pod \"1afee241-5afc-4557-8be8-8dbebecfe9fe\" (UID: \"1afee241-5afc-4557-8be8-8dbebecfe9fe\") " Mar 10 20:21:52 crc kubenswrapper[4861]: I0310 20:21:52.741220 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1afee241-5afc-4557-8be8-8dbebecfe9fe-utilities\") pod \"1afee241-5afc-4557-8be8-8dbebecfe9fe\" (UID: \"1afee241-5afc-4557-8be8-8dbebecfe9fe\") " Mar 10 20:21:52 crc kubenswrapper[4861]: I0310 20:21:52.742655 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1afee241-5afc-4557-8be8-8dbebecfe9fe-utilities" (OuterVolumeSpecName: "utilities") pod "1afee241-5afc-4557-8be8-8dbebecfe9fe" (UID: "1afee241-5afc-4557-8be8-8dbebecfe9fe"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 10 20:21:52 crc kubenswrapper[4861]: I0310 20:21:52.749997 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1afee241-5afc-4557-8be8-8dbebecfe9fe-kube-api-access-sc4t6" (OuterVolumeSpecName: "kube-api-access-sc4t6") pod "1afee241-5afc-4557-8be8-8dbebecfe9fe" (UID: "1afee241-5afc-4557-8be8-8dbebecfe9fe"). InnerVolumeSpecName "kube-api-access-sc4t6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 10 20:21:52 crc kubenswrapper[4861]: I0310 20:21:52.843003 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1afee241-5afc-4557-8be8-8dbebecfe9fe-catalog-content\") pod \"1afee241-5afc-4557-8be8-8dbebecfe9fe\" (UID: \"1afee241-5afc-4557-8be8-8dbebecfe9fe\") " Mar 10 20:21:52 crc kubenswrapper[4861]: I0310 20:21:52.843372 4861 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1afee241-5afc-4557-8be8-8dbebecfe9fe-utilities\") on node \"crc\" DevicePath \"\"" Mar 10 20:21:52 crc kubenswrapper[4861]: I0310 20:21:52.843385 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sc4t6\" (UniqueName: \"kubernetes.io/projected/1afee241-5afc-4557-8be8-8dbebecfe9fe-kube-api-access-sc4t6\") on node \"crc\" DevicePath \"\"" Mar 10 20:21:52 crc kubenswrapper[4861]: I0310 20:21:52.889460 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1afee241-5afc-4557-8be8-8dbebecfe9fe-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1afee241-5afc-4557-8be8-8dbebecfe9fe" (UID: "1afee241-5afc-4557-8be8-8dbebecfe9fe"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 10 20:21:52 crc kubenswrapper[4861]: I0310 20:21:52.944819 4861 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1afee241-5afc-4557-8be8-8dbebecfe9fe-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 10 20:21:53 crc kubenswrapper[4861]: I0310 20:21:53.105822 4861 generic.go:334] "Generic (PLEG): container finished" podID="1afee241-5afc-4557-8be8-8dbebecfe9fe" containerID="9b83a1b40b5baf52c2437558ae90cb711b5fbc188a02c1a5b7f565379c17260e" exitCode=0 Mar 10 20:21:53 crc kubenswrapper[4861]: I0310 20:21:53.105898 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-f6nwd" event={"ID":"1afee241-5afc-4557-8be8-8dbebecfe9fe","Type":"ContainerDied","Data":"9b83a1b40b5baf52c2437558ae90cb711b5fbc188a02c1a5b7f565379c17260e"} Mar 10 20:21:53 crc kubenswrapper[4861]: I0310 20:21:53.106158 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-f6nwd" event={"ID":"1afee241-5afc-4557-8be8-8dbebecfe9fe","Type":"ContainerDied","Data":"9776c478e3187f1f0f51521241fc1fa10108126982e4d80f3c10f097b6921caa"} Mar 10 20:21:53 crc kubenswrapper[4861]: I0310 20:21:53.106237 4861 scope.go:117] "RemoveContainer" containerID="9b83a1b40b5baf52c2437558ae90cb711b5fbc188a02c1a5b7f565379c17260e" Mar 10 20:21:53 crc kubenswrapper[4861]: I0310 20:21:53.105961 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-f6nwd" Mar 10 20:21:53 crc kubenswrapper[4861]: I0310 20:21:53.142435 4861 scope.go:117] "RemoveContainer" containerID="040e4e59b072cd3c378d18be4e234f54a3e11c3a56ab93a19e0d8497ed5c525d" Mar 10 20:21:53 crc kubenswrapper[4861]: I0310 20:21:53.152471 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-f6nwd"] Mar 10 20:21:53 crc kubenswrapper[4861]: I0310 20:21:53.166461 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-f6nwd"] Mar 10 20:21:53 crc kubenswrapper[4861]: I0310 20:21:53.174507 4861 scope.go:117] "RemoveContainer" containerID="7c8022e5d1c7d321432a12420e14da594e505b77fda2a4282f31bd38c93fd5f0" Mar 10 20:21:53 crc kubenswrapper[4861]: I0310 20:21:53.215134 4861 scope.go:117] "RemoveContainer" containerID="9b83a1b40b5baf52c2437558ae90cb711b5fbc188a02c1a5b7f565379c17260e" Mar 10 20:21:53 crc kubenswrapper[4861]: E0310 20:21:53.216079 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9b83a1b40b5baf52c2437558ae90cb711b5fbc188a02c1a5b7f565379c17260e\": container with ID starting with 9b83a1b40b5baf52c2437558ae90cb711b5fbc188a02c1a5b7f565379c17260e not found: ID does not exist" containerID="9b83a1b40b5baf52c2437558ae90cb711b5fbc188a02c1a5b7f565379c17260e" Mar 10 20:21:53 crc kubenswrapper[4861]: I0310 20:21:53.216126 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9b83a1b40b5baf52c2437558ae90cb711b5fbc188a02c1a5b7f565379c17260e"} err="failed to get container status \"9b83a1b40b5baf52c2437558ae90cb711b5fbc188a02c1a5b7f565379c17260e\": rpc error: code = NotFound desc = could not find container \"9b83a1b40b5baf52c2437558ae90cb711b5fbc188a02c1a5b7f565379c17260e\": container with ID starting with 9b83a1b40b5baf52c2437558ae90cb711b5fbc188a02c1a5b7f565379c17260e not found: 
ID does not exist" Mar 10 20:21:53 crc kubenswrapper[4861]: I0310 20:21:53.216156 4861 scope.go:117] "RemoveContainer" containerID="040e4e59b072cd3c378d18be4e234f54a3e11c3a56ab93a19e0d8497ed5c525d" Mar 10 20:21:53 crc kubenswrapper[4861]: E0310 20:21:53.216672 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"040e4e59b072cd3c378d18be4e234f54a3e11c3a56ab93a19e0d8497ed5c525d\": container with ID starting with 040e4e59b072cd3c378d18be4e234f54a3e11c3a56ab93a19e0d8497ed5c525d not found: ID does not exist" containerID="040e4e59b072cd3c378d18be4e234f54a3e11c3a56ab93a19e0d8497ed5c525d" Mar 10 20:21:53 crc kubenswrapper[4861]: I0310 20:21:53.216699 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"040e4e59b072cd3c378d18be4e234f54a3e11c3a56ab93a19e0d8497ed5c525d"} err="failed to get container status \"040e4e59b072cd3c378d18be4e234f54a3e11c3a56ab93a19e0d8497ed5c525d\": rpc error: code = NotFound desc = could not find container \"040e4e59b072cd3c378d18be4e234f54a3e11c3a56ab93a19e0d8497ed5c525d\": container with ID starting with 040e4e59b072cd3c378d18be4e234f54a3e11c3a56ab93a19e0d8497ed5c525d not found: ID does not exist" Mar 10 20:21:53 crc kubenswrapper[4861]: I0310 20:21:53.216752 4861 scope.go:117] "RemoveContainer" containerID="7c8022e5d1c7d321432a12420e14da594e505b77fda2a4282f31bd38c93fd5f0" Mar 10 20:21:53 crc kubenswrapper[4861]: E0310 20:21:53.217071 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7c8022e5d1c7d321432a12420e14da594e505b77fda2a4282f31bd38c93fd5f0\": container with ID starting with 7c8022e5d1c7d321432a12420e14da594e505b77fda2a4282f31bd38c93fd5f0 not found: ID does not exist" containerID="7c8022e5d1c7d321432a12420e14da594e505b77fda2a4282f31bd38c93fd5f0" Mar 10 20:21:53 crc kubenswrapper[4861]: I0310 20:21:53.217148 4861 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7c8022e5d1c7d321432a12420e14da594e505b77fda2a4282f31bd38c93fd5f0"} err="failed to get container status \"7c8022e5d1c7d321432a12420e14da594e505b77fda2a4282f31bd38c93fd5f0\": rpc error: code = NotFound desc = could not find container \"7c8022e5d1c7d321432a12420e14da594e505b77fda2a4282f31bd38c93fd5f0\": container with ID starting with 7c8022e5d1c7d321432a12420e14da594e505b77fda2a4282f31bd38c93fd5f0 not found: ID does not exist" Mar 10 20:21:53 crc kubenswrapper[4861]: I0310 20:21:53.958523 4861 scope.go:117] "RemoveContainer" containerID="cd2259ee04441075924dcee5b49ce2f64778d63f5c1b13672065b616c80497ca" Mar 10 20:21:53 crc kubenswrapper[4861]: E0310 20:21:53.958717 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qttbr_openshift-machine-config-operator(771189c2-452d-4204-a0b7-abfe9ba62bd0)\"" pod="openshift-machine-config-operator/machine-config-daemon-qttbr" podUID="771189c2-452d-4204-a0b7-abfe9ba62bd0" Mar 10 20:21:54 crc kubenswrapper[4861]: I0310 20:21:54.972501 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1afee241-5afc-4557-8be8-8dbebecfe9fe" path="/var/lib/kubelet/pods/1afee241-5afc-4557-8be8-8dbebecfe9fe/volumes" Mar 10 20:22:00 crc kubenswrapper[4861]: I0310 20:22:00.161221 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29552902-5fnqk"] Mar 10 20:22:00 crc kubenswrapper[4861]: E0310 20:22:00.162564 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1afee241-5afc-4557-8be8-8dbebecfe9fe" containerName="extract-content" Mar 10 20:22:00 crc kubenswrapper[4861]: I0310 20:22:00.162591 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="1afee241-5afc-4557-8be8-8dbebecfe9fe" containerName="extract-content" Mar 10 
20:22:00 crc kubenswrapper[4861]: E0310 20:22:00.162611 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1afee241-5afc-4557-8be8-8dbebecfe9fe" containerName="extract-utilities" Mar 10 20:22:00 crc kubenswrapper[4861]: I0310 20:22:00.162625 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="1afee241-5afc-4557-8be8-8dbebecfe9fe" containerName="extract-utilities" Mar 10 20:22:00 crc kubenswrapper[4861]: E0310 20:22:00.162642 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1afee241-5afc-4557-8be8-8dbebecfe9fe" containerName="registry-server" Mar 10 20:22:00 crc kubenswrapper[4861]: I0310 20:22:00.162655 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="1afee241-5afc-4557-8be8-8dbebecfe9fe" containerName="registry-server" Mar 10 20:22:00 crc kubenswrapper[4861]: I0310 20:22:00.163114 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="1afee241-5afc-4557-8be8-8dbebecfe9fe" containerName="registry-server" Mar 10 20:22:00 crc kubenswrapper[4861]: I0310 20:22:00.164199 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29552902-5fnqk" Mar 10 20:22:00 crc kubenswrapper[4861]: I0310 20:22:00.167283 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-gfbj2" Mar 10 20:22:00 crc kubenswrapper[4861]: I0310 20:22:00.167782 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 10 20:22:00 crc kubenswrapper[4861]: I0310 20:22:00.168036 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 10 20:22:00 crc kubenswrapper[4861]: I0310 20:22:00.213095 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29552902-5fnqk"] Mar 10 20:22:00 crc kubenswrapper[4861]: I0310 20:22:00.280143 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-489mr\" (UniqueName: \"kubernetes.io/projected/3d9533f3-b5ba-4527-850d-b34f21f08bf9-kube-api-access-489mr\") pod \"auto-csr-approver-29552902-5fnqk\" (UID: \"3d9533f3-b5ba-4527-850d-b34f21f08bf9\") " pod="openshift-infra/auto-csr-approver-29552902-5fnqk" Mar 10 20:22:00 crc kubenswrapper[4861]: I0310 20:22:00.382234 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-489mr\" (UniqueName: \"kubernetes.io/projected/3d9533f3-b5ba-4527-850d-b34f21f08bf9-kube-api-access-489mr\") pod \"auto-csr-approver-29552902-5fnqk\" (UID: \"3d9533f3-b5ba-4527-850d-b34f21f08bf9\") " pod="openshift-infra/auto-csr-approver-29552902-5fnqk" Mar 10 20:22:00 crc kubenswrapper[4861]: I0310 20:22:00.419240 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-489mr\" (UniqueName: \"kubernetes.io/projected/3d9533f3-b5ba-4527-850d-b34f21f08bf9-kube-api-access-489mr\") pod \"auto-csr-approver-29552902-5fnqk\" (UID: \"3d9533f3-b5ba-4527-850d-b34f21f08bf9\") " 
pod="openshift-infra/auto-csr-approver-29552902-5fnqk" Mar 10 20:22:00 crc kubenswrapper[4861]: I0310 20:22:00.514858 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29552902-5fnqk" Mar 10 20:22:01 crc kubenswrapper[4861]: I0310 20:22:01.136584 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29552902-5fnqk"] Mar 10 20:22:01 crc kubenswrapper[4861]: I0310 20:22:01.188328 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29552902-5fnqk" event={"ID":"3d9533f3-b5ba-4527-850d-b34f21f08bf9","Type":"ContainerStarted","Data":"bad9f60e4c2d2deb15d9c5895e49a4b7b68c8b060feb0495de530715ecd9a4b1"} Mar 10 20:22:03 crc kubenswrapper[4861]: I0310 20:22:03.210066 4861 generic.go:334] "Generic (PLEG): container finished" podID="3d9533f3-b5ba-4527-850d-b34f21f08bf9" containerID="0cbcd490cca913187f9fb226eb95a6850707251b42e856e5ebf255caef4213d3" exitCode=0 Mar 10 20:22:03 crc kubenswrapper[4861]: I0310 20:22:03.210136 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29552902-5fnqk" event={"ID":"3d9533f3-b5ba-4527-850d-b34f21f08bf9","Type":"ContainerDied","Data":"0cbcd490cca913187f9fb226eb95a6850707251b42e856e5ebf255caef4213d3"} Mar 10 20:22:04 crc kubenswrapper[4861]: I0310 20:22:04.655916 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29552902-5fnqk" Mar 10 20:22:04 crc kubenswrapper[4861]: I0310 20:22:04.777987 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-489mr\" (UniqueName: \"kubernetes.io/projected/3d9533f3-b5ba-4527-850d-b34f21f08bf9-kube-api-access-489mr\") pod \"3d9533f3-b5ba-4527-850d-b34f21f08bf9\" (UID: \"3d9533f3-b5ba-4527-850d-b34f21f08bf9\") " Mar 10 20:22:04 crc kubenswrapper[4861]: I0310 20:22:04.786089 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3d9533f3-b5ba-4527-850d-b34f21f08bf9-kube-api-access-489mr" (OuterVolumeSpecName: "kube-api-access-489mr") pod "3d9533f3-b5ba-4527-850d-b34f21f08bf9" (UID: "3d9533f3-b5ba-4527-850d-b34f21f08bf9"). InnerVolumeSpecName "kube-api-access-489mr". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 10 20:22:04 crc kubenswrapper[4861]: I0310 20:22:04.880697 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-489mr\" (UniqueName: \"kubernetes.io/projected/3d9533f3-b5ba-4527-850d-b34f21f08bf9-kube-api-access-489mr\") on node \"crc\" DevicePath \"\"" Mar 10 20:22:05 crc kubenswrapper[4861]: I0310 20:22:05.248025 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29552902-5fnqk" event={"ID":"3d9533f3-b5ba-4527-850d-b34f21f08bf9","Type":"ContainerDied","Data":"bad9f60e4c2d2deb15d9c5895e49a4b7b68c8b060feb0495de530715ecd9a4b1"} Mar 10 20:22:05 crc kubenswrapper[4861]: I0310 20:22:05.248062 4861 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bad9f60e4c2d2deb15d9c5895e49a4b7b68c8b060feb0495de530715ecd9a4b1" Mar 10 20:22:05 crc kubenswrapper[4861]: I0310 20:22:05.248119 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29552902-5fnqk" Mar 10 20:22:05 crc kubenswrapper[4861]: I0310 20:22:05.730768 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29552896-ft86l"] Mar 10 20:22:05 crc kubenswrapper[4861]: I0310 20:22:05.739979 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29552896-ft86l"] Mar 10 20:22:06 crc kubenswrapper[4861]: I0310 20:22:06.967086 4861 scope.go:117] "RemoveContainer" containerID="cd2259ee04441075924dcee5b49ce2f64778d63f5c1b13672065b616c80497ca" Mar 10 20:22:06 crc kubenswrapper[4861]: E0310 20:22:06.967896 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qttbr_openshift-machine-config-operator(771189c2-452d-4204-a0b7-abfe9ba62bd0)\"" pod="openshift-machine-config-operator/machine-config-daemon-qttbr" podUID="771189c2-452d-4204-a0b7-abfe9ba62bd0" Mar 10 20:22:06 crc kubenswrapper[4861]: I0310 20:22:06.974523 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4b73f6e2-2d34-4f5b-a38a-4cc667165792" path="/var/lib/kubelet/pods/4b73f6e2-2d34-4f5b-a38a-4cc667165792/volumes" Mar 10 20:22:17 crc kubenswrapper[4861]: I0310 20:22:17.959171 4861 scope.go:117] "RemoveContainer" containerID="cd2259ee04441075924dcee5b49ce2f64778d63f5c1b13672065b616c80497ca" Mar 10 20:22:17 crc kubenswrapper[4861]: E0310 20:22:17.960135 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qttbr_openshift-machine-config-operator(771189c2-452d-4204-a0b7-abfe9ba62bd0)\"" pod="openshift-machine-config-operator/machine-config-daemon-qttbr" 
podUID="771189c2-452d-4204-a0b7-abfe9ba62bd0" Mar 10 20:22:29 crc kubenswrapper[4861]: I0310 20:22:29.958296 4861 scope.go:117] "RemoveContainer" containerID="cd2259ee04441075924dcee5b49ce2f64778d63f5c1b13672065b616c80497ca" Mar 10 20:22:30 crc kubenswrapper[4861]: I0310 20:22:30.856934 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qttbr" event={"ID":"771189c2-452d-4204-a0b7-abfe9ba62bd0","Type":"ContainerStarted","Data":"94988353296d02fcb6ff0210a2189639e847fb03cc60fee0f780369ee12d56c2"} Mar 10 20:22:58 crc kubenswrapper[4861]: I0310 20:22:58.377336 4861 scope.go:117] "RemoveContainer" containerID="223675cdda1bf4fe85773a91fa23f535f2e9146aa14a9114bb4a2c1174a6c340" Mar 10 20:23:39 crc kubenswrapper[4861]: I0310 20:23:39.120126 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-c4kgh"] Mar 10 20:23:39 crc kubenswrapper[4861]: I0310 20:23:39.129174 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-c4kgh"] Mar 10 20:23:40 crc kubenswrapper[4861]: I0310 20:23:40.976113 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="60616fa5-d557-49dc-850c-54b29c8080c7" path="/var/lib/kubelet/pods/60616fa5-d557-49dc-850c-54b29c8080c7/volumes" Mar 10 20:23:58 crc kubenswrapper[4861]: I0310 20:23:58.467749 4861 scope.go:117] "RemoveContainer" containerID="d0258047871923cfb0a0138e1b3ba6c4da5cd5fc860714ac875e342fd4a05195" Mar 10 20:24:00 crc kubenswrapper[4861]: I0310 20:24:00.158968 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29552904-6thzw"] Mar 10 20:24:00 crc kubenswrapper[4861]: E0310 20:24:00.159578 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3d9533f3-b5ba-4527-850d-b34f21f08bf9" containerName="oc" Mar 10 20:24:00 crc kubenswrapper[4861]: I0310 20:24:00.159593 4861 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="3d9533f3-b5ba-4527-850d-b34f21f08bf9" containerName="oc" Mar 10 20:24:00 crc kubenswrapper[4861]: I0310 20:24:00.159801 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="3d9533f3-b5ba-4527-850d-b34f21f08bf9" containerName="oc" Mar 10 20:24:00 crc kubenswrapper[4861]: I0310 20:24:00.160499 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29552904-6thzw" Mar 10 20:24:00 crc kubenswrapper[4861]: I0310 20:24:00.162413 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 10 20:24:00 crc kubenswrapper[4861]: I0310 20:24:00.163299 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 10 20:24:00 crc kubenswrapper[4861]: I0310 20:24:00.163608 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-gfbj2" Mar 10 20:24:00 crc kubenswrapper[4861]: I0310 20:24:00.172947 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29552904-6thzw"] Mar 10 20:24:00 crc kubenswrapper[4861]: I0310 20:24:00.209758 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-55zlp\" (UniqueName: \"kubernetes.io/projected/104149ea-84b6-47a5-b894-06d4de416d7e-kube-api-access-55zlp\") pod \"auto-csr-approver-29552904-6thzw\" (UID: \"104149ea-84b6-47a5-b894-06d4de416d7e\") " pod="openshift-infra/auto-csr-approver-29552904-6thzw" Mar 10 20:24:00 crc kubenswrapper[4861]: I0310 20:24:00.311592 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-55zlp\" (UniqueName: \"kubernetes.io/projected/104149ea-84b6-47a5-b894-06d4de416d7e-kube-api-access-55zlp\") pod \"auto-csr-approver-29552904-6thzw\" (UID: \"104149ea-84b6-47a5-b894-06d4de416d7e\") " 
pod="openshift-infra/auto-csr-approver-29552904-6thzw" Mar 10 20:24:00 crc kubenswrapper[4861]: I0310 20:24:00.345981 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-55zlp\" (UniqueName: \"kubernetes.io/projected/104149ea-84b6-47a5-b894-06d4de416d7e-kube-api-access-55zlp\") pod \"auto-csr-approver-29552904-6thzw\" (UID: \"104149ea-84b6-47a5-b894-06d4de416d7e\") " pod="openshift-infra/auto-csr-approver-29552904-6thzw" Mar 10 20:24:00 crc kubenswrapper[4861]: I0310 20:24:00.518526 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29552904-6thzw" Mar 10 20:24:00 crc kubenswrapper[4861]: I0310 20:24:00.993503 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29552904-6thzw"] Mar 10 20:24:01 crc kubenswrapper[4861]: I0310 20:24:01.885433 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29552904-6thzw" event={"ID":"104149ea-84b6-47a5-b894-06d4de416d7e","Type":"ContainerStarted","Data":"6a6203a649f34d8f73ca69aac296ac915bfb0996ca5caad7f01e35fb2cd7a612"} Mar 10 20:24:02 crc kubenswrapper[4861]: I0310 20:24:02.904899 4861 generic.go:334] "Generic (PLEG): container finished" podID="104149ea-84b6-47a5-b894-06d4de416d7e" containerID="7e5c262fc93ed279977bf01809e88de8edb409427cf91d9dc6b9fedc5c69c66c" exitCode=0 Mar 10 20:24:02 crc kubenswrapper[4861]: I0310 20:24:02.905222 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29552904-6thzw" event={"ID":"104149ea-84b6-47a5-b894-06d4de416d7e","Type":"ContainerDied","Data":"7e5c262fc93ed279977bf01809e88de8edb409427cf91d9dc6b9fedc5c69c66c"} Mar 10 20:24:04 crc kubenswrapper[4861]: I0310 20:24:04.390665 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29552904-6thzw" Mar 10 20:24:04 crc kubenswrapper[4861]: I0310 20:24:04.402420 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-55zlp\" (UniqueName: \"kubernetes.io/projected/104149ea-84b6-47a5-b894-06d4de416d7e-kube-api-access-55zlp\") pod \"104149ea-84b6-47a5-b894-06d4de416d7e\" (UID: \"104149ea-84b6-47a5-b894-06d4de416d7e\") " Mar 10 20:24:04 crc kubenswrapper[4861]: I0310 20:24:04.410011 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/104149ea-84b6-47a5-b894-06d4de416d7e-kube-api-access-55zlp" (OuterVolumeSpecName: "kube-api-access-55zlp") pod "104149ea-84b6-47a5-b894-06d4de416d7e" (UID: "104149ea-84b6-47a5-b894-06d4de416d7e"). InnerVolumeSpecName "kube-api-access-55zlp". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 10 20:24:04 crc kubenswrapper[4861]: I0310 20:24:04.503967 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-55zlp\" (UniqueName: \"kubernetes.io/projected/104149ea-84b6-47a5-b894-06d4de416d7e-kube-api-access-55zlp\") on node \"crc\" DevicePath \"\"" Mar 10 20:24:04 crc kubenswrapper[4861]: I0310 20:24:04.927190 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29552904-6thzw" event={"ID":"104149ea-84b6-47a5-b894-06d4de416d7e","Type":"ContainerDied","Data":"6a6203a649f34d8f73ca69aac296ac915bfb0996ca5caad7f01e35fb2cd7a612"} Mar 10 20:24:04 crc kubenswrapper[4861]: I0310 20:24:04.927237 4861 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6a6203a649f34d8f73ca69aac296ac915bfb0996ca5caad7f01e35fb2cd7a612" Mar 10 20:24:04 crc kubenswrapper[4861]: I0310 20:24:04.927277 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29552904-6thzw" Mar 10 20:24:05 crc kubenswrapper[4861]: I0310 20:24:05.500029 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29552898-rhvsc"] Mar 10 20:24:05 crc kubenswrapper[4861]: I0310 20:24:05.511254 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29552898-rhvsc"] Mar 10 20:24:06 crc kubenswrapper[4861]: I0310 20:24:06.981473 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5265e2ef-073c-4e73-9930-782e7535c43d" path="/var/lib/kubelet/pods/5265e2ef-073c-4e73-9930-782e7535c43d/volumes" Mar 10 20:24:51 crc kubenswrapper[4861]: I0310 20:24:51.991974 4861 patch_prober.go:28] interesting pod/machine-config-daemon-qttbr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 10 20:24:51 crc kubenswrapper[4861]: I0310 20:24:51.992870 4861 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qttbr" podUID="771189c2-452d-4204-a0b7-abfe9ba62bd0" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 10 20:24:58 crc kubenswrapper[4861]: I0310 20:24:58.558742 4861 scope.go:117] "RemoveContainer" containerID="bc0b22c0b0456ba8391bb8d54dbb2a34da190bd66d5e29e970cc9d84d5570068" Mar 10 20:25:21 crc kubenswrapper[4861]: I0310 20:25:21.992340 4861 patch_prober.go:28] interesting pod/machine-config-daemon-qttbr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 10 20:25:21 crc kubenswrapper[4861]: 
I0310 20:25:21.993156 4861 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qttbr" podUID="771189c2-452d-4204-a0b7-abfe9ba62bd0" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 10 20:25:51 crc kubenswrapper[4861]: I0310 20:25:51.992219 4861 patch_prober.go:28] interesting pod/machine-config-daemon-qttbr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 10 20:25:51 crc kubenswrapper[4861]: I0310 20:25:51.994603 4861 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qttbr" podUID="771189c2-452d-4204-a0b7-abfe9ba62bd0" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 10 20:25:51 crc kubenswrapper[4861]: I0310 20:25:51.994859 4861 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-qttbr" Mar 10 20:25:51 crc kubenswrapper[4861]: I0310 20:25:51.996019 4861 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"94988353296d02fcb6ff0210a2189639e847fb03cc60fee0f780369ee12d56c2"} pod="openshift-machine-config-operator/machine-config-daemon-qttbr" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 10 20:25:51 crc kubenswrapper[4861]: I0310 20:25:51.996285 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-qttbr" podUID="771189c2-452d-4204-a0b7-abfe9ba62bd0" 
containerName="machine-config-daemon" containerID="cri-o://94988353296d02fcb6ff0210a2189639e847fb03cc60fee0f780369ee12d56c2" gracePeriod=600 Mar 10 20:25:53 crc kubenswrapper[4861]: I0310 20:25:53.124107 4861 generic.go:334] "Generic (PLEG): container finished" podID="771189c2-452d-4204-a0b7-abfe9ba62bd0" containerID="94988353296d02fcb6ff0210a2189639e847fb03cc60fee0f780369ee12d56c2" exitCode=0 Mar 10 20:25:53 crc kubenswrapper[4861]: I0310 20:25:53.124240 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qttbr" event={"ID":"771189c2-452d-4204-a0b7-abfe9ba62bd0","Type":"ContainerDied","Data":"94988353296d02fcb6ff0210a2189639e847fb03cc60fee0f780369ee12d56c2"} Mar 10 20:25:53 crc kubenswrapper[4861]: I0310 20:25:53.125327 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qttbr" event={"ID":"771189c2-452d-4204-a0b7-abfe9ba62bd0","Type":"ContainerStarted","Data":"30617e1722c7582f37ad57fcb8d1dc45aba4f055419f9b178dab9e2fdf629c84"} Mar 10 20:25:53 crc kubenswrapper[4861]: I0310 20:25:53.125367 4861 scope.go:117] "RemoveContainer" containerID="cd2259ee04441075924dcee5b49ce2f64778d63f5c1b13672065b616c80497ca" Mar 10 20:25:58 crc kubenswrapper[4861]: I0310 20:25:58.681690 4861 scope.go:117] "RemoveContainer" containerID="75f834c089d67fb9db0e20e967a2da241011c41b1a8541cb50cdbb1cdbe9f41d" Mar 10 20:26:00 crc kubenswrapper[4861]: I0310 20:26:00.162740 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29552906-flk57"] Mar 10 20:26:00 crc kubenswrapper[4861]: E0310 20:26:00.163587 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="104149ea-84b6-47a5-b894-06d4de416d7e" containerName="oc" Mar 10 20:26:00 crc kubenswrapper[4861]: I0310 20:26:00.163609 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="104149ea-84b6-47a5-b894-06d4de416d7e" containerName="oc" Mar 10 20:26:00 crc 
kubenswrapper[4861]: I0310 20:26:00.163949 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="104149ea-84b6-47a5-b894-06d4de416d7e" containerName="oc" Mar 10 20:26:00 crc kubenswrapper[4861]: I0310 20:26:00.164831 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29552906-flk57" Mar 10 20:26:00 crc kubenswrapper[4861]: I0310 20:26:00.168130 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 10 20:26:00 crc kubenswrapper[4861]: I0310 20:26:00.168216 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 10 20:26:00 crc kubenswrapper[4861]: I0310 20:26:00.168905 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-gfbj2" Mar 10 20:26:00 crc kubenswrapper[4861]: I0310 20:26:00.178232 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29552906-flk57"] Mar 10 20:26:00 crc kubenswrapper[4861]: I0310 20:26:00.226503 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-czw8q\" (UniqueName: \"kubernetes.io/projected/50e0baaa-1c8a-4ccd-a6f5-0471c0f6a0da-kube-api-access-czw8q\") pod \"auto-csr-approver-29552906-flk57\" (UID: \"50e0baaa-1c8a-4ccd-a6f5-0471c0f6a0da\") " pod="openshift-infra/auto-csr-approver-29552906-flk57" Mar 10 20:26:00 crc kubenswrapper[4861]: I0310 20:26:00.329228 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-czw8q\" (UniqueName: \"kubernetes.io/projected/50e0baaa-1c8a-4ccd-a6f5-0471c0f6a0da-kube-api-access-czw8q\") pod \"auto-csr-approver-29552906-flk57\" (UID: \"50e0baaa-1c8a-4ccd-a6f5-0471c0f6a0da\") " pod="openshift-infra/auto-csr-approver-29552906-flk57" Mar 10 20:26:00 crc kubenswrapper[4861]: I0310 20:26:00.367848 4861 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-czw8q\" (UniqueName: \"kubernetes.io/projected/50e0baaa-1c8a-4ccd-a6f5-0471c0f6a0da-kube-api-access-czw8q\") pod \"auto-csr-approver-29552906-flk57\" (UID: \"50e0baaa-1c8a-4ccd-a6f5-0471c0f6a0da\") " pod="openshift-infra/auto-csr-approver-29552906-flk57" Mar 10 20:26:00 crc kubenswrapper[4861]: I0310 20:26:00.492623 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29552906-flk57" Mar 10 20:26:01 crc kubenswrapper[4861]: I0310 20:26:01.010813 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29552906-flk57"] Mar 10 20:26:01 crc kubenswrapper[4861]: I0310 20:26:01.018027 4861 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 10 20:26:01 crc kubenswrapper[4861]: I0310 20:26:01.209746 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29552906-flk57" event={"ID":"50e0baaa-1c8a-4ccd-a6f5-0471c0f6a0da","Type":"ContainerStarted","Data":"7ba5333f31f66fe6bbd6cbe7e721a0a5a56e74f200ffe507dec1d86f67dbeba6"} Mar 10 20:26:03 crc kubenswrapper[4861]: I0310 20:26:03.231118 4861 generic.go:334] "Generic (PLEG): container finished" podID="50e0baaa-1c8a-4ccd-a6f5-0471c0f6a0da" containerID="6e13fa0b0dd807a1f5242aaaa231e10cac3fdfbab68ae5835294f794ca819ebf" exitCode=0 Mar 10 20:26:03 crc kubenswrapper[4861]: I0310 20:26:03.231737 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29552906-flk57" event={"ID":"50e0baaa-1c8a-4ccd-a6f5-0471c0f6a0da","Type":"ContainerDied","Data":"6e13fa0b0dd807a1f5242aaaa231e10cac3fdfbab68ae5835294f794ca819ebf"} Mar 10 20:26:04 crc kubenswrapper[4861]: I0310 20:26:04.669294 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29552906-flk57" Mar 10 20:26:04 crc kubenswrapper[4861]: I0310 20:26:04.742016 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-czw8q\" (UniqueName: \"kubernetes.io/projected/50e0baaa-1c8a-4ccd-a6f5-0471c0f6a0da-kube-api-access-czw8q\") pod \"50e0baaa-1c8a-4ccd-a6f5-0471c0f6a0da\" (UID: \"50e0baaa-1c8a-4ccd-a6f5-0471c0f6a0da\") " Mar 10 20:26:04 crc kubenswrapper[4861]: I0310 20:26:04.747205 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/50e0baaa-1c8a-4ccd-a6f5-0471c0f6a0da-kube-api-access-czw8q" (OuterVolumeSpecName: "kube-api-access-czw8q") pod "50e0baaa-1c8a-4ccd-a6f5-0471c0f6a0da" (UID: "50e0baaa-1c8a-4ccd-a6f5-0471c0f6a0da"). InnerVolumeSpecName "kube-api-access-czw8q". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 10 20:26:04 crc kubenswrapper[4861]: I0310 20:26:04.843756 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-czw8q\" (UniqueName: \"kubernetes.io/projected/50e0baaa-1c8a-4ccd-a6f5-0471c0f6a0da-kube-api-access-czw8q\") on node \"crc\" DevicePath \"\"" Mar 10 20:26:05 crc kubenswrapper[4861]: I0310 20:26:05.259406 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29552906-flk57" event={"ID":"50e0baaa-1c8a-4ccd-a6f5-0471c0f6a0da","Type":"ContainerDied","Data":"7ba5333f31f66fe6bbd6cbe7e721a0a5a56e74f200ffe507dec1d86f67dbeba6"} Mar 10 20:26:05 crc kubenswrapper[4861]: I0310 20:26:05.259802 4861 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7ba5333f31f66fe6bbd6cbe7e721a0a5a56e74f200ffe507dec1d86f67dbeba6" Mar 10 20:26:05 crc kubenswrapper[4861]: I0310 20:26:05.259498 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29552906-flk57" Mar 10 20:26:05 crc kubenswrapper[4861]: I0310 20:26:05.729287 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29552900-knzjx"] Mar 10 20:26:05 crc kubenswrapper[4861]: I0310 20:26:05.735627 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29552900-knzjx"] Mar 10 20:26:06 crc kubenswrapper[4861]: I0310 20:26:06.976074 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ab5d8759-dfce-43dc-9511-dd4d55b23b00" path="/var/lib/kubelet/pods/ab5d8759-dfce-43dc-9511-dd4d55b23b00/volumes" Mar 10 20:26:58 crc kubenswrapper[4861]: I0310 20:26:58.762474 4861 scope.go:117] "RemoveContainer" containerID="4f313c638a0446b35cc8842c73be9a1de4a9f41dd55b18002e82b64513ec8b37" Mar 10 20:26:58 crc kubenswrapper[4861]: I0310 20:26:58.816956 4861 scope.go:117] "RemoveContainer" containerID="d3a7b1fb7b52202ae49c22aafea068ec87b86130fe3adc846096983a0e3605f1" Mar 10 20:27:10 crc kubenswrapper[4861]: I0310 20:27:10.222630 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-wd7xc"] Mar 10 20:27:10 crc kubenswrapper[4861]: E0310 20:27:10.223955 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="50e0baaa-1c8a-4ccd-a6f5-0471c0f6a0da" containerName="oc" Mar 10 20:27:10 crc kubenswrapper[4861]: I0310 20:27:10.223977 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="50e0baaa-1c8a-4ccd-a6f5-0471c0f6a0da" containerName="oc" Mar 10 20:27:10 crc kubenswrapper[4861]: I0310 20:27:10.224284 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="50e0baaa-1c8a-4ccd-a6f5-0471c0f6a0da" containerName="oc" Mar 10 20:27:10 crc kubenswrapper[4861]: I0310 20:27:10.226382 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-wd7xc" Mar 10 20:27:10 crc kubenswrapper[4861]: I0310 20:27:10.255923 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-wd7xc"] Mar 10 20:27:10 crc kubenswrapper[4861]: I0310 20:27:10.322353 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bace392d-e6dc-4b4b-993e-a68aa271e57c-utilities\") pod \"certified-operators-wd7xc\" (UID: \"bace392d-e6dc-4b4b-993e-a68aa271e57c\") " pod="openshift-marketplace/certified-operators-wd7xc" Mar 10 20:27:10 crc kubenswrapper[4861]: I0310 20:27:10.322458 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bace392d-e6dc-4b4b-993e-a68aa271e57c-catalog-content\") pod \"certified-operators-wd7xc\" (UID: \"bace392d-e6dc-4b4b-993e-a68aa271e57c\") " pod="openshift-marketplace/certified-operators-wd7xc" Mar 10 20:27:10 crc kubenswrapper[4861]: I0310 20:27:10.322534 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vmrnw\" (UniqueName: \"kubernetes.io/projected/bace392d-e6dc-4b4b-993e-a68aa271e57c-kube-api-access-vmrnw\") pod \"certified-operators-wd7xc\" (UID: \"bace392d-e6dc-4b4b-993e-a68aa271e57c\") " pod="openshift-marketplace/certified-operators-wd7xc" Mar 10 20:27:10 crc kubenswrapper[4861]: I0310 20:27:10.423630 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bace392d-e6dc-4b4b-993e-a68aa271e57c-catalog-content\") pod \"certified-operators-wd7xc\" (UID: \"bace392d-e6dc-4b4b-993e-a68aa271e57c\") " pod="openshift-marketplace/certified-operators-wd7xc" Mar 10 20:27:10 crc kubenswrapper[4861]: I0310 20:27:10.424116 4861 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bace392d-e6dc-4b4b-993e-a68aa271e57c-catalog-content\") pod \"certified-operators-wd7xc\" (UID: \"bace392d-e6dc-4b4b-993e-a68aa271e57c\") " pod="openshift-marketplace/certified-operators-wd7xc" Mar 10 20:27:10 crc kubenswrapper[4861]: I0310 20:27:10.424271 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vmrnw\" (UniqueName: \"kubernetes.io/projected/bace392d-e6dc-4b4b-993e-a68aa271e57c-kube-api-access-vmrnw\") pod \"certified-operators-wd7xc\" (UID: \"bace392d-e6dc-4b4b-993e-a68aa271e57c\") " pod="openshift-marketplace/certified-operators-wd7xc" Mar 10 20:27:10 crc kubenswrapper[4861]: I0310 20:27:10.424657 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bace392d-e6dc-4b4b-993e-a68aa271e57c-utilities\") pod \"certified-operators-wd7xc\" (UID: \"bace392d-e6dc-4b4b-993e-a68aa271e57c\") " pod="openshift-marketplace/certified-operators-wd7xc" Mar 10 20:27:10 crc kubenswrapper[4861]: I0310 20:27:10.424976 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bace392d-e6dc-4b4b-993e-a68aa271e57c-utilities\") pod \"certified-operators-wd7xc\" (UID: \"bace392d-e6dc-4b4b-993e-a68aa271e57c\") " pod="openshift-marketplace/certified-operators-wd7xc" Mar 10 20:27:10 crc kubenswrapper[4861]: I0310 20:27:10.444481 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vmrnw\" (UniqueName: \"kubernetes.io/projected/bace392d-e6dc-4b4b-993e-a68aa271e57c-kube-api-access-vmrnw\") pod \"certified-operators-wd7xc\" (UID: \"bace392d-e6dc-4b4b-993e-a68aa271e57c\") " pod="openshift-marketplace/certified-operators-wd7xc" Mar 10 20:27:10 crc kubenswrapper[4861]: I0310 20:27:10.567387 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-wd7xc" Mar 10 20:27:11 crc kubenswrapper[4861]: I0310 20:27:11.052781 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-wd7xc"] Mar 10 20:27:11 crc kubenswrapper[4861]: I0310 20:27:11.966371 4861 generic.go:334] "Generic (PLEG): container finished" podID="bace392d-e6dc-4b4b-993e-a68aa271e57c" containerID="9233b9b9c1ce48588926532fbd2171443f364eae7bd5a67547cb616e6c8c6816" exitCode=0 Mar 10 20:27:11 crc kubenswrapper[4861]: I0310 20:27:11.966423 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wd7xc" event={"ID":"bace392d-e6dc-4b4b-993e-a68aa271e57c","Type":"ContainerDied","Data":"9233b9b9c1ce48588926532fbd2171443f364eae7bd5a67547cb616e6c8c6816"} Mar 10 20:27:11 crc kubenswrapper[4861]: I0310 20:27:11.966455 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wd7xc" event={"ID":"bace392d-e6dc-4b4b-993e-a68aa271e57c","Type":"ContainerStarted","Data":"10fae1190c5e4f3097042a27e6d8669bfa54bfb4fddfb0d002f00f6431492068"} Mar 10 20:27:12 crc kubenswrapper[4861]: I0310 20:27:12.991953 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wd7xc" event={"ID":"bace392d-e6dc-4b4b-993e-a68aa271e57c","Type":"ContainerStarted","Data":"4c1fc902f9908ce33215da19d33279e37b4292aedd4f70cb46353e00d199c8f1"} Mar 10 20:27:14 crc kubenswrapper[4861]: I0310 20:27:14.002132 4861 generic.go:334] "Generic (PLEG): container finished" podID="bace392d-e6dc-4b4b-993e-a68aa271e57c" containerID="4c1fc902f9908ce33215da19d33279e37b4292aedd4f70cb46353e00d199c8f1" exitCode=0 Mar 10 20:27:14 crc kubenswrapper[4861]: I0310 20:27:14.002351 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wd7xc" 
event={"ID":"bace392d-e6dc-4b4b-993e-a68aa271e57c","Type":"ContainerDied","Data":"4c1fc902f9908ce33215da19d33279e37b4292aedd4f70cb46353e00d199c8f1"} Mar 10 20:27:15 crc kubenswrapper[4861]: I0310 20:27:15.018436 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wd7xc" event={"ID":"bace392d-e6dc-4b4b-993e-a68aa271e57c","Type":"ContainerStarted","Data":"8773655160c66e46e635e86edb69b82f41f43d9ddeda4da4353ef5873aadb749"} Mar 10 20:27:15 crc kubenswrapper[4861]: I0310 20:27:15.058920 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-wd7xc" podStartSLOduration=2.577257983 podStartE2EDuration="5.058899231s" podCreationTimestamp="2026-03-10 20:27:10 +0000 UTC" firstStartedPulling="2026-03-10 20:27:11.968416748 +0000 UTC m=+5975.731852748" lastFinishedPulling="2026-03-10 20:27:14.450058026 +0000 UTC m=+5978.213493996" observedRunningTime="2026-03-10 20:27:15.049164679 +0000 UTC m=+5978.812600649" watchObservedRunningTime="2026-03-10 20:27:15.058899231 +0000 UTC m=+5978.822335201" Mar 10 20:27:20 crc kubenswrapper[4861]: I0310 20:27:20.567921 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-wd7xc" Mar 10 20:27:20 crc kubenswrapper[4861]: I0310 20:27:20.568331 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-wd7xc" Mar 10 20:27:20 crc kubenswrapper[4861]: I0310 20:27:20.657506 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-wd7xc" Mar 10 20:27:21 crc kubenswrapper[4861]: I0310 20:27:21.161951 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-wd7xc" Mar 10 20:27:21 crc kubenswrapper[4861]: I0310 20:27:21.406810 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/certified-operators-wd7xc"] Mar 10 20:27:23 crc kubenswrapper[4861]: I0310 20:27:23.098620 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-wd7xc" podUID="bace392d-e6dc-4b4b-993e-a68aa271e57c" containerName="registry-server" containerID="cri-o://8773655160c66e46e635e86edb69b82f41f43d9ddeda4da4353ef5873aadb749" gracePeriod=2 Mar 10 20:27:23 crc kubenswrapper[4861]: I0310 20:27:23.649331 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-wd7xc" Mar 10 20:27:23 crc kubenswrapper[4861]: I0310 20:27:23.661440 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vmrnw\" (UniqueName: \"kubernetes.io/projected/bace392d-e6dc-4b4b-993e-a68aa271e57c-kube-api-access-vmrnw\") pod \"bace392d-e6dc-4b4b-993e-a68aa271e57c\" (UID: \"bace392d-e6dc-4b4b-993e-a68aa271e57c\") " Mar 10 20:27:23 crc kubenswrapper[4861]: I0310 20:27:23.661584 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bace392d-e6dc-4b4b-993e-a68aa271e57c-utilities\") pod \"bace392d-e6dc-4b4b-993e-a68aa271e57c\" (UID: \"bace392d-e6dc-4b4b-993e-a68aa271e57c\") " Mar 10 20:27:23 crc kubenswrapper[4861]: I0310 20:27:23.661658 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bace392d-e6dc-4b4b-993e-a68aa271e57c-catalog-content\") pod \"bace392d-e6dc-4b4b-993e-a68aa271e57c\" (UID: \"bace392d-e6dc-4b4b-993e-a68aa271e57c\") " Mar 10 20:27:23 crc kubenswrapper[4861]: I0310 20:27:23.662462 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bace392d-e6dc-4b4b-993e-a68aa271e57c-utilities" (OuterVolumeSpecName: "utilities") pod "bace392d-e6dc-4b4b-993e-a68aa271e57c" (UID: 
"bace392d-e6dc-4b4b-993e-a68aa271e57c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 10 20:27:23 crc kubenswrapper[4861]: I0310 20:27:23.668980 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bace392d-e6dc-4b4b-993e-a68aa271e57c-kube-api-access-vmrnw" (OuterVolumeSpecName: "kube-api-access-vmrnw") pod "bace392d-e6dc-4b4b-993e-a68aa271e57c" (UID: "bace392d-e6dc-4b4b-993e-a68aa271e57c"). InnerVolumeSpecName "kube-api-access-vmrnw". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 10 20:27:23 crc kubenswrapper[4861]: I0310 20:27:23.764282 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vmrnw\" (UniqueName: \"kubernetes.io/projected/bace392d-e6dc-4b4b-993e-a68aa271e57c-kube-api-access-vmrnw\") on node \"crc\" DevicePath \"\"" Mar 10 20:27:23 crc kubenswrapper[4861]: I0310 20:27:23.764321 4861 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bace392d-e6dc-4b4b-993e-a68aa271e57c-utilities\") on node \"crc\" DevicePath \"\"" Mar 10 20:27:23 crc kubenswrapper[4861]: I0310 20:27:23.844981 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bace392d-e6dc-4b4b-993e-a68aa271e57c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "bace392d-e6dc-4b4b-993e-a68aa271e57c" (UID: "bace392d-e6dc-4b4b-993e-a68aa271e57c"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 10 20:27:23 crc kubenswrapper[4861]: I0310 20:27:23.866598 4861 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bace392d-e6dc-4b4b-993e-a68aa271e57c-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 10 20:27:24 crc kubenswrapper[4861]: I0310 20:27:24.114086 4861 generic.go:334] "Generic (PLEG): container finished" podID="bace392d-e6dc-4b4b-993e-a68aa271e57c" containerID="8773655160c66e46e635e86edb69b82f41f43d9ddeda4da4353ef5873aadb749" exitCode=0 Mar 10 20:27:24 crc kubenswrapper[4861]: I0310 20:27:24.114150 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wd7xc" event={"ID":"bace392d-e6dc-4b4b-993e-a68aa271e57c","Type":"ContainerDied","Data":"8773655160c66e46e635e86edb69b82f41f43d9ddeda4da4353ef5873aadb749"} Mar 10 20:27:24 crc kubenswrapper[4861]: I0310 20:27:24.114204 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wd7xc" event={"ID":"bace392d-e6dc-4b4b-993e-a68aa271e57c","Type":"ContainerDied","Data":"10fae1190c5e4f3097042a27e6d8669bfa54bfb4fddfb0d002f00f6431492068"} Mar 10 20:27:24 crc kubenswrapper[4861]: I0310 20:27:24.114199 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-wd7xc" Mar 10 20:27:24 crc kubenswrapper[4861]: I0310 20:27:24.114287 4861 scope.go:117] "RemoveContainer" containerID="8773655160c66e46e635e86edb69b82f41f43d9ddeda4da4353ef5873aadb749" Mar 10 20:27:24 crc kubenswrapper[4861]: I0310 20:27:24.146880 4861 scope.go:117] "RemoveContainer" containerID="4c1fc902f9908ce33215da19d33279e37b4292aedd4f70cb46353e00d199c8f1" Mar 10 20:27:24 crc kubenswrapper[4861]: I0310 20:27:24.179796 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-wd7xc"] Mar 10 20:27:24 crc kubenswrapper[4861]: I0310 20:27:24.194651 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-wd7xc"] Mar 10 20:27:24 crc kubenswrapper[4861]: I0310 20:27:24.209584 4861 scope.go:117] "RemoveContainer" containerID="9233b9b9c1ce48588926532fbd2171443f364eae7bd5a67547cb616e6c8c6816" Mar 10 20:27:24 crc kubenswrapper[4861]: I0310 20:27:24.236872 4861 scope.go:117] "RemoveContainer" containerID="8773655160c66e46e635e86edb69b82f41f43d9ddeda4da4353ef5873aadb749" Mar 10 20:27:24 crc kubenswrapper[4861]: E0310 20:27:24.237828 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8773655160c66e46e635e86edb69b82f41f43d9ddeda4da4353ef5873aadb749\": container with ID starting with 8773655160c66e46e635e86edb69b82f41f43d9ddeda4da4353ef5873aadb749 not found: ID does not exist" containerID="8773655160c66e46e635e86edb69b82f41f43d9ddeda4da4353ef5873aadb749" Mar 10 20:27:24 crc kubenswrapper[4861]: I0310 20:27:24.237900 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8773655160c66e46e635e86edb69b82f41f43d9ddeda4da4353ef5873aadb749"} err="failed to get container status \"8773655160c66e46e635e86edb69b82f41f43d9ddeda4da4353ef5873aadb749\": rpc error: code = NotFound desc = could not find 
container \"8773655160c66e46e635e86edb69b82f41f43d9ddeda4da4353ef5873aadb749\": container with ID starting with 8773655160c66e46e635e86edb69b82f41f43d9ddeda4da4353ef5873aadb749 not found: ID does not exist" Mar 10 20:27:24 crc kubenswrapper[4861]: I0310 20:27:24.237944 4861 scope.go:117] "RemoveContainer" containerID="4c1fc902f9908ce33215da19d33279e37b4292aedd4f70cb46353e00d199c8f1" Mar 10 20:27:24 crc kubenswrapper[4861]: E0310 20:27:24.238509 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4c1fc902f9908ce33215da19d33279e37b4292aedd4f70cb46353e00d199c8f1\": container with ID starting with 4c1fc902f9908ce33215da19d33279e37b4292aedd4f70cb46353e00d199c8f1 not found: ID does not exist" containerID="4c1fc902f9908ce33215da19d33279e37b4292aedd4f70cb46353e00d199c8f1" Mar 10 20:27:24 crc kubenswrapper[4861]: I0310 20:27:24.238576 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4c1fc902f9908ce33215da19d33279e37b4292aedd4f70cb46353e00d199c8f1"} err="failed to get container status \"4c1fc902f9908ce33215da19d33279e37b4292aedd4f70cb46353e00d199c8f1\": rpc error: code = NotFound desc = could not find container \"4c1fc902f9908ce33215da19d33279e37b4292aedd4f70cb46353e00d199c8f1\": container with ID starting with 4c1fc902f9908ce33215da19d33279e37b4292aedd4f70cb46353e00d199c8f1 not found: ID does not exist" Mar 10 20:27:24 crc kubenswrapper[4861]: I0310 20:27:24.238624 4861 scope.go:117] "RemoveContainer" containerID="9233b9b9c1ce48588926532fbd2171443f364eae7bd5a67547cb616e6c8c6816" Mar 10 20:27:24 crc kubenswrapper[4861]: E0310 20:27:24.239575 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9233b9b9c1ce48588926532fbd2171443f364eae7bd5a67547cb616e6c8c6816\": container with ID starting with 9233b9b9c1ce48588926532fbd2171443f364eae7bd5a67547cb616e6c8c6816 not found: ID does 
not exist" containerID="9233b9b9c1ce48588926532fbd2171443f364eae7bd5a67547cb616e6c8c6816" Mar 10 20:27:24 crc kubenswrapper[4861]: I0310 20:27:24.239736 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9233b9b9c1ce48588926532fbd2171443f364eae7bd5a67547cb616e6c8c6816"} err="failed to get container status \"9233b9b9c1ce48588926532fbd2171443f364eae7bd5a67547cb616e6c8c6816\": rpc error: code = NotFound desc = could not find container \"9233b9b9c1ce48588926532fbd2171443f364eae7bd5a67547cb616e6c8c6816\": container with ID starting with 9233b9b9c1ce48588926532fbd2171443f364eae7bd5a67547cb616e6c8c6816 not found: ID does not exist" Mar 10 20:27:24 crc kubenswrapper[4861]: I0310 20:27:24.974581 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bace392d-e6dc-4b4b-993e-a68aa271e57c" path="/var/lib/kubelet/pods/bace392d-e6dc-4b4b-993e-a68aa271e57c/volumes" Mar 10 20:28:00 crc kubenswrapper[4861]: I0310 20:28:00.174002 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29552908-2b8bk"] Mar 10 20:28:00 crc kubenswrapper[4861]: E0310 20:28:00.175484 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bace392d-e6dc-4b4b-993e-a68aa271e57c" containerName="extract-utilities" Mar 10 20:28:00 crc kubenswrapper[4861]: I0310 20:28:00.175510 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="bace392d-e6dc-4b4b-993e-a68aa271e57c" containerName="extract-utilities" Mar 10 20:28:00 crc kubenswrapper[4861]: E0310 20:28:00.175558 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bace392d-e6dc-4b4b-993e-a68aa271e57c" containerName="registry-server" Mar 10 20:28:00 crc kubenswrapper[4861]: I0310 20:28:00.175570 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="bace392d-e6dc-4b4b-993e-a68aa271e57c" containerName="registry-server" Mar 10 20:28:00 crc kubenswrapper[4861]: E0310 20:28:00.175584 4861 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="bace392d-e6dc-4b4b-993e-a68aa271e57c" containerName="extract-content" Mar 10 20:28:00 crc kubenswrapper[4861]: I0310 20:28:00.175597 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="bace392d-e6dc-4b4b-993e-a68aa271e57c" containerName="extract-content" Mar 10 20:28:00 crc kubenswrapper[4861]: I0310 20:28:00.175952 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="bace392d-e6dc-4b4b-993e-a68aa271e57c" containerName="registry-server" Mar 10 20:28:00 crc kubenswrapper[4861]: I0310 20:28:00.177100 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29552908-2b8bk" Mar 10 20:28:00 crc kubenswrapper[4861]: I0310 20:28:00.180242 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-gfbj2" Mar 10 20:28:00 crc kubenswrapper[4861]: I0310 20:28:00.184985 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 10 20:28:00 crc kubenswrapper[4861]: I0310 20:28:00.186026 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 10 20:28:00 crc kubenswrapper[4861]: I0310 20:28:00.196489 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29552908-2b8bk"] Mar 10 20:28:00 crc kubenswrapper[4861]: I0310 20:28:00.199214 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f5vqs\" (UniqueName: \"kubernetes.io/projected/09b6f18c-ee9d-459b-9f33-5301af9465f3-kube-api-access-f5vqs\") pod \"auto-csr-approver-29552908-2b8bk\" (UID: \"09b6f18c-ee9d-459b-9f33-5301af9465f3\") " pod="openshift-infra/auto-csr-approver-29552908-2b8bk" Mar 10 20:28:00 crc kubenswrapper[4861]: I0310 20:28:00.304130 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-f5vqs\" (UniqueName: \"kubernetes.io/projected/09b6f18c-ee9d-459b-9f33-5301af9465f3-kube-api-access-f5vqs\") pod \"auto-csr-approver-29552908-2b8bk\" (UID: \"09b6f18c-ee9d-459b-9f33-5301af9465f3\") " pod="openshift-infra/auto-csr-approver-29552908-2b8bk" Mar 10 20:28:00 crc kubenswrapper[4861]: I0310 20:28:00.345503 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f5vqs\" (UniqueName: \"kubernetes.io/projected/09b6f18c-ee9d-459b-9f33-5301af9465f3-kube-api-access-f5vqs\") pod \"auto-csr-approver-29552908-2b8bk\" (UID: \"09b6f18c-ee9d-459b-9f33-5301af9465f3\") " pod="openshift-infra/auto-csr-approver-29552908-2b8bk" Mar 10 20:28:00 crc kubenswrapper[4861]: I0310 20:28:00.512745 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29552908-2b8bk" Mar 10 20:28:00 crc kubenswrapper[4861]: I0310 20:28:00.843317 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29552908-2b8bk"] Mar 10 20:28:01 crc kubenswrapper[4861]: I0310 20:28:01.561309 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29552908-2b8bk" event={"ID":"09b6f18c-ee9d-459b-9f33-5301af9465f3","Type":"ContainerStarted","Data":"1e78bba56dede9408d0ecc06a528614ca228b6ee86772d47e1e9760244f7cd28"} Mar 10 20:28:02 crc kubenswrapper[4861]: I0310 20:28:02.574365 4861 generic.go:334] "Generic (PLEG): container finished" podID="09b6f18c-ee9d-459b-9f33-5301af9465f3" containerID="ddda13501c00da04e7a25a72a8f01c1e03373db293000cfd65250c2cbc476072" exitCode=0 Mar 10 20:28:02 crc kubenswrapper[4861]: I0310 20:28:02.574452 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29552908-2b8bk" event={"ID":"09b6f18c-ee9d-459b-9f33-5301af9465f3","Type":"ContainerDied","Data":"ddda13501c00da04e7a25a72a8f01c1e03373db293000cfd65250c2cbc476072"} Mar 10 20:28:04 crc kubenswrapper[4861]: I0310 
20:28:04.014096 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29552908-2b8bk" Mar 10 20:28:04 crc kubenswrapper[4861]: I0310 20:28:04.083779 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f5vqs\" (UniqueName: \"kubernetes.io/projected/09b6f18c-ee9d-459b-9f33-5301af9465f3-kube-api-access-f5vqs\") pod \"09b6f18c-ee9d-459b-9f33-5301af9465f3\" (UID: \"09b6f18c-ee9d-459b-9f33-5301af9465f3\") " Mar 10 20:28:04 crc kubenswrapper[4861]: I0310 20:28:04.093393 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09b6f18c-ee9d-459b-9f33-5301af9465f3-kube-api-access-f5vqs" (OuterVolumeSpecName: "kube-api-access-f5vqs") pod "09b6f18c-ee9d-459b-9f33-5301af9465f3" (UID: "09b6f18c-ee9d-459b-9f33-5301af9465f3"). InnerVolumeSpecName "kube-api-access-f5vqs". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 10 20:28:04 crc kubenswrapper[4861]: I0310 20:28:04.187164 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f5vqs\" (UniqueName: \"kubernetes.io/projected/09b6f18c-ee9d-459b-9f33-5301af9465f3-kube-api-access-f5vqs\") on node \"crc\" DevicePath \"\"" Mar 10 20:28:04 crc kubenswrapper[4861]: I0310 20:28:04.599137 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29552908-2b8bk" event={"ID":"09b6f18c-ee9d-459b-9f33-5301af9465f3","Type":"ContainerDied","Data":"1e78bba56dede9408d0ecc06a528614ca228b6ee86772d47e1e9760244f7cd28"} Mar 10 20:28:04 crc kubenswrapper[4861]: I0310 20:28:04.599195 4861 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1e78bba56dede9408d0ecc06a528614ca228b6ee86772d47e1e9760244f7cd28" Mar 10 20:28:04 crc kubenswrapper[4861]: I0310 20:28:04.599215 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29552908-2b8bk" Mar 10 20:28:05 crc kubenswrapper[4861]: I0310 20:28:05.120210 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29552902-5fnqk"] Mar 10 20:28:05 crc kubenswrapper[4861]: I0310 20:28:05.131168 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29552902-5fnqk"] Mar 10 20:28:06 crc kubenswrapper[4861]: I0310 20:28:06.974684 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3d9533f3-b5ba-4527-850d-b34f21f08bf9" path="/var/lib/kubelet/pods/3d9533f3-b5ba-4527-850d-b34f21f08bf9/volumes" Mar 10 20:28:21 crc kubenswrapper[4861]: I0310 20:28:21.992610 4861 patch_prober.go:28] interesting pod/machine-config-daemon-qttbr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 10 20:28:21 crc kubenswrapper[4861]: I0310 20:28:21.993340 4861 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qttbr" podUID="771189c2-452d-4204-a0b7-abfe9ba62bd0" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 10 20:28:51 crc kubenswrapper[4861]: I0310 20:28:51.992005 4861 patch_prober.go:28] interesting pod/machine-config-daemon-qttbr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 10 20:28:51 crc kubenswrapper[4861]: I0310 20:28:51.993092 4861 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qttbr" 
podUID="771189c2-452d-4204-a0b7-abfe9ba62bd0" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 10 20:28:58 crc kubenswrapper[4861]: I0310 20:28:58.975075 4861 scope.go:117] "RemoveContainer" containerID="0cbcd490cca913187f9fb226eb95a6850707251b42e856e5ebf255caef4213d3" Mar 10 20:29:12 crc kubenswrapper[4861]: I0310 20:29:12.223658 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-pf8j7"] Mar 10 20:29:12 crc kubenswrapper[4861]: E0310 20:29:12.224771 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="09b6f18c-ee9d-459b-9f33-5301af9465f3" containerName="oc" Mar 10 20:29:12 crc kubenswrapper[4861]: I0310 20:29:12.224792 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="09b6f18c-ee9d-459b-9f33-5301af9465f3" containerName="oc" Mar 10 20:29:12 crc kubenswrapper[4861]: I0310 20:29:12.225073 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="09b6f18c-ee9d-459b-9f33-5301af9465f3" containerName="oc" Mar 10 20:29:12 crc kubenswrapper[4861]: I0310 20:29:12.227044 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-pf8j7" Mar 10 20:29:12 crc kubenswrapper[4861]: I0310 20:29:12.244422 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-pf8j7"] Mar 10 20:29:12 crc kubenswrapper[4861]: I0310 20:29:12.337516 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b3c08693-42e4-4c95-acc4-5bf31f8e4780-utilities\") pod \"community-operators-pf8j7\" (UID: \"b3c08693-42e4-4c95-acc4-5bf31f8e4780\") " pod="openshift-marketplace/community-operators-pf8j7" Mar 10 20:29:12 crc kubenswrapper[4861]: I0310 20:29:12.337834 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-clqr9\" (UniqueName: \"kubernetes.io/projected/b3c08693-42e4-4c95-acc4-5bf31f8e4780-kube-api-access-clqr9\") pod \"community-operators-pf8j7\" (UID: \"b3c08693-42e4-4c95-acc4-5bf31f8e4780\") " pod="openshift-marketplace/community-operators-pf8j7" Mar 10 20:29:12 crc kubenswrapper[4861]: I0310 20:29:12.338287 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b3c08693-42e4-4c95-acc4-5bf31f8e4780-catalog-content\") pod \"community-operators-pf8j7\" (UID: \"b3c08693-42e4-4c95-acc4-5bf31f8e4780\") " pod="openshift-marketplace/community-operators-pf8j7" Mar 10 20:29:12 crc kubenswrapper[4861]: I0310 20:29:12.439796 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-clqr9\" (UniqueName: \"kubernetes.io/projected/b3c08693-42e4-4c95-acc4-5bf31f8e4780-kube-api-access-clqr9\") pod \"community-operators-pf8j7\" (UID: \"b3c08693-42e4-4c95-acc4-5bf31f8e4780\") " pod="openshift-marketplace/community-operators-pf8j7" Mar 10 20:29:12 crc kubenswrapper[4861]: I0310 20:29:12.440001 4861 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b3c08693-42e4-4c95-acc4-5bf31f8e4780-catalog-content\") pod \"community-operators-pf8j7\" (UID: \"b3c08693-42e4-4c95-acc4-5bf31f8e4780\") " pod="openshift-marketplace/community-operators-pf8j7" Mar 10 20:29:12 crc kubenswrapper[4861]: I0310 20:29:12.440080 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b3c08693-42e4-4c95-acc4-5bf31f8e4780-utilities\") pod \"community-operators-pf8j7\" (UID: \"b3c08693-42e4-4c95-acc4-5bf31f8e4780\") " pod="openshift-marketplace/community-operators-pf8j7" Mar 10 20:29:12 crc kubenswrapper[4861]: I0310 20:29:12.440743 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b3c08693-42e4-4c95-acc4-5bf31f8e4780-catalog-content\") pod \"community-operators-pf8j7\" (UID: \"b3c08693-42e4-4c95-acc4-5bf31f8e4780\") " pod="openshift-marketplace/community-operators-pf8j7" Mar 10 20:29:12 crc kubenswrapper[4861]: I0310 20:29:12.440888 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b3c08693-42e4-4c95-acc4-5bf31f8e4780-utilities\") pod \"community-operators-pf8j7\" (UID: \"b3c08693-42e4-4c95-acc4-5bf31f8e4780\") " pod="openshift-marketplace/community-operators-pf8j7" Mar 10 20:29:12 crc kubenswrapper[4861]: I0310 20:29:12.466337 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-clqr9\" (UniqueName: \"kubernetes.io/projected/b3c08693-42e4-4c95-acc4-5bf31f8e4780-kube-api-access-clqr9\") pod \"community-operators-pf8j7\" (UID: \"b3c08693-42e4-4c95-acc4-5bf31f8e4780\") " pod="openshift-marketplace/community-operators-pf8j7" Mar 10 20:29:12 crc kubenswrapper[4861]: I0310 20:29:12.593230 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-pf8j7" Mar 10 20:29:13 crc kubenswrapper[4861]: I0310 20:29:13.107994 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-pf8j7"] Mar 10 20:29:13 crc kubenswrapper[4861]: I0310 20:29:13.352559 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pf8j7" event={"ID":"b3c08693-42e4-4c95-acc4-5bf31f8e4780","Type":"ContainerStarted","Data":"ed4fe0d75065d14d401b2e4f36bc99dab0f0ea6fc8f203538e7bdcb71ca3cfa0"} Mar 10 20:29:13 crc kubenswrapper[4861]: I0310 20:29:13.352609 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pf8j7" event={"ID":"b3c08693-42e4-4c95-acc4-5bf31f8e4780","Type":"ContainerStarted","Data":"445decd83970fd87dc4789bb5816f0e5217dc2e29e2d90b072afa2e6cf8b957d"} Mar 10 20:29:14 crc kubenswrapper[4861]: I0310 20:29:14.368596 4861 generic.go:334] "Generic (PLEG): container finished" podID="b3c08693-42e4-4c95-acc4-5bf31f8e4780" containerID="ed4fe0d75065d14d401b2e4f36bc99dab0f0ea6fc8f203538e7bdcb71ca3cfa0" exitCode=0 Mar 10 20:29:14 crc kubenswrapper[4861]: I0310 20:29:14.368817 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pf8j7" event={"ID":"b3c08693-42e4-4c95-acc4-5bf31f8e4780","Type":"ContainerDied","Data":"ed4fe0d75065d14d401b2e4f36bc99dab0f0ea6fc8f203538e7bdcb71ca3cfa0"} Mar 10 20:29:15 crc kubenswrapper[4861]: I0310 20:29:15.379285 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pf8j7" event={"ID":"b3c08693-42e4-4c95-acc4-5bf31f8e4780","Type":"ContainerStarted","Data":"eddb979a428c0c9fe0b4607e88276a3e84ca7536be9c8738e6a45bcbaaf5435c"} Mar 10 20:29:16 crc kubenswrapper[4861]: I0310 20:29:16.402116 4861 generic.go:334] "Generic (PLEG): container finished" podID="b3c08693-42e4-4c95-acc4-5bf31f8e4780" 
containerID="eddb979a428c0c9fe0b4607e88276a3e84ca7536be9c8738e6a45bcbaaf5435c" exitCode=0 Mar 10 20:29:16 crc kubenswrapper[4861]: I0310 20:29:16.402161 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pf8j7" event={"ID":"b3c08693-42e4-4c95-acc4-5bf31f8e4780","Type":"ContainerDied","Data":"eddb979a428c0c9fe0b4607e88276a3e84ca7536be9c8738e6a45bcbaaf5435c"} Mar 10 20:29:17 crc kubenswrapper[4861]: I0310 20:29:17.413436 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pf8j7" event={"ID":"b3c08693-42e4-4c95-acc4-5bf31f8e4780","Type":"ContainerStarted","Data":"1d2b193b5e66a35cb957c5c23025c4b78c0833117803fc9fb33eb56070252111"} Mar 10 20:29:17 crc kubenswrapper[4861]: I0310 20:29:17.440366 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-pf8j7" podStartSLOduration=2.992332353 podStartE2EDuration="5.440337584s" podCreationTimestamp="2026-03-10 20:29:12 +0000 UTC" firstStartedPulling="2026-03-10 20:29:14.372341637 +0000 UTC m=+6098.135777637" lastFinishedPulling="2026-03-10 20:29:16.820346878 +0000 UTC m=+6100.583782868" observedRunningTime="2026-03-10 20:29:17.437455947 +0000 UTC m=+6101.200891967" watchObservedRunningTime="2026-03-10 20:29:17.440337584 +0000 UTC m=+6101.203773574" Mar 10 20:29:21 crc kubenswrapper[4861]: I0310 20:29:21.991947 4861 patch_prober.go:28] interesting pod/machine-config-daemon-qttbr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 10 20:29:21 crc kubenswrapper[4861]: I0310 20:29:21.992587 4861 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qttbr" podUID="771189c2-452d-4204-a0b7-abfe9ba62bd0" containerName="machine-config-daemon" 
probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 10 20:29:21 crc kubenswrapper[4861]: I0310 20:29:21.992644 4861 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-qttbr" Mar 10 20:29:21 crc kubenswrapper[4861]: I0310 20:29:21.993635 4861 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"30617e1722c7582f37ad57fcb8d1dc45aba4f055419f9b178dab9e2fdf629c84"} pod="openshift-machine-config-operator/machine-config-daemon-qttbr" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 10 20:29:21 crc kubenswrapper[4861]: I0310 20:29:21.993753 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-qttbr" podUID="771189c2-452d-4204-a0b7-abfe9ba62bd0" containerName="machine-config-daemon" containerID="cri-o://30617e1722c7582f37ad57fcb8d1dc45aba4f055419f9b178dab9e2fdf629c84" gracePeriod=600 Mar 10 20:29:22 crc kubenswrapper[4861]: E0310 20:29:22.125473 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qttbr_openshift-machine-config-operator(771189c2-452d-4204-a0b7-abfe9ba62bd0)\"" pod="openshift-machine-config-operator/machine-config-daemon-qttbr" podUID="771189c2-452d-4204-a0b7-abfe9ba62bd0" Mar 10 20:29:22 crc kubenswrapper[4861]: I0310 20:29:22.464389 4861 generic.go:334] "Generic (PLEG): container finished" podID="771189c2-452d-4204-a0b7-abfe9ba62bd0" containerID="30617e1722c7582f37ad57fcb8d1dc45aba4f055419f9b178dab9e2fdf629c84" exitCode=0 Mar 10 20:29:22 crc kubenswrapper[4861]: I0310 20:29:22.464449 4861 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qttbr" event={"ID":"771189c2-452d-4204-a0b7-abfe9ba62bd0","Type":"ContainerDied","Data":"30617e1722c7582f37ad57fcb8d1dc45aba4f055419f9b178dab9e2fdf629c84"} Mar 10 20:29:22 crc kubenswrapper[4861]: I0310 20:29:22.464504 4861 scope.go:117] "RemoveContainer" containerID="94988353296d02fcb6ff0210a2189639e847fb03cc60fee0f780369ee12d56c2" Mar 10 20:29:22 crc kubenswrapper[4861]: I0310 20:29:22.465400 4861 scope.go:117] "RemoveContainer" containerID="30617e1722c7582f37ad57fcb8d1dc45aba4f055419f9b178dab9e2fdf629c84" Mar 10 20:29:22 crc kubenswrapper[4861]: E0310 20:29:22.465914 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qttbr_openshift-machine-config-operator(771189c2-452d-4204-a0b7-abfe9ba62bd0)\"" pod="openshift-machine-config-operator/machine-config-daemon-qttbr" podUID="771189c2-452d-4204-a0b7-abfe9ba62bd0" Mar 10 20:29:22 crc kubenswrapper[4861]: I0310 20:29:22.594207 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-pf8j7" Mar 10 20:29:22 crc kubenswrapper[4861]: I0310 20:29:22.595865 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-pf8j7" Mar 10 20:29:22 crc kubenswrapper[4861]: I0310 20:29:22.678114 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-pf8j7" Mar 10 20:29:23 crc kubenswrapper[4861]: I0310 20:29:23.603303 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-pf8j7" Mar 10 20:29:23 crc kubenswrapper[4861]: I0310 20:29:23.686725 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/community-operators-pf8j7"] Mar 10 20:29:25 crc kubenswrapper[4861]: I0310 20:29:25.498224 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-pf8j7" podUID="b3c08693-42e4-4c95-acc4-5bf31f8e4780" containerName="registry-server" containerID="cri-o://1d2b193b5e66a35cb957c5c23025c4b78c0833117803fc9fb33eb56070252111" gracePeriod=2 Mar 10 20:29:26 crc kubenswrapper[4861]: I0310 20:29:26.076301 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-pf8j7" Mar 10 20:29:26 crc kubenswrapper[4861]: I0310 20:29:26.219917 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-clqr9\" (UniqueName: \"kubernetes.io/projected/b3c08693-42e4-4c95-acc4-5bf31f8e4780-kube-api-access-clqr9\") pod \"b3c08693-42e4-4c95-acc4-5bf31f8e4780\" (UID: \"b3c08693-42e4-4c95-acc4-5bf31f8e4780\") " Mar 10 20:29:26 crc kubenswrapper[4861]: I0310 20:29:26.220300 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b3c08693-42e4-4c95-acc4-5bf31f8e4780-catalog-content\") pod \"b3c08693-42e4-4c95-acc4-5bf31f8e4780\" (UID: \"b3c08693-42e4-4c95-acc4-5bf31f8e4780\") " Mar 10 20:29:26 crc kubenswrapper[4861]: I0310 20:29:26.220685 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b3c08693-42e4-4c95-acc4-5bf31f8e4780-utilities\") pod \"b3c08693-42e4-4c95-acc4-5bf31f8e4780\" (UID: \"b3c08693-42e4-4c95-acc4-5bf31f8e4780\") " Mar 10 20:29:26 crc kubenswrapper[4861]: I0310 20:29:26.222007 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b3c08693-42e4-4c95-acc4-5bf31f8e4780-utilities" (OuterVolumeSpecName: "utilities") pod "b3c08693-42e4-4c95-acc4-5bf31f8e4780" (UID: 
"b3c08693-42e4-4c95-acc4-5bf31f8e4780"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 10 20:29:26 crc kubenswrapper[4861]: I0310 20:29:26.234095 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b3c08693-42e4-4c95-acc4-5bf31f8e4780-kube-api-access-clqr9" (OuterVolumeSpecName: "kube-api-access-clqr9") pod "b3c08693-42e4-4c95-acc4-5bf31f8e4780" (UID: "b3c08693-42e4-4c95-acc4-5bf31f8e4780"). InnerVolumeSpecName "kube-api-access-clqr9". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 10 20:29:26 crc kubenswrapper[4861]: I0310 20:29:26.309548 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b3c08693-42e4-4c95-acc4-5bf31f8e4780-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b3c08693-42e4-4c95-acc4-5bf31f8e4780" (UID: "b3c08693-42e4-4c95-acc4-5bf31f8e4780"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 10 20:29:26 crc kubenswrapper[4861]: I0310 20:29:26.323196 4861 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b3c08693-42e4-4c95-acc4-5bf31f8e4780-utilities\") on node \"crc\" DevicePath \"\"" Mar 10 20:29:26 crc kubenswrapper[4861]: I0310 20:29:26.323238 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-clqr9\" (UniqueName: \"kubernetes.io/projected/b3c08693-42e4-4c95-acc4-5bf31f8e4780-kube-api-access-clqr9\") on node \"crc\" DevicePath \"\"" Mar 10 20:29:26 crc kubenswrapper[4861]: I0310 20:29:26.323259 4861 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b3c08693-42e4-4c95-acc4-5bf31f8e4780-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 10 20:29:26 crc kubenswrapper[4861]: I0310 20:29:26.514795 4861 generic.go:334] "Generic (PLEG): container finished" 
podID="b3c08693-42e4-4c95-acc4-5bf31f8e4780" containerID="1d2b193b5e66a35cb957c5c23025c4b78c0833117803fc9fb33eb56070252111" exitCode=0 Mar 10 20:29:26 crc kubenswrapper[4861]: I0310 20:29:26.514855 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pf8j7" event={"ID":"b3c08693-42e4-4c95-acc4-5bf31f8e4780","Type":"ContainerDied","Data":"1d2b193b5e66a35cb957c5c23025c4b78c0833117803fc9fb33eb56070252111"} Mar 10 20:29:26 crc kubenswrapper[4861]: I0310 20:29:26.514893 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-pf8j7" Mar 10 20:29:26 crc kubenswrapper[4861]: I0310 20:29:26.514926 4861 scope.go:117] "RemoveContainer" containerID="1d2b193b5e66a35cb957c5c23025c4b78c0833117803fc9fb33eb56070252111" Mar 10 20:29:26 crc kubenswrapper[4861]: I0310 20:29:26.514908 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pf8j7" event={"ID":"b3c08693-42e4-4c95-acc4-5bf31f8e4780","Type":"ContainerDied","Data":"445decd83970fd87dc4789bb5816f0e5217dc2e29e2d90b072afa2e6cf8b957d"} Mar 10 20:29:26 crc kubenswrapper[4861]: I0310 20:29:26.543963 4861 scope.go:117] "RemoveContainer" containerID="eddb979a428c0c9fe0b4607e88276a3e84ca7536be9c8738e6a45bcbaaf5435c" Mar 10 20:29:26 crc kubenswrapper[4861]: I0310 20:29:26.572697 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-pf8j7"] Mar 10 20:29:26 crc kubenswrapper[4861]: I0310 20:29:26.579941 4861 scope.go:117] "RemoveContainer" containerID="ed4fe0d75065d14d401b2e4f36bc99dab0f0ea6fc8f203538e7bdcb71ca3cfa0" Mar 10 20:29:26 crc kubenswrapper[4861]: I0310 20:29:26.585949 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-pf8j7"] Mar 10 20:29:26 crc kubenswrapper[4861]: I0310 20:29:26.632578 4861 scope.go:117] "RemoveContainer" 
containerID="1d2b193b5e66a35cb957c5c23025c4b78c0833117803fc9fb33eb56070252111" Mar 10 20:29:26 crc kubenswrapper[4861]: E0310 20:29:26.633115 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1d2b193b5e66a35cb957c5c23025c4b78c0833117803fc9fb33eb56070252111\": container with ID starting with 1d2b193b5e66a35cb957c5c23025c4b78c0833117803fc9fb33eb56070252111 not found: ID does not exist" containerID="1d2b193b5e66a35cb957c5c23025c4b78c0833117803fc9fb33eb56070252111" Mar 10 20:29:26 crc kubenswrapper[4861]: I0310 20:29:26.633158 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1d2b193b5e66a35cb957c5c23025c4b78c0833117803fc9fb33eb56070252111"} err="failed to get container status \"1d2b193b5e66a35cb957c5c23025c4b78c0833117803fc9fb33eb56070252111\": rpc error: code = NotFound desc = could not find container \"1d2b193b5e66a35cb957c5c23025c4b78c0833117803fc9fb33eb56070252111\": container with ID starting with 1d2b193b5e66a35cb957c5c23025c4b78c0833117803fc9fb33eb56070252111 not found: ID does not exist" Mar 10 20:29:26 crc kubenswrapper[4861]: I0310 20:29:26.633192 4861 scope.go:117] "RemoveContainer" containerID="eddb979a428c0c9fe0b4607e88276a3e84ca7536be9c8738e6a45bcbaaf5435c" Mar 10 20:29:26 crc kubenswrapper[4861]: E0310 20:29:26.633581 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"eddb979a428c0c9fe0b4607e88276a3e84ca7536be9c8738e6a45bcbaaf5435c\": container with ID starting with eddb979a428c0c9fe0b4607e88276a3e84ca7536be9c8738e6a45bcbaaf5435c not found: ID does not exist" containerID="eddb979a428c0c9fe0b4607e88276a3e84ca7536be9c8738e6a45bcbaaf5435c" Mar 10 20:29:26 crc kubenswrapper[4861]: I0310 20:29:26.633627 4861 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"eddb979a428c0c9fe0b4607e88276a3e84ca7536be9c8738e6a45bcbaaf5435c"} err="failed to get container status \"eddb979a428c0c9fe0b4607e88276a3e84ca7536be9c8738e6a45bcbaaf5435c\": rpc error: code = NotFound desc = could not find container \"eddb979a428c0c9fe0b4607e88276a3e84ca7536be9c8738e6a45bcbaaf5435c\": container with ID starting with eddb979a428c0c9fe0b4607e88276a3e84ca7536be9c8738e6a45bcbaaf5435c not found: ID does not exist" Mar 10 20:29:26 crc kubenswrapper[4861]: I0310 20:29:26.633651 4861 scope.go:117] "RemoveContainer" containerID="ed4fe0d75065d14d401b2e4f36bc99dab0f0ea6fc8f203538e7bdcb71ca3cfa0" Mar 10 20:29:26 crc kubenswrapper[4861]: E0310 20:29:26.634112 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ed4fe0d75065d14d401b2e4f36bc99dab0f0ea6fc8f203538e7bdcb71ca3cfa0\": container with ID starting with ed4fe0d75065d14d401b2e4f36bc99dab0f0ea6fc8f203538e7bdcb71ca3cfa0 not found: ID does not exist" containerID="ed4fe0d75065d14d401b2e4f36bc99dab0f0ea6fc8f203538e7bdcb71ca3cfa0" Mar 10 20:29:26 crc kubenswrapper[4861]: I0310 20:29:26.634248 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ed4fe0d75065d14d401b2e4f36bc99dab0f0ea6fc8f203538e7bdcb71ca3cfa0"} err="failed to get container status \"ed4fe0d75065d14d401b2e4f36bc99dab0f0ea6fc8f203538e7bdcb71ca3cfa0\": rpc error: code = NotFound desc = could not find container \"ed4fe0d75065d14d401b2e4f36bc99dab0f0ea6fc8f203538e7bdcb71ca3cfa0\": container with ID starting with ed4fe0d75065d14d401b2e4f36bc99dab0f0ea6fc8f203538e7bdcb71ca3cfa0 not found: ID does not exist" Mar 10 20:29:26 crc kubenswrapper[4861]: I0310 20:29:26.975865 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b3c08693-42e4-4c95-acc4-5bf31f8e4780" path="/var/lib/kubelet/pods/b3c08693-42e4-4c95-acc4-5bf31f8e4780/volumes" Mar 10 20:29:35 crc kubenswrapper[4861]: I0310 
20:29:35.960559 4861 scope.go:117] "RemoveContainer" containerID="30617e1722c7582f37ad57fcb8d1dc45aba4f055419f9b178dab9e2fdf629c84" Mar 10 20:29:35 crc kubenswrapper[4861]: E0310 20:29:35.961608 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qttbr_openshift-machine-config-operator(771189c2-452d-4204-a0b7-abfe9ba62bd0)\"" pod="openshift-machine-config-operator/machine-config-daemon-qttbr" podUID="771189c2-452d-4204-a0b7-abfe9ba62bd0" Mar 10 20:29:48 crc kubenswrapper[4861]: I0310 20:29:48.958616 4861 scope.go:117] "RemoveContainer" containerID="30617e1722c7582f37ad57fcb8d1dc45aba4f055419f9b178dab9e2fdf629c84" Mar 10 20:29:48 crc kubenswrapper[4861]: E0310 20:29:48.959785 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qttbr_openshift-machine-config-operator(771189c2-452d-4204-a0b7-abfe9ba62bd0)\"" pod="openshift-machine-config-operator/machine-config-daemon-qttbr" podUID="771189c2-452d-4204-a0b7-abfe9ba62bd0" Mar 10 20:30:00 crc kubenswrapper[4861]: I0310 20:30:00.217886 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29552910-gds55"] Mar 10 20:30:00 crc kubenswrapper[4861]: E0310 20:30:00.219066 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b3c08693-42e4-4c95-acc4-5bf31f8e4780" containerName="extract-utilities" Mar 10 20:30:00 crc kubenswrapper[4861]: I0310 20:30:00.219089 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="b3c08693-42e4-4c95-acc4-5bf31f8e4780" containerName="extract-utilities" Mar 10 20:30:00 crc kubenswrapper[4861]: E0310 20:30:00.219118 4861 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="b3c08693-42e4-4c95-acc4-5bf31f8e4780" containerName="registry-server" Mar 10 20:30:00 crc kubenswrapper[4861]: I0310 20:30:00.219131 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="b3c08693-42e4-4c95-acc4-5bf31f8e4780" containerName="registry-server" Mar 10 20:30:00 crc kubenswrapper[4861]: E0310 20:30:00.219157 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b3c08693-42e4-4c95-acc4-5bf31f8e4780" containerName="extract-content" Mar 10 20:30:00 crc kubenswrapper[4861]: I0310 20:30:00.219169 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="b3c08693-42e4-4c95-acc4-5bf31f8e4780" containerName="extract-content" Mar 10 20:30:00 crc kubenswrapper[4861]: I0310 20:30:00.219482 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="b3c08693-42e4-4c95-acc4-5bf31f8e4780" containerName="registry-server" Mar 10 20:30:00 crc kubenswrapper[4861]: I0310 20:30:00.220472 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29552910-gds55" Mar 10 20:30:00 crc kubenswrapper[4861]: I0310 20:30:00.225427 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-gfbj2" Mar 10 20:30:00 crc kubenswrapper[4861]: I0310 20:30:00.225850 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 10 20:30:00 crc kubenswrapper[4861]: I0310 20:30:00.228516 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 10 20:30:00 crc kubenswrapper[4861]: I0310 20:30:00.232010 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29552910-gsd9r"] Mar 10 20:30:00 crc kubenswrapper[4861]: I0310 20:30:00.234563 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29552910-gsd9r" Mar 10 20:30:00 crc kubenswrapper[4861]: I0310 20:30:00.243412 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29552910-gds55"] Mar 10 20:30:00 crc kubenswrapper[4861]: I0310 20:30:00.257132 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Mar 10 20:30:00 crc kubenswrapper[4861]: I0310 20:30:00.257409 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Mar 10 20:30:00 crc kubenswrapper[4861]: I0310 20:30:00.257580 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29552910-gsd9r"] Mar 10 20:30:00 crc kubenswrapper[4861]: I0310 20:30:00.325342 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1c5226c7-7299-441d-9edc-d271059517b2-config-volume\") pod \"collect-profiles-29552910-gsd9r\" (UID: \"1c5226c7-7299-441d-9edc-d271059517b2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29552910-gsd9r" Mar 10 20:30:00 crc kubenswrapper[4861]: I0310 20:30:00.325437 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/1c5226c7-7299-441d-9edc-d271059517b2-secret-volume\") pod \"collect-profiles-29552910-gsd9r\" (UID: \"1c5226c7-7299-441d-9edc-d271059517b2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29552910-gsd9r" Mar 10 20:30:00 crc kubenswrapper[4861]: I0310 20:30:00.325501 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8flft\" (UniqueName: 
\"kubernetes.io/projected/683b2c94-abc2-4a99-961a-27f6a66761c2-kube-api-access-8flft\") pod \"auto-csr-approver-29552910-gds55\" (UID: \"683b2c94-abc2-4a99-961a-27f6a66761c2\") " pod="openshift-infra/auto-csr-approver-29552910-gds55" Mar 10 20:30:00 crc kubenswrapper[4861]: I0310 20:30:00.325602 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l2psn\" (UniqueName: \"kubernetes.io/projected/1c5226c7-7299-441d-9edc-d271059517b2-kube-api-access-l2psn\") pod \"collect-profiles-29552910-gsd9r\" (UID: \"1c5226c7-7299-441d-9edc-d271059517b2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29552910-gsd9r" Mar 10 20:30:00 crc kubenswrapper[4861]: I0310 20:30:00.427777 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8flft\" (UniqueName: \"kubernetes.io/projected/683b2c94-abc2-4a99-961a-27f6a66761c2-kube-api-access-8flft\") pod \"auto-csr-approver-29552910-gds55\" (UID: \"683b2c94-abc2-4a99-961a-27f6a66761c2\") " pod="openshift-infra/auto-csr-approver-29552910-gds55" Mar 10 20:30:00 crc kubenswrapper[4861]: I0310 20:30:00.427869 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l2psn\" (UniqueName: \"kubernetes.io/projected/1c5226c7-7299-441d-9edc-d271059517b2-kube-api-access-l2psn\") pod \"collect-profiles-29552910-gsd9r\" (UID: \"1c5226c7-7299-441d-9edc-d271059517b2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29552910-gsd9r" Mar 10 20:30:00 crc kubenswrapper[4861]: I0310 20:30:00.428085 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1c5226c7-7299-441d-9edc-d271059517b2-config-volume\") pod \"collect-profiles-29552910-gsd9r\" (UID: \"1c5226c7-7299-441d-9edc-d271059517b2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29552910-gsd9r" Mar 10 20:30:00 crc 
kubenswrapper[4861]: I0310 20:30:00.428171 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/1c5226c7-7299-441d-9edc-d271059517b2-secret-volume\") pod \"collect-profiles-29552910-gsd9r\" (UID: \"1c5226c7-7299-441d-9edc-d271059517b2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29552910-gsd9r" Mar 10 20:30:00 crc kubenswrapper[4861]: I0310 20:30:00.429431 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1c5226c7-7299-441d-9edc-d271059517b2-config-volume\") pod \"collect-profiles-29552910-gsd9r\" (UID: \"1c5226c7-7299-441d-9edc-d271059517b2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29552910-gsd9r" Mar 10 20:30:00 crc kubenswrapper[4861]: I0310 20:30:00.437521 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/1c5226c7-7299-441d-9edc-d271059517b2-secret-volume\") pod \"collect-profiles-29552910-gsd9r\" (UID: \"1c5226c7-7299-441d-9edc-d271059517b2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29552910-gsd9r" Mar 10 20:30:00 crc kubenswrapper[4861]: I0310 20:30:00.447618 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l2psn\" (UniqueName: \"kubernetes.io/projected/1c5226c7-7299-441d-9edc-d271059517b2-kube-api-access-l2psn\") pod \"collect-profiles-29552910-gsd9r\" (UID: \"1c5226c7-7299-441d-9edc-d271059517b2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29552910-gsd9r" Mar 10 20:30:00 crc kubenswrapper[4861]: I0310 20:30:00.449873 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8flft\" (UniqueName: \"kubernetes.io/projected/683b2c94-abc2-4a99-961a-27f6a66761c2-kube-api-access-8flft\") pod \"auto-csr-approver-29552910-gds55\" (UID: \"683b2c94-abc2-4a99-961a-27f6a66761c2\") " 
pod="openshift-infra/auto-csr-approver-29552910-gds55" Mar 10 20:30:00 crc kubenswrapper[4861]: I0310 20:30:00.570361 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29552910-gds55" Mar 10 20:30:00 crc kubenswrapper[4861]: I0310 20:30:00.579364 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29552910-gsd9r" Mar 10 20:30:00 crc kubenswrapper[4861]: I0310 20:30:00.959021 4861 scope.go:117] "RemoveContainer" containerID="30617e1722c7582f37ad57fcb8d1dc45aba4f055419f9b178dab9e2fdf629c84" Mar 10 20:30:00 crc kubenswrapper[4861]: E0310 20:30:00.959712 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qttbr_openshift-machine-config-operator(771189c2-452d-4204-a0b7-abfe9ba62bd0)\"" pod="openshift-machine-config-operator/machine-config-daemon-qttbr" podUID="771189c2-452d-4204-a0b7-abfe9ba62bd0" Mar 10 20:30:01 crc kubenswrapper[4861]: I0310 20:30:01.038563 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29552910-gds55"] Mar 10 20:30:01 crc kubenswrapper[4861]: I0310 20:30:01.108347 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29552910-gsd9r"] Mar 10 20:30:01 crc kubenswrapper[4861]: W0310 20:30:01.114484 4861 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1c5226c7_7299_441d_9edc_d271059517b2.slice/crio-cbc1952579b3b68f21dfb7a464880ec19a7f395f4342b1b3a8360cec509af23e WatchSource:0}: Error finding container cbc1952579b3b68f21dfb7a464880ec19a7f395f4342b1b3a8360cec509af23e: Status 404 returned error can't find the container with id 
cbc1952579b3b68f21dfb7a464880ec19a7f395f4342b1b3a8360cec509af23e Mar 10 20:30:01 crc kubenswrapper[4861]: I0310 20:30:01.898984 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29552910-gds55" event={"ID":"683b2c94-abc2-4a99-961a-27f6a66761c2","Type":"ContainerStarted","Data":"8ed55b6b16c44e1a74a30250e768eb5eb1e2585a5066c58a2e3d7d13a9e4ca0f"} Mar 10 20:30:01 crc kubenswrapper[4861]: I0310 20:30:01.901981 4861 generic.go:334] "Generic (PLEG): container finished" podID="1c5226c7-7299-441d-9edc-d271059517b2" containerID="f66125c0d2bb76b34ad517347338e4f48dab176ebe6059af5a533dcc72d2d8f7" exitCode=0 Mar 10 20:30:01 crc kubenswrapper[4861]: I0310 20:30:01.902027 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29552910-gsd9r" event={"ID":"1c5226c7-7299-441d-9edc-d271059517b2","Type":"ContainerDied","Data":"f66125c0d2bb76b34ad517347338e4f48dab176ebe6059af5a533dcc72d2d8f7"} Mar 10 20:30:01 crc kubenswrapper[4861]: I0310 20:30:01.902059 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29552910-gsd9r" event={"ID":"1c5226c7-7299-441d-9edc-d271059517b2","Type":"ContainerStarted","Data":"cbc1952579b3b68f21dfb7a464880ec19a7f395f4342b1b3a8360cec509af23e"} Mar 10 20:30:03 crc kubenswrapper[4861]: I0310 20:30:03.306174 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29552910-gsd9r" Mar 10 20:30:03 crc kubenswrapper[4861]: I0310 20:30:03.385398 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1c5226c7-7299-441d-9edc-d271059517b2-config-volume\") pod \"1c5226c7-7299-441d-9edc-d271059517b2\" (UID: \"1c5226c7-7299-441d-9edc-d271059517b2\") " Mar 10 20:30:03 crc kubenswrapper[4861]: I0310 20:30:03.385496 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l2psn\" (UniqueName: \"kubernetes.io/projected/1c5226c7-7299-441d-9edc-d271059517b2-kube-api-access-l2psn\") pod \"1c5226c7-7299-441d-9edc-d271059517b2\" (UID: \"1c5226c7-7299-441d-9edc-d271059517b2\") " Mar 10 20:30:03 crc kubenswrapper[4861]: I0310 20:30:03.385662 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/1c5226c7-7299-441d-9edc-d271059517b2-secret-volume\") pod \"1c5226c7-7299-441d-9edc-d271059517b2\" (UID: \"1c5226c7-7299-441d-9edc-d271059517b2\") " Mar 10 20:30:03 crc kubenswrapper[4861]: I0310 20:30:03.386896 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1c5226c7-7299-441d-9edc-d271059517b2-config-volume" (OuterVolumeSpecName: "config-volume") pod "1c5226c7-7299-441d-9edc-d271059517b2" (UID: "1c5226c7-7299-441d-9edc-d271059517b2"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 10 20:30:03 crc kubenswrapper[4861]: I0310 20:30:03.390411 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1c5226c7-7299-441d-9edc-d271059517b2-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "1c5226c7-7299-441d-9edc-d271059517b2" (UID: "1c5226c7-7299-441d-9edc-d271059517b2"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 10 20:30:03 crc kubenswrapper[4861]: I0310 20:30:03.391327 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1c5226c7-7299-441d-9edc-d271059517b2-kube-api-access-l2psn" (OuterVolumeSpecName: "kube-api-access-l2psn") pod "1c5226c7-7299-441d-9edc-d271059517b2" (UID: "1c5226c7-7299-441d-9edc-d271059517b2"). InnerVolumeSpecName "kube-api-access-l2psn". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 10 20:30:03 crc kubenswrapper[4861]: I0310 20:30:03.488698 4861 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/1c5226c7-7299-441d-9edc-d271059517b2-secret-volume\") on node \"crc\" DevicePath \"\"" Mar 10 20:30:03 crc kubenswrapper[4861]: I0310 20:30:03.488747 4861 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1c5226c7-7299-441d-9edc-d271059517b2-config-volume\") on node \"crc\" DevicePath \"\"" Mar 10 20:30:03 crc kubenswrapper[4861]: I0310 20:30:03.488758 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l2psn\" (UniqueName: \"kubernetes.io/projected/1c5226c7-7299-441d-9edc-d271059517b2-kube-api-access-l2psn\") on node \"crc\" DevicePath \"\"" Mar 10 20:30:03 crc kubenswrapper[4861]: I0310 20:30:03.935037 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29552910-gsd9r" event={"ID":"1c5226c7-7299-441d-9edc-d271059517b2","Type":"ContainerDied","Data":"cbc1952579b3b68f21dfb7a464880ec19a7f395f4342b1b3a8360cec509af23e"} Mar 10 20:30:03 crc kubenswrapper[4861]: I0310 20:30:03.935526 4861 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cbc1952579b3b68f21dfb7a464880ec19a7f395f4342b1b3a8360cec509af23e" Mar 10 20:30:03 crc kubenswrapper[4861]: I0310 20:30:03.935701 4861 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29552910-gsd9r" Mar 10 20:30:03 crc kubenswrapper[4861]: I0310 20:30:03.944179 4861 generic.go:334] "Generic (PLEG): container finished" podID="683b2c94-abc2-4a99-961a-27f6a66761c2" containerID="d386fb604c754eaee97af40ae773c25f0bf92612f8c3d2d8298c8b80a6bf1bf7" exitCode=0 Mar 10 20:30:03 crc kubenswrapper[4861]: I0310 20:30:03.944256 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29552910-gds55" event={"ID":"683b2c94-abc2-4a99-961a-27f6a66761c2","Type":"ContainerDied","Data":"d386fb604c754eaee97af40ae773c25f0bf92612f8c3d2d8298c8b80a6bf1bf7"} Mar 10 20:30:04 crc kubenswrapper[4861]: I0310 20:30:04.407037 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29552865-k2rtz"] Mar 10 20:30:04 crc kubenswrapper[4861]: I0310 20:30:04.421921 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29552865-k2rtz"] Mar 10 20:30:04 crc kubenswrapper[4861]: I0310 20:30:04.975793 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="04d3018b-fb64-43b4-9b5b-b25be20cf05f" path="/var/lib/kubelet/pods/04d3018b-fb64-43b4-9b5b-b25be20cf05f/volumes" Mar 10 20:30:05 crc kubenswrapper[4861]: I0310 20:30:05.321898 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29552910-gds55" Mar 10 20:30:05 crc kubenswrapper[4861]: I0310 20:30:05.425079 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8flft\" (UniqueName: \"kubernetes.io/projected/683b2c94-abc2-4a99-961a-27f6a66761c2-kube-api-access-8flft\") pod \"683b2c94-abc2-4a99-961a-27f6a66761c2\" (UID: \"683b2c94-abc2-4a99-961a-27f6a66761c2\") " Mar 10 20:30:05 crc kubenswrapper[4861]: I0310 20:30:05.433596 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/683b2c94-abc2-4a99-961a-27f6a66761c2-kube-api-access-8flft" (OuterVolumeSpecName: "kube-api-access-8flft") pod "683b2c94-abc2-4a99-961a-27f6a66761c2" (UID: "683b2c94-abc2-4a99-961a-27f6a66761c2"). InnerVolumeSpecName "kube-api-access-8flft". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 10 20:30:05 crc kubenswrapper[4861]: I0310 20:30:05.528362 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8flft\" (UniqueName: \"kubernetes.io/projected/683b2c94-abc2-4a99-961a-27f6a66761c2-kube-api-access-8flft\") on node \"crc\" DevicePath \"\"" Mar 10 20:30:05 crc kubenswrapper[4861]: I0310 20:30:05.977770 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29552910-gds55" event={"ID":"683b2c94-abc2-4a99-961a-27f6a66761c2","Type":"ContainerDied","Data":"8ed55b6b16c44e1a74a30250e768eb5eb1e2585a5066c58a2e3d7d13a9e4ca0f"} Mar 10 20:30:05 crc kubenswrapper[4861]: I0310 20:30:05.977888 4861 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8ed55b6b16c44e1a74a30250e768eb5eb1e2585a5066c58a2e3d7d13a9e4ca0f" Mar 10 20:30:05 crc kubenswrapper[4861]: I0310 20:30:05.977806 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29552910-gds55" Mar 10 20:30:06 crc kubenswrapper[4861]: I0310 20:30:06.392054 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29552904-6thzw"] Mar 10 20:30:06 crc kubenswrapper[4861]: I0310 20:30:06.405385 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29552904-6thzw"] Mar 10 20:30:06 crc kubenswrapper[4861]: I0310 20:30:06.986600 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="104149ea-84b6-47a5-b894-06d4de416d7e" path="/var/lib/kubelet/pods/104149ea-84b6-47a5-b894-06d4de416d7e/volumes" Mar 10 20:30:14 crc kubenswrapper[4861]: I0310 20:30:14.958472 4861 scope.go:117] "RemoveContainer" containerID="30617e1722c7582f37ad57fcb8d1dc45aba4f055419f9b178dab9e2fdf629c84" Mar 10 20:30:14 crc kubenswrapper[4861]: E0310 20:30:14.959132 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qttbr_openshift-machine-config-operator(771189c2-452d-4204-a0b7-abfe9ba62bd0)\"" pod="openshift-machine-config-operator/machine-config-daemon-qttbr" podUID="771189c2-452d-4204-a0b7-abfe9ba62bd0" Mar 10 20:30:25 crc kubenswrapper[4861]: I0310 20:30:25.959814 4861 scope.go:117] "RemoveContainer" containerID="30617e1722c7582f37ad57fcb8d1dc45aba4f055419f9b178dab9e2fdf629c84" Mar 10 20:30:25 crc kubenswrapper[4861]: E0310 20:30:25.961118 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qttbr_openshift-machine-config-operator(771189c2-452d-4204-a0b7-abfe9ba62bd0)\"" pod="openshift-machine-config-operator/machine-config-daemon-qttbr" 
podUID="771189c2-452d-4204-a0b7-abfe9ba62bd0" Mar 10 20:30:32 crc kubenswrapper[4861]: I0310 20:30:32.071656 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-mqv7b"] Mar 10 20:30:32 crc kubenswrapper[4861]: I0310 20:30:32.083343 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-231e-account-create-update-f5zv6"] Mar 10 20:30:32 crc kubenswrapper[4861]: I0310 20:30:32.090600 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-231e-account-create-update-f5zv6"] Mar 10 20:30:32 crc kubenswrapper[4861]: I0310 20:30:32.097284 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-mqv7b"] Mar 10 20:30:32 crc kubenswrapper[4861]: I0310 20:30:32.976954 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="26bb2aa0-94b7-44a0-bff4-268c1c365a71" path="/var/lib/kubelet/pods/26bb2aa0-94b7-44a0-bff4-268c1c365a71/volumes" Mar 10 20:30:32 crc kubenswrapper[4861]: I0310 20:30:32.978073 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6d173a3f-df88-4daf-87af-62cf32d77b78" path="/var/lib/kubelet/pods/6d173a3f-df88-4daf-87af-62cf32d77b78/volumes" Mar 10 20:30:39 crc kubenswrapper[4861]: I0310 20:30:39.042287 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-nfr8l"] Mar 10 20:30:39 crc kubenswrapper[4861]: I0310 20:30:39.059321 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-nfr8l"] Mar 10 20:30:40 crc kubenswrapper[4861]: I0310 20:30:40.958839 4861 scope.go:117] "RemoveContainer" containerID="30617e1722c7582f37ad57fcb8d1dc45aba4f055419f9b178dab9e2fdf629c84" Mar 10 20:30:40 crc kubenswrapper[4861]: E0310 20:30:40.959242 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-qttbr_openshift-machine-config-operator(771189c2-452d-4204-a0b7-abfe9ba62bd0)\"" pod="openshift-machine-config-operator/machine-config-daemon-qttbr" podUID="771189c2-452d-4204-a0b7-abfe9ba62bd0" Mar 10 20:30:40 crc kubenswrapper[4861]: I0310 20:30:40.978444 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cdb2632a-e4c0-43f6-93f8-ae18f587ffda" path="/var/lib/kubelet/pods/cdb2632a-e4c0-43f6-93f8-ae18f587ffda/volumes" Mar 10 20:30:52 crc kubenswrapper[4861]: I0310 20:30:52.056509 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-f66jm"] Mar 10 20:30:52 crc kubenswrapper[4861]: I0310 20:30:52.064114 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-f66jm"] Mar 10 20:30:52 crc kubenswrapper[4861]: I0310 20:30:52.958227 4861 scope.go:117] "RemoveContainer" containerID="30617e1722c7582f37ad57fcb8d1dc45aba4f055419f9b178dab9e2fdf629c84" Mar 10 20:30:52 crc kubenswrapper[4861]: E0310 20:30:52.958827 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qttbr_openshift-machine-config-operator(771189c2-452d-4204-a0b7-abfe9ba62bd0)\"" pod="openshift-machine-config-operator/machine-config-daemon-qttbr" podUID="771189c2-452d-4204-a0b7-abfe9ba62bd0" Mar 10 20:30:52 crc kubenswrapper[4861]: I0310 20:30:52.969295 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="04798b5b-ca22-4161-8eec-fb253a6920ae" path="/var/lib/kubelet/pods/04798b5b-ca22-4161-8eec-fb253a6920ae/volumes" Mar 10 20:30:59 crc kubenswrapper[4861]: I0310 20:30:59.154469 4861 scope.go:117] "RemoveContainer" containerID="559f8e05057af6f9df4da671de3dc76cceb4dcbbf6429e9ffdb6f702155a7787" Mar 10 20:30:59 crc kubenswrapper[4861]: I0310 20:30:59.199355 4861 scope.go:117] "RemoveContainer" 
containerID="7e5c262fc93ed279977bf01809e88de8edb409427cf91d9dc6b9fedc5c69c66c" Mar 10 20:30:59 crc kubenswrapper[4861]: I0310 20:30:59.298834 4861 scope.go:117] "RemoveContainer" containerID="3be3ba0f15434ec9ff7f293cbfe0ad98f563d01d3f49498d42323756f4bdc2a0" Mar 10 20:30:59 crc kubenswrapper[4861]: I0310 20:30:59.316404 4861 scope.go:117] "RemoveContainer" containerID="ce2c58116f4db9d5d03c7573d3e5071182570159afbba2679d4b6f18be416dce" Mar 10 20:30:59 crc kubenswrapper[4861]: I0310 20:30:59.400182 4861 scope.go:117] "RemoveContainer" containerID="41cea49b4bfa514e9bcd1f72a50ce7b73d71a3ffbfb12d737c4d3e5f1dc39ba6" Mar 10 20:30:59 crc kubenswrapper[4861]: I0310 20:30:59.433152 4861 scope.go:117] "RemoveContainer" containerID="83fb41570d20f9736efbf7e862f77c7b7784626efeac27a74504134db5d2a9f2" Mar 10 20:31:03 crc kubenswrapper[4861]: I0310 20:31:03.959642 4861 scope.go:117] "RemoveContainer" containerID="30617e1722c7582f37ad57fcb8d1dc45aba4f055419f9b178dab9e2fdf629c84" Mar 10 20:31:03 crc kubenswrapper[4861]: E0310 20:31:03.960938 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qttbr_openshift-machine-config-operator(771189c2-452d-4204-a0b7-abfe9ba62bd0)\"" pod="openshift-machine-config-operator/machine-config-daemon-qttbr" podUID="771189c2-452d-4204-a0b7-abfe9ba62bd0" Mar 10 20:31:18 crc kubenswrapper[4861]: I0310 20:31:18.958842 4861 scope.go:117] "RemoveContainer" containerID="30617e1722c7582f37ad57fcb8d1dc45aba4f055419f9b178dab9e2fdf629c84" Mar 10 20:31:18 crc kubenswrapper[4861]: E0310 20:31:18.960149 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-qttbr_openshift-machine-config-operator(771189c2-452d-4204-a0b7-abfe9ba62bd0)\"" pod="openshift-machine-config-operator/machine-config-daemon-qttbr" podUID="771189c2-452d-4204-a0b7-abfe9ba62bd0" Mar 10 20:31:31 crc kubenswrapper[4861]: I0310 20:31:31.958818 4861 scope.go:117] "RemoveContainer" containerID="30617e1722c7582f37ad57fcb8d1dc45aba4f055419f9b178dab9e2fdf629c84" Mar 10 20:31:31 crc kubenswrapper[4861]: E0310 20:31:31.959993 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qttbr_openshift-machine-config-operator(771189c2-452d-4204-a0b7-abfe9ba62bd0)\"" pod="openshift-machine-config-operator/machine-config-daemon-qttbr" podUID="771189c2-452d-4204-a0b7-abfe9ba62bd0" Mar 10 20:31:46 crc kubenswrapper[4861]: I0310 20:31:46.975691 4861 scope.go:117] "RemoveContainer" containerID="30617e1722c7582f37ad57fcb8d1dc45aba4f055419f9b178dab9e2fdf629c84" Mar 10 20:31:46 crc kubenswrapper[4861]: E0310 20:31:46.977162 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qttbr_openshift-machine-config-operator(771189c2-452d-4204-a0b7-abfe9ba62bd0)\"" pod="openshift-machine-config-operator/machine-config-daemon-qttbr" podUID="771189c2-452d-4204-a0b7-abfe9ba62bd0" Mar 10 20:31:58 crc kubenswrapper[4861]: I0310 20:31:58.958294 4861 scope.go:117] "RemoveContainer" containerID="30617e1722c7582f37ad57fcb8d1dc45aba4f055419f9b178dab9e2fdf629c84" Mar 10 20:31:58 crc kubenswrapper[4861]: E0310 20:31:58.959021 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-qttbr_openshift-machine-config-operator(771189c2-452d-4204-a0b7-abfe9ba62bd0)\"" pod="openshift-machine-config-operator/machine-config-daemon-qttbr" podUID="771189c2-452d-4204-a0b7-abfe9ba62bd0" Mar 10 20:32:00 crc kubenswrapper[4861]: I0310 20:32:00.160305 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29552912-pvjdw"] Mar 10 20:32:00 crc kubenswrapper[4861]: E0310 20:32:00.160890 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="683b2c94-abc2-4a99-961a-27f6a66761c2" containerName="oc" Mar 10 20:32:00 crc kubenswrapper[4861]: I0310 20:32:00.160919 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="683b2c94-abc2-4a99-961a-27f6a66761c2" containerName="oc" Mar 10 20:32:00 crc kubenswrapper[4861]: E0310 20:32:00.160965 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1c5226c7-7299-441d-9edc-d271059517b2" containerName="collect-profiles" Mar 10 20:32:00 crc kubenswrapper[4861]: I0310 20:32:00.160981 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="1c5226c7-7299-441d-9edc-d271059517b2" containerName="collect-profiles" Mar 10 20:32:00 crc kubenswrapper[4861]: I0310 20:32:00.161297 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="683b2c94-abc2-4a99-961a-27f6a66761c2" containerName="oc" Mar 10 20:32:00 crc kubenswrapper[4861]: I0310 20:32:00.161332 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="1c5226c7-7299-441d-9edc-d271059517b2" containerName="collect-profiles" Mar 10 20:32:00 crc kubenswrapper[4861]: I0310 20:32:00.162250 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29552912-pvjdw" Mar 10 20:32:00 crc kubenswrapper[4861]: I0310 20:32:00.165940 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 10 20:32:00 crc kubenswrapper[4861]: I0310 20:32:00.166357 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 10 20:32:00 crc kubenswrapper[4861]: I0310 20:32:00.166699 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-gfbj2" Mar 10 20:32:00 crc kubenswrapper[4861]: I0310 20:32:00.179766 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29552912-pvjdw"] Mar 10 20:32:00 crc kubenswrapper[4861]: I0310 20:32:00.211328 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r4fbb\" (UniqueName: \"kubernetes.io/projected/6e4c8acb-7b48-4c67-aa41-0c3cf6f071ed-kube-api-access-r4fbb\") pod \"auto-csr-approver-29552912-pvjdw\" (UID: \"6e4c8acb-7b48-4c67-aa41-0c3cf6f071ed\") " pod="openshift-infra/auto-csr-approver-29552912-pvjdw" Mar 10 20:32:00 crc kubenswrapper[4861]: I0310 20:32:00.314629 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r4fbb\" (UniqueName: \"kubernetes.io/projected/6e4c8acb-7b48-4c67-aa41-0c3cf6f071ed-kube-api-access-r4fbb\") pod \"auto-csr-approver-29552912-pvjdw\" (UID: \"6e4c8acb-7b48-4c67-aa41-0c3cf6f071ed\") " pod="openshift-infra/auto-csr-approver-29552912-pvjdw" Mar 10 20:32:00 crc kubenswrapper[4861]: I0310 20:32:00.341259 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r4fbb\" (UniqueName: \"kubernetes.io/projected/6e4c8acb-7b48-4c67-aa41-0c3cf6f071ed-kube-api-access-r4fbb\") pod \"auto-csr-approver-29552912-pvjdw\" (UID: \"6e4c8acb-7b48-4c67-aa41-0c3cf6f071ed\") " 
pod="openshift-infra/auto-csr-approver-29552912-pvjdw" Mar 10 20:32:00 crc kubenswrapper[4861]: I0310 20:32:00.497482 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29552912-pvjdw" Mar 10 20:32:01 crc kubenswrapper[4861]: I0310 20:32:01.002342 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29552912-pvjdw"] Mar 10 20:32:01 crc kubenswrapper[4861]: I0310 20:32:01.015065 4861 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 10 20:32:01 crc kubenswrapper[4861]: I0310 20:32:01.173695 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29552912-pvjdw" event={"ID":"6e4c8acb-7b48-4c67-aa41-0c3cf6f071ed","Type":"ContainerStarted","Data":"6ad1efbafea9dfcbd781cca574b7b66284ceeef963556a9e5c509d71618c724f"} Mar 10 20:32:03 crc kubenswrapper[4861]: I0310 20:32:03.193182 4861 generic.go:334] "Generic (PLEG): container finished" podID="6e4c8acb-7b48-4c67-aa41-0c3cf6f071ed" containerID="2c90b2cac7bfd4de928a6047890b389db67ce0d666e7d44940cb9edd34bb4dde" exitCode=0 Mar 10 20:32:03 crc kubenswrapper[4861]: I0310 20:32:03.193241 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29552912-pvjdw" event={"ID":"6e4c8acb-7b48-4c67-aa41-0c3cf6f071ed","Type":"ContainerDied","Data":"2c90b2cac7bfd4de928a6047890b389db67ce0d666e7d44940cb9edd34bb4dde"} Mar 10 20:32:04 crc kubenswrapper[4861]: I0310 20:32:04.589124 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29552912-pvjdw" Mar 10 20:32:04 crc kubenswrapper[4861]: I0310 20:32:04.620256 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r4fbb\" (UniqueName: \"kubernetes.io/projected/6e4c8acb-7b48-4c67-aa41-0c3cf6f071ed-kube-api-access-r4fbb\") pod \"6e4c8acb-7b48-4c67-aa41-0c3cf6f071ed\" (UID: \"6e4c8acb-7b48-4c67-aa41-0c3cf6f071ed\") " Mar 10 20:32:04 crc kubenswrapper[4861]: I0310 20:32:04.633985 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6e4c8acb-7b48-4c67-aa41-0c3cf6f071ed-kube-api-access-r4fbb" (OuterVolumeSpecName: "kube-api-access-r4fbb") pod "6e4c8acb-7b48-4c67-aa41-0c3cf6f071ed" (UID: "6e4c8acb-7b48-4c67-aa41-0c3cf6f071ed"). InnerVolumeSpecName "kube-api-access-r4fbb". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 10 20:32:04 crc kubenswrapper[4861]: I0310 20:32:04.723853 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r4fbb\" (UniqueName: \"kubernetes.io/projected/6e4c8acb-7b48-4c67-aa41-0c3cf6f071ed-kube-api-access-r4fbb\") on node \"crc\" DevicePath \"\"" Mar 10 20:32:05 crc kubenswrapper[4861]: I0310 20:32:05.217342 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29552912-pvjdw" event={"ID":"6e4c8acb-7b48-4c67-aa41-0c3cf6f071ed","Type":"ContainerDied","Data":"6ad1efbafea9dfcbd781cca574b7b66284ceeef963556a9e5c509d71618c724f"} Mar 10 20:32:05 crc kubenswrapper[4861]: I0310 20:32:05.217400 4861 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6ad1efbafea9dfcbd781cca574b7b66284ceeef963556a9e5c509d71618c724f" Mar 10 20:32:05 crc kubenswrapper[4861]: I0310 20:32:05.217459 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29552912-pvjdw" Mar 10 20:32:05 crc kubenswrapper[4861]: I0310 20:32:05.684931 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29552906-flk57"] Mar 10 20:32:05 crc kubenswrapper[4861]: I0310 20:32:05.713136 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29552906-flk57"] Mar 10 20:32:06 crc kubenswrapper[4861]: I0310 20:32:06.976892 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="50e0baaa-1c8a-4ccd-a6f5-0471c0f6a0da" path="/var/lib/kubelet/pods/50e0baaa-1c8a-4ccd-a6f5-0471c0f6a0da/volumes" Mar 10 20:32:09 crc kubenswrapper[4861]: I0310 20:32:09.959688 4861 scope.go:117] "RemoveContainer" containerID="30617e1722c7582f37ad57fcb8d1dc45aba4f055419f9b178dab9e2fdf629c84" Mar 10 20:32:09 crc kubenswrapper[4861]: E0310 20:32:09.960406 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qttbr_openshift-machine-config-operator(771189c2-452d-4204-a0b7-abfe9ba62bd0)\"" pod="openshift-machine-config-operator/machine-config-daemon-qttbr" podUID="771189c2-452d-4204-a0b7-abfe9ba62bd0" Mar 10 20:32:24 crc kubenswrapper[4861]: I0310 20:32:24.958064 4861 scope.go:117] "RemoveContainer" containerID="30617e1722c7582f37ad57fcb8d1dc45aba4f055419f9b178dab9e2fdf629c84" Mar 10 20:32:24 crc kubenswrapper[4861]: E0310 20:32:24.959135 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qttbr_openshift-machine-config-operator(771189c2-452d-4204-a0b7-abfe9ba62bd0)\"" pod="openshift-machine-config-operator/machine-config-daemon-qttbr" 
podUID="771189c2-452d-4204-a0b7-abfe9ba62bd0" Mar 10 20:32:39 crc kubenswrapper[4861]: I0310 20:32:39.958949 4861 scope.go:117] "RemoveContainer" containerID="30617e1722c7582f37ad57fcb8d1dc45aba4f055419f9b178dab9e2fdf629c84" Mar 10 20:32:39 crc kubenswrapper[4861]: E0310 20:32:39.960311 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qttbr_openshift-machine-config-operator(771189c2-452d-4204-a0b7-abfe9ba62bd0)\"" pod="openshift-machine-config-operator/machine-config-daemon-qttbr" podUID="771189c2-452d-4204-a0b7-abfe9ba62bd0" Mar 10 20:32:52 crc kubenswrapper[4861]: I0310 20:32:52.959780 4861 scope.go:117] "RemoveContainer" containerID="30617e1722c7582f37ad57fcb8d1dc45aba4f055419f9b178dab9e2fdf629c84" Mar 10 20:32:52 crc kubenswrapper[4861]: E0310 20:32:52.960879 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qttbr_openshift-machine-config-operator(771189c2-452d-4204-a0b7-abfe9ba62bd0)\"" pod="openshift-machine-config-operator/machine-config-daemon-qttbr" podUID="771189c2-452d-4204-a0b7-abfe9ba62bd0" Mar 10 20:32:59 crc kubenswrapper[4861]: I0310 20:32:59.630424 4861 scope.go:117] "RemoveContainer" containerID="6e13fa0b0dd807a1f5242aaaa231e10cac3fdfbab68ae5835294f794ca819ebf" Mar 10 20:33:07 crc kubenswrapper[4861]: I0310 20:33:07.958538 4861 scope.go:117] "RemoveContainer" containerID="30617e1722c7582f37ad57fcb8d1dc45aba4f055419f9b178dab9e2fdf629c84" Mar 10 20:33:07 crc kubenswrapper[4861]: E0310 20:33:07.959581 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-qttbr_openshift-machine-config-operator(771189c2-452d-4204-a0b7-abfe9ba62bd0)\"" pod="openshift-machine-config-operator/machine-config-daemon-qttbr" podUID="771189c2-452d-4204-a0b7-abfe9ba62bd0" Mar 10 20:33:21 crc kubenswrapper[4861]: I0310 20:33:21.959186 4861 scope.go:117] "RemoveContainer" containerID="30617e1722c7582f37ad57fcb8d1dc45aba4f055419f9b178dab9e2fdf629c84" Mar 10 20:33:21 crc kubenswrapper[4861]: E0310 20:33:21.960387 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qttbr_openshift-machine-config-operator(771189c2-452d-4204-a0b7-abfe9ba62bd0)\"" pod="openshift-machine-config-operator/machine-config-daemon-qttbr" podUID="771189c2-452d-4204-a0b7-abfe9ba62bd0" Mar 10 20:33:36 crc kubenswrapper[4861]: I0310 20:33:36.985606 4861 scope.go:117] "RemoveContainer" containerID="30617e1722c7582f37ad57fcb8d1dc45aba4f055419f9b178dab9e2fdf629c84" Mar 10 20:33:36 crc kubenswrapper[4861]: E0310 20:33:36.986889 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qttbr_openshift-machine-config-operator(771189c2-452d-4204-a0b7-abfe9ba62bd0)\"" pod="openshift-machine-config-operator/machine-config-daemon-qttbr" podUID="771189c2-452d-4204-a0b7-abfe9ba62bd0" Mar 10 20:33:48 crc kubenswrapper[4861]: I0310 20:33:48.959476 4861 scope.go:117] "RemoveContainer" containerID="30617e1722c7582f37ad57fcb8d1dc45aba4f055419f9b178dab9e2fdf629c84" Mar 10 20:33:48 crc kubenswrapper[4861]: E0310 20:33:48.960663 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-qttbr_openshift-machine-config-operator(771189c2-452d-4204-a0b7-abfe9ba62bd0)\"" pod="openshift-machine-config-operator/machine-config-daemon-qttbr" podUID="771189c2-452d-4204-a0b7-abfe9ba62bd0" Mar 10 20:34:00 crc kubenswrapper[4861]: I0310 20:34:00.172369 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29552914-jd5mv"] Mar 10 20:34:00 crc kubenswrapper[4861]: E0310 20:34:00.173684 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6e4c8acb-7b48-4c67-aa41-0c3cf6f071ed" containerName="oc" Mar 10 20:34:00 crc kubenswrapper[4861]: I0310 20:34:00.173734 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="6e4c8acb-7b48-4c67-aa41-0c3cf6f071ed" containerName="oc" Mar 10 20:34:00 crc kubenswrapper[4861]: I0310 20:34:00.174123 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="6e4c8acb-7b48-4c67-aa41-0c3cf6f071ed" containerName="oc" Mar 10 20:34:00 crc kubenswrapper[4861]: I0310 20:34:00.175107 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29552914-jd5mv" Mar 10 20:34:00 crc kubenswrapper[4861]: I0310 20:34:00.179075 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 10 20:34:00 crc kubenswrapper[4861]: I0310 20:34:00.179429 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-gfbj2" Mar 10 20:34:00 crc kubenswrapper[4861]: I0310 20:34:00.179656 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 10 20:34:00 crc kubenswrapper[4861]: I0310 20:34:00.184252 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29552914-jd5mv"] Mar 10 20:34:00 crc kubenswrapper[4861]: I0310 20:34:00.302798 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-msc9k\" (UniqueName: \"kubernetes.io/projected/8f4d3afc-0bde-4e92-99bf-7231448cfa1a-kube-api-access-msc9k\") pod \"auto-csr-approver-29552914-jd5mv\" (UID: \"8f4d3afc-0bde-4e92-99bf-7231448cfa1a\") " pod="openshift-infra/auto-csr-approver-29552914-jd5mv" Mar 10 20:34:00 crc kubenswrapper[4861]: I0310 20:34:00.360044 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-6gpm9"] Mar 10 20:34:00 crc kubenswrapper[4861]: I0310 20:34:00.362061 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-6gpm9" Mar 10 20:34:00 crc kubenswrapper[4861]: I0310 20:34:00.369521 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-6gpm9"] Mar 10 20:34:00 crc kubenswrapper[4861]: I0310 20:34:00.404660 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-msc9k\" (UniqueName: \"kubernetes.io/projected/8f4d3afc-0bde-4e92-99bf-7231448cfa1a-kube-api-access-msc9k\") pod \"auto-csr-approver-29552914-jd5mv\" (UID: \"8f4d3afc-0bde-4e92-99bf-7231448cfa1a\") " pod="openshift-infra/auto-csr-approver-29552914-jd5mv" Mar 10 20:34:00 crc kubenswrapper[4861]: I0310 20:34:00.433473 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-msc9k\" (UniqueName: \"kubernetes.io/projected/8f4d3afc-0bde-4e92-99bf-7231448cfa1a-kube-api-access-msc9k\") pod \"auto-csr-approver-29552914-jd5mv\" (UID: \"8f4d3afc-0bde-4e92-99bf-7231448cfa1a\") " pod="openshift-infra/auto-csr-approver-29552914-jd5mv" Mar 10 20:34:00 crc kubenswrapper[4861]: I0310 20:34:00.506129 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/74b76edd-b661-4e94-a7fb-ead0347da570-utilities\") pod \"redhat-operators-6gpm9\" (UID: \"74b76edd-b661-4e94-a7fb-ead0347da570\") " pod="openshift-marketplace/redhat-operators-6gpm9" Mar 10 20:34:00 crc kubenswrapper[4861]: I0310 20:34:00.506188 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/74b76edd-b661-4e94-a7fb-ead0347da570-catalog-content\") pod \"redhat-operators-6gpm9\" (UID: \"74b76edd-b661-4e94-a7fb-ead0347da570\") " pod="openshift-marketplace/redhat-operators-6gpm9" Mar 10 20:34:00 crc kubenswrapper[4861]: I0310 20:34:00.506260 4861 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fzf6v\" (UniqueName: \"kubernetes.io/projected/74b76edd-b661-4e94-a7fb-ead0347da570-kube-api-access-fzf6v\") pod \"redhat-operators-6gpm9\" (UID: \"74b76edd-b661-4e94-a7fb-ead0347da570\") " pod="openshift-marketplace/redhat-operators-6gpm9" Mar 10 20:34:00 crc kubenswrapper[4861]: I0310 20:34:00.521098 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29552914-jd5mv" Mar 10 20:34:00 crc kubenswrapper[4861]: I0310 20:34:00.608550 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/74b76edd-b661-4e94-a7fb-ead0347da570-utilities\") pod \"redhat-operators-6gpm9\" (UID: \"74b76edd-b661-4e94-a7fb-ead0347da570\") " pod="openshift-marketplace/redhat-operators-6gpm9" Mar 10 20:34:00 crc kubenswrapper[4861]: I0310 20:34:00.608592 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/74b76edd-b661-4e94-a7fb-ead0347da570-catalog-content\") pod \"redhat-operators-6gpm9\" (UID: \"74b76edd-b661-4e94-a7fb-ead0347da570\") " pod="openshift-marketplace/redhat-operators-6gpm9" Mar 10 20:34:00 crc kubenswrapper[4861]: I0310 20:34:00.608643 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fzf6v\" (UniqueName: \"kubernetes.io/projected/74b76edd-b661-4e94-a7fb-ead0347da570-kube-api-access-fzf6v\") pod \"redhat-operators-6gpm9\" (UID: \"74b76edd-b661-4e94-a7fb-ead0347da570\") " pod="openshift-marketplace/redhat-operators-6gpm9" Mar 10 20:34:00 crc kubenswrapper[4861]: I0310 20:34:00.609143 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/74b76edd-b661-4e94-a7fb-ead0347da570-utilities\") pod \"redhat-operators-6gpm9\" (UID: 
\"74b76edd-b661-4e94-a7fb-ead0347da570\") " pod="openshift-marketplace/redhat-operators-6gpm9" Mar 10 20:34:00 crc kubenswrapper[4861]: I0310 20:34:00.609299 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/74b76edd-b661-4e94-a7fb-ead0347da570-catalog-content\") pod \"redhat-operators-6gpm9\" (UID: \"74b76edd-b661-4e94-a7fb-ead0347da570\") " pod="openshift-marketplace/redhat-operators-6gpm9" Mar 10 20:34:00 crc kubenswrapper[4861]: I0310 20:34:00.637243 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fzf6v\" (UniqueName: \"kubernetes.io/projected/74b76edd-b661-4e94-a7fb-ead0347da570-kube-api-access-fzf6v\") pod \"redhat-operators-6gpm9\" (UID: \"74b76edd-b661-4e94-a7fb-ead0347da570\") " pod="openshift-marketplace/redhat-operators-6gpm9" Mar 10 20:34:00 crc kubenswrapper[4861]: I0310 20:34:00.685052 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-6gpm9" Mar 10 20:34:00 crc kubenswrapper[4861]: I0310 20:34:00.935157 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-6gpm9"] Mar 10 20:34:01 crc kubenswrapper[4861]: I0310 20:34:01.039689 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29552914-jd5mv"] Mar 10 20:34:01 crc kubenswrapper[4861]: I0310 20:34:01.438321 4861 generic.go:334] "Generic (PLEG): container finished" podID="74b76edd-b661-4e94-a7fb-ead0347da570" containerID="908c05fcd781c2206c0b844ce8f624902514cf24e700f8522a619724dbc8569e" exitCode=0 Mar 10 20:34:01 crc kubenswrapper[4861]: I0310 20:34:01.438384 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6gpm9" event={"ID":"74b76edd-b661-4e94-a7fb-ead0347da570","Type":"ContainerDied","Data":"908c05fcd781c2206c0b844ce8f624902514cf24e700f8522a619724dbc8569e"} Mar 10 20:34:01 
crc kubenswrapper[4861]: I0310 20:34:01.438414 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6gpm9" event={"ID":"74b76edd-b661-4e94-a7fb-ead0347da570","Type":"ContainerStarted","Data":"b34d736f4c8ec0417ec779c5b6c1f797f71f0bd29911e8c60478a5b31e03016e"} Mar 10 20:34:01 crc kubenswrapper[4861]: I0310 20:34:01.441874 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29552914-jd5mv" event={"ID":"8f4d3afc-0bde-4e92-99bf-7231448cfa1a","Type":"ContainerStarted","Data":"dad0b20a3fe3bfc35f884b10188731875df832f671ab23d981ad95e8bf041ee9"} Mar 10 20:34:02 crc kubenswrapper[4861]: I0310 20:34:02.465750 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6gpm9" event={"ID":"74b76edd-b661-4e94-a7fb-ead0347da570","Type":"ContainerStarted","Data":"91c51b1eb9c646bc46eb4edd8bf0f42fd4c5e54273fc99f9f1f3dd95a7a25b9d"} Mar 10 20:34:03 crc kubenswrapper[4861]: I0310 20:34:03.481967 4861 generic.go:334] "Generic (PLEG): container finished" podID="74b76edd-b661-4e94-a7fb-ead0347da570" containerID="91c51b1eb9c646bc46eb4edd8bf0f42fd4c5e54273fc99f9f1f3dd95a7a25b9d" exitCode=0 Mar 10 20:34:03 crc kubenswrapper[4861]: I0310 20:34:03.482017 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6gpm9" event={"ID":"74b76edd-b661-4e94-a7fb-ead0347da570","Type":"ContainerDied","Data":"91c51b1eb9c646bc46eb4edd8bf0f42fd4c5e54273fc99f9f1f3dd95a7a25b9d"} Mar 10 20:34:03 crc kubenswrapper[4861]: I0310 20:34:03.486127 4861 generic.go:334] "Generic (PLEG): container finished" podID="8f4d3afc-0bde-4e92-99bf-7231448cfa1a" containerID="99cb18c8ebff5d1a0be9c3976401477c433092e4c63f60138de888fcd281add1" exitCode=0 Mar 10 20:34:03 crc kubenswrapper[4861]: I0310 20:34:03.486168 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29552914-jd5mv" 
event={"ID":"8f4d3afc-0bde-4e92-99bf-7231448cfa1a","Type":"ContainerDied","Data":"99cb18c8ebff5d1a0be9c3976401477c433092e4c63f60138de888fcd281add1"} Mar 10 20:34:03 crc kubenswrapper[4861]: I0310 20:34:03.958879 4861 scope.go:117] "RemoveContainer" containerID="30617e1722c7582f37ad57fcb8d1dc45aba4f055419f9b178dab9e2fdf629c84" Mar 10 20:34:03 crc kubenswrapper[4861]: E0310 20:34:03.959282 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qttbr_openshift-machine-config-operator(771189c2-452d-4204-a0b7-abfe9ba62bd0)\"" pod="openshift-machine-config-operator/machine-config-daemon-qttbr" podUID="771189c2-452d-4204-a0b7-abfe9ba62bd0" Mar 10 20:34:04 crc kubenswrapper[4861]: I0310 20:34:04.494916 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6gpm9" event={"ID":"74b76edd-b661-4e94-a7fb-ead0347da570","Type":"ContainerStarted","Data":"16ec3a4bc53e958a9988ed816c731ed48ef041490550a363060689bc2e29b48e"} Mar 10 20:34:04 crc kubenswrapper[4861]: I0310 20:34:04.515163 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-6gpm9" podStartSLOduration=1.964178325 podStartE2EDuration="4.515147773s" podCreationTimestamp="2026-03-10 20:34:00 +0000 UTC" firstStartedPulling="2026-03-10 20:34:01.440068535 +0000 UTC m=+6385.203504505" lastFinishedPulling="2026-03-10 20:34:03.991037983 +0000 UTC m=+6387.754473953" observedRunningTime="2026-03-10 20:34:04.510960259 +0000 UTC m=+6388.274396229" watchObservedRunningTime="2026-03-10 20:34:04.515147773 +0000 UTC m=+6388.278583723" Mar 10 20:34:04 crc kubenswrapper[4861]: I0310 20:34:04.823555 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29552914-jd5mv" Mar 10 20:34:04 crc kubenswrapper[4861]: I0310 20:34:04.891959 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-msc9k\" (UniqueName: \"kubernetes.io/projected/8f4d3afc-0bde-4e92-99bf-7231448cfa1a-kube-api-access-msc9k\") pod \"8f4d3afc-0bde-4e92-99bf-7231448cfa1a\" (UID: \"8f4d3afc-0bde-4e92-99bf-7231448cfa1a\") " Mar 10 20:34:04 crc kubenswrapper[4861]: I0310 20:34:04.902949 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f4d3afc-0bde-4e92-99bf-7231448cfa1a-kube-api-access-msc9k" (OuterVolumeSpecName: "kube-api-access-msc9k") pod "8f4d3afc-0bde-4e92-99bf-7231448cfa1a" (UID: "8f4d3afc-0bde-4e92-99bf-7231448cfa1a"). InnerVolumeSpecName "kube-api-access-msc9k". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 10 20:34:04 crc kubenswrapper[4861]: I0310 20:34:04.994470 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-msc9k\" (UniqueName: \"kubernetes.io/projected/8f4d3afc-0bde-4e92-99bf-7231448cfa1a-kube-api-access-msc9k\") on node \"crc\" DevicePath \"\"" Mar 10 20:34:05 crc kubenswrapper[4861]: I0310 20:34:05.507010 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29552914-jd5mv" event={"ID":"8f4d3afc-0bde-4e92-99bf-7231448cfa1a","Type":"ContainerDied","Data":"dad0b20a3fe3bfc35f884b10188731875df832f671ab23d981ad95e8bf041ee9"} Mar 10 20:34:05 crc kubenswrapper[4861]: I0310 20:34:05.507041 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29552914-jd5mv" Mar 10 20:34:05 crc kubenswrapper[4861]: I0310 20:34:05.507064 4861 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dad0b20a3fe3bfc35f884b10188731875df832f671ab23d981ad95e8bf041ee9" Mar 10 20:34:05 crc kubenswrapper[4861]: I0310 20:34:05.760822 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-vcpm7"] Mar 10 20:34:05 crc kubenswrapper[4861]: E0310 20:34:05.761239 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8f4d3afc-0bde-4e92-99bf-7231448cfa1a" containerName="oc" Mar 10 20:34:05 crc kubenswrapper[4861]: I0310 20:34:05.761256 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="8f4d3afc-0bde-4e92-99bf-7231448cfa1a" containerName="oc" Mar 10 20:34:05 crc kubenswrapper[4861]: I0310 20:34:05.761475 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="8f4d3afc-0bde-4e92-99bf-7231448cfa1a" containerName="oc" Mar 10 20:34:05 crc kubenswrapper[4861]: I0310 20:34:05.763165 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vcpm7" Mar 10 20:34:05 crc kubenswrapper[4861]: I0310 20:34:05.789240 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-vcpm7"] Mar 10 20:34:05 crc kubenswrapper[4861]: I0310 20:34:05.902782 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29552908-2b8bk"] Mar 10 20:34:05 crc kubenswrapper[4861]: I0310 20:34:05.911275 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cz4n2\" (UniqueName: \"kubernetes.io/projected/7c0e2fb7-58d6-4f34-b0c1-30456ba1d73e-kube-api-access-cz4n2\") pod \"redhat-marketplace-vcpm7\" (UID: \"7c0e2fb7-58d6-4f34-b0c1-30456ba1d73e\") " pod="openshift-marketplace/redhat-marketplace-vcpm7" Mar 10 20:34:05 crc kubenswrapper[4861]: I0310 20:34:05.911566 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7c0e2fb7-58d6-4f34-b0c1-30456ba1d73e-catalog-content\") pod \"redhat-marketplace-vcpm7\" (UID: \"7c0e2fb7-58d6-4f34-b0c1-30456ba1d73e\") " pod="openshift-marketplace/redhat-marketplace-vcpm7" Mar 10 20:34:05 crc kubenswrapper[4861]: I0310 20:34:05.911759 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7c0e2fb7-58d6-4f34-b0c1-30456ba1d73e-utilities\") pod \"redhat-marketplace-vcpm7\" (UID: \"7c0e2fb7-58d6-4f34-b0c1-30456ba1d73e\") " pod="openshift-marketplace/redhat-marketplace-vcpm7" Mar 10 20:34:05 crc kubenswrapper[4861]: I0310 20:34:05.916425 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29552908-2b8bk"] Mar 10 20:34:06 crc kubenswrapper[4861]: I0310 20:34:06.013826 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-cz4n2\" (UniqueName: \"kubernetes.io/projected/7c0e2fb7-58d6-4f34-b0c1-30456ba1d73e-kube-api-access-cz4n2\") pod \"redhat-marketplace-vcpm7\" (UID: \"7c0e2fb7-58d6-4f34-b0c1-30456ba1d73e\") " pod="openshift-marketplace/redhat-marketplace-vcpm7" Mar 10 20:34:06 crc kubenswrapper[4861]: I0310 20:34:06.014271 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7c0e2fb7-58d6-4f34-b0c1-30456ba1d73e-catalog-content\") pod \"redhat-marketplace-vcpm7\" (UID: \"7c0e2fb7-58d6-4f34-b0c1-30456ba1d73e\") " pod="openshift-marketplace/redhat-marketplace-vcpm7" Mar 10 20:34:06 crc kubenswrapper[4861]: I0310 20:34:06.014320 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7c0e2fb7-58d6-4f34-b0c1-30456ba1d73e-utilities\") pod \"redhat-marketplace-vcpm7\" (UID: \"7c0e2fb7-58d6-4f34-b0c1-30456ba1d73e\") " pod="openshift-marketplace/redhat-marketplace-vcpm7" Mar 10 20:34:06 crc kubenswrapper[4861]: I0310 20:34:06.014898 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7c0e2fb7-58d6-4f34-b0c1-30456ba1d73e-catalog-content\") pod \"redhat-marketplace-vcpm7\" (UID: \"7c0e2fb7-58d6-4f34-b0c1-30456ba1d73e\") " pod="openshift-marketplace/redhat-marketplace-vcpm7" Mar 10 20:34:06 crc kubenswrapper[4861]: I0310 20:34:06.014967 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7c0e2fb7-58d6-4f34-b0c1-30456ba1d73e-utilities\") pod \"redhat-marketplace-vcpm7\" (UID: \"7c0e2fb7-58d6-4f34-b0c1-30456ba1d73e\") " pod="openshift-marketplace/redhat-marketplace-vcpm7" Mar 10 20:34:06 crc kubenswrapper[4861]: I0310 20:34:06.037056 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cz4n2\" (UniqueName: 
\"kubernetes.io/projected/7c0e2fb7-58d6-4f34-b0c1-30456ba1d73e-kube-api-access-cz4n2\") pod \"redhat-marketplace-vcpm7\" (UID: \"7c0e2fb7-58d6-4f34-b0c1-30456ba1d73e\") " pod="openshift-marketplace/redhat-marketplace-vcpm7" Mar 10 20:34:06 crc kubenswrapper[4861]: I0310 20:34:06.083689 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vcpm7" Mar 10 20:34:06 crc kubenswrapper[4861]: I0310 20:34:06.537482 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-vcpm7"] Mar 10 20:34:06 crc kubenswrapper[4861]: W0310 20:34:06.543493 4861 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7c0e2fb7_58d6_4f34_b0c1_30456ba1d73e.slice/crio-51039013755c1544c1269f9edb929bdc0c71de27c9651ba5d17f68033c0875a2 WatchSource:0}: Error finding container 51039013755c1544c1269f9edb929bdc0c71de27c9651ba5d17f68033c0875a2: Status 404 returned error can't find the container with id 51039013755c1544c1269f9edb929bdc0c71de27c9651ba5d17f68033c0875a2 Mar 10 20:34:06 crc kubenswrapper[4861]: I0310 20:34:06.987618 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09b6f18c-ee9d-459b-9f33-5301af9465f3" path="/var/lib/kubelet/pods/09b6f18c-ee9d-459b-9f33-5301af9465f3/volumes" Mar 10 20:34:07 crc kubenswrapper[4861]: I0310 20:34:07.524301 4861 generic.go:334] "Generic (PLEG): container finished" podID="7c0e2fb7-58d6-4f34-b0c1-30456ba1d73e" containerID="b260350c3baa0410a1fd51bbd76715dcb78a9b67fae8d760df729b79e91b895f" exitCode=0 Mar 10 20:34:07 crc kubenswrapper[4861]: I0310 20:34:07.524599 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vcpm7" event={"ID":"7c0e2fb7-58d6-4f34-b0c1-30456ba1d73e","Type":"ContainerDied","Data":"b260350c3baa0410a1fd51bbd76715dcb78a9b67fae8d760df729b79e91b895f"} Mar 10 20:34:07 crc kubenswrapper[4861]: I0310 
20:34:07.524640 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vcpm7" event={"ID":"7c0e2fb7-58d6-4f34-b0c1-30456ba1d73e","Type":"ContainerStarted","Data":"51039013755c1544c1269f9edb929bdc0c71de27c9651ba5d17f68033c0875a2"} Mar 10 20:34:09 crc kubenswrapper[4861]: I0310 20:34:09.546736 4861 generic.go:334] "Generic (PLEG): container finished" podID="7c0e2fb7-58d6-4f34-b0c1-30456ba1d73e" containerID="c439b95d90b071d3eda5ea736c9adeb7131d8abc8fca8fd0247319ad4f658095" exitCode=0 Mar 10 20:34:09 crc kubenswrapper[4861]: I0310 20:34:09.546825 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vcpm7" event={"ID":"7c0e2fb7-58d6-4f34-b0c1-30456ba1d73e","Type":"ContainerDied","Data":"c439b95d90b071d3eda5ea736c9adeb7131d8abc8fca8fd0247319ad4f658095"} Mar 10 20:34:10 crc kubenswrapper[4861]: I0310 20:34:10.561731 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vcpm7" event={"ID":"7c0e2fb7-58d6-4f34-b0c1-30456ba1d73e","Type":"ContainerStarted","Data":"06a978e87f35df1da1f04937f4974d97cdaf27adefdbed9f5462b4634a76c702"} Mar 10 20:34:10 crc kubenswrapper[4861]: I0310 20:34:10.590360 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-vcpm7" podStartSLOduration=3.044594728 podStartE2EDuration="5.590335944s" podCreationTimestamp="2026-03-10 20:34:05 +0000 UTC" firstStartedPulling="2026-03-10 20:34:07.526956053 +0000 UTC m=+6391.290392023" lastFinishedPulling="2026-03-10 20:34:10.072697269 +0000 UTC m=+6393.836133239" observedRunningTime="2026-03-10 20:34:10.586067599 +0000 UTC m=+6394.349503569" watchObservedRunningTime="2026-03-10 20:34:10.590335944 +0000 UTC m=+6394.353771904" Mar 10 20:34:10 crc kubenswrapper[4861]: I0310 20:34:10.685260 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-6gpm9" 
Mar 10 20:34:10 crc kubenswrapper[4861]: I0310 20:34:10.686219 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-6gpm9" Mar 10 20:34:11 crc kubenswrapper[4861]: I0310 20:34:11.760452 4861 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-6gpm9" podUID="74b76edd-b661-4e94-a7fb-ead0347da570" containerName="registry-server" probeResult="failure" output=< Mar 10 20:34:11 crc kubenswrapper[4861]: timeout: failed to connect service ":50051" within 1s Mar 10 20:34:11 crc kubenswrapper[4861]: > Mar 10 20:34:15 crc kubenswrapper[4861]: I0310 20:34:15.958808 4861 scope.go:117] "RemoveContainer" containerID="30617e1722c7582f37ad57fcb8d1dc45aba4f055419f9b178dab9e2fdf629c84" Mar 10 20:34:15 crc kubenswrapper[4861]: E0310 20:34:15.961299 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qttbr_openshift-machine-config-operator(771189c2-452d-4204-a0b7-abfe9ba62bd0)\"" pod="openshift-machine-config-operator/machine-config-daemon-qttbr" podUID="771189c2-452d-4204-a0b7-abfe9ba62bd0" Mar 10 20:34:16 crc kubenswrapper[4861]: I0310 20:34:16.083951 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-vcpm7" Mar 10 20:34:16 crc kubenswrapper[4861]: I0310 20:34:16.084026 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-vcpm7" Mar 10 20:34:16 crc kubenswrapper[4861]: I0310 20:34:16.156549 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-vcpm7" Mar 10 20:34:16 crc kubenswrapper[4861]: I0310 20:34:16.702504 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-marketplace/redhat-marketplace-vcpm7" Mar 10 20:34:16 crc kubenswrapper[4861]: I0310 20:34:16.774978 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-vcpm7"] Mar 10 20:34:18 crc kubenswrapper[4861]: I0310 20:34:18.944327 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-vcpm7" podUID="7c0e2fb7-58d6-4f34-b0c1-30456ba1d73e" containerName="registry-server" containerID="cri-o://06a978e87f35df1da1f04937f4974d97cdaf27adefdbed9f5462b4634a76c702" gracePeriod=2 Mar 10 20:34:19 crc kubenswrapper[4861]: I0310 20:34:19.395058 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vcpm7" Mar 10 20:34:19 crc kubenswrapper[4861]: I0310 20:34:19.549449 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cz4n2\" (UniqueName: \"kubernetes.io/projected/7c0e2fb7-58d6-4f34-b0c1-30456ba1d73e-kube-api-access-cz4n2\") pod \"7c0e2fb7-58d6-4f34-b0c1-30456ba1d73e\" (UID: \"7c0e2fb7-58d6-4f34-b0c1-30456ba1d73e\") " Mar 10 20:34:19 crc kubenswrapper[4861]: I0310 20:34:19.549607 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7c0e2fb7-58d6-4f34-b0c1-30456ba1d73e-catalog-content\") pod \"7c0e2fb7-58d6-4f34-b0c1-30456ba1d73e\" (UID: \"7c0e2fb7-58d6-4f34-b0c1-30456ba1d73e\") " Mar 10 20:34:19 crc kubenswrapper[4861]: I0310 20:34:19.549663 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7c0e2fb7-58d6-4f34-b0c1-30456ba1d73e-utilities\") pod \"7c0e2fb7-58d6-4f34-b0c1-30456ba1d73e\" (UID: \"7c0e2fb7-58d6-4f34-b0c1-30456ba1d73e\") " Mar 10 20:34:19 crc kubenswrapper[4861]: I0310 20:34:19.550804 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for 
volume "kubernetes.io/empty-dir/7c0e2fb7-58d6-4f34-b0c1-30456ba1d73e-utilities" (OuterVolumeSpecName: "utilities") pod "7c0e2fb7-58d6-4f34-b0c1-30456ba1d73e" (UID: "7c0e2fb7-58d6-4f34-b0c1-30456ba1d73e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 10 20:34:19 crc kubenswrapper[4861]: I0310 20:34:19.565505 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7c0e2fb7-58d6-4f34-b0c1-30456ba1d73e-kube-api-access-cz4n2" (OuterVolumeSpecName: "kube-api-access-cz4n2") pod "7c0e2fb7-58d6-4f34-b0c1-30456ba1d73e" (UID: "7c0e2fb7-58d6-4f34-b0c1-30456ba1d73e"). InnerVolumeSpecName "kube-api-access-cz4n2". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 10 20:34:19 crc kubenswrapper[4861]: I0310 20:34:19.600686 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7c0e2fb7-58d6-4f34-b0c1-30456ba1d73e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7c0e2fb7-58d6-4f34-b0c1-30456ba1d73e" (UID: "7c0e2fb7-58d6-4f34-b0c1-30456ba1d73e"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 10 20:34:19 crc kubenswrapper[4861]: I0310 20:34:19.653392 4861 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7c0e2fb7-58d6-4f34-b0c1-30456ba1d73e-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 10 20:34:19 crc kubenswrapper[4861]: I0310 20:34:19.653509 4861 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7c0e2fb7-58d6-4f34-b0c1-30456ba1d73e-utilities\") on node \"crc\" DevicePath \"\"" Mar 10 20:34:19 crc kubenswrapper[4861]: I0310 20:34:19.653532 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cz4n2\" (UniqueName: \"kubernetes.io/projected/7c0e2fb7-58d6-4f34-b0c1-30456ba1d73e-kube-api-access-cz4n2\") on node \"crc\" DevicePath \"\"" Mar 10 20:34:19 crc kubenswrapper[4861]: I0310 20:34:19.957664 4861 generic.go:334] "Generic (PLEG): container finished" podID="7c0e2fb7-58d6-4f34-b0c1-30456ba1d73e" containerID="06a978e87f35df1da1f04937f4974d97cdaf27adefdbed9f5462b4634a76c702" exitCode=0 Mar 10 20:34:19 crc kubenswrapper[4861]: I0310 20:34:19.957726 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vcpm7" event={"ID":"7c0e2fb7-58d6-4f34-b0c1-30456ba1d73e","Type":"ContainerDied","Data":"06a978e87f35df1da1f04937f4974d97cdaf27adefdbed9f5462b4634a76c702"} Mar 10 20:34:19 crc kubenswrapper[4861]: I0310 20:34:19.957757 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vcpm7" event={"ID":"7c0e2fb7-58d6-4f34-b0c1-30456ba1d73e","Type":"ContainerDied","Data":"51039013755c1544c1269f9edb929bdc0c71de27c9651ba5d17f68033c0875a2"} Mar 10 20:34:19 crc kubenswrapper[4861]: I0310 20:34:19.957777 4861 scope.go:117] "RemoveContainer" containerID="06a978e87f35df1da1f04937f4974d97cdaf27adefdbed9f5462b4634a76c702" Mar 10 20:34:19 crc kubenswrapper[4861]: I0310 
20:34:19.957915 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vcpm7" Mar 10 20:34:20 crc kubenswrapper[4861]: I0310 20:34:20.000251 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-vcpm7"] Mar 10 20:34:20 crc kubenswrapper[4861]: I0310 20:34:20.002854 4861 scope.go:117] "RemoveContainer" containerID="c439b95d90b071d3eda5ea736c9adeb7131d8abc8fca8fd0247319ad4f658095" Mar 10 20:34:20 crc kubenswrapper[4861]: I0310 20:34:20.009567 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-vcpm7"] Mar 10 20:34:20 crc kubenswrapper[4861]: I0310 20:34:20.029051 4861 scope.go:117] "RemoveContainer" containerID="b260350c3baa0410a1fd51bbd76715dcb78a9b67fae8d760df729b79e91b895f" Mar 10 20:34:20 crc kubenswrapper[4861]: I0310 20:34:20.107957 4861 scope.go:117] "RemoveContainer" containerID="06a978e87f35df1da1f04937f4974d97cdaf27adefdbed9f5462b4634a76c702" Mar 10 20:34:20 crc kubenswrapper[4861]: E0310 20:34:20.108462 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"06a978e87f35df1da1f04937f4974d97cdaf27adefdbed9f5462b4634a76c702\": container with ID starting with 06a978e87f35df1da1f04937f4974d97cdaf27adefdbed9f5462b4634a76c702 not found: ID does not exist" containerID="06a978e87f35df1da1f04937f4974d97cdaf27adefdbed9f5462b4634a76c702" Mar 10 20:34:20 crc kubenswrapper[4861]: I0310 20:34:20.108530 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"06a978e87f35df1da1f04937f4974d97cdaf27adefdbed9f5462b4634a76c702"} err="failed to get container status \"06a978e87f35df1da1f04937f4974d97cdaf27adefdbed9f5462b4634a76c702\": rpc error: code = NotFound desc = could not find container \"06a978e87f35df1da1f04937f4974d97cdaf27adefdbed9f5462b4634a76c702\": container with ID starting with 
06a978e87f35df1da1f04937f4974d97cdaf27adefdbed9f5462b4634a76c702 not found: ID does not exist" Mar 10 20:34:20 crc kubenswrapper[4861]: I0310 20:34:20.108573 4861 scope.go:117] "RemoveContainer" containerID="c439b95d90b071d3eda5ea736c9adeb7131d8abc8fca8fd0247319ad4f658095" Mar 10 20:34:20 crc kubenswrapper[4861]: E0310 20:34:20.109120 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c439b95d90b071d3eda5ea736c9adeb7131d8abc8fca8fd0247319ad4f658095\": container with ID starting with c439b95d90b071d3eda5ea736c9adeb7131d8abc8fca8fd0247319ad4f658095 not found: ID does not exist" containerID="c439b95d90b071d3eda5ea736c9adeb7131d8abc8fca8fd0247319ad4f658095" Mar 10 20:34:20 crc kubenswrapper[4861]: I0310 20:34:20.109175 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c439b95d90b071d3eda5ea736c9adeb7131d8abc8fca8fd0247319ad4f658095"} err="failed to get container status \"c439b95d90b071d3eda5ea736c9adeb7131d8abc8fca8fd0247319ad4f658095\": rpc error: code = NotFound desc = could not find container \"c439b95d90b071d3eda5ea736c9adeb7131d8abc8fca8fd0247319ad4f658095\": container with ID starting with c439b95d90b071d3eda5ea736c9adeb7131d8abc8fca8fd0247319ad4f658095 not found: ID does not exist" Mar 10 20:34:20 crc kubenswrapper[4861]: I0310 20:34:20.109212 4861 scope.go:117] "RemoveContainer" containerID="b260350c3baa0410a1fd51bbd76715dcb78a9b67fae8d760df729b79e91b895f" Mar 10 20:34:20 crc kubenswrapper[4861]: E0310 20:34:20.109659 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b260350c3baa0410a1fd51bbd76715dcb78a9b67fae8d760df729b79e91b895f\": container with ID starting with b260350c3baa0410a1fd51bbd76715dcb78a9b67fae8d760df729b79e91b895f not found: ID does not exist" containerID="b260350c3baa0410a1fd51bbd76715dcb78a9b67fae8d760df729b79e91b895f" Mar 10 20:34:20 crc 
kubenswrapper[4861]: I0310 20:34:20.109682 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b260350c3baa0410a1fd51bbd76715dcb78a9b67fae8d760df729b79e91b895f"} err="failed to get container status \"b260350c3baa0410a1fd51bbd76715dcb78a9b67fae8d760df729b79e91b895f\": rpc error: code = NotFound desc = could not find container \"b260350c3baa0410a1fd51bbd76715dcb78a9b67fae8d760df729b79e91b895f\": container with ID starting with b260350c3baa0410a1fd51bbd76715dcb78a9b67fae8d760df729b79e91b895f not found: ID does not exist" Mar 10 20:34:20 crc kubenswrapper[4861]: I0310 20:34:20.745076 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-6gpm9" Mar 10 20:34:20 crc kubenswrapper[4861]: I0310 20:34:20.835230 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-6gpm9" Mar 10 20:34:20 crc kubenswrapper[4861]: I0310 20:34:20.971036 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7c0e2fb7-58d6-4f34-b0c1-30456ba1d73e" path="/var/lib/kubelet/pods/7c0e2fb7-58d6-4f34-b0c1-30456ba1d73e/volumes" Mar 10 20:34:25 crc kubenswrapper[4861]: I0310 20:34:25.403138 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-6gpm9"] Mar 10 20:34:25 crc kubenswrapper[4861]: I0310 20:34:25.403784 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-6gpm9" podUID="74b76edd-b661-4e94-a7fb-ead0347da570" containerName="registry-server" containerID="cri-o://16ec3a4bc53e958a9988ed816c731ed48ef041490550a363060689bc2e29b48e" gracePeriod=2 Mar 10 20:34:25 crc kubenswrapper[4861]: I0310 20:34:25.922968 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-6gpm9" Mar 10 20:34:26 crc kubenswrapper[4861]: I0310 20:34:26.033965 4861 generic.go:334] "Generic (PLEG): container finished" podID="74b76edd-b661-4e94-a7fb-ead0347da570" containerID="16ec3a4bc53e958a9988ed816c731ed48ef041490550a363060689bc2e29b48e" exitCode=0 Mar 10 20:34:26 crc kubenswrapper[4861]: I0310 20:34:26.034003 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6gpm9" event={"ID":"74b76edd-b661-4e94-a7fb-ead0347da570","Type":"ContainerDied","Data":"16ec3a4bc53e958a9988ed816c731ed48ef041490550a363060689bc2e29b48e"} Mar 10 20:34:26 crc kubenswrapper[4861]: I0310 20:34:26.034027 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6gpm9" event={"ID":"74b76edd-b661-4e94-a7fb-ead0347da570","Type":"ContainerDied","Data":"b34d736f4c8ec0417ec779c5b6c1f797f71f0bd29911e8c60478a5b31e03016e"} Mar 10 20:34:26 crc kubenswrapper[4861]: I0310 20:34:26.034042 4861 scope.go:117] "RemoveContainer" containerID="16ec3a4bc53e958a9988ed816c731ed48ef041490550a363060689bc2e29b48e" Mar 10 20:34:26 crc kubenswrapper[4861]: I0310 20:34:26.034173 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-6gpm9" Mar 10 20:34:26 crc kubenswrapper[4861]: I0310 20:34:26.067333 4861 scope.go:117] "RemoveContainer" containerID="91c51b1eb9c646bc46eb4edd8bf0f42fd4c5e54273fc99f9f1f3dd95a7a25b9d" Mar 10 20:34:26 crc kubenswrapper[4861]: I0310 20:34:26.085068 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/74b76edd-b661-4e94-a7fb-ead0347da570-catalog-content\") pod \"74b76edd-b661-4e94-a7fb-ead0347da570\" (UID: \"74b76edd-b661-4e94-a7fb-ead0347da570\") " Mar 10 20:34:26 crc kubenswrapper[4861]: I0310 20:34:26.085136 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/74b76edd-b661-4e94-a7fb-ead0347da570-utilities\") pod \"74b76edd-b661-4e94-a7fb-ead0347da570\" (UID: \"74b76edd-b661-4e94-a7fb-ead0347da570\") " Mar 10 20:34:26 crc kubenswrapper[4861]: I0310 20:34:26.085228 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fzf6v\" (UniqueName: \"kubernetes.io/projected/74b76edd-b661-4e94-a7fb-ead0347da570-kube-api-access-fzf6v\") pod \"74b76edd-b661-4e94-a7fb-ead0347da570\" (UID: \"74b76edd-b661-4e94-a7fb-ead0347da570\") " Mar 10 20:34:26 crc kubenswrapper[4861]: I0310 20:34:26.087841 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/74b76edd-b661-4e94-a7fb-ead0347da570-utilities" (OuterVolumeSpecName: "utilities") pod "74b76edd-b661-4e94-a7fb-ead0347da570" (UID: "74b76edd-b661-4e94-a7fb-ead0347da570"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 10 20:34:26 crc kubenswrapper[4861]: I0310 20:34:26.094404 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/74b76edd-b661-4e94-a7fb-ead0347da570-kube-api-access-fzf6v" (OuterVolumeSpecName: "kube-api-access-fzf6v") pod "74b76edd-b661-4e94-a7fb-ead0347da570" (UID: "74b76edd-b661-4e94-a7fb-ead0347da570"). InnerVolumeSpecName "kube-api-access-fzf6v". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 10 20:34:26 crc kubenswrapper[4861]: I0310 20:34:26.099189 4861 scope.go:117] "RemoveContainer" containerID="908c05fcd781c2206c0b844ce8f624902514cf24e700f8522a619724dbc8569e" Mar 10 20:34:26 crc kubenswrapper[4861]: I0310 20:34:26.171424 4861 scope.go:117] "RemoveContainer" containerID="16ec3a4bc53e958a9988ed816c731ed48ef041490550a363060689bc2e29b48e" Mar 10 20:34:26 crc kubenswrapper[4861]: E0310 20:34:26.171984 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"16ec3a4bc53e958a9988ed816c731ed48ef041490550a363060689bc2e29b48e\": container with ID starting with 16ec3a4bc53e958a9988ed816c731ed48ef041490550a363060689bc2e29b48e not found: ID does not exist" containerID="16ec3a4bc53e958a9988ed816c731ed48ef041490550a363060689bc2e29b48e" Mar 10 20:34:26 crc kubenswrapper[4861]: I0310 20:34:26.172049 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"16ec3a4bc53e958a9988ed816c731ed48ef041490550a363060689bc2e29b48e"} err="failed to get container status \"16ec3a4bc53e958a9988ed816c731ed48ef041490550a363060689bc2e29b48e\": rpc error: code = NotFound desc = could not find container \"16ec3a4bc53e958a9988ed816c731ed48ef041490550a363060689bc2e29b48e\": container with ID starting with 16ec3a4bc53e958a9988ed816c731ed48ef041490550a363060689bc2e29b48e not found: ID does not exist" Mar 10 20:34:26 crc kubenswrapper[4861]: I0310 20:34:26.172104 
4861 scope.go:117] "RemoveContainer" containerID="91c51b1eb9c646bc46eb4edd8bf0f42fd4c5e54273fc99f9f1f3dd95a7a25b9d" Mar 10 20:34:26 crc kubenswrapper[4861]: E0310 20:34:26.172894 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"91c51b1eb9c646bc46eb4edd8bf0f42fd4c5e54273fc99f9f1f3dd95a7a25b9d\": container with ID starting with 91c51b1eb9c646bc46eb4edd8bf0f42fd4c5e54273fc99f9f1f3dd95a7a25b9d not found: ID does not exist" containerID="91c51b1eb9c646bc46eb4edd8bf0f42fd4c5e54273fc99f9f1f3dd95a7a25b9d" Mar 10 20:34:26 crc kubenswrapper[4861]: I0310 20:34:26.172955 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"91c51b1eb9c646bc46eb4edd8bf0f42fd4c5e54273fc99f9f1f3dd95a7a25b9d"} err="failed to get container status \"91c51b1eb9c646bc46eb4edd8bf0f42fd4c5e54273fc99f9f1f3dd95a7a25b9d\": rpc error: code = NotFound desc = could not find container \"91c51b1eb9c646bc46eb4edd8bf0f42fd4c5e54273fc99f9f1f3dd95a7a25b9d\": container with ID starting with 91c51b1eb9c646bc46eb4edd8bf0f42fd4c5e54273fc99f9f1f3dd95a7a25b9d not found: ID does not exist" Mar 10 20:34:26 crc kubenswrapper[4861]: I0310 20:34:26.172994 4861 scope.go:117] "RemoveContainer" containerID="908c05fcd781c2206c0b844ce8f624902514cf24e700f8522a619724dbc8569e" Mar 10 20:34:26 crc kubenswrapper[4861]: E0310 20:34:26.173667 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"908c05fcd781c2206c0b844ce8f624902514cf24e700f8522a619724dbc8569e\": container with ID starting with 908c05fcd781c2206c0b844ce8f624902514cf24e700f8522a619724dbc8569e not found: ID does not exist" containerID="908c05fcd781c2206c0b844ce8f624902514cf24e700f8522a619724dbc8569e" Mar 10 20:34:26 crc kubenswrapper[4861]: I0310 20:34:26.173739 4861 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"908c05fcd781c2206c0b844ce8f624902514cf24e700f8522a619724dbc8569e"} err="failed to get container status \"908c05fcd781c2206c0b844ce8f624902514cf24e700f8522a619724dbc8569e\": rpc error: code = NotFound desc = could not find container \"908c05fcd781c2206c0b844ce8f624902514cf24e700f8522a619724dbc8569e\": container with ID starting with 908c05fcd781c2206c0b844ce8f624902514cf24e700f8522a619724dbc8569e not found: ID does not exist" Mar 10 20:34:26 crc kubenswrapper[4861]: I0310 20:34:26.187339 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fzf6v\" (UniqueName: \"kubernetes.io/projected/74b76edd-b661-4e94-a7fb-ead0347da570-kube-api-access-fzf6v\") on node \"crc\" DevicePath \"\"" Mar 10 20:34:26 crc kubenswrapper[4861]: I0310 20:34:26.187388 4861 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/74b76edd-b661-4e94-a7fb-ead0347da570-utilities\") on node \"crc\" DevicePath \"\"" Mar 10 20:34:26 crc kubenswrapper[4861]: I0310 20:34:26.232424 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/74b76edd-b661-4e94-a7fb-ead0347da570-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "74b76edd-b661-4e94-a7fb-ead0347da570" (UID: "74b76edd-b661-4e94-a7fb-ead0347da570"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 10 20:34:26 crc kubenswrapper[4861]: I0310 20:34:26.291433 4861 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/74b76edd-b661-4e94-a7fb-ead0347da570-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 10 20:34:26 crc kubenswrapper[4861]: I0310 20:34:26.389207 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-6gpm9"] Mar 10 20:34:26 crc kubenswrapper[4861]: I0310 20:34:26.401016 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-6gpm9"] Mar 10 20:34:26 crc kubenswrapper[4861]: I0310 20:34:26.976349 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="74b76edd-b661-4e94-a7fb-ead0347da570" path="/var/lib/kubelet/pods/74b76edd-b661-4e94-a7fb-ead0347da570/volumes" Mar 10 20:34:27 crc kubenswrapper[4861]: I0310 20:34:27.958856 4861 scope.go:117] "RemoveContainer" containerID="30617e1722c7582f37ad57fcb8d1dc45aba4f055419f9b178dab9e2fdf629c84" Mar 10 20:34:29 crc kubenswrapper[4861]: I0310 20:34:29.071660 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qttbr" event={"ID":"771189c2-452d-4204-a0b7-abfe9ba62bd0","Type":"ContainerStarted","Data":"cac8ff2a8e9486664814c9d4c374eb5ab479b9677f123c79806e6c10d880732d"} Mar 10 20:34:59 crc kubenswrapper[4861]: I0310 20:34:59.745292 4861 scope.go:117] "RemoveContainer" containerID="ddda13501c00da04e7a25a72a8f01c1e03373db293000cfd65250c2cbc476072" Mar 10 20:36:00 crc kubenswrapper[4861]: I0310 20:36:00.164175 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29552916-x8lgz"] Mar 10 20:36:00 crc kubenswrapper[4861]: E0310 20:36:00.165606 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7c0e2fb7-58d6-4f34-b0c1-30456ba1d73e" containerName="extract-utilities" Mar 10 
20:36:00 crc kubenswrapper[4861]: I0310 20:36:00.165636 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="7c0e2fb7-58d6-4f34-b0c1-30456ba1d73e" containerName="extract-utilities" Mar 10 20:36:00 crc kubenswrapper[4861]: E0310 20:36:00.165654 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7c0e2fb7-58d6-4f34-b0c1-30456ba1d73e" containerName="registry-server" Mar 10 20:36:00 crc kubenswrapper[4861]: I0310 20:36:00.165669 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="7c0e2fb7-58d6-4f34-b0c1-30456ba1d73e" containerName="registry-server" Mar 10 20:36:00 crc kubenswrapper[4861]: E0310 20:36:00.165692 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="74b76edd-b661-4e94-a7fb-ead0347da570" containerName="registry-server" Mar 10 20:36:00 crc kubenswrapper[4861]: I0310 20:36:00.165705 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="74b76edd-b661-4e94-a7fb-ead0347da570" containerName="registry-server" Mar 10 20:36:00 crc kubenswrapper[4861]: E0310 20:36:00.165772 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="74b76edd-b661-4e94-a7fb-ead0347da570" containerName="extract-utilities" Mar 10 20:36:00 crc kubenswrapper[4861]: I0310 20:36:00.165784 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="74b76edd-b661-4e94-a7fb-ead0347da570" containerName="extract-utilities" Mar 10 20:36:00 crc kubenswrapper[4861]: E0310 20:36:00.165804 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7c0e2fb7-58d6-4f34-b0c1-30456ba1d73e" containerName="extract-content" Mar 10 20:36:00 crc kubenswrapper[4861]: I0310 20:36:00.165817 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="7c0e2fb7-58d6-4f34-b0c1-30456ba1d73e" containerName="extract-content" Mar 10 20:36:00 crc kubenswrapper[4861]: E0310 20:36:00.165843 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="74b76edd-b661-4e94-a7fb-ead0347da570" containerName="extract-content" Mar 10 20:36:00 
crc kubenswrapper[4861]: I0310 20:36:00.165855 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="74b76edd-b661-4e94-a7fb-ead0347da570" containerName="extract-content" Mar 10 20:36:00 crc kubenswrapper[4861]: I0310 20:36:00.166227 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="7c0e2fb7-58d6-4f34-b0c1-30456ba1d73e" containerName="registry-server" Mar 10 20:36:00 crc kubenswrapper[4861]: I0310 20:36:00.166258 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="74b76edd-b661-4e94-a7fb-ead0347da570" containerName="registry-server" Mar 10 20:36:00 crc kubenswrapper[4861]: I0310 20:36:00.167301 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29552916-x8lgz" Mar 10 20:36:00 crc kubenswrapper[4861]: I0310 20:36:00.171238 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 10 20:36:00 crc kubenswrapper[4861]: I0310 20:36:00.171355 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-gfbj2" Mar 10 20:36:00 crc kubenswrapper[4861]: I0310 20:36:00.171774 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 10 20:36:00 crc kubenswrapper[4861]: I0310 20:36:00.179942 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29552916-x8lgz"] Mar 10 20:36:00 crc kubenswrapper[4861]: I0310 20:36:00.331123 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-445fd\" (UniqueName: \"kubernetes.io/projected/e66623e4-8467-42a1-bd66-cbd8a720deb2-kube-api-access-445fd\") pod \"auto-csr-approver-29552916-x8lgz\" (UID: \"e66623e4-8467-42a1-bd66-cbd8a720deb2\") " pod="openshift-infra/auto-csr-approver-29552916-x8lgz" Mar 10 20:36:00 crc kubenswrapper[4861]: I0310 20:36:00.433135 4861 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-445fd\" (UniqueName: \"kubernetes.io/projected/e66623e4-8467-42a1-bd66-cbd8a720deb2-kube-api-access-445fd\") pod \"auto-csr-approver-29552916-x8lgz\" (UID: \"e66623e4-8467-42a1-bd66-cbd8a720deb2\") " pod="openshift-infra/auto-csr-approver-29552916-x8lgz" Mar 10 20:36:00 crc kubenswrapper[4861]: I0310 20:36:00.467065 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-445fd\" (UniqueName: \"kubernetes.io/projected/e66623e4-8467-42a1-bd66-cbd8a720deb2-kube-api-access-445fd\") pod \"auto-csr-approver-29552916-x8lgz\" (UID: \"e66623e4-8467-42a1-bd66-cbd8a720deb2\") " pod="openshift-infra/auto-csr-approver-29552916-x8lgz" Mar 10 20:36:00 crc kubenswrapper[4861]: I0310 20:36:00.500427 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29552916-x8lgz" Mar 10 20:36:01 crc kubenswrapper[4861]: W0310 20:36:01.079688 4861 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode66623e4_8467_42a1_bd66_cbd8a720deb2.slice/crio-463a24b764aabfb59181fcef4043310a1a3145cee884c90a0201f5230062a48f WatchSource:0}: Error finding container 463a24b764aabfb59181fcef4043310a1a3145cee884c90a0201f5230062a48f: Status 404 returned error can't find the container with id 463a24b764aabfb59181fcef4043310a1a3145cee884c90a0201f5230062a48f Mar 10 20:36:01 crc kubenswrapper[4861]: I0310 20:36:01.083739 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29552916-x8lgz"] Mar 10 20:36:02 crc kubenswrapper[4861]: I0310 20:36:02.013098 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29552916-x8lgz" event={"ID":"e66623e4-8467-42a1-bd66-cbd8a720deb2","Type":"ContainerStarted","Data":"463a24b764aabfb59181fcef4043310a1a3145cee884c90a0201f5230062a48f"} Mar 10 
20:36:03 crc kubenswrapper[4861]: I0310 20:36:03.025515 4861 generic.go:334] "Generic (PLEG): container finished" podID="e66623e4-8467-42a1-bd66-cbd8a720deb2" containerID="e886cbe3a5d71a95f13e689cadf7cdb6267e12b8a669007ca25923b5984e4fec" exitCode=0 Mar 10 20:36:03 crc kubenswrapper[4861]: I0310 20:36:03.025602 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29552916-x8lgz" event={"ID":"e66623e4-8467-42a1-bd66-cbd8a720deb2","Type":"ContainerDied","Data":"e886cbe3a5d71a95f13e689cadf7cdb6267e12b8a669007ca25923b5984e4fec"} Mar 10 20:36:04 crc kubenswrapper[4861]: I0310 20:36:04.441743 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29552916-x8lgz" Mar 10 20:36:04 crc kubenswrapper[4861]: I0310 20:36:04.625655 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-445fd\" (UniqueName: \"kubernetes.io/projected/e66623e4-8467-42a1-bd66-cbd8a720deb2-kube-api-access-445fd\") pod \"e66623e4-8467-42a1-bd66-cbd8a720deb2\" (UID: \"e66623e4-8467-42a1-bd66-cbd8a720deb2\") " Mar 10 20:36:04 crc kubenswrapper[4861]: I0310 20:36:04.635043 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e66623e4-8467-42a1-bd66-cbd8a720deb2-kube-api-access-445fd" (OuterVolumeSpecName: "kube-api-access-445fd") pod "e66623e4-8467-42a1-bd66-cbd8a720deb2" (UID: "e66623e4-8467-42a1-bd66-cbd8a720deb2"). InnerVolumeSpecName "kube-api-access-445fd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 10 20:36:04 crc kubenswrapper[4861]: I0310 20:36:04.728381 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-445fd\" (UniqueName: \"kubernetes.io/projected/e66623e4-8467-42a1-bd66-cbd8a720deb2-kube-api-access-445fd\") on node \"crc\" DevicePath \"\"" Mar 10 20:36:05 crc kubenswrapper[4861]: I0310 20:36:05.049508 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29552916-x8lgz" event={"ID":"e66623e4-8467-42a1-bd66-cbd8a720deb2","Type":"ContainerDied","Data":"463a24b764aabfb59181fcef4043310a1a3145cee884c90a0201f5230062a48f"} Mar 10 20:36:05 crc kubenswrapper[4861]: I0310 20:36:05.049581 4861 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="463a24b764aabfb59181fcef4043310a1a3145cee884c90a0201f5230062a48f" Mar 10 20:36:05 crc kubenswrapper[4861]: I0310 20:36:05.049668 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29552916-x8lgz" Mar 10 20:36:05 crc kubenswrapper[4861]: I0310 20:36:05.549374 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29552910-gds55"] Mar 10 20:36:05 crc kubenswrapper[4861]: I0310 20:36:05.564780 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29552910-gds55"] Mar 10 20:36:06 crc kubenswrapper[4861]: I0310 20:36:06.968810 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="683b2c94-abc2-4a99-961a-27f6a66761c2" path="/var/lib/kubelet/pods/683b2c94-abc2-4a99-961a-27f6a66761c2/volumes" Mar 10 20:36:51 crc kubenswrapper[4861]: I0310 20:36:51.992031 4861 patch_prober.go:28] interesting pod/machine-config-daemon-qttbr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: 
connection refused" start-of-body= Mar 10 20:36:51 crc kubenswrapper[4861]: I0310 20:36:51.992901 4861 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qttbr" podUID="771189c2-452d-4204-a0b7-abfe9ba62bd0" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 10 20:36:59 crc kubenswrapper[4861]: I0310 20:36:59.936237 4861 scope.go:117] "RemoveContainer" containerID="d386fb604c754eaee97af40ae773c25f0bf92612f8c3d2d8298c8b80a6bf1bf7" Mar 10 20:37:21 crc kubenswrapper[4861]: I0310 20:37:21.992354 4861 patch_prober.go:28] interesting pod/machine-config-daemon-qttbr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 10 20:37:21 crc kubenswrapper[4861]: I0310 20:37:21.993000 4861 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qttbr" podUID="771189c2-452d-4204-a0b7-abfe9ba62bd0" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 10 20:37:47 crc kubenswrapper[4861]: I0310 20:37:47.793395 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-96dlc"] Mar 10 20:37:47 crc kubenswrapper[4861]: E0310 20:37:47.802403 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e66623e4-8467-42a1-bd66-cbd8a720deb2" containerName="oc" Mar 10 20:37:47 crc kubenswrapper[4861]: I0310 20:37:47.802445 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="e66623e4-8467-42a1-bd66-cbd8a720deb2" containerName="oc" Mar 10 20:37:47 crc kubenswrapper[4861]: I0310 20:37:47.802645 4861 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="e66623e4-8467-42a1-bd66-cbd8a720deb2" containerName="oc"
Mar 10 20:37:47 crc kubenswrapper[4861]: I0310 20:37:47.804169 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-96dlc"
Mar 10 20:37:47 crc kubenswrapper[4861]: I0310 20:37:47.812994 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-96dlc"]
Mar 10 20:37:47 crc kubenswrapper[4861]: I0310 20:37:47.920469 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f5ae6fc9-6cc7-40e8-a028-33d067e5f092-catalog-content\") pod \"certified-operators-96dlc\" (UID: \"f5ae6fc9-6cc7-40e8-a028-33d067e5f092\") " pod="openshift-marketplace/certified-operators-96dlc"
Mar 10 20:37:47 crc kubenswrapper[4861]: I0310 20:37:47.921000 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fvzfw\" (UniqueName: \"kubernetes.io/projected/f5ae6fc9-6cc7-40e8-a028-33d067e5f092-kube-api-access-fvzfw\") pod \"certified-operators-96dlc\" (UID: \"f5ae6fc9-6cc7-40e8-a028-33d067e5f092\") " pod="openshift-marketplace/certified-operators-96dlc"
Mar 10 20:37:47 crc kubenswrapper[4861]: I0310 20:37:47.921375 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f5ae6fc9-6cc7-40e8-a028-33d067e5f092-utilities\") pod \"certified-operators-96dlc\" (UID: \"f5ae6fc9-6cc7-40e8-a028-33d067e5f092\") " pod="openshift-marketplace/certified-operators-96dlc"
Mar 10 20:37:48 crc kubenswrapper[4861]: I0310 20:37:48.023504 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fvzfw\" (UniqueName: \"kubernetes.io/projected/f5ae6fc9-6cc7-40e8-a028-33d067e5f092-kube-api-access-fvzfw\") pod \"certified-operators-96dlc\" (UID: \"f5ae6fc9-6cc7-40e8-a028-33d067e5f092\") " pod="openshift-marketplace/certified-operators-96dlc"
Mar 10 20:37:48 crc kubenswrapper[4861]: I0310 20:37:48.023650 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f5ae6fc9-6cc7-40e8-a028-33d067e5f092-utilities\") pod \"certified-operators-96dlc\" (UID: \"f5ae6fc9-6cc7-40e8-a028-33d067e5f092\") " pod="openshift-marketplace/certified-operators-96dlc"
Mar 10 20:37:48 crc kubenswrapper[4861]: I0310 20:37:48.023741 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f5ae6fc9-6cc7-40e8-a028-33d067e5f092-catalog-content\") pod \"certified-operators-96dlc\" (UID: \"f5ae6fc9-6cc7-40e8-a028-33d067e5f092\") " pod="openshift-marketplace/certified-operators-96dlc"
Mar 10 20:37:48 crc kubenswrapper[4861]: I0310 20:37:48.024401 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f5ae6fc9-6cc7-40e8-a028-33d067e5f092-catalog-content\") pod \"certified-operators-96dlc\" (UID: \"f5ae6fc9-6cc7-40e8-a028-33d067e5f092\") " pod="openshift-marketplace/certified-operators-96dlc"
Mar 10 20:37:48 crc kubenswrapper[4861]: I0310 20:37:48.024576 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f5ae6fc9-6cc7-40e8-a028-33d067e5f092-utilities\") pod \"certified-operators-96dlc\" (UID: \"f5ae6fc9-6cc7-40e8-a028-33d067e5f092\") " pod="openshift-marketplace/certified-operators-96dlc"
Mar 10 20:37:48 crc kubenswrapper[4861]: I0310 20:37:48.046413 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fvzfw\" (UniqueName: \"kubernetes.io/projected/f5ae6fc9-6cc7-40e8-a028-33d067e5f092-kube-api-access-fvzfw\") pod \"certified-operators-96dlc\" (UID: \"f5ae6fc9-6cc7-40e8-a028-33d067e5f092\") " pod="openshift-marketplace/certified-operators-96dlc"
Mar 10 20:37:48 crc kubenswrapper[4861]: I0310 20:37:48.178164 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-96dlc"
Mar 10 20:37:48 crc kubenswrapper[4861]: I0310 20:37:48.680454 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-96dlc"]
Mar 10 20:37:49 crc kubenswrapper[4861]: I0310 20:37:49.150648 4861 generic.go:334] "Generic (PLEG): container finished" podID="f5ae6fc9-6cc7-40e8-a028-33d067e5f092" containerID="685b06e1622da1bcc854442bec3095a8ca80034a9037dddeb3eb729702e19420" exitCode=0
Mar 10 20:37:49 crc kubenswrapper[4861]: I0310 20:37:49.150758 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-96dlc" event={"ID":"f5ae6fc9-6cc7-40e8-a028-33d067e5f092","Type":"ContainerDied","Data":"685b06e1622da1bcc854442bec3095a8ca80034a9037dddeb3eb729702e19420"}
Mar 10 20:37:49 crc kubenswrapper[4861]: I0310 20:37:49.150947 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-96dlc" event={"ID":"f5ae6fc9-6cc7-40e8-a028-33d067e5f092","Type":"ContainerStarted","Data":"87a0e11a0f308b09293d8ada0e6ab008e3652115d4315bfb4aab994882af69f1"}
Mar 10 20:37:49 crc kubenswrapper[4861]: I0310 20:37:49.153074 4861 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Mar 10 20:37:50 crc kubenswrapper[4861]: I0310 20:37:50.163955 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-96dlc" event={"ID":"f5ae6fc9-6cc7-40e8-a028-33d067e5f092","Type":"ContainerStarted","Data":"1f0c42aa268add11b6e1b44b2823454dd9313093e261f2cbfff8e74bb93d81d2"}
Mar 10 20:37:51 crc kubenswrapper[4861]: I0310 20:37:51.178895 4861 generic.go:334] "Generic (PLEG): container finished" podID="f5ae6fc9-6cc7-40e8-a028-33d067e5f092" containerID="1f0c42aa268add11b6e1b44b2823454dd9313093e261f2cbfff8e74bb93d81d2" exitCode=0
Mar 10 20:37:51 crc kubenswrapper[4861]: I0310 20:37:51.178970 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-96dlc" event={"ID":"f5ae6fc9-6cc7-40e8-a028-33d067e5f092","Type":"ContainerDied","Data":"1f0c42aa268add11b6e1b44b2823454dd9313093e261f2cbfff8e74bb93d81d2"}
Mar 10 20:37:51 crc kubenswrapper[4861]: I0310 20:37:51.991644 4861 patch_prober.go:28] interesting pod/machine-config-daemon-qttbr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Mar 10 20:37:51 crc kubenswrapper[4861]: I0310 20:37:51.991922 4861 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qttbr" podUID="771189c2-452d-4204-a0b7-abfe9ba62bd0" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Mar 10 20:37:51 crc kubenswrapper[4861]: I0310 20:37:51.991960 4861 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-qttbr"
Mar 10 20:37:51 crc kubenswrapper[4861]: I0310 20:37:51.992742 4861 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"cac8ff2a8e9486664814c9d4c374eb5ab479b9677f123c79806e6c10d880732d"} pod="openshift-machine-config-operator/machine-config-daemon-qttbr" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Mar 10 20:37:51 crc kubenswrapper[4861]: I0310 20:37:51.992803 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-qttbr" podUID="771189c2-452d-4204-a0b7-abfe9ba62bd0" containerName="machine-config-daemon" containerID="cri-o://cac8ff2a8e9486664814c9d4c374eb5ab479b9677f123c79806e6c10d880732d" gracePeriod=600
Mar 10 20:37:52 crc kubenswrapper[4861]: I0310 20:37:52.192254 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-96dlc" event={"ID":"f5ae6fc9-6cc7-40e8-a028-33d067e5f092","Type":"ContainerStarted","Data":"d65fa9cdc84a950894675b03f733b0ecdad23b2cabc7868c9361ce94444ae76c"}
Mar 10 20:37:52 crc kubenswrapper[4861]: I0310 20:37:52.196177 4861 generic.go:334] "Generic (PLEG): container finished" podID="771189c2-452d-4204-a0b7-abfe9ba62bd0" containerID="cac8ff2a8e9486664814c9d4c374eb5ab479b9677f123c79806e6c10d880732d" exitCode=0
Mar 10 20:37:52 crc kubenswrapper[4861]: I0310 20:37:52.196207 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qttbr" event={"ID":"771189c2-452d-4204-a0b7-abfe9ba62bd0","Type":"ContainerDied","Data":"cac8ff2a8e9486664814c9d4c374eb5ab479b9677f123c79806e6c10d880732d"}
Mar 10 20:37:52 crc kubenswrapper[4861]: I0310 20:37:52.196229 4861 scope.go:117] "RemoveContainer" containerID="30617e1722c7582f37ad57fcb8d1dc45aba4f055419f9b178dab9e2fdf629c84"
Mar 10 20:37:52 crc kubenswrapper[4861]: I0310 20:37:52.224889 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-96dlc" podStartSLOduration=2.6264826919999997 podStartE2EDuration="5.224868399s" podCreationTimestamp="2026-03-10 20:37:47 +0000 UTC" firstStartedPulling="2026-03-10 20:37:49.152591388 +0000 UTC m=+6612.916027388" lastFinishedPulling="2026-03-10 20:37:51.750977135 +0000 UTC m=+6615.514413095" observedRunningTime="2026-03-10 20:37:52.217317796 +0000 UTC m=+6615.980753796" watchObservedRunningTime="2026-03-10 20:37:52.224868399 +0000 UTC m=+6615.988304359"
Mar 10 20:37:53 crc kubenswrapper[4861]: I0310 20:37:53.208552 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qttbr" event={"ID":"771189c2-452d-4204-a0b7-abfe9ba62bd0","Type":"ContainerStarted","Data":"5e0a04df5740c120052e23680da7c45126dec6674fa679c211d70be3fb442ea4"}
Mar 10 20:37:58 crc kubenswrapper[4861]: I0310 20:37:58.178411 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-96dlc"
Mar 10 20:37:58 crc kubenswrapper[4861]: I0310 20:37:58.179191 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-96dlc"
Mar 10 20:37:58 crc kubenswrapper[4861]: I0310 20:37:58.257097 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-96dlc"
Mar 10 20:37:58 crc kubenswrapper[4861]: I0310 20:37:58.334394 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-96dlc"
Mar 10 20:37:58 crc kubenswrapper[4861]: I0310 20:37:58.507639 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-96dlc"]
Mar 10 20:38:00 crc kubenswrapper[4861]: I0310 20:38:00.161441 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29552918-c456h"]
Mar 10 20:38:00 crc kubenswrapper[4861]: I0310 20:38:00.163190 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29552918-c456h"
Mar 10 20:38:00 crc kubenswrapper[4861]: I0310 20:38:00.166825 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt"
Mar 10 20:38:00 crc kubenswrapper[4861]: I0310 20:38:00.166832 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-gfbj2"
Mar 10 20:38:00 crc kubenswrapper[4861]: I0310 20:38:00.167152 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt"
Mar 10 20:38:00 crc kubenswrapper[4861]: I0310 20:38:00.177696 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29552918-c456h"]
Mar 10 20:38:00 crc kubenswrapper[4861]: I0310 20:38:00.275672 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-96dlc" podUID="f5ae6fc9-6cc7-40e8-a028-33d067e5f092" containerName="registry-server" containerID="cri-o://d65fa9cdc84a950894675b03f733b0ecdad23b2cabc7868c9361ce94444ae76c" gracePeriod=2
Mar 10 20:38:00 crc kubenswrapper[4861]: I0310 20:38:00.309164 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pqhfp\" (UniqueName: \"kubernetes.io/projected/9352dac9-5c21-4377-a25e-7f1b74dc48bd-kube-api-access-pqhfp\") pod \"auto-csr-approver-29552918-c456h\" (UID: \"9352dac9-5c21-4377-a25e-7f1b74dc48bd\") " pod="openshift-infra/auto-csr-approver-29552918-c456h"
Mar 10 20:38:00 crc kubenswrapper[4861]: I0310 20:38:00.411143 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pqhfp\" (UniqueName: \"kubernetes.io/projected/9352dac9-5c21-4377-a25e-7f1b74dc48bd-kube-api-access-pqhfp\") pod \"auto-csr-approver-29552918-c456h\" (UID: \"9352dac9-5c21-4377-a25e-7f1b74dc48bd\") " pod="openshift-infra/auto-csr-approver-29552918-c456h"
Mar 10 20:38:00 crc kubenswrapper[4861]: I0310 20:38:00.444650 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pqhfp\" (UniqueName: \"kubernetes.io/projected/9352dac9-5c21-4377-a25e-7f1b74dc48bd-kube-api-access-pqhfp\") pod \"auto-csr-approver-29552918-c456h\" (UID: \"9352dac9-5c21-4377-a25e-7f1b74dc48bd\") " pod="openshift-infra/auto-csr-approver-29552918-c456h"
Mar 10 20:38:00 crc kubenswrapper[4861]: I0310 20:38:00.492960 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29552918-c456h"
Mar 10 20:38:00 crc kubenswrapper[4861]: W0310 20:38:00.811611 4861 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9352dac9_5c21_4377_a25e_7f1b74dc48bd.slice/crio-90883d7dd5feef8d5ae349c3d7030e483ff63a719b368dd80760d3a51fdfc424 WatchSource:0}: Error finding container 90883d7dd5feef8d5ae349c3d7030e483ff63a719b368dd80760d3a51fdfc424: Status 404 returned error can't find the container with id 90883d7dd5feef8d5ae349c3d7030e483ff63a719b368dd80760d3a51fdfc424
Mar 10 20:38:00 crc kubenswrapper[4861]: I0310 20:38:00.814307 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29552918-c456h"]
Mar 10 20:38:01 crc kubenswrapper[4861]: I0310 20:38:01.195771 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-96dlc"
Mar 10 20:38:01 crc kubenswrapper[4861]: I0310 20:38:01.288360 4861 generic.go:334] "Generic (PLEG): container finished" podID="f5ae6fc9-6cc7-40e8-a028-33d067e5f092" containerID="d65fa9cdc84a950894675b03f733b0ecdad23b2cabc7868c9361ce94444ae76c" exitCode=0
Mar 10 20:38:01 crc kubenswrapper[4861]: I0310 20:38:01.288859 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-96dlc" event={"ID":"f5ae6fc9-6cc7-40e8-a028-33d067e5f092","Type":"ContainerDied","Data":"d65fa9cdc84a950894675b03f733b0ecdad23b2cabc7868c9361ce94444ae76c"}
Mar 10 20:38:01 crc kubenswrapper[4861]: I0310 20:38:01.288890 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-96dlc"
Mar 10 20:38:01 crc kubenswrapper[4861]: I0310 20:38:01.289152 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-96dlc" event={"ID":"f5ae6fc9-6cc7-40e8-a028-33d067e5f092","Type":"ContainerDied","Data":"87a0e11a0f308b09293d8ada0e6ab008e3652115d4315bfb4aab994882af69f1"}
Mar 10 20:38:01 crc kubenswrapper[4861]: I0310 20:38:01.289210 4861 scope.go:117] "RemoveContainer" containerID="d65fa9cdc84a950894675b03f733b0ecdad23b2cabc7868c9361ce94444ae76c"
Mar 10 20:38:01 crc kubenswrapper[4861]: I0310 20:38:01.290815 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29552918-c456h" event={"ID":"9352dac9-5c21-4377-a25e-7f1b74dc48bd","Type":"ContainerStarted","Data":"90883d7dd5feef8d5ae349c3d7030e483ff63a719b368dd80760d3a51fdfc424"}
Mar 10 20:38:01 crc kubenswrapper[4861]: I0310 20:38:01.307217 4861 scope.go:117] "RemoveContainer" containerID="1f0c42aa268add11b6e1b44b2823454dd9313093e261f2cbfff8e74bb93d81d2"
Mar 10 20:38:01 crc kubenswrapper[4861]: I0310 20:38:01.326922 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f5ae6fc9-6cc7-40e8-a028-33d067e5f092-catalog-content\") pod \"f5ae6fc9-6cc7-40e8-a028-33d067e5f092\" (UID: \"f5ae6fc9-6cc7-40e8-a028-33d067e5f092\") "
Mar 10 20:38:01 crc kubenswrapper[4861]: I0310 20:38:01.326988 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f5ae6fc9-6cc7-40e8-a028-33d067e5f092-utilities\") pod \"f5ae6fc9-6cc7-40e8-a028-33d067e5f092\" (UID: \"f5ae6fc9-6cc7-40e8-a028-33d067e5f092\") "
Mar 10 20:38:01 crc kubenswrapper[4861]: I0310 20:38:01.327080 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fvzfw\" (UniqueName: \"kubernetes.io/projected/f5ae6fc9-6cc7-40e8-a028-33d067e5f092-kube-api-access-fvzfw\") pod \"f5ae6fc9-6cc7-40e8-a028-33d067e5f092\" (UID: \"f5ae6fc9-6cc7-40e8-a028-33d067e5f092\") "
Mar 10 20:38:01 crc kubenswrapper[4861]: I0310 20:38:01.327934 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f5ae6fc9-6cc7-40e8-a028-33d067e5f092-utilities" (OuterVolumeSpecName: "utilities") pod "f5ae6fc9-6cc7-40e8-a028-33d067e5f092" (UID: "f5ae6fc9-6cc7-40e8-a028-33d067e5f092"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 10 20:38:01 crc kubenswrapper[4861]: I0310 20:38:01.328318 4861 scope.go:117] "RemoveContainer" containerID="685b06e1622da1bcc854442bec3095a8ca80034a9037dddeb3eb729702e19420"
Mar 10 20:38:01 crc kubenswrapper[4861]: I0310 20:38:01.336482 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f5ae6fc9-6cc7-40e8-a028-33d067e5f092-kube-api-access-fvzfw" (OuterVolumeSpecName: "kube-api-access-fvzfw") pod "f5ae6fc9-6cc7-40e8-a028-33d067e5f092" (UID: "f5ae6fc9-6cc7-40e8-a028-33d067e5f092"). InnerVolumeSpecName "kube-api-access-fvzfw". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 10 20:38:01 crc kubenswrapper[4861]: I0310 20:38:01.359723 4861 scope.go:117] "RemoveContainer" containerID="d65fa9cdc84a950894675b03f733b0ecdad23b2cabc7868c9361ce94444ae76c"
Mar 10 20:38:01 crc kubenswrapper[4861]: E0310 20:38:01.360204 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d65fa9cdc84a950894675b03f733b0ecdad23b2cabc7868c9361ce94444ae76c\": container with ID starting with d65fa9cdc84a950894675b03f733b0ecdad23b2cabc7868c9361ce94444ae76c not found: ID does not exist" containerID="d65fa9cdc84a950894675b03f733b0ecdad23b2cabc7868c9361ce94444ae76c"
Mar 10 20:38:01 crc kubenswrapper[4861]: I0310 20:38:01.360245 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d65fa9cdc84a950894675b03f733b0ecdad23b2cabc7868c9361ce94444ae76c"} err="failed to get container status \"d65fa9cdc84a950894675b03f733b0ecdad23b2cabc7868c9361ce94444ae76c\": rpc error: code = NotFound desc = could not find container \"d65fa9cdc84a950894675b03f733b0ecdad23b2cabc7868c9361ce94444ae76c\": container with ID starting with d65fa9cdc84a950894675b03f733b0ecdad23b2cabc7868c9361ce94444ae76c not found: ID does not exist"
Mar 10 20:38:01 crc kubenswrapper[4861]: I0310 20:38:01.360274 4861 scope.go:117] "RemoveContainer" containerID="1f0c42aa268add11b6e1b44b2823454dd9313093e261f2cbfff8e74bb93d81d2"
Mar 10 20:38:01 crc kubenswrapper[4861]: E0310 20:38:01.360849 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1f0c42aa268add11b6e1b44b2823454dd9313093e261f2cbfff8e74bb93d81d2\": container with ID starting with 1f0c42aa268add11b6e1b44b2823454dd9313093e261f2cbfff8e74bb93d81d2 not found: ID does not exist" containerID="1f0c42aa268add11b6e1b44b2823454dd9313093e261f2cbfff8e74bb93d81d2"
Mar 10 20:38:01 crc kubenswrapper[4861]: I0310 20:38:01.360891 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1f0c42aa268add11b6e1b44b2823454dd9313093e261f2cbfff8e74bb93d81d2"} err="failed to get container status \"1f0c42aa268add11b6e1b44b2823454dd9313093e261f2cbfff8e74bb93d81d2\": rpc error: code = NotFound desc = could not find container \"1f0c42aa268add11b6e1b44b2823454dd9313093e261f2cbfff8e74bb93d81d2\": container with ID starting with 1f0c42aa268add11b6e1b44b2823454dd9313093e261f2cbfff8e74bb93d81d2 not found: ID does not exist"
Mar 10 20:38:01 crc kubenswrapper[4861]: I0310 20:38:01.360941 4861 scope.go:117] "RemoveContainer" containerID="685b06e1622da1bcc854442bec3095a8ca80034a9037dddeb3eb729702e19420"
Mar 10 20:38:01 crc kubenswrapper[4861]: E0310 20:38:01.361384 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"685b06e1622da1bcc854442bec3095a8ca80034a9037dddeb3eb729702e19420\": container with ID starting with 685b06e1622da1bcc854442bec3095a8ca80034a9037dddeb3eb729702e19420 not found: ID does not exist" containerID="685b06e1622da1bcc854442bec3095a8ca80034a9037dddeb3eb729702e19420"
Mar 10 20:38:01 crc kubenswrapper[4861]: I0310 20:38:01.361410 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"685b06e1622da1bcc854442bec3095a8ca80034a9037dddeb3eb729702e19420"} err="failed to get container status \"685b06e1622da1bcc854442bec3095a8ca80034a9037dddeb3eb729702e19420\": rpc error: code = NotFound desc = could not find container \"685b06e1622da1bcc854442bec3095a8ca80034a9037dddeb3eb729702e19420\": container with ID starting with 685b06e1622da1bcc854442bec3095a8ca80034a9037dddeb3eb729702e19420 not found: ID does not exist"
Mar 10 20:38:01 crc kubenswrapper[4861]: I0310 20:38:01.418228 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f5ae6fc9-6cc7-40e8-a028-33d067e5f092-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f5ae6fc9-6cc7-40e8-a028-33d067e5f092" (UID: "f5ae6fc9-6cc7-40e8-a028-33d067e5f092"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 10 20:38:01 crc kubenswrapper[4861]: I0310 20:38:01.428951 4861 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f5ae6fc9-6cc7-40e8-a028-33d067e5f092-catalog-content\") on node \"crc\" DevicePath \"\""
Mar 10 20:38:01 crc kubenswrapper[4861]: I0310 20:38:01.428993 4861 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f5ae6fc9-6cc7-40e8-a028-33d067e5f092-utilities\") on node \"crc\" DevicePath \"\""
Mar 10 20:38:01 crc kubenswrapper[4861]: I0310 20:38:01.429009 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fvzfw\" (UniqueName: \"kubernetes.io/projected/f5ae6fc9-6cc7-40e8-a028-33d067e5f092-kube-api-access-fvzfw\") on node \"crc\" DevicePath \"\""
Mar 10 20:38:01 crc kubenswrapper[4861]: I0310 20:38:01.653783 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-96dlc"]
Mar 10 20:38:01 crc kubenswrapper[4861]: I0310 20:38:01.666365 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-96dlc"]
Mar 10 20:38:02 crc kubenswrapper[4861]: I0310 20:38:02.305070 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29552918-c456h" event={"ID":"9352dac9-5c21-4377-a25e-7f1b74dc48bd","Type":"ContainerStarted","Data":"c8debb7d03681141f99aaaa1ffb9353dc4bf5566700cea157eac79abfec6797c"}
Mar 10 20:38:02 crc kubenswrapper[4861]: I0310 20:38:02.322361 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29552918-c456h" podStartSLOduration=1.398628042 podStartE2EDuration="2.322334202s" podCreationTimestamp="2026-03-10 20:38:00 +0000 UTC" firstStartedPulling="2026-03-10 20:38:00.813765244 +0000 UTC m=+6624.577201204" lastFinishedPulling="2026-03-10 20:38:01.737471364 +0000 UTC m=+6625.500907364" observedRunningTime="2026-03-10 20:38:02.321110398 +0000 UTC m=+6626.084546378" watchObservedRunningTime="2026-03-10 20:38:02.322334202 +0000 UTC m=+6626.085770192"
Mar 10 20:38:02 crc kubenswrapper[4861]: I0310 20:38:02.974892 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f5ae6fc9-6cc7-40e8-a028-33d067e5f092" path="/var/lib/kubelet/pods/f5ae6fc9-6cc7-40e8-a028-33d067e5f092/volumes"
Mar 10 20:38:03 crc kubenswrapper[4861]: I0310 20:38:03.320992 4861 generic.go:334] "Generic (PLEG): container finished" podID="9352dac9-5c21-4377-a25e-7f1b74dc48bd" containerID="c8debb7d03681141f99aaaa1ffb9353dc4bf5566700cea157eac79abfec6797c" exitCode=0
Mar 10 20:38:03 crc kubenswrapper[4861]: I0310 20:38:03.321089 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29552918-c456h" event={"ID":"9352dac9-5c21-4377-a25e-7f1b74dc48bd","Type":"ContainerDied","Data":"c8debb7d03681141f99aaaa1ffb9353dc4bf5566700cea157eac79abfec6797c"}
Mar 10 20:38:04 crc kubenswrapper[4861]: I0310 20:38:04.762712 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29552918-c456h"
Mar 10 20:38:04 crc kubenswrapper[4861]: I0310 20:38:04.929012 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pqhfp\" (UniqueName: \"kubernetes.io/projected/9352dac9-5c21-4377-a25e-7f1b74dc48bd-kube-api-access-pqhfp\") pod \"9352dac9-5c21-4377-a25e-7f1b74dc48bd\" (UID: \"9352dac9-5c21-4377-a25e-7f1b74dc48bd\") "
Mar 10 20:38:04 crc kubenswrapper[4861]: I0310 20:38:04.938232 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9352dac9-5c21-4377-a25e-7f1b74dc48bd-kube-api-access-pqhfp" (OuterVolumeSpecName: "kube-api-access-pqhfp") pod "9352dac9-5c21-4377-a25e-7f1b74dc48bd" (UID: "9352dac9-5c21-4377-a25e-7f1b74dc48bd"). InnerVolumeSpecName "kube-api-access-pqhfp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 10 20:38:05 crc kubenswrapper[4861]: I0310 20:38:05.032062 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pqhfp\" (UniqueName: \"kubernetes.io/projected/9352dac9-5c21-4377-a25e-7f1b74dc48bd-kube-api-access-pqhfp\") on node \"crc\" DevicePath \"\""
Mar 10 20:38:05 crc kubenswrapper[4861]: I0310 20:38:05.356154 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29552918-c456h" event={"ID":"9352dac9-5c21-4377-a25e-7f1b74dc48bd","Type":"ContainerDied","Data":"90883d7dd5feef8d5ae349c3d7030e483ff63a719b368dd80760d3a51fdfc424"}
Mar 10 20:38:05 crc kubenswrapper[4861]: I0310 20:38:05.356219 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29552918-c456h"
Mar 10 20:38:05 crc kubenswrapper[4861]: I0310 20:38:05.356239 4861 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="90883d7dd5feef8d5ae349c3d7030e483ff63a719b368dd80760d3a51fdfc424"
Mar 10 20:38:05 crc kubenswrapper[4861]: I0310 20:38:05.419833 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29552912-pvjdw"]
Mar 10 20:38:05 crc kubenswrapper[4861]: I0310 20:38:05.433672 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29552912-pvjdw"]
Mar 10 20:38:06 crc kubenswrapper[4861]: I0310 20:38:06.976608 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6e4c8acb-7b48-4c67-aa41-0c3cf6f071ed" path="/var/lib/kubelet/pods/6e4c8acb-7b48-4c67-aa41-0c3cf6f071ed/volumes"
Mar 10 20:39:00 crc kubenswrapper[4861]: I0310 20:39:00.080541 4861 scope.go:117] "RemoveContainer" containerID="2c90b2cac7bfd4de928a6047890b389db67ce0d666e7d44940cb9edd34bb4dde"
Mar 10 20:40:00 crc kubenswrapper[4861]: I0310 20:40:00.163322 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29552920-69pcx"]
Mar 10 20:40:00 crc kubenswrapper[4861]: E0310 20:40:00.164498 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f5ae6fc9-6cc7-40e8-a028-33d067e5f092" containerName="registry-server"
Mar 10 20:40:00 crc kubenswrapper[4861]: I0310 20:40:00.164520 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="f5ae6fc9-6cc7-40e8-a028-33d067e5f092" containerName="registry-server"
Mar 10 20:40:00 crc kubenswrapper[4861]: E0310 20:40:00.164555 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f5ae6fc9-6cc7-40e8-a028-33d067e5f092" containerName="extract-utilities"
Mar 10 20:40:00 crc kubenswrapper[4861]: I0310 20:40:00.164570 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="f5ae6fc9-6cc7-40e8-a028-33d067e5f092" containerName="extract-utilities"
Mar 10 20:40:00 crc kubenswrapper[4861]: E0310 20:40:00.164602 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f5ae6fc9-6cc7-40e8-a028-33d067e5f092" containerName="extract-content"
Mar 10 20:40:00 crc kubenswrapper[4861]: I0310 20:40:00.164615 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="f5ae6fc9-6cc7-40e8-a028-33d067e5f092" containerName="extract-content"
Mar 10 20:40:00 crc kubenswrapper[4861]: E0310 20:40:00.164641 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9352dac9-5c21-4377-a25e-7f1b74dc48bd" containerName="oc"
Mar 10 20:40:00 crc kubenswrapper[4861]: I0310 20:40:00.164652 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="9352dac9-5c21-4377-a25e-7f1b74dc48bd" containerName="oc"
Mar 10 20:40:00 crc kubenswrapper[4861]: I0310 20:40:00.165024 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="9352dac9-5c21-4377-a25e-7f1b74dc48bd" containerName="oc"
Mar 10 20:40:00 crc kubenswrapper[4861]: I0310 20:40:00.165071 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="f5ae6fc9-6cc7-40e8-a028-33d067e5f092" containerName="registry-server"
Mar 10 20:40:00 crc kubenswrapper[4861]: I0310 20:40:00.165999 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29552920-69pcx"
Mar 10 20:40:00 crc kubenswrapper[4861]: I0310 20:40:00.177065 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-gfbj2"
Mar 10 20:40:00 crc kubenswrapper[4861]: I0310 20:40:00.177548 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt"
Mar 10 20:40:00 crc kubenswrapper[4861]: I0310 20:40:00.178425 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt"
Mar 10 20:40:00 crc kubenswrapper[4861]: I0310 20:40:00.180308 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29552920-69pcx"]
Mar 10 20:40:00 crc kubenswrapper[4861]: I0310 20:40:00.248266 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s9t9h\" (UniqueName: \"kubernetes.io/projected/4b33758a-4a8f-4ead-88e5-e2236394b7ff-kube-api-access-s9t9h\") pod \"auto-csr-approver-29552920-69pcx\" (UID: \"4b33758a-4a8f-4ead-88e5-e2236394b7ff\") " pod="openshift-infra/auto-csr-approver-29552920-69pcx"
Mar 10 20:40:00 crc kubenswrapper[4861]: I0310 20:40:00.350358 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s9t9h\" (UniqueName: \"kubernetes.io/projected/4b33758a-4a8f-4ead-88e5-e2236394b7ff-kube-api-access-s9t9h\") pod \"auto-csr-approver-29552920-69pcx\" (UID: \"4b33758a-4a8f-4ead-88e5-e2236394b7ff\") " pod="openshift-infra/auto-csr-approver-29552920-69pcx"
Mar 10 20:40:00 crc kubenswrapper[4861]: I0310 20:40:00.377007 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s9t9h\" (UniqueName: \"kubernetes.io/projected/4b33758a-4a8f-4ead-88e5-e2236394b7ff-kube-api-access-s9t9h\") pod \"auto-csr-approver-29552920-69pcx\" (UID: \"4b33758a-4a8f-4ead-88e5-e2236394b7ff\") " pod="openshift-infra/auto-csr-approver-29552920-69pcx"
Mar 10 20:40:00 crc kubenswrapper[4861]: I0310 20:40:00.536570 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29552920-69pcx"
Mar 10 20:40:01 crc kubenswrapper[4861]: I0310 20:40:01.055694 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29552920-69pcx"]
Mar 10 20:40:01 crc kubenswrapper[4861]: I0310 20:40:01.538315 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29552920-69pcx" event={"ID":"4b33758a-4a8f-4ead-88e5-e2236394b7ff","Type":"ContainerStarted","Data":"3b1ac4cdee7d778f8960f3409dc3772869255e5c963507a02b037570367a3638"}
Mar 10 20:40:03 crc kubenswrapper[4861]: I0310 20:40:03.561082 4861 generic.go:334] "Generic (PLEG): container finished" podID="4b33758a-4a8f-4ead-88e5-e2236394b7ff" containerID="10315fd671b22a87ce6484fdcaa32763ecfd077b095fd8e76033b69aceb2f097" exitCode=0
Mar 10 20:40:03 crc kubenswrapper[4861]: I0310 20:40:03.561177 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29552920-69pcx" event={"ID":"4b33758a-4a8f-4ead-88e5-e2236394b7ff","Type":"ContainerDied","Data":"10315fd671b22a87ce6484fdcaa32763ecfd077b095fd8e76033b69aceb2f097"}
Mar 10 20:40:03 crc kubenswrapper[4861]: I0310 20:40:03.577845 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-86ltq"]
Mar 10 20:40:03 crc kubenswrapper[4861]: I0310 20:40:03.580550 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-86ltq"
Mar 10 20:40:03 crc kubenswrapper[4861]: I0310 20:40:03.615066 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-86ltq"]
Mar 10 20:40:03 crc kubenswrapper[4861]: I0310 20:40:03.715219 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kwx96\" (UniqueName: \"kubernetes.io/projected/a8bb23f1-53c0-4319-86c5-de0fb99a1e2c-kube-api-access-kwx96\") pod \"community-operators-86ltq\" (UID: \"a8bb23f1-53c0-4319-86c5-de0fb99a1e2c\") " pod="openshift-marketplace/community-operators-86ltq"
Mar 10 20:40:03 crc kubenswrapper[4861]: I0310 20:40:03.715287 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a8bb23f1-53c0-4319-86c5-de0fb99a1e2c-utilities\") pod \"community-operators-86ltq\" (UID: \"a8bb23f1-53c0-4319-86c5-de0fb99a1e2c\") " pod="openshift-marketplace/community-operators-86ltq"
Mar 10 20:40:03 crc kubenswrapper[4861]: I0310 20:40:03.715395 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a8bb23f1-53c0-4319-86c5-de0fb99a1e2c-catalog-content\") pod \"community-operators-86ltq\" (UID: \"a8bb23f1-53c0-4319-86c5-de0fb99a1e2c\") " pod="openshift-marketplace/community-operators-86ltq"
Mar 10 20:40:03 crc kubenswrapper[4861]: I0310 20:40:03.816690 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kwx96\" (UniqueName: \"kubernetes.io/projected/a8bb23f1-53c0-4319-86c5-de0fb99a1e2c-kube-api-access-kwx96\") pod \"community-operators-86ltq\" (UID: \"a8bb23f1-53c0-4319-86c5-de0fb99a1e2c\") " pod="openshift-marketplace/community-operators-86ltq"
Mar 10 20:40:03 crc kubenswrapper[4861]: I0310 20:40:03.816763 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a8bb23f1-53c0-4319-86c5-de0fb99a1e2c-utilities\") pod \"community-operators-86ltq\" (UID: \"a8bb23f1-53c0-4319-86c5-de0fb99a1e2c\") " pod="openshift-marketplace/community-operators-86ltq"
Mar 10 20:40:03 crc kubenswrapper[4861]: I0310 20:40:03.816876 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a8bb23f1-53c0-4319-86c5-de0fb99a1e2c-catalog-content\") pod \"community-operators-86ltq\" (UID: \"a8bb23f1-53c0-4319-86c5-de0fb99a1e2c\") " pod="openshift-marketplace/community-operators-86ltq"
Mar 10 20:40:03 crc kubenswrapper[4861]: I0310 20:40:03.817363 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a8bb23f1-53c0-4319-86c5-de0fb99a1e2c-utilities\") pod \"community-operators-86ltq\" (UID: \"a8bb23f1-53c0-4319-86c5-de0fb99a1e2c\") " pod="openshift-marketplace/community-operators-86ltq"
Mar 10 20:40:03 crc kubenswrapper[4861]: I0310 20:40:03.817409 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a8bb23f1-53c0-4319-86c5-de0fb99a1e2c-catalog-content\") pod \"community-operators-86ltq\" (UID: \"a8bb23f1-53c0-4319-86c5-de0fb99a1e2c\") " pod="openshift-marketplace/community-operators-86ltq"
Mar 10 20:40:03 crc kubenswrapper[4861]: I0310 20:40:03.845740 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kwx96\" (UniqueName: \"kubernetes.io/projected/a8bb23f1-53c0-4319-86c5-de0fb99a1e2c-kube-api-access-kwx96\") pod \"community-operators-86ltq\" (UID: \"a8bb23f1-53c0-4319-86c5-de0fb99a1e2c\") " pod="openshift-marketplace/community-operators-86ltq"
Mar 10 20:40:03 crc kubenswrapper[4861]: I0310 20:40:03.911697 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-86ltq"
Mar 10 20:40:04 crc kubenswrapper[4861]: I0310 20:40:04.378921 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-86ltq"]
Mar 10 20:40:04 crc kubenswrapper[4861]: W0310 20:40:04.384084 4861 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda8bb23f1_53c0_4319_86c5_de0fb99a1e2c.slice/crio-9917aff8fb4617844ee355b7f6ae7ed7eaa88b61d7d4f29a65ad2fa2dbe91f88 WatchSource:0}: Error finding container 9917aff8fb4617844ee355b7f6ae7ed7eaa88b61d7d4f29a65ad2fa2dbe91f88: Status 404 returned error can't find the container with id 9917aff8fb4617844ee355b7f6ae7ed7eaa88b61d7d4f29a65ad2fa2dbe91f88
Mar 10 20:40:04 crc kubenswrapper[4861]: I0310 20:40:04.577526 4861 generic.go:334] "Generic (PLEG): container finished" podID="a8bb23f1-53c0-4319-86c5-de0fb99a1e2c" containerID="14a1cfef85a868492c455e2845afa2eb979a68e9c6cb32137278b9b144e08d05" exitCode=0
Mar 10 20:40:04 crc kubenswrapper[4861]: I0310 20:40:04.577589 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-86ltq" event={"ID":"a8bb23f1-53c0-4319-86c5-de0fb99a1e2c","Type":"ContainerDied","Data":"14a1cfef85a868492c455e2845afa2eb979a68e9c6cb32137278b9b144e08d05"}
Mar 10 20:40:04 crc kubenswrapper[4861]: I0310 20:40:04.577981 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-86ltq" event={"ID":"a8bb23f1-53c0-4319-86c5-de0fb99a1e2c","Type":"ContainerStarted","Data":"9917aff8fb4617844ee355b7f6ae7ed7eaa88b61d7d4f29a65ad2fa2dbe91f88"}
Mar 10 20:40:04 crc kubenswrapper[4861]: I0310 20:40:04.960402 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29552920-69pcx" Mar 10 20:40:05 crc kubenswrapper[4861]: I0310 20:40:05.036477 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s9t9h\" (UniqueName: \"kubernetes.io/projected/4b33758a-4a8f-4ead-88e5-e2236394b7ff-kube-api-access-s9t9h\") pod \"4b33758a-4a8f-4ead-88e5-e2236394b7ff\" (UID: \"4b33758a-4a8f-4ead-88e5-e2236394b7ff\") " Mar 10 20:40:05 crc kubenswrapper[4861]: I0310 20:40:05.046344 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4b33758a-4a8f-4ead-88e5-e2236394b7ff-kube-api-access-s9t9h" (OuterVolumeSpecName: "kube-api-access-s9t9h") pod "4b33758a-4a8f-4ead-88e5-e2236394b7ff" (UID: "4b33758a-4a8f-4ead-88e5-e2236394b7ff"). InnerVolumeSpecName "kube-api-access-s9t9h". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 10 20:40:05 crc kubenswrapper[4861]: I0310 20:40:05.140187 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s9t9h\" (UniqueName: \"kubernetes.io/projected/4b33758a-4a8f-4ead-88e5-e2236394b7ff-kube-api-access-s9t9h\") on node \"crc\" DevicePath \"\"" Mar 10 20:40:05 crc kubenswrapper[4861]: I0310 20:40:05.589855 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29552920-69pcx" event={"ID":"4b33758a-4a8f-4ead-88e5-e2236394b7ff","Type":"ContainerDied","Data":"3b1ac4cdee7d778f8960f3409dc3772869255e5c963507a02b037570367a3638"} Mar 10 20:40:05 crc kubenswrapper[4861]: I0310 20:40:05.591043 4861 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3b1ac4cdee7d778f8960f3409dc3772869255e5c963507a02b037570367a3638" Mar 10 20:40:05 crc kubenswrapper[4861]: I0310 20:40:05.589923 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29552920-69pcx" Mar 10 20:40:05 crc kubenswrapper[4861]: I0310 20:40:05.603114 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-86ltq" event={"ID":"a8bb23f1-53c0-4319-86c5-de0fb99a1e2c","Type":"ContainerStarted","Data":"64709206a016f29134bbabefe229c631e45438dcc23c1e8a11edad5bae5837b6"} Mar 10 20:40:06 crc kubenswrapper[4861]: I0310 20:40:06.048587 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29552914-jd5mv"] Mar 10 20:40:06 crc kubenswrapper[4861]: I0310 20:40:06.062236 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29552914-jd5mv"] Mar 10 20:40:06 crc kubenswrapper[4861]: I0310 20:40:06.614935 4861 generic.go:334] "Generic (PLEG): container finished" podID="a8bb23f1-53c0-4319-86c5-de0fb99a1e2c" containerID="64709206a016f29134bbabefe229c631e45438dcc23c1e8a11edad5bae5837b6" exitCode=0 Mar 10 20:40:06 crc kubenswrapper[4861]: I0310 20:40:06.615021 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-86ltq" event={"ID":"a8bb23f1-53c0-4319-86c5-de0fb99a1e2c","Type":"ContainerDied","Data":"64709206a016f29134bbabefe229c631e45438dcc23c1e8a11edad5bae5837b6"} Mar 10 20:40:06 crc kubenswrapper[4861]: I0310 20:40:06.974973 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f4d3afc-0bde-4e92-99bf-7231448cfa1a" path="/var/lib/kubelet/pods/8f4d3afc-0bde-4e92-99bf-7231448cfa1a/volumes" Mar 10 20:40:07 crc kubenswrapper[4861]: I0310 20:40:07.629053 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-86ltq" event={"ID":"a8bb23f1-53c0-4319-86c5-de0fb99a1e2c","Type":"ContainerStarted","Data":"1367bf23533c26e532990d939dfa8476b3ff2405ae3b543465a06041f325a17b"} Mar 10 20:40:13 crc kubenswrapper[4861]: I0310 20:40:13.912808 4861 kubelet.go:2542] "SyncLoop 
(probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-86ltq" Mar 10 20:40:13 crc kubenswrapper[4861]: I0310 20:40:13.913487 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-86ltq" Mar 10 20:40:13 crc kubenswrapper[4861]: I0310 20:40:13.989361 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-86ltq" Mar 10 20:40:14 crc kubenswrapper[4861]: I0310 20:40:14.020180 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-86ltq" podStartSLOduration=8.567024001 podStartE2EDuration="11.02016269s" podCreationTimestamp="2026-03-10 20:40:03 +0000 UTC" firstStartedPulling="2026-03-10 20:40:04.579218549 +0000 UTC m=+6748.342654549" lastFinishedPulling="2026-03-10 20:40:07.032357238 +0000 UTC m=+6750.795793238" observedRunningTime="2026-03-10 20:40:07.660304869 +0000 UTC m=+6751.423740859" watchObservedRunningTime="2026-03-10 20:40:14.02016269 +0000 UTC m=+6757.783598650" Mar 10 20:40:14 crc kubenswrapper[4861]: I0310 20:40:14.783175 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-86ltq" Mar 10 20:40:14 crc kubenswrapper[4861]: I0310 20:40:14.844316 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-86ltq"] Mar 10 20:40:16 crc kubenswrapper[4861]: I0310 20:40:16.728302 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-86ltq" podUID="a8bb23f1-53c0-4319-86c5-de0fb99a1e2c" containerName="registry-server" containerID="cri-o://1367bf23533c26e532990d939dfa8476b3ff2405ae3b543465a06041f325a17b" gracePeriod=2 Mar 10 20:40:17 crc kubenswrapper[4861]: I0310 20:40:17.646193 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-86ltq" Mar 10 20:40:17 crc kubenswrapper[4861]: I0310 20:40:17.710366 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a8bb23f1-53c0-4319-86c5-de0fb99a1e2c-catalog-content\") pod \"a8bb23f1-53c0-4319-86c5-de0fb99a1e2c\" (UID: \"a8bb23f1-53c0-4319-86c5-de0fb99a1e2c\") " Mar 10 20:40:17 crc kubenswrapper[4861]: I0310 20:40:17.710608 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a8bb23f1-53c0-4319-86c5-de0fb99a1e2c-utilities\") pod \"a8bb23f1-53c0-4319-86c5-de0fb99a1e2c\" (UID: \"a8bb23f1-53c0-4319-86c5-de0fb99a1e2c\") " Mar 10 20:40:17 crc kubenswrapper[4861]: I0310 20:40:17.710808 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kwx96\" (UniqueName: \"kubernetes.io/projected/a8bb23f1-53c0-4319-86c5-de0fb99a1e2c-kube-api-access-kwx96\") pod \"a8bb23f1-53c0-4319-86c5-de0fb99a1e2c\" (UID: \"a8bb23f1-53c0-4319-86c5-de0fb99a1e2c\") " Mar 10 20:40:17 crc kubenswrapper[4861]: I0310 20:40:17.712206 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a8bb23f1-53c0-4319-86c5-de0fb99a1e2c-utilities" (OuterVolumeSpecName: "utilities") pod "a8bb23f1-53c0-4319-86c5-de0fb99a1e2c" (UID: "a8bb23f1-53c0-4319-86c5-de0fb99a1e2c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 10 20:40:17 crc kubenswrapper[4861]: I0310 20:40:17.720342 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a8bb23f1-53c0-4319-86c5-de0fb99a1e2c-kube-api-access-kwx96" (OuterVolumeSpecName: "kube-api-access-kwx96") pod "a8bb23f1-53c0-4319-86c5-de0fb99a1e2c" (UID: "a8bb23f1-53c0-4319-86c5-de0fb99a1e2c"). InnerVolumeSpecName "kube-api-access-kwx96". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 10 20:40:17 crc kubenswrapper[4861]: I0310 20:40:17.738897 4861 generic.go:334] "Generic (PLEG): container finished" podID="a8bb23f1-53c0-4319-86c5-de0fb99a1e2c" containerID="1367bf23533c26e532990d939dfa8476b3ff2405ae3b543465a06041f325a17b" exitCode=0 Mar 10 20:40:17 crc kubenswrapper[4861]: I0310 20:40:17.738945 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-86ltq" event={"ID":"a8bb23f1-53c0-4319-86c5-de0fb99a1e2c","Type":"ContainerDied","Data":"1367bf23533c26e532990d939dfa8476b3ff2405ae3b543465a06041f325a17b"} Mar 10 20:40:17 crc kubenswrapper[4861]: I0310 20:40:17.738963 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-86ltq" Mar 10 20:40:17 crc kubenswrapper[4861]: I0310 20:40:17.739020 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-86ltq" event={"ID":"a8bb23f1-53c0-4319-86c5-de0fb99a1e2c","Type":"ContainerDied","Data":"9917aff8fb4617844ee355b7f6ae7ed7eaa88b61d7d4f29a65ad2fa2dbe91f88"} Mar 10 20:40:17 crc kubenswrapper[4861]: I0310 20:40:17.739043 4861 scope.go:117] "RemoveContainer" containerID="1367bf23533c26e532990d939dfa8476b3ff2405ae3b543465a06041f325a17b" Mar 10 20:40:17 crc kubenswrapper[4861]: I0310 20:40:17.761996 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a8bb23f1-53c0-4319-86c5-de0fb99a1e2c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a8bb23f1-53c0-4319-86c5-de0fb99a1e2c" (UID: "a8bb23f1-53c0-4319-86c5-de0fb99a1e2c"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 10 20:40:17 crc kubenswrapper[4861]: I0310 20:40:17.778655 4861 scope.go:117] "RemoveContainer" containerID="64709206a016f29134bbabefe229c631e45438dcc23c1e8a11edad5bae5837b6" Mar 10 20:40:17 crc kubenswrapper[4861]: I0310 20:40:17.796614 4861 scope.go:117] "RemoveContainer" containerID="14a1cfef85a868492c455e2845afa2eb979a68e9c6cb32137278b9b144e08d05" Mar 10 20:40:17 crc kubenswrapper[4861]: I0310 20:40:17.812567 4861 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a8bb23f1-53c0-4319-86c5-de0fb99a1e2c-utilities\") on node \"crc\" DevicePath \"\"" Mar 10 20:40:17 crc kubenswrapper[4861]: I0310 20:40:17.812606 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kwx96\" (UniqueName: \"kubernetes.io/projected/a8bb23f1-53c0-4319-86c5-de0fb99a1e2c-kube-api-access-kwx96\") on node \"crc\" DevicePath \"\"" Mar 10 20:40:17 crc kubenswrapper[4861]: I0310 20:40:17.812620 4861 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a8bb23f1-53c0-4319-86c5-de0fb99a1e2c-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 10 20:40:17 crc kubenswrapper[4861]: I0310 20:40:17.842319 4861 scope.go:117] "RemoveContainer" containerID="1367bf23533c26e532990d939dfa8476b3ff2405ae3b543465a06041f325a17b" Mar 10 20:40:17 crc kubenswrapper[4861]: E0310 20:40:17.842740 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1367bf23533c26e532990d939dfa8476b3ff2405ae3b543465a06041f325a17b\": container with ID starting with 1367bf23533c26e532990d939dfa8476b3ff2405ae3b543465a06041f325a17b not found: ID does not exist" containerID="1367bf23533c26e532990d939dfa8476b3ff2405ae3b543465a06041f325a17b" Mar 10 20:40:17 crc kubenswrapper[4861]: I0310 20:40:17.842769 4861 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"1367bf23533c26e532990d939dfa8476b3ff2405ae3b543465a06041f325a17b"} err="failed to get container status \"1367bf23533c26e532990d939dfa8476b3ff2405ae3b543465a06041f325a17b\": rpc error: code = NotFound desc = could not find container \"1367bf23533c26e532990d939dfa8476b3ff2405ae3b543465a06041f325a17b\": container with ID starting with 1367bf23533c26e532990d939dfa8476b3ff2405ae3b543465a06041f325a17b not found: ID does not exist" Mar 10 20:40:17 crc kubenswrapper[4861]: I0310 20:40:17.842789 4861 scope.go:117] "RemoveContainer" containerID="64709206a016f29134bbabefe229c631e45438dcc23c1e8a11edad5bae5837b6" Mar 10 20:40:17 crc kubenswrapper[4861]: E0310 20:40:17.843169 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"64709206a016f29134bbabefe229c631e45438dcc23c1e8a11edad5bae5837b6\": container with ID starting with 64709206a016f29134bbabefe229c631e45438dcc23c1e8a11edad5bae5837b6 not found: ID does not exist" containerID="64709206a016f29134bbabefe229c631e45438dcc23c1e8a11edad5bae5837b6" Mar 10 20:40:17 crc kubenswrapper[4861]: I0310 20:40:17.843188 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"64709206a016f29134bbabefe229c631e45438dcc23c1e8a11edad5bae5837b6"} err="failed to get container status \"64709206a016f29134bbabefe229c631e45438dcc23c1e8a11edad5bae5837b6\": rpc error: code = NotFound desc = could not find container \"64709206a016f29134bbabefe229c631e45438dcc23c1e8a11edad5bae5837b6\": container with ID starting with 64709206a016f29134bbabefe229c631e45438dcc23c1e8a11edad5bae5837b6 not found: ID does not exist" Mar 10 20:40:17 crc kubenswrapper[4861]: I0310 20:40:17.843201 4861 scope.go:117] "RemoveContainer" containerID="14a1cfef85a868492c455e2845afa2eb979a68e9c6cb32137278b9b144e08d05" Mar 10 20:40:17 crc kubenswrapper[4861]: E0310 20:40:17.843562 4861 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"14a1cfef85a868492c455e2845afa2eb979a68e9c6cb32137278b9b144e08d05\": container with ID starting with 14a1cfef85a868492c455e2845afa2eb979a68e9c6cb32137278b9b144e08d05 not found: ID does not exist" containerID="14a1cfef85a868492c455e2845afa2eb979a68e9c6cb32137278b9b144e08d05" Mar 10 20:40:17 crc kubenswrapper[4861]: I0310 20:40:17.843611 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"14a1cfef85a868492c455e2845afa2eb979a68e9c6cb32137278b9b144e08d05"} err="failed to get container status \"14a1cfef85a868492c455e2845afa2eb979a68e9c6cb32137278b9b144e08d05\": rpc error: code = NotFound desc = could not find container \"14a1cfef85a868492c455e2845afa2eb979a68e9c6cb32137278b9b144e08d05\": container with ID starting with 14a1cfef85a868492c455e2845afa2eb979a68e9c6cb32137278b9b144e08d05 not found: ID does not exist" Mar 10 20:40:18 crc kubenswrapper[4861]: I0310 20:40:18.087704 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-86ltq"] Mar 10 20:40:18 crc kubenswrapper[4861]: I0310 20:40:18.103848 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-86ltq"] Mar 10 20:40:18 crc kubenswrapper[4861]: I0310 20:40:18.975799 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a8bb23f1-53c0-4319-86c5-de0fb99a1e2c" path="/var/lib/kubelet/pods/a8bb23f1-53c0-4319-86c5-de0fb99a1e2c/volumes" Mar 10 20:40:19 crc kubenswrapper[4861]: I0310 20:40:19.853913 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-snn9m/must-gather-tkvvt"] Mar 10 20:40:19 crc kubenswrapper[4861]: E0310 20:40:19.854494 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a8bb23f1-53c0-4319-86c5-de0fb99a1e2c" containerName="registry-server" Mar 10 20:40:19 crc kubenswrapper[4861]: I0310 20:40:19.854506 4861 state_mem.go:107] 
"Deleted CPUSet assignment" podUID="a8bb23f1-53c0-4319-86c5-de0fb99a1e2c" containerName="registry-server" Mar 10 20:40:19 crc kubenswrapper[4861]: E0310 20:40:19.854534 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a8bb23f1-53c0-4319-86c5-de0fb99a1e2c" containerName="extract-utilities" Mar 10 20:40:19 crc kubenswrapper[4861]: I0310 20:40:19.854541 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="a8bb23f1-53c0-4319-86c5-de0fb99a1e2c" containerName="extract-utilities" Mar 10 20:40:19 crc kubenswrapper[4861]: E0310 20:40:19.854554 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a8bb23f1-53c0-4319-86c5-de0fb99a1e2c" containerName="extract-content" Mar 10 20:40:19 crc kubenswrapper[4861]: I0310 20:40:19.854561 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="a8bb23f1-53c0-4319-86c5-de0fb99a1e2c" containerName="extract-content" Mar 10 20:40:19 crc kubenswrapper[4861]: E0310 20:40:19.854575 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4b33758a-4a8f-4ead-88e5-e2236394b7ff" containerName="oc" Mar 10 20:40:19 crc kubenswrapper[4861]: I0310 20:40:19.854581 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="4b33758a-4a8f-4ead-88e5-e2236394b7ff" containerName="oc" Mar 10 20:40:19 crc kubenswrapper[4861]: I0310 20:40:19.854735 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="4b33758a-4a8f-4ead-88e5-e2236394b7ff" containerName="oc" Mar 10 20:40:19 crc kubenswrapper[4861]: I0310 20:40:19.854749 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="a8bb23f1-53c0-4319-86c5-de0fb99a1e2c" containerName="registry-server" Mar 10 20:40:19 crc kubenswrapper[4861]: I0310 20:40:19.855648 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-snn9m/must-gather-tkvvt" Mar 10 20:40:19 crc kubenswrapper[4861]: I0310 20:40:19.857526 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-snn9m"/"openshift-service-ca.crt" Mar 10 20:40:19 crc kubenswrapper[4861]: I0310 20:40:19.857666 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-snn9m"/"kube-root-ca.crt" Mar 10 20:40:19 crc kubenswrapper[4861]: I0310 20:40:19.857772 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-snn9m"/"default-dockercfg-7r675" Mar 10 20:40:19 crc kubenswrapper[4861]: I0310 20:40:19.865731 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-snn9m/must-gather-tkvvt"] Mar 10 20:40:19 crc kubenswrapper[4861]: I0310 20:40:19.953426 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tsslk\" (UniqueName: \"kubernetes.io/projected/0b079180-4407-4d2c-9b5a-fdd7ce9ca6ea-kube-api-access-tsslk\") pod \"must-gather-tkvvt\" (UID: \"0b079180-4407-4d2c-9b5a-fdd7ce9ca6ea\") " pod="openshift-must-gather-snn9m/must-gather-tkvvt" Mar 10 20:40:19 crc kubenswrapper[4861]: I0310 20:40:19.953491 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/0b079180-4407-4d2c-9b5a-fdd7ce9ca6ea-must-gather-output\") pod \"must-gather-tkvvt\" (UID: \"0b079180-4407-4d2c-9b5a-fdd7ce9ca6ea\") " pod="openshift-must-gather-snn9m/must-gather-tkvvt" Mar 10 20:40:20 crc kubenswrapper[4861]: I0310 20:40:20.055907 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tsslk\" (UniqueName: \"kubernetes.io/projected/0b079180-4407-4d2c-9b5a-fdd7ce9ca6ea-kube-api-access-tsslk\") pod \"must-gather-tkvvt\" (UID: \"0b079180-4407-4d2c-9b5a-fdd7ce9ca6ea\") " 
pod="openshift-must-gather-snn9m/must-gather-tkvvt" Mar 10 20:40:20 crc kubenswrapper[4861]: I0310 20:40:20.055979 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/0b079180-4407-4d2c-9b5a-fdd7ce9ca6ea-must-gather-output\") pod \"must-gather-tkvvt\" (UID: \"0b079180-4407-4d2c-9b5a-fdd7ce9ca6ea\") " pod="openshift-must-gather-snn9m/must-gather-tkvvt" Mar 10 20:40:20 crc kubenswrapper[4861]: I0310 20:40:20.057295 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/0b079180-4407-4d2c-9b5a-fdd7ce9ca6ea-must-gather-output\") pod \"must-gather-tkvvt\" (UID: \"0b079180-4407-4d2c-9b5a-fdd7ce9ca6ea\") " pod="openshift-must-gather-snn9m/must-gather-tkvvt" Mar 10 20:40:20 crc kubenswrapper[4861]: I0310 20:40:20.083108 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tsslk\" (UniqueName: \"kubernetes.io/projected/0b079180-4407-4d2c-9b5a-fdd7ce9ca6ea-kube-api-access-tsslk\") pod \"must-gather-tkvvt\" (UID: \"0b079180-4407-4d2c-9b5a-fdd7ce9ca6ea\") " pod="openshift-must-gather-snn9m/must-gather-tkvvt" Mar 10 20:40:20 crc kubenswrapper[4861]: I0310 20:40:20.185528 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-snn9m/must-gather-tkvvt" Mar 10 20:40:20 crc kubenswrapper[4861]: I0310 20:40:20.745999 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-snn9m/must-gather-tkvvt"] Mar 10 20:40:20 crc kubenswrapper[4861]: W0310 20:40:20.753361 4861 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0b079180_4407_4d2c_9b5a_fdd7ce9ca6ea.slice/crio-1b62f69046a2346bef7f415fd814278d7a962eff38cd7aee650f00379a128a8f WatchSource:0}: Error finding container 1b62f69046a2346bef7f415fd814278d7a962eff38cd7aee650f00379a128a8f: Status 404 returned error can't find the container with id 1b62f69046a2346bef7f415fd814278d7a962eff38cd7aee650f00379a128a8f Mar 10 20:40:20 crc kubenswrapper[4861]: I0310 20:40:20.762282 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-snn9m/must-gather-tkvvt" event={"ID":"0b079180-4407-4d2c-9b5a-fdd7ce9ca6ea","Type":"ContainerStarted","Data":"1b62f69046a2346bef7f415fd814278d7a962eff38cd7aee650f00379a128a8f"} Mar 10 20:40:22 crc kubenswrapper[4861]: I0310 20:40:21.991829 4861 patch_prober.go:28] interesting pod/machine-config-daemon-qttbr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 10 20:40:22 crc kubenswrapper[4861]: I0310 20:40:21.992466 4861 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qttbr" podUID="771189c2-452d-4204-a0b7-abfe9ba62bd0" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 10 20:40:27 crc kubenswrapper[4861]: I0310 20:40:27.860902 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-must-gather-snn9m/must-gather-tkvvt" event={"ID":"0b079180-4407-4d2c-9b5a-fdd7ce9ca6ea","Type":"ContainerStarted","Data":"c74d36d68c5d4a58b275b70c63d19fa04db4df7379da7f4e21999dddd18753f9"} Mar 10 20:40:27 crc kubenswrapper[4861]: I0310 20:40:27.861665 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-snn9m/must-gather-tkvvt" event={"ID":"0b079180-4407-4d2c-9b5a-fdd7ce9ca6ea","Type":"ContainerStarted","Data":"a5c91374c59a0f59cfca255aa5bdc7ea7af092b1d9ab64fac2ba804346c378ec"} Mar 10 20:40:27 crc kubenswrapper[4861]: I0310 20:40:27.888249 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-snn9m/must-gather-tkvvt" podStartSLOduration=2.502132194 podStartE2EDuration="8.888221183s" podCreationTimestamp="2026-03-10 20:40:19 +0000 UTC" firstStartedPulling="2026-03-10 20:40:20.755933453 +0000 UTC m=+6764.519369413" lastFinishedPulling="2026-03-10 20:40:27.142022432 +0000 UTC m=+6770.905458402" observedRunningTime="2026-03-10 20:40:27.879491418 +0000 UTC m=+6771.642927448" watchObservedRunningTime="2026-03-10 20:40:27.888221183 +0000 UTC m=+6771.651657163" Mar 10 20:40:30 crc kubenswrapper[4861]: I0310 20:40:30.258899 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-snn9m/crc-debug-dpt5h"] Mar 10 20:40:30 crc kubenswrapper[4861]: I0310 20:40:30.260636 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-snn9m/crc-debug-dpt5h" Mar 10 20:40:30 crc kubenswrapper[4861]: I0310 20:40:30.363806 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4rlwn\" (UniqueName: \"kubernetes.io/projected/3869ad8f-a7a1-42f4-862b-75c4fbe577d8-kube-api-access-4rlwn\") pod \"crc-debug-dpt5h\" (UID: \"3869ad8f-a7a1-42f4-862b-75c4fbe577d8\") " pod="openshift-must-gather-snn9m/crc-debug-dpt5h" Mar 10 20:40:30 crc kubenswrapper[4861]: I0310 20:40:30.364159 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/3869ad8f-a7a1-42f4-862b-75c4fbe577d8-host\") pod \"crc-debug-dpt5h\" (UID: \"3869ad8f-a7a1-42f4-862b-75c4fbe577d8\") " pod="openshift-must-gather-snn9m/crc-debug-dpt5h" Mar 10 20:40:30 crc kubenswrapper[4861]: I0310 20:40:30.465521 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4rlwn\" (UniqueName: \"kubernetes.io/projected/3869ad8f-a7a1-42f4-862b-75c4fbe577d8-kube-api-access-4rlwn\") pod \"crc-debug-dpt5h\" (UID: \"3869ad8f-a7a1-42f4-862b-75c4fbe577d8\") " pod="openshift-must-gather-snn9m/crc-debug-dpt5h" Mar 10 20:40:30 crc kubenswrapper[4861]: I0310 20:40:30.465589 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/3869ad8f-a7a1-42f4-862b-75c4fbe577d8-host\") pod \"crc-debug-dpt5h\" (UID: \"3869ad8f-a7a1-42f4-862b-75c4fbe577d8\") " pod="openshift-must-gather-snn9m/crc-debug-dpt5h" Mar 10 20:40:30 crc kubenswrapper[4861]: I0310 20:40:30.465686 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/3869ad8f-a7a1-42f4-862b-75c4fbe577d8-host\") pod \"crc-debug-dpt5h\" (UID: \"3869ad8f-a7a1-42f4-862b-75c4fbe577d8\") " pod="openshift-must-gather-snn9m/crc-debug-dpt5h" Mar 10 20:40:30 crc 
kubenswrapper[4861]: I0310 20:40:30.491916 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4rlwn\" (UniqueName: \"kubernetes.io/projected/3869ad8f-a7a1-42f4-862b-75c4fbe577d8-kube-api-access-4rlwn\") pod \"crc-debug-dpt5h\" (UID: \"3869ad8f-a7a1-42f4-862b-75c4fbe577d8\") " pod="openshift-must-gather-snn9m/crc-debug-dpt5h" Mar 10 20:40:30 crc kubenswrapper[4861]: I0310 20:40:30.592705 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-snn9m/crc-debug-dpt5h" Mar 10 20:40:30 crc kubenswrapper[4861]: W0310 20:40:30.623410 4861 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3869ad8f_a7a1_42f4_862b_75c4fbe577d8.slice/crio-ee1db3c808641d70184715322e9d7084418f657cc65e26c1f5f82a971447b4f4 WatchSource:0}: Error finding container ee1db3c808641d70184715322e9d7084418f657cc65e26c1f5f82a971447b4f4: Status 404 returned error can't find the container with id ee1db3c808641d70184715322e9d7084418f657cc65e26c1f5f82a971447b4f4 Mar 10 20:40:30 crc kubenswrapper[4861]: I0310 20:40:30.883575 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-snn9m/crc-debug-dpt5h" event={"ID":"3869ad8f-a7a1-42f4-862b-75c4fbe577d8","Type":"ContainerStarted","Data":"ee1db3c808641d70184715322e9d7084418f657cc65e26c1f5f82a971447b4f4"} Mar 10 20:40:41 crc kubenswrapper[4861]: I0310 20:40:41.965098 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-snn9m/crc-debug-dpt5h" event={"ID":"3869ad8f-a7a1-42f4-862b-75c4fbe577d8","Type":"ContainerStarted","Data":"6c05aab7c7eed6e6c5fec874ac52855244cb5d6e99c945cc08b21df674e75237"} Mar 10 20:40:41 crc kubenswrapper[4861]: I0310 20:40:41.984584 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-snn9m/crc-debug-dpt5h" podStartSLOduration=1.254155677 podStartE2EDuration="11.984573294s" 
podCreationTimestamp="2026-03-10 20:40:30 +0000 UTC" firstStartedPulling="2026-03-10 20:40:30.628366075 +0000 UTC m=+6774.391802025" lastFinishedPulling="2026-03-10 20:40:41.358783692 +0000 UTC m=+6785.122219642" observedRunningTime="2026-03-10 20:40:41.979421875 +0000 UTC m=+6785.742857835" watchObservedRunningTime="2026-03-10 20:40:41.984573294 +0000 UTC m=+6785.748009254" Mar 10 20:40:51 crc kubenswrapper[4861]: I0310 20:40:51.991784 4861 patch_prober.go:28] interesting pod/machine-config-daemon-qttbr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 10 20:40:51 crc kubenswrapper[4861]: I0310 20:40:51.992483 4861 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qttbr" podUID="771189c2-452d-4204-a0b7-abfe9ba62bd0" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 10 20:41:00 crc kubenswrapper[4861]: I0310 20:41:00.123245 4861 generic.go:334] "Generic (PLEG): container finished" podID="3869ad8f-a7a1-42f4-862b-75c4fbe577d8" containerID="6c05aab7c7eed6e6c5fec874ac52855244cb5d6e99c945cc08b21df674e75237" exitCode=0 Mar 10 20:41:00 crc kubenswrapper[4861]: I0310 20:41:00.123355 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-snn9m/crc-debug-dpt5h" event={"ID":"3869ad8f-a7a1-42f4-862b-75c4fbe577d8","Type":"ContainerDied","Data":"6c05aab7c7eed6e6c5fec874ac52855244cb5d6e99c945cc08b21df674e75237"} Mar 10 20:41:00 crc kubenswrapper[4861]: I0310 20:41:00.221687 4861 scope.go:117] "RemoveContainer" containerID="99cb18c8ebff5d1a0be9c3976401477c433092e4c63f60138de888fcd281add1" Mar 10 20:41:01 crc kubenswrapper[4861]: I0310 20:41:01.247837 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-snn9m/crc-debug-dpt5h" Mar 10 20:41:01 crc kubenswrapper[4861]: I0310 20:41:01.288850 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-snn9m/crc-debug-dpt5h"] Mar 10 20:41:01 crc kubenswrapper[4861]: I0310 20:41:01.293300 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-snn9m/crc-debug-dpt5h"] Mar 10 20:41:01 crc kubenswrapper[4861]: I0310 20:41:01.409440 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/3869ad8f-a7a1-42f4-862b-75c4fbe577d8-host\") pod \"3869ad8f-a7a1-42f4-862b-75c4fbe577d8\" (UID: \"3869ad8f-a7a1-42f4-862b-75c4fbe577d8\") " Mar 10 20:41:01 crc kubenswrapper[4861]: I0310 20:41:01.409541 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3869ad8f-a7a1-42f4-862b-75c4fbe577d8-host" (OuterVolumeSpecName: "host") pod "3869ad8f-a7a1-42f4-862b-75c4fbe577d8" (UID: "3869ad8f-a7a1-42f4-862b-75c4fbe577d8"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 10 20:41:01 crc kubenswrapper[4861]: I0310 20:41:01.409614 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4rlwn\" (UniqueName: \"kubernetes.io/projected/3869ad8f-a7a1-42f4-862b-75c4fbe577d8-kube-api-access-4rlwn\") pod \"3869ad8f-a7a1-42f4-862b-75c4fbe577d8\" (UID: \"3869ad8f-a7a1-42f4-862b-75c4fbe577d8\") " Mar 10 20:41:01 crc kubenswrapper[4861]: I0310 20:41:01.410089 4861 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/3869ad8f-a7a1-42f4-862b-75c4fbe577d8-host\") on node \"crc\" DevicePath \"\"" Mar 10 20:41:01 crc kubenswrapper[4861]: I0310 20:41:01.414076 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3869ad8f-a7a1-42f4-862b-75c4fbe577d8-kube-api-access-4rlwn" (OuterVolumeSpecName: "kube-api-access-4rlwn") pod "3869ad8f-a7a1-42f4-862b-75c4fbe577d8" (UID: "3869ad8f-a7a1-42f4-862b-75c4fbe577d8"). InnerVolumeSpecName "kube-api-access-4rlwn". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 10 20:41:01 crc kubenswrapper[4861]: I0310 20:41:01.512333 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4rlwn\" (UniqueName: \"kubernetes.io/projected/3869ad8f-a7a1-42f4-862b-75c4fbe577d8-kube-api-access-4rlwn\") on node \"crc\" DevicePath \"\"" Mar 10 20:41:02 crc kubenswrapper[4861]: I0310 20:41:02.147252 4861 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ee1db3c808641d70184715322e9d7084418f657cc65e26c1f5f82a971447b4f4" Mar 10 20:41:02 crc kubenswrapper[4861]: I0310 20:41:02.147367 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-snn9m/crc-debug-dpt5h" Mar 10 20:41:02 crc kubenswrapper[4861]: I0310 20:41:02.530107 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-snn9m/crc-debug-pn8hv"] Mar 10 20:41:02 crc kubenswrapper[4861]: E0310 20:41:02.530810 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3869ad8f-a7a1-42f4-862b-75c4fbe577d8" containerName="container-00" Mar 10 20:41:02 crc kubenswrapper[4861]: I0310 20:41:02.530844 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="3869ad8f-a7a1-42f4-862b-75c4fbe577d8" containerName="container-00" Mar 10 20:41:02 crc kubenswrapper[4861]: I0310 20:41:02.531261 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="3869ad8f-a7a1-42f4-862b-75c4fbe577d8" containerName="container-00" Mar 10 20:41:02 crc kubenswrapper[4861]: I0310 20:41:02.532454 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-snn9m/crc-debug-pn8hv" Mar 10 20:41:02 crc kubenswrapper[4861]: I0310 20:41:02.632887 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jf58q\" (UniqueName: \"kubernetes.io/projected/f33216c9-13ea-4613-84a7-1659a29e9569-kube-api-access-jf58q\") pod \"crc-debug-pn8hv\" (UID: \"f33216c9-13ea-4613-84a7-1659a29e9569\") " pod="openshift-must-gather-snn9m/crc-debug-pn8hv" Mar 10 20:41:02 crc kubenswrapper[4861]: I0310 20:41:02.632941 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/f33216c9-13ea-4613-84a7-1659a29e9569-host\") pod \"crc-debug-pn8hv\" (UID: \"f33216c9-13ea-4613-84a7-1659a29e9569\") " pod="openshift-must-gather-snn9m/crc-debug-pn8hv" Mar 10 20:41:02 crc kubenswrapper[4861]: I0310 20:41:02.734512 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jf58q\" (UniqueName: 
\"kubernetes.io/projected/f33216c9-13ea-4613-84a7-1659a29e9569-kube-api-access-jf58q\") pod \"crc-debug-pn8hv\" (UID: \"f33216c9-13ea-4613-84a7-1659a29e9569\") " pod="openshift-must-gather-snn9m/crc-debug-pn8hv" Mar 10 20:41:02 crc kubenswrapper[4861]: I0310 20:41:02.734646 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/f33216c9-13ea-4613-84a7-1659a29e9569-host\") pod \"crc-debug-pn8hv\" (UID: \"f33216c9-13ea-4613-84a7-1659a29e9569\") " pod="openshift-must-gather-snn9m/crc-debug-pn8hv" Mar 10 20:41:02 crc kubenswrapper[4861]: I0310 20:41:02.734829 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/f33216c9-13ea-4613-84a7-1659a29e9569-host\") pod \"crc-debug-pn8hv\" (UID: \"f33216c9-13ea-4613-84a7-1659a29e9569\") " pod="openshift-must-gather-snn9m/crc-debug-pn8hv" Mar 10 20:41:02 crc kubenswrapper[4861]: I0310 20:41:02.771772 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jf58q\" (UniqueName: \"kubernetes.io/projected/f33216c9-13ea-4613-84a7-1659a29e9569-kube-api-access-jf58q\") pod \"crc-debug-pn8hv\" (UID: \"f33216c9-13ea-4613-84a7-1659a29e9569\") " pod="openshift-must-gather-snn9m/crc-debug-pn8hv" Mar 10 20:41:02 crc kubenswrapper[4861]: I0310 20:41:02.859487 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-snn9m/crc-debug-pn8hv" Mar 10 20:41:02 crc kubenswrapper[4861]: I0310 20:41:02.972104 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3869ad8f-a7a1-42f4-862b-75c4fbe577d8" path="/var/lib/kubelet/pods/3869ad8f-a7a1-42f4-862b-75c4fbe577d8/volumes" Mar 10 20:41:03 crc kubenswrapper[4861]: I0310 20:41:03.157563 4861 generic.go:334] "Generic (PLEG): container finished" podID="f33216c9-13ea-4613-84a7-1659a29e9569" containerID="bb73116e053203376783efd7ad405a4d84d39fc5d30cc848b10a40d6e0c1a6da" exitCode=1 Mar 10 20:41:03 crc kubenswrapper[4861]: I0310 20:41:03.157629 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-snn9m/crc-debug-pn8hv" event={"ID":"f33216c9-13ea-4613-84a7-1659a29e9569","Type":"ContainerDied","Data":"bb73116e053203376783efd7ad405a4d84d39fc5d30cc848b10a40d6e0c1a6da"} Mar 10 20:41:03 crc kubenswrapper[4861]: I0310 20:41:03.157660 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-snn9m/crc-debug-pn8hv" event={"ID":"f33216c9-13ea-4613-84a7-1659a29e9569","Type":"ContainerStarted","Data":"504e191b2d3fe62ef254f28700c178baf172ef75d010211e70d5b27d38e5ecf3"} Mar 10 20:41:03 crc kubenswrapper[4861]: I0310 20:41:03.206691 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-snn9m/crc-debug-pn8hv"] Mar 10 20:41:03 crc kubenswrapper[4861]: I0310 20:41:03.213228 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-snn9m/crc-debug-pn8hv"] Mar 10 20:41:04 crc kubenswrapper[4861]: I0310 20:41:04.235772 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-snn9m/crc-debug-pn8hv" Mar 10 20:41:04 crc kubenswrapper[4861]: I0310 20:41:04.362162 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jf58q\" (UniqueName: \"kubernetes.io/projected/f33216c9-13ea-4613-84a7-1659a29e9569-kube-api-access-jf58q\") pod \"f33216c9-13ea-4613-84a7-1659a29e9569\" (UID: \"f33216c9-13ea-4613-84a7-1659a29e9569\") " Mar 10 20:41:04 crc kubenswrapper[4861]: I0310 20:41:04.362233 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/f33216c9-13ea-4613-84a7-1659a29e9569-host\") pod \"f33216c9-13ea-4613-84a7-1659a29e9569\" (UID: \"f33216c9-13ea-4613-84a7-1659a29e9569\") " Mar 10 20:41:04 crc kubenswrapper[4861]: I0310 20:41:04.362299 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f33216c9-13ea-4613-84a7-1659a29e9569-host" (OuterVolumeSpecName: "host") pod "f33216c9-13ea-4613-84a7-1659a29e9569" (UID: "f33216c9-13ea-4613-84a7-1659a29e9569"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 10 20:41:04 crc kubenswrapper[4861]: I0310 20:41:04.362918 4861 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/f33216c9-13ea-4613-84a7-1659a29e9569-host\") on node \"crc\" DevicePath \"\"" Mar 10 20:41:04 crc kubenswrapper[4861]: I0310 20:41:04.369630 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f33216c9-13ea-4613-84a7-1659a29e9569-kube-api-access-jf58q" (OuterVolumeSpecName: "kube-api-access-jf58q") pod "f33216c9-13ea-4613-84a7-1659a29e9569" (UID: "f33216c9-13ea-4613-84a7-1659a29e9569"). InnerVolumeSpecName "kube-api-access-jf58q". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 10 20:41:04 crc kubenswrapper[4861]: I0310 20:41:04.464459 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jf58q\" (UniqueName: \"kubernetes.io/projected/f33216c9-13ea-4613-84a7-1659a29e9569-kube-api-access-jf58q\") on node \"crc\" DevicePath \"\"" Mar 10 20:41:04 crc kubenswrapper[4861]: I0310 20:41:04.969115 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f33216c9-13ea-4613-84a7-1659a29e9569" path="/var/lib/kubelet/pods/f33216c9-13ea-4613-84a7-1659a29e9569/volumes" Mar 10 20:41:05 crc kubenswrapper[4861]: I0310 20:41:05.177553 4861 scope.go:117] "RemoveContainer" containerID="bb73116e053203376783efd7ad405a4d84d39fc5d30cc848b10a40d6e0c1a6da" Mar 10 20:41:05 crc kubenswrapper[4861]: I0310 20:41:05.177623 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-snn9m/crc-debug-pn8hv" Mar 10 20:41:21 crc kubenswrapper[4861]: I0310 20:41:21.992584 4861 patch_prober.go:28] interesting pod/machine-config-daemon-qttbr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 10 20:41:21 crc kubenswrapper[4861]: I0310 20:41:21.993131 4861 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qttbr" podUID="771189c2-452d-4204-a0b7-abfe9ba62bd0" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 10 20:41:21 crc kubenswrapper[4861]: I0310 20:41:21.993175 4861 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-qttbr" Mar 10 20:41:21 crc kubenswrapper[4861]: I0310 20:41:21.993855 4861 
kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"5e0a04df5740c120052e23680da7c45126dec6674fa679c211d70be3fb442ea4"} pod="openshift-machine-config-operator/machine-config-daemon-qttbr" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 10 20:41:21 crc kubenswrapper[4861]: I0310 20:41:21.993903 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-qttbr" podUID="771189c2-452d-4204-a0b7-abfe9ba62bd0" containerName="machine-config-daemon" containerID="cri-o://5e0a04df5740c120052e23680da7c45126dec6674fa679c211d70be3fb442ea4" gracePeriod=600 Mar 10 20:41:22 crc kubenswrapper[4861]: E0310 20:41:22.118998 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qttbr_openshift-machine-config-operator(771189c2-452d-4204-a0b7-abfe9ba62bd0)\"" pod="openshift-machine-config-operator/machine-config-daemon-qttbr" podUID="771189c2-452d-4204-a0b7-abfe9ba62bd0" Mar 10 20:41:22 crc kubenswrapper[4861]: I0310 20:41:22.950349 4861 generic.go:334] "Generic (PLEG): container finished" podID="771189c2-452d-4204-a0b7-abfe9ba62bd0" containerID="5e0a04df5740c120052e23680da7c45126dec6674fa679c211d70be3fb442ea4" exitCode=0 Mar 10 20:41:22 crc kubenswrapper[4861]: I0310 20:41:22.950413 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qttbr" event={"ID":"771189c2-452d-4204-a0b7-abfe9ba62bd0","Type":"ContainerDied","Data":"5e0a04df5740c120052e23680da7c45126dec6674fa679c211d70be3fb442ea4"} Mar 10 20:41:22 crc kubenswrapper[4861]: I0310 20:41:22.950478 4861 scope.go:117] "RemoveContainer" 
containerID="cac8ff2a8e9486664814c9d4c374eb5ab479b9677f123c79806e6c10d880732d" Mar 10 20:41:22 crc kubenswrapper[4861]: I0310 20:41:22.951207 4861 scope.go:117] "RemoveContainer" containerID="5e0a04df5740c120052e23680da7c45126dec6674fa679c211d70be3fb442ea4" Mar 10 20:41:22 crc kubenswrapper[4861]: E0310 20:41:22.951686 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qttbr_openshift-machine-config-operator(771189c2-452d-4204-a0b7-abfe9ba62bd0)\"" pod="openshift-machine-config-operator/machine-config-daemon-qttbr" podUID="771189c2-452d-4204-a0b7-abfe9ba62bd0" Mar 10 20:41:25 crc kubenswrapper[4861]: I0310 20:41:25.686985 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-56bc48dc8c-cqg68_33217fff-0cbb-41fc-bf58-cbe8324c4aab/init/0.log" Mar 10 20:41:25 crc kubenswrapper[4861]: I0310 20:41:25.850102 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-56bc48dc8c-cqg68_33217fff-0cbb-41fc-bf58-cbe8324c4aab/dnsmasq-dns/0.log" Mar 10 20:41:25 crc kubenswrapper[4861]: I0310 20:41:25.869862 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-56bc48dc8c-cqg68_33217fff-0cbb-41fc-bf58-cbe8324c4aab/init/0.log" Mar 10 20:41:26 crc kubenswrapper[4861]: I0310 20:41:26.032931 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_mariadb-copy-data_40721062-412e-4a4f-83b6-2e0692251cc0/adoption/0.log" Mar 10 20:41:26 crc kubenswrapper[4861]: I0310 20:41:26.045991 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-5f4454dc7f-r2rxb_a1c3c19f-4213-434d-add9-eae0f9cdaadc/keystone-api/0.log" Mar 10 20:41:26 crc kubenswrapper[4861]: I0310 20:41:26.339410 4861 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_openstack-cell1-galera-0_e13982bd-7f6a-4a64-8d3f-9a9ea29a4918/mysql-bootstrap/0.log" Mar 10 20:41:26 crc kubenswrapper[4861]: I0310 20:41:26.495361 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_e13982bd-7f6a-4a64-8d3f-9a9ea29a4918/galera/0.log" Mar 10 20:41:26 crc kubenswrapper[4861]: I0310 20:41:26.551286 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_e13982bd-7f6a-4a64-8d3f-9a9ea29a4918/mysql-bootstrap/0.log" Mar 10 20:41:26 crc kubenswrapper[4861]: I0310 20:41:26.685153 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_2b59736f-62c4-499c-abad-c9ab4705c2ed/mysql-bootstrap/0.log" Mar 10 20:41:26 crc kubenswrapper[4861]: I0310 20:41:26.887390 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_2b59736f-62c4-499c-abad-c9ab4705c2ed/galera/0.log" Mar 10 20:41:26 crc kubenswrapper[4861]: I0310 20:41:26.906883 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_2b59736f-62c4-499c-abad-c9ab4705c2ed/mysql-bootstrap/0.log" Mar 10 20:41:27 crc kubenswrapper[4861]: I0310 20:41:27.073024 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_e79e436f-8808-4985-8359-2d6c89e65aef/openstackclient/0.log" Mar 10 20:41:27 crc kubenswrapper[4861]: I0310 20:41:27.168901 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-copy-data_515ad971-29e5-4b2a-ab58-2e06d4d29c99/adoption/0.log" Mar 10 20:41:27 crc kubenswrapper[4861]: I0310 20:41:27.342980 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_da0f7a17-f584-4acc-8fdd-96bb631bce12/openstack-network-exporter/0.log" Mar 10 20:41:27 crc kubenswrapper[4861]: I0310 20:41:27.366392 4861 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_ovn-northd-0_da0f7a17-f584-4acc-8fdd-96bb631bce12/ovn-northd/0.log" Mar 10 20:41:27 crc kubenswrapper[4861]: I0310 20:41:27.505801 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_6937f060-bbcc-427e-9717-8c96952c10c0/openstack-network-exporter/0.log" Mar 10 20:41:27 crc kubenswrapper[4861]: I0310 20:41:27.584224 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_6937f060-bbcc-427e-9717-8c96952c10c0/ovsdbserver-nb/0.log" Mar 10 20:41:27 crc kubenswrapper[4861]: I0310 20:41:27.624248 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_87e211e3-d481-4f80-939c-ace6358f3851/memcached/0.log" Mar 10 20:41:27 crc kubenswrapper[4861]: I0310 20:41:27.700997 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-1_597cad59-a9fa-4933-83cd-9df822b8c2c7/openstack-network-exporter/0.log" Mar 10 20:41:27 crc kubenswrapper[4861]: I0310 20:41:27.741404 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-1_597cad59-a9fa-4933-83cd-9df822b8c2c7/ovsdbserver-nb/0.log" Mar 10 20:41:27 crc kubenswrapper[4861]: I0310 20:41:27.856793 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-2_c6987688-b05a-4f80-bfbb-7062f55147d8/openstack-network-exporter/0.log" Mar 10 20:41:27 crc kubenswrapper[4861]: I0310 20:41:27.941511 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-2_c6987688-b05a-4f80-bfbb-7062f55147d8/ovsdbserver-nb/0.log" Mar 10 20:41:28 crc kubenswrapper[4861]: I0310 20:41:28.025136 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_f3163e8d-4aaa-4b26-bb68-40bfed734f01/openstack-network-exporter/0.log" Mar 10 20:41:28 crc kubenswrapper[4861]: I0310 20:41:28.035156 4861 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_ovsdbserver-sb-0_f3163e8d-4aaa-4b26-bb68-40bfed734f01/ovsdbserver-sb/0.log" Mar 10 20:41:28 crc kubenswrapper[4861]: I0310 20:41:28.138684 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-1_28253164-2f24-45d4-8efd-26627728ca52/openstack-network-exporter/0.log" Mar 10 20:41:28 crc kubenswrapper[4861]: I0310 20:41:28.197238 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-1_28253164-2f24-45d4-8efd-26627728ca52/ovsdbserver-sb/0.log" Mar 10 20:41:28 crc kubenswrapper[4861]: I0310 20:41:28.293275 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-2_2d5db324-92cd-450e-8f74-0d2df72352ba/openstack-network-exporter/0.log" Mar 10 20:41:28 crc kubenswrapper[4861]: I0310 20:41:28.318952 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-2_2d5db324-92cd-450e-8f74-0d2df72352ba/ovsdbserver-sb/0.log" Mar 10 20:41:28 crc kubenswrapper[4861]: I0310 20:41:28.452643 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_4e85b342-2781-46f8-94e7-c36fd2de2f23/setup-container/0.log" Mar 10 20:41:28 crc kubenswrapper[4861]: I0310 20:41:28.626729 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_4e85b342-2781-46f8-94e7-c36fd2de2f23/rabbitmq/0.log" Mar 10 20:41:28 crc kubenswrapper[4861]: I0310 20:41:28.653701 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_c06bc3bf-61a5-4fe5-a9d4-2f37b4b85ca1/setup-container/0.log" Mar 10 20:41:28 crc kubenswrapper[4861]: I0310 20:41:28.656339 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_4e85b342-2781-46f8-94e7-c36fd2de2f23/setup-container/0.log" Mar 10 20:41:28 crc kubenswrapper[4861]: I0310 20:41:28.834176 4861 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_rabbitmq-server-0_c06bc3bf-61a5-4fe5-a9d4-2f37b4b85ca1/rabbitmq/0.log" Mar 10 20:41:28 crc kubenswrapper[4861]: I0310 20:41:28.864635 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_c06bc3bf-61a5-4fe5-a9d4-2f37b4b85ca1/setup-container/0.log" Mar 10 20:41:35 crc kubenswrapper[4861]: I0310 20:41:35.958085 4861 scope.go:117] "RemoveContainer" containerID="5e0a04df5740c120052e23680da7c45126dec6674fa679c211d70be3fb442ea4" Mar 10 20:41:35 crc kubenswrapper[4861]: E0310 20:41:35.958736 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qttbr_openshift-machine-config-operator(771189c2-452d-4204-a0b7-abfe9ba62bd0)\"" pod="openshift-machine-config-operator/machine-config-daemon-qttbr" podUID="771189c2-452d-4204-a0b7-abfe9ba62bd0" Mar 10 20:41:46 crc kubenswrapper[4861]: I0310 20:41:46.223018 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_951b5e6f67e727348308cf03f2b463d3e2bd27386b453f6699e03a49bahbg8b_798cdc73-2626-4da6-8104-ba0fe4ec829d/util/0.log" Mar 10 20:41:46 crc kubenswrapper[4861]: I0310 20:41:46.402779 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_951b5e6f67e727348308cf03f2b463d3e2bd27386b453f6699e03a49bahbg8b_798cdc73-2626-4da6-8104-ba0fe4ec829d/util/0.log" Mar 10 20:41:46 crc kubenswrapper[4861]: I0310 20:41:46.425040 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_951b5e6f67e727348308cf03f2b463d3e2bd27386b453f6699e03a49bahbg8b_798cdc73-2626-4da6-8104-ba0fe4ec829d/pull/0.log" Mar 10 20:41:46 crc kubenswrapper[4861]: I0310 20:41:46.451850 4861 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_951b5e6f67e727348308cf03f2b463d3e2bd27386b453f6699e03a49bahbg8b_798cdc73-2626-4da6-8104-ba0fe4ec829d/pull/0.log" Mar 10 20:41:46 crc kubenswrapper[4861]: I0310 20:41:46.520098 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_951b5e6f67e727348308cf03f2b463d3e2bd27386b453f6699e03a49bahbg8b_798cdc73-2626-4da6-8104-ba0fe4ec829d/util/0.log" Mar 10 20:41:46 crc kubenswrapper[4861]: I0310 20:41:46.581725 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_951b5e6f67e727348308cf03f2b463d3e2bd27386b453f6699e03a49bahbg8b_798cdc73-2626-4da6-8104-ba0fe4ec829d/pull/0.log" Mar 10 20:41:46 crc kubenswrapper[4861]: I0310 20:41:46.593498 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_951b5e6f67e727348308cf03f2b463d3e2bd27386b453f6699e03a49bahbg8b_798cdc73-2626-4da6-8104-ba0fe4ec829d/extract/0.log" Mar 10 20:41:46 crc kubenswrapper[4861]: I0310 20:41:46.967917 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-66d56f6ff4-4mh8f_1596b973-4b23-48a6-9924-8e98b7535a61/manager/0.log" Mar 10 20:41:47 crc kubenswrapper[4861]: I0310 20:41:47.240746 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-5964f64c48-9bhqz_94ef6e9b-1270-4992-9d58-902e82b52294/manager/0.log" Mar 10 20:41:47 crc kubenswrapper[4861]: I0310 20:41:47.377587 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-77b6666d85-q76xg_56a6874c-f929-4490-a247-52542d4aa8f1/manager/0.log" Mar 10 20:41:47 crc kubenswrapper[4861]: I0310 20:41:47.556716 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-6d9d6b584d-265hs_ce725c26-b020-402f-ad0c-ae44f307e21e/manager/0.log" Mar 10 20:41:48 crc kubenswrapper[4861]: I0310 
20:41:48.052275 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-6bbb499bbc-bdscg_16f833b1-3203-4532-a5f4-bc8769cd0932/manager/0.log" Mar 10 20:41:48 crc kubenswrapper[4861]: I0310 20:41:48.284670 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-5995f4446f-z66mq_acdc9485-305f-401f-91bb-749a9e1e3c89/manager/0.log" Mar 10 20:41:48 crc kubenswrapper[4861]: I0310 20:41:48.533498 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-684f77d66d-mf7c8_082164ca-006f-4217-a664-a81f24fb7f9c/manager/0.log" Mar 10 20:41:48 crc kubenswrapper[4861]: I0310 20:41:48.662958 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-68f45f9d9f-864bh_ee7bce9d-5057-4eb7-af07-af66a5bc7473/manager/0.log" Mar 10 20:41:48 crc kubenswrapper[4861]: I0310 20:41:48.909591 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-658d4cdd5-qxj54_e05b8481-3bf6-416e-a404-61ce227c2350/manager/0.log" Mar 10 20:41:49 crc kubenswrapper[4861]: I0310 20:41:49.134434 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-776c5696bf-zds26_b4bddf8d-6696-491b-9d79-e057d2d18c14/manager/0.log" Mar 10 20:41:49 crc kubenswrapper[4861]: I0310 20:41:49.466821 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-569cc54c5-qvqhp_9f600d22-c94c-4d46-b024-d4674aca2d8d/manager/0.log" Mar 10 20:41:49 crc kubenswrapper[4861]: I0310 20:41:49.597191 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-5f4f55cb5c-wvkvr_8b85720c-9adc-46f0-835b-3a1709df2126/manager/0.log" Mar 10 20:41:49 crc 
kubenswrapper[4861]: I0310 20:41:49.613652 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-984cd4dcf-d7ph4_239d1170-1e62-44e7-a07d-10ca9adfa28e/manager/0.log" Mar 10 20:41:49 crc kubenswrapper[4861]: I0310 20:41:49.716016 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-6647d7885fkj9zr_1f2fa0f6-38b3-4764-ab09-dc4064f986fe/manager/0.log" Mar 10 20:41:49 crc kubenswrapper[4861]: I0310 20:41:49.975106 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-init-6cf8df7788-wv4t4_33055dc2-3f37-47e1-9550-f601f76f9b2a/operator/0.log" Mar 10 20:41:50 crc kubenswrapper[4861]: I0310 20:41:50.260351 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-kfhjj_0f595f08-5e94-415c-929c-d8c076efa590/registry-server/0.log" Mar 10 20:41:50 crc kubenswrapper[4861]: I0310 20:41:50.292468 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-bbc5b68f9-82tsp_b2313526-2f50-43bc-b6cd-45da343f91ae/manager/0.log" Mar 10 20:41:50 crc kubenswrapper[4861]: I0310 20:41:50.454523 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-574d45c66c-sh4lp_a55e1a2b-1e10-41ee-b376-c879ec07d2af/manager/0.log" Mar 10 20:41:50 crc kubenswrapper[4861]: I0310 20:41:50.562274 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-9bfll_19b27de0-6f4a-4017-aec7-eea80076189c/operator/0.log" Mar 10 20:41:50 crc kubenswrapper[4861]: I0310 20:41:50.733212 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-677c674df7-dgkn6_82bb6884-6681-4382-b784-feadaf2a891f/manager/0.log" Mar 10 
20:41:50 crc kubenswrapper[4861]: I0310 20:41:50.930540 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-6cd66dbd4b-5fzrs_425d8351-8597-4696-99f6-8ce744082d84/manager/0.log" Mar 10 20:41:50 crc kubenswrapper[4861]: I0310 20:41:50.957889 4861 scope.go:117] "RemoveContainer" containerID="5e0a04df5740c120052e23680da7c45126dec6674fa679c211d70be3fb442ea4" Mar 10 20:41:50 crc kubenswrapper[4861]: E0310 20:41:50.958103 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qttbr_openshift-machine-config-operator(771189c2-452d-4204-a0b7-abfe9ba62bd0)\"" pod="openshift-machine-config-operator/machine-config-daemon-qttbr" podUID="771189c2-452d-4204-a0b7-abfe9ba62bd0" Mar 10 20:41:51 crc kubenswrapper[4861]: I0310 20:41:51.008848 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-5c5cb9c4d7-kttwb_aa17b421-616d-4842-aefd-f21a47952272/manager/0.log" Mar 10 20:41:51 crc kubenswrapper[4861]: I0310 20:41:51.078965 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-6679ddfdc7-5sqmh_33e7a010-7cd8-4eef-b19f-30ea72eb0a03/manager/0.log" Mar 10 20:41:51 crc kubenswrapper[4861]: I0310 20:41:51.188624 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-6dd88c6f67-72px6_3e4cfd72-8809-4af1-90da-66587816e46e/manager/0.log" Mar 10 20:41:57 crc kubenswrapper[4861]: I0310 20:41:57.351639 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-677bd678f7-rzrtw_54044555-52fe-44c6-9e47-7f2748a8b114/manager/0.log" Mar 10 20:42:00 crc kubenswrapper[4861]: I0310 20:42:00.161653 4861 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29552922-bg285"] Mar 10 20:42:00 crc kubenswrapper[4861]: E0310 20:42:00.162414 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f33216c9-13ea-4613-84a7-1659a29e9569" containerName="container-00" Mar 10 20:42:00 crc kubenswrapper[4861]: I0310 20:42:00.162447 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="f33216c9-13ea-4613-84a7-1659a29e9569" containerName="container-00" Mar 10 20:42:00 crc kubenswrapper[4861]: I0310 20:42:00.162907 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="f33216c9-13ea-4613-84a7-1659a29e9569" containerName="container-00" Mar 10 20:42:00 crc kubenswrapper[4861]: I0310 20:42:00.164123 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29552922-bg285" Mar 10 20:42:00 crc kubenswrapper[4861]: I0310 20:42:00.166998 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 10 20:42:00 crc kubenswrapper[4861]: I0310 20:42:00.168101 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-gfbj2" Mar 10 20:42:00 crc kubenswrapper[4861]: I0310 20:42:00.168130 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 10 20:42:00 crc kubenswrapper[4861]: I0310 20:42:00.202655 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29552922-bg285"] Mar 10 20:42:00 crc kubenswrapper[4861]: I0310 20:42:00.242076 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tqmbt\" (UniqueName: \"kubernetes.io/projected/dcd75834-2331-46e3-b1d2-4c393f5a45d2-kube-api-access-tqmbt\") pod \"auto-csr-approver-29552922-bg285\" (UID: \"dcd75834-2331-46e3-b1d2-4c393f5a45d2\") " 
pod="openshift-infra/auto-csr-approver-29552922-bg285" Mar 10 20:42:00 crc kubenswrapper[4861]: I0310 20:42:00.344797 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tqmbt\" (UniqueName: \"kubernetes.io/projected/dcd75834-2331-46e3-b1d2-4c393f5a45d2-kube-api-access-tqmbt\") pod \"auto-csr-approver-29552922-bg285\" (UID: \"dcd75834-2331-46e3-b1d2-4c393f5a45d2\") " pod="openshift-infra/auto-csr-approver-29552922-bg285" Mar 10 20:42:00 crc kubenswrapper[4861]: I0310 20:42:00.373656 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tqmbt\" (UniqueName: \"kubernetes.io/projected/dcd75834-2331-46e3-b1d2-4c393f5a45d2-kube-api-access-tqmbt\") pod \"auto-csr-approver-29552922-bg285\" (UID: \"dcd75834-2331-46e3-b1d2-4c393f5a45d2\") " pod="openshift-infra/auto-csr-approver-29552922-bg285" Mar 10 20:42:00 crc kubenswrapper[4861]: I0310 20:42:00.495731 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29552922-bg285" Mar 10 20:42:01 crc kubenswrapper[4861]: I0310 20:42:01.038374 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29552922-bg285"] Mar 10 20:42:01 crc kubenswrapper[4861]: I0310 20:42:01.243738 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29552922-bg285" event={"ID":"dcd75834-2331-46e3-b1d2-4c393f5a45d2","Type":"ContainerStarted","Data":"ca5146c2f099e2a6871ddc249c04ed10c95aa94e55209f8e5e22106b0f55e936"} Mar 10 20:42:02 crc kubenswrapper[4861]: I0310 20:42:02.965444 4861 scope.go:117] "RemoveContainer" containerID="5e0a04df5740c120052e23680da7c45126dec6674fa679c211d70be3fb442ea4" Mar 10 20:42:02 crc kubenswrapper[4861]: E0310 20:42:02.967209 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-qttbr_openshift-machine-config-operator(771189c2-452d-4204-a0b7-abfe9ba62bd0)\"" pod="openshift-machine-config-operator/machine-config-daemon-qttbr" podUID="771189c2-452d-4204-a0b7-abfe9ba62bd0" Mar 10 20:42:03 crc kubenswrapper[4861]: I0310 20:42:03.300483 4861 generic.go:334] "Generic (PLEG): container finished" podID="dcd75834-2331-46e3-b1d2-4c393f5a45d2" containerID="d8edfa3a7dfe23a22ca6be4b0db2bb9b74247dd7c0f38b59de2da9d087721955" exitCode=0 Mar 10 20:42:03 crc kubenswrapper[4861]: I0310 20:42:03.300545 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29552922-bg285" event={"ID":"dcd75834-2331-46e3-b1d2-4c393f5a45d2","Type":"ContainerDied","Data":"d8edfa3a7dfe23a22ca6be4b0db2bb9b74247dd7c0f38b59de2da9d087721955"} Mar 10 20:42:04 crc kubenswrapper[4861]: I0310 20:42:04.654084 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29552922-bg285" Mar 10 20:42:04 crc kubenswrapper[4861]: I0310 20:42:04.830300 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tqmbt\" (UniqueName: \"kubernetes.io/projected/dcd75834-2331-46e3-b1d2-4c393f5a45d2-kube-api-access-tqmbt\") pod \"dcd75834-2331-46e3-b1d2-4c393f5a45d2\" (UID: \"dcd75834-2331-46e3-b1d2-4c393f5a45d2\") " Mar 10 20:42:04 crc kubenswrapper[4861]: I0310 20:42:04.836853 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dcd75834-2331-46e3-b1d2-4c393f5a45d2-kube-api-access-tqmbt" (OuterVolumeSpecName: "kube-api-access-tqmbt") pod "dcd75834-2331-46e3-b1d2-4c393f5a45d2" (UID: "dcd75834-2331-46e3-b1d2-4c393f5a45d2"). InnerVolumeSpecName "kube-api-access-tqmbt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 10 20:42:04 crc kubenswrapper[4861]: I0310 20:42:04.932622 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tqmbt\" (UniqueName: \"kubernetes.io/projected/dcd75834-2331-46e3-b1d2-4c393f5a45d2-kube-api-access-tqmbt\") on node \"crc\" DevicePath \"\"" Mar 10 20:42:05 crc kubenswrapper[4861]: I0310 20:42:05.329805 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29552922-bg285" event={"ID":"dcd75834-2331-46e3-b1d2-4c393f5a45d2","Type":"ContainerDied","Data":"ca5146c2f099e2a6871ddc249c04ed10c95aa94e55209f8e5e22106b0f55e936"} Mar 10 20:42:05 crc kubenswrapper[4861]: I0310 20:42:05.329862 4861 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ca5146c2f099e2a6871ddc249c04ed10c95aa94e55209f8e5e22106b0f55e936" Mar 10 20:42:05 crc kubenswrapper[4861]: I0310 20:42:05.330190 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29552922-bg285" Mar 10 20:42:05 crc kubenswrapper[4861]: I0310 20:42:05.742082 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29552916-x8lgz"] Mar 10 20:42:05 crc kubenswrapper[4861]: I0310 20:42:05.754449 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29552916-x8lgz"] Mar 10 20:42:06 crc kubenswrapper[4861]: I0310 20:42:06.975812 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e66623e4-8467-42a1-bd66-cbd8a720deb2" path="/var/lib/kubelet/pods/e66623e4-8467-42a1-bd66-cbd8a720deb2/volumes" Mar 10 20:42:12 crc kubenswrapper[4861]: I0310 20:42:12.567385 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-dwxk9_956b8171-6837-47bf-9f44-7f9cc43a1e30/control-plane-machine-set-operator/0.log" Mar 10 20:42:12 crc kubenswrapper[4861]: 
I0310 20:42:12.671401 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-gwxrl_dd17f745-4fce-4459-a068-94313e612723/kube-rbac-proxy/0.log" Mar 10 20:42:12 crc kubenswrapper[4861]: I0310 20:42:12.757563 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-gwxrl_dd17f745-4fce-4459-a068-94313e612723/machine-api-operator/0.log" Mar 10 20:42:13 crc kubenswrapper[4861]: I0310 20:42:13.958226 4861 scope.go:117] "RemoveContainer" containerID="5e0a04df5740c120052e23680da7c45126dec6674fa679c211d70be3fb442ea4" Mar 10 20:42:13 crc kubenswrapper[4861]: E0310 20:42:13.958514 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qttbr_openshift-machine-config-operator(771189c2-452d-4204-a0b7-abfe9ba62bd0)\"" pod="openshift-machine-config-operator/machine-config-daemon-qttbr" podUID="771189c2-452d-4204-a0b7-abfe9ba62bd0" Mar 10 20:42:27 crc kubenswrapper[4861]: I0310 20:42:27.545487 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-545d4d4674-hkkq9_17cbe57a-85b5-4996-b7ad-c43119116d78/cert-manager-controller/0.log" Mar 10 20:42:27 crc kubenswrapper[4861]: I0310 20:42:27.697451 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-5545bd876-v4x8j_b7207043-ee01-49f4-b006-fa5a3a671508/cert-manager-cainjector/0.log" Mar 10 20:42:27 crc kubenswrapper[4861]: I0310 20:42:27.749205 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-6888856db4-7lqrv_5a8cb2e6-0b14-4b01-9515-43bba8e79f1b/cert-manager-webhook/0.log" Mar 10 20:42:28 crc kubenswrapper[4861]: I0310 20:42:28.963031 4861 scope.go:117] "RemoveContainer" 
containerID="5e0a04df5740c120052e23680da7c45126dec6674fa679c211d70be3fb442ea4" Mar 10 20:42:28 crc kubenswrapper[4861]: E0310 20:42:28.963284 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qttbr_openshift-machine-config-operator(771189c2-452d-4204-a0b7-abfe9ba62bd0)\"" pod="openshift-machine-config-operator/machine-config-daemon-qttbr" podUID="771189c2-452d-4204-a0b7-abfe9ba62bd0" Mar 10 20:42:40 crc kubenswrapper[4861]: I0310 20:42:40.958834 4861 scope.go:117] "RemoveContainer" containerID="5e0a04df5740c120052e23680da7c45126dec6674fa679c211d70be3fb442ea4" Mar 10 20:42:40 crc kubenswrapper[4861]: E0310 20:42:40.960638 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qttbr_openshift-machine-config-operator(771189c2-452d-4204-a0b7-abfe9ba62bd0)\"" pod="openshift-machine-config-operator/machine-config-daemon-qttbr" podUID="771189c2-452d-4204-a0b7-abfe9ba62bd0" Mar 10 20:42:42 crc kubenswrapper[4861]: I0310 20:42:42.072982 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-5dcbbd79cf-q2p6w_31bb99e7-c4f3-4a27-99a8-e81527592813/nmstate-console-plugin/0.log" Mar 10 20:42:42 crc kubenswrapper[4861]: I0310 20:42:42.256321 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-ptwwk_885ff0f1-fcc9-4add-8284-2af1e15a4c2f/nmstate-handler/0.log" Mar 10 20:42:42 crc kubenswrapper[4861]: I0310 20:42:42.317737 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-69594cc75-grnnr_1a246ed4-9906-4d0b-85bd-fab444e3e5e8/kube-rbac-proxy/0.log" Mar 10 20:42:42 crc 
kubenswrapper[4861]: I0310 20:42:42.373972 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-69594cc75-grnnr_1a246ed4-9906-4d0b-85bd-fab444e3e5e8/nmstate-metrics/0.log" Mar 10 20:42:42 crc kubenswrapper[4861]: I0310 20:42:42.453460 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-75c5dccd6c-mxv85_ecb4ad72-c43c-4a90-8808-92ca31191f2c/nmstate-operator/0.log" Mar 10 20:42:42 crc kubenswrapper[4861]: I0310 20:42:42.544064 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-786f45cff4-l6z4v_511c207e-fabc-45ab-8da9-287d3a7d3889/nmstate-webhook/0.log" Mar 10 20:42:51 crc kubenswrapper[4861]: I0310 20:42:51.958414 4861 scope.go:117] "RemoveContainer" containerID="5e0a04df5740c120052e23680da7c45126dec6674fa679c211d70be3fb442ea4" Mar 10 20:42:51 crc kubenswrapper[4861]: E0310 20:42:51.959245 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qttbr_openshift-machine-config-operator(771189c2-452d-4204-a0b7-abfe9ba62bd0)\"" pod="openshift-machine-config-operator/machine-config-daemon-qttbr" podUID="771189c2-452d-4204-a0b7-abfe9ba62bd0" Mar 10 20:43:00 crc kubenswrapper[4861]: I0310 20:43:00.423573 4861 scope.go:117] "RemoveContainer" containerID="e886cbe3a5d71a95f13e689cadf7cdb6267e12b8a669007ca25923b5984e4fec" Mar 10 20:43:04 crc kubenswrapper[4861]: I0310 20:43:04.959437 4861 scope.go:117] "RemoveContainer" containerID="5e0a04df5740c120052e23680da7c45126dec6674fa679c211d70be3fb442ea4" Mar 10 20:43:04 crc kubenswrapper[4861]: E0310 20:43:04.960618 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-qttbr_openshift-machine-config-operator(771189c2-452d-4204-a0b7-abfe9ba62bd0)\"" pod="openshift-machine-config-operator/machine-config-daemon-qttbr" podUID="771189c2-452d-4204-a0b7-abfe9ba62bd0" Mar 10 20:43:14 crc kubenswrapper[4861]: I0310 20:43:14.840434 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-86ddb6bd46-zfmmn_25270671-6926-47af-bb35-43c48308f5fd/kube-rbac-proxy/0.log" Mar 10 20:43:15 crc kubenswrapper[4861]: I0310 20:43:15.090944 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-mxnp2_d76161e5-e488-4365-976f-5487ba4fa265/cp-frr-files/0.log" Mar 10 20:43:15 crc kubenswrapper[4861]: I0310 20:43:15.256984 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-mxnp2_d76161e5-e488-4365-976f-5487ba4fa265/cp-frr-files/0.log" Mar 10 20:43:15 crc kubenswrapper[4861]: I0310 20:43:15.265144 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-mxnp2_d76161e5-e488-4365-976f-5487ba4fa265/cp-reloader/0.log" Mar 10 20:43:15 crc kubenswrapper[4861]: I0310 20:43:15.304023 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-86ddb6bd46-zfmmn_25270671-6926-47af-bb35-43c48308f5fd/controller/0.log" Mar 10 20:43:15 crc kubenswrapper[4861]: I0310 20:43:15.307805 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-mxnp2_d76161e5-e488-4365-976f-5487ba4fa265/cp-metrics/0.log" Mar 10 20:43:15 crc kubenswrapper[4861]: I0310 20:43:15.437473 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-mxnp2_d76161e5-e488-4365-976f-5487ba4fa265/cp-reloader/0.log" Mar 10 20:43:15 crc kubenswrapper[4861]: I0310 20:43:15.591015 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-mxnp2_d76161e5-e488-4365-976f-5487ba4fa265/cp-reloader/0.log" Mar 10 20:43:15 crc kubenswrapper[4861]: I0310 
20:43:15.633110 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-mxnp2_d76161e5-e488-4365-976f-5487ba4fa265/cp-metrics/0.log" Mar 10 20:43:15 crc kubenswrapper[4861]: I0310 20:43:15.649508 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-mxnp2_d76161e5-e488-4365-976f-5487ba4fa265/cp-frr-files/0.log" Mar 10 20:43:15 crc kubenswrapper[4861]: I0310 20:43:15.652113 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-mxnp2_d76161e5-e488-4365-976f-5487ba4fa265/cp-metrics/0.log" Mar 10 20:43:15 crc kubenswrapper[4861]: I0310 20:43:15.779767 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-mxnp2_d76161e5-e488-4365-976f-5487ba4fa265/cp-frr-files/0.log" Mar 10 20:43:15 crc kubenswrapper[4861]: I0310 20:43:15.824522 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-mxnp2_d76161e5-e488-4365-976f-5487ba4fa265/controller/0.log" Mar 10 20:43:15 crc kubenswrapper[4861]: I0310 20:43:15.828472 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-mxnp2_d76161e5-e488-4365-976f-5487ba4fa265/cp-reloader/0.log" Mar 10 20:43:15 crc kubenswrapper[4861]: I0310 20:43:15.844631 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-mxnp2_d76161e5-e488-4365-976f-5487ba4fa265/cp-metrics/0.log" Mar 10 20:43:16 crc kubenswrapper[4861]: I0310 20:43:15.997780 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-mxnp2_d76161e5-e488-4365-976f-5487ba4fa265/frr-metrics/0.log" Mar 10 20:43:16 crc kubenswrapper[4861]: I0310 20:43:16.002850 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-mxnp2_d76161e5-e488-4365-976f-5487ba4fa265/kube-rbac-proxy/0.log" Mar 10 20:43:16 crc kubenswrapper[4861]: I0310 20:43:16.072044 4861 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-mxnp2_d76161e5-e488-4365-976f-5487ba4fa265/kube-rbac-proxy-frr/0.log" Mar 10 20:43:16 crc kubenswrapper[4861]: I0310 20:43:16.210317 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-mxnp2_d76161e5-e488-4365-976f-5487ba4fa265/reloader/0.log" Mar 10 20:43:16 crc kubenswrapper[4861]: I0310 20:43:16.261491 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7f989f654f-mbb6b_b7c09d64-564e-4dad-91c3-ecaf75f1a6a4/frr-k8s-webhook-server/0.log" Mar 10 20:43:16 crc kubenswrapper[4861]: I0310 20:43:16.406196 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-569d9ccd5-tf6qj_71fe4ef2-5b4c-4842-978e-3fa1b4d71ade/manager/0.log" Mar 10 20:43:16 crc kubenswrapper[4861]: I0310 20:43:16.567584 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-79b7ddc8f8-stpjm_a772e923-abe7-448d-978c-de1cf0020a82/webhook-server/0.log" Mar 10 20:43:16 crc kubenswrapper[4861]: I0310 20:43:16.691340 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-bvtm5_f8951eba-63cf-4bd6-a2d6-6829c198ac80/kube-rbac-proxy/0.log" Mar 10 20:43:17 crc kubenswrapper[4861]: I0310 20:43:17.244363 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-bvtm5_f8951eba-63cf-4bd6-a2d6-6829c198ac80/speaker/0.log" Mar 10 20:43:17 crc kubenswrapper[4861]: I0310 20:43:17.885136 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-mxnp2_d76161e5-e488-4365-976f-5487ba4fa265/frr/0.log" Mar 10 20:43:18 crc kubenswrapper[4861]: I0310 20:43:18.958063 4861 scope.go:117] "RemoveContainer" containerID="5e0a04df5740c120052e23680da7c45126dec6674fa679c211d70be3fb442ea4" Mar 10 20:43:18 crc kubenswrapper[4861]: E0310 20:43:18.958497 4861 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qttbr_openshift-machine-config-operator(771189c2-452d-4204-a0b7-abfe9ba62bd0)\"" pod="openshift-machine-config-operator/machine-config-daemon-qttbr" podUID="771189c2-452d-4204-a0b7-abfe9ba62bd0" Mar 10 20:43:32 crc kubenswrapper[4861]: I0310 20:43:32.225297 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82rmvb5_49257ab5-4d09-4d8b-8f27-e42873f65b4d/util/0.log" Mar 10 20:43:32 crc kubenswrapper[4861]: I0310 20:43:32.359215 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82rmvb5_49257ab5-4d09-4d8b-8f27-e42873f65b4d/util/0.log" Mar 10 20:43:32 crc kubenswrapper[4861]: I0310 20:43:32.359722 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82rmvb5_49257ab5-4d09-4d8b-8f27-e42873f65b4d/pull/0.log" Mar 10 20:43:32 crc kubenswrapper[4861]: I0310 20:43:32.439759 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82rmvb5_49257ab5-4d09-4d8b-8f27-e42873f65b4d/pull/0.log" Mar 10 20:43:32 crc kubenswrapper[4861]: I0310 20:43:32.530758 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82rmvb5_49257ab5-4d09-4d8b-8f27-e42873f65b4d/util/0.log" Mar 10 20:43:32 crc kubenswrapper[4861]: I0310 20:43:32.535648 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82rmvb5_49257ab5-4d09-4d8b-8f27-e42873f65b4d/pull/0.log" Mar 10 20:43:32 crc kubenswrapper[4861]: I0310 
20:43:32.557973 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82rmvb5_49257ab5-4d09-4d8b-8f27-e42873f65b4d/extract/0.log" Mar 10 20:43:32 crc kubenswrapper[4861]: I0310 20:43:32.744273 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5sbjzt_1d40514c-e06d-4c94-a8fb-17a30c1755a8/util/0.log" Mar 10 20:43:32 crc kubenswrapper[4861]: I0310 20:43:32.889153 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5sbjzt_1d40514c-e06d-4c94-a8fb-17a30c1755a8/pull/0.log" Mar 10 20:43:32 crc kubenswrapper[4861]: I0310 20:43:32.911913 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5sbjzt_1d40514c-e06d-4c94-a8fb-17a30c1755a8/util/0.log" Mar 10 20:43:32 crc kubenswrapper[4861]: I0310 20:43:32.958023 4861 scope.go:117] "RemoveContainer" containerID="5e0a04df5740c120052e23680da7c45126dec6674fa679c211d70be3fb442ea4" Mar 10 20:43:32 crc kubenswrapper[4861]: E0310 20:43:32.958243 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qttbr_openshift-machine-config-operator(771189c2-452d-4204-a0b7-abfe9ba62bd0)\"" pod="openshift-machine-config-operator/machine-config-daemon-qttbr" podUID="771189c2-452d-4204-a0b7-abfe9ba62bd0" Mar 10 20:43:32 crc kubenswrapper[4861]: I0310 20:43:32.962809 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5sbjzt_1d40514c-e06d-4c94-a8fb-17a30c1755a8/pull/0.log" Mar 10 20:43:33 crc kubenswrapper[4861]: I0310 
20:43:33.101296 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5sbjzt_1d40514c-e06d-4c94-a8fb-17a30c1755a8/util/0.log" Mar 10 20:43:33 crc kubenswrapper[4861]: I0310 20:43:33.116834 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5sbjzt_1d40514c-e06d-4c94-a8fb-17a30c1755a8/extract/0.log" Mar 10 20:43:33 crc kubenswrapper[4861]: I0310 20:43:33.157934 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5sbjzt_1d40514c-e06d-4c94-a8fb-17a30c1755a8/pull/0.log" Mar 10 20:43:33 crc kubenswrapper[4861]: I0310 20:43:33.281295 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-kmxlp_2354bfd0-6f1e-47c3-b79f-614d201635e8/extract-utilities/0.log" Mar 10 20:43:33 crc kubenswrapper[4861]: I0310 20:43:33.569076 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-kmxlp_2354bfd0-6f1e-47c3-b79f-614d201635e8/extract-content/0.log" Mar 10 20:43:33 crc kubenswrapper[4861]: I0310 20:43:33.580685 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-kmxlp_2354bfd0-6f1e-47c3-b79f-614d201635e8/extract-utilities/0.log" Mar 10 20:43:33 crc kubenswrapper[4861]: I0310 20:43:33.613821 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-kmxlp_2354bfd0-6f1e-47c3-b79f-614d201635e8/extract-content/0.log" Mar 10 20:43:33 crc kubenswrapper[4861]: I0310 20:43:33.777602 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-kmxlp_2354bfd0-6f1e-47c3-b79f-614d201635e8/extract-utilities/0.log" Mar 10 20:43:33 crc kubenswrapper[4861]: I0310 20:43:33.851200 4861 
log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-kmxlp_2354bfd0-6f1e-47c3-b79f-614d201635e8/extract-content/0.log" Mar 10 20:43:33 crc kubenswrapper[4861]: I0310 20:43:33.967154 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-qk6mz_cad2f2f8-05db-4def-ac2f-1bfbde174dfe/extract-utilities/0.log" Mar 10 20:43:34 crc kubenswrapper[4861]: I0310 20:43:34.156921 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-qk6mz_cad2f2f8-05db-4def-ac2f-1bfbde174dfe/extract-utilities/0.log" Mar 10 20:43:34 crc kubenswrapper[4861]: I0310 20:43:34.200411 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-qk6mz_cad2f2f8-05db-4def-ac2f-1bfbde174dfe/extract-content/0.log" Mar 10 20:43:34 crc kubenswrapper[4861]: I0310 20:43:34.219509 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-qk6mz_cad2f2f8-05db-4def-ac2f-1bfbde174dfe/extract-content/0.log" Mar 10 20:43:34 crc kubenswrapper[4861]: I0310 20:43:34.412792 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-qk6mz_cad2f2f8-05db-4def-ac2f-1bfbde174dfe/extract-content/0.log" Mar 10 20:43:34 crc kubenswrapper[4861]: I0310 20:43:34.439013 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-qk6mz_cad2f2f8-05db-4def-ac2f-1bfbde174dfe/extract-utilities/0.log" Mar 10 20:43:34 crc kubenswrapper[4861]: I0310 20:43:34.644629 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f49ljx8_53bcff5b-e791-43cc-a898-51474367544d/util/0.log" Mar 10 20:43:34 crc kubenswrapper[4861]: I0310 20:43:34.715412 4861 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_certified-operators-kmxlp_2354bfd0-6f1e-47c3-b79f-614d201635e8/registry-server/0.log" Mar 10 20:43:34 crc kubenswrapper[4861]: I0310 20:43:34.918771 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f49ljx8_53bcff5b-e791-43cc-a898-51474367544d/pull/0.log" Mar 10 20:43:34 crc kubenswrapper[4861]: I0310 20:43:34.926487 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f49ljx8_53bcff5b-e791-43cc-a898-51474367544d/util/0.log" Mar 10 20:43:35 crc kubenswrapper[4861]: I0310 20:43:35.021505 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f49ljx8_53bcff5b-e791-43cc-a898-51474367544d/pull/0.log" Mar 10 20:43:35 crc kubenswrapper[4861]: I0310 20:43:35.287640 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f49ljx8_53bcff5b-e791-43cc-a898-51474367544d/extract/0.log" Mar 10 20:43:35 crc kubenswrapper[4861]: I0310 20:43:35.289047 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f49ljx8_53bcff5b-e791-43cc-a898-51474367544d/util/0.log" Mar 10 20:43:35 crc kubenswrapper[4861]: I0310 20:43:35.290258 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f49ljx8_53bcff5b-e791-43cc-a898-51474367544d/pull/0.log" Mar 10 20:43:35 crc kubenswrapper[4861]: I0310 20:43:35.381754 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-qk6mz_cad2f2f8-05db-4def-ac2f-1bfbde174dfe/registry-server/0.log" Mar 10 20:43:35 crc kubenswrapper[4861]: I0310 
20:43:35.464639 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-q9slc_de5224d4-6cff-4974-a40c-9d5d44ee55fb/marketplace-operator/0.log" Mar 10 20:43:35 crc kubenswrapper[4861]: I0310 20:43:35.524132 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-dtm67_9dc81a19-65da-4af1-aff0-a0d38434d370/extract-utilities/0.log" Mar 10 20:43:35 crc kubenswrapper[4861]: I0310 20:43:35.701799 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-dtm67_9dc81a19-65da-4af1-aff0-a0d38434d370/extract-utilities/0.log" Mar 10 20:43:35 crc kubenswrapper[4861]: I0310 20:43:35.730918 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-dtm67_9dc81a19-65da-4af1-aff0-a0d38434d370/extract-content/0.log" Mar 10 20:43:35 crc kubenswrapper[4861]: I0310 20:43:35.746863 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-dtm67_9dc81a19-65da-4af1-aff0-a0d38434d370/extract-content/0.log" Mar 10 20:43:35 crc kubenswrapper[4861]: I0310 20:43:35.883400 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-dtm67_9dc81a19-65da-4af1-aff0-a0d38434d370/extract-content/0.log" Mar 10 20:43:35 crc kubenswrapper[4861]: I0310 20:43:35.910649 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-dtm67_9dc81a19-65da-4af1-aff0-a0d38434d370/extract-utilities/0.log" Mar 10 20:43:36 crc kubenswrapper[4861]: I0310 20:43:36.080507 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-8gkzl_0d359be0-9080-42cc-9d49-2e7604bab469/extract-utilities/0.log" Mar 10 20:43:36 crc kubenswrapper[4861]: I0310 20:43:36.081582 4861 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_redhat-marketplace-dtm67_9dc81a19-65da-4af1-aff0-a0d38434d370/registry-server/0.log" Mar 10 20:43:36 crc kubenswrapper[4861]: I0310 20:43:36.182066 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-8gkzl_0d359be0-9080-42cc-9d49-2e7604bab469/extract-content/0.log" Mar 10 20:43:36 crc kubenswrapper[4861]: I0310 20:43:36.198168 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-8gkzl_0d359be0-9080-42cc-9d49-2e7604bab469/extract-content/0.log" Mar 10 20:43:36 crc kubenswrapper[4861]: I0310 20:43:36.201642 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-8gkzl_0d359be0-9080-42cc-9d49-2e7604bab469/extract-utilities/0.log" Mar 10 20:43:36 crc kubenswrapper[4861]: I0310 20:43:36.343524 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-8gkzl_0d359be0-9080-42cc-9d49-2e7604bab469/extract-utilities/0.log" Mar 10 20:43:36 crc kubenswrapper[4861]: I0310 20:43:36.350591 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-8gkzl_0d359be0-9080-42cc-9d49-2e7604bab469/extract-content/0.log" Mar 10 20:43:36 crc kubenswrapper[4861]: I0310 20:43:36.504886 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-8gkzl_0d359be0-9080-42cc-9d49-2e7604bab469/registry-server/0.log" Mar 10 20:43:43 crc kubenswrapper[4861]: I0310 20:43:43.957671 4861 scope.go:117] "RemoveContainer" containerID="5e0a04df5740c120052e23680da7c45126dec6674fa679c211d70be3fb442ea4" Mar 10 20:43:43 crc kubenswrapper[4861]: E0310 20:43:43.958592 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-qttbr_openshift-machine-config-operator(771189c2-452d-4204-a0b7-abfe9ba62bd0)\"" pod="openshift-machine-config-operator/machine-config-daemon-qttbr" podUID="771189c2-452d-4204-a0b7-abfe9ba62bd0" Mar 10 20:43:55 crc kubenswrapper[4861]: I0310 20:43:55.958925 4861 scope.go:117] "RemoveContainer" containerID="5e0a04df5740c120052e23680da7c45126dec6674fa679c211d70be3fb442ea4" Mar 10 20:43:55 crc kubenswrapper[4861]: E0310 20:43:55.959810 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qttbr_openshift-machine-config-operator(771189c2-452d-4204-a0b7-abfe9ba62bd0)\"" pod="openshift-machine-config-operator/machine-config-daemon-qttbr" podUID="771189c2-452d-4204-a0b7-abfe9ba62bd0" Mar 10 20:44:00 crc kubenswrapper[4861]: I0310 20:44:00.177143 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29552924-67n62"] Mar 10 20:44:00 crc kubenswrapper[4861]: E0310 20:44:00.178059 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dcd75834-2331-46e3-b1d2-4c393f5a45d2" containerName="oc" Mar 10 20:44:00 crc kubenswrapper[4861]: I0310 20:44:00.178081 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="dcd75834-2331-46e3-b1d2-4c393f5a45d2" containerName="oc" Mar 10 20:44:00 crc kubenswrapper[4861]: I0310 20:44:00.178401 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="dcd75834-2331-46e3-b1d2-4c393f5a45d2" containerName="oc" Mar 10 20:44:00 crc kubenswrapper[4861]: I0310 20:44:00.179325 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29552924-67n62" Mar 10 20:44:00 crc kubenswrapper[4861]: I0310 20:44:00.180977 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-gfbj2" Mar 10 20:44:00 crc kubenswrapper[4861]: I0310 20:44:00.182425 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 10 20:44:00 crc kubenswrapper[4861]: I0310 20:44:00.183514 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 10 20:44:00 crc kubenswrapper[4861]: I0310 20:44:00.188646 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29552924-67n62"] Mar 10 20:44:00 crc kubenswrapper[4861]: I0310 20:44:00.302352 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kl2rq\" (UniqueName: \"kubernetes.io/projected/95e0ded7-2a3d-450a-83a0-8a1fdd77bfa0-kube-api-access-kl2rq\") pod \"auto-csr-approver-29552924-67n62\" (UID: \"95e0ded7-2a3d-450a-83a0-8a1fdd77bfa0\") " pod="openshift-infra/auto-csr-approver-29552924-67n62" Mar 10 20:44:00 crc kubenswrapper[4861]: I0310 20:44:00.407947 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kl2rq\" (UniqueName: \"kubernetes.io/projected/95e0ded7-2a3d-450a-83a0-8a1fdd77bfa0-kube-api-access-kl2rq\") pod \"auto-csr-approver-29552924-67n62\" (UID: \"95e0ded7-2a3d-450a-83a0-8a1fdd77bfa0\") " pod="openshift-infra/auto-csr-approver-29552924-67n62" Mar 10 20:44:00 crc kubenswrapper[4861]: I0310 20:44:00.433639 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kl2rq\" (UniqueName: \"kubernetes.io/projected/95e0ded7-2a3d-450a-83a0-8a1fdd77bfa0-kube-api-access-kl2rq\") pod \"auto-csr-approver-29552924-67n62\" (UID: \"95e0ded7-2a3d-450a-83a0-8a1fdd77bfa0\") " 
pod="openshift-infra/auto-csr-approver-29552924-67n62" Mar 10 20:44:00 crc kubenswrapper[4861]: I0310 20:44:00.499404 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29552924-67n62" Mar 10 20:44:01 crc kubenswrapper[4861]: I0310 20:44:01.090493 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29552924-67n62"] Mar 10 20:44:01 crc kubenswrapper[4861]: I0310 20:44:01.104119 4861 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 10 20:44:01 crc kubenswrapper[4861]: I0310 20:44:01.344538 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29552924-67n62" event={"ID":"95e0ded7-2a3d-450a-83a0-8a1fdd77bfa0","Type":"ContainerStarted","Data":"fc5016b8a7effe1aefb2ab646801bc4aade1be815e232d309d48c62ee1a73e9e"} Mar 10 20:44:03 crc kubenswrapper[4861]: I0310 20:44:03.379289 4861 generic.go:334] "Generic (PLEG): container finished" podID="95e0ded7-2a3d-450a-83a0-8a1fdd77bfa0" containerID="d7f404cb579ef1712c1a586eb077310f7a093e234d6f750c9fece20f621fafd9" exitCode=0 Mar 10 20:44:03 crc kubenswrapper[4861]: I0310 20:44:03.379402 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29552924-67n62" event={"ID":"95e0ded7-2a3d-450a-83a0-8a1fdd77bfa0","Type":"ContainerDied","Data":"d7f404cb579ef1712c1a586eb077310f7a093e234d6f750c9fece20f621fafd9"} Mar 10 20:44:04 crc kubenswrapper[4861]: I0310 20:44:04.846519 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29552924-67n62" Mar 10 20:44:05 crc kubenswrapper[4861]: I0310 20:44:05.001183 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kl2rq\" (UniqueName: \"kubernetes.io/projected/95e0ded7-2a3d-450a-83a0-8a1fdd77bfa0-kube-api-access-kl2rq\") pod \"95e0ded7-2a3d-450a-83a0-8a1fdd77bfa0\" (UID: \"95e0ded7-2a3d-450a-83a0-8a1fdd77bfa0\") " Mar 10 20:44:05 crc kubenswrapper[4861]: I0310 20:44:05.016242 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/95e0ded7-2a3d-450a-83a0-8a1fdd77bfa0-kube-api-access-kl2rq" (OuterVolumeSpecName: "kube-api-access-kl2rq") pod "95e0ded7-2a3d-450a-83a0-8a1fdd77bfa0" (UID: "95e0ded7-2a3d-450a-83a0-8a1fdd77bfa0"). InnerVolumeSpecName "kube-api-access-kl2rq". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 10 20:44:05 crc kubenswrapper[4861]: I0310 20:44:05.104500 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kl2rq\" (UniqueName: \"kubernetes.io/projected/95e0ded7-2a3d-450a-83a0-8a1fdd77bfa0-kube-api-access-kl2rq\") on node \"crc\" DevicePath \"\"" Mar 10 20:44:05 crc kubenswrapper[4861]: I0310 20:44:05.398508 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29552924-67n62" event={"ID":"95e0ded7-2a3d-450a-83a0-8a1fdd77bfa0","Type":"ContainerDied","Data":"fc5016b8a7effe1aefb2ab646801bc4aade1be815e232d309d48c62ee1a73e9e"} Mar 10 20:44:05 crc kubenswrapper[4861]: I0310 20:44:05.398774 4861 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fc5016b8a7effe1aefb2ab646801bc4aade1be815e232d309d48c62ee1a73e9e" Mar 10 20:44:05 crc kubenswrapper[4861]: I0310 20:44:05.398831 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29552924-67n62" Mar 10 20:44:05 crc kubenswrapper[4861]: I0310 20:44:05.916914 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29552918-c456h"] Mar 10 20:44:05 crc kubenswrapper[4861]: I0310 20:44:05.925422 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29552918-c456h"] Mar 10 20:44:06 crc kubenswrapper[4861]: I0310 20:44:06.968268 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9352dac9-5c21-4377-a25e-7f1b74dc48bd" path="/var/lib/kubelet/pods/9352dac9-5c21-4377-a25e-7f1b74dc48bd/volumes" Mar 10 20:44:10 crc kubenswrapper[4861]: I0310 20:44:10.958230 4861 scope.go:117] "RemoveContainer" containerID="5e0a04df5740c120052e23680da7c45126dec6674fa679c211d70be3fb442ea4" Mar 10 20:44:10 crc kubenswrapper[4861]: E0310 20:44:10.959262 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qttbr_openshift-machine-config-operator(771189c2-452d-4204-a0b7-abfe9ba62bd0)\"" pod="openshift-machine-config-operator/machine-config-daemon-qttbr" podUID="771189c2-452d-4204-a0b7-abfe9ba62bd0" Mar 10 20:44:21 crc kubenswrapper[4861]: I0310 20:44:21.972384 4861 scope.go:117] "RemoveContainer" containerID="5e0a04df5740c120052e23680da7c45126dec6674fa679c211d70be3fb442ea4" Mar 10 20:44:21 crc kubenswrapper[4861]: E0310 20:44:21.973458 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qttbr_openshift-machine-config-operator(771189c2-452d-4204-a0b7-abfe9ba62bd0)\"" pod="openshift-machine-config-operator/machine-config-daemon-qttbr" 
podUID="771189c2-452d-4204-a0b7-abfe9ba62bd0" Mar 10 20:44:34 crc kubenswrapper[4861]: I0310 20:44:34.958328 4861 scope.go:117] "RemoveContainer" containerID="5e0a04df5740c120052e23680da7c45126dec6674fa679c211d70be3fb442ea4" Mar 10 20:44:34 crc kubenswrapper[4861]: E0310 20:44:34.959397 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qttbr_openshift-machine-config-operator(771189c2-452d-4204-a0b7-abfe9ba62bd0)\"" pod="openshift-machine-config-operator/machine-config-daemon-qttbr" podUID="771189c2-452d-4204-a0b7-abfe9ba62bd0" Mar 10 20:44:46 crc kubenswrapper[4861]: I0310 20:44:46.968062 4861 scope.go:117] "RemoveContainer" containerID="5e0a04df5740c120052e23680da7c45126dec6674fa679c211d70be3fb442ea4" Mar 10 20:44:46 crc kubenswrapper[4861]: E0310 20:44:46.969255 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qttbr_openshift-machine-config-operator(771189c2-452d-4204-a0b7-abfe9ba62bd0)\"" pod="openshift-machine-config-operator/machine-config-daemon-qttbr" podUID="771189c2-452d-4204-a0b7-abfe9ba62bd0" Mar 10 20:44:58 crc kubenswrapper[4861]: I0310 20:44:58.958275 4861 scope.go:117] "RemoveContainer" containerID="5e0a04df5740c120052e23680da7c45126dec6674fa679c211d70be3fb442ea4" Mar 10 20:44:58 crc kubenswrapper[4861]: E0310 20:44:58.959619 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qttbr_openshift-machine-config-operator(771189c2-452d-4204-a0b7-abfe9ba62bd0)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-qttbr" podUID="771189c2-452d-4204-a0b7-abfe9ba62bd0" Mar 10 20:45:00 crc kubenswrapper[4861]: I0310 20:45:00.158378 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29552925-hssmj"] Mar 10 20:45:00 crc kubenswrapper[4861]: E0310 20:45:00.158715 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="95e0ded7-2a3d-450a-83a0-8a1fdd77bfa0" containerName="oc" Mar 10 20:45:00 crc kubenswrapper[4861]: I0310 20:45:00.158729 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="95e0ded7-2a3d-450a-83a0-8a1fdd77bfa0" containerName="oc" Mar 10 20:45:00 crc kubenswrapper[4861]: I0310 20:45:00.158907 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="95e0ded7-2a3d-450a-83a0-8a1fdd77bfa0" containerName="oc" Mar 10 20:45:00 crc kubenswrapper[4861]: I0310 20:45:00.159404 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29552925-hssmj" Mar 10 20:45:00 crc kubenswrapper[4861]: I0310 20:45:00.161927 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Mar 10 20:45:00 crc kubenswrapper[4861]: I0310 20:45:00.162437 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Mar 10 20:45:00 crc kubenswrapper[4861]: I0310 20:45:00.181124 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29552925-hssmj"] Mar 10 20:45:00 crc kubenswrapper[4861]: I0310 20:45:00.254763 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/95666075-f295-4cd8-88ee-0deabc20f442-config-volume\") pod \"collect-profiles-29552925-hssmj\" (UID: 
\"95666075-f295-4cd8-88ee-0deabc20f442\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29552925-hssmj" Mar 10 20:45:00 crc kubenswrapper[4861]: I0310 20:45:00.255190 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/95666075-f295-4cd8-88ee-0deabc20f442-secret-volume\") pod \"collect-profiles-29552925-hssmj\" (UID: \"95666075-f295-4cd8-88ee-0deabc20f442\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29552925-hssmj" Mar 10 20:45:00 crc kubenswrapper[4861]: I0310 20:45:00.255247 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-26w6l\" (UniqueName: \"kubernetes.io/projected/95666075-f295-4cd8-88ee-0deabc20f442-kube-api-access-26w6l\") pod \"collect-profiles-29552925-hssmj\" (UID: \"95666075-f295-4cd8-88ee-0deabc20f442\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29552925-hssmj" Mar 10 20:45:00 crc kubenswrapper[4861]: I0310 20:45:00.357264 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/95666075-f295-4cd8-88ee-0deabc20f442-secret-volume\") pod \"collect-profiles-29552925-hssmj\" (UID: \"95666075-f295-4cd8-88ee-0deabc20f442\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29552925-hssmj" Mar 10 20:45:00 crc kubenswrapper[4861]: I0310 20:45:00.357367 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-26w6l\" (UniqueName: \"kubernetes.io/projected/95666075-f295-4cd8-88ee-0deabc20f442-kube-api-access-26w6l\") pod \"collect-profiles-29552925-hssmj\" (UID: \"95666075-f295-4cd8-88ee-0deabc20f442\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29552925-hssmj" Mar 10 20:45:00 crc kubenswrapper[4861]: I0310 20:45:00.357493 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/95666075-f295-4cd8-88ee-0deabc20f442-config-volume\") pod \"collect-profiles-29552925-hssmj\" (UID: \"95666075-f295-4cd8-88ee-0deabc20f442\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29552925-hssmj" Mar 10 20:45:00 crc kubenswrapper[4861]: I0310 20:45:00.359004 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/95666075-f295-4cd8-88ee-0deabc20f442-config-volume\") pod \"collect-profiles-29552925-hssmj\" (UID: \"95666075-f295-4cd8-88ee-0deabc20f442\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29552925-hssmj" Mar 10 20:45:00 crc kubenswrapper[4861]: I0310 20:45:00.369856 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/95666075-f295-4cd8-88ee-0deabc20f442-secret-volume\") pod \"collect-profiles-29552925-hssmj\" (UID: \"95666075-f295-4cd8-88ee-0deabc20f442\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29552925-hssmj" Mar 10 20:45:00 crc kubenswrapper[4861]: I0310 20:45:00.380187 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-26w6l\" (UniqueName: \"kubernetes.io/projected/95666075-f295-4cd8-88ee-0deabc20f442-kube-api-access-26w6l\") pod \"collect-profiles-29552925-hssmj\" (UID: \"95666075-f295-4cd8-88ee-0deabc20f442\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29552925-hssmj" Mar 10 20:45:00 crc kubenswrapper[4861]: I0310 20:45:00.492136 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29552925-hssmj" Mar 10 20:45:00 crc kubenswrapper[4861]: I0310 20:45:00.545522 4861 scope.go:117] "RemoveContainer" containerID="c8debb7d03681141f99aaaa1ffb9353dc4bf5566700cea157eac79abfec6797c" Mar 10 20:45:00 crc kubenswrapper[4861]: I0310 20:45:00.788537 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29552925-hssmj"] Mar 10 20:45:00 crc kubenswrapper[4861]: I0310 20:45:00.988921 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29552925-hssmj" event={"ID":"95666075-f295-4cd8-88ee-0deabc20f442","Type":"ContainerStarted","Data":"0fed0ecb75c462c03fdd0018751d7fd2957803bc39c4e3fba892b6fa67f8e909"} Mar 10 20:45:00 crc kubenswrapper[4861]: I0310 20:45:00.988981 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29552925-hssmj" event={"ID":"95666075-f295-4cd8-88ee-0deabc20f442","Type":"ContainerStarted","Data":"81a95f53cfc257f5f4bac9d933fada8fc315f7085f2f4793c6deb74a5f62eb3c"} Mar 10 20:45:02 crc kubenswrapper[4861]: I0310 20:45:02.001512 4861 generic.go:334] "Generic (PLEG): container finished" podID="95666075-f295-4cd8-88ee-0deabc20f442" containerID="0fed0ecb75c462c03fdd0018751d7fd2957803bc39c4e3fba892b6fa67f8e909" exitCode=0 Mar 10 20:45:02 crc kubenswrapper[4861]: I0310 20:45:02.001943 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29552925-hssmj" event={"ID":"95666075-f295-4cd8-88ee-0deabc20f442","Type":"ContainerDied","Data":"0fed0ecb75c462c03fdd0018751d7fd2957803bc39c4e3fba892b6fa67f8e909"} Mar 10 20:45:03 crc kubenswrapper[4861]: I0310 20:45:03.444667 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29552925-hssmj" Mar 10 20:45:03 crc kubenswrapper[4861]: I0310 20:45:03.519453 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/95666075-f295-4cd8-88ee-0deabc20f442-secret-volume\") pod \"95666075-f295-4cd8-88ee-0deabc20f442\" (UID: \"95666075-f295-4cd8-88ee-0deabc20f442\") " Mar 10 20:45:03 crc kubenswrapper[4861]: I0310 20:45:03.519571 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-26w6l\" (UniqueName: \"kubernetes.io/projected/95666075-f295-4cd8-88ee-0deabc20f442-kube-api-access-26w6l\") pod \"95666075-f295-4cd8-88ee-0deabc20f442\" (UID: \"95666075-f295-4cd8-88ee-0deabc20f442\") " Mar 10 20:45:03 crc kubenswrapper[4861]: I0310 20:45:03.519619 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/95666075-f295-4cd8-88ee-0deabc20f442-config-volume\") pod \"95666075-f295-4cd8-88ee-0deabc20f442\" (UID: \"95666075-f295-4cd8-88ee-0deabc20f442\") " Mar 10 20:45:03 crc kubenswrapper[4861]: I0310 20:45:03.522884 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/95666075-f295-4cd8-88ee-0deabc20f442-config-volume" (OuterVolumeSpecName: "config-volume") pod "95666075-f295-4cd8-88ee-0deabc20f442" (UID: "95666075-f295-4cd8-88ee-0deabc20f442"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 10 20:45:03 crc kubenswrapper[4861]: I0310 20:45:03.539442 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/95666075-f295-4cd8-88ee-0deabc20f442-kube-api-access-26w6l" (OuterVolumeSpecName: "kube-api-access-26w6l") pod "95666075-f295-4cd8-88ee-0deabc20f442" (UID: "95666075-f295-4cd8-88ee-0deabc20f442"). 
InnerVolumeSpecName "kube-api-access-26w6l". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 10 20:45:03 crc kubenswrapper[4861]: I0310 20:45:03.547945 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/95666075-f295-4cd8-88ee-0deabc20f442-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "95666075-f295-4cd8-88ee-0deabc20f442" (UID: "95666075-f295-4cd8-88ee-0deabc20f442"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 10 20:45:03 crc kubenswrapper[4861]: I0310 20:45:03.623015 4861 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/95666075-f295-4cd8-88ee-0deabc20f442-secret-volume\") on node \"crc\" DevicePath \"\"" Mar 10 20:45:03 crc kubenswrapper[4861]: I0310 20:45:03.623083 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-26w6l\" (UniqueName: \"kubernetes.io/projected/95666075-f295-4cd8-88ee-0deabc20f442-kube-api-access-26w6l\") on node \"crc\" DevicePath \"\"" Mar 10 20:45:03 crc kubenswrapper[4861]: I0310 20:45:03.623106 4861 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/95666075-f295-4cd8-88ee-0deabc20f442-config-volume\") on node \"crc\" DevicePath \"\"" Mar 10 20:45:04 crc kubenswrapper[4861]: I0310 20:45:04.024640 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29552925-hssmj" event={"ID":"95666075-f295-4cd8-88ee-0deabc20f442","Type":"ContainerDied","Data":"81a95f53cfc257f5f4bac9d933fada8fc315f7085f2f4793c6deb74a5f62eb3c"} Mar 10 20:45:04 crc kubenswrapper[4861]: I0310 20:45:04.024703 4861 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="81a95f53cfc257f5f4bac9d933fada8fc315f7085f2f4793c6deb74a5f62eb3c" Mar 10 20:45:04 crc kubenswrapper[4861]: I0310 20:45:04.024854 4861 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29552925-hssmj" Mar 10 20:45:04 crc kubenswrapper[4861]: I0310 20:45:04.542808 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29552880-8xhff"] Mar 10 20:45:04 crc kubenswrapper[4861]: I0310 20:45:04.554758 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29552880-8xhff"] Mar 10 20:45:04 crc kubenswrapper[4861]: I0310 20:45:04.975950 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96c2d30c-48a2-452c-b8f7-ba3fa176c801" path="/var/lib/kubelet/pods/96c2d30c-48a2-452c-b8f7-ba3fa176c801/volumes" Mar 10 20:45:06 crc kubenswrapper[4861]: I0310 20:45:06.050795 4861 generic.go:334] "Generic (PLEG): container finished" podID="0b079180-4407-4d2c-9b5a-fdd7ce9ca6ea" containerID="a5c91374c59a0f59cfca255aa5bdc7ea7af092b1d9ab64fac2ba804346c378ec" exitCode=0 Mar 10 20:45:06 crc kubenswrapper[4861]: I0310 20:45:06.050870 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-snn9m/must-gather-tkvvt" event={"ID":"0b079180-4407-4d2c-9b5a-fdd7ce9ca6ea","Type":"ContainerDied","Data":"a5c91374c59a0f59cfca255aa5bdc7ea7af092b1d9ab64fac2ba804346c378ec"} Mar 10 20:45:06 crc kubenswrapper[4861]: I0310 20:45:06.051883 4861 scope.go:117] "RemoveContainer" containerID="a5c91374c59a0f59cfca255aa5bdc7ea7af092b1d9ab64fac2ba804346c378ec" Mar 10 20:45:06 crc kubenswrapper[4861]: I0310 20:45:06.497430 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-snn9m_must-gather-tkvvt_0b079180-4407-4d2c-9b5a-fdd7ce9ca6ea/gather/0.log" Mar 10 20:45:09 crc kubenswrapper[4861]: I0310 20:45:09.959090 4861 scope.go:117] "RemoveContainer" containerID="5e0a04df5740c120052e23680da7c45126dec6674fa679c211d70be3fb442ea4" Mar 10 20:45:09 crc kubenswrapper[4861]: E0310 20:45:09.959957 4861 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qttbr_openshift-machine-config-operator(771189c2-452d-4204-a0b7-abfe9ba62bd0)\"" pod="openshift-machine-config-operator/machine-config-daemon-qttbr" podUID="771189c2-452d-4204-a0b7-abfe9ba62bd0" Mar 10 20:45:13 crc kubenswrapper[4861]: I0310 20:45:13.611485 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-snn9m/must-gather-tkvvt"] Mar 10 20:45:13 crc kubenswrapper[4861]: I0310 20:45:13.612443 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-snn9m/must-gather-tkvvt" podUID="0b079180-4407-4d2c-9b5a-fdd7ce9ca6ea" containerName="copy" containerID="cri-o://c74d36d68c5d4a58b275b70c63d19fa04db4df7379da7f4e21999dddd18753f9" gracePeriod=2 Mar 10 20:45:13 crc kubenswrapper[4861]: I0310 20:45:13.623036 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-snn9m/must-gather-tkvvt"] Mar 10 20:45:14 crc kubenswrapper[4861]: I0310 20:45:14.083408 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-snn9m_must-gather-tkvvt_0b079180-4407-4d2c-9b5a-fdd7ce9ca6ea/copy/0.log" Mar 10 20:45:14 crc kubenswrapper[4861]: I0310 20:45:14.083983 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-snn9m/must-gather-tkvvt" Mar 10 20:45:14 crc kubenswrapper[4861]: I0310 20:45:14.136786 4861 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-snn9m_must-gather-tkvvt_0b079180-4407-4d2c-9b5a-fdd7ce9ca6ea/copy/0.log" Mar 10 20:45:14 crc kubenswrapper[4861]: I0310 20:45:14.137366 4861 generic.go:334] "Generic (PLEG): container finished" podID="0b079180-4407-4d2c-9b5a-fdd7ce9ca6ea" containerID="c74d36d68c5d4a58b275b70c63d19fa04db4df7379da7f4e21999dddd18753f9" exitCode=143 Mar 10 20:45:14 crc kubenswrapper[4861]: I0310 20:45:14.137459 4861 scope.go:117] "RemoveContainer" containerID="c74d36d68c5d4a58b275b70c63d19fa04db4df7379da7f4e21999dddd18753f9" Mar 10 20:45:14 crc kubenswrapper[4861]: I0310 20:45:14.137486 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-snn9m/must-gather-tkvvt" Mar 10 20:45:14 crc kubenswrapper[4861]: I0310 20:45:14.146720 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/0b079180-4407-4d2c-9b5a-fdd7ce9ca6ea-must-gather-output\") pod \"0b079180-4407-4d2c-9b5a-fdd7ce9ca6ea\" (UID: \"0b079180-4407-4d2c-9b5a-fdd7ce9ca6ea\") " Mar 10 20:45:14 crc kubenswrapper[4861]: I0310 20:45:14.146900 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tsslk\" (UniqueName: \"kubernetes.io/projected/0b079180-4407-4d2c-9b5a-fdd7ce9ca6ea-kube-api-access-tsslk\") pod \"0b079180-4407-4d2c-9b5a-fdd7ce9ca6ea\" (UID: \"0b079180-4407-4d2c-9b5a-fdd7ce9ca6ea\") " Mar 10 20:45:14 crc kubenswrapper[4861]: I0310 20:45:14.155988 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b079180-4407-4d2c-9b5a-fdd7ce9ca6ea-kube-api-access-tsslk" (OuterVolumeSpecName: "kube-api-access-tsslk") pod "0b079180-4407-4d2c-9b5a-fdd7ce9ca6ea" (UID: 
"0b079180-4407-4d2c-9b5a-fdd7ce9ca6ea"). InnerVolumeSpecName "kube-api-access-tsslk". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 10 20:45:14 crc kubenswrapper[4861]: I0310 20:45:14.170838 4861 scope.go:117] "RemoveContainer" containerID="a5c91374c59a0f59cfca255aa5bdc7ea7af092b1d9ab64fac2ba804346c378ec" Mar 10 20:45:14 crc kubenswrapper[4861]: I0310 20:45:14.249609 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tsslk\" (UniqueName: \"kubernetes.io/projected/0b079180-4407-4d2c-9b5a-fdd7ce9ca6ea-kube-api-access-tsslk\") on node \"crc\" DevicePath \"\"" Mar 10 20:45:14 crc kubenswrapper[4861]: I0310 20:45:14.285145 4861 scope.go:117] "RemoveContainer" containerID="c74d36d68c5d4a58b275b70c63d19fa04db4df7379da7f4e21999dddd18753f9" Mar 10 20:45:14 crc kubenswrapper[4861]: E0310 20:45:14.285892 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c74d36d68c5d4a58b275b70c63d19fa04db4df7379da7f4e21999dddd18753f9\": container with ID starting with c74d36d68c5d4a58b275b70c63d19fa04db4df7379da7f4e21999dddd18753f9 not found: ID does not exist" containerID="c74d36d68c5d4a58b275b70c63d19fa04db4df7379da7f4e21999dddd18753f9" Mar 10 20:45:14 crc kubenswrapper[4861]: I0310 20:45:14.285945 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c74d36d68c5d4a58b275b70c63d19fa04db4df7379da7f4e21999dddd18753f9"} err="failed to get container status \"c74d36d68c5d4a58b275b70c63d19fa04db4df7379da7f4e21999dddd18753f9\": rpc error: code = NotFound desc = could not find container \"c74d36d68c5d4a58b275b70c63d19fa04db4df7379da7f4e21999dddd18753f9\": container with ID starting with c74d36d68c5d4a58b275b70c63d19fa04db4df7379da7f4e21999dddd18753f9 not found: ID does not exist" Mar 10 20:45:14 crc kubenswrapper[4861]: I0310 20:45:14.285981 4861 scope.go:117] "RemoveContainer" 
containerID="a5c91374c59a0f59cfca255aa5bdc7ea7af092b1d9ab64fac2ba804346c378ec" Mar 10 20:45:14 crc kubenswrapper[4861]: E0310 20:45:14.287339 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a5c91374c59a0f59cfca255aa5bdc7ea7af092b1d9ab64fac2ba804346c378ec\": container with ID starting with a5c91374c59a0f59cfca255aa5bdc7ea7af092b1d9ab64fac2ba804346c378ec not found: ID does not exist" containerID="a5c91374c59a0f59cfca255aa5bdc7ea7af092b1d9ab64fac2ba804346c378ec" Mar 10 20:45:14 crc kubenswrapper[4861]: I0310 20:45:14.287384 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a5c91374c59a0f59cfca255aa5bdc7ea7af092b1d9ab64fac2ba804346c378ec"} err="failed to get container status \"a5c91374c59a0f59cfca255aa5bdc7ea7af092b1d9ab64fac2ba804346c378ec\": rpc error: code = NotFound desc = could not find container \"a5c91374c59a0f59cfca255aa5bdc7ea7af092b1d9ab64fac2ba804346c378ec\": container with ID starting with a5c91374c59a0f59cfca255aa5bdc7ea7af092b1d9ab64fac2ba804346c378ec not found: ID does not exist" Mar 10 20:45:14 crc kubenswrapper[4861]: I0310 20:45:14.294599 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0b079180-4407-4d2c-9b5a-fdd7ce9ca6ea-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "0b079180-4407-4d2c-9b5a-fdd7ce9ca6ea" (UID: "0b079180-4407-4d2c-9b5a-fdd7ce9ca6ea"). InnerVolumeSpecName "must-gather-output". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 10 20:45:14 crc kubenswrapper[4861]: I0310 20:45:14.351893 4861 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/0b079180-4407-4d2c-9b5a-fdd7ce9ca6ea-must-gather-output\") on node \"crc\" DevicePath \"\"" Mar 10 20:45:14 crc kubenswrapper[4861]: I0310 20:45:14.973343 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b079180-4407-4d2c-9b5a-fdd7ce9ca6ea" path="/var/lib/kubelet/pods/0b079180-4407-4d2c-9b5a-fdd7ce9ca6ea/volumes" Mar 10 20:45:23 crc kubenswrapper[4861]: I0310 20:45:23.958636 4861 scope.go:117] "RemoveContainer" containerID="5e0a04df5740c120052e23680da7c45126dec6674fa679c211d70be3fb442ea4" Mar 10 20:45:23 crc kubenswrapper[4861]: E0310 20:45:23.960097 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qttbr_openshift-machine-config-operator(771189c2-452d-4204-a0b7-abfe9ba62bd0)\"" pod="openshift-machine-config-operator/machine-config-daemon-qttbr" podUID="771189c2-452d-4204-a0b7-abfe9ba62bd0" Mar 10 20:45:27 crc kubenswrapper[4861]: I0310 20:45:27.078079 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-2f67j"] Mar 10 20:45:27 crc kubenswrapper[4861]: E0310 20:45:27.078690 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0b079180-4407-4d2c-9b5a-fdd7ce9ca6ea" containerName="copy" Mar 10 20:45:27 crc kubenswrapper[4861]: I0310 20:45:27.078720 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="0b079180-4407-4d2c-9b5a-fdd7ce9ca6ea" containerName="copy" Mar 10 20:45:27 crc kubenswrapper[4861]: E0310 20:45:27.078753 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="95666075-f295-4cd8-88ee-0deabc20f442" 
containerName="collect-profiles" Mar 10 20:45:27 crc kubenswrapper[4861]: I0310 20:45:27.078761 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="95666075-f295-4cd8-88ee-0deabc20f442" containerName="collect-profiles" Mar 10 20:45:27 crc kubenswrapper[4861]: E0310 20:45:27.078786 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0b079180-4407-4d2c-9b5a-fdd7ce9ca6ea" containerName="gather" Mar 10 20:45:27 crc kubenswrapper[4861]: I0310 20:45:27.078794 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="0b079180-4407-4d2c-9b5a-fdd7ce9ca6ea" containerName="gather" Mar 10 20:45:27 crc kubenswrapper[4861]: I0310 20:45:27.078966 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="0b079180-4407-4d2c-9b5a-fdd7ce9ca6ea" containerName="gather" Mar 10 20:45:27 crc kubenswrapper[4861]: I0310 20:45:27.078986 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="95666075-f295-4cd8-88ee-0deabc20f442" containerName="collect-profiles" Mar 10 20:45:27 crc kubenswrapper[4861]: I0310 20:45:27.079001 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="0b079180-4407-4d2c-9b5a-fdd7ce9ca6ea" containerName="copy" Mar 10 20:45:27 crc kubenswrapper[4861]: I0310 20:45:27.080356 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2f67j" Mar 10 20:45:27 crc kubenswrapper[4861]: I0310 20:45:27.100921 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-2f67j"] Mar 10 20:45:27 crc kubenswrapper[4861]: I0310 20:45:27.203776 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d813d923-c445-43cc-8e1d-c87be559f1fe-utilities\") pod \"redhat-marketplace-2f67j\" (UID: \"d813d923-c445-43cc-8e1d-c87be559f1fe\") " pod="openshift-marketplace/redhat-marketplace-2f67j" Mar 10 20:45:27 crc kubenswrapper[4861]: I0310 20:45:27.204160 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d813d923-c445-43cc-8e1d-c87be559f1fe-catalog-content\") pod \"redhat-marketplace-2f67j\" (UID: \"d813d923-c445-43cc-8e1d-c87be559f1fe\") " pod="openshift-marketplace/redhat-marketplace-2f67j" Mar 10 20:45:27 crc kubenswrapper[4861]: I0310 20:45:27.204243 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dz6vc\" (UniqueName: \"kubernetes.io/projected/d813d923-c445-43cc-8e1d-c87be559f1fe-kube-api-access-dz6vc\") pod \"redhat-marketplace-2f67j\" (UID: \"d813d923-c445-43cc-8e1d-c87be559f1fe\") " pod="openshift-marketplace/redhat-marketplace-2f67j" Mar 10 20:45:27 crc kubenswrapper[4861]: I0310 20:45:27.305461 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dz6vc\" (UniqueName: \"kubernetes.io/projected/d813d923-c445-43cc-8e1d-c87be559f1fe-kube-api-access-dz6vc\") pod \"redhat-marketplace-2f67j\" (UID: \"d813d923-c445-43cc-8e1d-c87be559f1fe\") " pod="openshift-marketplace/redhat-marketplace-2f67j" Mar 10 20:45:27 crc kubenswrapper[4861]: I0310 20:45:27.305573 4861 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d813d923-c445-43cc-8e1d-c87be559f1fe-utilities\") pod \"redhat-marketplace-2f67j\" (UID: \"d813d923-c445-43cc-8e1d-c87be559f1fe\") " pod="openshift-marketplace/redhat-marketplace-2f67j" Mar 10 20:45:27 crc kubenswrapper[4861]: I0310 20:45:27.305626 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d813d923-c445-43cc-8e1d-c87be559f1fe-catalog-content\") pod \"redhat-marketplace-2f67j\" (UID: \"d813d923-c445-43cc-8e1d-c87be559f1fe\") " pod="openshift-marketplace/redhat-marketplace-2f67j" Mar 10 20:45:27 crc kubenswrapper[4861]: I0310 20:45:27.306279 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d813d923-c445-43cc-8e1d-c87be559f1fe-utilities\") pod \"redhat-marketplace-2f67j\" (UID: \"d813d923-c445-43cc-8e1d-c87be559f1fe\") " pod="openshift-marketplace/redhat-marketplace-2f67j" Mar 10 20:45:27 crc kubenswrapper[4861]: I0310 20:45:27.306288 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d813d923-c445-43cc-8e1d-c87be559f1fe-catalog-content\") pod \"redhat-marketplace-2f67j\" (UID: \"d813d923-c445-43cc-8e1d-c87be559f1fe\") " pod="openshift-marketplace/redhat-marketplace-2f67j" Mar 10 20:45:27 crc kubenswrapper[4861]: I0310 20:45:27.326002 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dz6vc\" (UniqueName: \"kubernetes.io/projected/d813d923-c445-43cc-8e1d-c87be559f1fe-kube-api-access-dz6vc\") pod \"redhat-marketplace-2f67j\" (UID: \"d813d923-c445-43cc-8e1d-c87be559f1fe\") " pod="openshift-marketplace/redhat-marketplace-2f67j" Mar 10 20:45:27 crc kubenswrapper[4861]: I0310 20:45:27.438177 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2f67j" Mar 10 20:45:27 crc kubenswrapper[4861]: I0310 20:45:27.709762 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-2f67j"] Mar 10 20:45:28 crc kubenswrapper[4861]: I0310 20:45:28.255084 4861 generic.go:334] "Generic (PLEG): container finished" podID="d813d923-c445-43cc-8e1d-c87be559f1fe" containerID="80a4e42a9150f4d0502993c53c444e07cbe911416cbd8bfd5b6e85c3a606035d" exitCode=0 Mar 10 20:45:28 crc kubenswrapper[4861]: I0310 20:45:28.255143 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2f67j" event={"ID":"d813d923-c445-43cc-8e1d-c87be559f1fe","Type":"ContainerDied","Data":"80a4e42a9150f4d0502993c53c444e07cbe911416cbd8bfd5b6e85c3a606035d"} Mar 10 20:45:28 crc kubenswrapper[4861]: I0310 20:45:28.255178 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2f67j" event={"ID":"d813d923-c445-43cc-8e1d-c87be559f1fe","Type":"ContainerStarted","Data":"e053e14ea34f81441f5deba04728dafc36159b96c559f9aed6efbdc3e55ea6ba"} Mar 10 20:45:29 crc kubenswrapper[4861]: I0310 20:45:29.266920 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2f67j" event={"ID":"d813d923-c445-43cc-8e1d-c87be559f1fe","Type":"ContainerStarted","Data":"2f455afda78d3ee1ce419306a023c0df4ea6bb109dbe479bce374f465ffb417f"} Mar 10 20:45:30 crc kubenswrapper[4861]: I0310 20:45:30.277213 4861 generic.go:334] "Generic (PLEG): container finished" podID="d813d923-c445-43cc-8e1d-c87be559f1fe" containerID="2f455afda78d3ee1ce419306a023c0df4ea6bb109dbe479bce374f465ffb417f" exitCode=0 Mar 10 20:45:30 crc kubenswrapper[4861]: I0310 20:45:30.277516 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2f67j" 
event={"ID":"d813d923-c445-43cc-8e1d-c87be559f1fe","Type":"ContainerDied","Data":"2f455afda78d3ee1ce419306a023c0df4ea6bb109dbe479bce374f465ffb417f"} Mar 10 20:45:31 crc kubenswrapper[4861]: I0310 20:45:31.293523 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2f67j" event={"ID":"d813d923-c445-43cc-8e1d-c87be559f1fe","Type":"ContainerStarted","Data":"f35f7dd5ed04a2cfafbfd8f81afa6d23278fb6f09cb7da56499a773e5794fc31"} Mar 10 20:45:31 crc kubenswrapper[4861]: I0310 20:45:31.330098 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-2f67j" podStartSLOduration=1.800643623 podStartE2EDuration="4.330075165s" podCreationTimestamp="2026-03-10 20:45:27 +0000 UTC" firstStartedPulling="2026-03-10 20:45:28.257213538 +0000 UTC m=+7072.020649498" lastFinishedPulling="2026-03-10 20:45:30.78664504 +0000 UTC m=+7074.550081040" observedRunningTime="2026-03-10 20:45:31.327405823 +0000 UTC m=+7075.090841853" watchObservedRunningTime="2026-03-10 20:45:31.330075165 +0000 UTC m=+7075.093511175" Mar 10 20:45:34 crc kubenswrapper[4861]: I0310 20:45:34.958625 4861 scope.go:117] "RemoveContainer" containerID="5e0a04df5740c120052e23680da7c45126dec6674fa679c211d70be3fb442ea4" Mar 10 20:45:34 crc kubenswrapper[4861]: E0310 20:45:34.959643 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qttbr_openshift-machine-config-operator(771189c2-452d-4204-a0b7-abfe9ba62bd0)\"" pod="openshift-machine-config-operator/machine-config-daemon-qttbr" podUID="771189c2-452d-4204-a0b7-abfe9ba62bd0" Mar 10 20:45:37 crc kubenswrapper[4861]: I0310 20:45:37.439517 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-2f67j" Mar 10 20:45:37 crc 
kubenswrapper[4861]: I0310 20:45:37.440515 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-2f67j" Mar 10 20:45:37 crc kubenswrapper[4861]: I0310 20:45:37.530886 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-2f67j" Mar 10 20:45:38 crc kubenswrapper[4861]: I0310 20:45:38.455423 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-2f67j" Mar 10 20:45:38 crc kubenswrapper[4861]: I0310 20:45:38.523447 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-2f67j"] Mar 10 20:45:40 crc kubenswrapper[4861]: I0310 20:45:40.393301 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-2f67j" podUID="d813d923-c445-43cc-8e1d-c87be559f1fe" containerName="registry-server" containerID="cri-o://f35f7dd5ed04a2cfafbfd8f81afa6d23278fb6f09cb7da56499a773e5794fc31" gracePeriod=2 Mar 10 20:45:41 crc kubenswrapper[4861]: I0310 20:45:41.078639 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2f67j" Mar 10 20:45:41 crc kubenswrapper[4861]: I0310 20:45:41.223941 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d813d923-c445-43cc-8e1d-c87be559f1fe-catalog-content\") pod \"d813d923-c445-43cc-8e1d-c87be559f1fe\" (UID: \"d813d923-c445-43cc-8e1d-c87be559f1fe\") " Mar 10 20:45:41 crc kubenswrapper[4861]: I0310 20:45:41.224324 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dz6vc\" (UniqueName: \"kubernetes.io/projected/d813d923-c445-43cc-8e1d-c87be559f1fe-kube-api-access-dz6vc\") pod \"d813d923-c445-43cc-8e1d-c87be559f1fe\" (UID: \"d813d923-c445-43cc-8e1d-c87be559f1fe\") " Mar 10 20:45:41 crc kubenswrapper[4861]: I0310 20:45:41.224356 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d813d923-c445-43cc-8e1d-c87be559f1fe-utilities\") pod \"d813d923-c445-43cc-8e1d-c87be559f1fe\" (UID: \"d813d923-c445-43cc-8e1d-c87be559f1fe\") " Mar 10 20:45:41 crc kubenswrapper[4861]: I0310 20:45:41.225279 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d813d923-c445-43cc-8e1d-c87be559f1fe-utilities" (OuterVolumeSpecName: "utilities") pod "d813d923-c445-43cc-8e1d-c87be559f1fe" (UID: "d813d923-c445-43cc-8e1d-c87be559f1fe"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 10 20:45:41 crc kubenswrapper[4861]: I0310 20:45:41.235652 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d813d923-c445-43cc-8e1d-c87be559f1fe-kube-api-access-dz6vc" (OuterVolumeSpecName: "kube-api-access-dz6vc") pod "d813d923-c445-43cc-8e1d-c87be559f1fe" (UID: "d813d923-c445-43cc-8e1d-c87be559f1fe"). InnerVolumeSpecName "kube-api-access-dz6vc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 10 20:45:41 crc kubenswrapper[4861]: I0310 20:45:41.277593 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d813d923-c445-43cc-8e1d-c87be559f1fe-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d813d923-c445-43cc-8e1d-c87be559f1fe" (UID: "d813d923-c445-43cc-8e1d-c87be559f1fe"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 10 20:45:41 crc kubenswrapper[4861]: I0310 20:45:41.327300 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dz6vc\" (UniqueName: \"kubernetes.io/projected/d813d923-c445-43cc-8e1d-c87be559f1fe-kube-api-access-dz6vc\") on node \"crc\" DevicePath \"\"" Mar 10 20:45:41 crc kubenswrapper[4861]: I0310 20:45:41.327508 4861 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d813d923-c445-43cc-8e1d-c87be559f1fe-utilities\") on node \"crc\" DevicePath \"\"" Mar 10 20:45:41 crc kubenswrapper[4861]: I0310 20:45:41.327520 4861 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d813d923-c445-43cc-8e1d-c87be559f1fe-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 10 20:45:41 crc kubenswrapper[4861]: I0310 20:45:41.405036 4861 generic.go:334] "Generic (PLEG): container finished" podID="d813d923-c445-43cc-8e1d-c87be559f1fe" containerID="f35f7dd5ed04a2cfafbfd8f81afa6d23278fb6f09cb7da56499a773e5794fc31" exitCode=0 Mar 10 20:45:41 crc kubenswrapper[4861]: I0310 20:45:41.405107 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2f67j" event={"ID":"d813d923-c445-43cc-8e1d-c87be559f1fe","Type":"ContainerDied","Data":"f35f7dd5ed04a2cfafbfd8f81afa6d23278fb6f09cb7da56499a773e5794fc31"} Mar 10 20:45:41 crc kubenswrapper[4861]: I0310 20:45:41.405172 4861 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2f67j" Mar 10 20:45:41 crc kubenswrapper[4861]: I0310 20:45:41.405206 4861 scope.go:117] "RemoveContainer" containerID="f35f7dd5ed04a2cfafbfd8f81afa6d23278fb6f09cb7da56499a773e5794fc31" Mar 10 20:45:41 crc kubenswrapper[4861]: I0310 20:45:41.405187 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2f67j" event={"ID":"d813d923-c445-43cc-8e1d-c87be559f1fe","Type":"ContainerDied","Data":"e053e14ea34f81441f5deba04728dafc36159b96c559f9aed6efbdc3e55ea6ba"} Mar 10 20:45:41 crc kubenswrapper[4861]: I0310 20:45:41.440067 4861 scope.go:117] "RemoveContainer" containerID="2f455afda78d3ee1ce419306a023c0df4ea6bb109dbe479bce374f465ffb417f" Mar 10 20:45:41 crc kubenswrapper[4861]: I0310 20:45:41.467732 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-2f67j"] Mar 10 20:45:41 crc kubenswrapper[4861]: I0310 20:45:41.474651 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-2f67j"] Mar 10 20:45:41 crc kubenswrapper[4861]: I0310 20:45:41.485809 4861 scope.go:117] "RemoveContainer" containerID="80a4e42a9150f4d0502993c53c444e07cbe911416cbd8bfd5b6e85c3a606035d" Mar 10 20:45:41 crc kubenswrapper[4861]: I0310 20:45:41.524626 4861 scope.go:117] "RemoveContainer" containerID="f35f7dd5ed04a2cfafbfd8f81afa6d23278fb6f09cb7da56499a773e5794fc31" Mar 10 20:45:41 crc kubenswrapper[4861]: E0310 20:45:41.525307 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f35f7dd5ed04a2cfafbfd8f81afa6d23278fb6f09cb7da56499a773e5794fc31\": container with ID starting with f35f7dd5ed04a2cfafbfd8f81afa6d23278fb6f09cb7da56499a773e5794fc31 not found: ID does not exist" containerID="f35f7dd5ed04a2cfafbfd8f81afa6d23278fb6f09cb7da56499a773e5794fc31" Mar 10 20:45:41 crc kubenswrapper[4861]: I0310 20:45:41.525357 4861 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f35f7dd5ed04a2cfafbfd8f81afa6d23278fb6f09cb7da56499a773e5794fc31"} err="failed to get container status \"f35f7dd5ed04a2cfafbfd8f81afa6d23278fb6f09cb7da56499a773e5794fc31\": rpc error: code = NotFound desc = could not find container \"f35f7dd5ed04a2cfafbfd8f81afa6d23278fb6f09cb7da56499a773e5794fc31\": container with ID starting with f35f7dd5ed04a2cfafbfd8f81afa6d23278fb6f09cb7da56499a773e5794fc31 not found: ID does not exist" Mar 10 20:45:41 crc kubenswrapper[4861]: I0310 20:45:41.525388 4861 scope.go:117] "RemoveContainer" containerID="2f455afda78d3ee1ce419306a023c0df4ea6bb109dbe479bce374f465ffb417f" Mar 10 20:45:41 crc kubenswrapper[4861]: E0310 20:45:41.526211 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2f455afda78d3ee1ce419306a023c0df4ea6bb109dbe479bce374f465ffb417f\": container with ID starting with 2f455afda78d3ee1ce419306a023c0df4ea6bb109dbe479bce374f465ffb417f not found: ID does not exist" containerID="2f455afda78d3ee1ce419306a023c0df4ea6bb109dbe479bce374f465ffb417f" Mar 10 20:45:41 crc kubenswrapper[4861]: I0310 20:45:41.526238 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2f455afda78d3ee1ce419306a023c0df4ea6bb109dbe479bce374f465ffb417f"} err="failed to get container status \"2f455afda78d3ee1ce419306a023c0df4ea6bb109dbe479bce374f465ffb417f\": rpc error: code = NotFound desc = could not find container \"2f455afda78d3ee1ce419306a023c0df4ea6bb109dbe479bce374f465ffb417f\": container with ID starting with 2f455afda78d3ee1ce419306a023c0df4ea6bb109dbe479bce374f465ffb417f not found: ID does not exist" Mar 10 20:45:41 crc kubenswrapper[4861]: I0310 20:45:41.526256 4861 scope.go:117] "RemoveContainer" containerID="80a4e42a9150f4d0502993c53c444e07cbe911416cbd8bfd5b6e85c3a606035d" Mar 10 20:45:41 crc kubenswrapper[4861]: E0310 
20:45:41.526847 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"80a4e42a9150f4d0502993c53c444e07cbe911416cbd8bfd5b6e85c3a606035d\": container with ID starting with 80a4e42a9150f4d0502993c53c444e07cbe911416cbd8bfd5b6e85c3a606035d not found: ID does not exist" containerID="80a4e42a9150f4d0502993c53c444e07cbe911416cbd8bfd5b6e85c3a606035d" Mar 10 20:45:41 crc kubenswrapper[4861]: I0310 20:45:41.526931 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"80a4e42a9150f4d0502993c53c444e07cbe911416cbd8bfd5b6e85c3a606035d"} err="failed to get container status \"80a4e42a9150f4d0502993c53c444e07cbe911416cbd8bfd5b6e85c3a606035d\": rpc error: code = NotFound desc = could not find container \"80a4e42a9150f4d0502993c53c444e07cbe911416cbd8bfd5b6e85c3a606035d\": container with ID starting with 80a4e42a9150f4d0502993c53c444e07cbe911416cbd8bfd5b6e85c3a606035d not found: ID does not exist" Mar 10 20:45:42 crc kubenswrapper[4861]: I0310 20:45:42.969505 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d813d923-c445-43cc-8e1d-c87be559f1fe" path="/var/lib/kubelet/pods/d813d923-c445-43cc-8e1d-c87be559f1fe/volumes" Mar 10 20:45:47 crc kubenswrapper[4861]: I0310 20:45:47.958837 4861 scope.go:117] "RemoveContainer" containerID="5e0a04df5740c120052e23680da7c45126dec6674fa679c211d70be3fb442ea4" Mar 10 20:45:47 crc kubenswrapper[4861]: E0310 20:45:47.960088 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qttbr_openshift-machine-config-operator(771189c2-452d-4204-a0b7-abfe9ba62bd0)\"" pod="openshift-machine-config-operator/machine-config-daemon-qttbr" podUID="771189c2-452d-4204-a0b7-abfe9ba62bd0" Mar 10 20:46:00 crc kubenswrapper[4861]: I0310 20:46:00.167529 
4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29552926-86skn"] Mar 10 20:46:00 crc kubenswrapper[4861]: E0310 20:46:00.168603 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d813d923-c445-43cc-8e1d-c87be559f1fe" containerName="extract-content" Mar 10 20:46:00 crc kubenswrapper[4861]: I0310 20:46:00.168625 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="d813d923-c445-43cc-8e1d-c87be559f1fe" containerName="extract-content" Mar 10 20:46:00 crc kubenswrapper[4861]: E0310 20:46:00.168668 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d813d923-c445-43cc-8e1d-c87be559f1fe" containerName="extract-utilities" Mar 10 20:46:00 crc kubenswrapper[4861]: I0310 20:46:00.168684 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="d813d923-c445-43cc-8e1d-c87be559f1fe" containerName="extract-utilities" Mar 10 20:46:00 crc kubenswrapper[4861]: E0310 20:46:00.168738 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d813d923-c445-43cc-8e1d-c87be559f1fe" containerName="registry-server" Mar 10 20:46:00 crc kubenswrapper[4861]: I0310 20:46:00.168752 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="d813d923-c445-43cc-8e1d-c87be559f1fe" containerName="registry-server" Mar 10 20:46:00 crc kubenswrapper[4861]: I0310 20:46:00.169021 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="d813d923-c445-43cc-8e1d-c87be559f1fe" containerName="registry-server" Mar 10 20:46:00 crc kubenswrapper[4861]: I0310 20:46:00.169889 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29552926-86skn" Mar 10 20:46:00 crc kubenswrapper[4861]: I0310 20:46:00.172759 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-gfbj2" Mar 10 20:46:00 crc kubenswrapper[4861]: I0310 20:46:00.175544 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 10 20:46:00 crc kubenswrapper[4861]: I0310 20:46:00.177085 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 10 20:46:00 crc kubenswrapper[4861]: I0310 20:46:00.185759 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29552926-86skn"] Mar 10 20:46:00 crc kubenswrapper[4861]: I0310 20:46:00.261413 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l5lcc\" (UniqueName: \"kubernetes.io/projected/4b3b85d1-ab72-494d-8542-ea34d73f3b95-kube-api-access-l5lcc\") pod \"auto-csr-approver-29552926-86skn\" (UID: \"4b3b85d1-ab72-494d-8542-ea34d73f3b95\") " pod="openshift-infra/auto-csr-approver-29552926-86skn" Mar 10 20:46:00 crc kubenswrapper[4861]: I0310 20:46:00.363469 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l5lcc\" (UniqueName: \"kubernetes.io/projected/4b3b85d1-ab72-494d-8542-ea34d73f3b95-kube-api-access-l5lcc\") pod \"auto-csr-approver-29552926-86skn\" (UID: \"4b3b85d1-ab72-494d-8542-ea34d73f3b95\") " pod="openshift-infra/auto-csr-approver-29552926-86skn" Mar 10 20:46:00 crc kubenswrapper[4861]: I0310 20:46:00.396962 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l5lcc\" (UniqueName: \"kubernetes.io/projected/4b3b85d1-ab72-494d-8542-ea34d73f3b95-kube-api-access-l5lcc\") pod \"auto-csr-approver-29552926-86skn\" (UID: \"4b3b85d1-ab72-494d-8542-ea34d73f3b95\") " 
pod="openshift-infra/auto-csr-approver-29552926-86skn" Mar 10 20:46:00 crc kubenswrapper[4861]: I0310 20:46:00.509921 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29552926-86skn" Mar 10 20:46:00 crc kubenswrapper[4861]: I0310 20:46:00.704506 4861 scope.go:117] "RemoveContainer" containerID="58c4294bd20f1967ce0c8720bb18ba90ed830954a7abd58ef87ab3c03557c172" Mar 10 20:46:01 crc kubenswrapper[4861]: I0310 20:46:01.038543 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29552926-86skn"] Mar 10 20:46:01 crc kubenswrapper[4861]: W0310 20:46:01.047113 4861 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4b3b85d1_ab72_494d_8542_ea34d73f3b95.slice/crio-33accaabe87e50331438e36d66ad1c05dcc54e225f646e6f32758c40dd1cbb8a WatchSource:0}: Error finding container 33accaabe87e50331438e36d66ad1c05dcc54e225f646e6f32758c40dd1cbb8a: Status 404 returned error can't find the container with id 33accaabe87e50331438e36d66ad1c05dcc54e225f646e6f32758c40dd1cbb8a Mar 10 20:46:01 crc kubenswrapper[4861]: I0310 20:46:01.669578 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29552926-86skn" event={"ID":"4b3b85d1-ab72-494d-8542-ea34d73f3b95","Type":"ContainerStarted","Data":"33accaabe87e50331438e36d66ad1c05dcc54e225f646e6f32758c40dd1cbb8a"} Mar 10 20:46:01 crc kubenswrapper[4861]: I0310 20:46:01.959188 4861 scope.go:117] "RemoveContainer" containerID="5e0a04df5740c120052e23680da7c45126dec6674fa679c211d70be3fb442ea4" Mar 10 20:46:01 crc kubenswrapper[4861]: E0310 20:46:01.959593 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-qttbr_openshift-machine-config-operator(771189c2-452d-4204-a0b7-abfe9ba62bd0)\"" pod="openshift-machine-config-operator/machine-config-daemon-qttbr" podUID="771189c2-452d-4204-a0b7-abfe9ba62bd0" Mar 10 20:46:02 crc kubenswrapper[4861]: I0310 20:46:02.681626 4861 generic.go:334] "Generic (PLEG): container finished" podID="4b3b85d1-ab72-494d-8542-ea34d73f3b95" containerID="abb7543f2240ba647e64b8528a3d010f861be34724fb50d6e9a1d523f6e89ae5" exitCode=0 Mar 10 20:46:02 crc kubenswrapper[4861]: I0310 20:46:02.682087 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29552926-86skn" event={"ID":"4b3b85d1-ab72-494d-8542-ea34d73f3b95","Type":"ContainerDied","Data":"abb7543f2240ba647e64b8528a3d010f861be34724fb50d6e9a1d523f6e89ae5"} Mar 10 20:46:04 crc kubenswrapper[4861]: I0310 20:46:04.134065 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29552926-86skn" Mar 10 20:46:04 crc kubenswrapper[4861]: I0310 20:46:04.260647 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l5lcc\" (UniqueName: \"kubernetes.io/projected/4b3b85d1-ab72-494d-8542-ea34d73f3b95-kube-api-access-l5lcc\") pod \"4b3b85d1-ab72-494d-8542-ea34d73f3b95\" (UID: \"4b3b85d1-ab72-494d-8542-ea34d73f3b95\") " Mar 10 20:46:04 crc kubenswrapper[4861]: I0310 20:46:04.273304 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4b3b85d1-ab72-494d-8542-ea34d73f3b95-kube-api-access-l5lcc" (OuterVolumeSpecName: "kube-api-access-l5lcc") pod "4b3b85d1-ab72-494d-8542-ea34d73f3b95" (UID: "4b3b85d1-ab72-494d-8542-ea34d73f3b95"). InnerVolumeSpecName "kube-api-access-l5lcc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 10 20:46:04 crc kubenswrapper[4861]: I0310 20:46:04.364953 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l5lcc\" (UniqueName: \"kubernetes.io/projected/4b3b85d1-ab72-494d-8542-ea34d73f3b95-kube-api-access-l5lcc\") on node \"crc\" DevicePath \"\"" Mar 10 20:46:04 crc kubenswrapper[4861]: I0310 20:46:04.716285 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29552926-86skn" event={"ID":"4b3b85d1-ab72-494d-8542-ea34d73f3b95","Type":"ContainerDied","Data":"33accaabe87e50331438e36d66ad1c05dcc54e225f646e6f32758c40dd1cbb8a"} Mar 10 20:46:04 crc kubenswrapper[4861]: I0310 20:46:04.717784 4861 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="33accaabe87e50331438e36d66ad1c05dcc54e225f646e6f32758c40dd1cbb8a" Mar 10 20:46:04 crc kubenswrapper[4861]: I0310 20:46:04.718068 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29552926-86skn" Mar 10 20:46:05 crc kubenswrapper[4861]: I0310 20:46:05.235509 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29552920-69pcx"] Mar 10 20:46:05 crc kubenswrapper[4861]: I0310 20:46:05.247262 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29552920-69pcx"] Mar 10 20:46:06 crc kubenswrapper[4861]: I0310 20:46:06.973356 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4b33758a-4a8f-4ead-88e5-e2236394b7ff" path="/var/lib/kubelet/pods/4b33758a-4a8f-4ead-88e5-e2236394b7ff/volumes" Mar 10 20:46:14 crc kubenswrapper[4861]: I0310 20:46:14.960150 4861 scope.go:117] "RemoveContainer" containerID="5e0a04df5740c120052e23680da7c45126dec6674fa679c211d70be3fb442ea4" Mar 10 20:46:14 crc kubenswrapper[4861]: E0310 20:46:14.962814 4861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qttbr_openshift-machine-config-operator(771189c2-452d-4204-a0b7-abfe9ba62bd0)\"" pod="openshift-machine-config-operator/machine-config-daemon-qttbr" podUID="771189c2-452d-4204-a0b7-abfe9ba62bd0" Mar 10 20:46:25 crc kubenswrapper[4861]: I0310 20:46:25.634495 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-rsqcx"] Mar 10 20:46:25 crc kubenswrapper[4861]: E0310 20:46:25.635683 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4b3b85d1-ab72-494d-8542-ea34d73f3b95" containerName="oc" Mar 10 20:46:25 crc kubenswrapper[4861]: I0310 20:46:25.635799 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="4b3b85d1-ab72-494d-8542-ea34d73f3b95" containerName="oc" Mar 10 20:46:25 crc kubenswrapper[4861]: I0310 20:46:25.636224 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="4b3b85d1-ab72-494d-8542-ea34d73f3b95" containerName="oc" Mar 10 20:46:25 crc kubenswrapper[4861]: I0310 20:46:25.638292 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-rsqcx" Mar 10 20:46:25 crc kubenswrapper[4861]: I0310 20:46:25.652047 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-rsqcx"] Mar 10 20:46:25 crc kubenswrapper[4861]: I0310 20:46:25.831156 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c73e8983-bdfa-4102-a07e-a97eee54f401-utilities\") pod \"redhat-operators-rsqcx\" (UID: \"c73e8983-bdfa-4102-a07e-a97eee54f401\") " pod="openshift-marketplace/redhat-operators-rsqcx" Mar 10 20:46:25 crc kubenswrapper[4861]: I0310 20:46:25.831457 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lbhdz\" (UniqueName: \"kubernetes.io/projected/c73e8983-bdfa-4102-a07e-a97eee54f401-kube-api-access-lbhdz\") pod \"redhat-operators-rsqcx\" (UID: \"c73e8983-bdfa-4102-a07e-a97eee54f401\") " pod="openshift-marketplace/redhat-operators-rsqcx" Mar 10 20:46:25 crc kubenswrapper[4861]: I0310 20:46:25.831685 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c73e8983-bdfa-4102-a07e-a97eee54f401-catalog-content\") pod \"redhat-operators-rsqcx\" (UID: \"c73e8983-bdfa-4102-a07e-a97eee54f401\") " pod="openshift-marketplace/redhat-operators-rsqcx" Mar 10 20:46:25 crc kubenswrapper[4861]: I0310 20:46:25.933262 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lbhdz\" (UniqueName: \"kubernetes.io/projected/c73e8983-bdfa-4102-a07e-a97eee54f401-kube-api-access-lbhdz\") pod \"redhat-operators-rsqcx\" (UID: \"c73e8983-bdfa-4102-a07e-a97eee54f401\") " pod="openshift-marketplace/redhat-operators-rsqcx" Mar 10 20:46:25 crc kubenswrapper[4861]: I0310 20:46:25.933371 4861 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c73e8983-bdfa-4102-a07e-a97eee54f401-catalog-content\") pod \"redhat-operators-rsqcx\" (UID: \"c73e8983-bdfa-4102-a07e-a97eee54f401\") " pod="openshift-marketplace/redhat-operators-rsqcx" Mar 10 20:46:25 crc kubenswrapper[4861]: I0310 20:46:25.933517 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c73e8983-bdfa-4102-a07e-a97eee54f401-utilities\") pod \"redhat-operators-rsqcx\" (UID: \"c73e8983-bdfa-4102-a07e-a97eee54f401\") " pod="openshift-marketplace/redhat-operators-rsqcx" Mar 10 20:46:25 crc kubenswrapper[4861]: I0310 20:46:25.934271 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c73e8983-bdfa-4102-a07e-a97eee54f401-catalog-content\") pod \"redhat-operators-rsqcx\" (UID: \"c73e8983-bdfa-4102-a07e-a97eee54f401\") " pod="openshift-marketplace/redhat-operators-rsqcx" Mar 10 20:46:25 crc kubenswrapper[4861]: I0310 20:46:25.946037 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c73e8983-bdfa-4102-a07e-a97eee54f401-utilities\") pod \"redhat-operators-rsqcx\" (UID: \"c73e8983-bdfa-4102-a07e-a97eee54f401\") " pod="openshift-marketplace/redhat-operators-rsqcx" Mar 10 20:46:25 crc kubenswrapper[4861]: I0310 20:46:25.973216 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lbhdz\" (UniqueName: \"kubernetes.io/projected/c73e8983-bdfa-4102-a07e-a97eee54f401-kube-api-access-lbhdz\") pod \"redhat-operators-rsqcx\" (UID: \"c73e8983-bdfa-4102-a07e-a97eee54f401\") " pod="openshift-marketplace/redhat-operators-rsqcx" Mar 10 20:46:26 crc kubenswrapper[4861]: I0310 20:46:26.013596 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-rsqcx" Mar 10 20:46:26 crc kubenswrapper[4861]: I0310 20:46:26.485433 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-rsqcx"] Mar 10 20:46:26 crc kubenswrapper[4861]: I0310 20:46:26.936275 4861 generic.go:334] "Generic (PLEG): container finished" podID="c73e8983-bdfa-4102-a07e-a97eee54f401" containerID="a4dbeb1c202bf7376b9c09335c10a695751c9b4c4dd9f6e3363d26a934035147" exitCode=0 Mar 10 20:46:26 crc kubenswrapper[4861]: I0310 20:46:26.936407 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rsqcx" event={"ID":"c73e8983-bdfa-4102-a07e-a97eee54f401","Type":"ContainerDied","Data":"a4dbeb1c202bf7376b9c09335c10a695751c9b4c4dd9f6e3363d26a934035147"} Mar 10 20:46:26 crc kubenswrapper[4861]: I0310 20:46:26.936661 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rsqcx" event={"ID":"c73e8983-bdfa-4102-a07e-a97eee54f401","Type":"ContainerStarted","Data":"4321ba10466884ecd9c9782905627db3ed0b0d5d6303c1eebff31ce779dca322"} Mar 10 20:46:27 crc kubenswrapper[4861]: I0310 20:46:27.958243 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rsqcx" event={"ID":"c73e8983-bdfa-4102-a07e-a97eee54f401","Type":"ContainerStarted","Data":"5ae87a7e5d4459d68d3ab8557e1dbee79ad4fe31aec7bc7be327deb3eeba7f97"} Mar 10 20:46:28 crc kubenswrapper[4861]: I0310 20:46:28.959365 4861 scope.go:117] "RemoveContainer" containerID="5e0a04df5740c120052e23680da7c45126dec6674fa679c211d70be3fb442ea4" Mar 10 20:46:28 crc kubenswrapper[4861]: I0310 20:46:28.975153 4861 generic.go:334] "Generic (PLEG): container finished" podID="c73e8983-bdfa-4102-a07e-a97eee54f401" containerID="5ae87a7e5d4459d68d3ab8557e1dbee79ad4fe31aec7bc7be327deb3eeba7f97" exitCode=0 Mar 10 20:46:28 crc kubenswrapper[4861]: I0310 20:46:28.979205 4861 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-marketplace/redhat-operators-rsqcx" event={"ID":"c73e8983-bdfa-4102-a07e-a97eee54f401","Type":"ContainerDied","Data":"5ae87a7e5d4459d68d3ab8557e1dbee79ad4fe31aec7bc7be327deb3eeba7f97"} Mar 10 20:46:29 crc kubenswrapper[4861]: I0310 20:46:29.992218 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rsqcx" event={"ID":"c73e8983-bdfa-4102-a07e-a97eee54f401","Type":"ContainerStarted","Data":"ba4e1275579db2146a112a4b3a6e0442748df7692323cdeb9dbc6be75efaafb4"} Mar 10 20:46:30 crc kubenswrapper[4861]: I0310 20:46:30.000214 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qttbr" event={"ID":"771189c2-452d-4204-a0b7-abfe9ba62bd0","Type":"ContainerStarted","Data":"e5c4da81ed322b8fa5fc45bb64ff13a6fef2665439626d00bbb4e154c5f4f1ac"} Mar 10 20:46:30 crc kubenswrapper[4861]: I0310 20:46:30.039237 4861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-rsqcx" podStartSLOduration=2.599771552 podStartE2EDuration="5.039217892s" podCreationTimestamp="2026-03-10 20:46:25 +0000 UTC" firstStartedPulling="2026-03-10 20:46:26.938898524 +0000 UTC m=+7130.702334514" lastFinishedPulling="2026-03-10 20:46:29.378344864 +0000 UTC m=+7133.141780854" observedRunningTime="2026-03-10 20:46:30.015256754 +0000 UTC m=+7133.778692744" watchObservedRunningTime="2026-03-10 20:46:30.039217892 +0000 UTC m=+7133.802653862" Mar 10 20:46:36 crc kubenswrapper[4861]: I0310 20:46:36.014405 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-rsqcx" Mar 10 20:46:36 crc kubenswrapper[4861]: I0310 20:46:36.015306 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-rsqcx" Mar 10 20:46:37 crc kubenswrapper[4861]: I0310 20:46:37.063725 4861 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-marketplace/redhat-operators-rsqcx" podUID="c73e8983-bdfa-4102-a07e-a97eee54f401" containerName="registry-server" probeResult="failure" output=< Mar 10 20:46:37 crc kubenswrapper[4861]: timeout: failed to connect service ":50051" within 1s Mar 10 20:46:37 crc kubenswrapper[4861]: > Mar 10 20:46:46 crc kubenswrapper[4861]: I0310 20:46:46.111443 4861 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-rsqcx" Mar 10 20:46:46 crc kubenswrapper[4861]: I0310 20:46:46.190523 4861 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-rsqcx" Mar 10 20:46:47 crc kubenswrapper[4861]: I0310 20:46:47.045546 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-rsqcx"] Mar 10 20:46:47 crc kubenswrapper[4861]: I0310 20:46:47.178484 4861 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-rsqcx" podUID="c73e8983-bdfa-4102-a07e-a97eee54f401" containerName="registry-server" containerID="cri-o://ba4e1275579db2146a112a4b3a6e0442748df7692323cdeb9dbc6be75efaafb4" gracePeriod=2 Mar 10 20:46:47 crc kubenswrapper[4861]: I0310 20:46:47.709743 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-rsqcx" Mar 10 20:46:47 crc kubenswrapper[4861]: I0310 20:46:47.885198 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c73e8983-bdfa-4102-a07e-a97eee54f401-utilities\") pod \"c73e8983-bdfa-4102-a07e-a97eee54f401\" (UID: \"c73e8983-bdfa-4102-a07e-a97eee54f401\") " Mar 10 20:46:47 crc kubenswrapper[4861]: I0310 20:46:47.885436 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lbhdz\" (UniqueName: \"kubernetes.io/projected/c73e8983-bdfa-4102-a07e-a97eee54f401-kube-api-access-lbhdz\") pod \"c73e8983-bdfa-4102-a07e-a97eee54f401\" (UID: \"c73e8983-bdfa-4102-a07e-a97eee54f401\") " Mar 10 20:46:47 crc kubenswrapper[4861]: I0310 20:46:47.885496 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c73e8983-bdfa-4102-a07e-a97eee54f401-catalog-content\") pod \"c73e8983-bdfa-4102-a07e-a97eee54f401\" (UID: \"c73e8983-bdfa-4102-a07e-a97eee54f401\") " Mar 10 20:46:47 crc kubenswrapper[4861]: I0310 20:46:47.886582 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c73e8983-bdfa-4102-a07e-a97eee54f401-utilities" (OuterVolumeSpecName: "utilities") pod "c73e8983-bdfa-4102-a07e-a97eee54f401" (UID: "c73e8983-bdfa-4102-a07e-a97eee54f401"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 10 20:46:47 crc kubenswrapper[4861]: I0310 20:46:47.897602 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c73e8983-bdfa-4102-a07e-a97eee54f401-kube-api-access-lbhdz" (OuterVolumeSpecName: "kube-api-access-lbhdz") pod "c73e8983-bdfa-4102-a07e-a97eee54f401" (UID: "c73e8983-bdfa-4102-a07e-a97eee54f401"). InnerVolumeSpecName "kube-api-access-lbhdz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 10 20:46:47 crc kubenswrapper[4861]: I0310 20:46:47.988174 4861 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c73e8983-bdfa-4102-a07e-a97eee54f401-utilities\") on node \"crc\" DevicePath \"\"" Mar 10 20:46:47 crc kubenswrapper[4861]: I0310 20:46:47.988237 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lbhdz\" (UniqueName: \"kubernetes.io/projected/c73e8983-bdfa-4102-a07e-a97eee54f401-kube-api-access-lbhdz\") on node \"crc\" DevicePath \"\"" Mar 10 20:46:48 crc kubenswrapper[4861]: I0310 20:46:48.049827 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c73e8983-bdfa-4102-a07e-a97eee54f401-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c73e8983-bdfa-4102-a07e-a97eee54f401" (UID: "c73e8983-bdfa-4102-a07e-a97eee54f401"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 10 20:46:48 crc kubenswrapper[4861]: I0310 20:46:48.089775 4861 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c73e8983-bdfa-4102-a07e-a97eee54f401-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 10 20:46:48 crc kubenswrapper[4861]: I0310 20:46:48.189481 4861 generic.go:334] "Generic (PLEG): container finished" podID="c73e8983-bdfa-4102-a07e-a97eee54f401" containerID="ba4e1275579db2146a112a4b3a6e0442748df7692323cdeb9dbc6be75efaafb4" exitCode=0 Mar 10 20:46:48 crc kubenswrapper[4861]: I0310 20:46:48.189522 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rsqcx" event={"ID":"c73e8983-bdfa-4102-a07e-a97eee54f401","Type":"ContainerDied","Data":"ba4e1275579db2146a112a4b3a6e0442748df7692323cdeb9dbc6be75efaafb4"} Mar 10 20:46:48 crc kubenswrapper[4861]: I0310 20:46:48.189549 4861 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/redhat-operators-rsqcx" event={"ID":"c73e8983-bdfa-4102-a07e-a97eee54f401","Type":"ContainerDied","Data":"4321ba10466884ecd9c9782905627db3ed0b0d5d6303c1eebff31ce779dca322"} Mar 10 20:46:48 crc kubenswrapper[4861]: I0310 20:46:48.189566 4861 scope.go:117] "RemoveContainer" containerID="ba4e1275579db2146a112a4b3a6e0442748df7692323cdeb9dbc6be75efaafb4" Mar 10 20:46:48 crc kubenswrapper[4861]: I0310 20:46:48.189695 4861 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-rsqcx" Mar 10 20:46:48 crc kubenswrapper[4861]: I0310 20:46:48.236675 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-rsqcx"] Mar 10 20:46:48 crc kubenswrapper[4861]: I0310 20:46:48.236974 4861 scope.go:117] "RemoveContainer" containerID="5ae87a7e5d4459d68d3ab8557e1dbee79ad4fe31aec7bc7be327deb3eeba7f97" Mar 10 20:46:48 crc kubenswrapper[4861]: I0310 20:46:48.246123 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-rsqcx"] Mar 10 20:46:48 crc kubenswrapper[4861]: I0310 20:46:48.279242 4861 scope.go:117] "RemoveContainer" containerID="a4dbeb1c202bf7376b9c09335c10a695751c9b4c4dd9f6e3363d26a934035147" Mar 10 20:46:48 crc kubenswrapper[4861]: I0310 20:46:48.304154 4861 scope.go:117] "RemoveContainer" containerID="ba4e1275579db2146a112a4b3a6e0442748df7692323cdeb9dbc6be75efaafb4" Mar 10 20:46:48 crc kubenswrapper[4861]: E0310 20:46:48.304494 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ba4e1275579db2146a112a4b3a6e0442748df7692323cdeb9dbc6be75efaafb4\": container with ID starting with ba4e1275579db2146a112a4b3a6e0442748df7692323cdeb9dbc6be75efaafb4 not found: ID does not exist" containerID="ba4e1275579db2146a112a4b3a6e0442748df7692323cdeb9dbc6be75efaafb4" Mar 10 20:46:48 crc kubenswrapper[4861]: I0310 20:46:48.304524 4861 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ba4e1275579db2146a112a4b3a6e0442748df7692323cdeb9dbc6be75efaafb4"} err="failed to get container status \"ba4e1275579db2146a112a4b3a6e0442748df7692323cdeb9dbc6be75efaafb4\": rpc error: code = NotFound desc = could not find container \"ba4e1275579db2146a112a4b3a6e0442748df7692323cdeb9dbc6be75efaafb4\": container with ID starting with ba4e1275579db2146a112a4b3a6e0442748df7692323cdeb9dbc6be75efaafb4 not found: ID does not exist" Mar 10 20:46:48 crc kubenswrapper[4861]: I0310 20:46:48.304547 4861 scope.go:117] "RemoveContainer" containerID="5ae87a7e5d4459d68d3ab8557e1dbee79ad4fe31aec7bc7be327deb3eeba7f97" Mar 10 20:46:48 crc kubenswrapper[4861]: E0310 20:46:48.305163 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5ae87a7e5d4459d68d3ab8557e1dbee79ad4fe31aec7bc7be327deb3eeba7f97\": container with ID starting with 5ae87a7e5d4459d68d3ab8557e1dbee79ad4fe31aec7bc7be327deb3eeba7f97 not found: ID does not exist" containerID="5ae87a7e5d4459d68d3ab8557e1dbee79ad4fe31aec7bc7be327deb3eeba7f97" Mar 10 20:46:48 crc kubenswrapper[4861]: I0310 20:46:48.305227 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5ae87a7e5d4459d68d3ab8557e1dbee79ad4fe31aec7bc7be327deb3eeba7f97"} err="failed to get container status \"5ae87a7e5d4459d68d3ab8557e1dbee79ad4fe31aec7bc7be327deb3eeba7f97\": rpc error: code = NotFound desc = could not find container \"5ae87a7e5d4459d68d3ab8557e1dbee79ad4fe31aec7bc7be327deb3eeba7f97\": container with ID starting with 5ae87a7e5d4459d68d3ab8557e1dbee79ad4fe31aec7bc7be327deb3eeba7f97 not found: ID does not exist" Mar 10 20:46:48 crc kubenswrapper[4861]: I0310 20:46:48.305269 4861 scope.go:117] "RemoveContainer" containerID="a4dbeb1c202bf7376b9c09335c10a695751c9b4c4dd9f6e3363d26a934035147" Mar 10 20:46:48 crc kubenswrapper[4861]: E0310 
20:46:48.305762 4861 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a4dbeb1c202bf7376b9c09335c10a695751c9b4c4dd9f6e3363d26a934035147\": container with ID starting with a4dbeb1c202bf7376b9c09335c10a695751c9b4c4dd9f6e3363d26a934035147 not found: ID does not exist" containerID="a4dbeb1c202bf7376b9c09335c10a695751c9b4c4dd9f6e3363d26a934035147" Mar 10 20:46:48 crc kubenswrapper[4861]: I0310 20:46:48.305820 4861 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a4dbeb1c202bf7376b9c09335c10a695751c9b4c4dd9f6e3363d26a934035147"} err="failed to get container status \"a4dbeb1c202bf7376b9c09335c10a695751c9b4c4dd9f6e3363d26a934035147\": rpc error: code = NotFound desc = could not find container \"a4dbeb1c202bf7376b9c09335c10a695751c9b4c4dd9f6e3363d26a934035147\": container with ID starting with a4dbeb1c202bf7376b9c09335c10a695751c9b4c4dd9f6e3363d26a934035147 not found: ID does not exist" Mar 10 20:46:48 crc kubenswrapper[4861]: E0310 20:46:48.347340 4861 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc73e8983_bdfa_4102_a07e_a97eee54f401.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc73e8983_bdfa_4102_a07e_a97eee54f401.slice/crio-4321ba10466884ecd9c9782905627db3ed0b0d5d6303c1eebff31ce779dca322\": RecentStats: unable to find data in memory cache]" Mar 10 20:46:48 crc kubenswrapper[4861]: I0310 20:46:48.970106 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c73e8983-bdfa-4102-a07e-a97eee54f401" path="/var/lib/kubelet/pods/c73e8983-bdfa-4102-a07e-a97eee54f401/volumes" Mar 10 20:47:00 crc kubenswrapper[4861]: I0310 20:47:00.797830 4861 scope.go:117] "RemoveContainer" 
containerID="10315fd671b22a87ce6484fdcaa32763ecfd077b095fd8e76033b69aceb2f097" Mar 10 20:47:00 crc kubenswrapper[4861]: I0310 20:47:00.861671 4861 scope.go:117] "RemoveContainer" containerID="6c05aab7c7eed6e6c5fec874ac52855244cb5d6e99c945cc08b21df674e75237" Mar 10 20:48:00 crc kubenswrapper[4861]: I0310 20:48:00.152304 4861 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29552928-2plgd"] Mar 10 20:48:00 crc kubenswrapper[4861]: E0310 20:48:00.153549 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c73e8983-bdfa-4102-a07e-a97eee54f401" containerName="extract-utilities" Mar 10 20:48:00 crc kubenswrapper[4861]: I0310 20:48:00.153575 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="c73e8983-bdfa-4102-a07e-a97eee54f401" containerName="extract-utilities" Mar 10 20:48:00 crc kubenswrapper[4861]: E0310 20:48:00.153618 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c73e8983-bdfa-4102-a07e-a97eee54f401" containerName="registry-server" Mar 10 20:48:00 crc kubenswrapper[4861]: I0310 20:48:00.153631 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="c73e8983-bdfa-4102-a07e-a97eee54f401" containerName="registry-server" Mar 10 20:48:00 crc kubenswrapper[4861]: E0310 20:48:00.153678 4861 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c73e8983-bdfa-4102-a07e-a97eee54f401" containerName="extract-content" Mar 10 20:48:00 crc kubenswrapper[4861]: I0310 20:48:00.153691 4861 state_mem.go:107] "Deleted CPUSet assignment" podUID="c73e8983-bdfa-4102-a07e-a97eee54f401" containerName="extract-content" Mar 10 20:48:00 crc kubenswrapper[4861]: I0310 20:48:00.154067 4861 memory_manager.go:354] "RemoveStaleState removing state" podUID="c73e8983-bdfa-4102-a07e-a97eee54f401" containerName="registry-server" Mar 10 20:48:00 crc kubenswrapper[4861]: I0310 20:48:00.154945 4861 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29552928-2plgd" Mar 10 20:48:00 crc kubenswrapper[4861]: I0310 20:48:00.160539 4861 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-gfbj2" Mar 10 20:48:00 crc kubenswrapper[4861]: I0310 20:48:00.161593 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 10 20:48:00 crc kubenswrapper[4861]: I0310 20:48:00.162049 4861 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 10 20:48:00 crc kubenswrapper[4861]: I0310 20:48:00.167306 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29552928-2plgd"] Mar 10 20:48:00 crc kubenswrapper[4861]: I0310 20:48:00.235931 4861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zhm2f\" (UniqueName: \"kubernetes.io/projected/270c6c50-985a-4870-842f-7f49484a469f-kube-api-access-zhm2f\") pod \"auto-csr-approver-29552928-2plgd\" (UID: \"270c6c50-985a-4870-842f-7f49484a469f\") " pod="openshift-infra/auto-csr-approver-29552928-2plgd" Mar 10 20:48:00 crc kubenswrapper[4861]: I0310 20:48:00.337463 4861 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zhm2f\" (UniqueName: \"kubernetes.io/projected/270c6c50-985a-4870-842f-7f49484a469f-kube-api-access-zhm2f\") pod \"auto-csr-approver-29552928-2plgd\" (UID: \"270c6c50-985a-4870-842f-7f49484a469f\") " pod="openshift-infra/auto-csr-approver-29552928-2plgd" Mar 10 20:48:00 crc kubenswrapper[4861]: I0310 20:48:00.368738 4861 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zhm2f\" (UniqueName: \"kubernetes.io/projected/270c6c50-985a-4870-842f-7f49484a469f-kube-api-access-zhm2f\") pod \"auto-csr-approver-29552928-2plgd\" (UID: \"270c6c50-985a-4870-842f-7f49484a469f\") " 
pod="openshift-infra/auto-csr-approver-29552928-2plgd" Mar 10 20:48:00 crc kubenswrapper[4861]: I0310 20:48:00.495793 4861 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29552928-2plgd" Mar 10 20:48:00 crc kubenswrapper[4861]: I0310 20:48:00.786673 4861 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29552928-2plgd"] Mar 10 20:48:00 crc kubenswrapper[4861]: I0310 20:48:00.892494 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29552928-2plgd" event={"ID":"270c6c50-985a-4870-842f-7f49484a469f","Type":"ContainerStarted","Data":"540e339b7ef8d0762ca78b2c0ebcf7422944319877c2aad97b893c14f0df7807"} Mar 10 20:48:02 crc kubenswrapper[4861]: I0310 20:48:02.920752 4861 generic.go:334] "Generic (PLEG): container finished" podID="270c6c50-985a-4870-842f-7f49484a469f" containerID="478a9a11191e7ffd06b7b4457d88f4c70899124c9881b009dba5bbfebd635f16" exitCode=0 Mar 10 20:48:02 crc kubenswrapper[4861]: I0310 20:48:02.920875 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29552928-2plgd" event={"ID":"270c6c50-985a-4870-842f-7f49484a469f","Type":"ContainerDied","Data":"478a9a11191e7ffd06b7b4457d88f4c70899124c9881b009dba5bbfebd635f16"} Mar 10 20:48:04 crc kubenswrapper[4861]: I0310 20:48:04.396009 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29552928-2plgd" Mar 10 20:48:04 crc kubenswrapper[4861]: I0310 20:48:04.538639 4861 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zhm2f\" (UniqueName: \"kubernetes.io/projected/270c6c50-985a-4870-842f-7f49484a469f-kube-api-access-zhm2f\") pod \"270c6c50-985a-4870-842f-7f49484a469f\" (UID: \"270c6c50-985a-4870-842f-7f49484a469f\") " Mar 10 20:48:04 crc kubenswrapper[4861]: I0310 20:48:04.543829 4861 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/270c6c50-985a-4870-842f-7f49484a469f-kube-api-access-zhm2f" (OuterVolumeSpecName: "kube-api-access-zhm2f") pod "270c6c50-985a-4870-842f-7f49484a469f" (UID: "270c6c50-985a-4870-842f-7f49484a469f"). InnerVolumeSpecName "kube-api-access-zhm2f". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 10 20:48:04 crc kubenswrapper[4861]: I0310 20:48:04.640546 4861 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zhm2f\" (UniqueName: \"kubernetes.io/projected/270c6c50-985a-4870-842f-7f49484a469f-kube-api-access-zhm2f\") on node \"crc\" DevicePath \"\"" Mar 10 20:48:04 crc kubenswrapper[4861]: I0310 20:48:04.944651 4861 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29552928-2plgd" event={"ID":"270c6c50-985a-4870-842f-7f49484a469f","Type":"ContainerDied","Data":"540e339b7ef8d0762ca78b2c0ebcf7422944319877c2aad97b893c14f0df7807"} Mar 10 20:48:04 crc kubenswrapper[4861]: I0310 20:48:04.944752 4861 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="540e339b7ef8d0762ca78b2c0ebcf7422944319877c2aad97b893c14f0df7807" Mar 10 20:48:04 crc kubenswrapper[4861]: I0310 20:48:04.944774 4861 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29552928-2plgd" Mar 10 20:48:05 crc kubenswrapper[4861]: I0310 20:48:05.494869 4861 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29552922-bg285"] Mar 10 20:48:05 crc kubenswrapper[4861]: I0310 20:48:05.505262 4861 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29552922-bg285"] Mar 10 20:48:06 crc kubenswrapper[4861]: I0310 20:48:06.975400 4861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dcd75834-2331-46e3-b1d2-4c393f5a45d2" path="/var/lib/kubelet/pods/dcd75834-2331-46e3-b1d2-4c393f5a45d2/volumes" Mar 10 20:48:51 crc kubenswrapper[4861]: I0310 20:48:51.991976 4861 patch_prober.go:28] interesting pod/machine-config-daemon-qttbr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 10 20:48:51 crc kubenswrapper[4861]: I0310 20:48:51.992591 4861 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qttbr" podUID="771189c2-452d-4204-a0b7-abfe9ba62bd0" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 10 20:49:00 crc kubenswrapper[4861]: I0310 20:49:00.988348 4861 scope.go:117] "RemoveContainer" containerID="d8edfa3a7dfe23a22ca6be4b0db2bb9b74247dd7c0f38b59de2da9d087721955" Mar 10 20:49:21 crc kubenswrapper[4861]: I0310 20:49:21.992532 4861 patch_prober.go:28] interesting pod/machine-config-daemon-qttbr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 10 20:49:21 crc kubenswrapper[4861]: 
I0310 20:49:21.993132 4861 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qttbr" podUID="771189c2-452d-4204-a0b7-abfe9ba62bd0" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"